Database+Disk+Performance™

Monday, June 26, 2006

Finding the Top 10 Long Running Queries

Introduction


When a performance issue comes up, the first thing the DBA needs to do is define what the problem actually is. When someone tells me, “it’s running slow…”, my first question is, “can you give me a list of the top 10 worst queries?” Usually the response is, “I don’t know exactly what they are…”


This note will explain how to isolate the queries by letting the computer tell you where the problems are.


The process is simple and follows this methodology:


  1. Turn on SQL Server Profiler
  2. Run it for a few hours filtering on long duration or high reads
  3. Save the profiler trace to a file and import it into a temporary table
  4. Run a few queries against the data
  5. Prioritize them as a working list to attack

The key concept: Long running queries hammer the disk and cause poor cache hit ratios. Even a few users running them can monopolize the disk subsystem and starve everyone else.


Collecting the Data


Typically, I’ll start up Profiler and run it for 2 or 3 hours to capture a representative sample of data, then use that information to make my decisions. The data also serves as a baseline for judging whether things get better or worse as I tune.


  1. Start up SQL Server Profiler and collect these two events, which show queries that have completed:
    1. RPC:Completed
    2. SQL:BatchCompleted
  2. Filter on one or both of these columns (a scripted, server-side version of this setup is sketched after the list):
    1. Duration: start the criteria at 30,000. The unit of measure is milliseconds, hence 30,000 = 30 seconds.
    2. Reads: start the criteria at 10,000. The unit of measure is 8K pages, so 10,000 reads = 81,920,000 bytes of IO. If a single query is doing over 80 million bytes of IO, it probably needs investigating!
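

If you prefer to script the collection instead of clicking through the Profiler GUI, a server-side trace can capture the same events and filters. The following is only a sketch: it assumes the standard trace event and column IDs (10 = RPC:Completed, 12 = SQL:BatchCompleted; 1 = TextData, 11 = LoginName, 13 = Duration, 16 = Reads), the millisecond Duration unit discussed above, and a writable c:\tmp path. Adjust the path and thresholds to suit.


declare @TraceID int, @on bit, @maxfilesize bigint, @duration bigint, @reads bigint

set @on = 1

set @maxfilesize = 100      -- MB per rollover file

set @duration = 30000       -- long-duration threshold (milliseconds here)

set @reads = 10000          -- high-reads threshold (8K pages)

-- Create a file-rollover trace (option 2); .trc is appended to the file name automatically.
exec sp_trace_create @TraceID output, 2, N'c:\tmp\profilerdata', @maxfilesize

-- Capture TextData, LoginName, Duration and Reads for both completed events.
exec sp_trace_setevent @TraceID, 10, 1, @on
exec sp_trace_setevent @TraceID, 10, 11, @on
exec sp_trace_setevent @TraceID, 10, 13, @on
exec sp_trace_setevent @TraceID, 10, 16, @on
exec sp_trace_setevent @TraceID, 12, 1, @on
exec sp_trace_setevent @TraceID, 12, 11, @on
exec sp_trace_setevent @TraceID, 12, 13, @on
exec sp_trace_setevent @TraceID, 12, 16, @on

-- Keep rows where Duration >= 30,000 OR Reads >= 10,000
-- (logical operator 1 = OR, comparison operator 4 = >=;
--  the filter value's data type must match the trace column).
exec sp_trace_setfilter @TraceID, 13, 1, 4, @duration
exec sp_trace_setfilter @TraceID, 16, 1, 4, @reads

exec sp_trace_setstatus @TraceID, 1    -- start the trace

select @TraceID 'TraceID'              -- note this ID so the trace can be stopped later

go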


Let the trace run for a while, then stop it and “Save As” a profiler trace file. Once it’s in a file, the DBA can start analyzing the data.


Rolling Up Queries


Usually, the easiest way to analyze the information is from within SQL Server. Import the trace file and then run queries against it to find the problems.


The trace file itself has the issues in it. We’ve already filtered for long running queries. Now, we just need to organize the data a bit.


First import the trace file using the following SQL Server function call:


use tempdb

go

SELECT IDENTITY(int, 1, 1) AS RowNumber, * INTO profiler_analysis

FROM ::fn_trace_gettable('c:\tmp\profilerdata.trc', default)

go
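

Before rolling anything up, it helps to sanity-check the import. A quick sketch against the table created above (EventClass 10 is RPC:Completed, 12 is SQL:BatchCompleted):


select EventClass, count(*) 'Rows'

from profiler_analysis

group by EventClass

order by EventClass

go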


Next, get an idea of what you are looking at. For example, how much IO occurred for the monitoring run? What was the overall duration for all the long running queries?


select sum(Reads)*8192. 'Bytes Read' from profiler_analysis where Reads is not NULL;

go



Bytes Read

---------------------------------------

277,818,179,584

(1 row(s) affected)


select sum(Duration)/1000. 'Number of Seconds' from profiler_analysis where Duration is not NULL;

go



Number of Seconds

---------------------------------------

8914.941000

(1 row(s) affected)



select sum(Duration)/1000./3600.'Number of Hours' from profiler_analysis where Duration is not NULL;

go



Number of Hours

---------------------------------------

2.47637250000

(1 row(s) affected)
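

The same totals can also be pulled in a single pass; this is just a convenience sketch against the same table:


select count(*) 'Statements', sum(Reads)*8192. 'Bytes Read', sum(Duration)/1000. 'Seconds Run'

from profiler_analysis

where Duration is not NULL

go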



The following query shows the total amount of Reads by user:


select convert(char(20),LoginName)'User Name', sum(Reads)*8192. 'Total Bytes Read'

from profiler_analysis

where Reads is not NULL

group by LoginName

order by sum(Reads) desc

go



User Name Total Bytes Read

-------------------- ---------------------------------------

jde 178276974592

sa 53321981952

usera 20445822976

userb 10917101568

userc 5227069440

userd 2638151680

usere 2081947648

userf 2063392768

userg 1147445248

userh 670384128

useri 406921216

userj 316260352

userk 169639936

userl 55287808

userm 43941888

usern 19152896

usero 9584640

userp 4866048

userq 2252800

(19 row(s) affected)




The following query shows the total amount of seconds by user:



select convert(char(20),LoginName)'User Name', sum(Duration)/1000. 'Seconds Run'

from profiler_analysis

where Duration is not NULL

group by LoginName

order by sum(Duration) desc

go



User Name Seconds Run

-------------------- ---------------------------------------

jde 5456.860000

JDEService 1999.540000

sa 313.579000

usera 240.462000

userb 176.452000

userc 135.483000

userd 115.636000

usere 100.881000

userf 90.918000

userg 76.247000

userh 52.656000

useri 40.941000

userj 37.466000

userk 28.084000

userl 19.438000

userm 11.656000

usern 11.329000

usero 4.673000

userp 2.640000

(19 row(s) affected)
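

Rolling up by user shows who is generating the load; rolling up by statement text shows which queries repeat. TextData comes back from fn_trace_gettable as ntext, so group on a converted prefix. This is only a sketch, and the 100-character prefix length is an arbitrary choice:


select top 10 convert(nvarchar(100), TextData) 'Statement Prefix',

count(*) 'Executions',

sum(Reads)*8192. 'Total Bytes Read',

sum(Duration)/1000. 'Total Seconds'

from profiler_analysis

where TextData is not NULL

group by convert(nvarchar(100), TextData)

order by sum(Reads) desc

go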



Finally, these two queries show the DBA the top 10 queries for Reads and Duration:



select top 10 RowNumber, Duration, Reads, LoginName

from profiler_analysis

order by Reads desc

go



RowNumber Duration Reads LoginName

----------- -------------------- -------------------- -----------

485 257230 3886609 sa

239 87690 1370174 usera

853 101810 1264835 userb

629 142370 1264577 jde

7 118890 1264197 JDE

747 8596 801035 sa

289 13970 740066 sa

264 7063 661617 sa

665 8576 356531 jde

193 3483 313031 userb

(10 row(s) affected)



select top 10 RowNumber, Duration, Reads, LoginName

from profiler_analysis

order by Duration desc

go



RowNumber Duration Reads LoginName

----------- -------------------- -------------------- -----------

503 335213 23 JDEService

502 333026 631 JDEService

485 257230 3886609 sa

528 224200 108896 jde

831 203590 2 JDEService

347 184183 103651 jde

532 181400 14 JDEService

627 175056 77320 jde

411 153933 307751 JDE

823 152746 23 JDEService

(10 row(s) affected)



To find the actual query for RowNumber 485, select the TextData column, which holds the statement. In this case the analysis shows that the high IO and duration were due to an index being built:



select RowNumber, Duration, Reads,TextData

from profiler_analysis

where RowNumber = 485

go



RowNumber Duration Reads TextData

----------- -------------------- -------------------- ---------------

485 257230 3886609

create index x on CRPDTA.F4111( ILITM, ILLOCN, ILLOTN, ILKCOO, ILDOCO, ILDCTO, ILLNID )




The query for RowNumber 155 did 162,386 Reads (1,330,266,112 bytes), most likely because the user put a wildcard on both the front and back of the criteria (F4220 WHERE (F4220.SWLOTN LIKE @P1) with @P1 = '%208547%'), which forced a table scan.



select RowNumber, Duration, Reads,TextData

from profiler_analysis

where RowNumber =155

go



RowNumber Duration Reads

----------- -------------------- --------------------

155 10186 162386

(1 row(s) affected)

declare @P1 int set @P1=180151263 declare @P2 int set @P2=1 declare @P3 int set @P3=1 declare @P4 int set @P4=5 exec sp_cursoropen @P1 output, N'SELECT F4220.SWSHPJ, F4220.SWLITM, F4220.SWDCTO, F4220.SWSRL2, F4220.SWDSC2, F4220.SWSRL1, F4220.SWLOTN, F4220.SWLOCN, F4220.SWAITM, F4220.SWSFXO, F4220.SWDOCO, F4220.SWAN8, F4220.SWITM, F4220.SWMCU, F4220.SWDSC1, F4220.SWLNID, F4220.SWORDJ, F4220.SWKCOO, F4220.SWVEND, F4220.SWSHAN FROM PRODDTA.F4220 F4220 WHERE (F4220.SWLOTN LIKE @P1) ORDER BY F4220.SWLITM ASC, F4220.SWLOTN ASC, F4220.SWSRL1 ASC ', @P2 output, @P3 output, @P4 output, N'@P1 varchar(8000) ', '%208547%' select @P1, @P2, @P3, @P4
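

To confirm that the leading wildcard is the culprit, compare the IO of the two predicate forms directly. This is a hypothetical illustration against the same PRODDTA.F4220 table; an index on SWLOTN is assumed for the second form to seek on.


set statistics io on

go

-- Leading and trailing wildcard: the value could start anywhere, so every row must be checked (scan).
select SWLITM, SWLOTN from PRODDTA.F4220 where SWLOTN like '%208547%'

go

-- Trailing wildcard only: an index on SWLOTN can be used to seek the matching range.
select SWLITM, SWLOTN from PRODDTA.F4220 where SWLOTN like '208547%'

go

set statistics io off

go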




Summary


The above queries collect data and roll it up into a digestible format instead of a pile of random bits. Once the DBA has the performance data, they can prioritize the issues and figure out a plan of attack. For example, change the configuration of the application so users cannot put in leading wildcards that force a table scan.


Knowing your system and where its issues are forms the backbone of isolating and resolving performance problems. The good news is that it can be done with a small toolbox.

The key concept is to let SQL Server collect the information and then, using the few simple queries shown above, show you exactly where the problems exist.

