If you have ever used the SQL Server Database Engine Tuning Advisor, then you know you have a few options as a source of analysis workload:
– SQL Server script file (*.sql)
Includes a query or set of queries targeting one or more databases. Tuning Advisor will evaluate the script against the target database(s) and attempt to provide recommendations.
– Trace file using SQL Server Profiler (*.trc)
You can use SQL Server Profiler to capture different events happening at the instance and record those events on disk.
You can use the GUI to capture the events and watch them in real time, then LATER save that trace to a file on disk; or you can use a server-side trace, in which case the trace is automatically saved to the file specified in the trace definition.
The second option is strongly recommended if you'll run the trace for an extended period, because it puts less overhead on the server, besides letting you programmatically pause and resume the trace without having to recreate it from scratch (beneficial if you are diagnosing a performance issue and want to run the trace for a few minutes and then pause it).
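As a sketch, pausing and resuming a server-side trace is a couple of calls to sp_trace_setstatus (the trace ID 2 below is just an example; it assumes a trace was already created with sp_trace_create):

```sql
-- Pause an existing server-side trace (stop collecting events)
EXEC sp_trace_setstatus @traceid = 2, @status = 0;

-- Resume the same trace later, without recreating its definition
EXEC sp_trace_setstatus @traceid = 2, @status = 1;

-- @status = 2 would close the trace and delete its definition entirely
```

You can find the IDs of currently defined traces in sys.traces.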
– SQL Server trace table
Same as a trace file, but the events are saved to a table inside a database instead. You can't, however, save to a table if you are using a server-side trace.
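To illustrate, a server-side trace file on disk can still end up in a table after the fact by loading it with fn_trace_gettable (the path and table name below are placeholders):

```sql
-- Load a completed trace file into a table so DTA can use it as a trace table
SELECT *
INTO   dbo.MyTraceTable
FROM   fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT);
```

The DEFAULT argument tells the function to read all rollover files belonging to the trace.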
– Query within SSMS
We can invoke Tuning Advisor directly from an SSMS query window to tune a highlighted query: select the query, right-click, then choose "Analyze Query in Database Engine Tuning Advisor".
Denali took that one step further: DTA can now dip into the plan cache and select the top 1,000 events from it to use for analysis. The significance here is that it's the shortest path to tuning the top-consuming queries, instead of manually digging through DMVs or tracing. The closest thing before was the helpful Performance Dashboard, which also recommends indexes for top queries.
So, TOP 1000 ordered by what?! Let's try to explore that.
Opening Tuning Advisor (DTA.exe) now shows a new option for the workload, labeled "Plan Cache", as shown here.
I have Denali CTP3 with the AdventureWorks sample database attached.
I started by selecting [AdventureWorks2008R2] as the database to tune, left [master] as the "database for workload analysis", then clicked "Start analysis"; the first attempt failed with the tool unable to progress!
I tried changing some parameters, such as the database for workload analysis, and selecting fewer tables for tuning, but it failed again. After some research, it turns out you need to grant SHOWPLAN to the user running the DTA, which I did:
```sql
USE AdventureWorks2008R2;
GO
GRANT SHOWPLAN TO [TEST_VM\user];
GO
```
After that, DTA completed but without generating any recommendations; the tuning log displayed plenty of different messages, including:
Event references objects that are not selected to tune
This means queries were fetched whose objects are NOT part of the list of objects selected to tune (we selected all tables of AdventureWorks2008R2).
Event does not reference any tables
This happens if the fetched query does not reference any tables, as in the case of DMVs, SET statements, or XML parsing queries; this message can be ignored.
Statement references only small tables
DTA won't bother with a query if it references only small tables (10 data pages or less), since table scans may be more efficient there; this message can be ignored.
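If you want to check whether a table falls under that threshold, something like the following reports its in-row data page count (the table name is just an example):

```sql
-- Approximate number of in-row data pages used by a table
SELECT SUM(in_row_data_page_count) AS data_pages
FROM   sys.dm_db_partition_stats
WHERE  object_id = OBJECT_ID('Sales.SalesOrderDetail');
```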
So, what's missing to get some recommendations?
To answer that, we need to understand how DTA fetches those 1,000 "top" queries. I fired up a Profiler trace, then ran a tuning session again, and eventually captured the following query run by DTA:
```sql
SELECT TOP 1000 st.TEXT,
       Isnull(st.objectid, 0),
       Isnull(pa.VALUE, 0),
       AVG(cp.execution_count)
FROM   sys.dm_exec_query_stats cp
       CROSS APPLY sys.Dm_exec_sql_text(cp.plan_handle) st
       CROSS APPLY sys.Dm_exec_plan_attributes(cp.plan_handle) pa
WHERE  cp.creation_time < N'2011-10-20 01:02:12.577'
       AND pa.attribute = 'dbid'
       AND pa.VALUE IN ( 5 )
GROUP  BY st.TEXT, st.objectid, pa.VALUE
ORDER  BY SUM(total_elapsed_time) DESC
```
Aha, that's much better! We can now spot a few facts:
– The top 1000 is ranked by the query's total elapsed time. That's right, we can't choose to rank by CPU, reads, execution count, etc.; this would be a nice improvement.
– The query must have been compiled before the time DTA runs (sys.dm_exec_query_stats.creation_time).
– The queries must reference objects in the database chosen under the "database to tune" option or, in the case of ad-hoc batches, the database from which the batch was executed (sys.dm_exec_plan_attributes.attribute/.value).
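Since DTA's ranking is fixed, if you'd rather see the cache's top consumers by a different metric, you can run a similar DMV query yourself; this sketch ranks by total_worker_time (CPU) instead of elapsed time:

```sql
-- Top 10 cached statements by total CPU time rather than elapsed time
SELECT TOP 10
       st.text,
       qs.total_worker_time,
       qs.total_elapsed_time,
       qs.execution_count
FROM   sys.dm_exec_query_stats qs
       CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER  BY qs.total_worker_time DESC;
```

Swap the ORDER BY column for total_logical_reads or execution_count to rank by reads or frequency.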
So, to get some recommendations, I need to run some queries so they appear in the cache. Using some queries I wrote and some sample queries from BOL, I ran the following a few times:
```sql
USE AdventureWorks2008R2;
GO
SELECT p.Name AS ProductName,
       NonDiscountSales = (OrderQty * UnitPrice),
       Discounts = ((OrderQty * UnitPrice) * UnitPriceDiscount)
FROM Production.Product AS p
INNER JOIN Sales.SalesOrderDetail AS sod
    ON p.ProductID = sod.ProductID
ORDER BY ProductName DESC;
GO
SELECT ProductID,
       AVG(OrderQty) AS AverageQuantity,
       SUM(LineTotal) AS Total
FROM Sales.SalesOrderDetail
GROUP BY ProductID
HAVING SUM(LineTotal) > $1000000.00
   AND AVG(OrderQty) < 3;
GO
SELECT DISTINCT pp.LastName, pp.FirstName
FROM Person.Person pp
JOIN HumanResources.Employee e
    ON e.BusinessEntityID = pp.BusinessEntityID
WHERE pp.BusinessEntityID IN
    (SELECT SalesPersonID
     FROM Sales.SalesOrderHeader
     WHERE SalesOrderID IN
        (SELECT SalesOrderID
         FROM Sales.SalesOrderDetail
         WHERE ProductID IN
            (SELECT ProductID
             FROM Production.Product p
             WHERE ProductNumber = 'BK-M68B-42')));
GO
SELECT DISTINCT Name
FROM Production.Product AS p
WHERE EXISTS
    (SELECT *
     FROM Production.ProductModel AS pm
     WHERE p.ProductModelID = pm.ProductModelID
       AND pm.Name LIKE '%Road%')
  AND p.ListPrice > 10;
GO
```
Now running the DTA top 1000 query shows these queries in the top 10. Running a new tuning session against [AdventureWorks2008R2] with "Plan Cache" as the workload source generated 10 recommendations: 3 indexes and 7 statistics. Savvy!
These are the **index** recommendations:
```sql
use [AdventureWorks2008R2]
go
CREATE NONCLUSTERED INDEX [_dta_index_Person_5_1765581328__K1_K7_K5]
ON [Person].[Person]
(
    [BusinessEntityID] ASC,
    [LastName] ASC,
    [FirstName] ASC
) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
go
CREATE NONCLUSTERED INDEX [_dta_index_SalesOrderDetail_5_1154103152__K8_4_5_7]
ON [Sales].[SalesOrderDetail]
(
    [UnitPriceDiscount] ASC
) INCLUDE ([OrderQty], [ProductID], [UnitPrice])
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
go
CREATE NONCLUSTERED INDEX [_dta_index_SalesOrderDetail_5_1154103152__K5_4_7_8]
ON [Sales].[SalesOrderDetail]
(
    [ProductID] ASC
) INCLUDE ([OrderQty], [UnitPrice], [UnitPriceDiscount])
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
go
```
And here is a screenshot of (part of) the tuning results:
Many queries didn't qualify for tuning, for the same reasons mentioned above; this also suggests a couple of recommendations:
If you have a big database with big tables, you'd better know which objects you want to tune against instead of choosing all objects, which can make tuning take considerable time.
It is also strongly recommended, if you have a test server with specs similar to production, to run tuning there. You can also capture a trace on production and replay it on the test server.
So, in brief, we can summarize all of the above in a few points:
– DTA added the ability to extract queries from the query plan cache for tuning.
– The queries selected are the TOP 1000 by total elapsed time (running duration). The selection is filtered by the database selected for tuning and by the plan's compile time relative to the time DTA runs.
– We can NOT control the order in which the queries are retrieved; we can't choose [total_worker_time] instead of [total_elapsed_time], for example.
– The objects chosen for tuning determine which queries are eligible, so we should be careful if we have a big database or lots of objects, as this may make the tuning time increase proportionally.
Hope this was clear enough.
And just to put things into perspective: Denali is now officially SQL Server 2012.