
SQL Server Default Trace

The default trace in SQL Server is something I have been experimenting with lately. It is a feature included in SQL Server 2005 and beyond and it is enabled by default when you install SQL Server. It has a number of uses from auditing to capacity planning and I will look at an example here.

First, though, let's check that the default trace is enabled on my SQL Server instance. The following T-SQL will check for this:

--Check to see that the default trace is enabled.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE configuration_id = 1568;

If the trace is enabled, you will get results like the screen shot below:


Notice the [Value] and [Value_in_use] columns are set to 1 – this means the default trace is enabled.

If it's not enabled, the [Value] and [Value_in_use] columns are set to 0.


If the trace is disabled, the following code will enable it. You need to be a member of the sysadmin or serveradmin fixed server role to run RECONFIGURE.

sp_configure 'show advanced options', 1;
RECONFIGURE;
sp_configure 'default trace enabled', 1;
RECONFIGURE;

CAUTION: I believe that if you disable this trace, certain reports in Management Studio will no longer function correctly.

This trace is stored in a file, and you have two T-SQL options to get the path of the default trace file for your SQL Server instance. To quote from Books Online (BOL): "The sys.traces catalog view contains the current running traces on the system. This view is intended as a replacement for the fn_trace_getinfo function."

So if I run:

SELECT * FROM sys.traces

I get the following results returned (the screen shot is a subset):


You can see I have captured the path of the file, the maximum size of the trace file (20MB), the time the trace will stop, and the maximum number of files to keep. The query will in fact return information about all traces running on your instance; the default trace has an ID of 1.
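If you only want the default trace rather than every trace on the instance, sys.traces exposes an is_default flag you can filter on. A minimal sketch, assuming the default trace is running:

```sql
-- Return just the default trace's file path and limits
SELECT id, path, max_size, max_files
FROM sys.traces
WHERE is_default = 1;
```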

You can get some of this information by querying the fn_trace_getinfo function like so:

SELECT * FROM ::fn_trace_getinfo(0)

Passing the value 0 to the function returns information on all running traces. The results are:


The trace property values in the property column have the following meanings:

2 = File name

3 = Max size (MB)

4 = Stop time

5 = Current trace status (1 = running, 0 = stopped)
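Putting those property numbers to use, you can pull a single property for a single trace by passing the trace ID instead of 0. A sketch, assuming the default trace is trace ID 1 (the value column is a sql_variant, so I convert it):

```sql
-- File name (property 2) of the default trace (trace ID 1)
SELECT CONVERT(nvarchar(260), value) AS trace_file
FROM ::fn_trace_getinfo(1)
WHERE property = 2;
```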


Example – Detecting auto grow

This contrived example uses the default trace to find out when your data and log files have auto-grown. We all know that auto-grow should only be used as a safety catch and that we should be sizing our database files appropriately up front, but if your database does need to auto-grow and the safety catch kicks in, you may want to know when and where that happened. This example shows how you can find out that information using the default trace in SQL Server.

Let's create a database to test this out on:

CREATE DATABASE default_trace;

My model database has the standard out-of-the-box settings, is around 4MB in size, and has auto-growth turned on, so the new database picks up these settings.
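You can confirm the growth settings the new database inherited from model by querying sys.database_files inside it. This is just a sanity check; the exact values you see will depend on your model settings:

```sql
USE [default_trace];

-- growth is in 8KB pages, unless is_percent_growth = 1,
-- in which case it is a percentage
SELECT name, size, growth, is_percent_growth
FROM sys.database_files;
```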

We then create a table to allow us to add data and grow the database files:

USE [Default_Trace]
GO

CREATE TABLE [dbo].[T1](
    [c1] [bigint] IDENTITY(1,1) NOT NULL,
    [c2] [char](4000) NULL
)
GO


Next we need to add enough data to the database to force the database files to auto grow. I used the following script:

DECLARE @counter int;
SELECT @counter = 0;

WHILE (@counter < 1000)
BEGIN
    INSERT INTO dbo.T1 (c2) VALUES (REPLICATE('a', 4000)); --grow the files
    SELECT @counter = @counter + 1;
END

Then we can run the following to pull out the auto growth statements from the default trace:

SELECT t.DatabaseName, t.FileName, t.StartTime, e.name AS EventName
FROM ::fn_trace_gettable('C:\DATA\SQLServer2008\MSSQL10.SQLSERVER2008\MSSQL\Log\log_88.trc', 0) t
INNER JOIN sys.trace_events e
    ON t.EventClass = e.trace_event_id
WHERE e.name LIKE '%auto grow%'

The following screen shot shows a partial result set; we can see one log growth and one data file growth. In fact, the insert caused the log to auto-grow once, which represents a 10% increase on 1MB, and the data file to grow three times, each time adding 1MB to the data file.
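Rather than hard-coding the trace file path, you can look it up from sys.traces and feed it to fn_trace_gettable. A sketch, assuming the default trace is running; passing DEFAULT as the second argument reads the rollover files as well:

```sql
DECLARE @path nvarchar(260);

SELECT @path = path
FROM sys.traces
WHERE is_default = 1;

SELECT t.DatabaseName, t.FileName, t.StartTime, e.name AS EventName
FROM ::fn_trace_gettable(@path, DEFAULT) t
INNER JOIN sys.trace_events e
    ON t.EventClass = e.trace_event_id
WHERE e.name LIKE '%auto grow%';
```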

I'm sure there is another post in this on the potential issues of auto-grow and why you should size your databases accordingly.


  1. Good, in-depth article!!

    One correction though. Passing 0 (or null or default for that matter) to fn_trace_getinfo() will return information on all running traces, not just the default trace. To get the info for a single trace, you have to specify the trace ID. The default trace is always trace ID of 1. However, if the default trace is not running, a custom trace can be trace ID of 1.

  2. Hi Robert, Thanks very much for the comment. I made a small change to my post based on your comments.

  3. Thanks a lot Gethyn Ellis, I have never used SQL profiler, your post made me get introduced to it. It's crystal clear. Thanks again.


  4. Rocky,

    I'm pleased that it helped you


