Sunday, 28 February 2010

SQL Agent – Giving non-SA users permissions on jobs

There are some new database roles in the msdb database in SQL Server 2005 that allow you to grant more granular permissions to non-SA users in SQL Server Agent.

If you work in an environment similar to the one I support, application and database code will pass through several environments en route to production: starting on the dev box, then moving to system testing and then to user acceptance testing. If you have any SQL Agent jobs that relate to the database being tested, but you don't want these jobs to run on a schedule and instead want your testers to run them as and when required, you can use these new roles to give your developers and testers permission to run the jobs when they need to.

There are three roles in MSDB in SQL Server 2005 and onwards called:

SQLAgentUserRole – the least privileged of the three roles. Users in this role have permissions on jobs and schedules that they own. This was the role I needed in my case.

SQLAgentReaderRole – Has all the permissions of SQLAgentUserRole plus the ability to view a list of multi-server jobs.

SQLAgentOperatorRole - is the most privileged of the SQL Server Agent fixed database roles. It includes all the permissions of SQLAgentUserRole and SQLAgentReaderRole. Members of this role can also view properties for operators and proxies, and enumerate available proxies and alerts on the server.

This information is freely available in Books Online (BOL)

When a non-SA user logs into an instance using SSMS, they have to be a member of one of these roles for SQL Server Agent to be available to them.

The following script grants my login called test access to the MSDB database and then adds it to the SQLAgentUserRole:

USE [msdb]
CREATE USER [Test] FOR LOGIN [Test]
EXEC sp_addrolemember N'SQLAgentUserRole', N'Test'

This allowed the testers to run jobs that they owned when it suited them.
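If you later need to check who has been granted these permissions, the msdb role memberships can be listed with a quick query against the standard catalog views (a minimal sketch):

```sql
-- List members of the three SQL Agent fixed database roles in msdb
USE [msdb]
SELECT r.name AS role_name, m.name AS member_name
FROM sys.database_role_members rm
JOIN sys.database_principals r ON rm.role_principal_id = r.principal_id
JOIN sys.database_principals m ON rm.member_principal_id = m.principal_id
WHERE r.name IN ('SQLAgentUserRole', 'SQLAgentReaderRole', 'SQLAgentOperatorRole')
```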

Thursday, 25 February 2010

Renaming a Standalone Instance of SQL Server

I needed to upgrade the underlying IO subsystem of one of my IO-intensive SQL Servers. After some discussion with various teams it was decided that the best way to achieve this whilst minimising disruption was to build a fresh server with a new name and a fresh SQL Server installation. We could then configure that server as an exact replica of the current production box, with a different IO configuration. We could then run tests using SQLIO and other business-related loads without interfering with the current production environment, and at the same time ensure that the new kit was up to the task.

When it came to switch over, we would be in a position whereby we could have both machines running at the same time, with network connectivity, so we could copy the necessary database backup files between servers and start our load processing running again. As this server is only used to house read-only databases, with a weekday daily load process, the switchover time (downtime) could be kept to a minimum.

Anyway, I digress. When we were ready to switch over, to avoid having to re-configure things like firewall ports, we wanted to reconfigure the new server to have the same name and IP address as the old server. So at the point of switchover we would rename the server, change its IP address to be the same as the old server's, and turn off the old server (but keep it as a failback should something go awry). When you rename a computer housing SQL Server, SQL Server picks up the new name at start-up; no re-installation is needed, but you do have to update the SQL Server metadata, so things like

SELECT @@ServerName 

will initially return the incorrect information: the old server name. You can change this by running the following script, obviously replacing the dummy syntax with your server details:

sp_dropserver 'oldname\instancename'
sp_addserver 'newname\instancename', local

As you can hopefully tell from the above script, my SQL instance was a named instance. To do this on a default instance you simply drop the \instancename part from each stored procedure call:

sp_dropserver 'oldname'
sp_addserver 'newname', local

To check that the metadata has been updated correctly you can run

SELECT @@ServerName 
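A useful cross-check here (not part of the original script, but handy): SERVERPROPERTY('ServerName') always reports the name derived from the current Windows computer name, so comparing it with @@SERVERNAME shows whether the metadata still needs updating:

```sql
-- If these two values differ, sp_dropserver/sp_addserver still needs to be run
SELECT @@SERVERNAME AS metadata_name,
       SERVERPROPERTY('ServerName') AS windows_name
```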

Maintenance plans can be affected by a rename and may break afterwards. You can either delete them before the rename and re-create them afterwards, or use the script Microsoft supplies to fix them. This can be found here.

Some caveats:

  • This will not work for a cluster; it is only for stand-alone servers.

  • It won’t work on replicated instances

  • If your database is mirrored you will need to break mirroring and re-configure after the rename.

  • If your server is used in Reporting Services a different approach is needed; more information can be found here.

Tuesday, 23 February 2010

Setup a Database Snapshot

I have been tinkering lately with database snapshots and I thought the subject warranted a short post. This post details a few things of note about snapshots and shows how to create a snapshot of your database.

Database snapshots provide a read-only, point-in-time view of the source database. Snapshots utilise sparse files and "copy-on-write" technology, and they work at the page level: before a page is changed in the source database, the page as it exists before the change is copied to the snapshot, thus preserving the values on that page in the snapshot.

Because of the dependency on the source database, the snapshot must reside on the same SQL Server instance as the source database.

In terms of size, the largest a snapshot can get is the size of the source database when the snapshot was taken. Thus, if all the data in your database changes frequently, you will need enough space to hold two copies of the source database.

Pages will only be copied to the snapshot once.
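Because of the sparse files, the snapshot's size on disk starts near zero and grows only as pages are copied in. One way to watch this (a sketch; the snapshot name BankingDB_Snapshot is just an example) is:

```sql
-- Actual disk space consumed by the snapshot's sparse data file (file_id 1)
SELECT size_on_disk_bytes / 1048576.0 AS size_on_disk_mb
FROM sys.dm_io_virtual_file_stats(DB_ID('BankingDB_Snapshot'), 1)
```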

You cannot currently back up a snapshot. I think Steve Jones of SQL Server Central has posted a Connect request asking for snapshot backups and he has received a response; he has a post on the subject here.

Snapshots are only available in Enterprise edition of SQL Server.

The copy-on-write procedure can be a performance overhead because of the additional IO needed to maintain the snapshot.

Snapshots can be useful for reporting against the source database and also provide a way to recover the source database in certain scenarios.
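For the recovery scenario, the source database can be reverted to the point the snapshot was taken using RESTORE ... FROM DATABASE_SNAPSHOT (a sketch; the snapshot name is an example, and any other snapshots of the database must be dropped first):

```sql
-- Revert BankingDB to the state captured in the snapshot
RESTORE DATABASE BankingDB
FROM DATABASE_SNAPSHOT = 'BankingDB_Snapshot'
```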

OK so how do you create a database snapshot?

You need to use T-SQL; you can't create a snapshot through Management Studio. I have a BankingDB database on my instance of SQL Server (Developer edition), so I will create a snapshot of that database for the purposes of this example.

You create a snapshot using the CREATE DATABASE statement, specifying the AS SNAPSHOT OF clause. You also need to specify all the logical file names of the data files in the source database.

To get all the logical filenames of the data files in the source database you run this TSQL:

USE BankingDB
select * from sysfiles

When you have the logical names of the database files you can form your T-SQL script. This is the script I used to create a snapshot of my BankingDB database:

CREATE DATABASE BankingDB_Snapshot ON
( NAME = BankingDB, FILENAME =
'C:\DATA\SQLServer2008\BankingDB_Snapshot.ss' )
AS SNAPSHOT OF BankingDB

When you have created your snapshot, you should have an entry under the Database Snapshots folder in SSMS.


You can reference this snapshot as you would a normal database with the USE statement and you can run select statements against it too.

Sunday, 21 February 2010

SQL Server 2005 CLR execution error after restore

I recently restored a SQL Server 2005 database to a testing environment. The database contained several CLR assemblies; when the tester tried to execute one of the CLR procedures, SQL Server returned the following error:

An error occurred in the Microsoft .NET Framework while trying to load assembly id xxxxx. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE.

After a little search on Google, I came across the following Knowledge Base article.

A quick check in SSMS showed that the database had been restored with an owner other than SA. So I ran the following script to change the database owner:

EXEC sp_changedbowner 'sa'

I then got errors relating to the TRUSTWORTHY setting on the database. After checking in SSMS again I saw that the database's TRUSTWORTHY setting was set to false. I used the following script to turn the setting on:

USE master
ALTER DATABASE [YourDatabase] SET TRUSTWORTHY ON  -- substitute your database name

This fixed the issue.

Thursday, 18 February 2010


This is a short note explaining the sys.dm_os_wait_stats DMV.

This DMV returns the following columns/values regarding waits encountered by executed threads:

  • wait_type - The name of the wait type.
  • waiting_tasks_count - The number of waits on this wait type.
  • wait_time_ms - The total wait time for this wait type in milliseconds (including signal_wait_time_ms).
  • max_wait_time_ms - The maximum wait time on this wait type.
  • signal_wait_time_ms - The difference between the time the waiting thread was signalled and when it started running.

Most of these columns are self-explanatory with the exception of the last one, signal_wait_time_ms, so I'll explain it in a little more detail…

A thread enters a wait state when the resource it is waiting for is not available.

Once the resource becomes available, the waiting thread is signalled. However, the CPU might be busy at this point serving other threads.

The attribute signal_wait_time_ms indicates the time it took from the moment the thread is signalled that the resource is available until the thread gets CPU time and starts using the resource.

As you have probably guessed, high values in this attribute can typically indicate CPU problems.

The wait types encountered by SQL Server are a good indicator of the queues or bottlenecks facing a SQL Server instance, so this DMV is a useful tool when using the waits and queues troubleshooting methodology.
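To put the signal wait figure to work, you can compare total signal wait time with total wait time across all wait types; a consistently high percentage suggests threads are queuing for CPU (a minimal sketch):

```sql
-- Proportion of wait time spent waiting for CPU after being signalled
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS decimal(5, 2))
       AS signal_wait_percent
FROM sys.dm_os_wait_stats
```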

You can reset the values in the sys.dm_os_wait_stats DMV by running the following command:

DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR)

The values are also reset when SQL Server restarts.

Tuesday, 16 February 2010

SQL Server - Checkpoint

What is a SQL Server checkpoint?

A SQL Server checkpoint is the process of writing all dirty data file pages out to disk. A dirty page is a page that has changed in memory (the buffer cache) since it was read from disk or since the last checkpoint. This is done regardless of the state of the transaction that made the change.

SQL Server uses a protocol called Write-Ahead Logging (WAL): all log records describing a change to a data page are written to disk before the page itself is written to disk.

Checkpoints can occur concurrently on any number of databases on an instance.

How do database checkpoints occur?

Before a backup, the database engine automatically performs a checkpoint; this ensures that all database changes are contained in the backup.

If you issue a manual CHECKPOINT command, a checkpoint is then run against the database in use.

SQL Server is shut down. If the checkpoint is skipped (SHUTDOWN WITH NOWAIT) the restart will take much longer.

ALTER DATABASE is used to add or remove a database file.

If you change the recovery model, for example from bulk-logged to full, or from full to simple.

If your database is in the full or bulk-logged recovery model, checkpoints are run periodically as specified by the recovery interval server setting.

In the simple recovery model, checkpoints are run when the log becomes 70% full or based on the recovery interval setting, whichever comes first.
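The manual checkpoint mentioned above is just the CHECKPOINT statement, run in the context of the database you want to checkpoint (the database name here is an example):

```sql
USE BankingDB
CHECKPOINT      -- optionally CHECKPOINT 10 to request completion within about 10 seconds
```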

SQL Server Recovery Interval

A note on the recovery interval: the time between checkpoints is determined by the recovery interval and the number of records in the transaction log. The recovery interval is set for an entire instance of SQL Server. The value represents the number of minutes that you choose to allow SQL Server for automatic recovery. SQL Server uses an algorithm to determine when to perform the next checkpoint; it will create checkpoints to ensure that, in the event of a system failure, recovery will not take longer than the time specified in your recovery interval.
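The recovery interval can be inspected and changed with sp_configure; a sketch (it is an advanced option, and 0 is the default, meaning SQL Server manages it automatically):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval', 5;  -- target recovery time of roughly 5 minutes
RECONFIGURE;
```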

Sunday, 14 February 2010

Useful reading this week

This is a short post for a Sunday afternoon; I thought I'd share some useful reading from the week gone by. I started with Brent's blog and interview with Joe Sack about the Microsoft Certified Master Program. This led me to discovering the MCM reading list. Hopefully you have read at least some of these, but it's a comprehensive list of reading that, as a SQL Server professional, you will find interesting. Everything on the list is freely available on the internet.

Brent Ozar’s interview with Joe Sack:

MCM reading list:

SQL Server 2005 SP4!

Microsoft has recently announced the planned release dates for the next round of SQL Server service packs. The next SQL Server 2008 service pack is planned for release in Q3 2010.

Microsoft does pay attention to the SQL Server community and duly noted the demand for another service pack for SQL Server 2005. SP4 for SQL Server 2005 is due for release in Q4 2010.

The blog post can be found here.

Thursday, 11 February 2010

SQL Server Trace Events

I answered a forum post the other day over on SSC about the differences between the trace events SQL:StmtCompleted and SQL:BatchCompleted. In hindsight, I'm not sure I did a very good job of it, so I thought I would write a blog post to reinforce and confirm my understanding.

The question went something like this…

“Can you explain the difference between BatchCompleted and StmtCompleted events.”

So I will start by defining the type of events that these could relate to.

T-SQL Events

A category called T-SQL exists in SQL Server profiler and amongst others contains the following two events:

  • SQL:StmtCompleted
  • SQL:BatchCompleted

T-SQL events…“Includes event classes produced by the execution of Transact-SQL statements passed to an instance of SQL Server from the client.” The following definitions are taken from the help text you get when you hover over each event in profiler:

SQL:StmtCompleted: Occurs when the Transact-SQL statement has completed.

SQL:BatchCompleted: Occurs when the Transact-SQL statement has completed.

From the description here you might think that these two events are exactly the same… The demo below shows the difference between them.

So what is a batch? A batch is one or more T-SQL statements executed by SQL Server as a single unit. Batches are separated by the GO keyword.

By definition, then, a SQL:BatchCompleted event gets captured by the trace when a batch of T-SQL completes.

A SQL:StmtCompleted event gets captured when one of the T-SQL statements in a batch completes.

So, for example, if you have three SELECT statements submitted to SQL Server as part of the same batch, you will get three SQL:StmtCompleted events and one SQL:BatchCompleted event captured in the trace.
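A contrived sketch of such a script, annotated with the events each line would produce:

```sql
SELECT 1;   -- statement 1: SQL:StmtCompleted
SELECT 2;   -- statement 2: SQL:StmtCompleted
SELECT 3;   -- statement 3: SQL:StmtCompleted
GO          -- end of the batch: one SQL:BatchCompleted
```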

The demo below demonstrates this


I have a database I use to hold contact details for members of my cricket club, mainly used to send email mailshots and newsletters, so I have used that for this example. I wrote a simple T-SQL statement that returns member names for which I have no email address, and a second T-SQL statement that pulls out club members for whom I have no mobile telephone number.

The script looks like this:

select name from PlayerContacts
where EMAIL is null

select name from PlayerContacts
where Mobile is null

Before we run this code I will setup a trace, using the profiler GUI to trap the execution of this script. I select two T-SQL events from profiler:

TSQL events

With the profiler trace running I executed the SQL script above once and I got the following results recorded in my profiler trace:

Profiler results

As you can see, we have two SQL:StmtCompleted events and, although it's not obvious from the above screenshot, each row relates to one of the two SELECT statements above. The T-SQL was submitted as one unit of code made up of two different SELECT statements; when each individual statement completed, the SQL:StmtCompleted event fired.

We have one SQL:BatchCompleted entry in the trace. Both statements were submitted in the same unit of code, with no GO statements separating them, and hence were part of the same batch. If you look at the TextData column you will see the text of the entire batch: both statements that were executed as part of the same batch of code. So when the query completed we got one SQL:BatchCompleted entry in the trace.

In this case SQL:StmtCompleted captured the completion of each select statement individually. The SQL:BatchCompleted event captured the entire batch (both queries) as a whole.

Stored Procedure Events

There is a set of trace events that relates to and captures the execution of stored procedures. The profiler help text provides the following definition:

“Includes event classes produced by the execution of stored procedures.”

There is an SP event called SP:StmtCompleted. The help text offers the following information on this event: "Indicates that a Transact-SQL statement within a stored procedure has completed."

We start a new trace with just the SP:StmtCompleted event selected:


We turn the above T-SQL statements into a stored procedure called usp_TestProfiler.
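A sketch of how that procedure might look (the body is just the two SELECT statements from the script above):

```sql
CREATE PROCEDURE dbo.usp_TestProfiler
AS
BEGIN
    SELECT name FROM PlayerContacts WHERE EMAIL IS NULL;
    SELECT name FROM PlayerContacts WHERE Mobile IS NULL;
END
```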

We then execute the stored procedure

exec usp_TestProfiler

and we get the following captured in the trace


As you can see, we have two SP:StmtCompleted events in the trace, again one for each statement that makes up the stored proc.

I’ll now set up a trace that has all three of the events discussed here selected.


I will then run the two SELECT statements as a batch, and we get the following profiler output:


As you can see the results are the same as our first example.

If we execute the stored procedure we get the following profiler output:


As you can see, with all three events selected in the trace, the execution of the stored procedure results in four events in the profiler trace. We have two SP:StmtCompleted events for the two SELECT statements in the stored procedure. The batch submitted consists of one T-SQL statement executed in Management Studio, so we get one SQL:StmtCompleted event and one SQL:BatchCompleted event: the batch was made up of one statement, which called a stored procedure made up of two T-SQL statements.

Tuesday, 9 February 2010

Sunday, 7 February 2010

SQL Server Waits and Queues

Useful link: this is a great best-practice article/white paper on waits and queues from Tom Davidson and Danny Tambs.



The word document (880kb) can be downloaded from here

SQL Server Storage

This is a short post on SQL Server storage best practice and what I have learned over the years. Much of this information can be found in Microsoft's document on storage best practice, which can be found here.

Test your setup before deployment. MS have a useful utility called SQLIO that can be used for this purpose.

Place log files on RAID 10 disks. This provides fast write performance, which suits the sequential writes of the log, while maintaining fault tolerance.

Separate log and data files at the physical disk level. Spreading the write-intensive IO of the log and the read/write IO of the data files across different spindles can reduce contention.

Configure TEMPDB correctly. MS state in their SQL Server storage best practices document that you should create one TEMPDB data file per CPU and place the TEMPDB database on RAID 10 disks.

Database data files should be of equal size. The SQL Server proportional fill allocation algorithm favours files with more free space.

Pre-size data and log files; this will prevent these files growing in small increments as the database grows in size (especially if you have AUTOGROW on). These small but high-frequency growths can cause physical file fragmentation on the disk.
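Pre-sizing and setting a sensible fixed growth increment is done with ALTER DATABASE ... MODIFY FILE; a sketch (the database and logical file names are illustrative):

```sql
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, SIZE = 10GB, FILEGROWTH = 1GB);
```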

Manage database file growth manually; AUTOGROW can be left on as a safety measure, but don't rely on it for regular database growth.

The deeper the HBA queue depth, the better for SQL IO volumes.

Thursday, 4 February 2010

SQL Server Default Trace

The default trace in SQL Server is something I have been experimenting with lately. It is a feature included in SQL Server 2005 and beyond and it is enabled by default when you install SQL Server. It has a number of uses from auditing to capacity planning and I will look at an example here.

First, though, let's check that the default trace is enabled on my SQL Server instance; the following T-SQL will check for this:

--Check to see that the default trace is enabled.
SELECT *
FROM sys.configurations
WHERE configuration_id = 1568

If the trace is enabled you will get results like the screen shot below:


Notice the [Value] and [Value_in_use] columns are set to 1 – this means the default trace is enabled.

If it's not enabled, the [Value] and [Value_in_use] columns are set to 0.


If the trace is disabled, the following code will enable it. You need to be a member of the sysadmin or serveradmin fixed server roles to run RECONFIGURE.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'default trace enabled', 1;
GO
RECONFIGURE;
GO

**CAUTION** I believe that if you disable this trace, certain reports in Management Studio will no longer function correctly.

This trace is stored in a file, and you have two TSQL options to get the path of the default trace file for your SQL Server instance. To quote from books online (BOL) "The sys.traces catalog view contains the current running traces on the system. This view is intended as a replacement for the fn_trace_getinfo function."

So if I run:

SELECT * FROM sys.traces

I get the following results returned (the screen shot is a subset):


You can see I have captured the path of the file, the max size of the trace file (20MB), the time the trace will stop, and the maximum number of files to store. The query will in fact return information about all traces running on your instance; the default trace has an ID of 1.
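Rather than relying on the ID being 1, sys.traces also has an is_default flag you can filter on:

```sql
-- Path and size details for just the default trace
SELECT path, max_size, stop_time, max_files
FROM sys.traces
WHERE is_default = 1
```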

You can get some of this information by querying the fn_trace_getinfo function like so:

SELECT * FROM ::fn_trace_getinfo(0)

Passing the value 0 to the function returns information on all running traces. The results are:


The trace property values in the property column have the following meanings:

2 = File name

3 = Max size (MB)

4 = Stop time

5 = Current trace status (1 = running, 0 = stopped).


Example – Detecting auto grow

This contrived example looks at using the default trace to find out when your data and log files have auto-grown. Now, we all know that auto-grow should only be used as a safety catch and that we should be sizing our database files appropriately. So let's say your database needs to auto-grow and the safety catch kicks in; you may want to know when and where that happened. This example shows how you can find out using the default trace in SQL Server:

Let's create a database to test this out on:

CREATE DATABASE default_trace

My model DB has the standard out-of-the-box settings, is around 4MB in size and has auto-growth turned on, so the new database picks up these settings.

We then create a table to allow us to add data and grow the database files:

USE [Default_Trace]
GO

CREATE TABLE [dbo].[T1](
[c1] [bigint] IDENTITY(1,1) NOT NULL,
[c2] [char](4000) NULL
)



Next we need to add enough data to the database to force the database files to auto grow. I used the following script:

DECLARE @counter bigint;
SELECT @counter = 0;

WHILE (@counter < 1000)
BEGIN
    INSERT INTO dbo.T1 (c2) VALUES ('x');  -- filler value to grow the files
    SELECT @counter = @counter + 1;
END

Then we can run the following to pull out the auto growth statements from the default trace:

SELECT t.DatabaseName, t.[FileName], t.StartTime, e.name AS EventName
FROM ::fn_trace_gettable('C:\DATA\SQLServer2008\MSSQL10.SQLSERVER2008\MSSQL\Log\log_88.trc', 0) t
INNER JOIN sys.trace_events e
ON t.EventClass = e.trace_event_id
WHERE e.name LIKE '%Auto Grow%'

The following screen shot shows a partial result set; we can see one log growth and one data file growth. In fact, the insert caused the log to auto-grow once, which represents a 10% increase on 1MB, and the data file to grow three times, each time adding 1MB to the data file.

I’m sure there is another post in this on the potential issues of auto-grow and why you should size your databases appropriately.

Tuesday, 2 February 2010

“Building a Better Blog” with Steve Jones

I recently watched Steve Jones’ (Blog | Twitter) "Building a better blog" presentation from the 24 Hours of PASS virtual conference from back last year. As I mentioned in a previous post, I have known about PASS for some time, the yearly PASS conference in particular, from advertising on sites like SQL Server Central. I have never been to that conference and had never really looked into it in any great detail either until recently. Aside from this conference and the PASS Europe conference, PASS offers a wealth of resources to the SQL Server professional.

Steve’s presentation focused on the development of your blog and how it can help your career. It wasn’t a technical presentation about SQL Server, but more about how you can use your blog to create a brand and to prove and improve your knowledge of SQL Server. It is also a great way to learn and document your knowledge. He gave some good advice on how to promote your blog, what subjects to write about and how to fit writing into your schedule.

He also gave a link to a really cool tool called Windows Live Writer (which I have used to write this post), a neat little blogging tool that allows you to write offline and schedule your posts for publication. If you are interested, this is the link.
