Wednesday, 31 August 2011

Sharon Dooley’s Lightning Talk and the WIT Luncheon

At the end of last week I received an email from my friend and colleague Sharon Dooley telling me her “Lightning Talk” had been accepted and she would be speaking at the Summit in October. Sharon has many years’ experience in the IT industry; she is a former SQL Server MVP, currently working as a trainer and course author specializing in SQL Server. She is a volunteer with PASS and is the Virtual Chapter Lead for the DBA virtual chapter. Sharon is also involved in the Women in Technology virtual chapter.

A couple of weeks ago Sharon and I were chatting about the PASS Summit, and even though she hadn’t submitted a full session she was considering submitting a lightning talk. Sharon doesn’t currently maintain a blog, so I said if she did submit a talk and it was accepted I would promote her “Lightning Talk.” Sharon has a really awesome query that returns a user’s permissions; she will be demonstrating this query during her talk. She also promised to share the query with us after the Summit so those of us who can’t get to Seattle can use it too. So we’ll have an excellent guest post to look forward to later in the year.

There will be more details on Sharon’s talk to follow, but Sharon has asked me to mention the PASS Women in Technology Luncheon, at which she is also on the panel. The 9th Women in Technology Luncheon and Panel discussion will take place at the PASS Summit in Seattle on Thursday October 13, 2011. This year's topic is: Make Yourself Heard: How to Ask for What You Want at Work. In 2003 the first Women in Technology panel discussion was held at the PASS Community Summit. Since that inaugural panel, the WIT event has been a highlight of the Summit, with captivating speakers and lively discussion on topics relating to women in the field of technology. Participants at these panel discussions were eager to continue networking and learning from each other, which led to the launch of the WIT Virtual Chapter. You can find more information on the Women in Technology Luncheon, and the virtual chapter in general, on the PASS website. You can follow the chapter on Twitter @PASS_WIT or by following the #PASSWIT hashtag.

Tuesday, 30 August 2011

Using PowerShell to Monitor SQL Server

By Ian Treasure

Over the past few years, I have used a number of approaches to monitor SQL Server. For example, I have configured jobs to e-mail the DBA team if they fail. I have written programs to check that everything is OK. I have used a number of third-party tools to monitor my instances. Each approach has its own strengths and weaknesses. If the SQL Server Agent is not running, the jobs don’t run and I don’t get any messages telling me that they have failed. To stop this happening, I have used server monitoring tools, for example Big Brother, which can warn if a service has stopped. This sort of tool is pretty good, but you need to invest time and effort setting it up. The same applies to monitoring tools like Idera’s Diagnostic Manager or Red Gate’s SQL Monitor. Also, these tools can cost a lot of money and can themselves affect performance of the server.
So I want to find a tool to monitor SQL Server that is independent of SQL Server, free and easy to use. PowerShell fits the first two requirements, and it is not too difficult to learn. I’ll suggest an approach to monitoring your backups that took me a few hours to implement, is independent of SQL Server, takes few resources and costs nothing apart from your time.
I have a production database in full recovery. A scheduled SQL Server Agent job backs up the log on the hour between 07:00 and 19:00. If the job to back up the log fails, it will e-mail the SQL Server DBA group. However, if the SQL Server Agent fails there is a danger that there will be no backup notification. To address this issue, I have written a PowerShell job that runs on a monitor server (Windows 2003) and checks that the log file is in the backup directory. If the file is not present, the PowerShell script e-mails the DBA team.
The first thing that I have written is a PowerShell function library. Create a directory called C:\PSScripts. Create a file in this directory called CalcFunctions.ps1 and paste the following code into this file:

function SendMail($filename)
{
    $emailFrom = ""
    $emailTo = " "
    $subject = "Missing backup file - $filename does not exist"
    $body = "Missing backup file - $filename does not exist"
    $smtpServer = "mail1"
    $smtp = new-object Net.Mail.SmtpClient($smtpServer)
    $smtp.Send($emailFrom, $emailTo, $subject, $body)
}

function CheckBackupFile($filename)
{
    # TotalMinutes rather than Minutes - the Minutes property wraps at 60,
    # so a file written 65 minutes ago would wrongly look fresh
    $bck = get-childitem $filename | where {((get-date) - $_.LastWriteTime).TotalMinutes -lt 9}
    if (!$bck) {
        SendMail $filename
    }
}

There are two functions:

The first function – SendMail – will send an e-mail to whoever you configure in your code. I am sending the email to a mail group called SQLDBAs. The function accepts a parameter – filename – which indicates the name of the missing log file, which forms part of the message.
The second function – CheckBackupFile – is more interesting. It accepts as a parameter the name of a backup file. It then loads a list of files matching the parameter $filename (get-childitem $filename) which were written to in the last 9 minutes (where {((get-date) - $_.LastWriteTime).TotalMinutes -lt 9}). If this list is empty, the SendMail function is invoked with $filename as a parameter. The 9 minutes is used because the job will be scheduled to run at 7 minutes past the hour, and there should always be a log file for the production database available at this time.
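For anyone who wants to see the logic outside PowerShell, here is a rough Python equivalent of the freshness check (the function name and the 9-minute window are just illustrative, not part of the original script):

```python
import glob
import os
import time

def backup_is_fresh(pattern, max_age_minutes=9):
    """Return True if any file matching the glob pattern was written
    within the last max_age_minutes - the same idea as the
    get-childitem | where LastWriteTime filter in the PowerShell."""
    cutoff = time.time() - max_age_minutes * 60
    return any(os.path.getmtime(path) >= cutoff
               for path in glob.glob(pattern))
```

If this returned False you would send the alert e-mail, just as the PowerShell version calls SendMail when its list is empty.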
Now create a script to use these functions – create a file called CheckLogBackups.ps1 and paste the following code into it:

param([string]$logFilename = "")

. c:\psscripts\CalcFunctions.ps1

CheckBackupFile $logFilename

In this script, note that there is a space between the dot and c:\psscripts\CalcFunctions.ps1 – this is PowerShell’s dot-sourcing syntax. Originally I missed the space and it took me a long time to identify the problem.
Next, create a batch file RunPS1.bat as follows:

powershell.exe -command "& 'c:\psscripts\checklogbackups.ps1'" "\\sql\r$\sqlbackup\log_prod*.sqb"

In the above file, the script CheckLogBackups.ps1 is run with a parameter pointing to the location of the backup log file. This is on disk at \\sql1\r$\sqlbackup. The wildcard (‘*’) is used because the backup files are named using a datetime stamp – for example log_prod20110801_0900.sqb.
Finally, create a scheduled job with the Windows Task Scheduler. This job must run at 7 minutes past the hour, and run the batch file RunPS1.bat.
This is a simple example. It can be extended to check for other log files on other servers, or overnight full backups. By creating a file of files to be checked, it is possible to be certain that your production databases are being backed up as expected with little cost or effort.
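The “file of files” extension might be sketched like this (Python again, purely illustrative; the config format of one glob pattern per line is my assumption):

```python
import glob
import os
import time

def stale_backups(config_path, max_age_minutes=9):
    """Read one glob pattern per line from a config file and return
    the patterns that have no recently written file - each one a
    candidate for an alert e-mail to the DBA team."""
    cutoff = time.time() - max_age_minutes * 60
    stale = []
    with open(config_path) as cfg:
        for line in cfg:
            pattern = line.strip()
            if not pattern:
                continue
            fresh = any(os.path.getmtime(p) >= cutoff
                        for p in glob.glob(pattern))
            if not fresh:
                stale.append(pattern)
    return stale
```

One scheduled task can then cover every server and database simply by adding lines to the config file.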

Wednesday, 24 August 2011

VMFS LUN resizing options in vSphere


First off, many thanks to Gethyn for the opportunity to post on his excellent blog.

I’m Christian Dadswell and I am a Senior Infrastructure Analyst at the Intellectual Property Office. I have 15+ years cross-platform IT experience and currently support and maintain, amongst other things, the vSphere infrastructure from SAN (fabric and storage) through vSphere to the virtual machines.

In this article it is my intention to give you some options for resizing a VMFS LUN to allow you to increase the size of a Windows VM volume. It will also cover a scenario whereby a VM has been frozen in operation due to lack of disk space.

Previous experience has shown me that VMs occasionally require more space for whatever reason. This happens more often than not with SQL Server VMs. Sorry, but you SQL DBA types love to eat SAN space. But seriously, if a VM needs more space for its OS or data volumes, something will need to be done.

To kick things off I will create a couple of scenarios that you may well have come across in your day-to-day vSphere admin. I will then discuss how to remedy them.

The scenarios assume you are using vSphere 4.x and above with an IBM SVC. Other storage virtualisation vendors should allow vdisk resizing. More on vdisk resizing later.

Scenario One:

A Windows 2008 SQL VM (SQL01) has been designated its very own VMFS LUN within vSphere; for argument’s sake, this VMFS LUN contains all volumes that the VM will use.

VMFS LUN size = 1TB

VM volumes and sizes:

C: OS - 40GB



Total = 990GB

The DBA requires more OS space and the vSphere admin notes that an extra 20% of space should be available to allow for snapshots. Unfortunately, when the VM was created nearly all the available space was used up.

The SQL01 OS volume needs resizing as it’s critically low on disk space. As this is a Windows 2008 VM it is very easy to resize the boot volume without even rebooting the VM. The problem is that no extra space exists on the VMFS LUN.

Scenario One Options:

As the VMFS LUN has no extra space there are two options:

1. Extend the VMFS LUN from within vSphere with space from another LUN

2. Resize the underlying VDISK presented to vSphere from within the SVC console and then resize the VMFS LUN into the new space.

Option one will not be covered in this article, but is a very straightforward task. Option two is a little more interesting and I will now cover how to achieve this task.

For the sake of this article we will need to assume that there is available space on the underlying VDISK that has been presented to vSphere. See the SVC documentation for the full skinny on VDISKs.

Scenario One Tasks:

● Resize the underlying VDISK via the SVC console

● Resize the VMFS LUN within vSphere

● Adjust the SQL01 VM Boot Volume to 60GB

● Resize SQL01’s OS volume

Resize the underlying VDISK via the SVC console

1. Log into the SVC console and select “Work with Virtual Disks”

2. Locate the VDISK you wish to re-size in the table and select it

3. From the drop down at the top of the table, select “Resize Virtual Disk”

4. Follow the wizard and enter the extra size that is required, in this example we’ll add 500GB.

5. Finish the wizard and log out of the SVC console

Resize the VMFS LUN within vSphere

1. Log into the vSphere client

2. Select a vSphere host, go to the Configuration tab and select Storage Adapters

3. Select Rescan All

a. This will pick up the extra space given to the VDISK we just re-sized in the previous step

4. Once the rescan has completed (check the status in the bottom of the client) locate the VMFS LUN, right-click and select Properties

5. You should now see the new VMFS LUN size of 1.5TB

6. Click Next and you should now see that there is free space available (under the Capacity label)

7. Select Next and tick the Maximize capacity tick box

8. Confirm you want to resize by selecting Finish

9. The VMFS LUN is now 1.5TB

Adjust the SQL01 VM Boot Volume to 60GB

NOTE: Ensure you have a backup before running any volume resizes.

1. Shut SQL01 down

2. Right-click the VM within the vSphere client and select Edit Settings

3. Select the hard disk that is designated as the boot drive

a. This should be the 40GB volume

4. Modify this to 60GB and OK the dialog

5. Power on SQL01

Resize SQL01’s OS volume

1. Log into the SQL01 VM as an Administrator

2. Open Computer Management and select Disk Management under Storage

3. Right-click the OS volume and select Extend Volume

4. Extend the volume by 20GB

At this point we have achieved what we set out to do. But what if the VM was Windows 2003 Server?

Resizing a Windows 2003 OS volume

NOTE: Ensure you have a backup before running any volume resizes.

In order to resize an OS volume in Windows 2003 you will need a third-party tool and downtime on the VM. One that I use more often than not is GParted Live. It comes in ISO/USB/PXE and HD formats. ISO format is perfect for attaching to VM CD drives, so just chuck a copy in your ISO store.


Something that I have found that is worth doing before running GParted on the OS volume is to do a ‘CHKDSK /f C:’ from the command prompt of the Windows server. This will require a reboot* to do a check on next boot up and fix any errors.

*The reboot is required due to the requirement of locking the file system.

Back to the resize...

1. Take the VM down and attach the GParted ISO to the VM’s CD drive and ensure that “Power on boot” and “Connected” are ticked before pressing OK.

2. Power on the VM and click inside the console window as soon as you can and press ESC. You need to get into the boot menu and select CD.

3. Once inside the GParted interface select the boot volume and re-size into the extra space as required.

4. Apply the changes, and let it do its thing

5. Once completed, power off the VM, disconnect the ISO and power it back up

You have now re-sized the boot partition on a Windows 2003 Server.

Scenario Two:

This next scenario is very similar to the above, but the VM has been frozen by vSphere as the VMFS LUN has completely run out of space.

In this situation vSphere will freeze the VM at the point that disk space ran out and you will not be able to do any of the following:

● vMotion

● Storage vMotion

● Snapshot

● Edit machine settings

● Power off

● Power on

On the main screen for the VM within the vSphere client you will notice a yellow box is now advising you that you need to free up space to be able to run the VM again. Do not make a decision on this box yet.

This machine is now effectively offline so we need to free space up on the VMFS LUN in order for vSphere to be able to commence running the machine again.

Fortunately, we can just run through two steps from the previous scenario in order to get this VM up and running again, namely:

● Resize the underlying VDISK via the SVC console

● Resize the VMFS LUN within vSphere

Once you have performed these tasks you will be able to select ‘Retry’ on the yellow dialog on the VM’s general screen. The VM will now continue to run exactly where it left off!

Unfortunately, it won’t deal with the carnage that ensues after the service has been restored.

I hope you enjoyed this guest article. If you wish to get in touch, you may do so in the following ways:

Add me to your Circle - +Chris Dadswell

Follow me on Twitter - @chrisdadswell

Subscribe to my blog - Chris Dadswell

Connect with me on LinkedIn - LinkedIn profile

Monday, 22 August 2011

Get your AD information with .NET

.NET has a useful namespace called System.DirectoryServices for querying LDAP directories such as Active Directory.

Here is some example code to retrieve all the AD information for a user whose account name is passed in as FullName.


On a web site, your Active Directory Name can be found using


FullName = HttpContext.Current.Request.ServerVariables["AUTH_USER"];


(If you are not using AD then this will return a blank)


In this procedure we first write the results to a data table. We can then sort the results before appending them to a StringBuilder, which is then returned to the web page.


using System;
using System.Data;
using System.DirectoryServices;


       public static String GetUserInfo(string Fullname)
       {
            DataTable UserInfo = new DataTable();
            UserInfo.Columns.Add("Name", typeof(string));
            UserInfo.Columns.Add("Info", typeof(string));

            System.Text.StringBuilder sb = new System.Text.StringBuilder();

            //extract the user name from DomainName\UserName
            string username = Fullname.Substring(Fullname.IndexOf("\\") + 1);

            //create a DirectorySearcher
            DirectorySearcher ds = new DirectorySearcher(new DirectoryEntry("LDAP://"));

            //filter the search to just the user name
            ds.Filter = "samaccountname=" + username;

            ds.PageSize = 1000;

            //use the FindOne method rather than FindAll to return the user's data
            SearchResult sr = ds.FindOne();

            //loop through the search result's properties, inserting into the data table
            foreach (string var in sr.Properties.PropertyNames)
            {
                UserInfo.Rows.Add(var, sr.Properties[var][0].ToString());
            }

            DataView dv = new DataView(UserInfo);
            dv.Sort = "Name";

            //write out the properties in name order to the stringbuilder
            foreach (DataRowView drv in dv)
            {
                sb.AppendLine(string.Format("{0} : {1}", drv[0], drv[1]));
            }

            return sb.ToString();
       }



Wednesday, 17 August 2011

Another New Contributor–Christian Dadswell

This is just a short post to introduce you all to my good friend Christian Dadswell, who has kindly agreed to write a guest post for the blog.

Christian has a wealth of experience in IT, especially in the field of virtualisation. I had the pleasure of working with Christian on some recent projects to virtualise a SQL Server estate.

Christian’s post will be focusing on resizing LUNS in vSphere…a subject which we are all likely to have interest in.

As a hobby Christian writes and develops mobile apps for the Android market. I’m told that he is actually very good at it. His BBC Browse Cast application had in the region of 20,000 downloads the last time I checked with him… It is probably much more by now.

Chris’ post will be published in the next couple of weeks

Tuesday, 16 August 2011

Auditing SQL Logins Addendum

One thing that has come out of auditing successful logins is the number of applications called ".Net SqlClient Data Provider". This is slightly annoying because if all we record is a generic name we can't pick out a rogue application.

This can be easily fixed by adding " Application Name=AppName" to the connection string of the .NET application.


connectionString="Data Source=Server1;Initial Catalog=master;Integrated Security=SSPI; Application Name=Application1;" providerName="System.Data.SqlClient"

Monday, 15 August 2011

Auditing SQL Logins Part II

Written by David Postlethwaite

In a previous blog post I talked about how to audit successful logins. So what about failed logins?

As mentioned before, logon auditing can be configured under server properties. Here you can audit failed logins.




This puts the audit records into the Windows event log but a more useful solution would be to have these failed logins written to a table.

It turns out to be quite simple using the master..xp_ReadErrorLog command, which reads from the SQL Server error log.


So first let's create a table to store the results


  CREATE TABLE [dbo].[lv_Failed_Logins](

      [Failed_Date] [datetime] NULL,

      [Failed_Userid] [varchar](50) NULL,

      [Failed_Hostname] [varchar](50) NULL,

      [Failed_ServerName] [varchar](100) NULL
  )

In this example we will run a scheduled job every night to write the failed logins into a table

First we will cycle the error log using sp_cycle_errorlog. This makes the current log file an archive file and creates a new empty log.

EXEC master..sp_cycle_errorlog
We will then run the following procedure. This will read the first archive file (the one that we've just created), find every line that contains the words 'failed' and 'login', and write it to a temporary table.

We then cycle through the temporary table looking for the right substrings to extract the login name and host name, and write them to our failed_logins table.

create procedure [dbo].[sp_lv_Failed_logins]
as
begin

declare @cmdF varchar(max)

declare @search1 char(6)

declare @search2 char(2)

declare @search3 char(8)

declare @search4 char(1)


select @search1 = 'user '''

select @search2 = '''.'

select @search3 = 'CLIENT: '

select @search4 = ']'

select @cmdF = 'master..xp_ReadErrorLog 1, 1, ''failed'', ''login'''


set nocount on


create table #lv_FailLog(date datetime,processinfo varchar(30),message varchar(4000))

insert #lv_FailLog exec (@cmdF)


insert      lv_monitoring..lv_failed_logins

  select date,

  substring(message, charindex(@search1,message) + 6,
  (charindex(@search2,message)) - (charindex(@search1,message)+6)),

  substring(message, charindex(@search3,message) + 8,
  (charindex(@search4,message)) - (charindex(@search3,message)+8)),@@SERVERNAME

  from      #lv_FailLog where message like '%user%'


drop table #lv_FailLog

end
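To make the substring arithmetic concrete, here is the same extraction in Python, using a message of the usual 'Login failed' shape (the sample text in the test is illustrative, not taken from a real log):

```python
def parse_failed_login(message):
    """Pull the login name and client address out of a SQL Server
    'Login failed' message using the same four marker strings as
    the T-SQL procedure (charindex/substring equivalents)."""
    s1, s2 = "user '", "'."        # brackets the login name
    s3, s4 = "CLIENT: ", "]"       # brackets the client address
    user_start = message.index(s1) + len(s1)
    user = message[user_start:message.index(s2, user_start)]
    host_start = message.index(s3) + len(s3)
    host = message[host_start:message.index(s4, host_start)]
    return user, host
```

The T-SQL does exactly this with charindex to find each marker and substring to take the text between them.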




By cycling the error log each night we can ensure that we don't get any duplicates, but if the job fails for any reason then we would lose that day's data.

The major disadvantage is that we only get failed logins for yesterday, never for today.

If we wanted an instant alert of a failed login we would need to use a log reader, like MS Operations Manager (SCOM), to pick up the error and report on it.

Monday, 8 August 2011

Putting a Database into Single User Mode

I was recently discussing with a group of budding DBAs how you go about performing a restore of a database. They were trying the restore for themselves and a few of them got an error about not having exclusive access:

Msg 3101, Level 16, State 1, Line 5
Exclusive access could not be obtained because the database is in use.
Msg 3013, Level 16, State 1, Line 5
RESTORE DATABASE is terminating abnormally.

This was a test setup and obviously we had other sessions connecting to the databases. These connections were preventing the restore.

If you are restoring a database and you have other sessions connected, it can be quite simple to kill those connections in one go by putting the database into single user mode with the ROLLBACK IMMEDIATE option. This will roll back any open transactions and allow you to restore the database. I like to wrap this around the restore statement:

alter database TestDB set single_user with rollback immediate
-- run the restore here, then return the database to multi-user:
alter database TestDB set multi_user



Most of this information, including example code, is available in Books Online, which is a great resource for all SQL Server professionals to make use of.

Friday, 5 August 2011

Audit SQL Logins

Written by David Postlethwaite
There are several ways that you can audit who has logged in to your SQL Server. The easiest way is to turn on logon Auditing under server properties. Here you can audit failed logins, successful logins, or both.
This will put the audit records into the Windows event log. The failed login audit is pretty useful because there shouldn't be many failures. But the successful login audit turns out not to be so helpful because it records every single login, including the service account. It turns out this logs into the server every few seconds, so you end up with so much data in the event log that it's impossible to find anything else, especially the rogue user.
The Enterprise edition has a built-in auditing option which writes to a user-defined event log, but this still suffers the same problem in that it audits every login.
What would be better is a way of selectively auditing only those logins we want to see (or not recording those we aren't interested in).
One way this can be achieved is by using a logon trigger to write the login information to a table. We can easily collect the login information we want:
App_Name() will tell us the calling application
HOST_NAME() will tell us the calling computer
eventdata() can return the login name
The trigger can capture the login data and run a query to filter out the logins we don't want to record and then write the data to a table.
In this case I chose to filter on the application name rather than a login name. My thinking here was that a user who can use the application might be tempted to try and use Excel or Access to view the raw data. This worked well but I found the table was filling up with records of me using SSMS. I didn't want to audit the DBA team doing their day to day job but I definitely wanted to audit anyone else using SSMS. So I added a second filter to look for a combination of User Name, Application Name and connecting Host Name.
First, create a table to hold the login information:
CREATE TABLE [dbo].[Succ_Logins](
      [Succ_ServerName] [varchar](100) NULL,
      [Succ_EventTime] [datetime] NULL,
      [Succ_EventType] [varchar](100) NULL,
      [Succ_LoginName] [varchar](100) NULL,
      [Succ_IPAddress] [varchar](100) NULL,
      [Succ_HostName] [varchar](100) NULL,
      [Succ_AppName] [varchar](100) NULL,
      [Succ_EventData] [xml] NULL
)
Second, create a table to hold the logins that we don't want to audit:
CREATE TABLE dbo.LoginsNotToAudit(
      [App_AppName] [varchar](100) NULL,
      [App_HostName] [varchar](100) NULL,
      [App_LoginName] [varchar](100) NULL
)
ALTER TABLE [dbo].[LoginsNotToAudit] ADD CONSTRAINT [DF_LoginsNotToAudit_App_HostName] DEFAULT ('') FOR [App_HostName]
ALTER TABLE [dbo].[LoginsNotToAudit] ADD CONSTRAINT [DF_LoginsNotToAudit_App_LoginName] DEFAULT ('') FOR [App_LoginName]
Add some records. Note that you can use a % wildcard in the value:
insert into dbo.LoginsNotToAudit (App_AppName) values ('Microsoft SQL Server')
insert into dbo.LoginsNotToAudit (App_AppName) values ('Report Server%')
insert into dbo.LoginsNotToAudit (App_AppName) values ('SQL Server Data Collector%')
insert into dbo.LoginsNotToAudit (App_AppName) values ('SQLAgent%')
insert into dbo.LoginsNotToAudit (App_AppName) values ('Data Collector%')
insert into dbo.LoginsNotToAudit (App_AppName) values ('DatabaseMail%')
insert into dbo.LoginsNotToAudit (App_AppName) values ('Microsoft SQL Server Management Studio - Transact-SQL IntelliSense')
insert into dbo.LoginsNotToAudit ([App_AppName],App_HostName,App_LoginName) values('.Net SqlClient Data Provider%','Computer1','User1')
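The exclusion rule these rows drive can be sketched in Python, translating SQL LIKE's % wildcard into a shell-style pattern (the function and variable names here are hypothetical, purely to illustrate the logic):

```python
from fnmatch import fnmatch

def should_audit(app, host, login, exclusions):
    """Return False when a login matches an exclusion row, mirroring
    the logon trigger: a row excludes on application name alone when
    its host and login columns are blank, or on the full
    app/host/login combination otherwise."""
    def like(value, pattern):
        # SQL LIKE is case-insensitive by default; % maps to *
        return fnmatch(value.lower(), pattern.lower().replace('%', '*'))
    for ex_app, ex_host, ex_login in exclusions:
        if like(app, ex_app) and ex_host == '' and ex_login == '':
            return False
        if like(app, ex_app) and ex_host == host and ex_login == login:
            return False
    return True
```

So SQLAgent connections are never audited, while ".Net SqlClient Data Provider" is excluded only for the named host and user.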
And here is the trigger:

CREATE TRIGGER AuditSQLLogins
on all server
with execute as 'sa'
for logon
as
begin
  DECLARE @event XML
  SET @event = eventdata()
  IF (select count(App_AppName) from auditDB.dbo.LoginsNotToAudit
      where (App_Name() LIKE App_AppName AND App_HostName = '' AND App_LoginName = '')
      or (App_Name() LIKE App_AppName AND App_HostName = HOST_NAME()
          AND App_LoginName = CAST(@event.query('/EVENT_INSTANCE/LoginName/text()') AS VARCHAR(100)))) = 0
  BEGIN
    INSERT INTO auditDB.dbo.Succ_logins (Succ_EventTime,Succ_EventType,Succ_LoginName,Succ_IPAddress,Succ_AppName,Succ_HostName,Succ_ServerName,Succ_EventData)
    VALUES(CAST(CAST(@event.query('/EVENT_INSTANCE/PostTime/text()') AS VARCHAR(64)) AS DATETIME),
       CAST(@event.query('/EVENT_INSTANCE/EventType/text()') AS VARCHAR(100)),
       CAST(@event.query('/EVENT_INSTANCE/LoginName/text()') AS VARCHAR(100)),
       CAST(@event.query('/EVENT_INSTANCE/ClientHost/text()') AS VARCHAR(100)),
       APP_NAME(),
       HOST_NAME(),
       @@SERVERNAME,
       @event)
    -- This will not stop the trigger from returning an error
  END
end
I've run this on several 2008 servers without any noticeable degradation in performance.
But there is a major drawback of using a Login trigger.
If the code causes an error then the trigger will fail. If it fails then users can't log in; they will receive the error "Logon failed for login 'Domain\username' due to trigger execution."
No amount of try-catch, error trapping or transaction rollbacks will help; any error will cause the trigger to exit as failed. I found this out when a new application started using the server but wasn't added to the LoginsNotToAudit table: the database ran out of room, so the insert failed, so the trigger failed, and no one could log in. I now have a daily job that counts the records in the table to check it's not filling up too quickly.
If the trigger is failing then the only way to login is to use the Dedicated Administrator Connection or DAC. The Login Trigger doesn't fire when you login using the DAC.
  • Only sysadmins can use DAC
  • There can be only one DAC connection at a time
  • To use DAC with SSMS, prefix the server name with ADMIN:. The important point with SSMS is that you can only open a query window with the DAC; you can't use Object Explorer, since it uses more than one connection – you will get an error.
  • You can also use SQLCMD with the "-A" switch:  SQLCMD -A -S [SQL Server Name] -U [User Name] -P [Password] -Q [query]
  • By default you can't connect remotely using the DAC so make sure you can RDP to the SQL Server
To allow remote connections use sp_configure, but note there are security implications of allowing remote DAC connections:
sp_configure 'remote admin connections', 1
reconfigure with override
Once you have your DAC connection you can disable the trigger using
disable trigger AuditSQlLogins ON ALL SERVER
If you do need to audit logins to SQL Server this does work very well, but it does require some administration. Make sure all the DBAs know how to disable the trigger in the event it does fail, and keep an eye on the number of records you are collecting. In my experience, once you have filtered out the DBAs and the application connections there aren't that many records, which makes looking for the rogue user a bit easier.
