Thursday, 29 April 2010

When was SQL Server last restarted?

How do you tell when your SQL instance was last restarted?

I was reading a post by Joe Webb (Blog|Twitter) about a query he uses to gather information about his clients’ SQL Server instances, including how he finds out when an instance of SQL Server was last restarted.

You can find Joe’s script and his latest post here.

He basically queries sys.dm_exec_sessions to find the login time of the first session.
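A minimal sketch of that idea (not Joe’s exact script) looks something like this:

-- Sketch: the first system session logs in when the instance starts,
-- so its login_time approximates the last restart time.
SELECT login_time
FROM sys.dm_exec_sessions
WHERE session_id = 1;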

As Joe mentions in his post, there are a number of ways to get this information. I use a slightly different method where I query sys.databases to get the create time of tempdb. Tempdb is recreated each time SQL Server is restarted, so the create date and time of tempdb will also give you the date and time of the last restart of the instance. My query looks like this:

SELECT  create_date
FROM sys.databases
WHERE database_id = 2 --TEMPDB's database_id


which produces the following result:



[Screenshot: LastRestart – query result]
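As a small variation (not from Joe’s post, just a sketch), you can also turn that create date into a rough uptime figure:

-- Sketch: tempdb's create_date doubles as the last restart time,
-- so DATEDIFF gives an approximate uptime in hours.
SELECT  create_date AS LastRestart,
        DATEDIFF(HOUR, create_date, GETDATE()) AS UptimeHours
FROM    sys.databases
WHERE   database_id = 2; --TEMPDB's database_id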

Wednesday, 28 April 2010

SQL Server Connection Strings

The purpose of this short post is to demonstrate a simple method of generating an accurate SQL Server connection string for your SQL Server servers and databases. This tip was first shown to me by a friend of mine, Paul Blackwell, a while back, and I wanted to share it with you here.

As a prerequisite to this tip you need to be using a Windows-based operating system.

In order to build our SQL Server connection string we need to clear all open windows and have the desktop in front of us…

so click the <show desktop> button

<Right click> on the desktop

Select <New> <Text Document>

[Screenshot: ConnString1]

Call it <sqlconn>

Open the new text document called <sqlconn> on your desktop

When the document is open select <File><Save As>

Keep the file name the same but change the <save as type> to all files.

After the file name, add the extension .udl as in the screenshot below and click <Save>

[Screenshot: ConnString2]

There will now be a second item on the desktop called <sqlconn>, which will look similar to this (depending on your version of Windows):

[Screenshot: ConnString3]

Double-clicking on the icon will take you to a Data Link Properties page like the following.

Select the <Provider> tab, select <Microsoft OLE DB Provider for SQL Server> and click <Next>

[Screenshot: ConnString4]

On the <Connection> tab enter the details of the server you want to connect to, including any login details. I always like to test the connection too.

[Screenshot: ConnString5]

Click <OK>

This will take you back to the desktop…

The final step is to get the code for your SQL Server connection string…

<Right Click > on the .UDL file <sqlconn>
Select <open with> and choose notepad
And the file that opens should contain the following code:

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=RBBS;Data Source=localhost




You can use this method to create a number of connection strings. In this tip we have looked at generating a SQL Server connection string, but the same technique can be used for a host of other connection strings, including Analysis Services (SSAS).
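As a side note (not part of the original tip), one place you can exercise a provider-style connection string from T-SQL is an ad hoc OPENROWSET query. A minimal sketch, assuming the ‘Ad Hoc Distributed Queries’ option is enabled and using the SQL Native Client provider rather than the SQLOLEDB provider shown in the .udl above:

-- Sketch only: ad hoc query through an OLE DB provider string similar
-- to the one the .udl file generates. Substitute your own server name.
SELECT d.name, d.create_date
FROM OPENROWSET('SQLNCLI',
                'Server=localhost;Trusted_Connection=yes;',
                'SELECT name, create_date FROM master.sys.databases') AS d;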

Tuesday, 27 April 2010

SQL Server Replication Overview

For my next series of blog posts I intend to document and write about SQL Server replication. This is a pretty large topic and I make no claims to be an expert in replication, but over the last couple of years I have been working with replication more and more, and I think it is time I wrote some blog posts on the topic to document and reinforce my understanding of the subject and hopefully learn a lot along the way.


SQL Server replication is a technology that can be used to distribute and update data from one database to another, and to synchronise the databases to maintain consistency.


SQL Server offers the following types of Replication:

Snapshot Replication - a snapshot of the database is taken at a point in time and the whole data set, rather than individual transactions, is applied to the subscriber. This is not a continuous process, so the subscribing database always has some latency when compared to the publishing database.

Transactional Replication - transactional replication allows incremental changes to be applied to the subscribing databases. These changes can be applied either continuously or at set intervals. Transactional replication is normally used when there is a high number of write transactions against the database (INSERT, UPDATE, DELETE).

Merge Replication - I struggled to put merge replication into words, but Books Online describes it as "data changes and schema modifications made at the Publisher and Subscribers are tracked with triggers. The Subscriber synchronizes with the Publisher when connected to the network and exchanges all rows that have changed between the Publisher and Subscriber since the last time synchronisation occurred." I think merge replication is probably the most complicated and difficult type of replication to configure and manage.

The type of replication you choose for a given application will vary according to a number of factors. This MSDN article http://msdn.microsoft.com/en-us/library/ms152565.aspx details what type of replication is best suited for a given scenario.

I have worked mainly with snapshot and transactional replication, so I will start with some posts on how to configure these two types of replication. In truth I have not done much with merge replication, but I will be looking at it here too. All these will be in posts over the next few weeks and months, but for the purposes of this post I am going to explain some of the components of replication:

The Distributor - can generally be thought of as the link between the components involved in replication. It plays a key role in 'distributing' data from the publisher to the subscribers.

Publisher - the publisher is the source database that makes its content available to those wishing to subscribe to it. The publisher holds the data to be replicated.

Subscriber - stores the replicated data and receives updates.

Article – “a grouping of data to be replicated”
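To make those terms a little more concrete, here is a minimal sketch (not a working configuration: the database, publication and table names are made up, and a distributor must already be configured) of how a publication and an article might be created with the replication stored procedures:

-- Sketch only: enable a hypothetical database for publishing,
-- create a transactional publication and add one table as an article.
USE [SalesDB]  -- hypothetical publisher database
GO
EXEC sp_replicationdboption @dbname = N'SalesDB', @optname = N'publish', @value = N'true';

EXEC sp_addpublication @publication = N'SalesPub',
                       @status = N'active',
                       @repl_freq = N'continuous';  -- transactional

EXEC sp_addarticle @publication = N'SalesPub',
                   @article = N'Orders',
                   @source_owner = N'dbo',
                   @source_object = N'Orders';
GO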

There are several agents that play a key role in replication; these are:

  • Snapshot Agent
  • Log Reader Agent
  • Queue Reader Agent
  • Distribution Agent
  • Merge Agent

These agents are standalone programs. By default they are run as SQL Server Agent jobs, so bear in mind that SQL Server Agent needs to be running for these jobs to run successfully.
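If you want to see which replication agent jobs exist on an instance, one quick way (a sketch, relying on the fact that the replication job categories all start with ‘REPL-’) is to query the Agent job tables in msdb:

-- Sketch: list SQL Server Agent jobs that belong to the replication
-- job categories (REPL-Snapshot, REPL-LogReader, REPL-Distribution, etc.).
SELECT  j.name AS job_name,
        c.name AS category
FROM    msdb.dbo.sysjobs AS j
        INNER JOIN msdb.dbo.syscategories AS c
            ON j.category_id = c.category_id
WHERE   c.name LIKE 'REPL-%'
ORDER BY c.name, j.name;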

Next we’ll be looking at configuring replication.

Wednesday, 21 April 2010

Microsoft Teleconference – Ted Kummert (Senior VP) and Tom Casey (General Manager of BI)

Microsoft held a big teleconference today, which I unfortunately got called away from a few minutes into. The call was basically the launch of SQL Server 2008 R2. There are numerous posts floating around from some high-profile community members:

Brent Ozar’s post is here

Mr Denny’s Post is here

From what I can gather there is an evaluation copy available to download today, TechNet and MSDN subscribers get it on 3rd May, and Volume License customers get it later, on 13th May. Check out the above posts for more detailed, and probably more accurate, info though; I did miss most of the call, after all.

Virtualisation – An Introduction

For the last two years or so, in one capacity or another, I have been running SQL Server in virtual land, and I wanted to start writing some blog posts on the subject.

I thought I would start by providing a definition of virtualisation, then thought better of it and decided to leave it to the virtualisation experts. Scott Lowe, in his book Mastering VMware vSphere 4 (you can find his blog here), provides a great definition of virtualisation:

“Virtualisation is the abstraction of one computing resource from another computing resource...when most information technology professionals think of Virtualisation, they think of hardware Virtualisation: abstracting the operating system from the underlying hardware upon which it runs and thus enabling multiple operating systems to run simultaneously on the same physical server.”

So the crux of virtualisation is this: you have one (more likely many) relatively powerful server acting as the physical host, with a hypervisor technology installed, and from that setup you have enough resources to run many virtual machines (VMs), all sharing the host’s (physical server’s) resources.

I assume you are reading this as a SQL Server professional thinking about running SQL Server in a virtual environment. My initial thought about virtualisation was “It might work for small web servers and application servers, but it won’t be suitable for my resource-hungry SQL Servers.” I have since come around to the idea of virtualisation and I don’t think that anymore. That said, it is not necessarily true that virtualisation will suit all SQL Servers, so the answer to the question “Should I virtualise my SQL Servers?” is, like the answer to most DBA questions, “It depends!” Brent Ozar (Blog|Twitter) has some great articles and posts on his blog about why you should and why you shouldn’t virtualise SQL Server:

Why Would You Virtualize SQL Server?

Reasons Why You Shouldn’t Virtualize SQL Server

If you are thinking about or researching running SQL Server on a virtual platform I recommend checking out Brent’s blog as a place to start.

So to the virtualisation software… In truth I have not got down to the nitty-gritty of installing and setting up the virtualisation software myself; that has fallen to the sys and SAN admins. My experience has been with VMware’s ESX, although Microsoft has a hypervisor offering in the form of Hyper-V, and Xen is an open-source hypervisor.

The main benefit of virtualising SQL Server, or any server for that matter, is that you can greatly reduce the number of physical servers in your data centre, reducing your power consumption and the heat generated (and cooling needed) by your server farm, which in turn can result in lower costs and a reduced carbon footprint. As a DBA you will be concerned about getting the same performance from your virtual servers as from your bare-metal servers. From my experience thus far I have got near enough the same performance from a virtually provisioned server as from a bare-metal box.

You also get a lot more flexibility with virtual hardware, servers can be provisioned very quickly at very little cost.

As I have said, there are many benefits to virtualisation and some drawbacks; it’s important to consider both sides when making the decision to virtualise a SQL Server box.

Thursday, 15 April 2010

SQL Server 2008 R2 – Release Date

This is a very short post

I’m probably very late to this party but…SQL Server 2008 R2 is scheduled for release on 21st May 2010.

Third Party Backup Tools – Redgate SQL Backup

To continue my little blog series on SQL Server backups: no series would be complete without a look at some of the third-party products that exist to supplement a backup plan with functionality not always offered by the native SQL Server backup.
My favourite tool, and the backup product I have used the most over the years, is Redgate’s SQL Backup. There are other tools out there, such as Hyperbac (which has just been acquired by Redgate) and LiteSpeed, but for one reason or another I have used Redgate over the years and have been very pleased with it.

Reasons for using Third Party Tools

“SQL ships with its own backup utility!” I hear you cry. “Why would you spend some extra $ on a third party tool when out of the box SQL Server can do this stuff for you?” Well I have a couple of reasons:
Compression – The main reason I started using Redgate SQL Backup, way back on SQL Server 2000, was its ability to compress the backups it took. This remained true for SQL Server 2005; with relatively large databases to look after for some clients, disk space at a relative premium, and in some cases the need to keep three days’ worth of daily full backups on disk for speed of recovery (backups were stored on tape too), this tool proved to be a real space saver.
Here’s an example:
This is a backup of the AdventureWorksDW database on a SQL Server 2008 instance (more about SQL Server 2008 shortly). The database is approximately 85MB.
The native backup, with no compression, looks like this:
BACKUP DATABASE [AdventureWorksDW2008] TO DISK =
N'C:\DATA\BackupComp\AW_Native_backup.bak' WITH NOFORMAT, NOINIT, NAME =
N'AdventureWorksDW2008-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,
NO_COMPRESSION, STATS = 10
GO



The Redgate script (Redgate SQL Backup needs to be installed on the SQL instance for this script to work) looks like this:


EXECUTE master..sqlbackup
'-SQL "BACKUP DATABASE [AdventureWorksDW2008]
TO DISK = ''C:\DATA\BackupComp\AW_Redgate_Backup_Compressed.sqb''
WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4"'





[Screenshot: BackupFileComparison]


As you can see, even with this small database the percentage disk-space saving is significant; with large databases the third-party compressed backup can really save disk space.


Native backup compression is included in SQL Server 2008 Enterprise Edition and, new for SQL Server 2008 R2, in both Enterprise and Standard editions. This link gives details of the R2 compression with Standard edition.


How does the native compression compare to the third-party tool?


Here’s a native SQL Server 2008 backup with compression:


BACKUP DATABASE [AdventureWorksDW2008] TO DISK =
N'C:\DATA\BackupComp\AW_Native_Compressed.BAK' WITH NOFORMAT, INIT, NAME =
N'AdventureWorksDW2008-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,
COMPRESSION, STATS = 10
GO



Here are the backup files and their respective sizes


[Screenshot: NativeVRedGate]


As you can see, the third-party tool’s compression still outperforms the native compression in terms of disk-space footprint.
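If you want to check the figures without looking at the files on disk, one option (a sketch, assuming SQL Server 2008 or later, where the compressed_backup_size column exists) is to query the backup history in msdb:

-- Sketch: compare the logical backup size with the size written to disk
-- for recent backups of the database.
SELECT   bs.database_name,
         bs.backup_start_date,
         bs.backup_size / 1048576.0            AS backup_size_mb,
         bs.compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM     msdb.dbo.backupset AS bs
WHERE    bs.database_name = N'AdventureWorksDW2008'
ORDER BY bs.backup_start_date DESC;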


Encryption – Native SQL backups are not encrypted, which means that data stored in the backup file can be read in a text editor, so someone with access to the file system, who shouldn’t have access to the database, could potentially see the data. With Redgate SQL Backup and other third-party tools you can encrypt the backup so it cannot be read without being decrypted.
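The encryption is driven from the same sqlbackup syntax used earlier. A hedged sketch (check your version’s documentation for the exact options; the file path and password below are placeholders):

-- Sketch only: Redgate SQL Backup with compression and encryption.
-- PASSWORD and KEYSIZE are the options that enable encryption.
EXECUTE master..sqlbackup
'-SQL "BACKUP DATABASE [AdventureWorksDW2008]
TO DISK = ''C:\DATA\BackupComp\AW_Redgate_Encrypted.sqb''
WITH COMPRESSION = 4, KEYSIZE = 256, PASSWORD = ''StrongPasswordHere''"'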


This article has highlighted some of the benefits of using third-party tools, and a full round-up would not be complete without highlighting what I think is the main negative: you need to have the software installed on all the instances that you intend to restore to. For example, if you use the third-party product to back up a database on a production server and you then want to take that backup file and restore it to a test or dev environment to work with production data, you need an extra licence for the software in that environment. This can increase the cost slightly. That said, Redgate (I can’t really speak for the other vendors) do provide a utility to convert their backup files to native SQL Server backups for restoring to servers without their software installed, so although not completely straightforward, it is possible to restore third-party backup files to servers without the third-party tool installed.


Related Posts



Backup All Your Databases


Transaction Log Backups


Managing Your Backup Files


Differential Backups

Tuesday, 13 April 2010

T-SQL Tuesday #005 – Offloading Reporting (#TSQL2sDay)

Aaron Nelson (Blog|@SQLvariant) is hosting this month’s #TSQL2sDay with the theme of reporting. My post looks at some of the things I have done in the past to offload reporting to another server, to reduce the overhead of running reports against production OLTP databases.
A few years ago I was implementing an HA solution in SQL Server 2000, using clustering to protect against node failure and log shipping to maintain a second, off-site copy of the database. The organisation wanted to implement a reporting and BI solution so that management information could be made available on demand for selected reports, and also to load a data warehouse. Even though we copied the logs over to the standby server they were only restored once a day, so it was possible for the log-shipped DB to be a day behind. If we ever needed to recover the DB in the event of a failover, at most one day’s worth of logs would need to be applied before the database could be used, and the day’s latency was acceptable to the report writers and the load process.
The OLTP solution was a 24-hour service with roughly a 50-50 split between reads and writes, and because of the nature of the business it was not easy to differentiate between peak and off-peak times for scheduling the data mart load or resource-intensive reports. So we went for the option of making the log-shipped database read-only, and allowing the report writers’ read-intensive queries and the load process to read from the read-only log-shipped copy, thus not interfering with the clustered production database.
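For anyone who hasn’t done this, the read-only standby is achieved by restoring the log backups with the STANDBY option rather than NORECOVERY. A minimal sketch, with made-up database and file names:

-- Sketch: restore a log backup onto the standby server, leaving the
-- database readable between restores by using an undo (standby) file.
RESTORE LOG [SalesDB]                        -- hypothetical database name
FROM DISK = N'C:\LogShip\SalesDB_2300.trn'   -- hypothetical log backup file
WITH STANDBY = N'C:\LogShip\SalesDB_undo.dat';
GO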
This worked well and definitely helped spread the load. In later versions of SQL Server I may well have used database mirroring to kill two birds with one stone: I would still have needed to cluster the instance, but I could have used database mirroring and database snapshots to maintain a second, up-to-date, off-site copy of the database for reporting purposes, and also to allow automatic failover should the entire cluster become unavailable. Setting that up is probably another post entirely, though.
I guess the point I’m trying to make here is that if you have a database which has to satisfy the demands of multiple workloads and functions, there can be some benefit in splitting the read-intensive queries of reports and data loads off to a different server.

Monday, 12 April 2010

db_executor role

This approach might not be granular enough for some, but there is a nice simple way of creating a database role in a SQL Server database (called db_executor) and granting that role execute permissions on all stored procedures; then, when you want to give someone execute permissions on all stored procedures, you simply add them to the db_executor role. As I have already said, this may not be for everyone, but I have used it several times for various clients.

I will use the AdventureWorks database as an example here:

First we will create a role in the database called db_executor:
USE [AdventureWorks]
GO
CREATE ROLE [db_executor] AUTHORIZATION [dbo]
GO


The next step is to grant the db_executor role execute permissions, which can be done as follows:


--GRANT Execute permission to the role
GRANT EXECUTE TO [db_executor]
GO


We will next create a login and user that can be added to the db_executor role in Adventureworks to pick up the permissions:


Create the login:


USE [master]
GO
CREATE LOGIN [Test] WITH PASSWORD=N'Pa$$w0rd', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO


Create the user for the login


USE [AdventureWorks]
GO
CREATE USER [Test] FOR LOGIN [Test]
GO


Add the user to the database role db_executor:


USE [AdventureWorks]
GO
EXEC sp_addrolemember N'db_executor', N'Test'
GO
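
To sanity-check the role, one option (a sketch, using the login created above) is to impersonate the user and list its effective database-level permissions:

-- Sketch: impersonate the Test user and confirm EXECUTE now appears
-- amongst its database-level permissions.
USE [AdventureWorks]
GO
EXECUTE AS USER = N'Test';
SELECT * FROM fn_my_permissions(NULL, 'DATABASE');
REVERT;
GO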

Thursday, 8 April 2010

Q1 - review

I didn’t post my professional goals for the year on my blog at the start of this year; my blog was new and I wasn’t sure that it was appropriate. Instead I made a written note in my paper-based diary. I still keep one of those, not for managing my appointments or scheduling my time (I have an iPhone for that and it handles it just fine), but I find it good for scribbling down notes and ideas when I don’t have my laptop or computer to hand, and if you do it properly it keeps a note of the date you wrote the notes too. Having read everybody else’s goals, though, I will definitely publish next year’s goals here at the end of this year.
One of the initial goals was to publish on my blog regularly… when I wrote that down and reviewed it I saw that this was not really a measurable target, and I needed to put some numbers next to it to be able to measure it successfully. So I thought about it and aimed to post at least once, preferably twice, a week, every week. This would mean I would have between 52 and 104 posts on my blog by the end of the year, which gave me a measurable target. I have currently been going at around three a week; I had a lot I wanted to write about and got about six posts ahead, which I thought was great. Lately, though, I have suffered a little from writer’s block, and although I have not run out of things to write about, they are all quite large subject areas, so they will take a bit more effort and the frequency with which I publish may suffer. As it stands, though, I am on course to meet my initial target in terms of the number of posts.
Another closely related goal was to improve the readership and exposure of my blog; when I started it last year, Google Analytics and the FeedBurner stats told me that nobody was reading it, which can be disheartening. Through some help from SSC syndication I managed to get a post on the home page recently too. This month I have had over 500 unique visitors, so things are going in the right direction.
I want to learn more about PowerShell, and I will try to post on the blog as I learn. I have one PowerShell post so far, mainly detailing resources that could be useful for learning the subject. I will get to these soon.
I’d like to read all the chapters in my MVP Deep Dives book. I have read about a quarter of them thus far, so I’m on course for that too.
I’ll post some more about this year’s goals/targets later in the year.

Tuesday, 6 April 2010

Uninstall 64 bit and reinstall 32 bit SQL Server

For reasons beyond the scope of this post I needed to uninstall a 64-bit version of SQL Server Standard Edition (including Integration Services) and re-install the 32-bit SQL Server Standard Edition.

The uninstall seemed to complete successfully, so I went on my merry way and attempted the re-install of the 32-bit version.

The database engine installed perfectly well, but when the installation got to Integration Services it failed with the following error:

Failed to install and configure assemblies E:\Program Files (x86)\Microsoft SQL Server (x86)\90\DTS\Tasks\Microsoft.SqlServer.MSMQTask.dll in the COM+ catalog. Error: -2146233087
Error message: Unknown error 0x80131501
Error description: One or more of the components being installed are already registered as 64 bit components in the target application. You must install the 32 bit versions of the components being installed in a different COM+ application, or delete the existing 64 bit versions of the components being installed from the target COM+ application prior to attempting install of the 32 bit versions. COM+ applications cannot contain bit neutral components.

After a few expletives and a Google search I found this post on the MSDN forums; I had got the exact same error as the OP:

http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/thread/a433b076-5b6f-48e0-9927-76f7f5a1e0ab

so I followed the proposed answer by Ekrem Onsoy:

- Open: Start\Programs\Administrative Tools\Components Services,
- Expand "Component Services" -> "My Computer" -> "COM+ Applications",
- You'll see "Microsoft.SqlServer.MSMQTask" component. Right click on it and Delete it.
- If that dialog box is open which poped up during SQL Server Setup then click Retry, otherwise start over and this problem will be gone.

My next attempt at a reinstall resulted in a different error:

"Failed to install and configure assemblies E:\Program Files (x86)\Microsoft SQL Server (x86)\90\DTS\Tasks\Microsoft.SqlServer.MSMQTask.dll in the COM+ catalog. Error: -2146233087
Error message: Unknown error 0x80131501
Error description: FATAL: Could not find component 'Microsoft.SqlServer.Dts.Tasks.MessageQueueTask.ServCompMQTask' we just installed."

All other components, including the database engine, seemed to install correctly.

I tried the proposed fix above and the component was not there. I rebooted the machine and started again and the install completed successfully.

Sunday, 4 April 2010

Differential Backups

My last few posts have been backup orientated: I have a post on full backups, a post on transaction log backups and a note on how to manage backup files. The next in my little series on backups is a short post on differential backups in SQL Server.

What is a Differential Backup?

A differential backup in SQL Server is a backup of all changes made to the database since the last full backup was taken.

How would you use a Differential Backup?

If you have a large database that changes by a small amount each day, then to narrow your backup window, and possibly your restore time, you can make use of differential backups. For example, if you currently take a daily full backup of your 250GB database, you could reduce the backup window during the week by switching to a weekly full backup and taking a differential backup each day in its place.
Or, if point-in-time recovery is important to you (it is for many people) and you take transaction log backups at a high frequency between each daily full backup, say every 15 minutes, then you end up with a lot of transaction log backup files that need to be applied in the event of a restore. You can utilise differential backups to reduce the number of transaction logs that need to be applied and at the same time speed up the restore process.
For example, let’s say you take a full database backup at 21:00 each day and log backups every 15 minutes. If you then find yourself in a situation where you need to restore your database to 20:30, you need to restore from the full backup taken the previous night and then apply all the log files taken since then up to 20:30… That is a lot of files to apply: including the full backup file, that’s 95 files in total!
If you add to your backup schedule a differential backup every four hours throughout the day in between the full backups, the differential backups will be taken at 01:00, 05:00, 09:00, 13:00 and 17:00. Then, instead of 95 files, you will restore the full backup, the most recent differential backup and the 14 transaction log backups taken since the 17:00 differential: a total of 16 files, which is a much more manageable number.
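For completeness, the restore sequence for that scenario would look something like the sketch below (file names and the STOPAT time are made up; only the final restore uses RECOVERY):

-- Sketch: restore the full backup, then the latest differential, then the
-- log chain, stopping at the required point in time.
RESTORE DATABASE [MyDB] FROM DISK = N'C:\Backup\MyDB_Full_2100.bak'
    WITH NORECOVERY;
RESTORE DATABASE [MyDB] FROM DISK = N'C:\Backup\MyDB_Diff_1700.dif'
    WITH NORECOVERY;
-- ...repeat for each log backup taken from 17:15 onwards...
RESTORE LOG [MyDB] FROM DISK = N'C:\Backup\MyDB_Log_1715.trn'
    WITH NORECOVERY;
RESTORE LOG [MyDB] FROM DISK = N'C:\Backup\MyDB_Log_2030.trn'
    WITH STOPAT = N'2010-04-04 20:30:00', RECOVERY;
GO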

What are the Advantages of using a Differential backup?

The main advantage of differential backups lies in the fact that you have fewer logs to apply in a restore scenario, as demonstrated above.
You can also benefit from a reduced backup window.

What are the drawbacks of a differential backup?

Any restore is still dependent on the full backup. If the full backup is unavailable, the differentials are not usable.
If you have a highly volatile database that changes regularly, a differential backup may end up similar in size, and take a similar time, to a full backup, in which case you may as well take full backups.

How do you take a differential backup?

This is the code for taking a differential backup of my cricket club’s contact database:
BACKUP DATABASE [CricketContacts] TO DISK = N'C:\Backup\ContactsDiff.dif' WITH
DIFFERENTIAL, NOFORMAT, NOINIT, SKIP, NOREWIND, NOUNLOAD, STATS = 10,
CHECKSUM
GO



As you can see it is a similar statement to the one for taking a full backup, except that we specify the DIFFERENTIAL option. I like to give the files the .dif extension too, so I know what type of backup files they are.

Thursday, 1 April 2010

Powershell – Useful Resources

One of my aims at the start of the year was to improve my PowerShell and scripting skills. I ran a few Google searches looking for resources and had plenty of results returned, but separating the wheat from the chaff proved difficult. So I turned to Twitter and asked the SQL community using the #SQLHELP hash tag. As an aside, if you have a SQL problem, need an instant response and can formulate a question in less than 140 characters, add #sqlhelp to the end of your post and you’ll get a whole bunch of help… I digress. I got a whole heap of replies and thought I would collect the URLs in a text document for future reference; then I realised I should post them on my blog so everyone can use them, and here they are:

Useful PowerShell URLs

http://midnightdba.itbookworm.com/VidPages/PowershellStartJobs/PowershellStartJobs.aspx

http://blogs.technet.com/heyscriptingguy/archive/2009/04/20/windows-powershell-an-introduction.aspx

http://www.databasejournal.com/features/mssql/article.php/3752866/Check-your-SQL-Server-using-Windows-PowerShell--Part-1.htm

http://sqlvariant.com/wordpress/index.php/category/powershell/

http://sqlvariant.com/wordpress/index.php/2010/02/get-more-done-with-sqlpsx/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+Sqlvariations+(SQLvariations:+SQL+Server,+a+little+PowerShell,+maybe+some+Hyper-V)

http://appdev.sqlpass.org/MeetingArchive/tabid/2005/Default.aspx

Thanks to Aaron Nelson (@SQLvariant), Chuck Boyce (@chuckboycejr) and Lorie Edwards (@loriedwards) for their replies and help.

If you know of a useful PowerShell resource that is not shown here, please feel free to leave it in a comment.