Tuesday, 30 March 2010

SP_change_users_login and SQL Server 2008 onwards

Today I discovered, from a post on the forums and a follow-up dig into BOL, that sp_change_users_login is to be deprecated in a future version of SQL Server and the new ALTER USER syntax should be used instead. Let me make it clear: sp_change_users_login is still available in SQL Server 2008, but BOL suggests it will be removed in a future version.

Curious, I looked up ALTER USER in BOL and found that it does most of what sp_change_users_login does in terms of fixing orphaned users, with one small exception: it has no equivalent of

sp_change_users_login 'report'


to give us a report of any orphaned users in the database.



This prompted me to dig into the code of sp_change_users_login to see what the report returns: it runs a select against sysusers. According to BOL, the sysusers system table is included as a view for backward compatibility and will be removed in a future version. I checked the BOL entry that maps old SQL Server 2000 system tables to SQL Server 2005 system views, and sysusers becomes sys.database_principals in SQL Server 2005 onwards. Here's my attempt at finding orphaned users; I have tried this in two tests and it returns the same results as sp_change_users_login:



SELECT dp.name, dp.sid
FROM sys.database_principals dp
WHERE
dp.sid NOT IN (SELECT sid FROM sys.server_principals)
AND dp.type = 'S' -- SQL Server user
AND dp.sid IS NOT NULL -- the database principal has a SID
AND dp.sid <> 0x00 -- exclude guest, whose SID is 0x00



So run the above select to pull out orphaned users. I have one with a user name of SCTest in a database just restored from a production backup into test, and I know I have a login called SCTEST on the test instance, so how do I match them up using the new syntax?



ALTER USER SCTest WITH LOGIN = SCTest


This will match the database user with the server login.
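If you have several orphaned users to fix, a small extension of the earlier query can generate the ALTER USER statements for you. This is just a sketch built on the query above; it assumes a login with a matching name already exists on the instance, so review the generated statements before running them:

SELECT 'ALTER USER ' + QUOTENAME(dp.name)
     + ' WITH LOGIN = ' + QUOTENAME(dp.name) AS FixStatement
FROM sys.database_principals dp
WHERE dp.sid NOT IN (SELECT sid FROM sys.server_principals)
AND dp.type = 'S'
AND dp.sid IS NOT NULL
AND dp.sid <> 0x00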



I know best practice dictates always using Windows authentication where possible, but in case you have any legacy applications or non-Windows users hanging around, you may find this useful.

Monday, 29 March 2010

Scheduling backups in SQL Server Express

I have seen many forum posts asking how to schedule backups for SQL Server Express databases, and the answers always have similar responses that go something like:

“There is no SQL agent in Express…the only way to backup databases on a schedule is to write a script and call it from a batch file.”

I saw this recent Microsoft Knowledge Base article and thought I would post a link to it here; it talks you through in detail how to schedule backups in SQL Server Express using the above method:

http://support.microsoft.com/default.aspx?scid=kb;en-us;2019698&sd=rss&spid=2855
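For illustration, the method boils down to something like the following. The database name, paths and instance name here are hypothetical, so adjust them for your environment: save the T-SQL as a script file, then have Windows Task Scheduler run it via sqlcmd.

-- C:\Scripts\BackupMyDB.sql (hypothetical path and database name)
BACKUP DATABASE [MyDatabase]
TO DISK = N'C:\Backup\MyDatabase.bak'
WITH INIT, SKIP, STATS = 10
GO
-- Windows Task Scheduler then runs, on whatever schedule you need:
-- sqlcmd -S .\SQLEXPRESS -E -i "C:\Scripts\BackupMyDB.sql"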

Sunday, 28 March 2010

TempDB – Relocate the TempDB Database Files

I took control of a new server recently and noticed that tempdb was not configured to best practice. It was not on its own drive and it consisted of one data file and one log file. The server had a couple of cores; I'm not sure how relevant this is any more, but for SQL Server 2000 it was recommended you create one tempdb data file per CPU. I'm not sure that you need a file for each CPU/core in later versions, but as Brad McGehee says in his tempdb optimisation chapter in SQL Server MVP Deep Dives (this is my review of the book), it does give you a place to start.

My first script here moves the current data and log files to their new home.

USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'S:\SQLTEMP\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'S:\SQLTEMP\templog.ldf');
GO


Once you have run this you will need to restart SQL Server for the changes to take effect. You will also be left with copies of the old tempdb files in the old location; I like to delete these after the restart to avoid confusion later.



The next script below adds an additional data file to my tempdb database; this ensures the number of data files in tempdb matches the number of cores on my machine. The reason for multiple data files is a reduction in contention on each file, as IO requests are spread across them. This can be beneficial even if the files reside on the same volume, as is the case here; if you put each file on a volume of its own the performance gain is even greater, with a greater cost attached, of course.



Anyhow, the following script adds an additional data file to the tempdb database. It's important to keep the size of each data file identical, preferably sized according to tempdb usage, and to ensure the same auto-grow settings are set for each file. SQL Server's proportional fill algorithm will only distribute the load evenly if the files are the same size.



 
USE [master]
GO
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev', SIZE = 153600KB )
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'TempDev2', FILENAME = N'S:\SQLTEMP\TempDev2.ndf' , SIZE = 153600KB , FILEGROWTH = 10%)
GO
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'templog', SIZE = 51200KB )
GO
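After the restart, a quick query against sys.master_files confirms the new locations and that the data files are identically sized. This is just a handy check, not part of the configuration itself:

SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');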



As with all things, please take the time to test the optimal configuration in your environment, as multiple files may or may not benefit you.

Thursday, 25 March 2010

SQL Server 2008 Activity Monitor

For those people who like to use the Activity Monitor in the SQL Server Management Studio GUI instead of, or along with, sp_who, sp_who2 and the DMVs, it is not where you left it in SQL Server 2005.

To view the Activity Monitor in SQL Server 2008 Management Studio you need to:

1. Connect to the instance in Object Explorer

2. Right-Click the <Server Name>

3. Select <Activity Monitor>

It should then open as a pane in the right-hand window.

[Screenshot: Activity Monitor]

Brad McGehee has a good article over on Simple Talk where he explains the new improvements and features that can be found in Activity Monitor in SQL Server 2008: http://www.simple-talk.com/sql/learn-sql-server/management-studio-improvements-in-sql-server-2008/
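If you prefer the DMV route mentioned above, a rough equivalent of Activity Monitor's process list can be had with a query like this. This is only a sketch; Activity Monitor itself pulls from several DMVs:

SELECT r.session_id, r.status, r.command, r.wait_type, t.text AS current_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id > 50 -- filter out most system sessions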

Tuesday, 23 March 2010

Managing Your Backup Files

I recently posted a couple of scripts that back up all the databases on your SQL Server instance to disk with a date and time stamp, so you know from the file name when each backup was taken, which hopefully helps you identify how useful those backups are to you. The second script runs through the databases in full recovery mode and takes a transaction log backup of each one.
You can then combine these scripts with the SQL Agent to create and schedule jobs that ensure your backup schedule meets the restore requirements of your business. A 'standard' backup schedule that I have seen many shops implement, unless there is a specific requirement not met by this process:
Full backup – all databases, daily.
Transaction log backup – hourly throughout the day. All copied or mirrored to another server.
All of these disk backups are written off to tape on a nightly basis and then become part of the tape cycle.
Which brings us to the next issue: with at least 25 files being created per database daily, how long should they be kept on disk after they have been safely written to tape? That answer, like a lot of other answers to questions in the database sphere, is 'it depends'. It can depend on such things as how much disk space you have available for backups, and how far back you are likely to need to restore from. I know the backups are on tape, but if you need to do a restore, the time needed to perform it will increase significantly if the backup first has to be recovered from tape.
If I have the space available I generally keep between two and three days' worth of backup files. The following script, which I use for deleting old files, is a modified version of one by Andy Leonard (BLOG | @AndyLeonard):
declare @DeleteDate nvarchar(50)
declare @DeleteDateTime datetime
declare @BackupPath nvarchar(255)
Declare @FileType NCHAR(3)
set @DeleteDateTime = DateAdd(hh, -72, GetDate())
set @DeleteDate = ( Select  Replace(Convert(nvarchar, @DeleteDateTime, 111),
'/', '-') + 'T'
+ Convert(nvarchar, @DeleteDateTime, 108)
)
set @BackupPath = N'E:\Backup\'
set @FileType = N'bak'
EXECUTE master.dbo.xp_delete_file 0, @BackupPath, @FileType, @DeleteDate, 0
--The final parameter controls whether the extended proc descends into subfolders:
--set it to 1 to also delete matching files from the first level of subfolders



The final parameter of the extended stored proc lets you specify whether the proc deletes from the first level of subfolders beneath the folder specified. Again, this can be scheduled in a SQL Agent job and run in conjunction with the full and transaction log backups.


Related Posts



Transaction Log Backups


Full Database Backups

Sunday, 21 March 2010

Script: Transaction Log Backup

My last post showed a little script that I use for taking full backups of all the databases on an instance.
The script below takes a transaction log backup of every database in full recovery mode; again, it can be used in conjunction with the SQL Agent to schedule it regularly. This is a one-size-fits-all approach to transaction log backups, though, and it may not fit every environment perfectly. If you have a database that needs a different, more specific transaction log backup schedule than the rest of the instance, you will need to find a way to work that in.
DECLARE @DBName varchar(255)
DECLARE @DS VARCHAR(50)
DECLARE @Path VARCHAR(255)



DECLARE TLOG_BACKUP CURSOR FOR

select name from sys.databases
where recovery_model = 1 --Only return databases in full recovery 
AND name NOT IN ('TEMPDB') -- Exclude TEMPDB
AND state = 0 --Exclude offline databases, they won't back up if they're offline anyway
AND source_database_id IS NULL -- Excludes snapshots from the databases returned

OPEN TLOG_BACKUP


FETCH NEXT FROM TLOG_BACKUP
INTO @DBName

-- Check @@FETCH_STATUS to see if there are any more rows to fetch.
WHILE @@FETCH_STATUS = 0
BEGIN
--Set the filename values of the transaction log backup files
SET @DS = REPLACE(CONVERT(VARCHAR(10), GETDATE(), 111), '/', '') + '_'
+ REPLACE(CONVERT(VARCHAR(8), GETDATE(), 108), ':', '')
SET @Path = 'C:\DATA\Backup\'
SET @Path = @path + @DBNAME + '_'+ 'LOG'+'_' + @DS + '.trn'
--Take the backup

BACKUP LOG @DBNAME 
TO DISK = @Path 
WITH
FORMAT, INIT,  SKIP, NOREWIND,
NOUNLOAD, STATS = 10

FETCH NEXT FROM TLOG_BACKUP
INTO @DBName


END

CLOSE TLOG_BACKUP
DEALLOCATE TLOG_BACKUP
GO
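If you want to check in advance which databases the cursor will pick up, the same filter can be run on its own; recovery_model_desc makes the output a little friendlier to read:

SELECT name, recovery_model_desc
FROM sys.databases
WHERE recovery_model = 1
AND name NOT IN ('TEMPDB')
AND state = 0
AND source_database_id IS NULL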



Related Posts



Full Database Backups


Managing Your Backup Files

Thursday, 18 March 2010

Script: Backup All your Databases

We all know the importance of taking regular backups of our SQL Server databases. This is a useful little script that I use to take a full backup of all the databases on an instance. It can be used in combination with the SQL Agent to schedule it:
DECLARE @DBName varchar(255)
DECLARE @DS VARCHAR(50)
DECLARE @Path VARCHAR(255)



DECLARE Full_Backup CURSOR FOR

SELECT name from sys.databases 
WHERE name NOT IN ('TEMPDB')
AND state = 0 --Exclude offline databases, they won't back up if they're offline anyway
AND source_database_id IS NULL -- Excludes snapshots from the databases returned, these can't be backed up either

OPEN Full_Backup


FETCH NEXT FROM Full_Backup
INTO @DBName

-- Check @@FETCH_STATUS to see if there are any more rows to fetch.
WHILE @@FETCH_STATUS = 0
BEGIN
--Set the filename values of the backup files
SET @DS = REPLACE(CONVERT(VARCHAR(10), GETDATE(), 111), '/', '') + '_'
+ REPLACE(CONVERT(VARCHAR(8), GETDATE(), 108), ':', '')
SET @Path = 'C:\DATA\Backup\'
SET @Path = @path + @DBNAME + '_' + @DS + '.bak'
--Take the backup

BACKUP DATABASE @DBNAME 
TO DISK = @Path 
WITH
FORMAT, INIT,  SKIP, NOREWIND,
NOUNLOAD, STATS = 10

FETCH NEXT FROM Full_Backup
INTO @DBName


END

CLOSE Full_Backup
DEALLOCATE Full_Backup
GO






Your backups are only good if you can restore from them, so it is important to test this process regularly.
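One simple check you can schedule alongside the backups is RESTORE VERIFYONLY, which confirms a backup file is readable and complete without performing a full restore (the file name below is hypothetical). It is no substitute for a genuine test restore, though:

RESTORE VERIFYONLY
FROM DISK = N'C:\DATA\Backup\MyDatabase_20100318_120000.bak'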


Related Posts



Transaction Log Backups


Managing Your Backup Files

Tuesday, 16 March 2010

Enabling Filestreaming in SQL Server 2008

This is a short post on enabling FILESTREAM. I have read a few sources that cover the same topic, but I like to write about these things to reinforce my knowledge. If you are interested in those other useful resources (and, to be honest, this is where I learned about this process, along with actually doing it for myself), check out these websites:

Books Online (BOL)

The SQL Server Storage Engine Blog

Filestream Storage in SQL Server 2008 -  A white paper by Paul Randal

Let's assume that you have just completed your installation of SQL Server 2008, applied SP1, and you know that this new instance will be used to house a FILESTREAM database.

Open SQL Server Configuration Manager:
<START><Program Files><Microsoft SQL Server 2008><Configuration Tools><SQL Server Configuration Manager>

When Configuration Manager has opened, right-click on the SQL Server 2008 instance.

Select <Properties>

Select the <FILESTREAM> tab

Ensure that the following check boxes are selected:

  • Enable FILESTREAM for Transact-SQL access
  • Enable FILESTREAM for file I/O streaming access (this allows you to read and write FILESTREAM data from Windows); enter the name of the share
  • Allow remote clients to have streaming access to FILESTREAM data

Then select <APPLY><OK>.

[Screenshot: Enabling FILESTREAM]

You then need to run the following script from Management Studio:

EXEC sp_configure 'filestream access level', 2
RECONFIGURE



So we have enabled FILESTREAM for the 2008 instance. The next task is to create a FILESTREAM-enabled database.
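As a preview of that next task, a FILESTREAM-enabled database needs a filegroup declared with CONTAINS FILESTREAM, whose FILENAME points at a directory rather than a file. This is a minimal sketch; the database name and paths are hypothetical, and the parent folder must exist while the FILESTREAM directory itself must not:

CREATE DATABASE FileStreamDemo
ON PRIMARY
    (NAME = FileStreamDemo_data,
     FILENAME = 'C:\DATA\FileStreamDemo.mdf'),
FILEGROUP FileStreamGroup1 CONTAINS FILESTREAM
    (NAME = FileStreamDemo_fs,
     FILENAME = 'C:\DATA\FileStreamData')
LOG ON
    (NAME = FileStreamDemo_log,
     FILENAME = 'C:\DATA\FileStreamDemo.ldf')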

Sunday, 14 March 2010

Setting Up a Server Side Trace

Recently I wrote a short post on the default trace in SQL Server, which you can read here. In this post I am going to discuss the advantages of a server side trace over the SQL Server Profiler GUI, look at an easy method for creating server side traces (especially if you are familiar with the Profiler GUI), and demonstrate how to stop, start and remove a server side trace.

Server Side Trace Versus SQL Server Profiler GUI

The main advantage of a server side trace over its SQL Profiler counterpart is that server side traces use fewer resources than a Profiler trace. You can also start and stop server side traces programmatically.

OK, technically, to create a server side trace you need to learn the syntax of the following system stored procedures:

  • sp_trace_create
  • sp_trace_generateevent
  • sp_trace_setevent
  • sp_trace_setfilter
  • sp_trace_setstatus

I'm not going to explain the syntax of these procedures here; instead I'm going to show how you can get the Profiler GUI to generate the necessary scripts for you.

So you fire up the SQL Profiler GUI and set up your profiler trace. In this case I am looking to audit backup and restore events on one of my dev instances, but you can use whatever events suit your needs: performance tuning, auditing, etc.

The first step is to fire up profiler and connect to an instance of SQL Server.

Then you set up the trace as you would normally; you can save the results to a file if you like:

[Screenshot: Setting up the trace in the GUI]

Then move on to the <Event Selection> tab and select the events you want to include in your trace. In my case I am auditing backups and restores, so I selected the Audit Backup/Restore event, along with Audit Login and Audit Logout.

[Screenshot: Selecting the events]

You have to click <Run> here to start the trace in the GUI, which can be a bit of a pain if you don't want the trace to start straight away, but you can stop it as soon as it starts if you want to. In this case I let it run for a while and took a backup of my database to ensure it was capturing what I expected:

[Screenshot: GUI trace results]

I then stopped the trace and clicked through the menu File > Export > Script Trace Definition > For SQL Server 2005 – 2008, which generates the necessary script to create the Profiler trace on the server.

You have to pick a location and a name for your newly created script and click <Save>.

On the instance where I want to create and run the trace, I run the following query to return the currently running traces. On my instance I just have the default trace running:

select * from sys.traces



[Screenshot: Running traces]
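Alternatively, fn_trace_getinfo returns similar details as property/value pairs; passing DEFAULT returns information for all existing traces:

SELECT * FROM fn_trace_getinfo(DEFAULT)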



 



The output of my trace script is below. If I run this, after entering a valid file name for the trace, a server side trace capturing the same events as my Profiler trace above is created:



/****************************************************/
/* Created by: SQL Server 2008 Profiler */
/* Date: 10/03/2010 22:55:01 */
/****************************************************/


-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
set @maxfilesize = 5

-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share

exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere', @maxfilesize, NULL
if (@rc != 0) goto error

-- Client side File and Table cannot be scripted

-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 115, 7, @on
exec sp_trace_setevent @TraceID, 115, 23, @on
exec sp_trace_setevent @TraceID, 115, 8, @on
exec sp_trace_setevent @TraceID, 115, 40, @on
exec sp_trace_setevent @TraceID, 115, 64, @on
exec sp_trace_setevent @TraceID, 115, 1, @on
exec sp_trace_setevent @TraceID, 115, 9, @on
exec sp_trace_setevent @TraceID, 115, 21, @on
exec sp_trace_setevent @TraceID, 115, 41, @on
exec sp_trace_setevent @TraceID, 115, 49, @on
exec sp_trace_setevent @TraceID, 115, 6, @on
exec sp_trace_setevent @TraceID, 115, 10, @on
exec sp_trace_setevent @TraceID, 115, 14, @on
exec sp_trace_setevent @TraceID, 115, 26, @on
exec sp_trace_setevent @TraceID, 115, 50, @on
exec sp_trace_setevent @TraceID, 115, 3, @on
exec sp_trace_setevent @TraceID, 115, 11, @on
exec sp_trace_setevent @TraceID, 115, 35, @on
exec sp_trace_setevent @TraceID, 115, 51, @on
exec sp_trace_setevent @TraceID, 115, 4, @on
exec sp_trace_setevent @TraceID, 115, 12, @on
exec sp_trace_setevent @TraceID, 115, 60, @on
exec sp_trace_setevent @TraceID, 14, 1, @on
exec sp_trace_setevent @TraceID, 14, 9, @on
exec sp_trace_setevent @TraceID, 14, 2, @on
exec sp_trace_setevent @TraceID, 14, 6, @on
exec sp_trace_setevent @TraceID, 14, 10, @on
exec sp_trace_setevent @TraceID, 14, 14, @on
exec sp_trace_setevent @TraceID, 14, 11, @on
exec sp_trace_setevent @TraceID, 14, 12, @on
exec sp_trace_setevent @TraceID, 15, 15, @on
exec sp_trace_setevent @TraceID, 15, 16, @on
exec sp_trace_setevent @TraceID, 15, 9, @on
exec sp_trace_setevent @TraceID, 15, 13, @on
exec sp_trace_setevent @TraceID, 15, 17, @on
exec sp_trace_setevent @TraceID, 15, 6, @on
exec sp_trace_setevent @TraceID, 15, 10, @on
exec sp_trace_setevent @TraceID, 15, 14, @on
exec sp_trace_setevent @TraceID, 15, 18, @on
exec sp_trace_setevent @TraceID, 15, 11, @on
exec sp_trace_setevent @TraceID, 15, 12, @on


-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

exec sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Server Profiler - 6a02a8ea-d71c-4318-87a0-5289ec84fa1f'
-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1

-- display trace id for future references
select TraceID=@TraceID
goto finish

error:
select ErrorCode=@rc

finish:
go





After running the script the following result is returned:



[Screenshot: Result of the trace script]



 



The output is the ID of the server side trace that has just been created. You need the ID of the trace to stop, start and remove it. When you run the script it automatically starts the trace; you can prove this by running:



select * from sys.traces



[Screenshot: Server side trace results]



As you can see, we now have two traces running. A status of 1 means the trace is running; a status of 0 means the trace is stopped.



You can stop and start the trace by running the sp_trace_setstatus stored procedure, passing the trace ID and required status as parameters. For the trace we have just created:



exec sp_trace_setstatus 2, 0 -- This stops the trace running
exec sp_trace_setstatus 2, 1 --This starts the trace running
exec sp_trace_setstatus 2, 2 -- This closes the trace and deletes the definition from the server
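Once the trace has been stopped and closed, you can read the captured file back into a result set with fn_trace_gettable. The path below is hypothetical; use whatever file name you gave sp_trace_create:

SELECT TextData, EventClass, StartTime
FROM fn_trace_gettable(N'C:\Traces\BackupAudit.trc', DEFAULT)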



Thursday, 11 March 2010

Setup Database Snapshot against a Mirrored Database (Part 2)

This is the second post in my series about database mirroring; part one is titled Configuring Database Mirroring in SQL Server. I believe that high availability is the primary reason people mirror their important production databases, but depending on your situation there can be other benefits of mirroring that can help the performance of your production database. With database mirroring in SQL Server you have two exact copies of your production database on two separate instances. Wouldn't it be nice to have access to that mirrored database as a read-only database supporting all your reporting requirements? This post will show how to set that up.
So we have our mirrored database all set up and it sits in a RECOVERING state; how can we use it as a reporting solution? The answer lies in another feature new to SQL Server 2005: database snapshots.
You can't run queries against your mirrored database because it is in a RECOVERING state, but you can create a snapshot of the mirrored database and run queries against that. Here's how:
You need the logical file names of the data files that make up the mirrored database; these will be the same for both databases. To get them I run the following script against the principal database:
use MirrorDBtest
Go
select * from sysfiles
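As an aside, sysfiles is one of the backward-compatibility views I mentioned in an earlier post; a rough modern equivalent for getting the logical and physical file names would be:

SELECT name, physical_name
FROM sys.database_files;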





The result of this query is:


[Screenshot: sysfiles output]


So, with the logical name of my data file, I run the following CREATE DATABASE statement to create a snapshot on the mirror database:


CREATE DATABASE MirrorDBTestSS
ON (NAME = 'MirrorDBtest', FILENAME = 'C:\DATA\SQLServer2008\MirrorDBTest_mirror.snp')
AS SNAPSHOT OF MirrorDBtest

In the above script I am specifying the location of the snapshot file on the secondary instance.



Once this script has been run you should have a fully fledged database snapshot created against the mirrored database.


You can then run queries against the snapshot:


[Screenshot: Querying the snapshot]
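Bear in mind that a snapshot is static: it shows the mirror as it was at creation time. A rough sketch of a refresh, which you could schedule, is simply to drop and recreate it, reusing the file name from above:

DROP DATABASE MirrorDBTestSS
GO
CREATE DATABASE MirrorDBTestSS
ON (NAME = 'MirrorDBtest', FILENAME = 'C:\DATA\SQLServer2008\MirrorDBTest_mirror.snp')
AS SNAPSHOT OF MirrorDBtest
GO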





Related Posts:


Part 1 - Configuring Database Mirroring in SQL Server.

Tuesday, 9 March 2010

TSQL Tuesday #4: IO

“IO IO, it’s off to disk we go!” was the line in Mike Walsh’s blog post that introduced this TSQL Tuesday, and I thought this would be an ideal opportunity to get my feet wet with a TSQL Tuesday contribution of my own, my very first. This got me thinking: IO is a vast and varied subject to write about, so I started looking back through my career at some of the IO related issues I have come across, and I realised I had a couple to choose from.

My most recent issue was a combination of IO, virtualisation and methods of working. The problem was caused by a snapshot taken by a sysadmin against a VM housing a SQL Server that I support; that snapshot was not removed when it should have been. While my SQL disk backups (full and transaction log) were being written to disk, then tape, then deleted some days later, the snapshot was recording all these disk changes and filled up all its available space. This in turn caused the server to fall over. I will state here that this was a test server and no major business issues came about as a result, so you could argue that it was not really a major troubleshooting crisis in my career, but it did identify a couple of things I needed to change in my daily checks routine, and a few additional skills I needed to learn as we moved some of our production instances to the virtual world. So this ‘little’ problem potentially prevented a much nastier one down the road.

Normal disk space checks need to change for SQL Servers running in virtual land. I have scripts, jobs, alerts and manual checks that monitor the size of my databases, the drives they reside on, and how much free space each drive on my SQL Servers has remaining. These checks did not show anything to be concerned about: the snapshot on my server was not using any of that space, but it had run out of the space allocated to it. So I added a simple manual check to my daily check list: check for snapshots, and their size, on the VM.

It also identified a more general area for me to improve: in order to check the physical machines hosting the VMs, I needed to become more familiar with VM tools (VMware’s Infrastructure Client in my case), as they will be necessary for troubleshooting VMs down the road.

Sunday, 7 March 2010

Configuring Database Mirroring in SQL Server

I plan on writing a series of posts on database mirroring; this is the first. Database mirroring is a feature introduced in SQL Server 2005, and I have been configuring and testing it lately, so I thought I would write a short post to document what I have learned.
This post will walk through configuring mirroring. I will be using my laptop and a couple of SQL Server 2008 instances for this example, so this is nothing more than a demo, but the process should be the same for two separate SQL Server instances on two separate servers.
The key to configuring database mirroring, as with most things, is in the preparation.

Pre – Preparation Steps

If you are following along and you already have a database you want to mirror, you don't need this step; I have included this section for completeness, as I need a database to mirror. On my primary server I run the following script, safe in the knowledge that my model database is in full recovery mode.
/*CREATE A DATABASE TO TEST MIRRORING on Primary Instance */


USE [master]
GO

CREATE DATABASE [Mirror_Test] 



OK, so that's the pre-preparation steps complete: I have a database, set up in full recovery mode, that I want to mirror.


Preparation Steps



In order to prepare for mirroring you need, at a minimum, a full backup of the principal database and a transaction log backup of the principal database; both of these need to be restored to create the mirrored database on the secondary instance.


So we create a full backup, followed by a transaction log backup, on the primary (soon to be principal) database:


/*Take a full backup of Primary database */


BACKUP DATABASE [Mirror_Test] 
TO  DISK = N'C:\Backup\Mirror_Test.BAK' 
WITH NOFORMAT, NOINIT,  NAME = N'Mirror_Test-Full Database Backup', 
SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO



We then need to take a transaction log backup of the principal database:





/*Take a T-log backup of Mirror_Test*/

BACKUP LOG [Mirror_Test] 
TO  DISK = N'C:\Backup\Mirror_Test.TRN' 
WITH NOFORMAT, NOINIT,  NAME = N'Mirror_Test-Transaction Log  Backup', 
SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO



These files need to be copied to the secondary server to restore and create the mirrored database. My other instance is on the same physical server as the instance housing the principal database, so I don't need to copy any files. Assuming you have the full backup and transaction log file available, restore the full backup to create a database with the same name. I need to specify the WITH MOVE option because my databases are on the same physical server, and hence the secondary (mirrored) database files need to be in a different location, so I include that in the restore statement:


RESTORE DATABASE [Mirror_Test] 
FROM  DISK = N'C:\Backup\Mirror_Test.BAK' 
WITH  FILE = 1, 
MOVE N'Mirror_Test' TO N'C:\DATA\SQLMIRROR\Mirror_Test.mdf',  
MOVE N'Mirror_Test_log' TO N'C:\DATA\SQLMIRROR\Mirror_Test_1.LDF',  
NORECOVERY,  NOUNLOAD,  STATS = 10
GO


My next step is to restore the transaction log backup:


/*Restore the log backup taken, again leaving the DB in a recovering state*/

RESTORE LOG [Mirror_Test] 
FROM  DISK = N'C:\Backup\Mirror_Test.TRN'
WITH  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 10
GO



As you can hopefully see from the scripts, I left the mirrored database in a restoring state by specifying NORECOVERY in my restore scripts; this is needed to allow me to set up mirroring. After the restores we have a copy of the database on both instances, with the secondary (mirrored) database in a recovering state, and we can now configure mirroring.


Configure Mirroring



On the instance hosting the primary database, right-click on the database and select <Tasks><Mirror>.


[Screenshot: Configure mirroring]


This takes us to the Database Properties dialog box and the <Mirroring> tab.



Click on the <Configure Security> button.


This takes us to the <Configure database mirroring security wizard>


click <Next>


If you want to use a witness server, this is where you specify which instance will be the witness. I am not using a witness for this demo, but if you were to use mirroring as a high availability option with automatic failover, you would need a witness server.


Click <Next>


On the next screen you specify the principal server. I’m taking the default values here.


[Screenshot: Principal server instance]


Once you're done here, click <Next>.


On the Mirror Server Instance screen, the first thing you need to do is connect to the mirror instance. To quote from the wizard:



“You must connect to this server instance using a login or account with the necessary permissions to save the security configuration before continuing this wizard.”



Ensure you have the correct instance in the <Mirror Server Instance> box, click <Connect> and connect to the instance with the necessary credentials. Again I have taken the default values here for port number and endpoint name. Because both instances are on the same physical server in my case, I have to specify different port numbers; if they were on different physical servers you could use the same port number.


[Screenshot: Mirror server instance]


On the next screen you specify the service accounts for the principal and mirror instances. Click <Next>.


Click <Finish> on the <Complete Wizard> screen.


The database mirroring endpoints then get created:


[Screenshot: Creating the endpoints]





When that is done click <Close>


You will then be prompted to <Start Mirroring> or <Do Not Start Mirroring>. If you want to start mirroring, select it.
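For reference, a rough T-SQL equivalent of what the wizard sets up looks like the following. The endpoint name, ports and server name are hypothetical, and you point the mirror at the principal before doing the reverse:

-- On each instance, create a mirroring endpoint (different ports are
-- only needed because my two instances share one physical server)
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER)
GO
-- On the mirror instance, point at the principal's endpoint first:
ALTER DATABASE Mirror_Test SET PARTNER = N'TCP://MyServer:5022'
GO
-- Then, on the principal, point at the mirror's endpoint:
ALTER DATABASE Mirror_Test SET PARTNER = N'TCP://MyServer:5023'
GO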


Notes



It's probably a good idea to run the mirroring configuration outside of core business hours, because the overhead of configuring mirroring can have an adverse effect on database performance.


When implementing this on my laptop with two SQL Server 2008 instances I had no end of fun with my firewall blocking the ports; a small change to the firewall configuration soon sorted that out, though.


Related Posts



Database Mirroring – Part 2

Thursday, 4 March 2010

Time Off, Down Time, Round Up

I have taken some time off this week and have been to the beach with my two golden retriever puppies. One of my dogs, Seth, hates the car, so we were a little concerned that he wouldn't like it; after a couple of puppy travel sickness tablets, though, he calmed right down and chilled out for the hour or so car ride. So the four of us made our way to the Gower in South Wales. I think we have the second largest tidal range in the world, and the tide was out, so we had miles and miles of wet sand to play fetch and chase the seagulls. I went to university in Swansea, leaving in 2002, and I never really spent a lot of time on the Gower, but this week we have been there a few times and my dogs think it's the best place on the planet.

I do have some great photos, but I haven't got them off the camera yet; when I do I will post them here.

Anyway, I have been scheduling blog posts for a while now and writing them using Windows Live Writer; it's a neat tool and I would recommend it to anyone thinking of blogging. I had a few posts lined up for this week, so I haven't had to write about SQL Server while I was off, and yet my blog gets updated regularly. My travels got me thinking about my blog, though (downtime usually does that for me), and I have a few posts lined up for the near future, some of which include:

  • Setting up a mirrored database
  • ‘Sargable’ WHERE clauses
  • Minimising the downtime when migrating between SQL Server versions.

So I have a few more subjects that I want to write about, but I have no more posts scheduled. If I'm going to stick to my schedule of three posts a week, I need to write one before Sunday, which should be a more than achievable target. When I started the blog I didn't know what frequency to write at, but three times a week is a pace I'm just about keeping up with. I'm syndicated on SSC, and since that happened I know that a lot more people are reading what I'm writing, which gives me a lot more motivation.

Anyway, my posts thus far have been about what I have done and learned in the week gone by, usually about SQL Server. Seeing that this week I have learned that my puppies like the sea, chasing seagulls, and sausage and chips, I thought I should write about it; I hope you don't mind. My next post will be a more technical one.

Tuesday, 2 March 2010

SQL Server - MVP Deep Dives

I got my copy of SQL Server MVP Deep Dives earlier this month and have been making good progress in reading some of the chapters. It's not a start-to-finish book; by that I mean you can read the chapters in any order you like. Me being me, I started somewhere toward the back. These are some of the chapters I have read thus far:

47 - How to use Dynamic Management Views
Glenn Berry

43 - How to optimize tempdb performance
Brad M. McGehee

39 - Running SQL Server on Hyper-V
John Paul Cook

45 - Correlating SQL Profiler with PerfMon
Kevin Kline

I intend to read this chapter soon, as I would like to learn more about PowerShell:

26 - PowerShell in SQL Server
Richard Siddaway

The authors' profits from this book go to a charity called War Child, which is a very worthwhile cause. The book has 53 authors and 6 technical editors, all with MVP status, including names like Paul Randal, Adam Machanic, Kimberly Tripp, Gail Shaw, Paul Nielsen and Itzik Ben-Gan, to name just a few. Basically, some of the most authoritative people in the SQL Server community have come together to produce this work, which in my humble opinion is a must-have for any SQL Server professional.

SQL Server MVP Deep Dives