Friday, 31 December 2010

Happy New Year 2011

Well, it’s Friday again and another big day.

Hopefully you all had a great Christmas, and as we see out 2010 and start 2011 I’d like to wish everyone a happy and prosperous New Year.


Thursday, 30 December 2010

SELECT * FROM SQLBLOGGERS WHERE LOCATION = 'UK'

I published a post yesterday where I listed my top 5 bloggers from the worldwide SQL community for 2010. The long and short of it: these were the people whose blogs I had read the most throughout the year. There are many other great blogs and bloggers out there that I read regularly. When that post was published and fed out to Twitter, it prompted a few people whose blogs I do follow and value to give me a bit of stick, pretending to be all hurt and upset. I won’t name any names (John Sansom Blog|@johnsansom) but it did get me thinking.

There are many great bloggers based here in the UK, several of whom were recently nominated in the sqlserverpedia.com annual blog awards for 2010 (a couple of us won in our categories too). I realised that although I read many of these UK based blogs, and several were considered for the top 5 bloggers 2010 post, none of them made my list! So having thought about it I decided to make my own UK specific list. I was going to go for the top 5 approach, but after a little more thought I decided to compose a list of all the UK based SQL blogs that I follow. This is my list to date (in no particular order):

Updated 2 Jan 2011

Undoubtedly I will have missed someone off that I should have included, and for that I am truly sorry. That little oversight on my part is easily rectified: simply leave a comment on this post with the blogger’s name, blog URL and Twitter account (if one exists). I will then update this list and add you to my feed reader.

Go on then, who have I missed?

Wednesday, 29 December 2010

SELECT TOP 5 * FROM BLOGGERS

Hopefully my title gives away what this post is about. This is my list of the top five people, or more precisely their blogs, that I have made the most use of in the year gone by. I am not going to put them in any specific order; they all add to my knowledge in different ways:

It is also worth noting that besides these guys there are many other great blogs and bloggers out there that I read and fully appreciate. I needed a cut-off that wasn’t too deep, but if you do blog and write, keep going, keep writing and keep blogging.

Who would be in your top five, and would it differ from mine? Let me know in the comments below.

Tuesday, 28 December 2010

Test Post

I have been tinkering with the settings for my blog today, so please ignore this post. If I remember, I will delete it when I’m done.

Friday, 24 December 2010

Merry Christmas

This is just a short post for a Friday, and not just any old Friday either: Santa is on his way :-)


I would just like to wish everybody a very Merry Christmas.

Thursday, 23 December 2010

SQL Bits 8

It seems the dates for the next SQLBits have been released: it will take place between the 7th and 9th of April 2011.


Location: The Grand Hotel, Brighton

Monday, 20 December 2010

Turn Off AutoShrink!

This is kind of a follow up to a post I published last week regarding autogrowth. A forum poster said they had AutoGrow turned on and were using it as a way of managing the growth of their databases – No! That post, titled “Should I have AutoGrowth enabled on my SQL Server Database?”, addresses why you should have autogrow enabled but only use it as a fail-safe. Instead, monitor and grow your databases manually.

The same forum poster also said that, just in case the database grew too much and needed space released back to the Operating System, he had enabled AutoShrink. No!

Why?

This is the short answer: shrinking databases is generally a bad thing to do. It will cause fragmentation in your indexes, which in turn can be bad for performance. Even though you may *occasionally* need to shrink your database, shrinking should not be part of your regular database maintenance routines; if shrink is used at all, it should be used very, very infrequently. Letting SQL Server shrink your database for you automatically, when *it* thinks it needs to be done and with no regard for the time of day, is much, much worse for you as a DBA. You may find that this process kicks in in the middle of the day when your system is at its busiest, taking up valuable resources as well as completely fragmenting your indexes.
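If you find auto-shrink switched on, turning it off is a one-liner. Here is a minimal sketch, with a hypothetical database name (the sys.databases check needs SQL Server 2005 or later):

--Check which databases have auto-shrink enabled
SELECT name, is_auto_shrink_on FROM sys.databases

--Turn it off; the database name here is hypothetical
ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF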

Additional Reading

Paul Randal (@PaulRandal) has a blog post discussing this very topic: http://www.sqlskills.com/blogs/paul/post/Auto-shrink-e28093-turn-it-OFF!.aspx. He has a great example that demonstrates how shrink can affect index fragmentation levels, so if you are looking for proof head over to Paul’s post above.

Saturday, 18 December 2010

SQL Server 2005 SP4 Released

Steve Jones mentioned recently that SP4 for SQL Server 2005 was on its way, because Microsoft had closed his Connect item.

Well, it looks like it was released yesterday; you can download it here:

 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b953e84f-9307-405e-bceb-47bd345baece&displaylang=en

Let it Snow, Let it Snow, Let it Snow!

I don’t usually post on my blog at the weekend, but this weekend, the weekend before Christmas, is a little different. Where I live in South Wales in the UK we don’t usually get a lot of snow, but since Thursday this week we have had the best part of a foot of snowfall, and where my parents live, only about fifteen miles away, they have had more like a foot and a half of the white stuff. That is a lot of snow for us. Yesterday, Friday, everything pretty much ground to a halt. I heard from several friends who had to cancel their office Christmas parties as a result of the weather. But it does make you feel like it’s Christmas, and I like Christmas.

My two golden retrievers are particularly fond of the snow; I guess their double fur coats come in pretty useful this time of year. Anyway, I just thought I’d post a couple of pictures:

[Photos: Jake bounding through the snow; the dogs in the snow]

Friday, 17 December 2010

INSERT INTO Reading2011 VALUES…

Last Friday I posted a short note on what books I have read this past year, so this Friday, keeping up the same theme and given that we are quickly approaching the end of the year, I thought I would write a post on what books I want to read next year:

Wednesday, 15 December 2010

Should I have AutoGrowth enabled on my SQL Server Database?

I was reading a few forum posts yesterday where people were complaining that the transaction log had grown during index maintenance, and they were asking:
  1. Should they have autogrowth enabled?
  2. Should they restrict the size of the file?
I think autogrowth should be enabled on all databases, but it should only be used as a fall back, or fail-safe. The DBA should be sizing the database appropriately right up front, then monitoring the database size and space used to decide if and when to grow it again manually, always increasing the size by enough for reasonable medium-term use. Autogrowth is fine as a fail-safe in case you miss something, but if you rely on it to size your database you will find it kicks in at the most inappropriate times, over which you have no control, and that can affect performance.

When setting up autogrowth I like to set the files to grow by fixed amounts in megabytes, and not let the files grow by a percentage of their current size. This is especially important for the log, because as the size of the log increases, the number of megabytes it grows by increases too: 10 percent of 100 is not the same as 10 percent of 200, and the size of the log’s growth increments can affect the size and number of Virtual Log Files (VLFs) inside the transaction log. The link takes you to the blog of SQL Server MVP Kimberly Tripp (Blog).

So to summarise: I like to enable autogrowth as a fail-safe, set the growth to fixed amounts of megabytes, monitor database file sizes, set the file sizes to appropriate values, and grow manually when needed.
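To put that into practice, here is a minimal sketch of setting fixed-megabyte growth increments; the database name, logical file names and sizes are hypothetical, not recommendations:

--Grow the data file in fixed 256MB steps and the log in fixed 128MB steps
ALTER DATABASE [MyDB] MODIFY FILE (NAME = 'MyDB_Data', FILEGROWTH = 256MB)
ALTER DATABASE [MyDB] MODIFY FILE (NAME = 'MyDB_Log', FILEGROWTH = 128MB)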

Regarding the ‘restrict the size’ of a database file option: I generally leave that set to unrestricted, because I monitor and have alerts set up on free disk space, so I’m able to work on storage before it becomes an issue, and I don’t want my database falling over because I set a limit on the size of the file. I’m sure other people have different views on this, and I would be more than interested to know if people do things differently and why, in the comments below.

Tuesday, 14 December 2010

T-SQL Tuesday #13 - What the Business Says Is Not What the Business Wants

So this month’s blog party is being held by none other than Steve Jones (blog|@Way0utwest) of SQL Server Central fame. This month’s topic is all about the business, your customers, clients etc. asking for something but not really knowing what they are asking for. Or, in my interpretation, the business, client or customer telling you the technology or methodology or tool to use, instead of what they really should be telling you… what the business wants. Steve’s example in his post sums it up perfectly.

I have many examples to draw on here. I guess the one that springs to mind the most was a client that told me they wanted to ‘Shrink the Database’ as part of their maintenance routines. When I asked why (that’s self-defeating), I was told that the databases were growing uncontrollably and this needed to be included in order to keep the systems up and running and to avoid eating up all available disk space. Smelling a rat, I decided to do a little investigation of my own, and I found that they had several databases that were in full recovery mode BUT they were not taking regular transaction log backups. When I mentioned this to my clients and explained the difference between simple and full recovery mode and what it meant in terms of point-in-time recovery, we decided that maybe some regular transaction log backups were in fact needed. We then re-sized the log and freed up a lot of space, and my client was very happy.
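For what it’s worth, the re-size itself went along these lines. This is only a sketch, with hypothetical names, paths and sizes: take a log backup so the active portion of the log clears, shrink the bloated file right down, then grow it back out to a sensible fixed size in one controlled step:

--Clear the active log, shrink the oversized file, then re-grow it
BACKUP LOG [ClientDB] TO DISK = 'D:\Backups\ClientDB.trn'
DBCC SHRINKFILE (ClientDB_Log, 1024) --target size in MB
ALTER DATABASE [ClientDB] MODIFY FILE (NAME = 'ClientDB_Log', SIZE = 8192MB)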

Anyway, that’s my post for the last T-SQL Tuesday of 2010. It is a very interesting, if possibly controversial, topic for some, and I’m sure that many in the community will have examples of their own; I look forward to reading them and seeing how others have overcome these issues.

Friday, 10 December 2010

SELECT TOP 3 SQLBOOKS FROM 2010

This is just a short post for a Friday. This is my list of the top 3 SQL Server books from the year[ish]:

Granted, some of these books were published earlier than 2010, but not by much, and they have been what I have been reading this year.

If you have a list of good books, let us all know about them in the comments.

Wednesday, 8 December 2010

British Heart Foundation – Christmas Event

CHRISTMAS AT THE MUSICALS

Originally posted on Senghenydd Cricket Club website.

Join The Romilly Singers as they entertain you through a selection of musicals through the ages on Sunday December 12th at St Peter's Hall, Roath, 6.30pm. Tickets are £20 and include a two course dinner plus entertainment. To purchase tickets contact St Peter's Hall on 029 20494425 or Sharon Owen 029 20382408 or by email owensh@bhf.org.uk.

Tuesday, 7 December 2010

Blog – Domain Changes Update

As I mentioned in an earlier post, I have recently moved my blog to a new domain name. This is something I had been thinking about for a while, but in truth, and with hindsight, I did not manage the transition all that well. I let the old domain and hosting run down to expiration before getting my new one up and running, which meant that I lost control of the old domain and wasn’t able to use some of Google’s ‘Change of Address’ features to help the transition from the old domain to the new name. BUT we live and learn, and not all is lost. My new domain has been picked up in the Google crawl already, and although it is very likely to take six months to recover in terms of search engine rankings and SEO traffic, I feel the change will benefit my brand in the long run. So although it is not quite like starting again, it does feel like it has been a step backwards; but sometimes a step backwards can help you go forwards. This is an opportunity for me to learn a few things and take my blog and my brand forward.

Over the last couple of years in particular I have realised the power of blogging and how it adds to your brand. I work as a SQL Server contractor in the UK and as such I run and own a company. Contracting generally consists of short-term assignments, and the contracts that my company secures generally involve working with clients’ SQL Server infrastructure. Having a blog can definitely help you secure work and contracts. I will post more on how this has helped me later, probably next year, but my company’s business model is based on selling services, or more to the point selling me. So I thought a domain name that included my product name, and also allowed me some freedom in topic choices, would be beneficial, which is why I changed my domain to my name - gethynellis.com. Because I can see the benefits of my blog and the value it adds to my brand and my business, I follow a few ‘how to blog successfully’ blogs, including http://www.problogger.net, to try and keep abreast of good techniques and practices.


I’m not going to go into much detail about why you should blog or how you should go about it if you are a SQL Server professional. I will say that blogs can work effectively for permanent employees too: in effect you are improving your brand and employability with potential employers, not only demonstrating and documenting your technical skills but also proving your communication skills, which are essential for any DBA. I may post on this myself later, but for now I’m not going to re-invent the wheel; if you are interested in blogging and you are a SQL Server professional, go and check out Brent Ozar’s posts on how to start blogging.

It was Brent’s posts, along with the fact that I read his blog, that led me to buy ProBlogger: Secrets for Blogging Your Way to a Six-Figure Income. There will be a review to follow, but I picked this up over the weekend, and having read the first few chapters it seems like a great book for any blogger who is serious about blogging.

This post will be my 100th for the year, which was one of my targets at the start of the year. I’d like to say that my first full year of blogging has been relatively successful; hopefully I will be able to continue that success into next year with the new domain.

Monday, 6 December 2010

My Weekend

We are now well into December and Christmas is fast approaching. I have most, but not all, of my shopping done, so there are a few more things to get before the party season starts in earnest, which for me is this Friday :-) It was a busy week for me last week and I only managed to publish one post. Last week and the weekend just gone have been very busy: I was on driving duty Friday night taking Mrs Ellis to and from her Christmas party, and Saturday was the day to decorate the house ready for Christmas, which is one of Mrs Ellis’ favourite days of the year. The dogs needed lots of long W.A.L.K.I.E.S… they can tell when I type it these days, never mind say it, so shhh! Don’t say it again. This is Seth posing for a picture:

[Photo: Seth posing for a picture]

Sunday was a little more restful, and in between all of that I tried to take in as much of the 2nd #ashes Test match as possible, as well as reading my new copy of ProBlogger: Secrets for Blogging Your Way to a Six-Figure Income. I will post more on this tomorrow, but for now that’s post 99 for the year done and published.

Sunday, 5 December 2010

Digg

I’m currently doing some experiments with my blog and trying out digg.com. In order to do that I need to paste this key <!--261640fe8cc94a1194161809571dac4c--> into a post to verify that it is, in fact, my blog.

Friday, 3 December 2010

MCM – Useful Resources

This is just a short post for a Friday. There has been a lot written over the past few weeks, since the PASS Summit, about changes to the Microsoft Certified Master (MCM) programme. As part of that announcement several very useful resources were made available: a series of MCM readiness videos, put together by the good people over at sqlskills.com, which can be found here:

http://technet.microsoft.com/en-us/sqlserver/ff977043.aspx

There is also a very good reading list which can be found here:

http://www.microsoft.com/learning/en/us/certification/master-sql-path.aspx#mcm-microsoft-sql-server-2008-readiness

This is a post for my own future reference, but I would recommend that if you are seriously considering taking the re-vamped MCM, or even if you are not but want a more detailed knowledge of SQL Server, you check out these resources.

Sunday, 28 November 2010

New Domain - My blog has a new domain name gethynellis.com

About a year ago I started to write a blog; prior to that, for about three years, I had been writing SQL Server themed tips and articles on my website. When I started writing the blog I decided to build it into my existing domain name, and that, I think, was a little mistake on my part. It’s that time of year again when it was time to renew my domain name. I hadn’t written on the actual site for well over a year, readership of the site itself had fallen through the floor as I was no longer paying it any attention, the hosting costs were creeping up with that particular provider, and I found that as I’ve developed and increased my contribution to the SQL Server community my name has become more and more important. The existing domain name, I felt, did not really represent me: it was long, possibly too long, and difficult to remember, and if you came across it in a search engine you wouldn’t necessarily know that it was my site without clicking the link and visiting it.

So after some deliberation and thought I have decided to go for a new look. I revamped the blog site a month or so back, and I think it now looks clean and tidy and is pretty easy to navigate. Yesterday, which incidentally coincided with my old domain expiring, I started up gethynellis.com and pointed it at my old blog. Nothing has been lost and all my old blog posts have been kept, so all is good there; it is now obvious from the domain name whose site you are on, and that will go that little bit further in promoting my brand – me! – to the wider world.

A key factor in my decision to change, though, was the fact that a lot of people read what I write on other sites, like SQLServerCentral.com and SQLServerPedia.com. When I started the blog a year ago I followed some of Brent Ozar’s (Blog | Twitter) advice, used FeedBurner for my feeds, and got myself syndicated on a couple of the large SQL Server sites. This allowed me to change the base domain for my feed without needing any of my readers to change their subscriptions. The syndication on SQLServerPedia uses this feed. I’m not sure if SQLServerCentral does, though; I will find out if this post makes it onto their site. If it doesn’t, I’ll have to email Steve and ask him to update it.

The Drawbacks and mistakes that I made

Changing the domain name of my blog was, technically, an easy thing to do; in reality though this approach has many drawbacks. It is pretty much like starting again. Even though all my content is still out there on the web, any external links to it will be pointing to the wrong place, which will not help my new domain in the search engine rankings. More importantly, if you link to my content on the old domain that link will no longer work, and your site users will get taken to a horrible error message that they will no doubt find frustrating, for which I am really very sorry. If you have links to any of my blog posts, can I please ask that you update them with the new URL? It’ll help your readers and of course me :-)

I am in the process of updating my old posts which link to other pages on my blog, so I maintain a nice set of internal links that I’m told search engines like. This, after a year’s worth of posts totalling just over 100, is quite a lot of work.

As I type this, Google and Bing and the other search engines still don’t know about my new domain; hopefully it won’t be long before they pick up the new content. I did look into Google’s “Change of Address” feature in Webmaster Tools, but unfortunately I no longer have access to the old domain, so 301 redirects are out of the question (I’m not sure the old hosting company supported them anyway), and although I verified the old domain in Webmaster Tools years ago, Google tried to re-verify it when I tried to change the address and unfortunately that failed too. The subdomain my blog was sitting on couldn’t be redirected because it’s not a root-level domain, so that tool, at least for now unless someone advises me otherwise, seems unlikely to help. I have followed the other advice Google offers when changing a domain name: I have added my new domain to Webmaster Tools, created a site map, and updated internal and external links to the site. They also recommend retaining control of the old domain for 6 months after the move; not having planned this domain change that far in advance, and with the old domain now gone, I won’t be able to do this either. So we’ll see how it goes, but I do envisage my visitor numbers falling significantly over the next few months while the search engines sort themselves and my new domain out.

This last year I have had just under 10,000 unique visitors to my site (according to Google Analytics), a figure I’m pretty proud of, although another 6 visitors to take me to a round 10,000 would have been an even better figure to look at! It will be interesting to see how this change impacts that figure over the next few months. Here’s how the unique visitor numbers for the last year look on my blog today:

[Chart: unique visitor numbers over the last year, from Google Analytics]

Lessons Learned:

  • Give yourself plenty of time with the old domain still running; it will allow you a much smoother transition to your new domain name and keep you much more in control. Although I had been thinking about this for a while, I only really looked into the actual logistics when the old domain was about to run out. By then it was too late for me.
  • Pick a good domain for your blog right up front, and save yourself bothering with all this in the first place.

If you have any further advice to offer me (or others) about what I should have done differently, or anything that I can still do to improve the efficiency of the move to the new domain, then please let me know in the comments below.

I will keep you posted in the New Year with an update on how the transition went.

Wednesday, 10 November 2010

SQLServerPedia – Awards 2010

The results of the SQLServerPedia Awards 2010 are out... I was nominated in two categories and I won the "New to SQLServerPedia" award. Being nominated was a nice unexpected surprise, and winning is unbelievable; it actually ties in nicely with my blog’s one hundredth post and is a nice way to finish my first year writing on the blog. The "New to SQLServerPedia" category had some great bloggers nominated, and two of those nominated in that category won awards in some of the other categories: Jason Brimhall (Blog|Twitter) won the "Book Review" category, you can view that post here, and Bob Pusateri (Blog|Twitter) won the "Internals" category, you can read Bob’s post here. So there was lots of excellent competition and I can’t believe I actually won. Congratulations to Bob and Jason and all the contestants on their success. I’m just sorry I couldn’t make the trip to Seattle to pick up the award in person!
I'd like to take this opportunity to thank everyone that voted for me.
I have got my Winner’s Badge:
[Image: SQLServerPedia Awards winner’s badge]
Hopefully my statue/trophy will be winging its way to me in due course (Pics to follow).
You can check out all the results here: http://pulse.sqlserverpedia.com/awards/

Friday, 5 November 2010

Post 100 – A Year Gone (not quite)!

I wasn’t going to make a special effort for the one hundredth post on my blog; I was intending to let it slip by without much of a mention… but I couldn’t help myself. I have maintained a SQL themed website since 2007 without much luck in terms of hits and visitors. When I started out in 2007 I bought some hosting that included a content management system, but that was clunky and at best difficult to use. In the three years of it running and writing for that site I probably churned out about 50 articles/posts… tops. Late last year (2009), after following along and reading some of the well known bloggers out there like Steve Jones (Blog|Twitter) and Brent Ozar (Blog|Twitter), and just before attending the SQLBits conference at the Celtic Manor (home of the 2010 Ryder Cup), I decided to start blogging. My reason being: I was writing for my website, and now on my blog, mainly for my own benefit. I find writing helps me learn and remember technology much better than reading alone; it forces you to think about a subject, and to test and experiment and use the technology so you know what you are writing is accurate… in short it makes you do the work. You learn from the work you do; writing makes you understand and remember more clearly than reading and practising alone. You always have documentation on hand, that you wrote, for future reference. It also stands as a good example of what you know and what technical skills you have. I saw people like Steve and Brent churn out more posts in a week on their respective blogs than I would be able to do in a month on my website. So I started a blog, incorporated it into the website (they actually shared a domain name), and I started writing on the blog. I have been doing it for a year and I have hit my one hundredth post. I feel like I have learnt a lot in the last year!
But it doesn’t stop here. My blog has had some minor success: when I started it, visitor numbers were, obviously, non-existent, but through some help from people like Steve and Brent, who syndicated my blog on their very large and very successful SQL Server websites, and some sound SEO advice from my good friend Mandeep Kaler, my visitor numbers grew. Slowly, but they grew. I then found out about #TSQL2SDAY, the monthly blog party that is the brainchild of Adam Machanic (Blog|Twitter), and I started writing for a few of those; I found these improved my exposure to the other bloggers in SQL land, and again my visitor numbers crept up. I checked Google Analytics today to see for the first time how many unique visitors have been to my blog this past year… I’m about 800 unique visitors shy of 10,000! WOW! I was shocked, and I still am; I didn’t know that so many people had been to my blog, and I hope that those people got something useful from it.
I recently saw an announcement on SQLServerPedia that they were to hold blogger awards to coincide with the PASS Summit. The fact that I’m not attending the PASS Summit, even though I would love to, meant I didn’t pay as much attention to these awards; I saw posts from various well-known bloggers promoting their nominations, but I have been busy and had not had time to catch up on my reading. Then today I saw a post by Grant Fritchey that mentioned he was up for nomination for his post on PowerShell. I liked Grant’s post on PowerShell, so I thought I would pop along to the sqlserverpedia.com website, have a look at the nominations and probably, in all likelihood, vote for Grant. I clicked the link, the page loaded, and the first entry I saw was a post called “Policy Based Management - Disable Autoclose”. I thought to myself, I wrote a post with a similar title not so long ago; I then looked at the bio and the author, and I was shocked to find that it was actually my post up for nomination. I’m sorry, Grant, I didn’t vote for you in that category, although I’m way behind you in the count so I don’t think I will prove much competition for you.
Having shamelessly voted for myself in “Automation and Management”, and after I got over the initial excitement of being nominated, I thought I would check out some of the other categories. I did vote for Grant in the “Series” category, for his “An Introduction to Indexes” series. I then looked at the “New to SqlServerPedia” category, and to my shock and amazement I found that I was nominated there too!!! Again, I’m lagging way behind the rest of the field, not that that bothers me at all; I wasn’t even expecting a nomination, and I have been shocked to have been nominated not once but twice!
That said, people are on the campaign trail and I don’t want to lose by a country mile, so if you would like to vote for me or my post then you can do so:
New to SqlServerPedia
Automation and Management
I’ll be honest: these nominations have been a fantastic way for me to round off my first year blogging properly and my one hundredth post, so stand by for the next one hundred.

Thursday, 4 November 2010

SQL ServerPedia – Blog Awards

I have been catching up on some blog posts and I read a few by well known bloggers like Grant Fritchey and Mike Walsh about voting for their posts in the SQLServerPedia blog awards. So I visited the links they provided to have a look at what was what and who was nominated, and then maybe vote. I was stunned to learn that I have been nominated in two categories!
I have an entry in the Management and Automation category for my Policy Based Management post; I’m in there along with Grant and a few others. I have also been nominated in the New To SQLServerPedia category. WOW! I’m still pretty stunned to be honest; I’m in there with some great writers.
So if you would like to vote, a list of all the categories can be found here. And if you did happen to vote for me, then that would be fabulous.

What’s in a Full Backup?

I got asked a question today that I thought was quite interesting: if my backup starts at 6pm and runs for, say, 30 minutes, what happens to the transactions that occur during the 30 minutes that the backup is running?

The short answer:

All transactions committed during the backup will be included in the full backup; transactions still open when the backup finishes will not be. This is because a full backup also includes the portion of the transaction log covering the backup window, so the restored database can be recovered to a consistent state, with anything still uncommitted at the end of the backup rolled back.

The proof:

Please note that your database will grow to around 784MB in size and the backup created will be roughly the same size, so before running these scripts make sure you have enough space on the box you are testing on.

I will create a dummy database called buptest for the purpose of this test:

USE master
go
CREATE DATABASE buptest
GO
ALTER DATABASE buptest SET RECOVERY SIMPLE





I have put the database into simple recovery mode just to keep the log manageable; I am not worried about point-in-time recovery for this demo.


Next we will create two tables to hold some test data

USE buptest
go

CREATE TABLE t1 (id INT, val CHAR(8000))
go

CREATE TABLE t2 (id INT, val CHAR(8000))
go



Next we will populate our table t1 with some data: 100,000 rows. I’m doing this so the backup will run for long enough to allow me to execute some transactions against the database whilst the backup is running. I don’t know of another way of doing this… if you do, please leave me a note in the comments.

--Insert 100,000 wide rows so the backup takes long enough to work with
DECLARE @i INT
SET @i = 0
WHILE @i < 100000
BEGIN
    INSERT INTO t1
    VALUES (@i, REPLICATE('a',8000))
    SET @i = @i + 1
END



The next step has two parts. I want to kick off a full backup in one query window, and then in a second query window run two transactions that each insert a row into the t2 table. The first transaction will be committed before the backup completes; the second will be committed after the backup has finished. So, if you are following along:

  1. Open two query windows.
  2. Paste the backup code below into the first window and the insert script into the second (you will see that the second COMMIT statement should be commented out).
  3. Switch to the first window and start the backup.
  4. Quickly switch to the second window and fire the script, making sure the second COMMIT statement is commented out.
  5. Wait for the backup to complete, then uncomment the COMMIT TRANSACTION statement and commit the currently open transaction.


The script to run the backup

--We then take a backup of the database
--While this is running fire the second script below
BACKUP DATABASE buptest TO DISK = 'c:\Backup\buptest.bak'



While the backup script above is running (it took about a minute to complete on my laptop), run this:

USE buptest
GO

--Run in a new query window
BEGIN TRANSACTION
INSERT INTO t2
VALUES (1, 'gethyn')
COMMIT TRANSACTION


BEGIN TRANSACTION
INSERT INTO t2
VALUES (2,'Ian')





When the backup completes, run the final COMMIT TRANSACTION in the second window. This ensures that the second transaction was not committed prior to the backup completing.

COMMIT TRANSACTION



If you run

SELECT * FROM t2



We get two rows returned:

id          val
----------- --------
1           gethyn
2           Ian

Next we run a restore of our database from the backup taken:

USE master
go
ALTER DATABASE buptest SET SINGLE_USER WITH ROLLBACK IMMEDIATE

RESTORE DATABASE buptest FROM DISK = 'c:\Backup\buptest.bak' WITH replace



Then re-run the same select

SELECT * FROM t2



We see that only the first inserted record is present:

id          val
----------- --------
1           gethyn

Hence, only committed transactions are included in a full database backup.
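If you followed along, you can clean up the demo database afterwards; delete the backup file from disk separately:

USE master
go
DROP DATABASE buptest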

 

Tuesday, 2 November 2010

TSQL2SDAY #12 - Why are DBA Skills Necessary?

WOW, this is the 12th TSQL2sDay, albeit this month’s is a week early because of the PASS Summit next week; these little blog parties have been running for a year now. This month is being hosted by none other than Paul Randal (Blog|Twitter) and we are going to be discussing: why are DBA skills necessary?
There is a wide range of topics that I could write about here, and Paul has listed several topic areas on his blog; I am going to discuss how business continuity can be affected by a lack of DBA skills.
In his master class in London this past summer (review here), Paul gave a great example from his time with Microsoft, when a bank got their disaster recovery plan wrong and it ended up costing them dearly. I won’t go into much more than that here, but it was a great story demonstrating firstly how important data can be to a company or firm, and also how important it is to protect and look after that data – ultimately, in lots of cases, (backing up the database) = (backing up the business). Business continuity and disaster recovery are among the main responsibilities of the DBA, and it is important to get them right, not only for the safety of your own job but also for the safety of the business.
Thankfully (knock on wood) I can count on one hand the times I have needed to restore a production database because of a disaster or an unexpected problem of some kind. Fortunately, when those situations have come about I have been fully prepared and practised for them, and had relevant, up-to-date documentation on hand that made the process as painless as possible. Database backups are only good if you can restore and recover from them, so I would recommend that anyone with responsibility for backing up a production database has a process in place for periodically testing those backups, so you can have confidence in your restore process should the need ever arise.
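As a minimal first check (and it is only a first check, not a substitute for actually performing a test restore), RESTORE VERIFYONLY will confirm that a backup file is complete and readable; the path here is hypothetical:

--Verify the backup file is readable and complete; this does not restore anything
RESTORE VERIFYONLY FROM DISK = 'D:\Backups\ProductionDB.bak'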
If you are backing up your databases regularly, hopefully you are also storing your backups in a secondary location (or locations): ideally on a different server, and ideally off-site somewhere, so that if your main building becomes unavailable, for whatever reason, you can get your database back online using the backups stored in the separate location.
I see lots of forum posts along the lines of “my database is corrupt and I have no backup, please help” or “my database has accidentally been deleted, can I get it back without a backup?”, which means lots of businesses out there in the big wide world are suffering because they didn’t have the essential skills of a DBA. All of these could have been prevented if someone with DBA skills and knowledge had been involved with the DR/business continuity plan.

Monday, 1 November 2010

Movember – Team IPMO

Some friends of mine decided to support the Movember charity this year. I would try to explain what Movember is, but the charity website does a much better job than anything I could put together:

“Movember challenges men to change their appearance and the face of men’s health by growing a moustache. The rules are simple, start Movember 1st  clean shaven and then grow a moustache for the entire month.  The moustache becomes the ribbon for men’s health, the means by which awareness and funds are raised for cancers that affect men.  Much like the commitment to run or walk for charity, the men of Movember commit to growing a moustache for 30 days.”

The team was put together by Lloyd Bayliss, and as such he is the team captain. The rest of the team is made up of (in no particular order):

me
Chris Unsted
Michael J Elvy
Ian Treasure
Mike Larner – always has The Mo but has kindly shaved it off and is growing it again for this good cause.
Mark Edwards
Ian Smith
Graham Atwick
Christian Dadswell   
Adrian Morris
Neil Carless

We hope to raise as much money as possible for Movember and men’s cancer charities. If you would like to support us and have a laugh at our new facial masterpieces, then follow along; I will try to provide weekly updates and pictures here.

If you would like to donate then you can do so here: http://uk.movember.com/mospace/570732/

Thursday, 28 October 2010

Bulk Insert Permissions

In order to be able to execute bulk operations you need a certain level of privilege, both on the database (table) you are performing the operation against and on the instance you are performing the operation on.
If you don't have the necessary permissions you will get an error similar to the following:
Msg 4834, Level 16, State 4, Line 5
You do not have permission to use the bulk load statement.
This is what one of my users got earlier today when trying to perform a BULK INSERT. To be able to do this successfully they will need the ability (permission) to INSERT records into the table in question.
In our case the following script will do the trick:
USE orderdemo
go
GRANT INSERT ON dbo.orders_BULK  TO bulktest
go


They will also need the Administer Bulk Operations permission, which is a server-level permission; you can grant it by executing the following:


USE master
go 
GRANT ADMINISTER BULK OPERATIONS TO  bulktest



The user will need ALTER TABLE permission in certain circumstances; per BOL, this applies when, for example, the target table has constraints and CHECK_CONSTRAINTS is not specified, or has triggers and FIRE_TRIGGERS is not specified.
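For context, the kind of statement the user was running looked something like this sketch; the file path and the format options are hypothetical:

USE orderdemo
go
--Load a comma-delimited file, skipping a header row; path and options are made up
BULK INSERT dbo.orders_BULK
FROM 'C:\data\orders.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)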


This information is available in BOL, but I came across the need to grant these permissions today and I thought I would share it.


Monday, 18 October 2010

What I Did This Weekend

About two weeks ago my good friend and former colleague Justin Hostettler-Davie - JHD (Blog | Twitter) asked me, via my wife on Facebook, if I fancied climbing Scafell Pike, the highest peak in England, with him on Saturday the 16th of October, and for some reason I said OK, why not.


Justin is training to climb Kilimanjaro in the new year (January 2011) to raise money for Velindre Cancer Centre. You can read more about Justin’s challenge and make a donation, if you want to, here. He has climbed several mountains in preparation for his challenge, including Cadair Idris, Pen Y Fan and Snowdon, and this Saturday we attempted Scafell Pike. He has been accompanied by several friends over the course of his training, but Lance Thomas (No Blog | No Twitter) has been on most of his training climbs with him.

We set off from South Wales Friday lunchtime, slightly late because JHD missed my junction on the motorway, again! He never remembers where I live, so instead of leaving at 1230 we left at 1315. JHD, Lance and myself made our way from South Wales via the M4, M5 and M6, finally arriving in Cumbria and the Lake District. When we left my house the TomTom said it would take us 5.5 hours to reach our destination; in reality, with Friday afternoon traffic, it took us just over 7 hours and we arrived at our digs at around 2030. We had a couple of beers in the bar and a game of pool, and got a relatively early night.

We were up at 0700 Saturday morning, had some breakfast and made our way to the mountain. We started our climb at around 0915. The mountain stands 978 metres (3208 feet) high, and the first 400 metres were very, very steep and very, very difficult. After that though, despite it getting colder and windier, the climb eased out and we ascended to the top in just over 2 hours. Here are some pictures from the summit:

[Photos from the summit: me (left) and JHD (right); views from the top; Lance Thomas at the top]

As you can see there are some amazing views from the summit, and despite how I struggled up the first half of the climb it was well worth the effort.

The terrain on Scafell Pike is very rocky, and after spending about 30 minutes at the summit taking photos and admiring the views we began our descent. The rockiness of the terrain meant that we had to be as careful on the way down as we were on the way up, and it took us about 1.5 hours to get down. We got back to the car with a feeling of satisfaction and achievement at around 1445 and made our 6 hour drive home. I had a great time and intend to go on a few more training climbs before JHD heads off to Africa in the New Year.

There are two reasons for this post: I wanted to tell you about my weekend, and I also wanted to raise awareness for my friend’s cause and challenge. If you would like to give a donation, please use this link: http://www.justgiving.com/JustinTreksKilimanjaro. If you can’t give but would like to show your support, please retweet this post on Twitter or post it on Facebook.


Wednesday, 13 October 2010

TSQL Tuesday Misconceptions in SQL Server

I haven’t written a post for a T-SQL Tuesday for a little while, and I know that I’m late getting this one out too. This month’s party is being hosted by Sankar Reddy. I think, maybe expect, really I know, that this topic has been covered already: I know that Steve Jones has posted on this misconception already today, and Paul Randal (Blog | Twitter) covered it too in his misconceptions series back in April, much better than I have here, so go and check out Paul’s post. The misconception I will look at here is: SQL Server will truncate the transaction log after a full backup when the database is in full recovery mode. After my blog post around recovery models earlier today, I think this may be quite apt for me.
The misconception I am going to talk about here is that SQL Server truncates the log of a database in full recovery mode when a full backup is taken. It doesn’t.
This is one of the biggest problems I come across: a database in full recovery mode, with regular full backups being taken but no transaction log backups. The log grows and grows and eventually fills up all the drive space.
I tested this on a SQL Server 2000 instance earlier today and it holds true in that version too. SQL Server does not truncate the log when a full backup of a database in full recovery mode is taken, although I believe in versions gone by it may have been the case that a full backup truncated the log. If you know the exact version, please leave a comment.
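As an aside, on SQL Server 2005 and later you can ask the server why a log is not being truncated: the log_reuse_wait_desc column in sys.databases shows LOG_BACKUP for a database in full recovery mode that is waiting on a log backup.

--Why is the log not being reused?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases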

The Proof

I will create a database called demodb for the purpose of this test, and then create a table in that database called t1 that will hold some data.
CREATE DATABASE demodb
GO

CREATE  TABLE t1 ( id int, amount int )
go 





Firstly I will take a full database backup; the reason for the backup is to ensure the database really is in Full recovery mode and not in pseudo-simple mode.


BACKUP database demodb to disk = 'C:\BACKUP\demodb.bak'



If I then run DBCC LOGINFO we can see that 1 of the 2 virtual log files (VLFs) is active, with a status of 2. I will cover VLFs and the circular nature of the transaction log in a later post, but for now, DBCC LOGINFO shows some information on VLFs: a status of 2 means the VLF is active; a status of 0 means that it is inactive, or has been truncated and can be used again.


[Screenshot: DBCC LOGINFO output showing 2 VLFs, 1 with status 2]


I will then run some inserts on my table, all of which will be logged in the transaction log:


--Insert 10,000 rows to generate plenty of log records
DECLARE @i INT
SET @i = 0

DECLARE @j INT
SET @j = 100000

WHILE @i < 10000
BEGIN
    INSERT INTO dbo.t1 ( id, amount )
    VALUES ( @i, @j + @i )
    SET @i = @i + 1
END
SELECT GETDATE()



Running the DBCC LOGINFO command again, you can see from the output below that the log has grown and the number of active VLFs has increased: I now have 103 VLFs in my log and 102 are active, which means they cannot be re-used until they are marked as inactive and truncated.


[Screenshot: DBCC LOGINFO output showing 103 VLFs, 102 with status 2]



I will then run another full backup of my database, to prove that a full backup of a database in full recovery mode does not truncate the log.


backup database demodb to disk = 'C:\BACKUP\demodb2.bak'



Running DBCC LOGINFO shows that the log has not been truncated by the full backup; the same number of VLFs are still active with a status of 2.


dbcc loginfo


The following results show that we still have 103 VLFs, 102 of which have a status of 2 – active.


[Screenshot: DBCC LOGINFO output, still 103 VLFs with 102 at status 2]


If we then run a LOG backup followed by the DBCC LOGINFO command, we can see that the log has been truncated: more of the VLFs have been marked as inactive with a status of 0 and can now be reused.






backup log demodb to disk = 'C:\BACKUP\demodblog.trn'
GO
dbcc loginfo


[Screenshot: DBCC LOGINFO output after the log backup, only 1 VLF with status 2]


You will see that I still have 103 VLFs (more on that later) but now only one of those is active with a status of 2. The rest have been truncated and can be reused in the log. I hope that is helpful. I have rushed this post to get it out in time; I didn’t realise it was that time again until a couple of hours ago, so if you find any inaccuracies in the post please let me know in the comments.

Tuesday, 12 October 2010

SQL Server Recovery Models

A SQL Server database can be in one of three recovery models: FULL, BULK_LOGGED and SIMPLE. The recovery model you choose for your databases will affect your restore plans in the event of failure or the need to restore, and it will also influence how you manage the transaction log. So you need to choose carefully.

Full

The full recovery model offers your database the most protection. Full recovery mode means all database operations are fully logged, and it allows point-in-time and point-of-failure restores. It does come with some overhead though: you need to implement both full and log backups as part of your backup plan. You need the transaction log backups for two reasons. 1. Transaction log backups allow you to perform point-of-failure or point-in-time restores. 2. If you don’t take transaction log backups of a database in Full recovery mode, its transaction log will grow uncontrollably and will eventually use up all available drive space. In fact this is one of the biggest problems I have seen in my experience: a database in Full recovery mode, with regular full backups being taken but no regular log backups; people only notice the problem when they run out of disk space. By backing up the log you allow the inactive portions of the log to be truncated and reused.
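A minimal sketch of that pairing, with hypothetical names, paths and frequencies; in practice both would run from scheduled jobs writing uniquely named files:

--Full backup, perhaps nightly
BACKUP DATABASE [MyDB] TO DISK = 'D:\Backups\MyDB_Full.bak'
GO
--Log backups, perhaps every 15 to 30 minutes through the day
BACKUP LOG [MyDB] TO DISK = 'D:\Backups\MyDB_Log.trn'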

Bulk Logged

Bulk Logged recovery mode is not dissimilar to Full recovery mode: transaction log backups still need to be an integral part of your backup plan. The difference between bulk logged and full is that certain bulk operations are only minimally logged when a database is in ‘Bulk Logged’ mode. Operations such as CREATE INDEX, SELECT * INTO, BCP and BULK INSERT are only minimally logged; according to Books Online (BOL), “Minimal logging involves logging only the information that is required to recover the transaction without supporting point-in-time recovery.” This means that if your database crashes during one of these minimally logged operations and you need to restore, you will not be able to restore to a point in time.
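One common pattern, sketched here with a hypothetical database name, is to switch to BULK_LOGGED just for the duration of a large bulk operation, switch back, and take a log backup straight away so that point-in-time recovery is available again from that point on:

ALTER DATABASE [MyDB] SET RECOVERY BULK_LOGGED
GO
--Run the large bulk operation here, e.g. an index rebuild or a BULK INSERT
GO
ALTER DATABASE [MyDB] SET RECOVERY FULL
GO
--Back up the log straight away to restore point-in-time coverage
BACKUP LOG [MyDB] TO DISK = 'D:\Backups\MyDB_PostBulk.trn'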

Simple

The main advantage of the simple recovery model is that you do not need to worry about managing the transaction log. In fact you are not able to take transaction log backups when a database is in Simple mode. The obvious disadvantage, then, is that you are not able to perform a point-in-time restore when a database is in Simple mode, which may not be appropriate for OLTP production systems.

What do I Use?

Well, in true DBA fashion, it depends. I will generalise a little here: usually for production OLTP databases I go with the full recovery model; these are business-critical systems that I need to be able to recover to the point of failure or a point in time. In this mode I will use full backups combined with transaction log backups (and differentials if needed) on a schedule dictated by the business and what it needs in terms of recovery time. I like to copy the log backups to a different server and write the full backup to tape and store it off-site. I do occasionally use the Simple recovery model on certain production systems, mainly for databases where the data remains static or rarely changes; simple is usually enough there and removes the need for me to manage the log.

What to consider when choosing a recovery model

1. How much data loss, if any, is acceptable?

2. How often does the data change?

Setting the Recovery Model

When you create a database, the newly created database will get its recovery model from the Model database.

My very simple CREATE DATABASE statement below creates a new database called RecModTest:

CREATE DATABASE [RecModTest]
GO



The following query selects from sys.databases and returns the recovery model of both the Model database and the newly created RecModTest database:



SELECT  Name AS 'DBname', recovery_model_desc
FROM sys.databases
WHERE name = 'Model'
OR Name = 'RecModTest'

GO



Here are the results:



[Results: Model and RecModTest both show the same recovery model]



As you can see the recovery model is the same for both databases.



If you would like to change the recovery model of your database you can do this from Object Explorer in Management Studio: right-click the database in question, select <Properties>, move to the <Options> tab, choose a different option from the <Recovery Model> drop-down list and select <OK>. You can also run T-SQL:



ALTER DATABASE [dbname] SET RECOVERY BULK_LOGGED



Is your Database Really in Full Recovery?



It is worth noting here that even though sys.databases tells me my database is in full recovery mode, it is not technically in ‘Full’ recovery until a full database backup is taken. You can prove this in several ways:



1. If you try to run a transaction log backup against your database you will get an error saying that no full backup exists:



BACKUP LOG RecModTest TO DISK = 'C:\DATA\Backup\RecModTest.BAK'


You will get an error:



[Error message: BACKUP LOG cannot be performed because there is no current database backup]



2. Another way to see if your database really is in full recovery mode is to use the following query:



SELECT d.Name, d.recovery_model_desc, dr.last_log_backup_lsn 
FROM sys.databases d
INNER JOIN sys.database_recovery_status dr
ON d.database_id = dr.database_id
WHERE d.database_id = db_id('RecModTest')



Now, if this query returns values similar to the following for your database then, even though it says it is in Full recovery, the lack of a value in the last_log_backup_lsn field means that a full backup does not exist and the database is still in auto-truncate (pseudo-simple) mode.



[Results: recovery_model_desc = FULL, last_log_backup_lsn = NULL]



If you run a FULL database backup and run the same query again, you will see you get an LSN in the last_log_backup_lsn field. The full backup kicks off the log sequence chain and SQL Server knows to expect log backups.



BACKUP DATABASE RecModTest TO DISK = 'C:\DATA\Backup\RecModTest.BAK'


[Results: last_log_backup_lsn now contains an LSN]



If you create the database in Simple mode, take a full backup and re-run this query, you will see that the last_log_backup_lsn field is still NULL. This means that SQL Server does not expect log backups (it won’t allow them on a database in Simple mode) and the database is in auto-truncate mode.



USE master 
GO

DROP DATABASE RecModTest



CREATE DATABASE RecModTest
GO
ALTER DATABASE RecModTest SET RECOVERY SIMPLE


Then run a full backup:



BACKUP DATABASE RecModTest TO DISK = 'C:\DATA\Backup\RecModTest.BAK'



Then re-run the above query and we can see that the last_log_backup_lsn field is still NULL; therefore SQL Server is not expecting log backups and the database is in auto-truncate mode.



[Results: last_log_backup_lsn is still NULL]