T-SQL Tuesday #005 – Offloading Reporting (#TSQL2sDay)

Aaron Nelson (Blog|@SQLvariant) is hosting this month's #TSQL2sDay with the theme of reporting. My post looks at some of the things I have done in the past to offload reporting to another server, to reduce the overhead of running reports against production OLTP databases.
A few years ago I implemented an HA solution in SQL Server 2000, using clustering to protect against node failure and log shipping to maintain a second, off-site copy of the database. The organisation wanted to implement a reporting and BI solution so that management information could be made available on demand for selected reports, and also to load a data warehouse. Although we copied the transaction logs over to the standby server, they were only restored once a day, so it was possible for the log shipped database to be a day behind. If we ever needed to recover the database in the event of a failover, at most one day's worth of logs would need to be applied before the database could be used, and that day of latency was acceptable to both the report writers and the load process.
The OLTP solution was a 24-hour service with roughly a 50/50 split between reads and writes, and because of the nature of the organisation's business it was not easy to distinguish peak from off-peak times when scheduling the data mart load or the running of resource-intensive reports. So we went for the option of making the log shipped database read-only, allowing the report writers' read-intensive queries and the load process to read from the read-only log shipped copy, thus not interfering with the clustered production database.
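As a rough sketch of how that restore step can work (database and file names here are made up for illustration), restoring each log backup WITH STANDBY leaves the database in a read-only state between restores, which is what lets reports query it:

```sql
-- Run on the standby server; names are hypothetical.
-- WITH STANDBY keeps the database readable between log restores,
-- so reporting queries can run against the standby copy.
RESTORE LOG SalesDB
    FROM DISK = N'D:\LogShip\SalesDB_20100601.trn'
    WITH STANDBY = N'D:\LogShip\SalesDB_undo.dat';
```

The undo file holds uncommitted transactions so the next log restore can pick up where this one left off.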
This worked well and definitely helped spread the load. In later versions of SQL Server I may well have used Database Mirroring to kill two birds with one stone: I would still have needed to cluster the instance, but database mirroring combined with database snapshots could have maintained a second, up-to-date, off-site copy of the database for reporting purposes, while also allowing automatic failover should the entire cluster become unavailable. Setting that up is probably another post entirely, though.
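For the mirroring approach, the key detail is that a mirror database cannot be queried directly; a database snapshot of the mirror can. A minimal sketch, with hypothetical names:

```sql
-- Hypothetical names; run on the mirror server.
-- The mirror itself is not readable, but a snapshot of it is,
-- which is what makes the mirror usable for reporting.
CREATE DATABASE SalesDB_Snapshot_0900
ON ( NAME = SalesDB_Data,   -- logical name of the source data file
     FILENAME = N'D:\Snapshots\SalesDB_0900.ss' )
AS SNAPSHOT OF SalesDB;
```

Reports then connect to the snapshot rather than the mirror, seeing the data as it stood at the moment the snapshot was created.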
I guess the point I'm trying to make is this: if you have a database that has to satisfy the demands of multiple workloads and functions, there can be real benefit in offloading the read-intensive queries of reports and data loads to a different server.


  1. Hi Gethyn - I am interested in using mirroring and snapshots to offload reporting. The problem I have is that I would need to recreate the snapshot every hour to keep the data relevant. Replication is a no go because our ERP system does not allow it. I am concerned that there is a split second every hour when all reporting is unavailable - especially if the snapshot recreation occurs DURING a report being executed.
    Do you know of any way around this? I have considered using the SQL Native Client and naming a failover server, but I am not sure if this will work mid-transaction. Hopefully there is an easier way.

  2. Hi Rob, thanks for your comment. I'm not sure there is a way around this. Obviously you are pointing the reports at the mirrored database snapshot, and assuming you are dropping and recreating the same snapshot on the mirror, there will be a period, albeit a short one, of unavailability when the snapshot is recreated. You could also look at creating more than one snapshot on the mirror... that way reports will not be disrupted because they will be pointing at an existing, unaffected snapshot, and when the new snapshot is created you can then change the connection string in the report to point at the new snapshot.
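    A rough sketch of that rotation, with hypothetical names: create the new snapshot first, and only drop the old one after reports have been repointed and any in-flight reports have finished.

    ```sql
    -- Hypothetical names. Create the 10:00 snapshot before touching
    -- the 09:00 one, so at least one snapshot is always available.
    CREATE DATABASE SalesDB_Snapshot_1000
    ON ( NAME = SalesDB_Data,
         FILENAME = N'D:\Snapshots\SalesDB_1000.ss' )
    AS SNAPSHOT OF SalesDB;
    GO

    -- Repoint report connection strings to SalesDB_Snapshot_1000,
    -- then drop the old snapshot once running reports complete.
    DROP DATABASE SalesDB_Snapshot_0900;
    GO
    ```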

  3. Hi Gethyn, thanks for the reply. I have considered this, but unfortunately our main reporting tool is Crystal Reports and there is no way to dynamically change the connection string. I could programmatically update a system DSN to point at the new database, but of course this will only work for new report executions. Reports currently in progress would be cut off as soon as the snapshot object was destroyed. Sadly we have many very large reports which can take over 5 minutes to run and export to Excel, so they could realistically hit this hourly "refresh" period.
    Until Microsoft improve this snapshot function and enable an online recreation of the snapshot, I think I must carry on with the old method of warehousing my report data first. Thanks again.

  4. I thought this was a novel approach

