
Auto Grow – On or Off? Straw Poll

David Postlethwaite, who contributes to this blog regularly, sent me an email this morning asking about database auto grow.

This was his question:

Do you set the database and logs to grow on demand?
Advantage:       Only uses expensive shared disk space as required.
Disadvantage:   Causes slow database response when the files have to expand. If the auto growth size is set too low, too many small virtual log files are created, leading to poor performance.
If you do set it, what sort of growth sizes do you use (percentage or fixed size)?
How many people have changed the auto growth settings from the defaults of 1MB and 10%?
Or do you manually size the database, logs and disk for the expected growth for the year and then turn off auto growth?
Advantage: No slow response caused by auto growth during busy periods.
Disadvantage: Lots of expensive shared disk is used up with empty space.

I replied with the following:

“I set the file sizes to a decent amount to allow for, say, 12-18 months of growth, and monitor them regularly.

I prefer to change file sizes manually as this allows you to control when the files grow. File growth has an overhead.

I prefer to leave auto grow turned on as a fallback. You won't need or want to get called out about it overnight or at the weekend. If you turn it off and a file fills up, the database falls over until it is grown.

I prefer a fixed growth size in MB; a percentage means the file grows by different amounts over time. 10% of 100MB is very different from 10% of 100GB.
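As a sketch, setting a fixed growth increment in MB looks like this (the database and logical file names here are hypothetical; substitute your own):

```sql
-- Hypothetical example: replace SalesDB / SalesDB_Data with your own names.
-- Set a fixed 512MB growth increment instead of a percentage.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 512MB);
```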

Re the log growth: you need to ensure the log is sized appropriately and, if auto grow is set, that it grows by a reasonable amount. You will also need to manage the VLFs (Virtual Log Files) - this is an interesting topic and Kimberly L. Tripp over at SQLskills has some great material on VLFs on her blog.
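To keep an eye on the VLF count, on SQL Server 2016 SP2 and later you can query sys.dm_db_log_info; on older versions DBCC LOGINFO returns one row per VLF. A sketch, with a hypothetical database name:

```sql
-- SQL Server 2016 SP2 and later: one row per VLF
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('SalesDB'));

-- Older versions: DBCC LOGINFO also returns one row per VLF
-- DBCC LOGINFO ('SalesDB');
```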

I have an old post on database auto growth that also makes these points.

So I thought this might be an interesting talking point. I have set up a poll which you can see on the right-hand side of the blog. If you are reading through an RSS reader you will need to visit the site to take part.

I think David will be interested in your responses, so please take a minute to leave your thoughts.


  1. Disk space is cheap.

    Instead of monitoring the size of the data and log files, I monitor disk free space. Network alarms go off when free space reaches 30%. My rule of thumb is to keep it around 50%. I think that is reasonable to maintain a healthy production server.
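A free-disk-space check along these lines can be done from inside SQL Server itself; a sketch (the 30% threshold is illustrative):

```sql
-- Free space on each volume that hosts a database file
-- (sys.dm_os_volume_stats requires SQL Server 2008 R2 SP1 or later)
SELECT DISTINCT
    vs.volume_mount_point,
    vs.total_bytes / 1048576      AS total_mb,
    vs.available_bytes / 1048576  AS free_mb,
    CAST(vs.available_bytes * 100.0 / vs.total_bytes AS DECIMAL(5,1)) AS free_pct
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
```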

  2. Thanks for taking the time to comment.

    Monitoring free disk space implies you let the auto-grow function take care of growing the database files, with little or no intervention from the DBA. Is that correct?

  3. Up to now we have allowed the databases to autogrow. I have a C# program that checks the free disk space on each server and alerts via email or SMS if any of the drives or LUNs drop below a certain percentage or fixed value.

    The program can also check whether there is enough disk space for the next autogrowth increment and, again, alert via email or SMS.

    However, the thinking here now is to try to avoid autogrowth during the day by keeping the database files large enough. I expect we will use SCOM (or I will alter my program) to monitor the amount of free space within the database files and warn us, so we can grow them out of hours when it's quiet.

    Having said all that, we've never had complaints from the business about databases going slowly that we could link to autogrowth.
    As long as the growth increment isn't so massive that it takes too long to grow, or so small that the file spends all its time growing, the odd slowish moment is acceptable.

    Making sure your growth increment hasn't been left on the default is quite important.
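A quick way to spot files still on the old defaults (1MB fixed growth for data, 10% for log) is to query sys.master_files; a sketch:

```sql
-- Flag files whose growth setting was left on the old defaults
SELECT DB_NAME(database_id) AS database_name,
       name,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'  -- growth is in 8KB pages
       END AS growth_setting
FROM sys.master_files
WHERE (is_percent_growth = 1 AND growth = 10)
   OR (is_percent_growth = 0 AND growth = 128);  -- 128 pages = 1MB
```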


  4. I'd like to know what people do in a virtual environment where disk space is shared by several systems. We'd hope our server support teams configure things so that there's enough space for everyone, but might it make sense to grab a whole lot up front, just in case?

  5. To me it would work the same way regardless of whether it's a physical or virtual server, if they both use shared (SAN) storage.

    Your SQL Server will need adequate storage to house your databases and their expected growth over the short to medium term.

    I don't know how you carve up the LUNs, or what's shared with what, but if you size the database files accordingly you won't have to worry about the database auto growing, or needing free space to grow into, for the foreseeable future.

    Obviously you need to monitor space and growth so you get a good idea of when you will need to grow the file sizes again. You can then plan to do that at a suitable time and avoid the database auto growing during a busy period.

    I like to leave autogrow on, just in case, so I don't get called out in the middle of the night, but it's only used as a backup.

    By sizing the files accordingly you can also avoid the VLF issue that you can get when you let the transaction log autogrow. Take a look at some of the links in the blog post that discuss this.
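The "monitor free space within the database files" approach mentioned in these comments can be sketched as follows (run in the context of the database you want to check):

```sql
-- Free space inside each file of the current database
-- size and FILEPROPERTY('SpaceUsed') are both in 8KB pages
SELECT
    name,
    size / 128                                          AS size_mb,
    FILEPROPERTY(name, 'SpaceUsed') / 128               AS used_mb,
    (size - FILEPROPERTY(name, 'SpaceUsed')) / 128      AS free_mb
FROM sys.database_files;
```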

  6. For data files I'm mostly happy with autogrow, in fairly large chunks (grow by the original size each time), to avoid too much fragmentation.
    For log files, I have autogrow on and set it at 100% (i.e. double each time). This can cause a blip, but experience shows that when the log file grows beyond the norm it grows a lot.

  7. I always give the same advice to clients:
    1. Autogrow is a contingency solution, not a space management technique.
    2. SQL Server doesn't have a predictive growth capability: if you need 100MB of growth and autogrow is set to 1MB, you will get 100 separate 1MB growth events, not one 100MB growth event. So set your autogrow appropriately for the database and its activity.
    3. A file growth event pauses all activity for the database, or the WHOLE INSTANCE if it's TempDB that is growing.
    4. Use Instant File Initialization so that at least growth events on data files are fast; log files are always zero-initialized and so unaffected by IFI.
    5. Be mindful of VLFs; don't make them too big or too small. If you need a log file of 100GB, allocate 4000MB, then grow in 4000MB chunks (be aware that in SQL 2008 R2 and earlier there is an 'undocumented feature' where growth by exactly 4GB, or a multiple of it, doesn't work).
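The chunked log-growth approach described in point 5 might be sketched like this (database and file names are hypothetical; SIZE sets the new total size, so each statement grows the file by 4000MB):

```sql
-- Hypothetical names: replace SalesDB / SalesDB_Log with your own.
-- Grow the log to the target in 4000MB steps rather than one large allocation,
-- keeping VLFs reasonably sized and avoiding the exact-4GB growth bug
-- present in SQL 2008 R2 and earlier.
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Log, SIZE = 4000MB);
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Log, SIZE = 8000MB);
-- ...repeat in 4000MB increments up to the target size.
```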

  8. I set auto growth on and usually use a number arrived at after database sizing. On critical systems I make sure it is set because I would much rather have the database grow than have it fall over while there is still space left. Usually the rate of growth is small, or within the expected norms. Basically, auto growth is a backup for proper database sizing, not a workaround; as long as we follow that maxim I think we should be OK.

  9. Thanks for all your comments, everyone. Simon, that is pretty much what I tell my clients too. It seems a lot of people use auto grow the way I do. We had a great response on the poll too, so thanks to everyone who took the time to vote.

