SQL Konferenz 2016 – Darmstadt Germany

I am pleased and proud to announce that I have been selected to speak at SQL Konferenz 2016. It takes place from 23.02.2016 to 25.02.2016 in Darmstadt and is the leading SQL Server conference in Germany.

This year was the first time I attended and also spoke at the conference, and I loved it, so when the call for speakers opened, I threw my hat in the ring. The conference organisers got in touch recently and chose my session “You don’t need Enterprise Edition for that”. I have delivered this session a couple of times before and had fun with it, so I look forward to tweaking it slightly and presenting it in February.

Although the agenda isn’t online at the time of writing, registration is open and an early bird discount is available, so head on over and sign up!

Growing a community is not easy……

…… But it is a lot of fun!

Last Friday I put on a small community event in my home town (#SQLGrillen). It was no SQLSaturday, with just four sessions and only two months of preparation time, but it was a lot of fun!

Why would you do this? Why for free?!

I’ve heard those two questions a lot recently in the run up to the event itself, but also over the last year since founding the Emsland SQL Server User Group.

It seems to confuse people greatly that I would want to spend my private/personal time on “work”. I have wondered if this is a regional bias (I live in quite a rural part of Germany), or maybe a cultural one (many people I have met in Germany strictly separate work from private life). If the cultural reasoning holds true, it would explain some of the difficulties of getting attendees for the user group. Either way, I don’t consider it to be “work”. Of course there is an overlap: it is a SQL Server user group and I work with SQL Server. However, I find the social aspect of a user group/event to be almost as important as the technical side. I probably enjoy exchanging ideas/war stories with fellow data professionals more than the technical content – somehow, even though I’m an awkward geek, the human interaction is the source of greater fulfilment.

Why would I do this for free? Why do people run marathons? Why do they join various other clubs in their spare time? Do they get paid to do this volunteer work? I think this goes back to the work/private life separation thing and people not understanding how I can mix the two.

Why would I not do this?

I have made some great friends and had some fantastic experiences as part of the community and hope to “infect” other people with the community virus. I have also gained a lot in my career through the things I have learned (things I would never have encountered in my normal work day) and made so many connections that I no longer really need to look for new opportunities. If I wanted to change jobs, I would just need to explore the ones that get sent my way on a daily basis.

I am not so good that people want to hire me all the time, but being out there in the community makes me visible to companies and recruiters like nothing else.

We want you!

I realise that I am preaching to the choir when writing here (if you are reading a blog on SQL Server, you are already investing time that 99% of others don’t), but I’d still like to suggest that you get involved in the community somehow. User groups, events and associations have a wide range of tasks that need doing and are too much for their current members/helpers to manage. Even if you don’t want to speak, there is plenty of opportunity to get involved in another way.

So please, whether for the altruistic side of you or for the “get ahead in my career” side of you, get in touch with your local user group or association and see if you can help out in some way. You will not regret it, I assure you.

P.S. Thanks to Rob Sewell ( b | t ) for the reminder: A few places for you to start would be finding your local PASS Chapter, or if there isn’t one where you are, there are also the Virtual Chapters.

Your presentation design matters more than you think

….. So give the design the attention it deserves!

I have been speaking/presenting at events around Europe for a few years now and have learned a lot about the art of speaking: repeating questions back to the audience, suppressing “um” and “err” sounds, making eye contact with the audience, etc. These are all well-communicated best practices across all industries when it comes to presenting information to both small and large audiences.

Something that is written and talked about far less is the style or design of the supporting content.

Many people think of a presentation and immediately picture the dreaded

  • Death
  • by
  • Bulletpoint
  • presentation

We’ve all been in one of those presentations before; unfortunately, it happens more often than it should. The most regular (ab)users of this presentation method are our bosses/managers, who may be great managers but can be the worst presenters out there, basically throwing up walls of text in PowerPoint and reading the words back to the audience.

After initially falling into this trap (though not quite as textbook a fail as just outlined), I have tried to change my presentation style to engage the audience more and use PowerPoint for what it should be used for: a support tool for the ideas I am trying to communicate, rather than a display for the content itself. The idea is that the viewer should see the slide, listen to me speaking, and merge the two sources of input into their own understanding of the topic. When an audience member reviews the slides at a later date, they should be able to recall the topic and their ideas and continue to benefit from the session.

While trying out these ideas, I have searched online for inspiration and spoken with friends/colleagues about how they approach their presentations. This included some great discussions with Boris Hristov ( b | t ) and Cathrine Wilhelmsen ( b | t ) at SQL Konferenz 2015 in Darmstadt, Germany. They gave me some ideas to help simplify my slide content, generally improve my presentations and make them more useful for the audience.

After these discussions, Boris obviously saw an opportunity to explore the area of presentation design as a service (I claim the trademark on PDaaS!). If you have ever seen Boris present, you will agree with me that he is the perfect person to tackle this idea. His presentations are so clear and to the point that it seems obvious he should offer his skills to people and companies that want to stand out from the crowd when presenting. He has proved this with his excellent Pluralsight course on the topic of presentation design.

Boris’ new venture is called 356labs and is focused purely on designing top-notch presentations and training people in presentation design, so that they can produce their own high-quality content. Go and take a look at the portfolio page and you will see what I’m talking about. Boris has an eye for presentation design, taking complex topics and providing simple supporting slides that allow the viewer to concentrate on the topic at hand. So if you are looking for mentoring or training on presentation skills, be sure that 356labs, with its top-rated international speakers, will be able to help you out.

I’m looking forward to what Boris has in store with 356labs and am sure that a lot of people will benefit from the service and advice he has to offer.

Good luck Boris!

Who blew up my TempDB?

Who hasn’t experienced a server all of a sudden running out of TempDB space and seeing the following error message?

Could not allocate a new page for database 'tempdb' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

The difficulty with investigating a TempDB issue like this is that the solution generally chosen to fix a full TempDB is to restart the SQL Server instance. Since the instance has quite often locked up completely and the customer wants their server to work again ASAP, a restart is almost inevitable.

A restart is an option to combat the symptom, but not the cause. It also (in)conveniently removes the evidence of the transaction that caused TempDB to fill up in the first place.

To see how to deal with TempDB (or any other database) filling up, or filling up the disk it is on, please take a look at this blogpost.

Returning to the question at hand: Who filled my TempDB?

One solution I looked into, and have implemented for this exact reason, uses Extended Events.

Extended Events is an event processing framework that has been built into SQL Server since SQL Server 2008. It allows you to run very lightweight background monitoring of system events and, in this case, can capture information about which command/transaction caused a file growth event to occur.

For a full introduction to Extended Events, please take a look at the 31 Days of Extended Events series from Jonathan Kehayias ( b | t ).

Catch the culprit

To be able to catch the offending process/user, we will create an Extended Events session on the server and the following code does just that:

CREATE EVENT SESSION [Database_Growth_Watchdog] ON SERVER
ADD EVENT sqlserver.database_file_size_change (
 ACTION ( sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_name, sqlserver.session_nt_username, sqlserver.sql_text )
 WHERE ( [database_id] = ( 2 ) ) ) -- We filter on database_id = 2 to capture TempDB growth only
ADD TARGET package0.event_file ( SET filename = 'D:\Temp\Database_Growth_Watchdog.xel',
 max_file_size = ( 10 ), max_rollover_files = ( 5 ) )
WITH ( MAX_DISPATCH_LATENCY = 1 SECONDS, STARTUP_STATE = ON );
GO
ALTER EVENT SESSION [Database_Growth_Watchdog] ON SERVER STATE = START;

Some things to note about the code:

  • We are monitoring for data file and log file growth. The event sqlserver.database_file_size_change fires for any file size change. We do this because we want to be informed of any and all file growth just to make sure we don’t miss anything.
  • If you have multiple data files for TempDB (as you may have in a multi-core environment), you will see one event fire for each file that is growing. E.g. if you have 4 data files and the database grows, you will see 4 entries in the Extended Events output.
  • The session is set to flush events to the output file in 1 second intervals (MAX_DISPATCH_LATENCY). This is done to ensure we lose as few entries as possible. If TempDB fills up, the entire instance can often stop working completely. We want to catch as much information as possible before this happens, so we flush to the output file at very short intervals.
  • We start the session at instance startup (STARTUP_STATE). This ensures the event session is active immediately on server startup. As this only monitors file growth events, it remains very lightweight, so we don’t have to worry about swamping our system with Extended Events processing.
  • We limit the output file to 10MB and allow it to roll over to a total of 5 files. This means we need at most 50MB for the files in the output directory and won’t spam the folder with too much data.
  • When the event fires, we collect information about the query/command that caused the file growth to occur. This includes NT username, Hostname, origin database of the query, command text and application name.

The information collected by this session can be vital in pinpointing the cause of TempDB filling up. However, there is the possibility of false positives in this setup: TempDB may have been almost completely filled by an earlier transaction, and the transaction causing the growth event is just an innocent bystander. This is an unavoidable situation, but it needs to be kept in mind when analysing the data. Even if you don’t catch the exact transaction this way, you are still on the right track.

Analyse the data

Once the data has been collected, we need to load and parse the output files to make sense of what has happened. The following code will parse the XML output contained in the (up to) 5 files.

DECLARE @TraceFileLocation NVARCHAR(255)= N'D:\Temp\Database_Growth_Watchdog*.xel';
WITH FileSizeChangedEvent
 AS (
 SELECT object_name Event,
 CONVERT(XML, event_data) Data
 FROM sys.fn_xe_file_target_read_file(@TraceFileLocation, NULL, NULL, NULL)
 )
 SELECT Data.value('(/event/@timestamp)[1]', 'DATETIME') EventTime,
 Data.value('(/event/data/value)[7]', 'BIGINT') GrowthInKB,
 Data.value('(/event/action/value)[2]', 'VARCHAR(MAX)') ClientUsername,
 Data.value('(/event/action/value)[4]', 'VARCHAR(MAX)') ClientHostname,
 Data.value('(/event/action/value)[5]', 'VARCHAR(MAX)') ClientAppName,
 Data.value('(/event/action/value)[3]', 'VARCHAR(MAX)') ClientAppDBName,
 Data.value('(/event/action/value)[1]', 'VARCHAR(MAX)') SQLCommandText,
 Data.value('(/event/data/value)[1]', 'BIGINT') SystemDuration,
 Data.value('(/event/data/value)[2]', 'BIGINT') SystemDatabaseId,
 Data.value('(/event/data/value)[8]', 'VARCHAR(MAX)') SystemDatabaseFileName,
 Data.value('(/event/data/text)[1]', 'VARCHAR(MAX)') SystemDatabaseFileType,
 Data.value('(/event/data/value)[5]', 'VARCHAR(MAX)') SystemIsAutomaticGrowth
 FROM FileSizeChangedEvent;

Please take note of the variable @TraceFileLocation. The example uses a wildcard to allow loading/parsing of multiple files; this is particularly useful if you really do roll over into multiple files.

The results of the query provide a range of information about the offending process: who it was, what command it last submitted and a set of details about the client it came from.

So, now we have a starting point to discover who/what is causing TempDB to grow and can discuss this with application owners.

Other ideas

It is extremely simple to extend the session to monitor all databases and run a separate monitoring solution to inform you of such growth events. In fact, I saw Gianluca Sartori ( b | t ) talk about streaming Extended Events data and processing it in near real time at his SQLSat Oslo session. I am certain that this would be a fantastic way of doing some sort of real-time monitoring of growth events.
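
As a starting point, a minimal sketch of such a variation might simply drop the database_id filter from the session above (the session name and file path here are just placeholders, not from the original post):

CREATE EVENT SESSION [All_Database_Growth_Watchdog] ON SERVER
ADD EVENT sqlserver.database_file_size_change (
 ACTION ( sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_name, sqlserver.session_nt_username, sqlserver.sql_text )
 ) -- No database_id filter, so file growth in every database is captured
ADD TARGET package0.event_file ( SET filename = 'D:\Temp\All_Database_Growth_Watchdog.xel',
 max_file_size = ( 10 ), max_rollover_files = ( 5 ) )
WITH ( MAX_DISPATCH_LATENCY = 1 SECONDS, STARTUP_STATE = ON );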

Some homework for you: What could you use this Extended Events session for? How could you modify/improve on it to help with your system monitoring and administration? Let us know in the comments.

Full disks and databases

How do you deal with a full hard disk/volume/partition?

I have recently had a number of discussions about dealing with hard drives, or the databases themselves, filling up on SQL Servers. There seem to be a few differing opinions on how to deal with this, so I’d like to share those here and give my preferred solution.

N.B. (For brevity, I will refer to disks, but mean either disk, volume or partition where applicable)

The main error that people are confronted with when either a disk or a database file has filled to the brim is something like this:

Could not allocate a new page for database 'DatabaseName' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

There are a few reasons this could happen (not an exhaustive list):

  1. The database file is full but the disk holding the database file does not have enough room to accommodate the file growth required for auto-growth to occur.
  2. The database file is full, but the disk holding the database file has an active quota that has been reached, so it does not have enough room to accommodate the file growth required for auto-growth to occur.
  3. The database file is full, but the database does not have auto-growth set up.

The problem with the first two situations is that they are generally not easy or quick to fix. If a disk is full, then extending that disk may not be possible, or only possible with a lot of administrative overhead. The same applies to disk quotas, even more so, as quotas usually have some sort of “internal politics” attached to them.

The third reason occurs surprisingly often. For some reason, certain companies just don’t want their databases to grow automatically. They want to keep everything under human control, so they turn off all sorts of automatic support options like auto-growth. I have also heard a customer mention that they preferred the data files to be static in size because it was easier for them to extend these than the disk (again confirming my point about disk/partition/volume extensions being difficult).
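
If you want a quick way to see which files on an instance have auto-growth disabled (and what the growth increments are for the rest), a minimal sketch against sys.master_files could look like this:

-- growth = 0 means auto-growth is disabled for that file
SELECT DB_NAME(database_id) AS DatabaseName,
       name AS LogicalFileName,
       CASE WHEN growth = 0 THEN 'disabled'
            WHEN is_percent_growth = 1 THEN CAST(growth AS VARCHAR(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
       END AS AutoGrowth
FROM sys.master_files
ORDER BY DatabaseName, LogicalFileName;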

Regardless of the reasoning for databases/database files actually filling up and the error occurring, what can be done in an attempt to help prevent this from happening, or at least allow for a speedier resolution of the issue?

Dummy/Buffer files

If you have a volume size limitation and need to ensure that you can quickly arrange for more disk space, one option is to use “dummy” or “buffer” files. These are files of a certain size that have no actual content. They are created on the disk that you want to have “insurance” for and kept around until disk space is either low (a pre-defined threshold) or the disk is actually full. When either of these situations occurs, you simply delete one or more of these “dummy” files and you immediately have more disk space.

To create these files you can use fsutil, a great little tool that is bundled with Windows and allows sysadmins to do a number of things with disks. You call fsutil from the command line with the correct parameters and an “empty” file will be created.

Syntax of fsutil (length is in bytes)
fsutil file createnew <filename> <length>

Example to create a single 50MB file
fsutil file createnew d:\temp\dummyfile 52428800


Once the file(s) have been created, you have a buffer of space so that you can react quickly to a disk filling up (even with some sort of scripted automation). Of course, bear in mind that the disk now has less space available for “real” data, meaning a full disk can actually occur sooner.
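
If you want to drive that kind of automation from inside SQL Server, one possibility (a minimal sketch, assuming SQL Server 2008 R2 SP1 or later for sys.dm_os_volume_stats) is to check the free space on the volumes hosting your database files and react when it drops below your threshold:

-- Free space on each volume that hosts a TempDB file
SELECT DISTINCT vs.volume_mount_point,
       vs.available_bytes / 1048576 AS FreeSpaceMB
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
WHERE mf.database_id = 2; -- database_id 2 = TempDB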

Using Max Size Limits for a Database

Among the settings that a database can have, you can control the size “steps” a database file should grow in if auto-growth is activated. You can also set a maximum size for the database file(s) to grow to. This is similar to the idea of a disk quota, but at the database file level. Setting this limit can be useful to prevent a database from growing so large that it completely fills a disk. I actually prefer the idea of setting an upper limit on the database file(s) rather than allowing unlimited growth. The reason for my preference is that it is easier to change the maximum growth limit than it is to extend a disk.

Setting the maximum size for a database is a simple operation, either in SSMS via the GUI:


Or via T-SQL:

ALTER DATABASE [sandbox] MODIFY FILE ( NAME = N'sandbox', MAXSIZE = 102400KB )

With this soft limit for a database file, you are able to control file growth with greater precision. The main benefit is that you can increase the max size at a moment’s notice and allow normal operations to continue if a database file should accidentally/suddenly fill up. This is not an ideal solution, but it still improves control and allows normal operations to resume.
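
Before raising such a limit, it helps to know where you currently stand. A quick sketch against sys.database_files (run in the database in question) shows the current size, maximum size and growth settings:

-- size and max_size are stored in 8KB pages; max_size = -1 means "unlimited"
SELECT name,
       size * 8 / 1024 AS CurrentSizeMB,
       CASE max_size WHEN -1 THEN NULL ELSE max_size * 8 / 1024 END AS MaxSizeMB,
       growth
FROM sys.database_files;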

I like dummies!

I prefer the dummy file solution of the two that I have explained above. Using a dummy file allows you to have auto-growth turned on (to cover those immediate growth requirements), while still giving you a way to prevent a disk from filling up completely through an auto-growth event.

Using the max size solution is OK, so long as you are able to connect to the instance to extend the limit. This is often possible for user databases, but what about TempDB? I have experienced systems that locked up because TempDB was full and attempts to connect were fruitless (the remote DAC was not activated and there was no local access to the SQL instance), so it was not possible to extend TempDB.

Having a few dummy files around would allow me to quickly supply TempDB with the disk space required for an auto-growth, connect to the instance again and take charge of the situation. The other alternative (which customers have often chosen in the past) was to reboot the server / restart the instance. The issue with this is that the culprit for the “crash” is no longer identifiable, unless you have a solution implemented to do just that (which I will blog about soon!).


In my experience, most installations/customers do not have a solution implemented to counteract a disk or database filling up. The two solutions I have explained are good starting points to help avoid catastrophe in future. Please take these ideas and see if you can improve the stability and availability of your systems using them, or even better, share your ideas/solutions in the comments of this blog post.

Thanks for reading and have fun with your disk space!

The case of the non-starting SQL Agent Job

I ran into an interesting situation recently that involved a SQL Agent job that would not/could not run. Looking at the job settings, everything seemed fine, yet the SQL Agent deemed the job “un-runnable”.

As we can see here, “My Test Job” is enabled and scheduled, but has no “Next Run” and is deemed not “Runnable”.

Job Activity Monitor

Taking a closer look at the job properties, we can further see that the job is enabled:

Job Properties - General

We also see that there really is a schedule for the job:

Job Properties - Schedules

And on further investigation I saw something that I had never really looked at before, the “Targets” properties:

There are no entries here, but more importantly, the radio button “Target local server” is not selected. This turned out to be the cause of the job not running!

If I try to exit the properties with “OK” (which you should never do unless you have intentionally changed a setting somewhere), we are presented with a clear error message informing us about the missing target:

Job Save Error

“You must specify the Target Servers on which this multi server job will execute.”

The “Targets” section of a SQL Agent job is something that I have never delved into in any detail, but it is a long-standing feature that allows for the administration of large server environments. You can read more about Multiserver Environment Administration on MSDN.

The reason the setting was missing was an incomplete scripting process for the affected job. If you script out a SQL Agent job in SQL Server Management Studio, you will see that one of the last steps in the script is a call to the system stored procedure “msdb.dbo.sp_add_jobserver”:

EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'

This stored procedure associates a job with a job server, which in my experience has always been “(local)”. Obviously, this will be different if you are using master servers and target servers in your environment.
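
In my case the fix was a one-liner; a minimal sketch (using the job name from the screenshots above) would be:

-- Associate the existing job with the local server so the Agent will run it
EXEC msdb.dbo.sp_add_jobserver @job_name = N'My Test Job', @server_name = N'(local)';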

As soon as I had set the target server to local for the job, everything was back to normal and the job ran again according to its schedule.

1 Year Emsland SQL User Group

Moin Moin!

Our user group is celebrating its first anniversary with a special meeting:

On 04.09.2015, starting at 15:00, there will be a total of 4 sessions, with 4 visiting speakers presenting on a range of topics.

The sessions are still being finalised, but the date is already set. Take a look at the details and register using the button below!

Eventbrite - Emsland SQL Server User Group: SQLGrillen

We look forward to seeing you!

P.S. Please spread the word; we want as many people there as possible, thank you!