Saturday, July 7, 2012

Backup Master Database

The story came from a question someone asked me.

Does the master database support the full recovery model?

As I remember, the master database is in the "Simple" recovery model by default. I have never tried to put it into the Full recovery model, because the master database is not updated frequently and is very small. I only take a daily full backup of master.

Here is the database properties page of the master database on SQL Server 2008 R2.
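The same change can also be made with T-SQL (per BOL, setting master's recovery model to FULL is allowed for backward compatibility):

ALTER DATABASE master SET RECOVERY FULL;
GO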

So we can change its recovery model to "FULL", but does it really work?  I tried to back it up with SSMS.

Well, I can only select the "FULL" backup type. How about running the backup command?

BACKUP DATABASE [Master] TO  DISK = N'C:\master.dif' WITH  DIFFERENTIAL
GO
I got an error:

Msg 3024, Level 16, State 0, Line 1
You can only perform a full backup of the master database. Use BACKUP DATABASE to back up the entire master database.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
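A plain full backup, on the other hand, works fine (the path below is just an example):

BACKUP DATABASE master TO DISK = N'C:\master_full.bak' WITH INIT
GO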

In SQL BOL, http://msdn.microsoft.com/en-us/library/ms365937(v=sql.105).aspx, for master database:
For backwards compatibility with earlier versions of Microsoft SQL Server, the recovery model of master can be set to FULL or BULK_LOGGED. However, BACKUP LOG is not supported for master. Therefore, even if the recovery model of master is changed to full or bulk-logged, the database continues to operate as if it were using the simple recovery model.

I forgot to mention that not only transaction log backups but also differential backups cannot be run on the master database. Why? Because you cannot restore the master database with the NORECOVERY option, and without NORECOVERY, why would we need to keep differential and log backups? :)
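For context, here is roughly how a master restore works; since only a full backup exists, there is no NORECOVERY step to follow it with. The instance must be started in single-user mode first, and it shuts down by itself when the restore finishes (the path is just an example):
================================
-- 1. Start the instance in single-user mode, e.g.:
--      sqlservr.exe -m
-- 2. Connect and run a full restore; WITH REPLACE overwrites the current master
RESTORE DATABASE master FROM DISK = N'C:\master.bak' WITH REPLACE;
-- 3. The instance stops automatically after the restore; restart it normally
================================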

The master database contains the data below:
1. Login account info
2. System parameters (sp_configure)
3. Database info
4. Server objects, for instance linked servers and system triggers
5. Other server-level security info
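Most of these items can be seen through the server-level catalog views, which all live in master; a few illustrative queries:
================================
SELECT name, type_desc FROM sys.server_principals;  -- logins / server roles
SELECT name, value_in_use FROM sys.configurations;  -- sp_configure settings
SELECT name, state_desc FROM sys.databases;         -- database info
SELECT name, product FROM sys.servers;              -- linked servers
================================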

So in addition to the regular backup, you need to take a full database backup after you change any of the data above. However, if you maintain the system for a customer, you will not be notified when the customer changes the master database. If the server is corrupted, then since there are no differential or log backups for the master database, you cannot restore it to a point in time, and there will be data loss.

I was wondering whether there is a good way to back up the master database automatically after any change. The first thing we need is to capture the change event.

1. Use a server-level DDL trigger to monitor the event.
    A DDL trigger can monitor server-level DDL. The script below lists all the DDL trigger events hierarchically:
================================
;WITH EVENTCATALOG ([TYPE], [TYPE_NAME], [PARENT_NAME], [LEVEL])
AS
(
 -- Anchor: the two root server-level event groups
 SELECT [TYPE]
   ,[TYPE_NAME]
   ,[TYPE_NAME] AS PARENT_NAME
   ,0 AS [LEVEL]
 FROM SYS.trigger_event_types
 WHERE TYPE = 10002 OR TYPE = 296

 UNION ALL

 -- Recursive member: walk down the parent/child event hierarchy
 SELECT
   TET.TYPE
   ,TET.TYPE_NAME
   ,EC.TYPE_NAME AS PARENT_NAME
   ,EC.[LEVEL] + 1 AS [LEVEL]
 FROM SYS.trigger_event_types TET
 INNER JOIN EVENTCATALOG EC ON TET.parent_type = EC.TYPE
)
SELECT [TYPE], [TYPE_NAME], [PARENT_NAME], [LEVEL]
FROM EVENTCATALOG
ORDER BY [LEVEL], TYPE_NAME;
================================

So we just need to create a DDL trigger for the root events ALTER_SERVER_CONFIGURATION and DDL_SERVER_LEVEL_EVENTS. However, the event ALTER_SERVER_CONFIGURATION cannot be monitored. Fortunately, we don't need to run "ALTER SERVER CONFIGURATION" or sp_configure frequently, so monitoring the event DDL_SERVER_LEVEL_EVENTS is enough. Here is the sample code:
================================
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [SystemTrigger] ON ALL SERVER
FOR DDL_SERVER_LEVEL_EVENTS
AS
DECLARE @data XML;
DECLARE @eventtype sysname;
DECLARE @PostTime sysname;
DECLARE @LoginName sysname;
DECLARE @UserName sysname;
DECLARE @object sysname;
DECLARE @tsql nvarchar(max);    -- sysname would silently truncate long commands
DECLARE @message nvarchar(max);
-- Shred the EVENTDATA() XML into scalar variables
SET @data = EVENTDATA();
SET @eventtype = @data.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname');
SET @PostTime = @data.value('(/EVENT_INSTANCE/PostTime)[1]', 'sysname');
SET @LoginName = @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname');
SET @UserName = @data.value('(/EVENT_INSTANCE/UserName)[1]', 'sysname');
SET @object = @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'sysname');
SET @tsql = @data.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
SET @message = ' On server ' + @@SERVERNAME + ' login ' + ISNULL(@LoginName, 'UNKNOWN') + ', user '
    + ISNULL(@UserName, 'UNKNOWN') + ' ran the command ' + ISNULL(@eventtype, 'Unknown Database Operation')
    + ' on database ' + ISNULL(@object, 'UNKNOWN') + ' at ' + ISNULL(@PostTime, 'UNKNOWN') + '. ';
SET @message = @message + 'SQL Command is: ' + ISNULL(@tsql, '');

-- Severity 0 with WITH LOG writes the message to the SQL Server errorlog
RAISERROR (@message, 0, 0) WITH LOG;

GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
ENABLE TRIGGER [SystemTrigger] ON ALL SERVER
================================

If we run any server-level DDL, the command will be logged to the SQL Server errorlog file. Here I dropped one database and detached another, and then found the corresponding messages in the SQL Server errorlog.



2. Backup master database
In the first step we captured the server-level event; now we need to back up the master database automatically. This is a challenge, because you cannot back up a database inside a DDL trigger, so we need to find other solutions:
             a) Call a SQL job from the DDL trigger
                 You can create a SQL job that only backs up the master database and has no schedule, then start the job from the DDL trigger.
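For reference, here is a minimal sketch of creating such a job; the job name 'backupmaster' (which must match the name used by sp_start_job in the trigger) and the backup path are just examples:
================================
USE msdb;
GO
-- Create the job with no schedule; it only runs when the trigger starts it
EXEC dbo.sp_add_job
    @job_name = N'backupmaster',
    @enabled = 1;
-- Single step: take a full backup of master
EXEC dbo.sp_add_jobstep
    @job_name = N'backupmaster',
    @step_name = N'backup master',
    @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE master TO DISK = N''C:\master.bak'' WITH INIT;';
-- Register the job on the local server
EXEC dbo.sp_add_jobserver
    @job_name = N'backupmaster',
    @server_name = N'(local)';
GO
================================
With the job in place, the DDL trigger can be altered to start it: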
================================
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [SystemTrigger] ON ALL SERVER
FOR DDL_SERVER_LEVEL_EVENTS
AS
DECLARE @data XML;
DECLARE @eventtype sysname;
DECLARE @PostTime sysname;
DECLARE @LoginName sysname;
DECLARE @UserName sysname;
DECLARE @object sysname;
DECLARE @tsql nvarchar(max);    -- sysname would silently truncate long commands
DECLARE @message nvarchar(max);
-- Shred the EVENTDATA() XML into scalar variables
SET @data = EVENTDATA();
SET @eventtype = @data.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname');
SET @PostTime = @data.value('(/EVENT_INSTANCE/PostTime)[1]', 'sysname');
SET @LoginName = @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname');
SET @UserName = @data.value('(/EVENT_INSTANCE/UserName)[1]', 'sysname');
SET @object = @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'sysname');
SET @tsql = @data.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
SET @message = ' On server ' + @@SERVERNAME + ' login ' + ISNULL(@LoginName, 'UNKNOWN') + ', user '
    + ISNULL(@UserName, 'UNKNOWN') + ' ran the command ' + ISNULL(@eventtype, 'Unknown Database Operation')
    + ' on database ' + ISNULL(@object, 'UNKNOWN') + ' at ' + ISNULL(@PostTime, 'UNKNOWN') + '. ';
SET @message = @message + 'SQL Command is: ' + ISNULL(@tsql, '');

-- Give the originating DDL a moment to complete, then start the backup job.
-- If SQL Server Agent is stopped, or the job is already running,
-- sp_start_job raises an error here.
WAITFOR DELAY '00:00:02';
EXEC msdb.dbo.sp_start_job 'backupmaster';

RAISERROR (@message, 0, 0) WITH LOG;
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
ENABLE TRIGGER [SystemTrigger] ON ALL SERVER
================================
The solution is not perfect, because
  • The DDL trigger calls a SQL job to back up the master database; if the job is already running, the call will fail and you will get an error message.
  • If SQL Server Agent is not running, the backup doesn't happen, and the DDL trigger will report an error.
  • Be careful when you use SSMS: for instance, if you create a database from SSMS, the "CREATE_DATABASE" and "ALTER_DATABASE" events may be raised many times, and the DDL trigger will run many times as well. You'd better replace "DDL_SERVER_LEVEL_EVENTS" in the trigger with a smaller scope, e.g. create the trigger only for the "CREATE_DATABASE" event.
  • When calling a SQL job to back up the master database, it is possible that the job runs before the trigger has finished, so the backup of master may not include the new change......although it didn't happen during my testing.
             b) Back up the master database with a SQL job running periodically
            This way you don't even need the DDL trigger: just create a SQL job to back up the master database periodically, maybe every 1 or 5 minutes. The problem is that you will end up with many backup files for master, and backing it up that frequently might have a small performance impact. So instead we can set a flag in the DDL trigger and have the SQL job check the flag: if it is set, back up the master database, as sketched below.
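A minimal sketch of the flag approach, assuming a hypothetical one-row table msdb.dbo.MasterBackupFlag (any utility database would do); the trigger sets the flag instead of starting a job, and the scheduled job step only backs up master when the flag is set:
================================
-- One-time setup: a single-row flag table (hypothetical name)
USE msdb;
GO
CREATE TABLE dbo.MasterBackupFlag (NeedBackup bit NOT NULL);
INSERT INTO dbo.MasterBackupFlag (NeedBackup) VALUES (0);
GO
-- In the DDL trigger, replace sp_start_job with:
--   UPDATE msdb.dbo.MasterBackupFlag SET NeedBackup = 1;

-- Scheduled job step (runs every few minutes):
IF EXISTS (SELECT 1 FROM msdb.dbo.MasterBackupFlag WHERE NeedBackup = 1)
BEGIN
    UPDATE msdb.dbo.MasterBackupFlag SET NeedBackup = 0;
    BACKUP DATABASE master TO DISK = N'C:\master.bak' WITH INIT;
END
GO
================================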

This solution is also not perfect:
  • More code to write and maintain.
  • Setting the flag can cause blocking.
  • A larger time interval between the change and the master backup.
So we can see how difficult it is to implement disaster recovery without transaction log backups.

Sunday, July 1, 2012

New in SQL 2012 (1)

Learning something new is always interesting. Today I read several posts about the new features in SQL Server 2012:

1. Indirect Checkpoint
Prior to SQL Server 2012, the SQL Server Database Engine supported 3 types of checkpoints: automatic, manual, and internal. Now there is a new checkpoint type: indirect checkpoint.

Unlike the automatic checkpoint (sp_configure 'recovery interval'), the indirect checkpoint is set at the database level:
ALTER DATABASE [DatabaseName] SET TARGET_RECOVERY_TIME = 15 SECONDS;
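You can check the current setting for every database through sys.databases; the target_recovery_time_in_seconds column is new in SQL Server 2012:

SELECT name, target_recovery_time_in_seconds FROM sys.databases;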

By default, the target recovery time is 0 and indirect checkpoint is disabled. Here are some excellent posts by Microsoft PFEs:
http://sqluninterrupted.com/2012/03/19/target_recovery_time-indirect-checkpoint-sql-server-2012/
http://www.sqlserverfaq.net/2012/03/18/185/
http://blogs.technet.com/b/sqlpfeil/archive/2012/06/13/target-recovery-time-new-database-option-in-sql-server-2012.aspx


The design goal of indirect checkpoint is to control the recovery time more accurately. In some cases, the automatic checkpoint can cause spiky disk I/O and performance issues.
By setting a smaller target recovery time, the indirect checkpoint generates a much smoother, more constant checkpoint workload instead of spiky checkpoints. However, data pages are flushed to disk more frequently, so throughput may degrade and the disk I/O workload will increase. Thorough testing before you apply it in production is a good idea.


Another checkpoint-related change is the FlushCache message in the errorlog file:
2012-05-30 02:01:56.31 spid14s FlushCache: cleaned up 216539 bufs with 154471 writes in 69071 ms (avoided 11796 new dirty bufs) for db 6:0
2012-05-30 02:01:56.31 spid14s average throughput: 24.49 MB/sec, I/O saturation: 68365, context switches 80348
2012-05-30 02:01:56.31 spid14s last target outstanding: 1560, avgWriteLatency 15

Prior to SQL Server 2012, you needed to enable it with trace flag 3504. SQL Server 2012 adds a new check condition:
if the trace flag is enabled, or the checkpoint is long ('TRUE == IsLong'), the message is added to the SQL Server errorlog.

I got this info from
http://blogs.msdn.com/b/psssql/archive/2012/06/01/how-it-works-when-is-the-flushcache-message-added-to-sql-server-error-log.aspx
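On versions prior to SQL Server 2012, the trace flag can be turned on globally if you want to see the FlushCache messages:

DBCC TRACEON (3504, -1);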

I am wondering whether indirect checkpoint is worth trying in this situation :), so we can smooth out the spiky checkpoint with constant, smaller checkpoints.


2. Memory Management
This is a big change in SQL Server 2012.
  • Multi-page allocations and CLR allocations are now included in the memory limits controlled by max server memory (MB) and min server memory (MB) (see the example after this list).
  • Memory_to_reserve value does not include the multi-page allocations
          http://support.microsoft.com/kb/2663912
          (does it mean multi-page allocations have been moved into the buffer pool in SQL 2012?)
  • The "awe enabled" SQL Server feature is deprecated
           http://support.microsoft.com/default.aspx?scid=kb;EN-US;2644592
           If you still use the 32-bit version of SQL Server, you cannot use more than 4 GB of memory any more (32-bit SQL Server on a 64-bit OS).
  • Standard and Business Intelligence editions now support "locked pages".
          http://support.microsoft.com/default.aspx?scid=kb;EN-US;2659143
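One practical consequence of the first change above: instances that previously set max server memory low to leave headroom for multi-page and CLR allocations outside the buffer pool may need the value revisited after upgrading. The setting itself is unchanged (the 4096 below is just an example value):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;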

Refer to
http://blogs.msdn.com/b/sqlosteam/archive/2012/06/20/sql-server-2012-memory-manager-kb-articles.aspx