Monday, April 30, 2012

Generate restore scripts automatically from the backup history tables

I wrote a stored procedure that generates a restore script automatically from the backup history tables. It helps you pick the best restore sequence based on the backup history.

For instance, if you back up the database in the sequence below:
ID backup_start_date backup_type
1 2012-04-30 18:28:43.000 Full
2 2012-04-30 18:29:22.000 Log
3 2012-04-30 18:29:52.000 Full
4 2012-04-30 18:30:28.000 Full
5 2012-04-30 18:33:11.000 Diff
6 2012-04-30 18:37:18.000 Diff
7 2012-04-30 18:39:26.000 Log
8 2012-04-30 18:40:58.000 Full
9 2012-04-30 18:43:21.000 Full
10 2012-04-30 18:43:56.000 Full
11 2012-04-30 18:44:10.000 Full
12 2012-04-30 18:48:50.000 Log
13 2012-04-30 18:50:53.000 Log

Let's say you want to restore the database to '2012-04-30 18:34:00'. The procedure will automatically build the restore script from the backup files below:
ID backup_start_date backup_type
4 2012-04-30 18:30:28.000 Full
5 2012-04-30 18:33:11.000 Diff
7 2012-04-30 18:39:26.000 Log

If you want to restore to '2012-04-30 18:44:00', it will generate the script from these backup files:
10 2012-04-30 18:43:56.000 Full
12 2012-04-30 18:48:50.000 Log

The stored procedure also checks whether each backup file still exists on disk.
You can generate a restore script for any point in time. Let me know your feedback, thanks.
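For example, to generate the restore script for the first scenario above (the database name 'RS' is only an illustration, use your own database name):

EXEC msdb.dbo.sp_RestoreDBbyHistory
 @SourceDB='RS'
 ,@RestoreToTime='2012-04-30 18:34:00'
 ,@ListMode='OnlyValid'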

---------------------------------------------------------------------------------------------------------------
USE [msdb]
GO
IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[sp_RestoreDBbyHistory]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[sp_RestoreDBbyHistory]
GO
USE [msdb]
GO
CREATE PROCEDURE [dbo].[sp_RestoreDBbyHistory] (
 @SourceDB  varchar(200),
 @DestinationDB varchar(200)   =NULL,
 @RestoreToTime datetime    =NULL,   --'yyyy-mm-dd hh:mi:ss', ie. '2012-04-27 22:19:20'
 @RecoveryMode varchar(10)    ='Recovery',  --'Recovery' or 'Norecovery'
 @ListMode   varchar(10)   ='OnlyValid'   --'All' or 'OnlyValid'
 )
AS
/*********************************************************************************************
Generate a database restore script from the backup history; it can
1. give the restore script and sequence
 @ListMode='All' :   lists all possible restore scripts before @RestoreToTime
 @ListMode='OnlyValid' :  gives the best valid restore path with the minimum number of restore steps
2. check whether the backup files still exist

blog: http://jamessql.blogspot.com/
Example:
1. Generate restore script for database RS
EXEC sp_RestoreDBbyHistory @SourceDB='RS'
2. Generate all possible restore scripts for database RS, restore it as 'RS_restored' to '2012-04-30 14:13:30.000' with NORECOVERY
[sp_RestoreDBbyHistory]
@SourceDB='RS'
,@DestinationDB='RS_restored'
,@RestoreToTime ='2012-04-30 14:13:30.000'
,@RecoveryMode ='Norecovery'
,@ListMode='All'
Please test it yourself first.
*********************************************************************************************/
BEGIN

 DECLARE @Exists int
 DECLARE @File_Exists varchar(30)
 DECLARE @backup_start_date datetime
 DECLARE @backup_finish_date datetime
 DECLARE @first_lsn numeric(25,0)
 DECLARE @last_lsn numeric(25,0)
 DECLARE @position int
 DECLARE @backup_type varchar(20)
 DECLARE @backup_size numeric(20,0)
 DECLARE @physical_device_name varchar(500)
 DECLARE @backupset_name varchar(500)
 DECLARE @differential_base_lsn numeric(25,0)
 DECLARE @database_backup_lsn numeric(25,0)


 DECLARE @restore_command varchar(max)
 DECLARE @lastfile bit
 DECLARE @stopat varchar(50)
 DECLARE @MOVETO VARCHAR(MAX)
 DECLARE @MOVETO_temp VARCHAR(MAX)
 DECLARE @first_backup_set_id int

 IF (@SourceDB is NULL)
 BEGIN
  PRINT 'Please input the @SourceDB name!'
  RETURN
 END

 IF (@DestinationDB is NULL)
  SET @DestinationDB=@SourceDB

 if (@RestoreToTime is NULL)
 BEGIN
  SET @stopat=''
  SET @RestoreToTime=GETDATE()
 END
 else
  SET @stopat=', STOPAT = '''+CONVERT(varchar(50), @RestoreToTime)+''''

 IF (@RecoveryMode NOT IN ('Recovery', 'Norecovery'))
 BEGIN
  PRINT 'Please set parameter @RecoveryMode with value ''Recovery'' or ''Norecovery'''
  RETURN
 END

 IF (@ListMode NOT IN ('All', 'OnlyValid'))
 BEGIN
  PRINT 'Please set parameter @ListMode with value ''All'' or ''OnlyValid'''
  RETURN
 END

 SET @lastfile=0
 SET @MOVETO=''

 --FIND the last valid full backup
 SELECT TOP 1 @first_backup_set_id=backup_set_id
 FROM MSDB..backupset bs
 WHERE bs.type='D'
 AND bs.backup_start_date<=@RestoreToTime
 AND bs.database_name=@SourceDB
 ORDER BY bs.backup_start_date DESC

 IF (@first_backup_set_id IS NULL)
 BEGIN
  SELECT @first_backup_set_id=MIN(bs.backup_set_id)
  FROM MSDB..backupset bs
  WHERE bs.database_name=@SourceDB AND bs.backup_start_date<=@RestoreToTime
 
  IF (@first_backup_set_id IS NULL)
  BEGIN
   PRINT 'There is no valid backup!!!'
   return
  END
  ELSE
   PRINT 'There is no valid full backup!!!'
  
 END
 ELSE
 BEGIN
  SELECT @MOVETO=@MOVETO+'MOVE N''' +bf.logical_name+''' TO N'''+
  REVERSE(RIGHT(REVERSE(bf.physical_name),(LEN(bf.physical_name)-
  CHARINDEX('\', REVERSE(bf.physical_name),1))+1))+
  CASE
   WHEN bf.file_number = 1 THEN @DestinationDB+'_data.mdf'
   WHEN (bf.file_number <> 1) and (bf.file_type = 'L') THEN @DestinationDB+'_log'+CONVERT(varchar(3),bf.file_number)+'.ldf'
   WHEN (bf.file_number <> 1) and (bf.file_type = 'D') THEN @DestinationDB+'_data'+CONVERT(varchar(3),bf.file_number)+'.ndf'
   ELSE @DestinationDB+'_'+CONVERT(varchar(3),bf.file_number)+'.ndf'
  END +
  ''','
  FROM msdb..backupfile bf LEFT JOIN msdb..backupset bs on bf.backup_set_id=bs.backup_set_id
  where bf.backup_set_id=@first_backup_set_id
  SET @MOVETO_temp=@MOVETO
 END

 CREATE TABLE #RestoreCommand(
  ID int NOT NULL IDENTITY (1, 1),
  backup_start_date datetime,
  backup_finish_date datetime,
  database_backup_lsn numeric(25,0),
  differential_base_lsn numeric(25,0),
  first_lsn numeric(25,0),
  last_lsn numeric(25,0),
  position int,
  backup_type varchar(20),
  backup_size numeric(20,0),
  physical_device_name varchar(500) ,
  backupset_name varchar(500),
  restore_command varchar(max),
  fileExist varchar(20)
 )
 IF (@ListMode='All')
  SET @first_backup_set_id=1
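 --Walk the backup history in chronological order and build a restore command for each backup set up to @RestoreToTime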
 DECLARE backup_cursor CURSOR FOR
 SELECT
    bs.backup_start_date, 
    bs.backup_finish_date,
    bs.database_backup_lsn,
    bs.differential_base_lsn,
    bs.first_lsn,
    bs.last_lsn,
    position,
    CASE bs.type 
     WHEN 'D' THEN 'Full' 
     WHEN 'L' THEN 'Log' 
     WHEN 'I' THEN 'Diff'
    END AS backup_type, 
    bs.backup_size,
    bmf.physical_device_name,  
    bs.name AS backupset_name
 FROM   msdb.dbo.backupmediafamily  bmf
    INNER JOIN msdb.dbo.backupset bs ON bmf.media_set_id = bs.media_set_id 
 WHERE bs.database_name=@SourceDB
 AND bs.backup_set_id>=@first_backup_set_id
 ORDER BY 
    bs.backup_start_date

 OPEN backup_cursor;
 FETCH NEXT FROM backup_cursor
 INTO @backup_start_date,@backup_finish_date,@database_backup_lsn,@differential_base_lsn,@first_lsn, @last_lsn,@position, @backup_type,@backup_size,@physical_device_name, @backupset_name

 WHILE ((@@FETCH_STATUS = 0) and (@lastfile <>1))
 BEGIN
  --check whether the backup file still exists on disk
  EXEC Master.dbo.xp_fileexist @physical_device_name, @Exists OUT
  IF  (@Exists= 1)
   SET @File_Exists= 'File Found'
  ELSE
   SET @File_Exists= 'File Not Found'
  
  IF (@backup_start_date<=@RestoreToTime)
  BEGIN
   --if this is a diff backup, remove all log and diff backups recorded before it
   IF ((@backup_type='Diff') and (@ListMode='OnlyValid'))
    DELETE FROM #RestoreCommand
    WHERE backup_type IN ('Log', 'Diff')
  
   IF @backup_type='Full'
   BEGIN
    SET @MOVETO_temp=@MOVETO
    IF (@ListMode='OnlyValid')
     DELETE FROM #RestoreCommand
   END
   ELSE
    SET @MOVETO_temp=''
   SET @restore_command=
    CASE  
     WHEN @backup_type in ('Full','Diff') THEN 'RESTORE DATABASE [' + @DestinationDB + '] FROM  DISK = N'''+ @physical_device_name +''' WITH  FILE = '+convert(varchar(3),@position)+','+@MOVETO_temp+'NORECOVERY, NOUNLOAD,  STATS = 10'
     WHEN @backup_type = 'Log' THEN 'RESTORE LOG [' + @DestinationDB + '] FROM  DISK = N'''+ @physical_device_name +''' WITH  FILE = '+convert(varchar(3),@position)+','+@MOVETO_temp+'NORECOVERY, NOUNLOAD, STATS = 10'
    END
  
   INSERT INTO #RestoreCommand (
    backup_start_date,
    backup_finish_date,
    database_backup_lsn ,
    differential_base_lsn,
    first_lsn,
    last_lsn,
     position,
    backup_type,
    backup_size,
    physical_device_name ,
    backupset_name,
    restore_command,
    fileExist)
   VALUES
   (
    @backup_start_date,
    @backup_finish_date,
    @database_backup_lsn ,
    @differential_base_lsn,
    @first_lsn,
    @last_lsn,
    @position,
    @backup_type,
    @backup_size,
    @physical_device_name,
    @backupset_name,
    @restore_command,
    @File_Exists
   )
  END
  ELSE
   IF (@backup_type='Log')
   BEGIN
    SET @lastfile=1
    SET @restore_command='RESTORE LOG [' + @DestinationDB + '] FROM  DISK = N'''+ @physical_device_name +''' WITH  FILE = '+convert(varchar(3),@position)+',NORECOVERY, NOUNLOAD, STATS = 10'
    SET @restore_command=REPLACE(@restore_command, 'NORECOVERY','RECOVERY')+@stopat
    INSERT INTO #RestoreCommand (
     backup_start_date,
     backup_finish_date,
     database_backup_lsn ,
     differential_base_lsn,
     first_lsn,
     last_lsn,
      position,
     backup_type,
     backup_size,
     physical_device_name ,
     backupset_name,
     restore_command,
     fileExist)
    VALUES
    (
     @backup_start_date,
     @backup_finish_date,
     @database_backup_lsn ,
     @differential_base_lsn,
     @first_lsn,
     @last_lsn,
     @position,
     @backup_type,
     @backup_size,
     @physical_device_name,
     @backupset_name,
     @restore_command,
     @File_Exists
    )
   END
  FETCH NEXT FROM backup_cursor
  INTO @backup_start_date,@backup_finish_date,@database_backup_lsn,@differential_base_lsn,@first_lsn, @last_lsn,@position, @backup_type,@backup_size,@physical_device_name, @backupset_name
 END
 CLOSE backup_cursor;
 DEALLOCATE backup_cursor;

 IF (@lastfile<>1)
  INSERT INTO #RestoreCommand (
   restore_command)
  VALUES
   (
   'You need to back up the Tail of the Log on database [' +@SourceDB+'] before restoring, then restore the tail-log backup with recovery as last step!'
   )
 select * from #RestoreCommand order by [ID]
 DROP TABLE #RestoreCommand

END

Saturday, April 21, 2012

Setup database mirroring and log shipping together

SQL Server has multiple high-availability solutions. Compared with clustering, database mirroring and log shipping are easier to set up, and they don't need special hardware.

Here is a great article about how to set up database mirroring and log shipping together:
http://technet.microsoft.com/en-us/library/ms187016.aspx

In our system, we set up database mirroring in high-safety mode within a fast network domain, and set up log shipping with the secondary server in a remote data center, which provides off-site disaster tolerance for our system.

The system diagram and setup steps are in the article above, but it says little about setting up the log shipping monitor. Based on my experience, if you want to keep log shipping monitoring as seamless as possible, you had better use a standalone server as the log shipping monitor server. That way, after database mirroring fails over from the primary server to the mirror server, the log shipping monitor still works.

However, in my current system, when I set up log shipping after the database mirroring setup was complete, I used the current primary server as the monitor server, which causes a problem after a database mirroring failover.

The secondary server treats only the current primary server as the monitor server, so after the restore job completes, it only updates log_shipping_monitor_secondary on the primary, not on the mirror. If the primary server goes down and a failover happens, the original mirror server becomes the primary server, but it is not the monitor server, so there is no log shipping monitor server any more.
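You can verify which entries the monitor actually receives by querying the monitor table on the server you expect to act as the monitor; a minimal sketch (the table lives in msdb on every server that participates in log shipping):

SELECT secondary_server, secondary_database, last_copied_date, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;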


When setting up database mirroring and log shipping together, I recommend using a standalone server as both the witness server for database mirroring and the monitor server for log shipping. This server can also host the file share for the log shipping transaction log backups. Alternatively, you can set up the log shipping alert job on the primary, mirror, and secondary servers; that is also a simple way to solve the monitor problem.





Thursday, April 19, 2012

Script system objects in the resource database

While troubleshooting the log shipping problem, I found that many log shipping stored procedures cannot be found in the master or msdb database. For instance, master.[sys].[sp_add_log_shipping_secondary_primary] calls sys.sp_MSprocesslogshippingjob to create the copy job; however, sys.sp_MSprocesslogshippingjob lives in the resource database, so we cannot review its code with the sp_helptext command.

To view the script of a system object, follow the steps below:
1. Open a command prompt window as administrator and navigate to the SQL Server Binn folder; by default it looks like C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn

2. Run the command below to start SQL Server from the command line in single-user mode:
sqlservr -c -m

3. Open another command prompt window and connect with a dedicated admin connection (the -A switch):
sqlcmd -E -A
or
sqlcmd -E -A -S <instancename>

4. Run the commands below to get the system object's script:

use mssqlsystemresource
go
:out c:\temp\script.txt
go
sp_helptext sp_MSprocesslogshippingjob
go


:out sets the output destination.

You can also get the system object script another way:

1. Copy mssqlsystemresource.mdf and mssqlsystemresource.ldf from C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn to another data folder
2. Attach the copied mssqlsystemresource.mdf with a new database name, let's say "RS" (see the sketch below)
3. Open SSMS, switch to RS, and you will find the system stored procedure sp_MSprocesslogshippingjob
under RS.
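A minimal attach sketch, assuming the files were copied to c:\temp (the path and the database name RS are just examples; you may need to clear the read-only attribute on the copied files first):

CREATE DATABASE RS
ON (FILENAME = N'c:\temp\mssqlsystemresource.mdf'),
   (FILENAME = N'c:\temp\mssqlsystemresource.ldf')
FOR ATTACH;
GO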

Wednesday, April 18, 2012

Log shipping setup problem

Last week our log shipping setup had a problem, so I spent some time digging into log shipping.

There are 2 monitor tables after log shipping is set up:
1. log_shipping_monitor_primary
      This table has one entry for each primary database, on both the primary server and the monitor server

2. log_shipping_monitor_secondary
      This table has one entry for each secondary database, on both the secondary server and the monitor server

In my system, I use the primary server as the monitor server, so on the primary server log_shipping_monitor_primary should have the same number of entries as log_shipping_monitor_secondary. But after I set up log shipping for 4 databases, log_shipping_monitor_primary had 4 entries while log_shipping_monitor_secondary had only 1 entry.
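A quick way to compare the two tables on the monitor server is to count the entries in each; a sketch:

SELECT 'log_shipping_monitor_primary' AS monitor_table, COUNT(*) AS entries
FROM msdb.dbo.log_shipping_monitor_primary
UNION ALL
SELECT 'log_shipping_monitor_secondary', COUNT(*)
FROM msdb.dbo.log_shipping_monitor_secondary;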

To troubleshoot the issue, I first went through the log shipping setup steps:
1. Initialize the secondary database by restoring a full backup of the primary database on the secondary server.

2. On the primary server, execute master.dbo.sp_add_log_shipping_primary_database to add a primary database. The stored procedure returns the backup job ID and primary ID.
Internal Steps of master.dbo.sp_add_log_shipping_primary_database:
 o   Call sys.sp_MSlogshippingsysadmincheck to check the security

 o   Validate the parameters
 o   Check whether the database is already in table log_shipping_primary_databases; if the overwrite parameter is 1, call sp_delete_log_shipping_primary_database to delete it
 o   If the monitor server is not the local server, call msdb.sys.sp_MSprocesslogshipmonitorlink, which creates a linked server for the remote server
 o   Call sys.sp_MSgetlogshipagentpath to get the Tools\Binn folder; by default it is 'C:\Program Files\Microsoft SQL Server\90\Tools\binn\'
 o   Call sys.fn_MSgetlogshippingjobname to get the new job name; this is where the job names “LSBackup_xxx”, “LSCopy_xxx”, “LSRestore_xxx”, and “LSAlert_xxx” come from
 o   Call sys.sp_MSprocesslogshippingjob to create the job
        In fact, sys.sp_MSprocesslogshippingjob calls msdb.dbo.sp_add_job and msdb.dbo.sp_add_jobstep to create the job
 o   Add an entry to log_shipping_primary_databases
 o   Call sp_MSprocesslogshippingmonitorprimary to set up the monitor
        Before adding the monitor entry, it deletes information from msdb.dbo.log_shipping_monitor_error_detail, msdb.dbo.log_shipping_monitor_history_detail, and msdb.dbo.log_shipping_monitor_primary, and calls sys.sp_delete_log_shipping_alert_job_internal
        Then it adds an entry to msdb.dbo.log_shipping_monitor_primary and calls sys.sp_add_log_shipping_alert_job_internal to add the alert job if needed
        If the monitor server is remote, it calls msdb.sys.sp_MSproxylogshippingmonitorprimary to set up the monitor

Briefly speaking, sp_add_log_shipping_primary_database creates the LSBackup_xxx job, adds one row to log_shipping_primary_databases, sets up the monitor, and finally returns the new job ID and primary ID.

3. On the primary server, execute sp_add_jobschedule to add a schedule for the backup job.
 Use the job ID and primary ID from step 2 to create the job schedule for the LSBackup_xxx job.

4. On the monitor server, execute sp_add_log_shipping_alert_job to add the alert job.
Under the hood it calls sys.sp_add_log_shipping_alert_job_internal, which creates the alert job and adds an entry to msdb.dbo.log_shipping_monitor_alert.

5. On the primary server, enable the backup job.
 EXEC msdb.dbo.sp_update_job @job_id = @LS_BackupJobId ,@enabled = 1

6. On the secondary server, execute sp_add_log_shipping_secondary_primary, supplying the details of the primary server and database. This stored procedure returns the secondary ID and the copy and restore job IDs.
sp_add_log_shipping_secondary_primary works in a similar way to sp_add_log_shipping_primary_database: it creates the LSCopy_xxx and LSRestore_xxx jobs on the secondary and returns their IDs along with the secondary ID.

7. On the secondary server, execute sp_add_jobschedule to set the schedule for the copy and restore jobs.
8. On the secondary server, execute sp_add_log_shipping_secondary_database to add a secondary database.
 Similar to sp_add_log_shipping_primary_database.
9. On the primary server, execute sp_add_log_shipping_primary_secondary to add the required information about the new secondary database to the primary server.
10. On the secondary server, enable the copy and restore jobs.


Let's return to my log shipping problem:
When I set up log shipping on the secondary, I used this script:
                EXEC @LS_Add_RetCode2 = master.dbo.sp_add_log_shipping_secondary_database
                                         @secondary_database = '[DATABASENAME]'
                                                ,@primary_server = '[PRIMARYSERVER]'
                                                ,@primary_database = '[DATABASENAME]'
                                                ,@restore_delay = 0
                                                ,@restore_mode = 1
                                                ,@disconnect_users       = 1
                                                ,@restore_threshold = 180  
                                                ,@threshold_alert_enabled = 1
                                                ,@history_retention_period      = 5760
                                                ,@overwrite = 1
It gets the monitor server name from the table log_shipping_secondary; here is the relevant part of the script in sp_add_log_shipping_secondary_database:
    select @secondary_id = secondary_id
            ,@monitor_server = monitor_server
            ,@monitor_server_security_mode = monitor_server_security_mode
    from msdb.dbo.log_shipping_secondary

However, I found that some of the monitor server names include the domain name while others don't.

Let's continue tracing sp_add_log_shipping_secondary_database. After getting the monitor server name, it calls sp_MSprocesslogshippingmonitorsecondary to set up the monitor table:
    exec @retcode = sp_MSprocesslogshippingmonitorsecondary @mode = 1
                ,@secondary_server = @@servername, …………………
1. You can see it uses @@servername as the secondary server name, which does not include the domain name.
2. It then inserts one entry in the local (secondary server) log_shipping_monitor_secondary table.
3. It then tries to insert an entry on the monitor server by creating a linked server.

Here we can see there are 2 linked servers created on the DR SQL Server, because we have 2 different monitor server names (one with the domain name and one without).
4. It then calls sp_processlogshippingmonitorsecondary on the remote linked server (which is the primary server in our system).
5. In sp_processlogshippingmonitorsecondary, we find the answer:
    if (@monitor_server is null or upper(@monitor_server) != upper(@@servername)
            or upper(@monitor_server) = upper(@secondary_server))
        return 0

Because my monitor_server value includes the domain name, it differs from @@servername, which does not, so sp_processlogshippingmonitorsecondary does not really execute. That is why I had only 1 entry in log_shipping_monitor_secondary.
So I need to standardize on the server name without the domain name in all scripts!
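To spot this kind of mismatch, compare the configured monitor server name with @@SERVERNAME on the secondary; a sketch:

SELECT primary_server, primary_database, monitor_server, @@SERVERNAME AS local_server_name
FROM msdb.dbo.log_shipping_secondary;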

Tuesday, April 10, 2012

Why can't I connect to SQL Server ???

       Recently I installed SQL Server 2012 on a Windows 7 laptop. Today, when I wanted to log in to SQL Server over TCP/IP, I got an error.

1. Enable TCP protocol
I tried to log in to SQL Server from SSMS with Server Name "MachineName\InstanceName" and selected the network protocol "TCP/IP",


but got the error message below.

My first thought was that the TCP/IP protocol had not been enabled. I opened SQL Server Configuration Manager (SSCM), and indeed it was not enabled.


I enabled the TCP protocol and restarted SQL Server.

2. Add port in connection string
I tried to connect to SQL Server again but got the same error. Then I realized I was connecting to a named instance, so I needed to check whether the SQL Server Browser service was enabled. Every named instance registers its port with the SQL Server Browser service when it starts up. When a client uses MachineName\InstanceName to connect and no port is specified, the client first asks SQL Server Browser for the named instance's port and then connects to the instance on that port. If SQL Server Browser is disabled, you need to specify the port in the connection string so that the client knows how to reach the named instance. For a default instance you don't need to specify the port if it is listening on the default port 1433.

If you use a dynamic port, you can get the port from the errorlog (SQL Server writes the port to the errorlog when it starts up) or from SSCM.
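If the instance is running, you can also search the current errorlog for the listening port from T-SQL; a sketch using the undocumented xp_readerrorlog (first parameter 0 = current log, second parameter 1 = SQL Server errorlog):

EXEC master.dbo.xp_readerrorlog 0, 1, N'Server is listening on';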

My SQL Server Browser service was disabled by default, so I used Server Name "MachineName\InstanceName,Port" in SSMS.

3. Check Network
Before I connected to the server again, I verified that the network settings were correct.
      a) Check the errorlog to ensure SQL Server is listening on the port
        Yes, I found it in the errorlog:
2012-04-09 22:24:16.59 spid13s     Server is listening on [ 'any' <ipv4> 50909].

      b) netstat -a
          Yes, I could see the port open
      c) telnet MachineName Port
          Yes, I could telnet to port 50909

It looked like everything was good! I connected to SQL Server again with Server Name "MachineName\InstanceName,Port".

Done! I connected to SQL Server over TCP successfully!

4. sqlcmd connect issue
SSMS was fine now, so I tried sqlcmd with TCP. I ran the command below:
sqlcmd -S TCP:MachineName\InstanceName,port -E

but got an error message:
"Unable to complete login process due to delay in opening server connection".

So weird: SSMS was OK, so why did sqlcmd fail? Based on the error message, it looked like a login timeout issue. I pinged my laptop's machine name and it took a long time to return... so I modified my sqlcmd command with the -l parameter.

-l : Specifies the number of seconds before a sqlcmd login to the OLE DB provider times out when you try to connect to a server. The default login time-out for sqlcmd is eight seconds.

Eight seconds was too short, so I changed it to a bigger value, 60:

sqlcmd -S TCP:MachineName\InstanceName,port -E -l60

OK, after a long wait, I logged in to SQL Server with sqlcmd.

5. Other troubleshooting method.
The above steps are only some general ways to troubleshoot SQL Server connection issues. Sometimes you need to test the connection in more ways:
       a) try a different login authentication method
       b) try to log in with a different protocol: TCP, NP, Shared Memory
       c) try to log in to SQL Server from a different location: locally or remotely
       d) try to log in to SQL Server with different tools: SSMS, sqlcmd, ODBC client tools, OLE DB client tools
       e) if using Windows authentication, check whether it uses NTLM or Kerberos http://blogs.msdn.com/b/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx

       f) try different server names: hostname, 127.0.0.1, localhost, IP address, "."
       g) use some helpful tools, like ping, telnet, netstat -a, and the hosts file
      
For some special cases, you need to narrow down the problem with the methods mentioned above, then work with a Windows or network engineer to troubleshoot it.

Normally it is not difficult to solve a connection issue, but if it cannot be solved within a short time, it can become a very tough one!

Saturday, April 7, 2012

Why did shrinking a SQL Server data file fail?

Today I needed to shrink a big data file. I know shrinking a file is not a best practice and it causes index fragmentation, but sometimes, after you clean up the database by deleting old data, you have to shrink the file. It should be fine as long as you rebuild the indexes after the shrink (see the sketch below).
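After the shrink finishes, a rebuild like the following removes the fragmentation it introduced; a minimal sketch (the table name is only an illustration):

ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
GO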

DBCC SHRINKFILE(1, 230000)
GO

The command ran for about 2 hours, then gave me an error message:
Msg 3140, Sev 16:
Could not adjust the space allocation for file 'xxxx'

What happened? This error usually means that while you were shrinking the file, another process was moving data in that file at the same time. But I remembered I had disabled the full backup job and the diff backup job before starting the shrink, so I checked whether any other job had been running during the shrink.

Suddenly I remembered that log shipping is set up on this database; that had to be the problem. I checked the log shipping share folder, and there were many log backup files generated during my shrink. I think that was the cause.

I checked the file size; it hadn't changed, still the same size as before the shrink. Does that mean I need to spend another 2 hours shrinking the file again? Fortunately, no.

I ran the shrink command again with TRUNCATEONLY. TRUNCATEONLY doesn't move data; it only releases the unused space from the end of the file.
DBCC SHRINKFILE (1, TRUNCATEONLY);

It returned very quickly, and the file size got smaller :)


Friday, April 6, 2012

Clean SQL Server Cache

           I live alone. Usually I clean my small apartment every weekend: wipe the table and furniture with a cloth, clean the carpet with a vacuum cleaner, wash the clothes in the machine. The clean environment makes me feel comfortable and gives me a good start to the new week.

The SQL Server memory cache is just like an apartment (or house?). Before we start testing, we'd better clean the memory cache first. Just as we use tools to clean the house, we use tools to clean the cache as well.

1. DBCC FREESYSTEMCACHE
Books Online: manually removes unused entries from all caches or from a specified Resource Governor pool cache.
It takes 2 parameters; the format looks like:
DBCC FREESYSTEMCACHE ('ALL','default');

This is the only sample in Books Online, and there is not much more description of the parameters, so I searched around; here are some findings:
  • Clean all caches
         DBCC FREESYSTEMCACHE ('ALL')

        Sometimes you cannot shrink the tempdb log file and get the error below:
"DBCC SHRINKFILE: Page X:xxxxxxx could not be moved because it is a work table"

Try this command first, but note that it will clear all caches and can make your system slower for a period of time.
http://blogs.technet.com/technet_blog_images/b/sql_server_sizing_ha_and_performance_hints/archive/2011/03/03/shrink-tempdb-transaction-log-fails-after-overflow.aspx
  • Clean cache for a specific database:
         DBCC freesystemcache ('tempdb');
  • Clean adhoc queries from cache
         DBCC freesystemcache ('sql plans');
 
          You can use the query below to check the status of cached plans:

select objtype,
count(*) as number_of_plans,
sum(cast(size_in_bytes as bigint))/1024/1024 as size_in_MBs,
avg(usecounts) as avg_use_count
from sys.dm_exec_cached_plans
group by objtype

Sometimes a large number of ad hoc query plans in the cache will cause performance issues:
http://sqlblog.com/blogs/lara_rubbelke/archive/2008/04/18/memory-pressure-on-64-bit-sql-server-2005.aspx
Cleaning the SQL plan cache is one of the remedies.
  • Clear all temporary tables and table variables
        DBCC freesystemcache ('Temporary Tables & Table Variables');
  • Clean TokenAndPermUserStore
        DBCC FREESYSTEMCACHE ('TokenAndPermUserStore')
        there is a KB article describing it:
        http://support.microsoft.com/kb/927396
http://blogs.msdn.com/b/psssql/archive/2008/06/16/query-performance-issues-associated-with-a-large-sized-security-cache.aspx
  • Other cache objects
You can use the query below to list all memory clerk names in the system, then use DBCC FREESYSTEMCACHE to clean a specific one:
select name from sys.dm_os_memory_clerks group by name

2. DBCC DROPCLEANBUFFERS
Removes all clean buffers from the buffer pool.
Please remember it only removes "clean" buffers from the buffer pool; for what a "clean" buffer is, please refer to http://blogs.msdn.com/b/psssql/archive/2009/03/17/sql-server-what-is-a-cold-dirty-or-clean-buffer.aspx

So it is better to run CHECKPOINT before running DBCC DROPCLEANBUFFERS; the checkpoint writes all dirty pages back to disk, so you can release more buffer pool space (see the sketch below).
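A minimal sketch of the combination:

CHECKPOINT;              -- write dirty pages of the current database to disk
GO
DBCC DROPCLEANBUFFERS;   -- then drop the clean buffers from the buffer pool
GO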

3. DBCC FREEPROCCACHE
Removes all elements from the plan cache, removes a specific plan from the plan cache by specifying a plan handle or SQL handle, or removes all cache entries associated with a specified resource pool

I think it is similar to DBCC FREESYSTEMCACHE, but it only cleans the plan cache, and it provides parameters that let you control the cleanup at a finer grain: you can specify a plan handle or a resource pool name.
Also, this command has less impact than DBCC FREESYSTEMCACHE; MVP Glenn Berry mentioned that the impact of FREEPROCCACHE is "pretty minor", and it is useful in some scenarios:
http://www.sqlservercentral.com/blogs/glennberry/2009/12/28/fun-with-dbcc-freeproccache/
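A sketch of the finer-grained forms (the pool name 'default' exists on every instance; a specific plan_handle value would come from sys.dm_exec_cached_plans):

-- list cached plans and their handles
SELECT plan_handle, objtype, usecounts
FROM sys.dm_exec_cached_plans;

-- clear only the plans that belong to the 'default' resource pool
DBCC FREEPROCCACHE ('default');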

4. DBCC FLUSHPROCINDB (@intDBID);
Flushes the procedure cache for one database only (see the sketch below).
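A sketch, using DB_ID() to look up the database id (the database name is only an illustration; note that FLUSHPROCINDB is an undocumented command):

DECLARE @intDBID int;
SET @intDBID = DB_ID(N'MyTestDB');   -- id of the database whose plans you want to flush
DBCC FLUSHPROCINDB (@intDBID);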


5. DBCC FREESESSIONCACHE
Flushes the distributed query connection cache used by distributed queries against an instance of Microsoft SQL Server.

So if you want to clean the cache completely, you can try Rajesh Chandras's script:
DBCC FREESYSTEMCACHE ('ALL')
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
DBCC FLUSHPROCINDB (@intDBID)   -- @intDBID = id of the target database, see the sketch above
CHECKPOINT
DBCC DROPCLEANBUFFERS

http://rschandrastechblog.blogspot.com/2011/06/how-to-clear-sql-server-cache.html




Thursday, April 5, 2012

SQL Server 2012 license

Not sure whether it is good news or bad news: SQL Server 2012 licensing shifts from processor-based to core-based... here are the changes.

1. There is no Datacenter Edition anymore; it is merged into Enterprise Edition. Furthermore, the Workgroup and Small Business Editions are also being retired.

Currently there are only 3 main editions: Enterprise, Business Intelligence, and Standard.

Developer, Express, and Compact editions are still available, but you will have fewer choices, and you might need to pay more money :)


2. SQL Server 2012 will continue to offer two licensing options – one based on computing power, and one based on users or devices. In the computing power-based license model, however, licensing will shift from processors to cores.

Is that good news? Before SQL Server 2012, we were encouraged to buy processors with as many cores as possible, because the license fee was the same: you got more computing power without paying more money. Are you missing SQL Server 2008 already? :)

3. To license a physical server, you must license all the cores in the server, with a minimum of 4 core licenses required for each physical processor in the server. Core licenses will be priced at ¼ the cost of a SQL Server 2008 R2 (EE/SE) processor license.

That means if you buy hardware, remember to buy processors with at least 4 cores. It sounds uncomfortable; however, it is hard to find an enterprise-level processor with fewer than 4 cores, so I think it doesn't matter much.

4. The SQL Server 2012 CAL price will increase by about 27%

no comments.........

5. For virtual environments, there are still 2 licensing options: per core and Server+CAL.
      a) To license a VM with core licenses, purchase a core license for each virtual core (virtual thread) allocated to the virtual machine (minimum of 4 core licenses per VM).

So create the VM with at least 4 virtual cores, or else you lose money.

     b) If you have high-density virtualization on a physical server, you can fully license the server (or server farm) with Enterprise Edition core licenses and Software Assurance based on the total number of physical cores on the servers. SA then lets you run an unlimited number of virtual machines to handle dynamic workloads and fully utilize the hardware's computing power.

 Although SQL Server 2012 looks a little(?) more expensive than the previous version, considering the new features and higher performance, I think it is worth upgrading :)

http://download.microsoft.com/download/4/F/7/4F74E127-827E-420D-971F-53CECB6778BD/SQL_Server_2012_Licensing_Datasheet_and_FAQ_Mar2012.docx

Sunday, April 1, 2012

Install SQL Server 2012 Books Online on a local disk

After installing SQL Server 2012 on my laptop, I found that only the online documentation was available; no local Books Online was installed... I thought I might not have selected local help during installation, but after some research, it turns out this is new behavior in SQL Server 2012... I don't know what the SQL Server development team was thinking, but I still like the local Books Online: it is faster and does not depend on the internet.

If you want local Books Online just like SQL Server 2008, you can configure it with the steps below.

1. Download the local documentation package from http://www.microsoft.com/download/en/details.aspx?id=347 and save SQLServer2012Documentation.exe to a local disk
2. Run SQLServer2012Documentation.exe and unzip the documents to a temp folder, let's say c:\temp
3. Open "Manage Help Settings"
4. Select "Install Component from Disk"

5. Select the temp folder where you unzipped the documents in step 2

6. Click Add next to the documentation you want to install, then click "Update"
7. Return to the first page and click "Choose online or local help"


8. Click "I want to use local help"



Now enjoy the fast local help documentation!

I am still wondering why Microsoft made this change. Before SQL Server 2012, the help documentation was installed with the product, and even if I wanted a copy on the client side it was very easy: just download the BOL package and install it directly without any configuration. Now everything has changed, and the new help delivery method makes the product harder to use... customers need to spend more effort on configuration.