Saturday, March 31, 2012

Monitor SQL Server Backup job duration

If you run the BACKUP DATABASE T-SQL command, there is a STATS parameter which reports the percentage complete. However, when you run a SQL Agent backup job created by a maintenance plan, it is difficult to monitor the progress of the backup.
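For comparison, this is what the manual backup with STATS looks like (a minimal sketch; the database name and file path are just placeholders for your own):

```sql
-- Manual backup: STATS prints a progress message to the Messages tab
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backup\MyDatabase.bak'
WITH STATS = 10;  -- report every 10 percent complete
```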

For the Agent job case, you can use the query below:
==========================================================
SELECT r.command,
       s.text,
       r.start_time,
       r.percent_complete,
       CAST(DATEDIFF(s, r.start_time, GETDATE()) / 3600 AS VARCHAR) + ' hour(s), ' +
       CAST(DATEDIFF(s, r.start_time, GETDATE()) % 3600 / 60 AS VARCHAR) + ' min, ' +
       CAST(DATEDIFF(s, r.start_time, GETDATE()) % 60 AS VARCHAR) + ' sec' AS Running_Time,
       CAST(r.estimated_completion_time / 3600000 AS VARCHAR) + ' hour(s), ' +
       CAST(r.estimated_completion_time % 3600000 / 60000 AS VARCHAR) + ' min, ' +
       CAST(r.estimated_completion_time % 60000 / 1000 AS VARCHAR) + ' sec' AS Est_time_to_go,
       DATEADD(second, r.estimated_completion_time / 1000, GETDATE()) AS Est_completion_time
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) s
WHERE r.command IN
('RESTORE DATABASE', 'BACKUP DATABASE', 'RESTORE LOG', 'BACKUP LOG');
==========================================================

In fact, this query can estimate the duration of other commands as well; just replace the values in the IN condition. Here is the list of supported commands from BOL:

  • ALTER INDEX REORGANIZE

  • AUTO_SHRINK option with ALTER DATABASE

  • BACKUP DATABASE

  • CREATE INDEX

  • DBCC CHECKDB

  • DBCC CHECKFILEGROUP

  • DBCC CHECKTABLE

  • DBCC INDEXDEFRAG

  • DBCC SHRINKDATABASE

  • DBCC SHRINKFILE

  • KILL (Transact-SQL)

  • RESTORE DATABASE

  • UPDATE STATISTICS


  • The query estimates the duration based on the data pages it reads, so for long-running commands it is more accurate. Two notes:
    1. The estimate is based on the data pages that will be gone through; sometimes it is not accurate, especially for a small database.

    2. It cannot be used for SELECT statements or DML queries.
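A simple variation, if you want to catch any long-running command that reports progress instead of listing commands explicitly (a sketch using the same DMV as the query above):

```sql
-- Show every request that currently reports a progress percentage
SELECT r.session_id,
       r.command,
       r.percent_complete,
       DATEADD(second, r.estimated_completion_time / 1000, GETDATE()) AS Est_completion_time
FROM sys.dm_exec_requests r
WHERE r.percent_complete > 0;
```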

    Friday, March 30, 2012

    Select : Lock or No Lock?

    Here is a post I happened to find today: why is a SELECT command not blocked by an UPDATE command?


    At first, I thought it might be a transaction isolation level issue, but when I dug into the problem, something interesting popped up.

    Here is the script to repro the issue.
    First, create the testing environment:
    USE master;
    GO
    IF @@TRANCOUNT > 0
          ROLLBACK TRAN;
    GO
    -- =======================================
    -- Create test database
    IF DB_ID(N'db_xlock_test') IS NOT NULL
    BEGIN;
          ALTER DATABASE db_xlock_test
          SET SINGLE_USER
          WITH
                ROLLBACK AFTER 0;
               
          DROP DATABASE db_xlock_test;
    END;
    CREATE DATABASE db_xlock_test;
    -- c. disable READ_COMMITTED_SNAPSHOT
    ALTER DATABASE db_xlock_test
    SET READ_COMMITTED_SNAPSHOT OFF;
    GO
    -- =======================================
    -- create table
    USE db_xlock_test;
    GO
    CREATE TABLE dbo.tb(
          id int IDENTITY
                PRIMARY KEY,
          name sysname
    );
    INSERT dbo.tb
    SELECT TOP(50000)
          O1.name + N'.' + O2.name + N'.' + O3.name
    FROM sys.objects O1 WITH(NOLOCK),
          sys.objects O2 WITH(NOLOCK),
          sys.objects O3 WITH(NOLOCK);
    GO

     Then, create a session to update the table
    -- =======================================
    -- session 1:create a transaction and update the table
    BEGIN TRAN
          --update table
          UPDATE dbo.tb SET name = name
          WHERE id <= 2;
         
          SELECT
                spid = @@SPID,
                tran_count = @@TRANCOUNT,
                database_name = DB_NAME(),
                object_id = OBJECT_ID(N'dbo.tb', N'Table');
               
          -- display lock info
          EXEC sp_lock @@SPID;

    Finally, create another session to select data from the table. Theoretically, the SELECT will be blocked and the session will be in a waiting state.

    -- =======================================
    -- session 2: select data
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SELECT * FROM dbo.tb
    WHERE id <= 2;
    Run session 2, and what happens? We can still select the data in session 2 without any blocking. The current transaction isolation level is "Read Committed", so why can we still read the data?

    From the Profiler trace we can see that session 2 only acquires IS locks on the page and the object, which are not blocked by session 1's UPDATE command.
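Besides Profiler, you can also inspect the locks from a third session with sys.dm_tran_locks (a sketch; run it while session 1 still holds its open transaction):

```sql
-- Shows which locks each session holds (GRANT) or waits for (WAIT)
-- in the test database
SELECT request_session_id,
       resource_type,      -- OBJECT / PAGE / KEY ...
       request_mode,       -- IS / IX / S / X ...
       request_status      -- GRANT or WAIT
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID(N'db_xlock_test')
ORDER BY request_session_id, resource_type;
```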





    Then I reviewed the UPDATE command; the special part is the SET clause:


    UPDATE dbo.tb SET name = name WHERE id <= 2;
    so I changed the command to
    UPDATE dbo.tb SET name = 'a' WHERE id <= 2;
    This time, the SELECT command was blocked! Here is the Profiler trace from when session 2 was being blocked.
    Then I rolled back the session 1 transaction and captured a new trace.
    So generally speaking:
    1. With "SET name = name", the SELECT command doesn't request S locks on the keys; it only needs an IS lock on the page.
    2. With "SET name = 'a'", the SELECT command requests S locks on the key (or page), so it is blocked.

    But why? In the first case ("name = name") the data is not actually changed, while in the second case ("name = 'a'") the data is changed.

    I did more testing:
    I rolled back session 1 (SET name = 'a') and ran session 1 with "SET name = name" again, but this time session 2 was still blocked, even though we used "SET name = name".
    So I think that if the data page has not been touched (changed), SQL Server uses a smart way to read the data without S locks; but once the data page has been changed (even if we roll back the change), SQL Server takes S locks when selecting data, in order to avoid reading dirty data. That makes sense.
    To prove this, after rolling back session 1 (SET name = 'a'), I ran CHECKPOINT to write the dirty pages to disk, then repeated the test with "SET name = name". This time the SELECT query returned quickly with no blocking.
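The checkpoint test above can be sketched like this (run the statements in the order shown, in the sessions indicated):

```sql
-- session 1: make a real change and roll it back,
-- leaving dirty pages in memory
BEGIN TRAN;
UPDATE dbo.tb SET name = 'a' WHERE id <= 2;
ROLLBACK TRAN;

-- flush the dirty pages to disk
CHECKPOINT;

-- session 1 again: re-run the no-op update and hold the transaction open
BEGIN TRAN;
UPDATE dbo.tb SET name = name WHERE id <= 2;

-- session 2: now this SELECT returns quickly, without blocking
SELECT * FROM dbo.tb WHERE id <= 2;
```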

    Later, I searched the Internet and found that Paul White had covered this issue before in a wonderful article:

    The Locking Optimization

    SQL Server contains an optimization that allows it to avoid taking row-level shared (S) locks in some circumstances. Specifically, it can skip shared locks if there is no risk of reading uncommitted data without them.

    But sometimes the optimization is not smart. When I changed the SELECT query to
    SELECT * FROM dbo.tb WHERE id <= 600

    then blocking occurs no matter which UPDATE command is used.