January 2013 | SansSQL

Sunday, January 6, 2013

SQL Server IO - Continued...

Now that we understand what a "Transaction" is and the "ACID properties" of a transaction... let's get moving on to the next topic... Write-Ahead Logging (WAL) :O

What!?... How is WAL related to the ACID properties or to transactions? Yes, they are related.

WAL is a technique that helps adhere to two of the four ACID properties: "Atomicity" and "Durability". Now that we understand what atomicity and durability are... the next step is to understand how WAL works, and also to peep into the advantages we get from it.

Let me just put it in the words of SQL Server Books Online, which is much easier to understand:
Write-ahead log (WAL) guarantees that no data modifications are written to disk before the associated log record is written to disk. This maintains the ACID properties for a transaction.

To understand how the write-ahead log works, it is important for us to know how modified data is written to disk. SQL Server maintains a buffer cache into which it reads data pages when data must be retrieved. Data modifications are not made directly to disk, but are made to the copy of the page in the buffer cache. The modification is not written to disk until a checkpoint occurs in the database, or the modification needs to be written to disk so the buffer can be used to hold a new page. Writing a modified data page from the buffer cache to disk is called flushing the page. A page modified in the cache, but not yet written to disk, is called a dirty page.

At the time when modification is made to a page in the buffer, a log record is built in the log cache that records the modification. This log record must be written to disk before the associated dirty page is flushed from the buffer cache to disk. If the dirty page is flushed before the log record is written, the dirty page creates a modification on the disk that cannot be rolled back if the server fails before the log record is written to disk. SQL Server has logic that prevents a dirty page from being flushed before the associated log record is written. Log records are written to disk when the transactions are committed.
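Just to make this a bit more tangible, here is a small sketch (not from Books Online) that counts how many dirty pages are currently sitting in the buffer cache, using the sys.dm_os_buffer_descriptors DMV; it assumes you have VIEW SERVER STATE permission.

-- Count pages that have been modified in memory but not yet flushed to disk (dirty pages)
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) AS dirty_page_count
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GROUP BY DB_NAME(database_id)
ORDER BY dirty_page_count DESC;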

I hope the above explanation is clear...

And this is it for now...

Saturday, January 5, 2013

SQL Server IO

I had the opportunity to look back into the basics of SQL Server IO. To start off... the first thing was to understand "What is a Transaction" and then the "Transaction properties".

What is a Transaction?
There can be many ways of defining it... however, the simplest could be as follows:
A transaction is a logical unit of work in which all the steps must be performed or none.
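A minimal sketch of that "all or nothing" idea, using a made-up bank-transfer example (the Accounts table and its columns are purely hypothetical):

BEGIN TRY
    BEGIN TRANSACTION;

    -- Both steps must succeed together or not at all
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1; -- debit
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2; -- credit

    COMMIT TRANSACTION; -- make both changes permanent
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION; -- any failure undoes both changes
END CATCH;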

Once we understand "what a transaction is"... the next step is to understand the properties exhibited by a transaction.
There are FOUR important properties of a transaction:
1. Atomicity
2. Consistency
3. Isolation and then
4. Durability

ATOMICITY - Any... for that matter, any database modification must follow an "all or nothing" rule... which means: if one part of the transaction fails, the complete transaction fails. Each transaction is said to be "ATOMIC". It is critical that SQL Server, and for that matter any DBMS, maintains the atomic nature of transactions in spite of any application, operating system, or hardware failure.

CONSISTENCY - States that only valid data will be written to the database. In simple words, if the database was consistent before the execution of the transaction, then it should remain consistent after the complete execution of that transaction.

If, for some reason, a transaction is executed that violates the database's consistency rules, the entire transaction will be rolled back and the database will be restored to a state consistent with those rules.

On the other hand, if a transaction successfully executes, it will take the database from one state that is consistent with the rules to another state that is also consistent with the rules.
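As an illustration (a hypothetical rule, not from the original post), a CHECK constraint is one way such a consistency rule gets enforced; a statement that violates it fails, so no invalid data is written and the database stays in its previous consistent state:

-- Hypothetical consistency rule: an account balance can never go negative
CREATE TABLE dbo.Accounts
(
    AccountID INT PRIMARY KEY,
    Balance   MONEY NOT NULL,
    CONSTRAINT CK_Accounts_Balance CHECK (Balance >= 0)
);

-- This insert violates the rule, so the statement fails and no invalid data is written
INSERT INTO dbo.Accounts (AccountID, Balance) VALUES (1, -50);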

ISOLATION - First, let me put it in straight words: a transaction should not be interfered with by any other transaction executing concurrently.

Now that means multiple transactions occurring at the same time should not impact each other's execution. For example, if User A issues a transaction against a database at the same time that User B issues a different transaction, both transactions should operate on the database in an isolated manner. The database should either perform User A's entire transaction before executing User B's, or vice-versa.

This helps in preventing User A's transaction from reading intermediate data produced by User B's transaction that will not eventually be committed to the database.

We always get into confusion with regard to the way we describe isolation. We should always remember that the isolation property does not ensure which transaction will execute first; it only ensures that transactions will not interfere with each other.
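For what it's worth, SQL Server lets each session choose how strictly it is isolated from others; a small sketch (reusing the hypothetical Accounts table from above):

-- READ COMMITTED is the default; SERIALIZABLE is the strictest level
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;
    -- Under SERIALIZABLE, other sessions cannot modify this row (or insert rows
    -- matching the predicate) until this transaction commits or rolls back
    SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;
COMMIT TRANSACTION;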

DURABILITY - means that any changes made by a transaction should be permanently committed in the database. This ensures that any transaction committed to the database will not be lost in spite of any subsequent software or hardware failures.

To ensure durability, we make use of database backups and transaction logs that facilitate the restoration of committed transactions.
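For example (the paths here are placeholders), a full backup plus regular transaction log backups are what make it possible to recover committed work after a failure:

-- Full backup of the database
BACKUP DATABASE SansSQL TO DISK = 'D:\Backup\SansSQL_Full.bak'
-- Transaction log backup, capturing committed transactions since the last log backup
BACKUP LOG SansSQL TO DISK = 'D:\Backup\SansSQL_Log.trn'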

I hope the above helps us to understand ACID properties in a simple way! 

Password Protect a backup file

Password protection of database backups helps a lot in protecting a backup from misuse.
One such case is when you are physically sending a backup of a database that contains very sensitive and critical data, on disks, to a different office or data center.

To password protect a backup file, you have to include the WITH PASSWORD option when backing up the database.
-- Full Backup with password
BACKUP DATABASE SansSQL TO DISK = 'D:\Backup\SansSQL_FullBackup.bak' 
WITH PASSWORD = 'Password123'
Now, once the backup is taken with a password, the same password has to be provided while restoring. Note that the password only protects against unauthorized restores; it does not encrypt the backup file.
-- Restore the full backup, providing the password
RESTORE DATABASE SansSQL FROM DISK = 'D:\Backup\SansSQL_FullBackup.bak' 
WITH RECOVERY, PASSWORD='Password123'
Question 1: What happens if you give a wrong password or try to restore without giving the password?
Answer: The restoration will fail with the following error
Msg 3279, Level 16, State 2, Line 1 
Access is denied due to a password failure 
Msg 3013, Level 16, State 1, Line 1 
RESTORE DATABASE is terminating abnormally.

Question 2: How will you come to know if a backup is password protected?
Answer: When you try to restore, it will give an error saying "Access is denied due to a password failure"
And when you try to execute RESTORE HEADERONLY, the file name will be shown as "*** PASSWORD PROTECTED ***"
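For reference, the header check mentioned above is just the following (same placeholder path as before):

-- Inspect the backup header; for a password-protected backup,
-- the output flags the backup as *** PASSWORD PROTECTED ***
RESTORE HEADERONLY FROM DISK = 'D:\Backup\SansSQL_FullBackup.bak'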

Friday, January 4, 2013

Encrypted DTS packages... continued...

Yesterday I posted a way of finding the DTS package name from a job step whose command is encrypted: "Encrypted DTS packages - Find DTS Package name". So, now that we know the proper DTS calls used in the scheduled jobs... we may want to replace the encrypted DTS names with correct, readable names... which helps us understand things better.

To perform the same, the following stored procedures come in handy; one can refer to Books Online to know more about them.

sp_update_jobstep
sp_add_jobstep
sp_delete_jobstep
sp_help_jobstep
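
Before changing anything, one might first look at what a step currently contains using sp_help_jobstep ('Job Name' is a placeholder here):

USE msdb
GO
-- Shows the existing command text, subsystem and other details of step 1
EXEC dbo.sp_help_jobstep
@job_name = 'Job Name',
@step_id = 1
GO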

I made use of one of the above SP's and created scripts to have the DTS jobs updated:

One such example is as shown below:
USE msdb
GO
EXEC dbo.sp_update_jobstep
@job_name = 'Job Name',
@step_id = 1,
@command = 'DTSRun /S "ServerName" /N "DTS package name" /E ' -- /S = server name, /N = package name, /E = trusted (Windows) connection
GO
This will do the trick!

Thursday, January 3, 2013

Encrypted DTS packages - Find DTS Package name

Happy New Year 2013 to all of you... I shall now start off with some techie stuff.

Most of us have been working on DTS/SSIS packages. Even though we hear very little about DTS packages nowadays, sometimes it becomes inevitable to deal with some of the legacy left behind by older versions.
Today's task is to identify which 'encrypted' DTS package is being referenced by a scheduled job.

Before I explain the method of doing the above task, let me tell you why the 'DTS package calls' are 'encrypted' in the first place. I am sure many of us are already aware of the reason... but still... here it is...
Most of us are used to following the easy method: right-click the DTS package (SQL 2000) and schedule a job. The created SQL job will have an encrypted name for the DTS package in the SQL job step.

For example: DTSRun /~Z0x99810D5EE6B8FC6BFEB92F1D9EB5849.....
The actual value would be much longer! I have cut it short to keep it simple ;)

So, now the task: to identify which package is being called by a scheduled job step.

For each job which makes an encrypted call to DTS from a SQL job step, follow these steps:
  1. To the end of Command (DTSRun) line, add /!X /!C
    For example:
    DTSRun /~Z0x99810D5EE6B8FC6BFEB92F1D9EB5 /!X /!C
    /!X means 'do not execute' and /!C means 'copy results onto Windows Clipboard'
  2. Copy the complete DTSRun command, including /!X /!C
  3. Run the command from the command line. One thing I noticed: the command can be run either from the target server where the job is, or from your local machine if it is on the same network as the target server. I hope that makes sense!
  4. Open Notepad and paste the clipboard contents. So, there it is... the name of the package!!
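Incidentally, to find which jobs contain such encrypted calls in the first place, one could query msdb; a sketch (requires read access to msdb):

USE msdb
GO
-- Job steps whose command starts with an encrypted DTSRun call (/~Z...)
SELECT j.name AS job_name,
       s.step_id,
       s.command
FROM dbo.sysjobs j
JOIN dbo.sysjobsteps s ON s.job_id = j.job_id
WHERE s.command LIKE 'DTSRun /~Z%'
GO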

Wednesday, January 2, 2013

SQL Agent Tokens

Consider a situation wherein one needs SQL jobs to be more independent of the machine/instance and/or the main job itself. In such cases we can utilize a feature of SQL Server Agent called "TOKENS".

When a job step is written using tokens, it gives the same flexibility that "variables" provide in software programs. I hope that makes sense! Many of us already understand the meaning and usage of variables... so I will not explain them here.

When a token is used in a job step script, SQL Server Agent replaces the token at run time, before the job step is executed by the Transact-SQL subsystem.

Note: I have tried to explain the usage of tokens by making use of the following example.

For example: Consider a situation wherein we are saving the output logs of a Job from multiple servers (Development, Testing, and Production) to one central location. In such cases, we have two options of specifying the output path.

1. Manually setting the path for job on each of the Servers.

2. Or making use of "TOKENS" which makes our life easier.

Let’s make use of the tokens.

This centralization of output log location allows the use of common scripts for all environments.

The server name can be replaced by using the SQL Server Agent tokens [MACH] and [DATE], and the output can be configured to a common location as shown below.

\abcd\Prod\SQLServersJobs\LOGS\

 The server name differentiates the environment context of the log.

Note: The token templates are different for different SQL Server versions.

<Template>

SQL2000

[MACH]\[INST]_<jobname>_Step1_[DATE].log  -- Where <jobname> is the name of the job.

Example: SQL2000

\abcd\Prod\SQLServersJobs\LOGS\[MACH]_[INST]\MyJob_Step1_[DATE].log

The above would be converted to the following (note that the default instance comes back as MSSQLSERVER):

 \abcd\Prod\SQLServersJobs\LOGS\A2MDEV101_MSSQLSERVER\MyJob_Step1_20091005.log

  SQL2005

$(ESCAPE_NONE(MACH))\$(ESCAPE_NONE(INST))_<jobname>_Step1_$(ESCAPE_NONE(DATE)).log --Where <jobname> is the name of the job.

Example: SQL2005 (SP1 & higher) & SQL2008

\abcd\Prod\SQLServersJobs\LOGS\$(ESCAPE_NONE(MACH))_$(ESCAPE_NONE(INST))\MyJob_Step1_$(ESCAPE_NONE(DATE)).log

The above would be converted to the following:

\abcd\Prod\SQLServersJobs\LOGS\A2MPRD151_LOGGING\MyJob_Step1_20091005.log
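To tie this together, here is a sketch of creating a job step whose output file path uses the SQL 2005+ token syntax; the job name, command, and procedure name are placeholders, and the LOGS path is the one from the example above:

USE msdb
GO
EXEC dbo.sp_add_jobstep
@job_name = 'MyJob',
@step_name = 'Step1',
@subsystem = 'TSQL',
@command = 'EXEC dbo.MyProcedure', -- placeholder command
@output_file_name = '\abcd\Prod\SQLServersJobs\LOGS\$(ESCAPE_NONE(MACH))_$(ESCAPE_NONE(INST))\MyJob_Step1_$(ESCAPE_NONE(DATE)).log'
GO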

And that is all for now.

For more on tokens one can refer to the following web link:

http://msdn.microsoft.com/en-us/library/ms175575.aspx
