Category Archives: Uncategorized

Free Database Lifecycle Management Dashboard from Redgate available

DLM Dashboard is a new free tool from Redgate that ensures you're always the first to know when schema changes are made. It was previously named SQL Lighthouse. Its features include:

  • Receive email alerts when your database schema changes
  • See exactly what changed, who made the changes and when
  • Browse through a detailed history of schema changes
  • Avoid problems by ensuring your databases are in the correct state

You can download DLM Dashboard from the Redgate website (http://www.red-gate.com)


Friend of Redgate 2015

So glad I have been selected as a Friend of Redgate 2015!


This year is my first – and hopefully not my last – year as a Friend of Redgate. For those who are new to the SQL Server world, or have been living under a rock for quite some time: Redgate (http://www.red-gate.com) is a leading maker of industrial-strength tools for Microsoft SQL Server and related technologies. As a friend, I will be in the know about a lot of new developments. I will also probably learn new things faster because of the great community I will be part of.

So, I hope I can share much more information through this blog, and at a faster pace than up to now. Stay tuned!

Setting up an SSISDB Catalog

With the introduction of SQL Server 2012, project deployments became available. The project deployment model contrasts with the package deployment model, which until then was the only way SSIS packages could be deployed to SSIS servers. Under the package deployment model, SSIS solutions could be configured through configuration files, command shell environment variables, and user-defined variables inside the packages.

The project deployment model introduced project parameters and package parameters, as well as environment variable definitions that reside in the SSISDB catalog itself, as opposed to OS environment variables. Also, parameters defined in a project or package can easily be mapped to these new SSISDB environment values. Note that more than one environment can be defined, such as Development, Acceptance, and Production.

The project deployment model is now the recommended way of creating SSIS solutions, and it is also the default deployment model when a new Integration Services solution is created in Visual Studio 2012 and above.
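As a sketch of how such an environment can be created and wired to a deployed project, the SSISDB catalog exposes stored procedures for this. The folder, project, and variable names below are placeholders for illustration:

```sql
-- Create an environment and a variable in it (names are hypothetical)
EXEC SSISDB.catalog.create_environment
     @folder_name = N'Test', @environment_name = N'Development'

EXEC SSISDB.catalog.create_environment_variable
     @folder_name = N'Test', @environment_name = N'Development',
     @variable_name = N'ProcessedFilePath', @data_type = N'String',
     @sensitive = 0, @value = N'D:\Processed', @description = N''

-- Let the project reference the environment ('R' = relative reference)
DECLARE @reference_id bigint
EXEC SSISDB.catalog.create_environment_reference
     @folder_name = N'Test', @project_name = N'Integration Services Project1',
     @environment_name = N'Development', @reference_type = 'R',
     @reference_id = @reference_id OUTPUT

-- Map a project parameter to the environment variable ('R' = referenced value)
EXEC SSISDB.catalog.set_object_parameter_value
     @object_type = 20, @folder_name = N'Test',
     @project_name = N'Integration Services Project1',
     @parameter_name = N'ProcessedFilePath',
     @parameter_value = N'ProcessedFilePath', @value_type = 'R'
```

At execution time, the environment reference can then be passed along so the parameter picks up the environment's value.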

So, we want to use this new project deployment model and create an SSISDB catalog on our SSIS server. How do we do that? It's very simple, yet take care to make a backup of the master key and database encryption keys when you do. The following example shows what steps need to be taken to create a secure SSISDB catalog.

In SQL Server Management Studio, navigate to the ‘Integration Services Catalogs’ folder and right-click on it. Select ‘Create Catalog…’. The ‘Create Catalog’ dialog opens.

Figure 1: Creating an SSISDB catalog.

Figure 2: the 'Create Catalog' dialog box

After the password is entered, an SSISDB database master key is created on the server. This master key is used for encrypting the packages, and is protected by the Service Master Key. The Service Master Key is created during SQL Server setup, and uses Windows Data Protection API for its own security.
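Since the SSISDB master key is protected by the Service Master Key, it is prudent to keep a backup of that key as well. A minimal sketch, where the file path and password are placeholders:

```sql
-- Back up the instance-level Service Master Key (path and password are placeholders)
BACKUP SERVICE MASTER KEY TO FILE = 'C:\ServiceMasterKey'
ENCRYPTION BY PASSWORD = 'P@ssw0rd'
```

Store this file and password somewhere safe, away from the server itself.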

We now need to make an encrypted and password protected backup file of the master key by executing the following SQL query:

SQL
USE SSISDB
BACKUP MASTER KEY TO FILE = 'C:\SSISDBkey'
ENCRYPTION BY PASSWORD = 'P@ssw0rd'

If recovery is needed, load the master key again using the following SQL (assuming that the master key backup file is named 'SSISDBkey' and stored on C:\).

SQL
USE SSISDB
RESTORE MASTER KEY FROM FILE = 'C:\SSISDBkey'
DECRYPTION BY PASSWORD = 'P@ssw0rd'
ENCRYPTION BY PASSWORD = 'NewP@ssw0rd'
-- FORCE -- optionally add the FORCE clause
/*** NOTE: Use the FORCE option only if the master key is irretrievable or if decryption fails. ***/

Not only have you now created an SSISDB catalog, you have also created a regular SSISDB database. They are one and the same thing; the catalog simply provides SSIS-specific commands on top of it.

Figure 3: Creating an SSISDB catalog also creates an SSISDB database
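To check what has been deployed to, and run from, the catalog, you can query its views. A quick sketch, assuming the folder name 'Test' used elsewhere in this post:

```sql
-- List deployed projects in a folder (folder name 'Test' is an assumption)
SELECT f.name AS folder_name, p.name AS project_name,
       p.deployed_by_name, p.last_deployed_time
FROM SSISDB.catalog.projects AS p
JOIN SSISDB.catalog.folders AS f ON f.folder_id = p.folder_id
WHERE f.name = N'Test'

-- Recent package executions, newest first
SELECT execution_id, package_name, status, start_time, end_time
FROM SSISDB.catalog.executions
ORDER BY execution_id DESC
```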

Interestingly, when you apply SSISDB catalog commands such as 'Execute…', instead of pressing OK after the parameters are assigned, you can also generate a T-SQL script that will do the same thing.

Figure 4: Executing a package in the SSISDB catalog

Figure 5: Generating a T-SQL script instead of directly executing the package after parameters are assigned.

For instance, the following T-SQL was created for the sample package execution:

SQL
DECLARE @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution]
    @package_name=N'Package.dtsx', @execution_id=@execution_id OUTPUT,
    @folder_name=N'Test', @project_name=N'Integration Services Project1',
    @use32bitruntime=False, @reference_id=NULL
SELECT @execution_id

DECLARE @var0 sql_variant = N'D:\Processed'
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
    @execution_id, @object_type=20, @parameter_name=N'ProcessedFilePath', @parameter_value=@var0

DECLARE @var1 sql_variant = N'D:\Rejected'
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
    @execution_id, @object_type=20, @parameter_name=N'RejectedFilePath', @parameter_value=@var1

DECLARE @var2 sql_variant = N'D:\dummy.txt'
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
    @execution_id, @object_type=30, @parameter_name=N'FullFileName', @parameter_value=@var2

DECLARE @var3 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
    @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var3

EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO

Altering a Foreign Key Referenced Table

If you want to truncate or alter a table that is referenced by a foreign key, you are often not allowed to do so. I created a script that drops the constraints, waits for you to click an OK button, and finally adds the foreign key constraints again. It also copies the generated SQL script to the clipboard and dumps intermediate results.

When I shared my original code on the LINQPad forum (www.LINQPad.net), Joe Albahari kindly picked the query up and improved it by creating a Disposable class to perform the actions. Instead of pressing OK, the ALTER TABLE or TRUNCATE TABLE statements can be placed between the Drop and Add statements; this way the foreign key constraints are added again even in case of an error or process termination. Joe shared his version at share.linqpad.net/b7qwds.linq. I made some slight changes and share the code here as well. Enjoy!

#define TRACE

void Main()
{
    string tableName = "(your table name)";

    using (BounceForeignKeys (tableName))
        ExecuteCommand("SELECT '(Insert your ALTER TABLE or TRUNCATE TABLE command here)' AS Instruction");
}

public IDisposable BounceForeignKeys (string tableName)
{
    var keys = DropForeignKeyBuilder (tableName);
    return Disposable.Create (() => AddForeignKeyBuilder (keys));
}

public IList<KeyInfo> DropForeignKeyBuilder( string tableName )
{
    var sql = new StringBuilder();

    sql.AppendLine( @"DECLARE @error_line int" );
    sql.AppendLine( @"      , @error_number int" );
    sql.AppendLine( @"      , @error_message nvarchar(4000)" );
    sql.AppendLine( @"      , @error_procedure nvarchar(128)" );
    sql.AppendLine( @"      , @error_severity int" );
    sql.AppendLine( @"      , @error_state int" );
    sql.AppendLine();
    sql.AppendLine( @"BEGIN TRANSACTION" );
    sql.AppendLine();
    sql.AppendLine( @"BEGIN TRY" );

    // Find all foreign keys that reference the given table
    var keysReferenced = from fk in sys.foreign_keys
                         join rt in sys.tables on fk.referenced_object_id equals rt.object_id
                         join pt in sys.tables on fk.parent_object_id equals pt.object_id
                         where rt.name == tableName
                         select new KeyInfo { primary_object_id = pt.object_id,
                                              referenced_object_id = rt.object_id,
                                              PrimaryTableName = pt.name,
                                              ReferencedTableName = rt.name,
                                              fk_object_id = fk.object_id,
                                              FkName = fk.name,
                                              FkColumnNames = "",
                                              ReferencedTablePkColumnNames = "" };
    // keysReferenced.Dump( "My foreign keys" );

    // Materialize an empty list with the same element type
    var keysReferenced2 = ( from k in keysReferenced
                            where false
                            select k ).ToList();

    foreach ( var k in keysReferenced )
    {
        var foreignKeys = from fc in sys.columns
                          join fkc in sys.foreign_key_columns
                              on new { fc.object_id, fc.column_id } equals new { object_id = fkc.parent_object_id, column_id = fkc.parent_column_id }
                          where k.primary_object_id == fkc.parent_object_id && k.referenced_object_id == fkc.referenced_object_id && k.fk_object_id == fkc.constraint_object_id
                          select new { fc.name };

        // foreignKeys.Dump( "My foreign key column names" );

        string fkColumnNames = "";
        string sep = "";

        foreach ( var fk in foreignKeys )
        {
            fkColumnNames += sep + fk.name;
            sep = ", ";
        }

        var primaryKeys = from i in sys.indexes
                          join ic in sys.index_columns on new { i.object_id, i.index_id } equals new { ic.object_id, ic.index_id }
                          join pc in sys.columns on new { ic.object_id, ic.column_id } equals new { pc.object_id, pc.column_id }
                          where k.referenced_object_id == i.object_id && i.is_primary_key.Value
                          select new { pc.name };

        // primaryKeys.Dump( "Primary key column names of referenced tables" );

        string pkNames = "";
        sep = "";

        foreach ( var pk in primaryKeys )
        {
            pkNames += sep + pk.name;
            sep = ", ";
        }

        keysReferenced2.Add( new KeyInfo { primary_object_id = k.primary_object_id,
                                           referenced_object_id = k.referenced_object_id,
                                           PrimaryTableName = k.PrimaryTableName,
                                           ReferencedTableName = k.ReferencedTableName,
                                           fk_object_id = k.fk_object_id,
                                           FkName = k.FkName,
                                           FkColumnNames = fkColumnNames,
                                           ReferencedTablePkColumnNames = pkNames } );
    }

    keysReferenced2.DumpT("Foreign Keys");

    // Generate SQL
    foreach ( var k in keysReferenced2 )
    {
        sql.AppendFormat( @"  IF EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N'[dbo].[{0}]') AND parent_object_id = OBJECT_ID(N'[dbo].[{1}]')){2}" +
                          @"    ALTER TABLE [dbo].[{1}] DROP CONSTRAINT [{0}]{2}", k.FkName, k.PrimaryTableName, Environment.NewLine );
        sql.AppendLine();
    }

    sql.AppendLine( @"  COMMIT" );
    sql.AppendLine( @"  SELECT 'Drop foreign key constraints committed' AS [Message]" );
    sql.AppendLine( @"END TRY" );
    sql.AppendLine();
    sql.AppendLine( @"BEGIN CATCH" );
    sql.AppendLine( @"  SELECT @error_line = ERROR_LINE()" );
    sql.AppendLine( @"       , @error_number = ERROR_NUMBER()" );
    sql.AppendLine( @"       , @error_message = ERROR_MESSAGE()" );
    sql.AppendLine( @"       , @error_procedure = ERROR_PROCEDURE()" );
    sql.AppendLine( @"       , @error_severity = ERROR_SEVERITY()" );
    sql.AppendLine( @"       , @error_state = ERROR_STATE()" );
    sql.AppendLine();
    sql.AppendLine( @"  ROLLBACK" );
    sql.AppendLine();
    sql.AppendLine( @"  PRINT 'Error occurred, transaction rolled back'" );
    sql.AppendLine();
    sql.AppendLine( @"  SELECT @error_line AS ERROR_LINE" );
    sql.AppendLine( @"       , @error_number AS ERROR_NUMBER" );
    sql.AppendLine( @"       , @error_message AS ERROR_MESSAGE" );
    sql.AppendLine( @"       , @error_procedure AS ERROR_PROCEDURE" );
    sql.AppendLine( @"       , @error_severity AS ERROR_SEVERITY" );
    sql.AppendLine( @"       , @error_state AS ERROR_STATE" );
    sql.AppendLine( @"END CATCH" );
    sql.AppendLine( @"/* GO */" );

    sql.ToString().DumpT("-- DROP SQL");

    ExecuteCommand( sql.ToString() );

    return keysReferenced2;
}

public void AddForeignKeyBuilder( IList<KeyInfo> keysReferenced )
{
    var sql = new StringBuilder();

    sql.AppendLine( @"DECLARE @error_line int" );
    sql.AppendLine( @"      , @error_number int" );
    sql.AppendLine( @"      , @error_message nvarchar(4000)" );
    sql.AppendLine( @"      , @error_procedure nvarchar(128)" );
    sql.AppendLine( @"      , @error_severity int" );
    sql.AppendLine( @"      , @error_state int" );
    sql.AppendLine();
    sql.AppendLine( @"BEGIN TRANSACTION" );
    sql.AppendLine();
    sql.AppendLine( @"BEGIN TRY" );

    foreach ( var k in keysReferenced )
    {
        sql.AppendFormat( "  ALTER TABLE [dbo].[{0}] WITH CHECK ADD CONSTRAINT [{1}] FOREIGN KEY([{2}])\n" +
                          "    REFERENCES [dbo].[{3}] ([{4}]){5}",
                          new [] { k.PrimaryTableName, k.FkName, k.FkColumnNames, k.ReferencedTableName, k.ReferencedTablePkColumnNames, Environment.NewLine } );
        sql.AppendLine();
        sql.AppendFormat( @"  ALTER TABLE [dbo].[{0}] CHECK CONSTRAINT [{1}]{2}", k.PrimaryTableName, k.FkName, Environment.NewLine );
        sql.AppendLine();
    }

    sql.AppendLine( @"  COMMIT" );
    sql.AppendLine( @"  SELECT 'Add foreign key constraints committed' AS [Message]" );
    sql.AppendLine( @"END TRY" );
    sql.AppendLine();
    sql.AppendLine( @"BEGIN CATCH" );
    sql.AppendLine( @"  SELECT @error_line = ERROR_LINE()" );
    sql.AppendLine( @"       , @error_number = ERROR_NUMBER()" );
    sql.AppendLine( @"       , @error_message = ERROR_MESSAGE()" );
    sql.AppendLine( @"       , @error_procedure = ERROR_PROCEDURE()" );
    sql.AppendLine( @"       , @error_severity = ERROR_SEVERITY()" );
    sql.AppendLine( @"       , @error_state = ERROR_STATE()" );
    sql.AppendLine();
    sql.AppendLine( @"  ROLLBACK" );
    sql.AppendLine();
    sql.AppendLine( @"  PRINT 'Error occurred, transaction rolled back'" );
    sql.AppendLine();
    sql.AppendLine( @"  SELECT @error_line AS ERROR_LINE" );
    sql.AppendLine( @"       , @error_number AS ERROR_NUMBER" );
    sql.AppendLine( @"       , @error_message AS ERROR_MESSAGE" );
    sql.AppendLine( @"       , @error_procedure AS ERROR_PROCEDURE" );
    sql.AppendLine( @"       , @error_severity AS ERROR_SEVERITY" );
    sql.AppendLine( @"       , @error_state AS ERROR_STATE" );
    sql.AppendLine( @"END CATCH" );

    sql.ToString().DumpT("-- ADD SQL");

    ExecuteCommand( sql.ToString() );
}

public class KeyInfo {
public int primary_object_id;
public int referenced_object_id;
public string PrimaryTableName;
public string ReferencedTableName;
public int fk_object_id;
public string FkName;
public string FkColumnNames;
public string ReferencedTablePkColumnNames;
}

class Disposable : IDisposable
{
public static IDisposable Create (Action onDispose)
{
return new Disposable (onDispose);
}

Action _onDispose;
Disposable (Action onDispose) { _onDispose = onDispose; }

public void Dispose()
{
if (_onDispose != null) _onDispose ();
_onDispose = null;
}
}

static class Extensions
{
[Conditional("TRACE")]
public static void DumpT (this object o, string heading = null) { o.Dump(heading); }
}

How to Display Object Definitions

Sometimes it’s necessary to see the definitions of views, functions or stored procedures. These definitions can easily be found using the following query:

SELECT TOP 20
       o.object_id,
       o.type_desc,
       s1.name AS schema_name,
       o.name,
       s.definition,  -- This column contains the definition
       o.principal_id,
       o.schema_id,
       o.parent_object_id,
       o.type,
       o.create_date,
       o.modify_date,
       o.is_ms_shipped,
       o.is_published,
       o.is_schema_published,
       s.uses_ansi_nulls,
       s.uses_quoted_identifier,
       s.is_schema_bound,
       s.uses_database_collation,
       s.is_recompiled,
       s.null_on_null_input,
       s.execute_as_principal_id
FROM sys.sql_modules AS s
JOIN sys.objects AS o
       ON o.object_id = s.object_id
JOIN sys.schemas AS s1
       ON s1.schema_id = o.schema_id

The definition column contains the text that makes up the object. Of course, you can filter the result by object name (o.name), or create a table-valued function that accepts a name as a parameter if you like. I hope you find this useful.
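When you already know the object's name, a shorter alternative is the built-in OBJECT_DEFINITION function, which returns the same definition text for a single object (the object name below is a placeholder):

```sql
-- 'dbo.MyProcedure' is a placeholder; the function returns NULL
-- if the object does not exist or has no definition
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.MyProcedure')) AS [definition]
```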

Count Records Fast

Counting records can be a long process, and it can even disturb other processes. There is an alternative that is far faster. It does not count the records itself, but uses the partition information that every stored object has. This has nothing to do with database or table partitioning; the term 'partition' is overloaded. Each database object, such as a table, view, or index, uses storage space, and this storage space is referred to as a partition. A system view shows us the row count of all tables, and also the way they are compressed (a feature available in SQL Server 2008 and above):

/* Determine row count and compression of tables.
   Compression is only in SQL Server Developer and Enterprise editions. */

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT T.name,
       T.type, T.type_desc, P.index_id, P.rows, P.data_compression_desc
FROM sys.tables AS T
JOIN sys.partitions AS P
  ON T.object_id = P.object_id AND P.index_id < 2  -- See note below
-- WHERE T.name LIKE '%'

/*
Note: P.index_id indicates the ID of the index within the
object to which this partition belongs.

0 = heap (= unindexed physical table)
1 = clustered index (= physical table)
2 or greater = nonclustered index
*/

Although the Microsoft documentation says that it's an approximate number, I have yet to encounter an incorrect record count from this method. And the result is there in a split second! Very nice indeed.
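A closely related option, assuming you have VIEW DATABASE STATE permission, is the sys.dm_db_partition_stats dynamic management view, which exposes the same kind of per-partition row counts; a sketch:

```sql
-- Fast approximate row count per table, from partition statistics
SELECT OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
       OBJECT_NAME(ps.object_id) AS table_name,
       SUM(ps.row_count) AS row_count
FROM sys.dm_db_partition_stats AS ps
WHERE ps.index_id < 2  -- heap or clustered index only, to avoid double counting
GROUP BY ps.object_id
ORDER BY row_count DESC
```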