Tuesday, February 16, 2021

Rotating Always Encrypted Keys

Once you have Always Encrypted set up, you may find that you need to change (rotate) the encryption key you used for the initial encryption. This could be due to an expiring certificate, a security policy that requires rotating keys on a schedule, or perhaps moving the database to a different domain. There is a fantastic SQL Server Central blog post by Ed Leighton-Dick that goes into good detail and is where I learned how to rotate keys; his post may do a more thorough job, or a deeper dive into key rotation, than I will here. This post is a relatively straightforward "down and dirty" how-to guide. If you read my "Decoder Ring Not Needed" post, you may remember that I showed two ways of setting up Always Encrypted. The first method was to auto-generate the keys and certificates. I have not used that method in production; however, if you initially set up your environment that way and want to switch to using a different certificate for encryption, this should help you. I will not be going over the certificate piece again; for that, please refer to the original post.

Expanding out the sections under Always Encrypted Keys we can see the existing Column Master Key and Column Encryption Key.

Right click on the Column Master Key folder and choose New Column Master Key from the context menu.

The New Column Master Key GUI will pop up. Give it a unique name and choose the appropriate certificate from the correct Key store. Then click OK to create the new key.

You should now see both the old and new Column Master Keys.

Right click on the old Column Master Key and choose Rotate from the context menu.

Make sure the new Column Master Key is selected as the target. You will note that it shows which Column Encryption Keys will be affected by this rotation. Click OK to rotate the keys.

Right click on the Column Encryption Key and view the properties. You will see both Column Master Keys; this is fine. If a new certificate is the reason you are rotating keys and it needs to be installed on additional servers, make sure that is done before proceeding to the next step, which is cleanup. The key rotation process is very quick since it only changes the encryption of the Column Encryption Key; the encrypted data itself is not changed or re-encrypted by this process.
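The same check can be done with a query against the Always Encrypted catalog views. This is a sketch; mid-rotation you should see two rows for the Column Encryption Key (one encrypted value per Column Master Key), and only one row again after cleanup.

```sql
-- Sketch: list each Column Encryption Key and the Column Master Key(s) protecting it
SELECT cek.name AS column_encryption_key,
       cmk.name AS column_master_key,
       v.encryption_algorithm_name
FROM sys.column_encryption_keys cek
JOIN sys.column_encryption_key_values v
    ON v.column_encryption_key_id = cek.column_encryption_key_id
JOIN sys.column_master_keys cmk
    ON cmk.column_master_key_id = v.column_master_key_id;
```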

Again, right click on the old Column Master Key and choose Cleanup from the context menu.

Verify that the correct key is in bold and click OK.

If you check the properties of the Column Encryption Key again you should only see the new Column Master Key.

Finally, right click on the old Column Master Key and delete it.

I hope you find this quick guide to rotating the keys useful.

Friday, January 22, 2021

Decoder Ring Not Needed

Like "My Top Ten+ Favorite Replication Tricks", this post took me a long time to write, and it started as a SQL Saturday presentation. I gave it a couple of times in 2018 and 2019; slides were posted here for one of the SQL Saturday presentations.

Why do we need to encrypt our data?

In this era of GDPR and similar laws there is a very real need to protect your data. It should have always been protected; there is just additional incentive now. Here are a few of the biggest reasons to encrypt.

  • Regulatory Requirements
  • Good Data Stewards
  • Due Diligence
  • Job Security
  • Avoid Lawsuits
  • Stay Out Of Jail

Which type of encryption best fits our needs?

Encryption is not a one size fits all. Below is a quick list of the strengths and weaknesses of each encryption type. If needed, you can use more than one encryption type in a database.

Transparent Data Encryption (TDE)
Pros
  • Protect against loss, or theft, of data files or backups
  • No changes to applications required
  • Fairly easy to implement
Cons
  • In flight data not protected
  • Doesn't make sensitive data unreadable to users with select permissions
  • Additional considerations for backups and disaster recovery
  • Could have an impact on performance

Column Level Encryption
Pros
  • Selective encryption can have less impact on performance
  • Tighter access control to sensitive data
  • Many options for implementation
Cons
  • In flight data not protected
  • Can have a lot of pieces (keys) to manage
  • Likely to require changes to existing application code

Always Encrypted
Pros
  • In flight data is protected
  • Easy to implement
  • Selective encryption can have less impact on performance
  • Tighter access control to sensitive data
  • Can enable parameterization of queries
Cons

A quick look at each encryption architecture

Transparent Data Encryption (TDE)

The encryption architecture starts with the Windows Data Protection API/Local Machine Key. This key encrypts the SQL Server Service Master Key, which is generated the first time it is needed to encrypt a linked server password, credential, or database master key. Next is the Database Master Key, which is used to create a certificate, which in turn is used to create the Database Encryption Key. This key is what encrypts the database.
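You can see part of this chain on a server with TDE enabled by joining the encryption DMV to the certificates in master. A sketch (the encryptor_type column requires SQL Server 2012 or later):

```sql
-- Sketch: which certificate protects each Database Encryption Key
SELECT DB_NAME(dek.database_id) AS database_name,
       dek.encryptor_type,
       c.name                   AS certificate_name,
       dek.encryption_state     -- 3 = encrypted
FROM sys.dm_database_encryption_keys dek
LEFT JOIN master.sys.certificates c
    ON c.thumbprint = dek.encryptor_thumbprint;
```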

Column Level Encryption

Just like with TDE, Column Level Encryption starts with the Windows Data Protection API/Local Machine Key and the Service Master Key. Once the Database Master Key has been created, it is used to create a certificate, which in turn is used to create a Symmetric Key that encrypts the data. Symmetric Keys can also be encrypted by the Database Master Key directly.

Always Encrypted

Always Encrypted is quite different. The certificate used to generate Column Master Keys and Column Encryption Keys need not be present on the SQL Server. The encryption work can be done remotely from a client server that has the certificate installed. Once the data is encrypted, encryption and decryption are handled on the client by the client driver (such as the .NET Framework).

Always Encrypted Client Development

Setting up each encryption method

The scripts below are my demo scripts. As always, if you use these scripts be sure to change database names, file paths, file names, passwords and anything else that needs adjusting to fit your environment.

Transparent Data Encryption (TDE)

USE master
GO

-- View the Service Master Key
SELECT * FROM master.sys.symmetric_keys
WHERE name = '##MS_ServiceMasterKey##'

-- Backup the Service Master Key naming the file appropriately
BACKUP SERVICE MASTER KEY TO FILE = 'C:\KEY_BACKUPS\SMK_KeyBackup.key'
ENCRYPTION BY PASSWORD = '$MK_P@$$w0rd';

-- Use the block comment section to Restore Service Master Key
/*
RESTORE SERVICE MASTER KEY FROM FILE = 'C:\KEY_BACKUPS\SMK_KeyBackup.key'
-- Change this path to where the file is located
DECRYPTION BY PASSWORD = '$MK_P@$$w0rd'
-- Use password supplied when making the backup
-- FORCE option will cause data loss
*/

-- Create the DMK in master database that will protect the certificate
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '@V3ry$tr0ngP@$$w0rd';
GO
SELECT * FROM sys.symmetric_keys


-- Backup the DMK
BACKUP MASTER KEY TO FILE = 'C:\KEY_BACKUPS\MasterDB_DMK.dmk'
ENCRYPTION BY PASSWORD = 'M@$t3r_DMK_P@$$w0rd';

-- Create the certificate in master database used to create the Database Encryption Key for TDE
CREATE CERTIFICATE MyTDECertificate WITH SUBJECT = 'My Certificate For TDE';
GO


-- Backup the certificate using a private key
BACKUP CERTIFICATE MyTDECertificate
TO FILE = 'C:\KEY_BACKUPS\TDECertificateBackup.cer'
WITH PRIVATE KEY
(
FILE = 'C:\KEY_BACKUPS\TDECertificatePrivateKeyFile.pvk',
ENCRYPTION BY PASSWORD = '@n0th3r$Tr0ngP@$$W0rd'
);
GO

SELECT * FROM sys.certificates

USE NORTHWND
GO

-- Create the DEK in the user database
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyTDECertificate;
GO

-- Turn on TDE
ALTER DATABASE NORTHWND
SET ENCRYPTION ON;
GO

-- Check if encrypted
SELECT name FROM sys.databases WHERE is_encrypted = 1

SELECT * FROM sys.dm_database_encryption_keys;
-- State: 1 = unencrypted, 2 = in-process, 3 = encrypted

-- Lets back it up and try restoring on a different instance/server

BACKUP DATABASE NORTHWND TO DISK = 'C:\SQL_BACKUPS\NORTHWND.bak';


-- Change connection to a different instance/server and try to restore
-- This will fail since the master key and certificate are not present

RESTORE DATABASE [NORTHWND] FROM DISK = N'C:\SQL_BACKUPS\NORTHWND.bak' WITH
MOVE N'NORTHWND' TO N'C:\SQL2\NORTHWND.MDF',
MOVE N'NORTHWND_log' TO N'C:\SQL2\NORTHWND_log.ldf', STATS = 5;

-- To restore the certificate on a different instance/server, copy the files and run this script
-- Once this is done you should be able to restore the database

USE master
GO

-- Change paths to where the file is located
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '@V3ry$tr0ngP@$$w0rd';
GO

CREATE CERTIFICATE MyTDECertificate FROM FILE = 'C:\KEY_BACKUPS\TDECertificateBackup.cer'
WITH PRIVATE KEY
(
FILE = 'C:\KEY_BACKUPS\TDECertificatePrivateKeyFile.pvk',
DECRYPTION BY PASSWORD = '@n0th3r$Tr0ngP@$$W0rd'
-- Use password supplied when making the backup
);

USE master
GO

-- Check if encrypted
SELECT name FROM sys.databases WHERE is_encrypted = 1

SELECT * FROM sys.dm_database_encryption_keys;
-- State: 1 = unencrypted, 2 = in-process, 3 = encrypted

Column Level Encryption

There were two scripts for this demo. The first script goes through the process of creating the symmetric keys. The second script, further down the page, goes over creating functions and stored procedures to more easily encrypt and decrypt the data.

In the first script you will see that you can create multiple passwords for the symmetric key; this allows you to give different passwords for the same key to control access. Likewise, you can also create multiple keys and encrypt different data with different keys to control access. However, a more efficient method is to create a stored procedure for data retrieval that automatically does the decryption, via a function, then restrict execute permissions on the stored procedure. If there are multiple columns of encrypted data and multiple groups that require access to some, but not all, of the columns, you could create multiple stored procedures and only grant execute on the needed stored procedures to the groups or users that need them. For example, if COL1, COL3 and COL7 of a table are encrypted and the Personnel department needs the data in all the columns, you could create a stored procedure for them that pulls back all the encrypted columns; but if Accounting only needs the data in COL3, you create a stored procedure for them that only pulls back COL3 decrypted.
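The permission side of that pattern looks something like this. This is a sketch with hypothetical table, procedure and role names (they are not part of the demo scripts below):

```sql
-- Sketch: neither role can read the raw table, only run "their" procedure.
DENY SELECT ON dbo.SensitiveTable TO PersonnelRole, AccountingRole;

-- Personnel's procedure decrypts COL1, COL3 and COL7
GRANT EXECUTE ON dbo.USP_GetAllColumns_SEL TO PersonnelRole;

-- Accounting's procedure only decrypts COL3
GRANT EXECUTE ON dbo.USP_GetCol3Only_SEL TO AccountingRole;
```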

-- Backup service master key if it hasn't been done already.

-- MAKE SURE YOU RUN IN THE RIGHT DATABASE CONTEXT!!!!!!!!!!!!!!!

-- Create Database Master Key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'DMK_N0rthW!nd';

-- You should now see the database master key you created
SELECT * FROM sys.symmetric_keys

-- Backup Database Master Key
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'DMK_N0rthW!nd'
BACKUP MASTER KEY TO FILE = 'C:\KEY_BACKUPS\NORTHWND.key'
ENCRYPTION BY PASSWORD = 'DMK_N0rthW!nd'
GO
CLOSE MASTER KEY;

-- Use the block comment section to Restore the Database Master Key
/*
RESTORE MASTER KEY FROM FILE = 'C:\KEY_BACKUPS\NORTHWND.key'
-- Change this path to where the file is located
DECRYPTION BY PASSWORD = 'DMK_N0rthW!nd'
-- Use password supplied when making the backup
ENCRYPTION BY PASSWORD = 'DMK_N0rthW!nd';
*/

-- Create Symmetric Key
-- There are multiple ways to do this

-- Protected by a password
CREATE SYMMETRIC KEY ExampleKey1
WITH ALGORITHM = AES_256
ENCRYPTION BY PASSWORD = '3x@mp73P@$$w0rd';

SELECT * FROM sys.symmetric_keys;

OPEN SYMMETRIC KEY ExampleKey1 DECRYPTION BY PASSWORD = '3x@mp73P@$$w0rd';

SELECT * FROM sys.openkeys;

CLOSE SYMMETRIC KEY ExampleKey1;

SELECT * FROM sys.openkeys;

-- Protected by multiple passwords
CREATE SYMMETRIC KEY ExampleKey2
WITH ALGORITHM = AES_256
ENCRYPTION BY PASSWORD = '3x@Mp731',
PASSWORD = '2nd3X@mpL3',
PASSWORD = '0n3m0r3PW';

SELECT * FROM sys.symmetric_keys;

OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '3x@Mp731';
OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '2nd3X@mpL3';
OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '0n3m0r3PW';

SELECT * FROM sys.openkeys;

CLOSE ALL SYMMETRIC KEYS;

SELECT * FROM sys.openkeys;

-- We can drop one password and still use the remaining passwords
OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '3x@Mp731';--'0n3m0r3PW';

ALTER SYMMETRIC KEY ExampleKey2 DROP ENCRYPTION BY PASSWORD = '3x@Mp731';

CLOSE SYMMETRIC KEY ExampleKey2;

SELECT * FROM sys.symmetric_keys; -- key is still there

-- OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '3x@Mp731';
OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '2nd3X@mpL3';
OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '0n3m0r3PW';

SELECT * FROM sys.openkeys;

CLOSE ALL SYMMETRIC KEYS;

-- We can add additional passwords
OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '0n3m0r3PW';

ALTER SYMMETRIC KEY ExampleKey2 ADD ENCRYPTION BY PASSWORD = 'N3wP@$$w0rd';

CLOSE SYMMETRIC KEY ExampleKey2;

OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = 'N3wP@$$w0rd';
OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '2nd3X@mpL3';
OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = '0n3m0r3PW';

SELECT * FROM sys.openkeys;

CLOSE ALL SYMMETRIC KEYS;

/*
NOTE: Symmetric keys cannot be backed up.
If you need to regenerate a symmetric key or move the key to a new server to
share data you will want to include a KEY_SOURCE and IDENTITY_VALUE
in the create statement. This can be done with any of the methods used
to create keys. You will want more complex phrases than I am using in this example.
*/
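As an example of that note, running a statement like this on two servers produces equivalent key material, because KEY_SOURCE seeds the key bytes and IDENTITY_VALUE seeds the key GUID. The phrases and password here are placeholders; the protection mechanism can differ per server.

```sql
-- Sketch: recreate an equivalent symmetric key on a second server
CREATE SYMMETRIC KEY ExampleKeyShared
WITH ALGORITHM = AES_256,
KEY_SOURCE = 'Same secret phrase on both servers',      -- use a complex phrase
IDENTITY_VALUE = 'Same identity phrase on both servers' -- use a complex phrase
ENCRYPTION BY PASSWORD = 'L0c@lOnlyP@$$w0rd'; -- protection can differ per server
```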

-- Protected by certificate

-- Create certificate used to encrypt the symmetric keys
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'DMK_N0rthW!nd'
CREATE CERTIFICATE NorthwindEncryptionCertificate WITH SUBJECT = 'Protect Symmetric Keys';
CLOSE MASTER KEY;

-- You should see the certificate you just created
SELECT * FROM SYS.certificates

-- Backup the certificate using a private key
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'DMK_N0rthW!nd'
BACKUP CERTIFICATE NorthwindEncryptionCertificate
TO FILE = 'C:\KEY_BACKUPS\NORTHWND.cer'
WITH PRIVATE KEY
(
FILE = 'C:\KEY_BACKUPS\NORTHWND.pvk',
ENCRYPTION BY PASSWORD = N'PVK_N0rthW!nd'
);
CLOSE MASTER KEY;

-- Recreate the certificate from the backup files if it was dropped
/*
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'DMK_N0rthW!nd'
CREATE CERTIFICATE NorthwindEncryptionCertificate
FROM FILE = 'C:\KEY_BACKUPS\NORTHWND.cer'
WITH PRIVATE KEY
(
FILE = 'C:\KEY_BACKUPS\NORTHWND.pvk',
DECRYPTION BY PASSWORD = N'PVK_N0rthW!nd'
);
CLOSE MASTER KEY;
*/

CREATE SYMMETRIC KEY ExampleKey3
WITH ALGORITHM = AES_256,
KEY_SOURCE = 'NORTHWND',
IDENTITY_VALUE = 'Certificate for Encryption'
ENCRYPTION BY CERTIFICATE NorthwindEncryptionCertificate;

SELECT * FROM sys.symmetric_keys;

-- Can create multiple symmetric keys to better control access to encrypted data

-- Great! But how do we use these?

We will now look at a method of creating functions to handle the encryption and decryption of the data, as well as stored procedures that use those functions to insert and select the encrypted data. The script starts by creating a very simple sample table, then sets up the symmetric keys. It then includes some steps to test querying the table directly. Finally, it creates stored procedures to open and close the keys, functions to encrypt and decrypt the data, and stored procedures that use the other procedures and functions to easily insert and retrieve data. Note: I would recommend encrypting (WITH ENCRYPTION) the stored procedures that open and close the keys, as well as the functions that do the encryption and decryption.

USE NORTHWND
GO

-- Using queries to encrypt and decrypt data

CREATE TABLE dbo.CRYPTO (
RID INT IDENTITY PRIMARY KEY,
SecretData VARBINARY(MAX)
)
GO

OPEN SYMMETRIC KEY ExampleKey1 DECRYPTION BY PASSWORD = '3x@mp73P@$$w0rd';
INSERT INTO dbo.CRYPTO (SecretData)
VALUES
(CONVERT(VARBINARY(MAX),(ENCRYPTBYKEY(KEY_GUID('ExampleKey1'), 'Lorem ipsum dolor sit amet.')))),
(CONVERT(VARBINARY(MAX),(ENCRYPTBYKEY(KEY_GUID('ExampleKey1'), 'Nam fermentum quam eu mattis venenatis')))),
(CONVERT(VARBINARY(MAX),(ENCRYPTBYKEY(KEY_GUID('ExampleKey1'), 'Sed dignissim nulla vitae ipsum vehicula ultricies.'))));
CLOSE SYMMETRIC KEY ExampleKey1;


-- What happens if we just query the table?
SELECT * FROM dbo.CRYPTO;

-- Let's open the key then try again
OPEN SYMMETRIC KEY ExampleKey1 DECRYPTION BY PASSWORD = '3x@mp73P@$$w0rd';
SELECT * FROM dbo.CRYPTO;

SELECT * FROM sys.openkeys;

-- One more try...remember the key is still open
SELECT RID, CONVERT(VARCHAR(MAX),DECRYPTBYKEY(SecretData)) FROM dbo.CRYPTO
CLOSE SYMMETRIC KEY ExampleKey1;


-- What if we try a different key?
OPEN SYMMETRIC KEY ExampleKey2 DECRYPTION BY PASSWORD = 'N3wP@$$w0rd';
SELECT RID, CONVERT(VARCHAR(MAX),DECRYPTBYKEY(SecretData)) FROM dbo.CRYPTO
CLOSE SYMMETRIC KEY ExampleKey2;

-- Did we leave any keys open?
SELECT * FROM sys.openkeys;

-- What if we don't want to be constantly writing open/close key statements and have passwords everywhere?

CREATE TABLE dbo.CIPHER (
RID INT IDENTITY PRIMARY KEY,
ProtectedData VARBINARY(MAX)
)
GO

-- Create Open Key Stored Procedure

CREATE PROCEDURE dbo.USP_CIPHER_OpenKey --WITH ENCRYPTION
AS
BEGIN
SET NOCOUNT ON;

OPEN SYMMETRIC KEY ExampleKey3
DECRYPTION BY CERTIFICATE NorthwindEncryptionCertificate;

RETURN 0;
END

GO

-- Create Close Key Stored Procedure

CREATE PROCEDURE dbo.USP_CIPHER_CloseKey --WITH ENCRYPTION
AS
BEGIN
SET NOCOUNT ON;

CLOSE SYMMETRIC KEY ExampleKey3;

RETURN 0;
END

GO

-- Create Encryption Function

CREATE FUNCTION dbo.FN_CIPHER_Encrypt --WITH ENCRYPTION
(
@ValueToEncrypt VARCHAR(MAX)
)
RETURNS VARBINARY(MAX) WITH ENCRYPTION
AS
BEGIN
-- Declare the return variable here
DECLARE @Result VARBINARY(MAX)

SET @Result = ENCRYPTBYKEY(KEY_GUID('ExampleKey3'), @ValueToEncrypt)

-- Return the result of the function
RETURN @Result
END

GO

-- Create Decryption Function

CREATE FUNCTION dbo.FN_CIPHER_Decrypt --WITH ENCRYPTION
(
@ValueToDecrypt VARBINARY(MAX)
)
RETURNS Varchar(MAX) WITH ENCRYPTION
AS
BEGIN
-- Declare the return variable here
DECLARE @Result VARCHAR(MAX)

SET @Result = CONVERT(VARCHAR(MAX),DECRYPTBYKEY(@ValueToDecrypt))

-- Return the result of the function
RETURN @Result;
END

GO

-- Create Insert Stored Procedure

CREATE PROCEDURE dbo.USP_AddProtectedData_INS
@NewData NVARCHAR(MAX)

AS
BEGIN
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

EXEC [dbo].[USP_CIPHER_OpenKey]

INSERT INTO dbo.CIPHER (ProtectedData)
VALUES
(dbo.FN_CIPHER_Encrypt(@NewData));

EXEC [dbo].[USP_CIPHER_CloseKey]

END

-- Create Select Stored Procedure

CREATE PROCEDURE dbo.USP_GetProtectedData_SEL

AS

SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

EXEC dbo.USP_CIPHER_OpenKey

SELECT dbo.FN_CIPHER_Decrypt(ProtectedData) AS ProtectedData FROM dbo.CIPHER

EXEC dbo.USP_CIPHER_CloseKey


-- Using our new objects
EXEC dbo.USP_AddProtectedData_INS @NewData = N'Quisque in diam eu orci semper fermentum.';

-- Is it there?
SELECT * FROM dbo.CIPHER

-- Now get it decrypted
EXEC dbo.USP_GetProtectedData_SEL;

-- Did we leave any keys open?
SELECT * FROM sys.openkeys;

Always Encrypted

The first set of images shows the quickest and easiest way to get Always Encrypted set up: letting SQL Server auto-generate the keys and certificate. Further down we will look at certificates, then at setting up Always Encrypted using a certificate that is not generated by SQL Server. Where I work we did not want to use a SQL Server self-signed certificate but rather one signed by our certificate authority. If you do use the SQL Server generated certificate and need to access data from other servers, you will need to export the certificate created by SQL Server and import it on the other servers.

You get started by right-clicking on the table in which you want to encrypt data and selecting the Encrypt Columns option.

The first screen is the introduction that briefly explains Always Encrypted. You can check the box to not show it in the future.

Next you select the columns in the table that you need to encrypt. My sample table is overly simple with only one column. The thing to notice here is the Encryption Key is automatically generated. You will also need to choose the type of encryption. If the column contains unique data, like social security or credit card numbers, you would probably want Randomized. If it is something that may be used in a filter, like a birth date, you would want to select Deterministic so it can be used with parameterized queries.

You will note that the master key will be auto generated.

If you want you can generate a PowerShell script or click next to proceed to the finish.

Double check your choices on the summary page and if everything is correct click Finish.

Next we will look at how to view certificates on your server and make sure the users have the needed permissions to use an external, not generated by SQL Server, certificate.

The first step is to run the Microsoft Management Console (MMC). I do this by typing MMC into the search box and clicking on the icon when it pops up.

You will notice that by default the console is empty. We need to add the certificate snap-in.

Go to File --> Add/Remove Snap-ins.

Add the certificates snap-in.

For my environment we want to manage certificates for the computer account. Your environment may be different.

Select Local computer unless you are doing this from a remote computer.

Click OK to add the snap-in.

The certificate for my server can be found under Personal --> Certificates. This certificate will need to be on all servers that will be querying encrypted data. It does not have to be on the SQL Server host itself if you do the encryption work remotely from a server that has the certificate installed.

Right-click on the certificate choose All Tasks --> Manage Private Keys.

Make sure the account you will be using to do the encryption has read permissions to the certificate private key. You would also add any application or service account here that will need to access the certificate to do decryption. If the certificate is on multiple machines only the accounts used on the specific machine need the permissions. So different servers may have different accounts with read permissions to the certificate private keys.

Now that the certificate work is done, we can use it to encrypt the data.

Rather than go directly to Encrypt Columns we will first create the Column Master Key (CMK).

Pick the Windows Certificate Store for the Local Machine since that is where our certificate was stored.

The information about our certificate will show. If you have multiple certificates be sure to choose the correct one. Click OK to create the CMK.

Now we create the Column Encryption Key.

We encrypt this using the Column Master Key we just created.

Now we can right-click and choose the Encrypt Columns option.

Now you will see the Encryption Key is the one we created rather than one that will be auto-generated. The same rules for deterministic and randomized encryption apply here as they did earlier in this post.

Since we manually created the keys no additional configuration is needed.

If you want you can generate a PowerShell script or click next to proceed to the finish.

Double check your choices on the summary page and if everything is correct click Finish.

Now that we have data encrypted we will want to query it.

If we simply query the table with the encrypted data we will see the data is encrypted. Yes, these images are from a different database.

To be able to see the data decrypted we start by going to Query --> Connection --> Change Connection in SQL Server Management Studio on a machine that hosts the certificate used to encrypt the data.

Select Options on the connection dialog.

Under Additional Connection Parameters add the string "Column Encryption Setting = Enabled" and click Connect. This string will need to be added into the connection string for any application or service that needs to connect and query encrypted data.
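For reference, an application does the same thing in its connection string. A sketch (server and database names are placeholders):

```
Server=MyServer;Database=NORTHWND;Integrated Security=true;Column Encryption Setting=Enabled;
```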

Run the query again and we see the decrypted data.

As much stuff as I have packed into this post, it still only scratches the surface, especially for Column Level Encryption and Always Encrypted. In future posts I hope to cover some additional topics like rotating keys for Always Encrypted, more things you can do with Column Level Encryption, and decoupling the Database Master Key from the Service Master Key (as well as adding it back). Maybe some other stuff as well.

Tuesday, January 19, 2021

Goodbye PASS

In the past I have had the opportunity to speak at several SQL Saturday events. I had hoped to submit a presentation for the PASS Summit. I was scheduled to speak at a SQL Saturday event in Madison, WI; then the pandemic hit, and everything got cancelled. We are still working to recover from the global impact of the virus, so in-person events will not be happening for some time to come. I just hope that once this is all over some organization will rise as a successor to PASS, and events like SQL Saturday will flourish again.

It took me too long to post my top replication tips presentation as a blog post. It has also taken me too long to get my encryption presentation posted as a blog post. I hope to fix that soon. I will also work to post my final SQL Saturday presentation, on SQL Agent, as a blog post soon. Once those are posted I will endeavor to turn the ideas I had for future SQL Saturday events into blog posts. Of course, working as a DBA for a corporation rather than as a consultant does cut into the time I have during the day to write...so hopefully "soon" doesn't turn into months.

Friday, March 8, 2019

My Top Ten+ Favorite Replication Tricks

This post has been a long time in coming. It started as a SQL Saturday presentation I gave three times in 2016 and 2017 (slides were posted here for each SQL Saturday presentation), but I had not gotten around to writing an actual blog post...until now. Hopefully I will get a blog post for my 2018 SQL Saturday presentation on encryption written more quickly than I completed this one.

Disclaimer!! These are NOT best practices. Rather, these are a few things that I have stumbled across in my job as a full-time production DBA that have helped me resolve real-world situations, and I am sharing this information in case someone runs into a similar situation. As always, thoroughly test any solution in a non-production environment. I am not responsible for any mishaps you may encounter by using these tricks; it is up to you to make sure you are doing what is best for your situation and data.

In my environment I only have transactional and snapshot replication. I don't have any merge replication so I don't have any tricks specifically for that. Also, this is not a replication basics or how to create replication post. This post is intended for the DBA that is familiar with replication and may need a few tricks to overcome some unique situations.

Trick #1 - Replicating from multiple publications to multiple subscribers

Replication is not just a one publisher to one subscriber architecture. A publication database can have multiple publications. Each publication can have multiple subscribers. A subscriber database can receive multiple publications. A database can even be a subscriber and a publisher although I would not publish tables that are populated through replication from another SQL Server publisher. The distribution of the publications can run on different schedules for different subscribers.

In my environment there is a central business intelligence server where data from production OLTP systems is replicated. This server is used by various SSRS, SSAS (both tabular and multidimensional) and Power BI solutions. Some solutions require near real-time data and others need more static day-old data. The ERP system has an Oracle back end, so a tool called Attunity Replicate is used to replicate that data. This data is continuously replicated to the database that hosts the near real-time data. To get the data that is needed into the database that hosts the day-old data, SQL Server transactional and snapshot replication is used (more on when to use one versus the other in a bit). This is the rare circumstance where I must publish tables that are populated through replication...but they are not populated by SQL Server replication, and that is the difference. The one thing that is a bit of a problem with this is that when I need to reload data from Oracle into the near real-time database, I must drop one or more tables out of SQL Server replication. Sometimes this requires dropping an entire publication. I have a few tricks to make this easier.

All other systems have SQL Server back ends and the data is replicated to the central business intelligence server via SQL Server replication. The distribution to the near real-time database runs continuously, but distribution to the day-old database runs on a schedule. The interesting part of this setup is that the OLTP systems are generally running on a newer version of SQL Server and replicating to an older version. This can present some challenges, but I have tricks for that as well.

Trick #2 - Know when to use Snapshot Replication vs. Transactional Replication

Earlier in this post I mentioned that I use a combination of Snapshot and Transactional replication to replicate data from the near real-time database to the day-old database. Several years ago, when replication from Oracle to the near real-time database and on to the day-old database was set up, it was just one large Attunity task replicating data to the near real-time database and one transactional publication distributing to the day-old database. This was rather problematic in that any issue for a table would cause the Attunity replication to fatal error, requiring a complete reload of all the tables in the near real-time database and then resynchronizing the data to the day-old database. To help reduce the impact of any fatal errors* I split the Attunity replication from one giant task into ten smaller, but still sizable, tasks with tables grouped by functional use. I also broke the SQL Server replication into multiple publications to align with the Attunity tasks. This helped; however, there were still some publications pushing data from the near real-time database to the day-old database that had issues. A little research into the data in the problematic tables showed that although the tables were quite small, they had a TON of transactions throughout the day. For the day-old data, all that mattered was the data at a given point in time, not all the changes throughout the day. Moving the smaller tables to Snapshot replication alleviated the issues. As with anything SQL Server related, there isn't a "one size fits all" solution. Below are some of the considerations I use when deciding between Snapshot and Transactional replication.

Snapshot

  • Need point in time data
  • Relatively small table
  • High volume of transactions
  • You need to perform bulk operations on the table at the publisher (does not mark the table as replicated)
  • Primary key not required

Transactional

  • Need near real-time data
  • Large table size
  • Low volume of transactions
  • You do not need to perform bulk operations on the table at the publisher (marks the table as replicated)
  • Primary key required

*Upgrading the Attunity application also helped reduce the occurrences of fatal errors.
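One quick way to see which tables at the publisher are flagged as published (the marking that, for transactional articles, blocks bulk operations like TRUNCATE TABLE) is a catalog query. A sketch:

```sql
-- Sketch: tables marked as published at the publisher
SELECT name, is_published, is_schema_published
FROM sys.tables
WHERE is_published = 1;
```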

Trick #3 - Putting data in different schemas and collations at the subscriber than at the publisher

These fall into the category of "not necessarily best practice" however they can be useful so test thoroughly before implementing.

Many of the source systems that replicate to our business intelligence server have all their tables in the dbo schema. This would make things very messy, so I have them broken out into separate schemas on the business intelligence server by source system name. Since everything that reads from this server is developed in-house this is not an issue, and it clarifies things for the developers.

Additionally, not all source systems use our standard collation. Sometimes the difference is small, like using Latin1_General_CI_AS rather than SQL_Latin1_General_CP1_CI_AS. Other times it may be case sensitive (CS), and in one case it needed the supplementary characters (SC) option. One of the reasons for consolidating all the data was to alleviate collation conflicts, so the decision was made early on that all incoming data would use the collation of the business intelligence server rather than the source server. For the most part this has worked well, but there are cases where case sensitive data replicated from Oracle into the case insensitive business intelligence server has had issues with "duplicates" that are not duplicates when the data is case sensitive. Again, this is not necessarily a best practice.

To put the data in a different schema or collation on the subscriber than on the publisher you should get familiar with your publication properties.

Destination object owner is the schema name on the subscriber. Default is to use the same owner as is on the publisher.

Copy collation is boolean. True means the tables will retain the same collation they had on the publisher, false means they will use the subscriber collation. Default is true.
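If you prefer scripting over the publication properties GUI, the destination schema can also be set per article with sp_changearticle. This is a sketch; the publication, article, and schema names are placeholders.

```sql
-- Run in the publication database. Names are hypothetical.
EXEC sp_changearticle
    @publication = N'SourceSystemPub',
    @article = N'Customers',
    @property = N'destination_owner',
    @value = N'SourceSystem',          -- schema the table lands in at the subscriber
    @force_invalidate_snapshot = 1;    -- this change requires a new snapshot
```

Note that changing this property invalidates the current snapshot, so a new one will need to be generated and applied.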

Trick #4 - Truncate your subscriber tables rather than drop and recreate them

As I have mentioned, the replicated data is mainly used for reporting or other business intelligence activities. The query patterns for this can be quite different than on the OLTP servers, so many times indexes are added to the subscriber tables to improve performance. When I first started working with replication I left many properties defaulted...at first, I left them all defaulted and was just happy it worked. When I did have to reload data, I would get questions like "where did our indexes go?" I soon discovered another publication property, "Action if name is in use". The default is to drop the existing object and create a new one. If you have added indexes on the table at the subscriber this can be a problem. I now change this setting to truncate data in the existing object for all my publications.
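The GUI setting maps to the article's pre_creation_cmd property, so it can also be changed per article with sp_changearticle. A minimal sketch, with hypothetical names:

```sql
-- Run in the publication database. Names are hypothetical.
EXEC sp_changearticle
    @publication = N'SourceSystemPub',
    @article = N'Customers',
    @property = N'pre_creation_cmd',
    @value = N'truncate',              -- empty the table instead of dropping it,
                                       -- preserving subscriber-side indexes
    @force_invalidate_snapshot = 1;    -- this change requires a new snapshot
```

The other accepted values are 'none', 'delete', and 'drop' (the default, which is what loses the subscriber indexes).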

Here is a little bonus that wasn't in the SQL Saturday presentations.

Semi-annually we reload our non-production systems with production data. We do this by restoring a backup of the production source system database on the corresponding non-production source systems. After we do that we have to resynchronize all the replication. Once in a while I will get a "cannot bulk copy into column [column_name]" message. This is usually due to a DDL change getting made in production (remember, these are third party vendor systems administered by someone other than the DBAs, so they have a bad habit of sometimes skipping non-production systems). This isn't typically an issue when replication is running since DDL changes replicate from the publisher to the subscriber database just fine. When this occurs, I have to go find the table in production then alter the non-production table as needed. Once I do this the synchronization agent will automatically start up again in a few moments. The problem is often identifying the table, since the message only gives the column name. Below is a script I run to help find the table, or at least narrow down the possible tables. The script should be run against the publication database.

SELECT t.name AS table_name, c.name AS column_name
FROM sys.columns c
JOIN sys.tables t ON c.object_id = t.object_id
WHERE c.name = 'column_name';

Trick #5 - Distributing a snapshot as soon as it is available

This is specifically for snapshot replication. When configuring the publication, I will set the schedule that I want for the snapshot.

The problem I used to have was how to get the snapshot distributed right away. I could have the distribution agent run continuously, however I don't like having an agent running when it isn't needed; plus, if I get asked "has that finished distributing yet?" I would have no quick way to tell. I could define a schedule, but that involves guesswork and hoping the snapshot agent doesn't run long. My solution was to have the distribution agent run on demand only.

My previous statement about having it run on demand only seems counterintuitive at first glance. Snapshot replication uses two SQL Agent jobs: one for the snapshot and one for distribution. What I do is get into the SQL Agent and modify the Agent job for the snapshot.

The SQL Agent job created by SQL Server when setting up the publication has three steps. I add an additional step.

The new step will call sp_start_job in msdb and I give it the distribution SQL Agent job name for the @job_name parameter. I grab the job name from the General page of the SQL Agent job properties for the distribution job.

Here is the code.

EXEC [msdb].[dbo].[sp_start_job] @job_name = '[distribution job name]';

Next I move the new step to be the third step and modify the On Success and On Failure actions so that the Run Agent step will Go to the next step on success rather than quit reporting success. I have the new step quit reporting success on success and have both go to Detect nonlogged agent shutdown on failure.
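The same step wiring can be scripted with sp_update_jobstep in msdb if you prefer not to click through the GUI. This is a sketch; the job name and step ids are placeholders, so check your own job's step list first.

```sql
-- Job name and step ids are hypothetical; verify them against your job.

-- The "Run agent." step: go to the next step on success
-- instead of quitting and reporting success.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'YourSnapshotJobName',
    @step_id = 2,
    @on_success_action = 3;   -- 3 = go to the next step

-- The new sp_start_job step: quit reporting success when done.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'YourSnapshotJobName',
    @step_id = 3,
    @on_success_action = 1;   -- 1 = quit the job reporting success
```

The on-failure actions (going to the Detect nonlogged agent shutdown step) can be set the same way with @on_fail_action and @on_fail_step_id.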

At one SQL Saturday an attendee was shocked and questioned if modifying something that SQL Server created was a good idea. This is simply adding a step to a SQL Agent job not altering a system stored proc so I don't have any issue doing this and it works just fine for all the snapshot replication I have running.

Trick #6 - Changing the transactional replication distribution schedule from continuous to scheduled and vice versa

This trick also involves getting into the SQL Agent jobs. Transactional replication uses three jobs: one for the snapshot, one for the distribution and one for the log reader. I will modify the schedule of the Agent Job for the Distribution.

For either direction you make the change by changing the schedule type. If going from continuous to scheduled, you will need to first stop the distribution agent job, then define a schedule. If going from scheduled to continuous, you will change the schedule type to Start automatically when SQL Server Agent starts, then manually start the SQL Agent job for distribution, since you probably won't want to cycle the SQL Agent service.
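The schedule change itself can also be scripted with sp_update_schedule in msdb. A minimal sketch, assuming the schedule name; replication-created schedules often share names, so looking up the @schedule_id from sysschedules is safer if the name isn't unique.

```sql
-- Schedule name is hypothetical; look up the real name or schedule_id
-- in msdb.dbo.sysschedules for your distribution job.

-- Switch from running continuously to a daily run at 6:00 AM.
EXEC msdb.dbo.sp_update_schedule
    @name = N'YourDistributionScheduleName',
    @freq_type = 4,               -- 4 = daily (64 = start when SQL Agent starts)
    @freq_interval = 1,           -- every 1 day
    @freq_subday_type = 1,        -- 1 = at the specified time
    @active_start_time = 060000;  -- HHMMSS
```

Going back to continuous is the reverse: set @freq_type = 64, then start the job manually with sp_start_job.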

Trick #7 - Finding your articles

If you have lots of articles (tables) in a bunch of different publications it can be a challenge to keep track of what article is in which publication. The script below will show you what is where. It is also useful for checking to make sure you don't have the same article in multiple publications.

USE distribution
GO
SELECT p.publication, a.article
FROM dbo.MSpublications p JOIN dbo.MSarticles a ON p.publication_id = a.publication_id
ORDER BY a.article;
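A small variation on the same query flags only the articles that appear in more than one publication, which makes the duplicate check quicker on servers with many publications:

```sql
USE distribution
GO
SELECT a.article, COUNT(DISTINCT p.publication) AS publication_count
FROM dbo.MSarticles a
JOIN dbo.MSpublications p ON a.publication_id = p.publication_id
GROUP BY a.article
HAVING COUNT(DISTINCT p.publication) > 1
ORDER BY a.article;
```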

Trick #8 - Painlessly add articles to your publication

First off, this trick is one I found online that helped reduce the pain of adding new articles to my transactional replication publications. For years I hated adding a new article since it forced me to reinitialize the entire subscription, which could take a couple hours. Finally, I stumbled across the article "Adding an Article Without a Full Snapshot" on SQL Server Central, and things got much easier.

There are a few key points for this trick to be successful. First, you have to run the following scripts against the publication database before adding the article, and second, you have to add the article using the GUI; if you use the stored procs it will cause a snapshot of the entire publication to be generated.

EXEC sp_changepublication
@publication = N'PublicationName',
@property = N'allow_anonymous',
@value = 'false';
GO

EXEC sp_changepublication
@publication = N'PublicationName',
@property = N'immediate_sync',
@value = 'false';
GO

If you need to set properties for this article, like those in tricks #3 and #4, you will need to set the properties for just the article(s) added. If you set the properties for All Table Articles, it will cause a snapshot of the entire publication to be generated.

One small disclaimer. This trick works most of the time however there are many things that may not be apparent that can still cause a snapshot of the entire publication to be generated.

Trick #9 - Changing the snapshot location

Occasionally the location you designate for your snapshots when you configure distribution can become unsuitable for several reasons. Perhaps your snapshots have grown large enough they no longer fit and you need to point to a location with more space. There are two ways of doing this. You can use a system stored procedure or you can use the GUI.

EXEC sp_changedistpublisher
@publisher = 'YourServerName',
@property = 'working_directory',
@value = 'S:\ReplData\'

OR

The next two tricks were quite controversial at one of the SQL Saturdays where I gave this presentation, especially trick #10. What these tricks show is how to overcome a problem that I ran into in my environment and others may run into as well. I am by no means suggesting that this is how you should do things but rather, when you have no other feasible options, you can make things work by using these tricks. Again, these are not necessarily best practices but rather tricks that may be useful in the real world.

Trick #10 - Overcoming version differences

When your publisher is an older version of SQL Server than your subscriber you may get this strange message if you are doing all the work from the publisher server.

It does seem strange that if you are trying to replicate from SQL Server 2012 to SQL Server 2016 it would tell you that SQL Server 2016 does not satisfy the minimum version level. To get around this you can create the subscription from the subscriber, after you have created the publication, rather than from the publisher. Or you can use SSMS 2016, or higher, to connect to both the publisher and subscriber.

Warning...the following is not supported by Microsoft

Deprecated Features in SQL Server Replication.

Even though it is not officially supported by Microsoft I was able to replicate from SQL Server 2008 R2 to SQL Server 2016 but had to use a pull subscription. I was also able to replicate from SQL Server 2016 to SQL Server 2008 R2 using both push and pull subscriptions. This can be useful if you have legacy systems that you are not able to upgrade but still need to use in a replication solution.
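Creating a pull subscription means running the setup at the subscriber rather than the publisher. A minimal sketch, with hypothetical server, database, and publication names:

```sql
-- Run at the SUBSCRIBER, in the subscription database. Names are hypothetical.
EXEC sp_addpullsubscription
    @publisher = N'OlderPublisherServer',
    @publisher_db = N'PublishedDb',
    @publication = N'LegacyPub';

-- The distribution agent job for a pull subscription
-- is also created at the subscriber.
EXEC sp_addpullsubscription_agent
    @publisher = N'OlderPublisherServer',
    @publisher_db = N'PublishedDb',
    @publication = N'LegacyPub',
    @job_login = NULL,      -- NULL = run under the SQL Agent service account
    @job_password = NULL;
```

The publication must also be registered at the publisher with sp_addsubscription using @subscription_type = N'pull'.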

After the controversy at the SQL Saturday I contacted the Microsoft TAM for the company I work for and asked about this. He told me that even though it isn't supported it may work however just be aware that future updates to either version of SQL Server could break your replication as Microsoft won't test this functionality prior to release.

Trick #11 - Crossing domains

If you are replicating across domains, you can get a few different messages.

If you just put the server name without the domain, it will tell you the server can’t be found.

Try to fully qualify the name or use the IP address and it tells you that won’t work either.

To get around this...create an alias.

Yes, I realize the second error message says aliases aren't supported...however they work. Again, this isn't best practices but tricks that I use to get out of tough spots.

Additionally, if you don’t have trust between the domains you may need to use a SQL Login to connect to the subscriber.
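When a SQL Login is needed, it is specified in the distribution agent's connection settings for the subscription. A sketch for a push subscription, with hypothetical names and credentials:

```sql
-- Run at the publisher, in the publication database.
-- Server, database, login, and password values are hypothetical.
EXEC sp_addsubscription
    @publication = N'CrossDomainPub',
    @subscriber = N'SubscriberAlias',     -- the alias created above
    @destination_db = N'SubscriberDb',
    @subscription_type = N'Push';

EXEC sp_addpushsubscription_agent
    @publication = N'CrossDomainPub',
    @subscriber = N'SubscriberAlias',
    @subscriber_db = N'SubscriberDb',
    @subscriber_security_mode = 0,        -- 0 = SQL Server authentication
    @subscriber_login = N'repl_login',
    @subscriber_password = N'StrongPasswordHere';
```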

Around this point I usually get asked what permissions are needed for these accounts.

Security Role Requirements for Replication Setup

Security Role Requirements for Replication Maintenance

Trick #12 - Script your publications

Back in trick #4 I mentioned we do semi-annual refreshes of our non-production environments by restoring a backup from production. Before restoring the databases, I first drop the publications (on the non-production servers). Once the restore is complete I need to recreate the replication architecture. If I were to manually recreate it from scratch every time it would be very tedious, and the chances that I would forget something would be very high. So, to make sure that things are put back exactly as before, I make sure I have all my publications scripted. This is also useful for DR as well as for setting up a new environment: I can take the script from, say, a test server, copy it to a UAT server, and with a quick update to server names and credentials create the matching UAT publication with very little effort.

To script a publication right click on the publication name and select Generate Scripts

From the Generate Script dropdown I prefer to Open in New Query Window then in the script I make a few modifications before saving the script.

Here are a few things you may want to update in your script before saving.

  • Clean up the publication access section of the script removing unneeded access.
  • I prefer to comment out the create subscription section and add subscriptions manually.

For multiple transactional publications on a database, you may want to comment out the create log reader section on all but the first script you would run. If you don't, it won't cause a problem, but the script will keep telling you it can't create a log reader since one already exists.

As you add new articles to your publication, you can update the script to add new articles...or just generate a new script. I like keeping my scripts up to date for DR.

One good thing to know is that if you have configured article/publication properties, like were mentioned in tricks #3 and #4, these will be persisted when you script your publication.

Bonus Trick - Completely removing replication

As mentioned above I delete replication before restoring a database then recreate replication. Sometimes there are replication artifacts that come over from the server where the backup was taken, and this can cause issues with recreating publications. The fix I have found for this is to completely remove replication on the database, and then start from scratch. This works for transactional and merge replication.

EXEC sp_removedbreplication @dbname = '[DatabaseName]', @type = 'tran'; -- for merge replication you would change 'tran' to 'merge'

Bonus Tip - Replication Monitor is your friend

There are many times where the only way to get a good error message is to dig it out of Replication Monitor. If you encounter issues with your replication such as seeing the SQL Agent jobs failing, and you don't see a useful message in the job history in SQL Agent, look in Replication Monitor.

I hope that you find some of these tricks useful.