Wednesday, February 22, 2017

Speaking at SQLSaturday #584

I will be speaking at SQLSaturday #584 in Colorado Springs, CO on March 25th 2017!

I will be doing my upgraded presentation on "My Top 10+ Replication Tricks."

In the next few weeks I will be blogging on the content of the presentation.

Register today and I hope to see you there!

Thursday, January 19, 2017

Where oh where did my LDF go?

This is a follow-up to my previous post on recovering system databases. This will cover several ways to recover missing LDF files.

For this and some other testing I have used the Chinook database found on CodePlex. The convention used in this post is that SQL data files are on the S: drive and LDF files are on the L: drive.

If the database has been cleanly detached and the LDF gets deleted, it can be recreated when attaching by using any of the following scripts:

CREATE DATABASE Chinook ON (FILENAME = 'S:\SQL_DATA\Chinook.mdf') FOR ATTACH;

CREATE DATABASE Chinook ON (FILENAME = 'S:\SQL_DATA\Chinook.mdf') FOR ATTACH_REBUILD_LOG;

EXEC sp_attach_single_file_db @dbname = 'Chinook' , @physname = 'S:\SQL_DATA\Chinook.mdf';

If SQL Server has been cleanly shut down and an LDF gets deleted, testing with the Chinook database shows that SQL Server will recreate the LDF when it is restarted.

To see what happens when SQL Server crashes, I ran an update query with a BEGIN TRANSACTION but no COMMIT to create an open transaction. I then stopped the SQL Server database engine, causing an error.
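The test looked roughly like the following. This is only a sketch: the table and update are placeholders (any Chinook table will do), and the engine is stopped from SQL Server Configuration Manager while the transaction is still open.

-- Open a transaction and deliberately leave it uncommitted.
BEGIN TRANSACTION;

UPDATE dbo.Invoice
SET BillingCity = 'Testville'
WHERE InvoiceId = 1;

-- Do NOT commit. With the transaction still open, stop the SQL Server
-- database engine; the session is killed and the error below is returned.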

(1 row(s) affected)
Msg 596, Level 21, State 1, Line 0
Cannot continue the execution because the session is in the kill state.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.

The LDF file is then deleted, and when the SQL Server database engine is restarted the Chinook database shows Recovery Pending.

Best case is that the location for the LDF is still present or can be recreated by adding a new L: drive. If no backup set is available to restore and get back as close to the time of the crash as possible, the database can be made usable by running the following scripts; however, any transactions in the LDF that were not written to the MDF (or NDFs) will be lost.

ALTER DATABASE [Chinook] SET EMERGENCY;
ALTER DATABASE [Chinook] SET SINGLE_USER;
DBCC CHECKDB([Chinook], REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [Chinook] SET MULTI_USER;

You will see the following messages before the DBCC results:

File activation failure. The physical file name "L:\SQL_LOGS\Chinook_log.ldf" may be incorrect.
The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure.
Warning: The log for database 'Chinook' has been rebuilt. Transactional consistency has been lost. The RESTORE chain was broken, and the server no longer has context on the previous log files, so you will need to know what they were. You should run DBCC CHECKDB to validate physical consistency. The database has been put in dbo-only mode. When you are ready to make the database available for use, you will need to reset database options and delete any extra log files.
DBCC results for 'Chinook'.

If the database data files must be moved to a different server, a “hack attach” must be used.

To do a hack attach:

  1. Create a new database with the same name and the same file structure as the broken database.
  2. Take the new database offline.
  3. Replace its data files with the files from the broken database.
  4. Bring the database online; you may receive an error message, just click OK. The database will show as Recovery Pending (you may need to refresh the database list to see this).
  5. Run the scripts previously listed to make the database usable.

You will see the following in the messages window, informing you that the files did not originally come from the same database.

Msg 5173, Level 16, State 1, Line 8
One or more files do not match the primary file of the database. If you are attempting to attach a database, retry the operation with the correct files. If this is an existing database, the file may be corrupted and should be restored from a backup.
Log file 'L:\SQL_LOGS\Chinook_log.ldf' does not match the primary file. It may be from a different database or the log may have been rebuilt previously.
Warning: The log for database 'Chinook' has been rebuilt. Transactional consistency has been lost. The RESTORE chain was broken, and the server no longer has context on the previous log files, so you will need to know what they were. You should run DBCC CHECKDB to validate physical consistency. The database has been put in dbo-only mode. When you are ready to make the database available for use, you will need to reset database options and delete any extra log files.
DBCC results for 'Chinook'.
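For reference, here is a rough T-SQL sketch of the hack attach described above, assuming the same S:\SQL_DATA and L:\SQL_LOGS layout; the logical file names are placeholders, and the actual file swap happens at the operating system level.

-- Create a throwaway database with the same name and file structure.
CREATE DATABASE Chinook
ON PRIMARY (NAME = Chinook, FILENAME = 'S:\SQL_DATA\Chinook.mdf')
LOG ON (NAME = Chinook_log, FILENAME = 'L:\SQL_LOGS\Chinook_log.ldf');
GO
-- Take it offline so its files can be swapped.
ALTER DATABASE Chinook SET OFFLINE;
GO
-- At the OS level, replace S:\SQL_DATA\Chinook.mdf with the MDF from the broken database.
ALTER DATABASE Chinook SET ONLINE;   -- the database will show Recovery Pending
GO
-- Finish with the EMERGENCY / SINGLE_USER / DBCC CHECKDB(REPAIR_ALLOW_DATA_LOSS) scripts shown earlier.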

If the database previously had Service Broker enabled you will not be able to simply enable it, since the database's Service Broker GUID will not match the one stored in master.sys.databases. If you look at the database properties it will appear to match, but that simply tells us that SQL Server pulls the database properties from the master database. If you try to enable Service Broker you will see the following:

ALTER DATABASE Chinook SET ENABLE_BROKER

Msg 9776, Level 16, State 1, Line 8
Cannot enable the Service Broker in database "Chinook" because the Service Broker GUID in the database (AD93393D-7798-486E-8406-1D887CE9655A) does not match the one in sys.databases (D49E0203-E48D-46CE-ABA9-3585BA768ECA).
Msg 5069, Level 16, State 1, Line 8
ALTER DATABASE statement failed.

You can fix this by running the following statement:

ALTER DATABASE Chinook SET NEW_BROKER WITH ROLLBACK IMMEDIATE;

This will reset the GUID in both places and enable the broker. You can verify it with this query:

SELECT [name]
      ,[service_broker_guid]
      ,[is_broker_enabled]
FROM [master].[sys].[databases];

Tuesday, January 10, 2017

HELP! HELP! My system databases have disappeared.

Disaster! The SAN array that hosts our system databases crashed. Now what do we do?

Fortunately, the array that hosted the C: drive where SQL Server was installed was intact, as were our drives for the data files, transaction logs, and backups. Here is how we got our server back up and running. First, you will need to copy the ISO of the appropriate version and edition of SQL Server to your VM and mount it.

Next, run a command prompt as administrator and execute the following script, changing the drive letter, the instance name (if a named instance), the Windows account to grant administrative access, and the password to fit your circumstances:

E:\>Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS="domain\YourLogin" /SAPWD=$0m3P@$$w0rD

Once it finishes rebuilding the system databases it should look similar to the following.

If you look in C:\Program Files\Microsoft SQL Server\...\MSSQL\Data you should now see system data files and transaction logs.

Go to SQL Server Configuration Manager and, in the Startup Parameters, add the -m flag to start SQL Server in single-user mode.

Now, in SQL Server Management Studio, connect a query window (not Object Explorer) and use the following query to restore the master database, altering it to fit your situation. You could also do this from SQLCMD if you prefer. Before running this, make sure that new drives with the same drive letters have been created for all system databases.

RESTORE DATABASE master
FROM DISK = 'C:\IT\master_backup_2016_09_09_140813_6577087.bak'
WITH REPLACE

Once the restore completes, SQL Server will stop. Remove the -m flag. You should also update the location of the master database files under Startup Parameters and copy the newly restored data and log files to the correct locations. You can now start SQL Server, reconnect SQL Server Management Studio, and restore msdb; just be sure the SQL Agent service isn't running. You can restore model and the distribution database if needed as well.
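For reference, a minimal sketch of restoring msdb, assuming a hypothetical backup path and the logical file names of a default installation; run RESTORE FILELISTONLY against the backup to confirm the logical names, and use MOVE if the files belong on different drives.

RESTORE DATABASE msdb
FROM DISK = 'C:\IT\msdb_backup.bak'
WITH REPLACE,
     MOVE 'MSDBData' TO 'S:\SQL_DATA\MSDBData.mdf',
     MOVE 'MSDBLog' TO 'L:\SQL_LOGS\MSDBLog.ldf';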

You should now be back in business.

Friday, July 29, 2016

Replicating from old versions of SQL Server to SQL Server 2016

At SQL Server Saturday I was asked about replicating from SQL Server 2014 to SQL Server 2016. The attendee told me that when they tried to replicate from SQL Server 2014 to SQL Server 2016 it “blew up.” This past week, as I have had time, I have been doing some testing in a sandbox environment. I was able to create a publication in SQL Server 2014 and replicate it to SQL Server 2016. I was also able to replicate from SQL Server 2012 to SQL Server 2016. The key is that, while you create the publication on the lower version, you have to go to your SQL Server 2016 subscriber to create the subscription. If you try to create the subscription from the lower-version publisher you will receive this error.

Yes, SQL Server 2016 should more than meet the requirements when replicating from SQL Server 2012 or SQL Server 2014. It simply checks for major version differences and throws this error regardless.
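If you prefer scripting to the wizard, one approach that keeps the subscription creation on the SQL Server 2016 side is a pull subscription. This is only a sketch under assumed names: SQL2014PUB (publisher/distributor), SQL2016SUB (subscriber), the Chinook database, the ChinookPub publication, and the agent credentials are all placeholders.

-- Run at the SQL Server 2016 subscriber.
EXEC sp_addpullsubscription
     @publisher = N'SQL2014PUB',
     @publisher_db = N'Chinook',
     @publication = N'ChinookPub';

EXEC sp_addpullsubscription_agent
     @publisher = N'SQL2014PUB',
     @publisher_db = N'Chinook',
     @publication = N'ChinookPub',
     @distributor = N'SQL2014PUB',
     @job_login = N'domain\ReplAgent',
     @job_password = N'$tr0ngP@ss';

-- Run at the SQL Server 2014 publisher to register the pull subscription.
EXEC sp_addsubscription
     @publication = N'ChinookPub',
     @subscriber = N'SQL2016SUB',
     @destination_db = N'Chinook',
     @subscription_type = N'pull',
     @sync_type = N'automatic';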

There is one oddity that I haven't experienced when replicating from SQL Server 2008 to SQL Server 2012: if you right-click on the subscription in SQL Server 2016 and try to View Synchronization Status you will receive this error.

So you have to view the status from the publisher.

If you need to replicate from SQL Server 2008 or SQL Server 2008 R2 to SQL Server 2016 you are out of luck. According to MSDN, “Replication is supported if each SQL Server endpoint is within two major versions of the current version of SQL Server. Consequently, SQL Server 2016 does not support replication to or from SQL Server 2008 or SQL Server 2008 R2.”

One workaround, which I tried in a sandbox environment, would be to replicate from your SQL Server 2008 or SQL Server 2008 R2 system to an intermediary SQL Server 2012 system and then from the SQL Server 2012 system to the SQL Server 2016 system.

UPDATE!

After applying CU1 to the SQL Server 2016 sandbox server I am now able to View Synchronization Status from the SQL Server 2016 subscriber. You can get the cumulative update here.

Friday, July 22, 2016

Speaking at SQL Server Saturday 539

I am speaking at SQL Server Saturday 539 in Sioux Falls, South Dakota tomorrow, July 23, 2016. My presentation is titled "My Top Ten Replication Tricks." Here are the slides. I will do a more in-depth post as I have time.

Tuesday, May 10, 2016

Upgrading your MasterDataServices database and cleaning the tblTransactions table

If you have been running Master Data Services (MDS) you may have noticed the tblTransactions table in your MasterDataServices database growing rather large. This is a situation we recently faced. Our tblTransactions was over 100 GB and by far the largest table in the database. We were faced with the challenge of trying to clean up this table. After some research we found that starting with SQL Server 2012 SP1 CU15 there would be built-in stored procedures to clean up this table and related tables while maintaining referential integrity. We found that for our purposes SQL Server 2012 SP2 CU7 was the build we needed. I scheduled a maintenance window for our test environment and ran the update. Once it was done I connected to the database, navigated to the stored procedures and…nothing. Discussing the issue with Microsoft PFE Josh Gerszweski provided the answer. With Master Data Services, running the cumulative update is not the end of the process but only the beginning.

First you run the cumulative update as normal. If the Master Data Services service is running on a different server you will need to run the cumulative update on that server as well.

If you check the stored procedures in the MasterDataServices database you will see that there are no new procedures for cleaning up the tblTransactions table.

The next step is to connect to the Master Data Services Configuration Manager. If you are not familiar with this tool you will have to determine where you have Master Data Services running. If it is on the same server as the MasterDataServices database you can look in C:\Program Files\Microsoft SQL Server\110\Master Data Services\Configuration\MDSConfigTool.exe; however, if it is running on a different server you will need to look in the previously mentioned location on that application server.

Change to the Database Configuration page and click the Select Database button.

Enter the information to connect to your MasterDataServices database.

You will notice that the configuration manager detects the MasterDataServices database needs an upgrade.

Click the Upgrade Database button...but first you should back up the MasterDataServices database.

Click Next, Next, Finish to complete the upgrade. Now go back and refresh the stored procedures in the MasterDataServices database. You will now see mdm.udpTransactionsCleanup available for you to clean up the tblTransactions table.

To make this work you need to know the Model_ID that you need to clean up and give it a CleanupOlderThanDate. It should be noted that this only takes a date, not a datetime. To find your model IDs you can use this query:

SELECT ID, Name FROM mdm.tblModel ORDER BY ID;

Once you have found your model ID and decided on a CleanupOlderThanDate you are ready to run the stored procedure:

EXEC mdm.udpTransactionsCleanup
     @Model_ID = 1,                        -- model ID from the query above
     @CleanupOlderThanDate = '2016-01-01'; -- date you picked

If your tblTransactions table is large enough you may encounter issues, such as filling the transaction log, if you try to clean up all the records in one pass, and you may need to batch the process to work down to the desired date. Once you have done that you can create a SQL Agent Job to keep the table trimmed to an acceptable size.
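For the batching itself, here is a rough sketch of one approach that steps the cleanup date forward a month at a time; the model ID, starting date, and target date are assumptions to adjust for your environment.

DECLARE @ModelID INT = 1;                 -- model ID from the query above
DECLARE @CleanupDate DATE = '2014-01-01'; -- oldest date to start stepping from
DECLARE @TargetDate DATE = '2016-01-01';  -- date you ultimately want to clean up to

WHILE @CleanupDate <= @TargetDate
BEGIN
    EXEC mdm.udpTransactionsCleanup
         @Model_ID = @ModelID,
         @CleanupOlderThanDate = @CleanupDate;

    -- Step forward one month per pass to keep each hit on the transaction log small.
    SET @CleanupDate = DATEADD(MONTH, 1, @CleanupDate);
END;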

Friday, December 11, 2015

In-memory OLTP

A brief overview of what in-memory OLTP is, how to convert tables to be memory optimized, some limitations of in-memory tables, and a few additional items of interest.

What is in-memory OLTP? It was first introduced in SQL Server 2014 under the code name Hekaton. It takes a traditional disk-based table and loads it into memory for improved performance. Although the table lives in memory, the data can be persisted to disk for durability (it is only read from disk at server startup). Durability can be disabled or set to delayed durability, which may improve latency but risks data loss in the event of a server crash or unexpected shutdown.
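As a point of reference, delayed durability is a database-level setting, while per-table persistence is chosen when the memory-optimized table is created; a minimal sketch, with a placeholder database name:

-- Allow (or use FORCED to require) delayed durability for the database.
ALTER DATABASE SalesDemo SET DELAYED_DURABILITY = ALLOWED;

-- Per-table persistence is set at creation time:
-- DURABILITY = SCHEMA_AND_DATA (the default) persists the rows to disk,
-- DURABILITY = SCHEMA_ONLY keeps only the table structure across a restart.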

Since the data lives completely in memory, latches and locks are eliminated. Indexes are also in memory and inherently covering. Fragmentation and fill factor do not apply. In-memory tables can be used in conjunction with disk-based tables, but query performance will then be limited by the performance of the disk-based tables.

In-memory tables are best for high volume, short running transactions. Typical workloads would include high volume read operations and operations where low latency is needed. Many articles mention session state management or other web and social networking applications. They are not well suited for long running transactions such as ETL or OLAP.

Converting disk based tables to in-memory

A database that hosts in-memory tables will need a new FILESTREAM file group for memory-optimized data. Fortunately, if you use the Memory Optimization Advisor in Management Studio to convert disk-based tables to in-memory, this will be added for you automatically.
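If you prefer to add the file group yourself instead of letting the advisor do it, a minimal sketch, assuming a database named SalesDemo and an S:\SQL_DATA container path:

ALTER DATABASE SalesDemo
ADD FILEGROUP SalesDemo_MemOpt CONTAINS MEMORY_OPTIMIZED_DATA;

-- The "file" for memory-optimized data is actually a container directory.
ALTER DATABASE SalesDemo
ADD FILE (NAME = 'SalesDemo_MemOpt_Container', FILENAME = 'S:\SQL_DATA\SalesDemo_MemOpt')
TO FILEGROUP SalesDemo_MemOpt;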

In-memory tables must have a primary key. This key is not a clustered index, since those are not allowed on in-memory tables, but rather a non-clustered hash index. The disk-based table you are converting does not need to have this prior to migration (it will be created during the migration process), but it must have a way to uniquely identify each row. The easiest way to do this is a single integer column, such as a row ID column. It does not even need to be a primary key prior to migration, although it can be. You can use other types of columns to identify each record; however, any string columns that will be in the primary key need a BIN2 collation (SQL Server 2014), nullable columns are not allowed in the primary key, and unique constraints are not supported.
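For context, here is roughly what the end result looks like if you script the table by hand instead of using the advisor; the table name, columns, and bucket count are assumptions, and the BIN2 collation on the string column reflects the SQL Server 2014 requirement mentioned above.

CREATE TABLE dbo.ShoppingCart
(
    CartId     INT          NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserName   VARCHAR(100) COLLATE Latin1_General_100_BIN2 NOT NULL,
    CreatedUtc DATETIME2    NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);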

To identify other items that would prevent your candidate table from being memory optimized, run the Memory Optimization Advisor from the context menu when you right-click on the candidate table.

You can see my table has a couple of issues. I changed the text data type to a character data type and dropped the default constraint. If you have to make similar adjustments, be aware that it could require code updates so the application now inserts values that used to be handled by a column default.

Now it passes all the validation checks.

Note the warnings. Once the table has been memory optimized there are a variety of things you will be unable to do on the table that you could have done on a disk-based table. You also cannot alter the table or any of its indexes once the memory-optimized table has been created.

Here is where the FILESTREAM file group will be created for you. Make any adjustments needed so it conforms to your standards and best practices. You will notice this also renames your disk-based table. If you want the data in the disk-based table copied to the memory-optimized table you will need to check the box; it is unchecked by default. Lastly, you can check the box to create the table without data durability.

This is where you identify what will be the primary key (non-clustered hash index) for the memory-optimized table. In this example I am converting the pre-existing primary key, which was an integer data type. By default it is set to create a non-clustered hash index; these are good for point lookups but not as good for ranges. You can choose a non-clustered index instead, which will work better for ranges if that is your anticipated workload. If you do go with the non-clustered hash index you will need to give it a bucket count. Here is a good explanation of how to figure out the bucket count. This cannot be adjusted once the table has been created.
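As a rough rule of thumb (my own starting point, not something the advisor calculates for you), the bucket count is sized from the number of distinct key values with some headroom for growth; SQL Server rounds whatever you supply up to the next power of two.

-- Count distinct key values in the disk-based table to size the bucket count.
-- dbo.ShoppingCart_Disk is a placeholder name for the candidate table.
SELECT COUNT(DISTINCT CartId) AS DistinctKeys
FROM dbo.ShoppingCart_Disk;

-- If this returns around 600,000, a bucket count of one to two times that
-- (for example 1,048,576, which is 2^20) is a reasonable starting point.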

Review the settings and migrate your table.

Migration complete.

If we check our database properties we can see that the FILESTREAM file group has been added.

Once the table has been created you can't adjust the bucket count, add or alter indexes, or alter the table. So what do you do if you need to change that varchar(50) to varchar(500), add a new column, or adjust the bucket count? You have to revert the table back to disk based, make your changes, then migrate the table to memory optimized once again. To revert the table back to disk based just follow these steps (a T-SQL sketch follows the list):

  1. SELECT * INTO new_table FROM memory_optimized_table
  2. Add needed indexes to new_table
  3. DROP memory_optimized_table
  4. Rename new_table to previous table name
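A rough sketch of those steps, using placeholder table and column names:

-- 1. Copy the data out of the memory-optimized table into a new disk-based table.
SELECT *
INTO dbo.ShoppingCart_Disk
FROM dbo.ShoppingCart;

-- 2. Add the needed indexes to the new table.
ALTER TABLE dbo.ShoppingCart_Disk
ADD CONSTRAINT PK_ShoppingCart_Disk PRIMARY KEY CLUSTERED (CartId);

-- 3. Drop the memory-optimized table.
DROP TABLE dbo.ShoppingCart;

-- 4. Rename the new table to the original name.
EXEC sp_rename 'dbo.ShoppingCart_Disk', 'ShoppingCart';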
Other than the previously mentioned limitations around altering the table and indexes, some of the more significant limitations, in my opinion, are: DBCC CHECKDB will skip memory-optimized tables; DBCC CHECKTABLE will fail on memory-optimized tables; check and foreign key constraints are not supported; memory-optimized tables cannot be accessed by distributed transactions; and memory-optimized tables cannot be replicated. Click here for a full list of Transact-SQL constructs that are not supported by in-memory OLTP.

What is improving with in-memory OLTP in SQL Server 2016?

ALTER TABLE after creation will be supported, natively compiled stored procedures will be alterable, and better collation support means string columns in the primary key will no longer be required to use a BIN2 collation. As SQL Server 2016 continues to release CTPs the list is likely to grow. Here is one other article on improvements to in-memory OLTP in SQL Server 2016.

One of the challenges you may face is deciding which of your current tables would benefit from memory optimization. There are tools in Management Studio to assist you.

If you haven't already, you will want to configure the Management Data Warehouse. You can access the wizard for this by expanding the Management folder in Object Explorer and right-clicking Data Collection to bring up the context menu. From this menu go to Tasks/Configure Management Data Warehouse. Pick the server to host it and give it a database name. Also map the logins and users you would like to use with the Management Data Warehouse. Once the warehouse is set up we need to put data into it.

To get to the data collection wizard you follow the same path we used for configuring the warehouse, but choose the Configure Data Collection option. Pick the server and database you just set up, and when you are selecting data collector sets be sure to include the Transaction Performance Collection. Once you have this configured you will have to give it some time to collect data…how much time is up to you.

Once you have data you will want to analyze it. Right-click on your Management Data Warehouse database and from the context menu choose Reports/Management Data Warehouse/Transaction Performance Analysis Overview. There should be links to three different analysis reports: two for tables, one for stored procedures. You will want to look at the table analysis reports.

Both of the analysis reports have a similar format. The vertical axis on the left shows the estimated performance gain, and the horizontal axis indicates the level of work required to migrate a given table. This divides the report into four quadrants. The tables you will likely want to consider migrating to memory optimized will be in the High Gain/Minimal Migration Work quadrant.

I hope this has brought some insight into in-memory OLTP, maybe answered a few questions, and maybe caused you to think of a few more than you previously had. There is a lot of information out there and one blog post can't cover it all. I am sure that as in-memory OLTP evolves and I have more exposure to it through use, I will find additional things to write about.