MS SQL 2014

Checklist for SQL Server implementations

The purpose of this article is to serve as a checklist for whenever I encounter an existing SQL Server instance or when I’m to implement a new SQL Server instance. There are a bunch of things that I want to check, verify etc. Most of these aren’t particularly difficult, but the trick is to remember to do them. There will always be more stuff to put in here, or things that you and I prefer to do differently. And this will always be a work in progress.

Before and during installation. Or check and document for existing installations.

  • No power saving in BIOS or Windows (use the High Performance power plan)
  • What components to install
  • Collation
  • Security mode
  • Password for “sa”
  • Service accounts
  • Default folders for database files, backup files
  • Configuration for tempdb (where to put it, number of files, size for each file)


  • Instant File Initialization, implement and verify
  • Lock Pages In Memory
  • Max Server Memory Setting
  • Cost Threshold for parallelism, increase to 25 or whatever is your preference.
  • Check for unexpected backups, i.e. backups that will break the chain of scheduled backups and consequently ruin the expected restore scenarios.
  • Optimize performance for ad-hoc workloads, set to true
  • Install my sp_dbinfo, sp_tableinfo and sp_indexinfo procedures.
  • Verify no auto-shrink and no scheduled shrink jobs.
  • Verify no auto-close database settings.
  • Verify page verify is checksum. If not, set to checksum.
  • Verify reasonable size for ldf files (sp_dbinfo).
  • Local Books Online (SQL Server Documentation). Update it.
  • Turn on trace flag 2371?
  • Turn on trace flag 4199? Might not be necessary for 2016, depending on database compatibility level and other settings.
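Several of the instance-level settings above can be set with T-SQL instead of clicking through SSMS. A minimal sketch (the values shown are examples, not recommendations for every server):

```sql
--Sketch only: adjust the values to your environment.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

--Max Server Memory (MB): leave headroom for the OS.
EXEC sp_configure 'max server memory (MB)', 28000;

--Cost Threshold for Parallelism: raise from the default of 5.
EXEC sp_configure 'cost threshold for parallelism', 25;

--Optimize for ad hoc workloads.
EXEC sp_configure 'optimize for ad hoc workloads', 1;

RECONFIGURE;
```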


  • Database file location
  • Database file autogrow set in MB, not percentage
  • Database file autogrow increments reasonable (not too small, not too large)
  • Verify proper recovery model for each database. Simple if we aren’t supposed to take log backups. Else full, or possibly bulk-logged.
  • Verify no auto-shrink and no scheduled shrink jobs.
  • Verify no auto-close database settings.
  • Verify page verify is checksum. If not, set to checksum.
  • Verify reasonable size for ldf files (sp_dbinfo).
  • Not too many Virtual Log Files (max roughly 200).
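Most of the per-database checks above can be verified with one query against sys.databases (a sketch; column names as in SQL Server 2014):

```sql
SELECT name,
       recovery_model_desc,      --SIMPLE / FULL / BULK_LOGGED
       page_verify_option_desc,  --should be CHECKSUM
       is_auto_shrink_on,        --should be 0
       is_auto_close_on          --should be 0
FROM sys.databases
ORDER BY name;
```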


  • Set tempdb initial size to reasonable size.
  • Multiple tempdb data files, reasonable initial size and autogrow on all.
  • Turn on trace flag 1118?
  • Turn on trace flag 1117?
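A sketch of a tempdb file configuration, assuming equally sized data files of 2 GB each with a fixed 512 MB autogrow (the file name, path and sizes below are examples, not prescriptions):

```sql
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);

--Add more equally sized data files, typically one per core up to 8.
--'T:\tempdb2.ndf' is an assumed path; use your own tempdb drive.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);
```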

SQL Server Agent

  • Implement maintenance routines. I typically use Ola Hallengren’s solution.
  • Increase job history to 200 rows per job and 50000 rows total.
  • Implement Database Mail. Test and verify. Configure Agent to use the profile, restart the Agent service. Create a test job that only sends mail on execution and verify the mail configuration.
  • Install my MailAfterJob procedure.
  • Implement SQL Server Agent Event Alerts, as per my script.
  • Possibly implement Master and Target server environment for Agent (MSX-TSX).
  • Implement routine to trim mail history tables in msdb.
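The job history sizes mentioned above can be set with the long-standing (though undocumented) msdb procedure; a sketch:

```sql
EXEC msdb.dbo.sp_set_sqlagent_properties
     @jobhistory_max_rows = 50000,
     @jobhistory_max_rows_per_job = 200;
```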

Windows updates cause SQL startup issues due to “TCP port is already in use” errors

A regression in the TCP/IP functions that manage the TCP port pool was introduced in the July 10, 2018 Windows updates for Windows 7/Server 2008 R2 and Windows 8.1/Server 2012 R2.

This regression may cause the restart of the SQL Server service to fail with the error, “TCP port is already in use”. We have also observed this issue preventing Availability Group listeners from coming online during both planned and unplanned failover events. When this occurs, you may observe errors similar to the following in the SQL Server ERRORLOGs:

Error: 26023, Severity: 16, State: 1.
Server TCP provider failed to listen on [ <IP ADDRESS> <ipv4> <PORT>]. Tcp port is already in use.
Error: 17182, Severity: 16, State: 1.

If the issue is impacting an Availability Group listener, you may also observe the below error in addition to the above:

Error: 26075, Severity: 16, State: 1.
Failed to start a listener for virtual network name ‘<LISTENER NAME>’. Error: 10048.

Additionally, you may also observe the following errors in the Windows System logs:

The SQL Server (<INSTANCE NAME>) service entered the stopped state. 
The SQL Server (<INSTANCE NAME>) service terminated with the following service-specific error:  Only one usage of each socket address (protocol/network address/port) is normally permitted.


The below tables list the KB articles for the patches that introduced the regression and the KB articles for their correlating hotfixes.


For Windows 7/Server 2008 R2

  • Introduced by: July 10, 2018—KB4338818 (Monthly Rollup) → Fixed by: July 18, 2018—KB4338821 (Preview of Monthly Rollup)
  • Introduced by: July 10, 2018—KB4338823 (Security-only update) → Fixed by: Improvements and fixes – Windows 7 Service Pack 1 and Windows Server 2008 R2 Service Pack 1 (KB4345459)

For Windows Server 2012

  • Introduced by: July 10, 2018—KB4338830 (Monthly Rollup) → Fixed by: July 18, 2018—KB4338816 (Preview of Monthly Rollup)
  • Introduced by: July 10, 2018—KB4338820 (Security-only update) → Fixed by: Improvements and fixes – Windows Server 2012 (KB4345425)

For Windows 8.1/Server 2012 R2

  • Introduced by: July 10, 2018—KB4338815 (Monthly Rollup) → Fixed by: July 18, 2018—KB4338831 (Preview of Monthly Rollup)
  • Introduced by: July 10, 2018—KB4338824 (Security-only update) → Fixed by: Improvements and fixes – Windows 8.1 and Server 2012 R2 (KB4345424)

Please note that the build-specific updates do not have a correlating hotfix-only patch, therefore each build only has one applicable patch to address the regression as noted in the table below.


  • Introduced by: July 10, 2018—KB4338819 (OS Build 17134.165) → Fixed by: July 16, 2018—KB4345421 (OS Build 17134.167)
  • Introduced by: July 10, 2018—KB4338825 (OS Build 16299.547) → Fixed by: July 16, 2018—KB4345420 (OS Build 16299.551)
  • Introduced by: July 10, 2018—KB4338826 (OS Build 15063.1206) → Fixed by: July 16, 2018—KB4345419 (OS Build 15063.1209)
  • Introduced by: July 10, 2018—KB4338814 (OS Build 14393.2363) → Fixed by: July 16, 2018—KB4345418 (OS Build 14393.2368)
  • Introduced by: July 10, 2018—KB4338829 (OS Build 10240.17914) → Fixed by: July 16, 2018—KB4345455 (OS Build 10240.17918)

There can be other causes of the “TCP port is already in use” errors preventing SQL resources from starting or coming online which are not due to the regression mentioned above. In those cases, a useful first step is to identify whether any other process(es) may be using the port meant for your SQL instance(s).
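On Windows, `netstat` is one way to check whether another process is already listening on the port (a sketch; 1433 is the default-instance port, substitute your own):

```
netstat -ano | findstr :1433
:: The last column of the output is the owning process ID; look it up with:
tasklist /FI "PID eq <pid>"
```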

Let me know if you have any further questions; your comments will be a learning point for me.

— Microsoft Certified Solutions Associate (MCSA)

Tlog File

You probably read this because you have a database with a large transaction log file (.ldf).

This article covers:

  • How to manage the ldf file
  • How to make the ldf file smaller
  • Transaction handling
  • Transaction logging
  • Various backup types and restore scenarios etc.

How to manage the ldf file
SQL Server logs every modification to the ldf file, for several reasons. You have two options for how to make sure that the ldf file doesn’t grow indefinitely:

  • Set the recovery model for the database to simple. Do this if you don’t want to take transaction log backups.
  • Set the recovery model to full. Do this if you do want to take transaction log backups.

Yes, it really is that simple, so re-read the points above! Consequently, there are two ways to go wrong:

  • Have simple recovery and attempt to do a log backup. Result: you will get an error message from the BACKUP LOG command.
  • Have full recovery and don’t do log backups. Result: the ldf file will just grow and grow, endlessly! This is a very common cause of large ldf files.

When you do a log backup, SQL Server will mark space in the ldf file as “re-usable”, so that space can be re-used by subsequent log records that are produced. This is often referred to as “truncate the log”, or “empty the log”. If you have simple recovery, then it is not your responsibility to “empty the log”, SQL Server will do this by itself. However, old open transactions or long running transactions will set a limit for how much can be emptied in the log. So, it is perfectly possible to have large ldf files in simple recovery model.

Your recovery requirements
It is imperative that you match your backup and restore strategy to your recovery requirements. This cannot be stressed enough.
How much data can you afford to lose? Sometimes referred to as Recovery Point Objective (RPO).
For how long can you accept your database to be unavailable? Sometimes referred to as Recovery Time Objective (RTO).
This is a whole topic in itself. Here’s a good primer on the topic of RPO and RTO. Don’t just set the recovery model to simple in order to keep the ldf file small! Make sure you have a recovery model, backup strategy and restore strategy that supports your RPO and RTO.

How large should the ldf file be?
How long is a piece of string? The technically correct answer to above question is:

Large enough to accommodate the log records produced between your log backups (or whenever SQL Server itself will empty the log, in simple recovery model). Long running transactions taken into account.

The problem with the above is that the answer isn’t very helpful. So allow me to generalize a little bit:
In simple recovery model, you can frequently get by with a small log file (about 5-10% of the database size). But you still need to accommodate your largest transactions. So if you find that the ldf file keeps growing to a certain size, then just leave it at that size!
In full recovery, you might need an ldf file as large as the amount of data you have. Say you have a data file of 100 GB, filled to 85 GB. Then don’t be surprised if you need an ldf file of about 85 GB. In the end, it depends on what you do in the database.

But if you, for instance, rebuild all indexes once a week, then you have modified all data in the database and those modifications have been logged to the ldf file. (You can be smarter and only rebuild fragmented indexes, using for instance Ola Hallengren’s maintenance procedures.) Rebuilding indexes is only one example; in general, you want to watch out for batches (typically running nights/weekends). It is uncommon to produce a massive amount of log records between two log backup occasions during normal daytime work. How frequently you back up your log is of course also a factor; common frequencies are once every 10 minutes or once every hour.

How to make the ldf file smaller
First you want to determine whether there are any “blockers” that prevent SQL Server from re-using space inside the ldf file.
Execute the below command from a query window:
SELECT name, log_reuse_wait_desc FROM sys.databases

Find your database and see what you have in the log_reuse_wait_desc column for the database.

If it is NOTHING or LOG_BACKUP, then just keep reading.
If it is something else, then you have some blocker that prohibits re-use (and in the end also shrinking) of the ldf file. You need to sort this out first. A good Internet search engine is your friend here. If it is REPLICATION, then probably the log reader has stopped processing log records. Sort this out by troubleshooting why it stopped, or remove replication if it isn’t supposed to be there (some leftover).

Now that you know how to manage the ldf file, you can determine whether you think the ldf file is “too large” and you want to shrink it. The ldf file does not shrink by itself, or when you produce a log backup. To shrink an ldf file, you use a command called DBCC SHRINKFILE (documented here). You can do this in SSMS by right-clicking the database and selecting “Tasks”, “Shrink” and “Files”. I recommend that you script the command into a query window and execute it from there. That allows you to copy the command you executed into forum posts etc. in case you need more help – much better than trying to explain how you clicked in the GUI (and it also makes it easier to include possible error messages).

If you are in simple recovery model
Just shrink the file! If the file doesn’t shrink to the size you want, then try again, and again. If it still doesn’t shrink to the size you want, then read this (the “Shrinking of transaction log file” section).

If you are in full recovery model
… and this is where you want to be, then you have to make a decision:

Is it OK to break the transaction log backup chain?
By this we mean the chain of log backups you may have. If you don’t have any prior log backups (perhaps not even a prior database backup), then it is OK to break the chain of log backups. If you do have earlier log backups and want to be able to restore from them up to any point in time then it isn’t OK to break the chain of log backups.
OK to break the chain of log backups. This is easiest:
  1. Set recovery model to simple.
  2. Do the shrink according to the steps above (for a database in simple recovery model).
  3. Set recovery model to full.
  4. Perform a full database backup, so your scheduled log backups have somewhere to begin from.

Not OK to break the log backup chain. Then you back up the log (which can produce a huge log backup file, and consequently take some time), and then shrink the file. If the file doesn’t shrink to the size you want, then back up the log and shrink again, and again. If it still doesn’t shrink to the size you want, then read this (the “Shrinking of transaction log file” section).

Commands to do the actual shrink:

Simple recovery model

USE dbname
CHECKPOINT
--First param below is fileno for log file, often 2. Check with sys.database_files
--Second is desired size, in MB (500 here is only an example)
DBCC SHRINKFILE(2, 500)

Now repeat the above commands as many times as needed!

Full or bulk_logged recovery model

USE dbname
BACKUP LOG dbname TO DISK = 'C:\x\dbname.trn'
--First param below is fileno for log file, often 2. Check with sys.database_files
--Second is desired size, in MB (500 here is only an example)
DBCC SHRINKFILE(2, 500)

Now repeat the above commands as many times as needed!
What you end up doing is emptying the log (CHECKPOINT or BACKUP LOG) and running DBCC SHRINKFILE several times, so that SQL Server can move the head of the log to the beginning of the file and so that the end of the file becomes unused. Investigate the layout of the log file using DBCC LOGINFO in between.

If you have loads of VLFs (many rows returned from DBCC LOGINFO), you probably had a small file size for the log initially and then had lots of small autogrowths. Having lots of VLFs is a performance hit, especially for database recovery and batch modifications. If this is your case, consider shrinking the log file to a very small size and then expanding the file to some comfortably larger size.
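A sketch of that shrink-then-regrow approach for a log file with too many VLFs (the file number, logical file name and target size below are examples, not prescriptions):

```sql
--Shrink the ldf file (file number 2 here) as small as possible...
DBCC SHRINKFILE(2, 1);

--...then grow it back to a comfortable size in one or a few large steps,
--so SQL Server creates a small number of large VLFs.
--NAME below is the logical log file name; check it with sys.database_files.
ALTER DATABASE dbname MODIFY FILE (NAME = dbname_log, SIZE = 8192MB);
```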

Have a good day.

Implementing & Configuring Failover Clustering in Windows Server 2012 R2

Step by Step

Failover clusters in Windows Server 2012 provide a high-availability solution for many server roles and applications.

By implementing failover clusters, you can maintain application or service availability if one or more computers in the failover cluster fails.

There is a lot of information you can digest on Failover Clustering; for more information, please see the official documentation.

For this Failover Clustering demo, I will be using 4 VMs: a domain controller and 3 member servers.

The 1st step is to configure the Failover Cluster; in this step I will connect the cluster nodes to the iSCSI targets.

—> Here it is very simple <—

“Your organization has important applications and services that the company wants to make highly available.

Some of these services cannot be made redundant by using NLB, so you decide to implement failover clustering.

Because iSCSI storage is already in place, you decide to use the iSCSI storage for failover clustering.

First, you will implement the core components for failover clustering and validate the cluster, and then you will
create the failover cluster.”

  1. For this configuration I will be using my OSI-SVR3 member server. On OSI-SVR3, open Server Manager, click Tools, and then click iSCSI Initiator.

2. In the Microsoft iSCSI dialog box, just click Yes.

3. On the iSCSI Initiator Properties interface, click the Discovery tab and then click Discover Portal.

— The Internet SCSI (iSCSI) initiator is used to establish a connection with an iSCSI target.

4. In the IP address or DNS name box, type the address of the OSI-SVR1 server (the iSCSI target), and then click OK.

5. Next, click the Targets tab, and click Refresh.

— In the Targets list, select iqn.1991-05…, and then click Connect.

6. Then click Add this connection to the list of Favorite Targets, and then click OK 2 times.

Repeat steps 1 – 4 on your SVR4 server…

7. Switch to SVR3 and open Computer Management, and make sure that you have a few disks already attached to your server to simulate this demo. For this demo I have 3 VHDs that I attached previously on the SVR3 server, all 3 disks having 30 GB of space each; choose your own sizes.

8. Switch to SVR4 and make sure that you have the same disk configuration.

–> Make sure that all the disks are online; to check, right-click Disk 1, and then click Online <–

2nd : Let’s install the Failover Clustering feature on our SVR2 server…

  1. Open Server Manager and continue with Add Roles and Features until you reach the Select features interface, then click Failover Clustering and continue with the installation.

2. Next, on the Confirm installation selections interface, click Install.

3. Once the installation completes, click Close.

Repeat steps 1 – 3 on the SVR3 server…

4. Now we need to validate both servers for failover clustering. On the SVR3 server, open Failover Cluster Manager.

5. In the right pane of Failover Cluster Manager, click Validate Configuration.

6. In the Validate a Configuration Wizard interface, click Next.

7. On the Select Servers or a Cluster interface, add our SVR2 & SVR3 servers and then click Next.

8. On the Testing Options interface, click Run all tests (recommended) and then click Next.

9. On the Confirmation interface, click Next.

10. This process might take a couple of minutes or so; just wait.

11. Once the validation tests finish, on the Summary page, click View Report. If there are errors, fix them and run the tests again.

12. Verify that all tests completed 🙂

3rd : Our next step is to create the failover cluster…

  1. In the Failover Cluster Manager, click Create Cluster.

2. On the Select Servers interface, make sure you add SVR2 & SVR3 to the selected servers, and then click Next.

3. In the Access Point for Administering the Cluster interface, in the Cluster Name box, type OSICluster1.

–> Under Address, type the cluster IP address, and then click Next.

4. In the Confirmation box, verify the information, and then click Next.

5. On the Summary interface, click Finish.

Cluster Shared Volumes (CSV) enable multiple nodes in a failover cluster to simultaneously have read-write access to the same LUN (disk) that is provisioned as an NTFS volume.

With CSV, clustered roles can fail over quickly from one node to another node without requiring a change in drive ownership, or dismounting and remounting a volume.

CSVs also help simplify the management of a potentially large number of LUNs in a failover cluster.

  1. On the SVR3 server, in the Failover Cluster Manager console, expand the cluster, expand Storage, and then click Disks.

–> Locate a disk that is assigned to Available Storage. You can see this in the Assigned To column.
–> Right-click that disk, and then click Add to Cluster Shared Volumes.
–> Verify that the disk is assigned to Cluster Shared Volume.

Our next step is to deploy and configure a highly available file server.

1. On the SVR3 server, open Server Manager, click Add Roles and Features, continue to Select server roles, select File Server, and then click Next 2 times.

On the Confirmation interface, click Install.

Next, switch back to the SVR3 server. In the Failover Cluster Manager, expand the cluster, right-click Roles, and then select Configure Role.

On the Select Role interface, select File Server, and then click Next.

On the File Server Type interface, click File Server for general use, and then click Next.

On the Client Access Point interface, in the Name box, type OSI-FS; in the Address box, type the IP address for the access point, and then click Next.

On the Select Storage interface, select the Cluster Disk 3 check box, and then click Next.

On the Confirmation interface, verify the information, and then click Next.

After the clustered role is added successfully, click Close; we have successfully configured a highly available file server.





Transparent Data Encryption (TDE) – All User Databases

Applying Transparent Data Encryption (TDE) is one of the important tasks we need to do across enterprise-level databases.

The script below will apply encryption to all user databases on a particular instance. The certificate needs to be created on all secondary servers in an Always On environment.

USE [master]

--Create Master Key
CREATE MASTER KEY ENCRYPTION
BY PASSWORD='Azdb_pRd@2017';

--Create Certificate (the name TDE_Cert below is a placeholder;
--the original certificate name was not preserved in this post)
CREATE CERTIFICATE TDE_Cert
WITH SUBJECT = 'TDE Certificate';

--Create Database Encryption Key on each user database and turn encryption on
DECLARE @sql varchar(2000),
@dbname varchar(64)

DECLARE apply_tde CURSOR FOR
SELECT d.name
FROM sys.databases d
WHERE d.database_id >4 --skip the system databases

OPEN apply_tde
FETCH NEXT FROM apply_tde INTO @dbname
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @sql ='USE [@dbname]
    CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDE_Cert'
    SELECT @sql = REPLACE(@sql,'@dbname',@dbname)
    PRINT 'Encryption Started --'+@sql
    EXEC (@sql)

    SELECT @sql ='ALTER DATABASE [@dbname] SET ENCRYPTION ON'
    SELECT @sql = REPLACE(@sql,'@dbname',@dbname)
    PRINT 'Encryption Finished --'+@sql
    EXEC (@sql)

    FETCH NEXT FROM apply_tde INTO @dbname
END
CLOSE apply_tde
DEALLOCATE apply_tde

--Back up the certificate and its private key, so it can be re-created on other servers
BACKUP CERTIFICATE TDE_Cert
TO FILE = 'D:\temp\AZ_TDE_PRD'
WITH PRIVATE KEY (
    FILE = 'D:\temp\AZ_TDE_PRD_PVT',
    ENCRYPTION BY PASSWORD = 'Azdb_pRd@2017'
);


To check the encryption state of each database, query sys.dm_database_encryption_keys:

SELECT
DB_NAME(database_id) AS 'Database Name'
,create_date AS 'Create Date'
,set_date AS 'Set Date'
,(CASE
WHEN Encryption_State='0' THEN 'No Database Encryption'
WHEN Encryption_State='1' THEN 'Unencrypted'
WHEN Encryption_State='2' THEN 'Encryption In Progress'
WHEN Encryption_State='3' THEN 'Encrypted'
WHEN Encryption_State='4' THEN 'Key Change In Progress'
WHEN Encryption_State='5' THEN 'Decryption In Progress'
WHEN Encryption_State='6' THEN 'Protection Change In Progress'
END) AS 'Encryption State'
,key_algorithm AS 'Algorithm Used'
,key_length AS 'Key Length'
,encryptor_thumbprint AS 'Encryptor Thumbprint'
,percent_complete AS 'Percent Complete'
,encryptor_type AS 'Encryptor Type'
FROM sys.dm_database_encryption_keys

After executing the above script, you need to copy the certificate backup to the secondary server in an Always On environment and create the certificate on the secondary server.

This is also applicable if you are implementing transparent database encryption in production and restoring the databases in a test/development environment.

--Re-create the certificate on the secondary server from the backup files
--(TDE_Cert is a placeholder; use the same name as on the primary)
CREATE CERTIFICATE TDE_Cert
FROM FILE = 'D:\temp\AZ_TDE_PRD'
WITH PRIVATE KEY (
    FILE = 'D:\temp\AZ_TDE_PRD_PVT',
    DECRYPTION BY PASSWORD = 'Azdb_pRd@2017'
);


For more information: Transparent Data Encryption (TDE) encrypts SQL Server, Azure SQL Database

sys.dm_database_encryption_keys (Transact-SQL)

Q: What version of SQL Server supports TDE?

A: TDE requires SQL Server 2012 Enterprise edition. It’s not available in SQL Server 2012 Standard or Business Intelligence editions. TDE is also available in SQL Server 2008 and SQL Server 2008 R2 Datacenter and Enterprise editions.

Q: Does TDE prevent the security administrator or database administrator from seeing the data in the database?

A: No. TDE protects the data at rest, but an authorized user such as a security administrator or a database administrator can access the data in a TDE-encrypted database. To prevent an SA or DBA from accessing selected parts of the data, you need to use application-level encryption.

Q: How do you enable TDE?

A: To use TDE, first you need to create a master key using the CREATE MASTER KEY T-SQL statement. Next, you need to create a certificate using the CREATE CERTIFICATE command.

Then you need to use the CREATE DATABASE ENCRYPTION KEY statement to create the Database Encryption Key (DEK). Finally, you need to use the ALTER DATABASE SET ENCRYPTION ON statement to actually encrypt the user database with the DEK. As soon as you use TDE for a user database, TempDB is automatically encrypted as well.

Q: Is there a performance impact for using TDE?

A: Yes, some performance overhead is involved in using TDE. The encryption and decryption processes do require additional CPU cycles. The overhead for using TDE ranges from about 3 percent to 30 percent, depending on the type of workload.

SQL Server instances with low I/O and low CPU usage will have the least performance impact. Servers with high CPU usage will have the most performance impact.

Q: Does TDE encrypt the data stream sent to the client?

A: No. TDE encrypts the data stored on disk at the page and log level. The data that’s sent across the wire to the client connections isn’t encrypted.

If you want to encrypt the link between SQL Server and the client systems, then you need to use Secure Sockets Layer (SSL). You can find out more information about how to use SSL at Encrypting Connections to SQL Server.



System Center query performance or general slowness – Database Optimizations

Sometimes we get wrapped up in day-to-day operations and don’t notice that an application such as System Center can quickly be brought to its knees. All of a sudden you are left with an application that is cumbersome or hard to use. The slowness could have various causes, but a good place to start is always the hardware and database layers.

Recently I came across a poorly performing application and noticed that it would run slowly both from a workstation and from the database server. I ran the first query below to check for fragmentation of the database, then ran a reindex followed by an update of the statistics.

Important: Take utmost care with these queries and monitor the performance of the DB server\instance while they run. I typically shut down all services\connections to the database server while the optimizations take place.

1. Open SQL Server Management Studio, run the query below to check fragmentation:

SELECT a.index_id, b.name, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (NULL, NULL, NULL, NULL, NULL) AS a
JOIN sys.indexes AS b
  ON a.object_id = b.object_id AND a.index_id = b.index_id
WHERE avg_fragmentation_in_percent > 30


2. Then run this command on the tables (may take a while):

EXEC sp_MSForEachtable 'DBCC DBREINDEX (''?'')'


3. Then, run this command to update the statistics (may take a while):

EXEC sp_updatestats


MS SQL Server R Services

So, here we go. I think we have a very interesting train of thought here. First of all, the first question – whether R benefits SQL Server at all – is a bit of a strange question. R does not have to benefit SQL Server. R is a standalone language with all of its perks and downsides, and it seems to work quite well in its own niche.

R Services (In-Database) provides a platform for developing and deploying intelligent applications that uncover new insights. You can use the rich and powerful R language and the many packages from the community to create models and generate predictions using your SQL Server data. Because R Services (In-Database) integrates the R language with SQL Server, you can keep analytics close to the data and eliminate the costs and security risks associated with data movement.

R Services (In-Database) supports the open source R language with a comprehensive set of SQL Server tools and technologies that offer superior performance, security, reliability and manageability. You can deploy R solutions using convenient, familiar tools, and your production applications can call the R runtime and retrieve predictions and visuals using Transact-SQL. You also get the ScaleR libraries to improve the scale and performance of your R solutions.
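Calling the R runtime from Transact-SQL is done through sp_execute_external_script; a minimal sketch (requires the 'external scripts enabled' configuration option, and the table dbo.Sales with a SalesAmount column is an assumed example):

```sql
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;

--Compute a mean in R over a T-SQL result set and return it as a result set.
EXEC sp_execute_external_script
     @language = N'R',
     @script = N'OutputDataSet <- data.frame(mean_value = mean(InputDataSet$val))',
     @input_data_1 = N'SELECT CAST(SalesAmount AS float) AS val FROM dbo.Sales'
WITH RESULT SETS ((mean_value float));
```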

During SQL Server setup, you can install both server and client components.

R Services (In-Database): Install this feature during SQL Server setup to enable secure execution of R scripts on the SQL Server computer. Setup configures the database engine to allow R scripts, and a new service, the SQL Server Trusted Launchpad, is created to manage communications between the R runtime and the SQL Server instance.

Microsoft R Server (Standalone): A distribution of open source R combined with Microsoft’s enhancements. Both R Services (In-Database) and Microsoft R Server (Standalone) include the base R runtime and packages, plus the ScaleR libraries for enhanced connectivity and performance.

Microsoft R Client is available as a separate, free installer. You can use Microsoft R Client to develop solutions that can be deployed to R Services running on SQL Server, or to Microsoft R Server running on Windows, Teradata, or Hadoop.

SQL Server, on the other hand, is a fairly good RDBMS system, which has everything a user might need from ACID to supporting the storage of fairly large data volumes. What SQL Server lacks, and in my opinion will not gain anytime soon, is a sophisticated way to crunch data for analysis and a way to do machine learning, not even to start speaking of deep learning. It is just not designed for this. As a matter of fact, many RDBMS systems are just not designed for this.

The other problem is that SQL Server costs. A lot, if you don’t know what you are doing or if you know what you are doing but have a lot of data and big intentions.

And now we get to the second discussion – the one on the analytical part of the RDBMS. It does not have to be hard to do, it is just matter of some strategy. Think about it this way – how often does it happen for a user to be needing to analyze ALL of their data? Not really so often. And the actual RDBMS system should not be scaled for this either, since most of them either can’t scale up, or cost a lot to do so. (Yep, some of the vendors make big money out of this – think about SQL Server, for example – functional columnstores in PDW, and just silly remains of functionality in SQL Server)

So, here is an idea: keep your data in RDBMS, if you need to. Fine, RDBMS will guarantee you transactions, recovery, speed and storage. For the analytical part do this: get an appliance server – it is like a toaster, it does one thing but it does it well – and make sure you load plenty of memory on it. Memory is cheap.

Then do some data cleaning and pre-aggregations in your RDBMS – after all, you are paying crazy amounts of money for it, so you might as well make it work for it. Then load the prepared data into the other cheap server, in memory with R, and do the real heavy lifting there. If needed, return some results to be written to your RDBMS for later use.

The idea is fairly simple – just give the data owners a fairly cheap way to crunch their data and they will be happy. Sure, you can wait for the vendors to deliver this, but knowing Microsoft, it will take a long while before it’s useful, and with other providers you still have to do your own heavy lifting. Might as well do the heavy lifting here and now, and get some competitive advantage (instead of waiting for it to come to you later).

Important: Microsoft R Server (Standalone) is a separate option designed for using the ScaleR libraries on a Windows computer that is not running SQL Server.
However, if you have Enterprise Edition, we recommend that you install Microsoft R Server (Standalone) on a laptop or other computer used for R development, to create R solutions that can easily be deployed to an instance of SQL Server that is running R Services (In-Database).

Here are some useful links:


What is new in SQL Server Machine Learning Services?

SQL Server R Services





What are the steps involved in a simple upgrade to SQL Server 2014? I am currently running an older version of SQL Server and want to do an in-place upgrade.


In this tip, we will demonstrate an upgrade to SQL Server 2014.

Note: Before starting make sure you are connected to a network and have Internet access.

After you launch the SQL Server setup, the SQL Server Installation Center window will display. Along the left side of the window are the categories showing how the SQL Server Installation Center is organized. On the right side of the window are different actions the installer can take. Notice that some of the items allow the installer to view documentation while other items will make changes to the system. By default the Planning category is displayed.

Quick Guide to Upgrading to SQL Server 2014

Clicking on Installation on the left side will display the different installation options. For this tip, we will click on “Upgrade from SQL Server 2005, SQL Server 2008, SQL Server 2008 R2 or SQL Server 2012”. This will launch the Upgrade to SQL Server 2014 application.

The following pop-up window might appear several times throughout the installation process.

The Upgrade to SQL Server 2014 application lists the steps it will follow on the left side of the window. Enter the product key and click “Next >”.

Review the license terms, click on “I accept the license terms”, and then click “Next >”.

The upgrade application will download, extract and install files needed to carry out the installation process. Click on “Next >” to continue.

Select the SQL Server instance to modify. Click on “Next >” to continue.

Sometimes a “Please Wait…” pop-up box might appear.

The next step is the feature selection. In the example shown below, all of the features will be upgraded. Click on “Next >” to continue.

The Instance Configuration screen allows the installer to specify the instance ID. This screen will also display other installed instances. In this example, the MSSQLSERVER instance on SQL Server 2012 will be upgraded to SQL Server 2014.

The Server Configuration screen shows the services to be installed, the service account name and the Startup Type. If you are installing SQL Server 2014 on a device with limited resources then the Startup Type should be set to Automatic (Delayed Start) instead of Automatic. Click on “Next >”.

The Full-text Upgrade screen provides three Full-Text upgrade options (import, rebuild and reset).

The Feature Rules screen will execute a rule check to make sure the upgrade process will complete successfully.

The Ready to Upgrade screen displays all of the features and prerequisites to be installed or upgraded. At this point, the installer can still go back to make changes or quit the process. Click on “Upgrade” to begin the upgrade of the SQL Server 2014 components.

The upgrade progress bar tracks the status of the upgrade.

Upon completion of the installation, click on “Close” to exit the Upgrade to SQL Server 2014 application.

The SQL Server Installation Center can now be closed.



MS SQL database backup file


A database backup keeps your data secure and protected; it lets you restore the inaccessible files of the main database when corruption or damage occurs. But what if the backup itself is corrupt when you attempt to restore the database from it? There can be multiple reasons behind such a disaster. In this write-up you will learn about the causes of, and solutions for, the situation where a SQL Server user faces a corrupt database backup and is not able to restore their databases.

SQL backup files are essentially a replica of your original SQL database and can be located in different places on the system. There can be multiple reasons for an inaccessible backup file. Here are some of the most common causes of damaged SQL BAK files:

— Virus attack
— Abrupt system shutdown
— Use of a wrong driver
— Bad sectors on the system’s hard disk
— Sudden removal of selected tables, records, or procedures
— Erratic hard disk behavior
— Improper shutdown of the application
— Wrong database synchronization
— System crash
— Corrupt database system rules and tables

The most common error message during restoration of a database is: ‘Backup or restore operation terminating abnormally.’ A backup restoration error occurs when a filemark in the backup device could not be read. There are several possible causes of a filemark error. The most common are:

— A media failure may arise on the same device where the backup is stored
— A write failure may occur while creating the backup file
— Loss of connectivity may arise while creating a network backup
— A failure in the Input/Output path occurs in the disk just after successful write to the disk

Manual solution:
After a backup restore error, the first thing to do is check whether all of the backup sets have issues or only some of them. It may be that only some backup sets are damaged and are causing the restore error. In order to retrieve the other backup sets from the device, you need to specify the file number. If there are multiple backup sets on a single device, you can run the following query to determine which ones are usable:
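The query itself is not shown in the original text; the standard way to list all backup sets on a device is RESTORE HEADERONLY:

```sql
-- Lists every backup set on the device; the Position column
-- is the file number to pass to RESTORE ... WITH FILE = <n>.
RESTORE HEADERONLY FROM DISK = N'<Backup Location>';
```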


If you find a usable set on the disk, copy it to another drive and try to restore it with the help of the SQL restore commands. Before using them, it helps to understand the following essential backup concepts:

Backup Types

Transaction Log Truncation

Formatting Backup Media

Working with Backup Devices and Media Sets

Restoring SQL Server Backups

To recover a database, use the following command. This will put your database in the “restoring” state:

RESTORE DATABASE <DB Name> FROM DISK = '<Backup Location>' WITH FILE = <FileNumber>, NORECOVERY

— Replace <FileNumber> with the number of the backup set that you want to restore.

The following command takes the database out of the ‘restoring’ state and makes it available to end users.

RESTORE LOG <DB Name> FROM DISK = '<Backup Location>' WITH RECOVERY

“Before you can create the first log backup, you must create a full backup.”
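A minimal sequence that satisfies this requirement might look like the following (names and paths are placeholders):

```sql
-- A full backup must exist before the first log backup can be taken.
BACKUP DATABASE <DB Name> TO DISK = N'<Full Backup Location>';
-- Only now can the transaction log be backed up.
BACKUP LOG <DB Name> TO DISK = N'<Log Backup Location>';
```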

The commands mentioned above are used to restore a corrupt backup file of a SQL database. However, these solutions provided by Microsoft are not applicable in cases of deep corruption. In order to restore a highly damaged or corrupt SQL backup, you can always choose third-party SQL backup recovery software. These professional utilities are designed to restore data from a corrupt SQL backup (.BAK) file.

BACKUP DATABASE AdventureWorks2012
TO DISK = 'X:\SQLServerBackups\AdventureWorks1.bak'
WITH MEDIANAME = 'AdventureWorksStripedSet0',
     MEDIADESCRIPTION = 'Striped media set for AdventureWorks2012 database';

Third-party applications have functions to restore a SQL backup file damaged for any of the reasons mentioned above. Before buying a professional backup recovery tool, you should pick the most reliable one. For that, use the online demo versions of the backup recovery applications to test their efficiency.

For more information, see: Full File Backups (SQL Server) and Back Up Files and Filegroups (SQL Server).

For more information and examples, see SQL Server Backup and Restore with Microsoft Azure Blob Storage Service. For a tutorial, see Tutorial: SQL Server Backup and Restore to Windows Azure Blob Storage Service.
Security: Beginning with SQL Server 2012, the PASSWORD and MEDIAPASSWORD options are discontinued for creating backups. It is still possible to restore backups created with passwords.

Permissions: BACKUP DATABASE and BACKUP LOG permissions default to members of the sysadmin fixed server role and the db_owner and db_backupoperator fixed database roles.

Ownership and permission problems on the backup device’s physical file can interfere with a backup operation. SQL Server must be able to read and write to the device; the account under which the SQL Server service runs must have write permissions. However, sp_addumpdevice, which adds an entry for a backup device in the system tables, does not check file access permissions. Such problems on the backup device’s physical file may not appear until the physical resource is accessed when the backup or restore is attempted.
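For reference, sp_addumpdevice merely registers the device in the system tables, so any permission problem only surfaces later, when the device is actually used (the logical device name below is illustrative):

```sql
-- Registers a logical backup device; no file-permission check happens here.
EXEC sp_addumpdevice 'disk', 'AdvWorksBackup',
     'X:\SQLServerBackups\AdventureWorks1.bak';
-- A permission error, if any, appears only when the device is accessed.
BACKUP DATABASE AdventureWorks2012 TO AdvWorksBackup;
```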

Let me know if you have any further questions – your comments will be a learning point.

— Microsoft Certified Solutions Associate (MCSA)



MS SQL Server 2014 hotfix builds. To check the current build, use SELECT @@VERSION.

Build File version KB / Description Release Date
12.00.4050 2014.120.4050.0 SQL Server 2014 Service Pack 1 (SP1) April 15, 2015 
12.00.2495 2014.120.2495.0 3046038 Cumulative update package 7 (CU7) for SQL Server 2014 April 23, 2015 
12.00.2488 2014.120.2488.0 3048751 FIX: Deadlock cannot be resolved automatically when you run a SELECT query that can result in a parallel batch-mode scan April 1, 2015
12.00.2485 2014.120.2485.0 3043788 An on-demand hotfix update package is available for SQL Server 2014 March 16, 2015
12.00.2480 2014.120.2480.0 3031047 Cumulative update package 6 (CU6) for SQL Server 2014 February 16, 2015
12.00.2472 2014.120.2472.0 3032087 FIX: Cannot show requested dialog after you connect to the latest SQL Database Update V12 (preview) with SQL Server 2014 January 28, 2015
12.00.2464 2014.120.2464.0 3024815 Large query compilation waits on RESOURCE_SEMAPHORE_QUERY_COMPILE in SQL Server 2014 January 5, 2015
12.00.2456 2014.120.2456.0 3011055 Cumulative update package 5 (CU5) for SQL Server 2014 December 18, 2014
12.00.2436 2014.120.2436.0 3014867 FIX: “Remote hardening failure” exception cannot be caught and a potential data loss when you use SQL Server 2014 November 27, 2014
12.00.2430 2014.120.2430.0 2999197 Cumulative update package 4 (CU4) for SQL Server 2014 October 21, 2014
12.00.2423 2014.120.2423.0 3007050 FIX: RTDATA_LIST waits when you run natively stored procedures that encounter expected failures in SQL Server 2014 October 22, 2014
12.00.2405 2014.120.2405.0 2999809 FIX: Poor performance when a query contains table joins in SQL Server 2014 September 25, 2014
12.00.2402 2014.120.2402.0 2984923 Cumulative update package 3 (CU3) for SQL Server 2014 August 18, 2014
12.00.2381 2014.120.2381.0 2977316 MS14-044: Description of the security update for SQL Server 2014 (QFE) August 12, 2014
12.00.2370 2014.120.2370.0 2967546 Cumulative update package 2 (CU2) for SQL Server 2014 June 27, 2014
12.00.2342 2014.120.2342.0 2931693 Cumulative update package 1 (CU1) for SQL Server 2014 April 21, 2014
12.00.2254 2014.120.2254.0 2977315 MS14-044: Description of the security update for SQL Server 2014 (GDR) August 12, 2014
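To map a running instance to the table above, query the build number directly:

```sql
SELECT @@VERSION;  -- full version string
SELECT SERVERPROPERTY('ProductVersion') AS Build,   -- e.g. 12.0.4050.0
       SERVERPROPERTY('ProductLevel')   AS Level;   -- e.g. SP1
```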


SQL Database Engine Instances – 2014

Topic:                                                       Task Description

Describes how to configure the properties of an instance. Configure defaults such as file locations and date formats, or how the instance uses operating system resources, such as memory or threads.

Configure Database Engine Instances (SQL Server)

Describes how to manage the collation for an instance of the Database Engine. Collations define the bit patterns used to represent characters, and associated behaviors such as sorting, and case or accent sensitivity in comparison operations.

Collation and Unicode Support

Describes how to configure linked server definitions, which allow Transact-SQL statements run in an instance to work with data stored in separate OLE DB data sources.

Linked Servers (Database Engine)

Describes how to create a logon trigger, which specifies actions to be taken after a logon attempt has been validated, but before it starts working with resources in the instance. Logon triggers support actions such as logging connection activity, or restricting logins based on logic in addition to the credential authentication performed by Windows and SQL Server.

Logon Triggers
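As a sketch, a logon trigger that limits concurrent sessions for a single login could look like this (the login name and session limit are illustrative):

```sql
-- Illustrative only: blocks 'login_test' from holding more than 3 sessions.
CREATE TRIGGER connection_limit_trigger
ON ALL SERVER WITH EXECUTE AS 'login_test'
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = 'login_test'
       AND (SELECT COUNT(*) FROM sys.dm_exec_sessions
            WHERE is_user_process = 1
              AND original_login_name = 'login_test') > 3
        ROLLBACK;  -- cancels the logon attempt
END;
```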

Describes how to manage the service associated with an instance of the Database Engine. This includes actions such as starting and stopping the service, or configuring startup options.

Manage the Database Engine Services

Describes how to perform server network configuration tasks such as enabling protocols, modifying the port or pipe used by a protocol, configuring encryption, configuring the SQL Server Browser service, exposing or hiding the SQL Server Database Engine on the network, and registering the Server Principal Name.

Server Network Configuration

Describes how to perform client network configuration tasks such as configuring client protocols and creating or deleting a Server Alias.

Client Network Configuration

Describes the SQL Server Management Studio editors that can be used to design, debug, and run scripts such as Transact-SQL scripts. Also describes how to code Windows PowerShell scripts to work with SQL Server components.

Database Engine Scripting

Describes how to use maintenance plans to specify a workflow of common administration tasks for an instance. Workflows include tasks such as backing up databases and updating statistics to improve performance.

Maintenance Plans

Describes how to use the resource governor to manage resource consumption and workloads by specifying limits to the amount of CPU and memory that application requests can use.

Resource Governor
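A minimal Resource Governor setup, with illustrative pool and group names, might be sketched as:

```sql
-- Illustrative names; a classifier function would normally route sessions here.
CREATE RESOURCE POOL ReportingPool
WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 25);
GO
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
```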

Describes how database applications can use database mail to send e-mail messages from the Database Engine.

Database Mail

Describes how to use extended events to capture performance data that can be used to build performance baselines or to diagnose performance problems. Extended events are a lightweight, highly scalable system for gathering performance data.

Extended Events

Describes how to use SQL Trace to build a customized system for capturing and recording events in the Database Engine.

SQL Trace

Describes how to use SQL Server Profiler to capture traces of application requests coming in to an instance of the Database Engine. These traces can later be replayed for activities such as performance testing or problem diagnosis.

SQL Server Profiler

Describes the Change Data Capture (CDC) and Change Tracking features and describes how to use these features to track changes to data in a database.

Track Data Changes (SQL Server)

Describes how to use the Log File viewer to find and view SQL Server errors and messages in various logs, such as the SQL Server job history, the SQL Server logs, and Windows event logs.

Log File Viewer

Describes how to use the Database Engine Tuning Advisor to analyze databases and make recommendations for addressing potential performance problems.

Database Engine Tuning Advisor

Describes how production database administrators can make a diagnostic connection to instances when standard connections are not being accepted.

Diagnostic Connection for Database Administrators

Describes how to use the deprecated remote servers feature to enable access from one instance of the Database Engine to another. The preferred mechanism for this functionality is a linked server.

Remote Servers

Describes the capabilities of Service Broker for messaging and queueing applications and provides pointers to the Service Broker documentation.

Service Broker

Describes how the buffer pool extension can be used to provide seamless integration of nonvolatile random access storage (solid-state drives) to the Database Engine buffer pool to significantly improve I/O throughput.

Buffer Pool Extension File
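Enabling the buffer pool extension is a single server-level statement (the file path and size below are examples, not recommendations):

```sql
-- Place the extension file on fast SSD storage; path and size are examples.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'F:\SSDCACHE\ExtensionFile.BPE', SIZE = 50 GB);
```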



Back Up and Restore of SQL Server Databases


Sequence Numbers

Bulk Import and Export of Data (SQL Server)

In-Memory OLTP (In-Memory Optimization)

DDL Triggers

Data Compression


DML Triggers

OLE Automation Objects in Transact-SQL

Partitioned Tables and Indexes

Synonyms (Database Engine)

Event Notifications


XML Data (SQL Server)

Monitor and Tune for Performance

Stored Procedures (Database Engine)

Spatial Data (SQL Server)

Search (SQL Server)

Binary Large Object (Blob) Data (SQL Server)

User-Defined Functions

Data-tier Applications


The Transaction Log (SQL Server)

Plan Guides

Database Checkpoints (SQL Server)

String Functions (T-SQL)


Metadata Functions (T-SQL)



XQuery is a language that can query structured or semi-structured XML data. With the xml data type support provided in the Database Engine, documents can be stored in a database and then queried by using XQuery. The following topics help in understanding the implementation of XQuery in the Database Engine.

XML Data (SQL Server) Explains the support for the xml data type in the Database Engine and the methods you can use against this data type. The xml data type forms the input XQuery data model on which the XQuery expressions are executed.
XML Schema Collections (SQL Server) Describes how the XML instances stored in a database can be typed. This means you can associate an XML schema collection with the xml type column. All the instances stored in the column are validated and typed against the schemas in the collection, which provide the type information for XQuery.


XQuery Basics Provides a basic overview of XQuery concepts, and also the expression evaluation (static and dynamic context), atomization, effective Boolean value, XQuery type system, sequence type matching, and error handling.
XQuery Expressions Describes XQuery primary expressions, path expressions, sequence expressions, arithmetic comparison and logical expressions, XQuery construction, FLWOR expression, conditional and quantified expressions, and various expressions on sequence types.
Modules and Prologs (XQuery) Describes XQuery prolog.
XQuery Functions against the xml Data Type Describes a list of the XQuery functions that are supported.
XQuery Operators Against the xml Data Type Describes XQuery operators that are supported.
Additional Sample XQueries Against the xml Data Type Provides additional XQuery samples.
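A small example of the xml data type and its XQuery methods in action:

```sql
DECLARE @doc xml = N'<order id="1"><item sku="A12" qty="3"/></order>';
SELECT @doc.query('/order/item');                    -- returns the <item> element
SELECT @doc.value('(/order/item/@qty)[1]', 'int');   -- returns 3
```

The query() method returns an XML fragment, while value() extracts a single typed scalar; the [1] predicate is required because value() must be given a singleton.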

More information is available in the W3C XQuery specification.



Encrypted Backup in SQL Server 2014

Encryption for backups is a new feature introduced in SQL Server 2014, and the benefits of this option are:

1. Encrypting the database backups helps secure the data.

2. Encryption can also be used for databases that are encrypted using TDE.

3. Encryption is supported for backups done by SQL Server Managed Backup to Windows Azure, which provides additional security for off-site backups.

4. This feature supports multiple encryption algorithms, including AES 128, AES 192, AES 256, and Triple DES.

5. You can integrate encryption keys with Extended Key Management (EKM) providers.

The following are pre-requisites for encrypting a backup:

1. Create a database master key for the master database:

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
GO

2. Create a certificate or asymmetric key to use for backup encryption:

USE master;
GO
CREATE CERTIFICATE CertforBackupEncryption
WITH SUBJECT = 'Certificate for Backup Encryption';
GO

3. Back up the database with encryption:

-- Database name and path are reconstructed from the original example.
BACKUP DATABASE addarr
TO DISK = N'D:\Backup\addarr.bak'
WITH ENCRYPTION (
         ALGORITHM = AES_256,  -- any supported algorithm works here
         SERVER CERTIFICATE = CertforBackupEncryption
     ),
     STATS = 10;
Restoring the encrypted backup:
SQL Server restore does not require any encryption parameters to be specified during restores. It does, however, require that the certificate or asymmetric key used to encrypt the backup file be available on the instance that you are restoring to. The user account performing the restore must have VIEW DEFINITION permissions on the certificate or key. If you are restoring the encrypted backup to a different instance, you must make sure that the certificate is available on that instance.
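The restore itself needs no encryption options as long as the certificate exists on the target instance (the database name and path here are inferred from the backup example earlier in this post):

```sql
-- No encryption options needed; the certificate must already exist on this
-- instance, and the restoring account needs VIEW DEFINITION on it.
RESTORE DATABASE addarr
FROM DISK = N'D:\Backup\addarr.bak'
WITH RECOVERY, STATS = 10;
```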


Microsoft SQL Server 2014 White Papers


Product Documentation
