Monday, 5 December 2016

SQL Server 2016 SP1 Introduces Enterprise Features for All!

With the November 2016 release of SQL Server 2016 Service Pack 1 (SP1), Microsoft has rocked the SQL community by announcing that many of the previously Enterprise-Edition-only features of SQL Server 2016 are now available across the range: in the Standard and Web Editions, as well as the completely free Express Edition.

Features that were previously available only to customers willing to part with in excess of £25,000 for an Enterprise licence are now absolutely free: all editions of SQL Server 2016 with SP1 include the following.


Database Snapshots
A database snapshot is a read-only, static view of a SQL Server database.
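
For example, a snapshot can be created with a simple CREATE DATABASE statement. A rough sketch (the database and file names are illustrative, and NAME must match the logical name of the source data file):

CREATE DATABASE AdventureWorks_Snapshot
ON ( NAME = AdventureWorks_Data,
     FILENAME = 'D:\Snapshots\AdventureWorks_Data.ss' )
AS SNAPSHOT OF AdventureWorks;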

Change Data Capture
Change data capture records insert, update, and delete activity that is applied to a SQL Server table. This makes the details of the changes available in an easily consumed relational format.
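
Enabling it is a two-step affair. A minimal sketch (the table name is illustrative, and the capture job needs SQL Server Agent to be running):

-- Enable CDC at the database level
EXEC sys.sp_cdc_enable_db;

-- Track changes on a specific table
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;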

Column Store Indexes
The columnstore index is the standard for storing and querying large data warehousing fact tables. It uses column-based data storage and query processing to achieve up to 10x query performance gains in your data warehouse over traditional row-oriented storage, and up to 10x data compression over the uncompressed data size.
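
A minimal sketch, assuming a fact table called dbo.FactSales:

-- Convert the fact table to clustered columnstore storage
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;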

In Memory OLTP
In-Memory OLTP can significantly improve the performance of transaction processing, data ingestion and data load, and transient data scenarios. 
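
A minimal sketch of a memory-optimized table, assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup (the table definition is illustrative):

CREATE TABLE dbo.ShoppingCart
(
    CartId      INT NOT NULL PRIMARY KEY NONCLUSTERED,
    UserId      INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);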

Table and Index Partitioning
SQL Server supports table and index partitioning. The data of partitioned tables and indexes is divided into units that can be spread across more than one filegroup in a database. The data is partitioned horizontally, so that groups of rows are mapped into individual partitions.
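
A rough sketch of partitioning by year (names and boundary values are illustrative):

-- Boundary values define the partitions
CREATE PARTITION FUNCTION pfOrderYear (DATETIME2)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2016-01-01');

-- Map every partition to a filegroup (all to PRIMARY here for simplicity)
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders
(
    OrderId   INT NOT NULL,
    OrderDate DATETIME2 NOT NULL
)
ON psOrderYear (OrderDate);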

Data Compression
SQL Server 2016 supports row and page compression for row store tables and indexes, and supports columnstore and columnstore archival compression for columnstore tables and indexes.
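
For example (table name illustrative):

-- Estimate the savings first, then rebuild with compression
EXEC sp_estimate_data_compression_savings 'dbo', 'Orders', NULL, NULL, 'PAGE';

ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);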

Multiple FileStream Containers
FILESTREAM enables SQL Server-based applications to store unstructured data, such as documents and images, on the file system. Applications can leverage the rich streaming APIs and performance of the file system and at the same time maintain transactional consistency between the unstructured data and corresponding structured data.
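
Adding a second container to an existing FILESTREAM filegroup looks something like this (database, filegroup and path names are illustrative):

ALTER DATABASE MyDatabase
ADD FILE ( NAME = N'FSContainer2',
           FILENAME = N'E:\FileStreamData\FSContainer2' )
TO FILEGROUP FileStreamGroup;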

Always Encrypted Data
Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers, stored in SQL Server databases. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine. As a result, Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (but should have no access). 
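
A rough sketch of an encrypted column, assuming a column master key and a column encryption key (here called CEK_Auto1) have already been provisioned, e.g. via the SSMS wizard:

CREATE TABLE dbo.Customers
(
    CustomerId       INT IDENTITY PRIMARY KEY,
    -- Deterministic encryption requires a BIN2 collation
    CreditCardNumber NVARCHAR(25)
        COLLATE Latin1_General_BIN2
        ENCRYPTED WITH ( COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                         ENCRYPTION_TYPE = DETERMINISTIC,
                         ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256' ) NOT NULL
);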

Dynamic Data Masking
Dynamic data masking (DDM) limits sensitive data exposure by masking it to non-privileged users. It can be used to greatly simplify the design and coding of security in your application.

Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal with minimal impact on the application layer. DDM can be configured on the database to hide sensitive data in the result sets of queries over designated database fields, while the data in the database is not changed. Dynamic data masking is easy to use with existing applications, since masking rules are applied in the query results. Many applications can mask sensitive data without modifying existing queries.
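
For example (table, column and user names are illustrative):

-- Mask an email address for non-privileged users
ALTER TABLE dbo.Customers
ALTER COLUMN EmailAddress ADD MASKED WITH (FUNCTION = 'email()');

-- Show only the last four digits of a card number
ALTER TABLE dbo.Customers
ALTER COLUMN CreditCardNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');

-- Privileged users can be granted sight of the real data
GRANT UNMASK TO ReportingUser;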

Row Level Security
Row-Level Security enables customers to control access to rows in a database table based on the characteristics of the user executing a query (e.g., group membership or execution context).
Row-Level Security (RLS) simplifies the design and coding of security in your application. RLS enables you to implement restrictions on data row access. For example, ensuring that workers can access only those data rows that are pertinent to their department, or restricting a customer's data access to only the data relevant to their company.
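
A minimal sketch, assuming an Orders table with a Department column and database users named after their departments:

-- Predicate function: a row is visible only to the matching user
CREATE FUNCTION dbo.fn_DepartmentPredicate (@Department AS SYSNAME)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_Result
       WHERE @Department = USER_NAME();
GO

CREATE SECURITY POLICY dbo.DepartmentFilter
ADD FILTER PREDICATE dbo.fn_DepartmentPredicate(Department)
ON dbo.Orders
WITH (STATE = ON);
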
PolyBase Compute Node
PolyBase is a technology that accesses and combines both non-relational and relational data, all from within SQL Server. It allows you to run queries on external data in Hadoop or Azure Blob Storage. The queries are optimized to push computation to Hadoop.
By simply using Transact-SQL (T-SQL) statements, you can import and export data back and forth between relational tables in SQL Server and non-relational data stored in Hadoop or Azure Blob Storage. You can also query the external data from within a T-SQL query and join it with relational data.
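
A rough sketch against a Hadoop cluster (the addresses, paths and table definition are all illustrative):

-- External data source pointing at the Hadoop cluster
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH ( TYPE = HADOOP, LOCATION = 'hdfs://10.0.0.1:8020' );

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH ( FORMAT_TYPE = DELIMITEDTEXT,
       FORMAT_OPTIONS ( FIELD_TERMINATOR = ',' ) );

CREATE EXTERNAL TABLE dbo.WebClicks
(
    ClickDate DATETIME2,
    Url       NVARCHAR(400)
)
WITH ( LOCATION = '/data/webclicks/',
       DATA_SOURCE = HadoopCluster,
       FILE_FORMAT = CsvFormat );

-- The external table can then be queried and joined like any local table
SELECT TOP 10 * FROM dbo.WebClicks;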

Tuesday, 14 June 2016

Managing AlwaysOn Availability Groups

The following queries can be combined to aid management of Availability Groups :-

select * from sys.dm_hadr_cluster
select * from sys.dm_hadr_cluster_members
select * from sys.dm_hadr_cluster_networks
select * from sys.availability_groups
select * from sys.availability_groups_cluster
select * from sys.dm_hadr_availability_group_states
select * from sys.availability_replicas
select * from sys.dm_hadr_availability_replica_cluster_nodes
select * from sys.dm_hadr_availability_replica_cluster_states
select * from sys.dm_hadr_availability_replica_states
select * from sys.dm_hadr_auto_page_repair
select * from sys.dm_hadr_database_replica_states
select * from sys.dm_hadr_database_replica_cluster_states
select * from sys.availability_group_listener_ip_addresses
select * from sys.availability_group_listeners
select * from sys.dm_tcp_listener_states
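
For example, combining sys.availability_groups with sys.dm_hadr_availability_group_states shows at a glance which replica currently holds the primary role for each Availability Group:

SELECT ag.name             AS AvailabilityGroup,
       ags.primary_replica AS CurrentPrimary
FROM   sys.availability_groups ag
       INNER JOIN sys.dm_hadr_availability_group_states ags
               ON ag.group_id = ags.group_id;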

SQL Queries in PowerShell


OK, so the backup guys needed to run a SQL query to find out which node of an Availability Group is hosting the primary replicas.

So, two things were needed here: a SQL script to give me that information, and then a way of running a SQL script in PowerShell and returning the result.

I found the following two scripts online and combined them.


#Connection details and the query to run (replace 'AAG_Name' with the name of your Availability Group)
$ServerName = "."
$DatabaseName = "master"
$Query = "SELECT HAGS.Primary_Replica FROM Sys.dm_hadr_availability_group_states HAGS INNER JOIN sys.availability_groups AG ON AG.group_id = HAGS.group_id WHERE AG.name = 'AAG_Name'"

#Timeout parameters
$QueryTimeout = 120
$ConnectionTimeout = 30

#Action of connecting to the Database and executing the query and returning results if there were any.
$conn=New-Object System.Data.SqlClient.SQLConnection
$ConnectionString = "Server={0};Database={1};Integrated Security=True;Connect Timeout={2}" -f $ServerName,$DatabaseName,$ConnectionTimeout
$conn.ConnectionString=$ConnectionString
$conn.Open()
$cmd=New-Object system.Data.SqlClient.SqlCommand($Query,$conn)
$cmd.CommandTimeout=$QueryTimeout
$ds=New-Object system.Data.DataSet
$da=New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
[void]$da.fill($ds)
$conn.Close()
$ds.Tables

This script is wonderful: it can be used to run any SQL statement in PowerShell and return the results in a table.

Sunday, 4 January 2015

2015 Starts with a Big Thank You to the NHS.

So just a little story of outstanding service and utter professionalism.

Whilst this story is centered on myself, I am in no way the hero; the heroes are the women and men of our fine NHS, a uniquely British service of which we should be immensely proud.

On Boxing Day, I became ill and began vomiting blood, a little initially, so it was ignored and put down to Christmas excess.

However, over the next day and a half I became steadily worse, until things came to a head at around 6 am on Sunday morning.

After covering my partner Maureen from head to toe in Blood and struggling for consciousness, it was time to call an Ambulance.

Maureen dialed 999 and was still on the phone to the control center giving my details when the Fast Response Paramedic arrived, I would guess within less than a minute.

After his initial disbelief at the amount of blood in the bedroom and bathroom, he quickly performed his initial assessment and called for backup in the form of a fully equipped Ambulance with a fully trained Paramedic; this arrived within 5 minutes.

After initially fighting to keep me conscious, they loaded me into the Ambulance and had me on a Crash Trolley in "Resus" at the hospital within 10 minutes.

The Doctor assigned to me, whilst young (early 20s) and initially looking shocked and extremely nervous, did a fantastic job; she managed to stabilize me and got enough blood into me for me to remain conscious (I seem to remember her squeezing the bags to try to get the blood in faster).

I was put into a cubicle to allow some more blood to be administered and for general observation.

After 3 hours of observation, I was deemed stable enough to be handed over to the High Dependency Unit, which is similar to Intensive Care but is for patients who are not in a coma.

Care in the HDU basically comprises each patient being allocated two nurses, one for day time and one for night time on a straight 12-hour split: proper one-to-one care.

(Remember this is the NHS, this is FREE).

I spent three days in the HDU with two IV cannulas in each arm, being pumped full of blood (I counted 11 units) and drugs on a drip. I was given at least three other drugs by IV, hourly to start with, and even towards the end it was still every 4 hours, all the while hooked up to an ECG 24 hours a day.

During my time in the HDU, I had to go for an ultrasound scan, no mean feat when you are under the care of the HDU: you have to be transported on a “Crash Trolley” with all of your drips, ECG machine, own oxygen supply, defibrillator and various other bits and pieces, and your assigned nurse must accompany you at all times outside of the HDU.

After the scan it was established that I had two burst veins in my stomach (2014 has been a bit of a stressful year, the last 3 months especially).

This would again require all of the equipment listed above as well as my Nurse to accompany me to the Endoscopy Suite.

(Is anyone keeping a tally of how much all this one-to-one care, equipment and drugs would cost in the private sector? I can’t begin to imagine.)

I had the Burst Veins Sealed with the Endoscopy Procedure and was dispatched back to the HDU.

After receiving 3 Days of One to One Care in HDU, I was transferred to a General Ward.

Again my care was to be administered 24/7, just not quite with the same intimacy; it was now down to the Sisters and Staff Nurses of the General Ward to administer care.

I was now also allowed to eat, and the Auxiliaries did a fantastic job of feeding me up to try to get me back on my feet, four times a day: breakfast, lunch, dinner and supper. I must point out at this stage that the quality of the hospital food was excellent. Obviously not gourmet, just good old-fashioned stick-to-your-ribs build-you-up food: casseroles, sticky toffee pudding and custard, roast beef, fresh veg and roasts, etc.

So after a further three days of round the clock care and feeding up, I was sent for a CT scan to establish that the Bleeds had indeed been stopped.

Once the results came back, I was discharged on Friday Afternoon, from near death to almost fully fit in less than a week.

I for one will never hear a bad word against the NHS, especially from muppets like Jason Manford, who not only publicly whinges about having to wait over two hours for his wife’s appointment, but feels he is so important that he must remonstrate with the Prime Minister.


Well Jason, next time you are waiting, remember the NHS is free, and if you are having to wait, it is probably because some inconsiderate person such as I required the resources a little more urgently than your wife did. If you are too important to wait in line, go and pay privately.

To sum up, I cannot fault any part of my six days under the care of the NHS. The service was prompt, professional and efficient, and I was kept fully informed about my condition, which drugs were being administered, and why.

From initial contact to discharge, I would give the whole experience a resounding 10/10, and let me tell you, not many people get 10/10 off me in the workplace.

Tuesday, 2 September 2014

SQL Merge


OK, So quite a strange one this.

Client has a Database with a number of Large History Tables.

The developers decided that, as they were running out of space on the current drive, the only solution was to rename the old tables and re-create the new tables on another filegroup on another LUN.

Not the best solution, but not a total disaster. However, the application also needs to be able to read data from the current year, and as they made this change midway through the year, the developers came up with another brilliant (or so they thought) solution, which involved copying the current year's data back out of the now-archived tables into the new tables.

OK, so if we now consider the fact that they also want to use these tables for BI, things start to get interesting.

The archived table contains close to 500 million rows, as does the new version of the table, which includes around 100 million duplicate rows.

Ultimately, I will partition the Table, but the first step is to load all of the data into one Table without Duplicates.

After playing around with various methods, I settled on using the T-SQL MERGE statement, which first appeared in SQL Server 2008 :-

http://msdn.microsoft.com/en-us/library/bb510625.aspx


Basically, I used it to compare the two tables: if a row in the archive table did not appear in the new table, the row was inserted into the new table, resulting in one table containing all of the rows without any duplicates.

A simplified version of the query can be found below :-

MERGE
       [dbo].[HistoryTable] TGT          -- target: the new table
USING
       [dbo].[OldHistoryTable] OLD       -- source: the archived table
ON
       TGT.[PrimaryKeyField_1]    = OLD.[PrimaryKeyField_1]
AND
       TGT.[PrimaryKeyField_2]    = OLD.[PrimaryKeyField_2]
WHEN
       NOT MATCHED BY TARGET THEN
 
       INSERT
              ([PrimaryKeyField_1],
              [PrimaryKeyField_2],
              [Field_1],
              [Field_2])
       VALUES
              (OLD.[PrimaryKeyField_1],
              OLD.[PrimaryKeyField_2],
              OLD.[Field_1],
              OLD.[Field_2]);



Tuesday, 3 June 2014

SQL Server 2014 - Features Only Supported in Enterprise Edition

Overview.


You can find many articles, white papers, and books demonstrating the myriad features found in SQL Server 2014.

However, it is sometimes easy to forget that there is a big disparity in the features that the different editions of SQL Server 2014 support.

Chances are that the exciting new features you’re reading about are supported only by
SQL Server 2014 Enterprise Edition or SQL Server 2014 Developer Edition.

(Although Enterprise Edition and Developer Edition share the same feature set, Developer Edition is licensed solely for development work and can’t be used for production work.)

The following is a list of the most important features; there are others that are restricted to Analysis Services, but those are not covered here :-

1. No Upper Memory Limit
Maximum memory utilized (per instance of the SQL Server Database Engine) is limited only by the OS in Enterprise Edition. BI and Standard Editions are limited to 128GB per instance, Web Edition is limited to 64GB, and all other editions are limited to 1GB.

2. Online Page and File Restore
Online restore is supported only on SQL Server Enterprise Edition. In this edition, a file, page, or piecemeal restore is online by default. 
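
A rough sketch of a single-page restore (the page ID and backup paths are illustrative; in practice a tail-log backup is also taken and restored):

RESTORE DATABASE AdventureWorks
PAGE = '1:57'
FROM DISK = N'D:\Backups\AdventureWorks_Full.bak'
WITH NORECOVERY;

-- Follow with log backups to bring the page current, then recover
RESTORE LOG AdventureWorks
FROM DISK = N'D:\Backups\AdventureWorks_Log.trn'
WITH RECOVERY;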

3. Online Indexing - Online Schema Change
SQL Server introduced online index maintenance and partitioned tables with SQL Server 2005. Though operations that switch partitions of a partitioned table out of or into a table are considered 'online' operations as well, they require an exclusive schema modification lock (Sch-M lock). A Sch-M lock against a table is usually taken by DDL operations that structurally change the table. From an architecture and design point of view, SQL Server should hold the Sch-M lock only for short periods, especially when using the online options offered by some of SQL Server's DDL commands.
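
For example, an index can be rebuilt online (index and table names illustrative):

ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON);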

4. Global Batch Aggregation
Standard query processing in SQL Server is based on a row-at-a-time iterator model; that is, a query operator processes one row at a time. To reduce CPU time, a new set of query operators has been introduced that instead processes a batch of rows at a time. 

5. Fast Recovery
After installation, SQL Server sets recovery interval to zero. As long as the recovery interval setting is at the default setting and long-running transactions are not present, recovery for each database should take approximately 1 minute or less. 

When recovering restored data, if long-running transactions were active at the time of the data loss, recovery time is controlled by the time it takes to roll back the effects of these transactions. 

However, in SQL Server Enterprise Edition, the database is available during the undo phase of a crash recovery or a database mirroring failover, a feature known as fast recovery.

6. Mirrored Backups
Mirroring a media set increases backup reliability by reducing the impact of backup-device malfunctions.

7. AlwaysOn Availability Groups
An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together. An availability group supports a set of primary databases and one to eight sets of corresponding secondary databases. Secondary databases are not backups; continue to back up your databases and their transaction logs on a regular basis.

8. Table and Index Partitioning
The data of partitioned tables and indexes is divided into units that can be spread across more than one filegroup in a database. The data is partitioned horizontally, so that groups of rows are mapped into individual partitions. All partitions of a single index or table must reside in the same database. The table or index is treated as a single logical entity when queries or updates are performed on the data.

9. Data Compression
SQL Server 2014 supports row and page compression for rowstore tables and indexes, and supports columnstore and columnstore archival compression for columnstore tables and indexes.
For rowstore tables and indexes, use the data compression feature to help reduce the size of the database. In addition to saving space, data compression can help improve performance of I/O intensive workloads because the data is stored in fewer pages and queries need to read fewer pages from disk. However, extra CPU resources are required on the database server to compress and decompress the data, while data is exchanged with the application.

10. Resource Governor
SQL Server Resource Governor is a feature that you can use to manage SQL Server workload and system resource consumption. Resource Governor enables you to specify limits on the amount of CPU, physical IO, and memory that incoming application requests can use.
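
A rough sketch that caps reporting connections (the pool, group and function names, the limits, and the APP_NAME test are all illustrative; the classifier function must live in master):

CREATE RESOURCE POOL ReportingPool
WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 25);

CREATE WORKLOAD GROUP ReportingGroup
USING ReportingPool;
GO

-- Classifier function: routes incoming sessions to a workload group
CREATE FUNCTION dbo.fn_Classifier ()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() LIKE N'Report%'
        RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_Classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;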

11. Partition Table Parallelism
Enables the use of separate threads for queries over multiple partitions.

12. Multiple Filestream Containers
Provides the ability to have multiple data containers in a single filegroup. This feature comes in handy when you want to load-balance the I/O across multiple disk systems, or when you’ve run out of space in the data container and you need to expand your storage capacity.

13. NUMA Aware Large Page Memory and Buffer Array Allocation
Allows the use of the intelligent, adaptive NUMA scheduling and memory placement policies in vSphere, enabling transparent management of virtual machines so that administrators do not need to deal with the complexity of balancing virtual machines between nodes by hand. 

14. Hot Add Memory and CPU
Hot add CPU is the ability to dynamically add CPUs to a running system. Adding CPUs can occur physically by adding new hardware, logically by online hardware partitioning, or virtually through a virtualization layer.

15. In-Memory OLTP
New in SQL Server 2014, In-Memory OLTP can significantly improve OLTP database application performance. In-Memory OLTP is a memory-optimized database engine integrated into the SQL Server engine, optimized for OLTP.

16. Fine Grained Auditing
SQL Server Enterprise enhances the auditing capabilities of the server by introducing native auditing functionality into the database engine.

17. Transparent Database Encryption
Transparent data encryption (TDE) performs real-time I/O encryption and decryption of the data and log files. The encryption uses a database encryption key (DEK), which is stored in the database boot record for availability during recovery. The DEK is a symmetric key secured by using a certificate stored in the master database of the server or an asymmetric key protected by an EKM module. TDE protects data "at rest", meaning the data and log files. It provides the ability to comply with many laws, regulations, and guidelines established in various industries.
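
A rough sketch of enabling TDE (the names and password are illustrative):

-- One-off server-level setup
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseAStrongPasswordHere>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO

-- Per-database: create the DEK and switch encryption on
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE MyDatabase SET ENCRYPTION ON;
-- Remember to back up the certificate and its private key!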

18. Extensible Key Management
SQL Server provides data encryption capabilities together with Extensible Key Management (EKM), using the Microsoft Cryptographic API (MSCAPI) provider for encryption and key generation. Encryption keys for data and key encryption are created in transient key containers, and they must be exported from a provider before they are stored in the database. This approach enables key management that includes an encryption key hierarchy and key backup, to be handled by SQL Server.

19. Oracle Replication Publishing
Publishing servers can be deployed on any Oracle supported hardware and operating system. The feature is built on the well-established foundation of SQL Server snapshot replication and transactional replication, providing similar performance and usability.

Oracle Publishing is deprecated. Heterogeneous replication to non-SQL Server subscribers is deprecated. To move data, create solutions using change data capture and SSIS.

20. Peer to Peer Transactional Replication
Peer-to-peer replication provides a scale-out and high-availability solution by maintaining copies of data across multiple server instances, also referred to as nodes. Built on the foundation of transactional replication, peer-to-peer replication propagates transactionally consistent changes in near real-time. This enables applications that require scale-out of read operations to distribute the reads from clients across multiple nodes. Because data is maintained across the nodes in near real-time, peer-to-peer replication provides data redundancy, which increases the availability of data.

21. Parallel Indexed Operations
The query plans built for the index operations that create or rebuild an index, or drop a clustered index, allow for parallel, multi-threaded operations on computers that have multiple microprocessors.
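
For example, the degree of parallelism can be controlled per operation (index, table name and MAXDOP value illustrative):

ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (MAXDOP = 8, SORT_IN_TEMPDB = ON);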

22. Automatic Use of Indexed Views By the Query Optimizer
Indexed views can be created in any edition of SQL Server. In SQL Server Enterprise, the query optimizer automatically considers the indexed view. To use an indexed view in any other edition, the NOEXPAND table hint must be used.
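
For example (the view name is illustrative):

SELECT SaleYear, TotalSales
FROM dbo.vSalesSummary WITH (NOEXPAND);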

23. Parallel Consistency Check - DBCC CHECKDB
DBCC CHECKDB is supported on databases that contain memory-optimized tables but validation only occurs on disk-based tables. However, as part of database backup and recovery, a CHECKSUM validation is done for files in memory-optimized filegroups.
Since DBCC repair options are not available for memory-optimized tables, you must back up your databases regularly and test the backups. If data integrity issues occur in a memory-optimized table, you must restore from the last known good backup.

24. SQL Server Utility Control Point
The SQL Server Utility models an organization’s SQL Server-related entities in a unified view. Utility Explorer and SQL Server Utility viewpoints in SQL Server Management Studio (SSMS) provide administrators a holistic view of SQL Server resource health through an instance of SQL Server that serves as a utility control point (UCP).

25. Change Data Capture Service for Oracle by Attunity
The CDC Service for Oracle is a Windows service that scans Oracle transaction logs and captures changes to Oracle tables of interest into SQL Server change tables. The SQL change tables where the changes captured from Oracle are stored are the same type of change tables used in the native SQL Server Change Data Capture feature. This makes consuming these changes as easy as consuming changes made to SQL Server databases.

26. Change Data Capture
Change data capture is designed to capture insert, update, and delete activity applied to SQL Server tables, and to make the details of the changes available in an easily consumed relational format. The change tables used by change data capture contain columns that mirror the column structure of a tracked source table, along with the metadata needed to understand the changes that have occurred.

27. Star Join Query Optimizations
Performs Query Optimizations on Data Warehouse Star Joins.

28. Parallel Query Processing On Partitioned Tables and Indices
SQL Server Enterprise improves query processing performance on partitioned tables for many parallel plans, changes the way parallel and serial plans are represented, and enhances the partitioning information provided in both compile-time and run-time execution plans.

29. xVelocity Memory Optimized ColumnStore Indexes
xVelocity memory-optimized columnstore index is a feature which enables high performance query processing especially in Data Warehouses and Data Marts.
Unlike "standard" indexes, columnstore indexes store data in memory (as the name memory-optimized columnstore index suggests) and the storage organization is column-oriented rather than row-oriented.

30. SSIS - Persistent (High Performance) Lookups
The Lookup transformation performs lookups by joining data in input columns with columns in a reference dataset. You use the lookup to access additional information in a related table that is based on values in common columns.
The reference dataset can be a cache file, an existing table or view, a new table, or the result of an SQL query. The Lookup transformation uses either an OLE DB connection manager or a Cache connection manager to connect to the reference dataset.
   
31. SSIS - Data Mining Query Transformation
The Data Mining Query transformation performs prediction queries against data mining models. This transformation contains a query builder for creating Data Mining Extensions (DMX) queries. The query builder lets you create custom statements for evaluating the transformation input data against an existing mining model using the DMX language.

32. SSIS - Fuzzy Grouping and Lookup Transformations
The Fuzzy Grouping transformation performs data cleaning tasks by identifying rows of data that are likely to be duplicates and selecting a canonical row of data to use in standardizing the data.

33. SSIS - Data Mining Model Training Destination Adapter
The Data Mining Model Training destination trains data mining models by passing the data that the destination receives through the data mining model algorithms. Multiple data mining models can be trained by one destination if the models are built on the same data mining structure.

34. SSIS - Connector for Open Database Connectivity (ODBC) by Attunity
The Connector for Open Database Connectivity (ODBC) by Attunity for SQL Server 2014 Integration Services (SSIS) helps SSIS developers easily create packages that load and unload data from ODBC-supported databases.
The ODBC Connector is designed to achieve optimal performance when loading data into or unloading data from an ODBC-supported database in the context of SQL Server 2014 Integration Services (SSIS). The Attunity connector provides improved performance even with few records; the gain is around 15% if you have a lot of rows to transfer.




Tuesday, 27 May 2014

SQL Joins


--INNER JOIN: Returns only the rows that have a match in BOTH tables
--LEFT JOIN: Returns all rows from the left table, and the matched rows from the right table (NULLs where there is no match)
--RIGHT JOIN: Returns all rows from the right table, and the matched rows from the left table (NULLs where there is no match)
--FULL JOIN: Returns all rows from BOTH tables, with NULLs on whichever side has no match

--Rows that match in both tables
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       INNER JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB


--All rows from A, with matching rows from B where they exist
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       LEFT JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB

--All rows from B, with matching rows from A where they exist
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       RIGHT JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB

--All rows from both tables, matched where possible
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       FULL JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB


--Rows of A whose join column is itself NULL (a NULL key never matches B)
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       LEFT JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB
WHERE
              A.ColumnA IS NULL

--Rows of B that have no matching row in A (a right anti-join)
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       RIGHT JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB
WHERE
              A.ColumnA IS NULL

--Rows of A that have no matching row in B (a left anti-join)
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       LEFT JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB
WHERE
              B.ColumnB IS NULL

--Rows of B whose join column is itself NULL (a NULL key never matches A)
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       RIGHT JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB
WHERE
              B.ColumnB IS NULL

--FULL OUTER JOIN is identical to the FULL JOIN above
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       FULL OUTER JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB

--Rows in either table that have no match in the other (a full anti-join)
SELECT
       COUNT (*)
FROM
       [DataBase].dbo.TableA A
       FULL OUTER JOIN [DataBase].dbo.TableB B ON A.ColumnA = B.ColumnB
WHERE
               A.ColumnA IS NULL
OR
              B.ColumnB IS NULL