Thursday, May 23, 2013

Top 20 exciting features of SQL Server 2012

http://www.codeproject.com/Articles/526621/Top-20-exciting-features-of-SQL-Server-2012-Part-1


Top 20 exciting features of SQL Server 2012 – Part 1

14 Mar 2013



Introduction
Feature number 1 (Revolution):- Column store indexes
Feature number 2 (Evolution):- Sequence objects
Feature number 3 (Revolution):- Pagination
Feature number 4 (Revolution):- Contained database
Feature number 5 (Evolution):- Error handling
Which are the next 5 features?

Introduction

SQL Server 2012 was released in April 2012 and has quickly become a favorite among professionals. Whenever a new product comes from Microsoft, the first thing I personally ask myself is: is it worth jumping in? Is it worth spending the customer's hard-earned money to get into that product? The way I assess this is by dividing the product features into "revolution" and "evolution". "Revolution" means the feature is completely new, while "evolution" means something already existed and has been improved.
In this article I will discuss 20 features of SQL Server 2012 which I personally like (we can agree to disagree on my list). Of these 20 features, 7 are revolutions and 13 are evolutions. The article is divided into 4 parts, and in every part we will discuss 5 features. So let's start with the first 5 top features.

Feature number 1 (Revolution):- Column store indexes

Column store indexes are an unexpected and awesome feature. When I read about this feature for the first time, my mouth was wide open. You can find it when you right-click on the indexes folder, as "Non-Clustered Columnstore Index", as shown in the below figure.

So let's quickly understand what exactly it does. Relational databases store data "row wise", and these rows are further stored in 8 KB pages.
For instance, in the figure below we have a table with two columns, "Column1" and "Column2". You can see how the data is stored across two pages, "Page1" and "Page2", each holding two rows. Now if you want to fetch only "Column1", you have to pull records from both pages; see below for the visuals.
As we have to fetch data from two pages, it is a bit more expensive.

If we could somehow store data column wise, we could avoid fetching data from multiple pages. That's exactly what column store indexes do. When you create a column store index, it stores the same column's data in the same page. You can see from the visuals below that we now need to fetch "Column1" data from only one page rather than querying multiple pages.
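If you prefer T-SQL over the designer, below is a minimal sketch of creating one; the table name "tblSales" and its columns are hypothetical placeholders for your own table.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_tblSales_ColumnStore
ON tblSales (Column1, Column2); -- store Column1 and Column2 data column wise
GO
One thing to keep in mind: in SQL Server 2012 a table becomes read-only once a non-clustered columnstore index is created on it, until the index is dropped or disabled.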

Feature number 2 (Evolution):- Sequence objects

This feature is good to have, and I personally feel it just mimics Oracle's sequence objects. It looks like an "if Oracle has it, why not SQL Server" kind of feature. A sequence object generates a sequence of unique numeric values as per its specification. Many developers will now be thinking, "we already have something similar called identity columns". The big difference is that a sequence object is independent of a table, while identity columns are attached to a table.
Below is simple code to create a sequence object. You can see we have created a sequence object called "MySeq" with the following specification:-
  • It starts with value 1.
  • It increments by 1.
  • The minimum value is zero.
  • The maximum value it can go to is 100.
  • "no cycle" specifies that once it reaches 100 it will throw an error; if you want it to restart from the minimum value you should specify "cycle" instead.
  • "cache 50" specifies that 50 values are pre-allocated in memory to reduce IO. If you specify "no cache", every new value causes input/output on the disk.
create sequence MySeq as int
 start with 1 -- Start with value 1
 increment by 1 -- Increment with value 1
 minvalue 0 -- Minimum value is zero
 maxvalue 100 -- Maximum it can go to is 100
 no cycle -- Do not go above 100; throw an error instead
 cache 50 -- Keep 50 values in memory rather than doing IO for every value
To get a new value we need to call the below select statement. This is one more big difference compared to identity: with identity the value is incremented when rows are added, whereas here we need to make an explicit call.
SELECT NEXT VALUE FOR dbo.MySeq AS seq_no;
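Because the sequence is independent of any table, the same object can supply keys to more than one table. Below is a minimal sketch, assuming a hypothetical "tblOrders" table.
create table tblOrders (OrderId int primary key, OrderDesc varchar(50));
-- Pull the next number from the sequence while inserting
insert into tblOrders (OrderId, OrderDesc)
values (NEXT VALUE FOR dbo.MySeq, 'First order');
You can also attach NEXT VALUE FOR dbo.MySeq as a default constraint on the column so that inserts pick up the value automatically.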

Feature number 3 (Revolution):- Pagination

There are instances when you want to display large result sets to the end user. The best way to display a large result set is to split it, i.e. apply pagination. Developers have had their own hacky ways of achieving pagination using "TOP", "ROW_NUMBER()", etc. But from SQL Server 2012 onwards we can do pagination by using the "OFFSET" and "FETCH" commands.
For instance, let's say we have the following customer table which has 12 records, and we would like to split the records into two pages of 6.

So doing pagination is a two-step process: -
  • First mark the start of the row by using “OFFSET” command.
  • Second specify how many rows you want to fetch by using “FETCH” command.
You can see in the below code snippet we have used "OFFSET" to mark the start of rows at position "0". A very important note: an ORDER BY clause is compulsory for the "OFFSET" command.
select * from tblcustomer
order by customercode
offset 0 rows -- start from position zero
In the below code snippet we have specified that we want to fetch "6" rows from the start position "0" specified in the "OFFSET".
fetch next 6 rows only
Now if you run the above SQL you should see 6 rows.

To fetch the next 6 rows just change your "OFFSET" position. You can see in the below code snippet I have modified the offset to 6, which means the rows will start from position "6".
select * from tblcustomer
order by customercode
offset 6 rows
fetch next 6 rows only
The above code snippet displays the next "6" records; below is how the output looks.
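In a real application the page number and page size usually come from the client, so the same pattern is typically parameterized. Below is a minimal sketch against the same "tblcustomer" table; the variable names are just examples.
declare @PageNumber int = 2; -- second page
declare @PageSize int = 6;   -- 6 rows per page
select * from tblcustomer
order by customercode
offset (@PageNumber - 1) * @PageSize rows
fetch next @PageSize rows only;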

Feature number 4 (Revolution):- Contained database

This is a great feature for people who have to go through the pain of SQL Server database migration again and again. One of the biggest pains in migrating databases is user accounts. A SQL Server user resides either in Windows Active Directory or at the SQL Server level as a SQL Server login. So when we migrate a SQL Server database from one server to another, these users have to be recreated again. If you have lots of users, you practically need one dedicated person creating them for you.
So one of the requirements from an easy-migration perspective is to create databases which are self-contained. In other words, can we have a database with metadata information, security information, etc. within the database itself, so that when we migrate the database, we migrate everything with it? That's where "contained" databases were introduced in SQL Server 2012.
Creating a contained database is a 3 step process: -
Step 1: - First enable contained databases at the SQL Server instance level. You can do this by right-clicking on the SQL Server instance, opening its properties, and setting "Enable Contained Databases" to "True".

You can achieve the same by using the below SQL statements as well.
sp_configure 'show advanced options',1
GO
RECONFIGURE WITH OVERRIDE
GO
sp_configure 'contained database authentication', 1
GO
RECONFIGURE WITH OVERRIDE
GO
Step 2: - The next step is to enable containment at the database level. So when you create a new database, set "Containment type" to "Partial" as shown in the below figure.

You can also create the database with "containment" set to "partial" using the below SQL code.
CREATE DATABASE [MyDb]
CONTAINMENT = PARTIAL
ON PRIMARY
( NAME = N'My', FILENAME = N'C:\My.mdf')
LOG ON
( NAME = N'My_log', FILENAME =N'C:\My_log.ldf')
Step 3: - The final thing is to test whether the "contained" database fundamentals are working or not. We want the user credentials to be part of the database, so we need to create the user as a "SQL user with password".

You can achieve the same by using the below script.
CREATE USER MyUser
WITH PASSWORD = 'pass@123';
GO
Now if you try to login with the created user, you get an error as shown in the below figure. This proves that the user is not available at the SQL Server level.

Now click on "Options" and specify the database name in "Connect to database"; you should be able to login, which proves that the user is part of the database and not of SQL Server.
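You can do the same check from the command line with sqlcmd; below is a sketch, assuming the database and user created above (replace the server/instance name with your own).
sqlcmd -S .\SQL2012 -d MyDb -U MyUser -P pass@123
Because -d points the connection straight at the contained database, the login succeeds even though no server-level login exists for MyUser.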

Feature number 5 (Evolution):- Error handling

As a developer I am personally very comfortable with the "try/catch/throw" syntax for error handling in C# or VB.NET. Thanks to the SQL Server team, in 2005 they brought in the "TRY/CATCH" structure, which is very much compatible with the way I, as a developer, was doing error handling in C#. Handling errors using "IF" conditions and "@@ERROR" before SQL Server 2005 was a nightmare. Below is a sample which shows how "TRY/CATCH" code looks.
begin try
 declare @n int = 0;
 set @n = 1/0;
end try
begin catch
 print('divide by zero');
 RAISERROR ('Divide by zero', 16, 1);
end catch
But what is still itching me in the above code is that when it comes to propagating errors back to the client, I was missing the "THROW" command. We still need to use "RAISERROR", which does the job but lacks a lot of the capabilities "THROW" has. For example, to throw a user-defined message by error number with "RAISERROR" you need to make an entry in the "sys.messages" table.
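As a quick illustration of that extra step, with "RAISERROR" you would first register the message and then raise it by its number; the message number 50010 below is just an arbitrary example (user-defined messages must be 50001 or higher).
-- Register the custom message once per server
EXEC sp_addmessage @msgnum = 50010, @severity = 16,
 @msgtext = 'Divide by zero happened in the calculation.';
GO
-- Raise it later by its number
RAISERROR (50010, 16, 1);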
In contrast, below is how the code with "throw" looks.
begin try
-- The code where the error has occurred.
end try
begin catch
-- throw the error to the client
Throw;
end catch
If you want to throw an exception with a user-defined message you can use the below code. No entry is needed in the "sys.messages" table; note that the error number supplied to "THROW" must be 50000 or higher.
THROW 50001, 'User defined exception.', 1;
From SQL Server 2012 onwards use "THROW" rather than "RAISERROR"; looking at the features of "THROW", it looks like sooner or later "RAISERROR" will be deprecated. Below is a comparison of "THROW" vs "RAISERROR".
  • User & system exceptions: "THROW" can generate only user exceptions, while "RAISERROR" can generate both user and system exceptions.
  • "sys.messages" table: "THROW" lets you supply ad-hoc text and does not need an entry in the "sys.messages" table, while "RAISERROR" needs an entry in the "sys.messages" table.
  • Original exception: with "THROW" the original exception is propagated to the client, while with "RAISERROR" the original exception is lost to the client.


http://www.codeproject.com/Articles/561797/Top-20-exciting-features-of-SQL-Server-2012-Part-2

Introduction
Feature number 6 (Evolution):- User defined roles
Feature number 7 (Evolution):- Windows server core support
Feature number 8 (Revolution):- Tabular Model (SSAS)
Feature number 9 (Revolution):- Power view
Feature number 10 (Revolution):- DQS Data quality services

Introduction

In part 1 of the SQL Server 2012 new features series we talked about five features: column store indexes, sequence objects, pagination, contained databases and error handling. In case you missed it, you can read about them at http://www.codeproject.com/Articles/526621/Top-20-exciting-features-of-SQL-Server-2012-Part-1
In this article we will talk about the next 5 interesting SQL Server 2012 features:-
  • User defined roles.
  • Windows server core support.
  • Tabular model (SSAS).
  • Power view.
  • DQS (Data quality services).

Feature number 6 (Evolution):- User defined roles

In SQL Server 2008 R2 we had the ability to create roles at the database level. So you could create customized roles at the database level and then assign them to users. But at the server (instance) level we did not have the option of creating server roles. So if you right-click on "Server Roles" you will not find any option for adding new server roles.

Now that's a serious limitation. Let's say you have two sets of database users: programmers and DBAs. The programmers should be able to fire insert, update and delete queries, while the DBAs should be able to create databases, take backups and do maintenance-related activities, but should not be able to fire insert, update and delete queries. Because we only have fixed server roles, the DBAs get more access than they need and can even fire insert, update and delete queries. In simple words, we need flexible roles.
In SQL Server 2012 you can create your own server roles and define customized permissions for them at a more granular level.

You can see in the below image how you can select permissions at a finer level and create customized roles which can later be assigned to users.
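The same can be scripted with T-SQL as well. Below is a minimal sketch; the role name "junior_dba", the login "MyDbaLogin" and the permissions chosen are just examples.
-- Create the custom server role
CREATE SERVER ROLE junior_dba;
GO
-- Grant only the permissions this role should have
GRANT CREATE ANY DATABASE TO junior_dba;
GRANT VIEW SERVER STATE TO junior_dba;
GO
-- Add an existing login as a member of the role
ALTER SERVER ROLE junior_dba ADD MEMBER MyDbaLogin;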
 

Feature number 7 (Evolution):- Windows server core support

This is a small evolution but an important one. Windows Server Core is one of the flavors of the Windows operating system: a GUI-less version of Windows. When you boot into Server Core you may be surprised to get a simple command prompt, as shown in the figure, instead of the Start menu, program files and crowded desktop shortcuts. Because only the necessary services are enabled, we get lower memory consumption, simplified management (as many features are not enabled) and greater stability. When we talk about SQL Server, we would love to run it over an operating system with minimal features enabled. SQL Server 2012 can be installed on Server Core, so this is a most welcome feature, and on production servers using Server Core is definitely the way to go.
 

Feature number 8 (Revolution):- Tabular Model (SSAS)

This is my personal top feature in SQL Server 2012. The main goal of SSAS (SQL Server Analysis Services) is to do analysis, i.e. convert data into information, and SSAS achieves this by creating CUBES from the data provided.
So the basic flow goes in 3 steps:-
  1. First, data is brought into a central database (the data warehouse) using SSIS packages. The data warehouse is normally designed as a snowflake or star schema so that we can create CUBEs effectively.
  2. Later, Analysis Services runs over the data warehouse to create CUBEs which give a multi-dimensional view of the data for better analysis.
  3. We can then run different clients like EXCEL, SSRS, etc. to display the data to different sections of users.

 
Can you guess one big potential problem with the above 3-step approach? Pause and think it over for a minute before you read ahead.
The biggest issue is that simple business users CANNOT CONTRIBUTE TO CUBES. I mean, if I am a business user who would like to take data from an Excel sheet, use my Excel formula skills, derive conclusions and publish cubes, how do I go about it? My personal belief is that the best business analysis can only be done by the business end users who actually do the business in the field. They are the best people to understand things and can create CUBES which are more useful and logical.
Also, if you look at the previous steps, they are highly technical:-
  • Can a simple business user create DB designs like snowflake / star schemas?
  • Can he use the complicated SSAS user interface to publish cubes?
  • Does he have the knowledge to use SQL Server's analysis capability?
Note: - We will change our vocabulary so that we are compatible with the Microsoft vocabulary: we will term simple business users as personal users henceforth.
Now personal users work most of the time with EXCEL, and if we really want to give analysis power to them, it should be inside Excel itself. That's what Power Pivot does. Power Pivot is a plugin which sits inside EXCEL and gives analytical capabilities to personal users so they can analyze the data they have in EXCEL.
Now EXCEL data is in a tabular format with rows and columns. So if you want to publish this kind of analyzed data from EXCEL, you need to have SSAS installed in tabular mode.


So now if you compare personal users with professional BI, the workflow will be the following:-
  • IMPORT
  1. Professional BI personnel will use SSIS, data flows, control flows, etc.
  2. Personal BI people can use import or copy-paste mechanisms to get data into EXCEL.
  • ANALYZE
  1. Professional BI personnel will use SSAS and BI algorithms to do the analysis. Once the analysis is done, they will publish it in a multi-dimensional format.
  2. Personal BI people will use Power Pivot and Excel formulas to come to an analysis. Once the analysis is done, they will publish it in a tabular format.
  • VIEW
At the end of the day both personal BI and SSAS publish in a CUBE format, so you can view the data from the CUBE using SSRS, EXCEL or any other mechanism.

So the personal BI user can use Power Pivot to do the analysis and then save it as a simple EXCEL file.

You can then select "Import from PowerPivot", point it to the Power Pivot EXCEL file and deploy the same in a tabular format.
 


 
To publish the same to tabular you can click on Build -> Deploy <tabular project name>.
 

Once deployed you should see the CUBE deployed in SSAS as shown in the below figure.
 

Because the CUBE is created from a tabular model, we cannot use MDX to query it. No worries, a new and simple query language has been introduced, called DAX (Data Analysis Expressions). You can see in the below figure how I have queried the "Sales 1" cube: a DAX query starts with the evaluate keyword, followed by brackets and then the cube (table) name.
This article will not go into DAX as our main concentration is SQL Server 2012's new features.
 

Feature number 9 (Revolution):- Power view

Every second project I have worked on in my life has wanted a system where end users can go and create their own custom reports. Even though we have a facility in SSRS for ad-hoc reporting, it has huge limitations: you need to install something on the client, it works only with the Windows operating system and Internet Explorer, etc.
Power View is created for the simple end user who would like to drag, drop and create his own reports in an ad-hoc way. It's a simple Silverlight plugin which gets downloaded, and you get a screen something like the one shown below. End users can now drag and drop fields from the right-hand side, create a report and publish it. Please note end users cannot add new fields; those have to be added from SSRS or Power Pivot.
This feature would have been my top feature, but due to a serious limitation it is not: "Power View only works with SharePoint". I am sure you are feeling hurt like me. I hope Microsoft makes this independent of SharePoint.


If we visualize it properly we can understand that the end GOAL of Microsoft is to empower simple business users so that they can do BI themselves. So a personal BI user can now get data into EXCEL, do analysis by using Power Pivot and finally create reports using the ad-hoc reporting tool Power View.

Feature number 10 (Revolution):- DQS Data quality services

This feature really touched my heart. When we talk about business intelligence it's all about DATA, DATA and DATA. One of the big problems with data is that it can come in crude and unpolished formats. For instance, if someone has entered "IND" you would like to change it to "India" so that the data is in a proper format. DQS helps you build a knowledge base for your data, and you can then use this knowledge base to do data cleaning. You can locate DQS as shown in the below image.
 

Once you open DQS you will find three sections as shown below: Knowledge base, Data quality projects and Administration.


The knowledge base is where you define your validation rules. For instance, you can see in the below figure how we are creating a validation called "CustomerCode" which checks that the data length is equal to 10.

You can also define correction rules like the one shown below: if you find data as "IND", change it to "India".
 

Once you have defined your knowledge base, the next step is to run it over the data. So create a DQS project and apply the knowledge base which you created, as shown in the below figure.


You can then define where the data comes from and map which columns have which validations. For instance, you can see in the below screen that for country and customer we have mapped different domains. Domains are nothing but validation rules.
 

Once done you can start the process, and you will see a progress screen as shown below with corrected values and suggested values.


Finally you can export the cleaned data to SQL Server, Excel or CSV.


In the next 5 features I will discuss AlwaysOn, troubleshooting in SSIS, varying columns, SSIS CDC support and SSIS parameter support.
If you ever get a chance, do visit my site, which has an awesome collection of SQL Server interview question and answer videos. I would also like to celebrate with everyone the 10,000-subscriber mark we reached on our YouTube channel, which is dedicated to C# and .NET interview questions.

Using SQL Server 2012 T-SQL New Features

13 Oct 2011
 

Introduction

SQL Server 2012 "Denali" is the next major release of Microsoft's database server. Some new features have been added to T-SQL to make common tasks much easier. I will show how to use some of the new features in this article.

Sequence

Generating a sequence number, a.k.a. an auto number, is a common task in an enterprise application. For a single table, you can specify an identity field; but if you want a database-wide sequential number, then before SQL Server 2012 you had to devise something by yourself. One solution to this problem is to create a table with a numeric field used to store the sequential number, and then use SQL to increase it every time one is used. In SQL Server 2012, we have a new solution - use a Sequence.
Create Sequence
To create a Sequence in SQL Server 2012 is very simple. You can create it with SQL Server Management Studio or T-SQL.
  1. Create Sequence with SQL Server Management Studio
    In the Object Explorer window of SQL Server Management Studio, there is a Sequences node under Database -> [Database Name] -> Programmability. You can right-click on it to bring up the context menu, and then choose New Sequence… to open the New Sequence window. In the New Sequence window, you can define the new Sequence: Sequence Name, Sequence schema, Data type, Precision, Start value, Increment by, etc. After entering all the required information, click OK to save it. The new Sequence will show up under the Sequences node.
  2. Create Sequence with T-SQL
    The following T-SQL script is used to create a new Sequence:
    CREATE SEQUENCE DemoSequence
    START WITH 1
    INCREMENT BY 1;
Use Sequence
The new NEXT VALUE FOR T-SQL keyword is used to get the next sequential number from a Sequence.
SELECT NEXT VALUE FOR DemoSequence;
One thing I want to mention here is that a Sequence is not affected by transactions. If you run this script:
BEGIN TRAN
SELECT NEXT VALUE FOR dbo.DemoSequence
ROLLBACK TRAN
you can see that even though the transaction is rolled back at the end, the next NEXT VALUE FOR call will still return the next sequential number; the rolled-back value is not reused. This behavior is consistent with identity fields.

Page Data

A common situation is displaying a large amount of data in a DataGrid. Earlier, the programmer usually used the paging feature of the DataGrid to handle this: by choosing a different page number, a different set of data is displayed on the screen. However, there are several ways to retrieve that data from the database. A developer could:
  1. Retrieve all data from the database, and then let the DataGrid display only the current page of data.
  2. Retrieve only the current page of data from the database by using a temp table.
  3. Retrieve only the current page of data from the database by using the ROW_NUMBER() function.
SQL Server 2012 provides a new way to retrieve the current page of data from the database:
SELECT *
FROM Customers
ORDER BY CustomerID
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY;
The OFFSET and FETCH NEXT keywords allow the developer to retrieve only a certain range of data from the database. If you compare this script with the ROW_NUMBER() approach available since SQL Server 2005, you can see this script is shorter and more intuitive.
SELECT *
FROM (
SELECT ROW_NUMBER() OVER(ORDER BY CustomerID) AS sequencenumber, *
FROM Customers) AS TempTable
WHERE sequencenumber > 10 and sequencenumber <= 20

Exception Handling

SQL Server 2005 introduced the TRY CATCH block to handle exceptions in T-SQL. The TRY CATCH block is similar to what we have in the C# language, except that you always need to raise a new exception after catching it; there is no way to simply re-throw it.
A sample of T-SQL script with exception handling in SQL Server 2005:
BEGIN TRY
 BEGIN TRANSACTION -- Start the transaction

 -- Delete the Customer
 DELETE FROM Customers
 WHERE CustomerID = 'CACTU'

 -- Commit the change
 COMMIT TRANSACTION
END TRY
BEGIN CATCH
 -- There is an error
 IF @@TRANCOUNT > 0
  ROLLBACK TRANSACTION

 -- Raise an error with the details of the exception
 DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int
 SELECT @ErrMsg = ERROR_MESSAGE(),
  @ErrSeverity = ERROR_SEVERITY()

 RAISERROR(@ErrMsg, @ErrSeverity, 1)
END CATCH
In SQL Server 2012, by using Throw keyword, the above script will be changed to this:
BEGIN TRY
 BEGIN TRANSACTION -- Start the transaction

 -- Delete the Customer
 DELETE FROM Customers
 WHERE CustomerID = 'CACTU'

 -- Commit the change
 COMMIT TRANSACTION
END TRY
BEGIN CATCH
 -- There is an error
 ROLLBACK TRANSACTION

 -- Re throw the exception
 THROW
END CATCH
Also, you can use Throw to replace RAISERROR function:
THROW 51000, 'The record does not exist.', 1;

Enhanced EXECUTE keyword

The EXECUTE keyword is used to execute a command string. Previous versions of SQL Server only had the WITH RECOMPILE option to force a new plan to be compiled. SQL Server 2012 dramatically improved this part; the option clause now looks like this:
[ WITH <execute_option> [ ,…n ] ]

<execute_option>::=
{
 RECOMPILE
 | { RESULT SETS UNDEFINED }
 | { RESULT SETS NONE }
 | { RESULT SETS ( <result_sets_definition> [,…n] ) }
}

<result_sets_definition> ::=
{
 (
  { column_name
    data_type
  [ COLLATE collation_name ]
  [ NULL | NOT NULL ] }
  [,…n ]
 )
 | AS OBJECT
  [ db_name . [ schema_name ] . | schema_name . ]
  {table_name | view_name | table_valued_function_name }
 | AS TYPE [ schema_name.]table_type_name
 | AS FOR XML
}
The way to use the newly added options is like this:
EXEC CustOrderDetail '2'
WITH RESULT SETS
(
 (
 ProductName1 varchar(100),
 Unitprice1 varchar(100),
 Quantity1 varchar(100),
 Discount1 varchar(100),
 ExtendedPrice1 varchar(100)
 )
);

Get Metadata

Applications sometimes need more insight into a SQL script's result set. In the past, you needed to write a complicated script to query system tables or views, e.g. sys.objects, to get all the information. In SQL Server 2012, the new system stored procedure sp_describe_first_result_set makes the work trivial.
EXEC sp_describe_first_result_set @tsql = N'SELECT * FROM Customers';
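The procedure also accepts a @params argument so you can describe parameterized statements; below is a sketch (the Customers columns and the @country parameter are just examples).
EXEC sp_describe_first_result_set
 @tsql = N'SELECT CustomerID, CompanyName FROM Customers WHERE Country = @country',
 @params = N'@country nvarchar(30)';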
