Thursday, December 31, 2015

Wishing you all the very best of everything in 2016

Hau’oli Makahiki Hou!

We wish our readers the very best in 2016 and a Happy New Year. We hope to continue writing interesting posts. HodentekMSSS blog mostly covers Microsoft SQL Server.

It was a very productive year, which started with the Introduction to Structured Query Language class at the beginning of the year. The HodentekMSSS blog has covered most of the developments in SQL Server, including its excellent tools.

The number of posts this year exceeded last year's, and with SQL Server 2016 there will be a lot to learn and write about. I hope to keep the number at the same level in 2016.

Most of my readers are from the USA, India, the UK, Canada, France, and Russia, with many others from other parts of the world. The most interesting and satisfying part of writing the blog is finding out who reads it. Seeing people from war-torn countries reading about something technical makes me very happy.

Reader locations for December 2015 from Cluster Maps


HodentekMSSS posts over the years

Group Photo: Introduction to Structured Query Language at PCATT 2015

Here is a group photo of the class who took this course during February-March of 2015 at the Pacific Center for Advanced Technology Training in Honolulu.

I wish all my students a very Happy New Year.

The next class is starting Jan 28, 2016

Wednesday, December 30, 2015

How is a row storage index different from a columnstore index?

First of all, the columnstore index is a new feature in SQL Server 2012 and later versions. This new index type is designed to speed up disk I/O in queries retrieving data from data warehouses, where it is typically used. With a columnstore index, data is stored in columnar fashion. Hence, if a table consists of a large number of columns but only a few columns are regularly called upon in a query, then having those columns indexed in columnar storage will be more efficient.
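As a minimal sketch of what creating one looks like, the statement below builds a nonclustered columnstore index over the few columns a typical warehouse query touches; the table and index names (FactSales, NCCI_FactSales) are hypothetical, not from any sample database:

```sql
-- Hypothetical fact table in a data warehouse
CREATE TABLE dbo.FactSales (
    SalesKey    INT   NOT NULL,
    DateKey     INT   NOT NULL,
    ProductKey  INT   NOT NULL,
    SalesAmount MONEY NOT NULL
);
GO
-- Nonclustered columnstore index over the columns queries call upon most
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (DateKey, ProductKey, SalesAmount);
GO
```

Queries that aggregate over DateKey, ProductKey, and SalesAmount can then read only those column segments instead of whole rows.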

This simple picture illustrates the two different index types.
Microsoft puts it this way:

"An xVelocity memory optimized columnstore index, groups and stores data for each column and then joins all the columns to complete the whole index. This differs from traditional indexes which group and store data for each row and then join all the rows to complete the whole index.
There are many restrictions on creating Columnar Storage Index, the first of which is that you can only do it if you have SQL Server 2012 Enterprise edition.

These are the basic restrictions for the columnstore index:
  • Cannot have more than 1024 columns.
  • Cannot be clustered. Only nonclustered columnstore indexes are available.
  • Cannot be a unique index.
  • Cannot be created on a view or indexed view.
  • Cannot include a sparse column.
  • Cannot act as a primary key or a foreign key.
  • Cannot be changed using the ALTER INDEX statement. Drop and re-create the columnstore index instead. (You can use ALTER INDEX to disable and rebuild a columnstore index.)
  • Cannot be created with the INCLUDE keyword.
  • Cannot include the ASC or DESC keywords for sorting the index. Columnstore indexes are ordered according to the compression algorithms. Sorting is not allowed in the index. Values selected from a columnstore index might be sorted by the search algorithm, but you must use the ORDER BY clause to guarantee sorting of a result set.
  • Does not use or keep statistics in the manner of a traditional index.
  • Cannot contain a column with a FILESTREAM attribute. Other columns in the table that are not used in the index can contain the FILESTREAM attribute.
  • Also, a table with a columnstore index cannot be updated; you must first disable or drop the index.
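The last restriction has a common workaround in SQL Server 2012: disable the index, perform the updates, then rebuild it. A sketch, with hypothetical table and index names:

```sql
-- Disable the columnstore index (names here are hypothetical)
ALTER INDEX NCCI_FactSales ON dbo.FactSales DISABLE;
GO
-- The table is writable again while the index is disabled
UPDATE dbo.FactSales
SET SalesAmount = SalesAmount * 1.1
WHERE DateKey = 20151231;
GO
-- Rebuild to make the columnstore index usable for queries once more
ALTER INDEX NCCI_FactSales ON dbo.FactSales REBUILD;
GO
```

Note that the rebuild can take time on a large fact table, so this pattern suits periodic batch loads rather than frequent updates.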
More info here.

Tuesday, December 29, 2015

Creating and Executing a SSIS Project Package - Part 3

In the previous part, Part 2, we successfully configured an SSIS package consisting of an ADO.NET Source and a Recordset Destination. However, we did not see the rows returned from the data source, although we know it returned 8 rows.

In this post we will add a Data Viewer to monitor the data going from the Data Source to the Recordset. The data flowing through is displayed in the Data Viewer, from which you may copy it.

Adding a Data Viewer to the design

Highlight the arrow connecting the ADO.NET Source to the Recordset Destination. This line is called the Data Flow Path.

Data Viewer_01

Right-click the Data Flow Path and pick Enable Data Viewer from the list. The Data Viewer sits on the Data Flow Path as shown.

Data Viewer_02

Right click on an empty spot in the designer and pick Execute Task.

The task is processed, and the data flowing out of the data source first appears in the Data Viewer. The data is shown in the ADO.NET Source Output Data Viewer of the Data Flow Task window as shown. The data flow is temporarily stopped here.
Data Viewer_03

This is an intermediate point in the data flow; when you click Detach (the button changes to Attach), the task runs to completion as shown. You might also have noticed that although we retrieved only a few columns into memory, the Data Viewer is showing all the columns.

The Data Viewer is a useful device for monitoring data flows in SSIS in Microsoft BI.

Monday, December 28, 2015

Creating and Executing a SSIS Project Package - Part 2

In the previous part, Part 1 we configured the ADO.NET Source to get the data from the local instance of SQL Server 2012.

In this post we will configure a Recordset Destination to get the input of data from the above source. We will also configure how much of the data from the source should be sent to the Recordset Destination.

The Recordset Destination is an object which stores the information it receives in memory and is characterized by a variable of type Object. After the recordset saves the data, you will have to use a Foreach Loop container or some other means to read the data.

Let us start inserting the Recordset Destination.

Adding a Recordset Destination

Click Recordset Destination under Other Destinations in the SSIS Toolbox shown here. Hold and drag it over to the Data Flow tab of the Package Designer (Package.dtsx [Design]).


After dropping Recordset Destination the designer appears as shown.


Configuring the Recordset Destination

You will notice the red circle with an x in it. This means the object is not configured.
Drag the dangling arrow (the one pointing downwards from the ADO.NET Source) to touch the Recordset Destination and let it go. The ADO.NET Source and the Recordset Destination are now connected solidly, and the Recordset is ready to take in the data from the source.

Right click the Recordset Destination to display the list shown:

Click Edit...

The Advanced Editor for Recordset Destination gets displayed as shown.


In the Component Properties tab, go down the list and provide a variable name for this object, let us say X (we may have to make changes to this later). You may accept the others as is (in the scope of the present post).


Click on Input Columns tab on the Advanced Editor.
The columns coming from the ADO.NET Source are shown in the top pane. The error at the bottom says that you should choose at least one column.


Place check marks for CategoryID, CategoryName and Description columns in the top pane. They get shown in the bottom pane.

The bottom pane has three columns: the Input Column, the Output Alias (if you want, you can change the alias by overwriting it), and the Usage Type, set to ReadOnly. If you are making changes, you can change this to ReadWrite.

Now click the Refresh button. The message goes away and you are now in the clear.

Click OK on the Advanced Editor window.

When you come back, the Recordset Destination is still showing the red icon. Although you associated the Recordset with a variable, the variable is not associated with the package.

Right click an empty area (as shown) in the designer to bring up a pop-up pick list as shown.

Click Variables.
The Variables window is displayed as shown.


Click the first icon at the top (hovering over it will show Add Variable).
The Variable window changes to the following:


The name is the same one we gave to the Recordset and the data type is Object and Data Value is System.Object.

The object name X as given is not completely defined. Go back to the Advanced Editor and click the Variable Name drop-down. Choose the notation User::X as shown. Click OK.


Now the error on the Recordset Destination is gone. Now you are ready to execute. If you do not see the columns, do not worry as the data is in the memory.

Executing the Task

It is time to test the Data Flow Task we have configured. Click on an empty area in the designer and click Execute Task.

This may take some time. The design area becomes blank and after a little while you will see the following:

Well, where are the rows from the source transferred to the Recordset?

We will do it in another post.

For now, click on Progress in the previous image. We see that 8 rows were copied over to the Recordset Destination.

This completes this post. In the next we look at displaying the rows copied.

Sunday, December 27, 2015

Creating and Executing a SSIS Project Package - Part 1

This post will describe the following:

1. Creating an SSIS package to retrieve a few columns from a SQL Server 2012 database source and send them to a Recordset Destination. Only the source configuration is described here; in Part 2, the Recordset Destination will be described, followed by the next step.
2. Executing the package in the first step.

The preparatory steps for this process are the following:

1. You should install SQL Server 2012 and attach a database to work with.
  • Installing SQL Server 2012 step-by-step described here
  • Attaching a sample database is described here
  • Installing a database using script as described here
2. You should preferably have the latest SQL Server Data Tools (SSDT), although an earlier version will do; either a shell program or part of a Visual Studio installation (2012, 2013, or 2015).
  • Installing SSDT is described here

What will this package do?

This package connects to the SQL Server instance and extracts some data from the server, which is the source of data. The extracted data is then routed to the destination, here the simplest one, the Recordset object.

In order to get the data out of SQL Server 2012, we will use an ADO.NET Source from the SSIS Toolbox.

For the destination we will use the Recordset Destination also in the same Toolbox.

Step-by-step procedure to configure the ADO.NET Source:

We take the source and the destination, place them on the designer pane of the SQL Server Integration Services user interface, and establish connections in such a way that data flows from the Source to the Destination. This is true of most data flows. During the flow, the data can be monitored.

Click on the Control Flow tab and, from the Toolbox, drag and drop a Data Flow Task onto the designer pane as shown. After dropping, the Data Flow Task will be displayed in the designer as shown.


Right-click the Data Flow Task to display the options in the pop-up as shown.


Click Edit and the tab changes to Data Flow as shown. The bottom pane is reserved for the Connection Manager, which manages the connection information for the data sources and data destinations.


Move over to the bottom pane Connection Managers and right click to add a new connection manager to the SSIS Package as shown.


Click New ADO NET Connection... The Configure ADO.NET Connection Manager window is displayed as shown.


Click the New... button at the bottom. The Connection Manager window is displayed on top of the previous window as shown.


The data provider is .NET Providers\SqlClient Data Provider. The Server name: field is blank. Click Refresh and then click the drop-down handle to the left of it. The installed SQL Server instance is displayed as shown.


At the same time, the Select or enter a database name: field becomes active. Click on the handle in this field, and the list of databases on this instance of SQL Server is displayed as shown.


Herein the database Northwind is chosen (you can choose any). When you click on a database in the list, that database becomes the selected one. The Log on to the server setting worked with the first option (Use Windows Authentication) because the database list was displayed (had it been wrong, the list would not appear). If you go back and look at the SQL Server 2012 installation, you will notice that Windows Authentication was the choice at installation time.

Click Test Connection button.

The success of the test is reported in the message shown.


Click OK on the message as well as on the Connection Manager window.
The Configure ADO.NET Connection Manager window gets filled with the required information as shown.


Click OK. The Configure ADO.NET Connection Manager window closes, and the connection gets added to the Connection Managers pane as shown.


Note: There may be a problem in this interface; the ADO.NET Source that was added should be in the design pane, but it had disappeared. It was added again as shown.


Right-click the ADO.NET Source (the red circle with the cross means it is not completely configured) to display the pop-up window.


Click Edit...
The ADO.NET Source Editor is displayed as shown. The ADO.NET Connection Manager field is already updated. You can access a table or a view, and since the connection is already to the database Northwind, the table or view will be one in that database.

Click on the handle for Name of the table or the view: as shown. All the tables and views will appear (only some are in the next image; scroll down to see more).


Choose a table. Herein the Categories table is chosen.

On the left is the navigation. We just completed the Connection Manager. Click on Columns to display the Columns from this table (Categories).
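In terms of plain T-SQL, the rows this source will hand to the data flow are roughly what the following query returns when run against the Northwind database in SSMS (a sketch for orientation only; the SSIS source is configured through the editor, not through a script):

```sql
USE Northwind;
GO
-- Roughly equivalent to the table access the ADO.NET Source performs
SELECT CategoryID, CategoryName, Description
FROM dbo.Categories;
```

Northwind's Categories table holds 8 rows, which matches the row count seen later when the package executes.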


The Error Output will assume the following. Just accept this default for now.


Click OK.

The ADO.NET Source is configured now. There are two downward-pointing arrows, one blue and one red.

In Part 2 we will consider the Recordset Destination in detail.

Friday, December 25, 2015

SQL Server Database files on Microsoft Azure

Hybrid cloud scenarios bring on-premises data and the Azure cloud together through the integration provided by the hybrid features.

SQL Server 2014 has the features that allow you to create a hybrid cloud and move at your own pace from on-premises to the cloud.

Placing your on-premises data files in Azure storage as blobs has many advantages besides cost considerations:

Easy and fast migration benefits:
  • Use Attach/Detach to move databases between machines
Storage Benefits
  • Excellent and limitless Azure Blob Storage
  • Inexpensive and high availability of storage
  • Disaster recovery including geo-locations
  • Built-in manageability resulting in lowered cost
Security Benefits:
  • Encrypt all data on public cloud using Transparent Data Encryption certificates stored locally
Snapshot backup
  • Snapshot backup enables near-instantaneous backups and quicker restores
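A sketch of what this looks like in SQL Server 2014: a credential named after the container URL holds a Shared Access Signature, and the database files are then created directly in blob storage. The storage account, container, and SAS token below are placeholders you would substitute with your own:

```sql
-- Credential named after the container URL; secret is a SAS token (placeholder)
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/mycontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<shared-access-signature-token>';
GO
-- Create a database whose data and log files live in Azure Blob storage
CREATE DATABASE HybridDb
ON (NAME = HybridDb_data,
    FILENAME = 'https://myaccount.blob.core.windows.net/mycontainer/HybridDb_data.mdf')
LOG ON (NAME = HybridDb_log,
    FILENAME = 'https://myaccount.blob.core.windows.net/mycontainer/HybridDb_log.ldf');
GO
```

With the files in blob storage, moving the database to another machine is just a detach on one instance and an attach on another pointing at the same URLs.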
Read more here.

Thursday, December 24, 2015

Integration Services Server (SSIS) in SQL Server 2012 and SSISDB

The Integration Services server is basically the SQL Server Database Engine hosting a special database, SSISDB, which does not allow replication or mirroring. The database stores the following objects:

  • Packages
  • Projects
  • Parameters
  • Permissions
  • Server Properties
  • Operational History
While installing SQL Server 2012, you would have installed SQL Server Integration Services as shown here:


SSISDB provides public views that allow querying, and for manageability it allows creating stored procedures. SSISDB has to be in place before you deploy SSIS projects.
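For instance, once projects are deployed, the public catalog views can be queried from SSMS like any other database. A sketch (run against SSISDB; results depend on what you have deployed and executed):

```sql
USE SSISDB;
GO
-- Deployed projects and the packages they contain
SELECT p.name AS project_name,
       pkg.name AS package_name
FROM catalog.projects AS p
JOIN catalog.packages AS pkg
  ON pkg.project_id = p.project_id;
GO
-- Recent operations (the operational history mentioned above)
SELECT TOP (10) operation_id, operation_type, status, start_time
FROM catalog.operations
ORDER BY start_time DESC;
```

This is how the objects listed above (projects, packages, operational history) are exposed for querying.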

The Packages are created using SQL Server Data Tools and deployed to SSIS. It assumes you have created the SSISDB before deploying the SSIS Projects.

Where is the SSISDB or how do you create the database SSISDB?

Assuming you have installed SQL Server 2012 as in the previously mentioned link, you launch SQL Server Management Studio and change over from connecting to the Database Engine to Integration Services as shown.


Since you installed with Windows Authentication, just click Connect.

Oops! you may get this message.


Now go ahead and launch SQL Server Management Studio with elevated permissions (Run as administrator). Repeat connecting to Integration Services as before. You may get this message if SQL Server Integration Services has not started.


Start the SQL Server Integration Services service in Control Panel|..|Services as shown.


Click Start to start the SQL Server Integration Services 11.0. It processes the information and status changes to Running.

Now connect to the Integration Services as before (after launching the SSMS in Administrative mode).

The first of the nodes in the Object Explorer is the Integration Services. This is an expanded view of the Integration Services Server.

This is a named instance of SQL Server that will host the SSISDB.


Creating the SSISDB

Connect to the named instance of SQL Server 2012, herein Hodentek8\RegencyPark.
Right-click Integration Services Catalogs and click Create Catalog... from the drop-down menu as shown above. The following window will be displayed.


Place a check mark for Enable CLR Integration. Leave the catalog database name as is and create a password to protect the data using encryption (enter and retype the password). Save this information in a secure place (under lock and key). Click OK.

Now the SSISDB gets created as shown.

This is an expanded view of the SSISDB in the Object Explorer of the named instance.
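As a quick sanity check (optional; the wizard does not require it), you can confirm the catalog database now exists by querying the instance from a regular Database Engine connection:

```sql
-- Returns one row if the SSIS catalog database was created
SELECT name, create_date
FROM sys.databases
WHERE name = N'SSISDB';
```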


In the next post a SSIS Project creation will be described that can be deployed to the Integration Services Server.

Get a jump start on SSIS (the version is old but the concepts are the same).

Read here:

What is the latest MySQL Version available?

The latest version for Windows (x86) version of MSI installer can be found here: ...