Saturday, October 31, 2015

Creating a data-tier application definition in SQL Server 2012

A data-tier application (DAC) in SQL Server defines the schema and objects that are required to support an application. Creating one is really quite simple.

There are two ways you can implement a DAC:
  • Using Microsoft Visual Studio 2010 with a SQL Server Data-tier application project type
  • Using the Extract Data-tier Application wizard in SQL Server Management Studio.
In Microsoft Visual Studio 2015 Community there are no templates for this, although you can import one if it exists in the community samples.

Here is how you extract a DAC using SQL Server Management Studio in SQL Server 2012.

Connect to SQL Server on your computer. Here is the Object Explorer of a named instance of SQL Server 2012.

Dac_01

We now create a DAC using the Northwind database.
Right click Northwind to open the menu and the sub-menu as shown.


Dac_02

Click on Extract Data-Tier Application... to display the Introduction screen of the Wizard.

Dac_03

The above screen pretty well describes the actions we take. It has the three steps:
  • Set the DAC properties
  • Review object dependency and validation results
  • Build the DAC Package
Click Next to display the Set Properties page of the wizard. When you choose the database, the application name defaults to the database name, and if this is the first time the version is also set.


Dac_04

It appears that I have already created a file with that name, so I will extract another with a different name, NrthWnd.

Dac_05

Click Next to display the validation and summary page of the wizard.

Dac_07

Click Next. After a bit of creating-and-saving animation, the process either succeeds or fails.

Dac_08

Click Finish (after you get to see the success of the operation) and the DAC package is saved to the location indicated.

Dac_09
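As an alternative to clicking through the wizard, the extraction can also be scripted. Here is a minimal sketch, assuming SqlPackage.exe from the DAC framework is available on the path; the server name and output path below are placeholders, not values from this post:

```python
# Sketch: build the SqlPackage.exe command line for a DAC Extract action.
# The server name and output path are placeholders for illustration only.
import subprocess

def build_extract_command(server, database, dacpac_path):
    """Assemble the argument list for SqlPackage.exe /Action:Extract."""
    return [
        "SqlPackage.exe",
        "/Action:Extract",
        f"/SourceServerName:{server}",
        f"/SourceDatabaseName:{database}",
        f"/TargetFile:{dacpac_path}",
    ]

cmd = build_extract_command(r".\SQLEXPRESS", "Northwind", r"C:\temp\NrthWnd.dacpac")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to run the extraction for real
```

Run this way, the command line should produce the same kind of .dacpac package the wizard saves.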

DAC files can be unpacked with the programs shown.

 Dac_10
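A .dacpac file is in fact a ZIP archive, so besides the programs shown it can be inspected programmatically. A small sketch using Python's standard library (the package path is a placeholder):

```python
# A DAC package (.dacpac) is a ZIP archive; its members (model.xml and
# related files) can be listed with the standard-library zipfile module.
import zipfile

def list_dacpac_contents(path):
    """Return the names of the files packed inside a DAC package."""
    with zipfile.ZipFile(path) as z:
        return z.namelist()

# Example (placeholder path):
# print(list_dacpac_contents(r"C:\temp\NrthWnd.dacpac"))
```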
 

Download SQL Server Data Tools Preview to work with SQL Server 2016 CTP3

SQL Server Data Tools Preview can be used with Visual Studio 2013 or Visual Studio 2015 to work with SQL Server 2016 CTP3. The preview for Visual Studio 2015 covers most of the features, and this post describes downloading and installing it.

SQL Server Data Tools can be installed standalone or integrated into an existing Visual Studio installation. If you have a Professional or higher edition of Visual Studio 2013 or 2015 and did not opt out of installing SSDT during setup, your computer should already have a version of SSDT.

Here is your master reference for SSDT.

If you are using Community editions then standalone is your only option. You can download the SSDT Preview from these links:

  • SSDT Preview for Visual Studio 2013
    SQL Platform Support:
    SQL Server 2005 – 2016 CTP 3.0
    Azure SQL Database
    Azure SQL Data Warehouse

  • SSDT Preview for Visual Studio 2015
    SQL Platform support:
    SQL Server 2005 – 2016 CTP 3.0
    Analysis Services for SQL Server 2008 - 2016 CTP 3.0
    Integration Services for SQL Server 2016 CTP 3.0
    Reporting Services for SQL Server 2008 - 2016 CTP 3.0
    Azure SQL Database
    Azure SQL Data Warehouse
Getting the SSDT Preview for Visual Studio 2015 should cover most of the ground. In preparing for SQL Server 2016 CTP3, the SSDT Preview was installed. Here are some installation screenshots.

Double click the downloaded SSDT Setup.exe to start downloading and installing.






Click Next. Accept the license terms as in the following.

 

Click Install. This is going to be a slow process. It starts off like this.




It downloads everything needed for CTP3.





It's done.







Lots of programs are added to the computer. Have a look at the Programs and Features list in the Control Panel.





It looks like Analysis Services, Reporting Services and Integration Services get integrated with Visual Studio 2015 Community.


Launching Visual Studio 2015 Community.

The start screen appears after a very long time; I hope it will be faster in the future. It starts off with the Object Browser.


The New Project window appears as shown with the BI | Analysis Services highlighted. 


OK. This is it. Now you can create BI projects with SQL Server 2016 CTP3 and other supported SQL Server Platform items.

Thursday, October 29, 2015

Get SQL Server 2016 CTP3 and work on SQL Server 2016 soon


The SQL Server Team blog on 10/18/2015 announced the availability of SQL Server 2016 CTP3, besides quite a few other items. This post relates to CTP3 only.

Microsoft has been very active in working on its database product SQL Server 2016 this year, as seen in the number of posts. It is still in CTP, but then it is not 2016 yet. These are the quoted key words for this version: 'faster transactions and queries, deeper insights on any device, advanced analytics, new security technology, and new hybrid cloud scenarios'.

http://hodentek.blogspot.com/2015/05/sql-serverr-2016-early-bird-special.html -5/13
http://hodentek.blogspot.com/2015/05/start-evaluating-sql-server-2016-ctp2.html -5/28
http://hodentek.blogspot.com/2015/07/sql-server-2016-ctp-21-is-ready-for.html -7/11
http://hodentek.blogspot.com/2015/07/upgrading-sql-server-2016-ctp-2-to-sql.html -7/11
http://hodentek.blogspot.com/2015/07/notes-2-sql-server-management-studio.html -7/13
http://hodentek.blogspot.com/2015/08/regarding-sql-server-2016-ctp-22.html -8/06
http://hodentek.blogspot.com/2015/09/download-sql-server-ctp-23-and-check.html -9/11

You can download from here after the sign-in ritual.
https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2016.
From the other links on the above site you can download the previous versions of SQL Server (2014 SP1 and 2012 SP1; both evaluation versions valid for 180 days).

The stated benefits of the CTP3 are the following:
  •     Enhanced in-memory performance provides up to 30x faster transactions, more than 100x faster queries than disk-based relational databases, and real-time operational analytics
  •     New Always Encrypted technology helps protect your data at rest and in motion, on-premises and in the cloud, with master keys sitting with the application, without application changes
  •     Built-in advanced analytics– provide the scalability and performance benefits of building and running your advanced analytics algorithms directly in the core SQL Server transactional database
  •     Business insights through rich visualizations on mobile devices with native apps for Windows, iOS and Android
  •     Simplify management of relational and non-relational data with ability to query both through standard T-SQL using PolyBase technology
  •     Stretch Database technology keeps more of your customer’s historical data at your fingertips by transparently stretching your warm and cold OLTP data to Microsoft Azure in a secure manner  without application changes
  •     Faster hybrid backups, high availability and disaster recovery scenarios to backup and restore your on-premises databases to Microsoft Azure and place your SQL Server AlwaysOn secondaries in Azure
File formats are 64-bit ISO or CAB. The installation is as usual, except that if you are using Microsoft Edge you cannot save the download; you just run it. However, you do get an option to name a directory (with a default) to save the extracted files.

Keep coming back to Hodentek and HodentekMSSS for more posts on SQL Server related information.

Monday, October 26, 2015

Idera's PowerShell Plus is a good tool for SQL Server DBAs

This is a useful tool as it can make your life easy as a DBA.
This post shows how to install and configure the tool.

Download IderaPowerShellPlusSetup-x64.zip (25 MB) from Idera's website after registering to get the free tool. Get the tool to match your OS architecture (herein x64, on a laptop running Windows 10).






Extract files to a location of your choice and double click to begin installation.



This tool helps to:
  • Create other useful tools
  • Automate repetitive tasks and save time
  • Support multi-server automation
  • Monitor and export/import files
  • Create PowerShell SMO scripts
  • Carry out compliance tasks

Configuring the tool
After downloading and running the program you may need to do some configuration.


 
PSP_05

Please configure as needed by clicking Next.

The first is related to the execution policy. Here the Unrestricted option is chosen.

PSP_6

Click Next. Configuring remoting configurations is next; none are added here.

PSP-7

Click Next. Configuring SQL Server connections is next.

PSP-8

In the above, click Add Server to bring up SQL Server Configuration. The server instance here is Hodentek8\RegencyPark with Windows Authentication. The ellipsis button may not guide you to the information.

PSP_9

Find the server/instance from SSMS or otherwise and enter it. The server gets added.


 PSP_9a
PSP_10

Click Next. Next in line is the configuration of email settings.

PSP_11

If you are an individual with an ISP, use the ISP's information.



PSP_11b

Click Test to verify that it works; check the email at the receiving end.

Finally the script execution warning note:

PSP_12

The wizard ends with a summary of the actions taken.

PSP_13

Click Finish. Now you have the PowerShellPlus UI to work with:

PSP_14

You get an app added to your All Apps menu in Windows 10.
Rock and Roll!

Saturday, October 10, 2015

Querying a SQL Server Database from R

As previously posted, you should have the RODBC package in your library. Once you have this it is very easy.
You also need to have an instance of SQL Server (herein 2012), or access to one from your computer. Further, it is assumed you have created an ODBC DSN (Data Source Name). Since you will be querying from R, you should have R installed on your computer.

Some relevant references to the above can be found here:
------------------------------------------------------
Windows 10 ODBC: http://hodentekmsss.blogspot.com/2015/08/windows-10-odbc-data-base-adminsitrators.html
SQL Server 2012 ODBC DSN: http://hodentekmsss.blogspot.com/2013/08/how-do-you-create-odbc-dsn-to-sql.html
SQL Server 2012 installation: http://hodentekmsss.blogspot.com/2014/07/sql-server-2012-developer-edition-is.html
Installing R: http://hodentekhelp.blogspot.com/2015/09/can-you-install-r-version-322.html
------------------------------------------------------

I already have an ODBC data source (DSN) created for a SQL Server 2012 database, called Feb12_15 (review the next image).
In the ODBC Data Source Administrator (x64) you can trace back and see how it was constructed, as the following images show.



Find the DSN in the ODBC Data Source Administrator as shown:



1.png

Double click this DSN in the ODBC Data Source Administrator window.
The SQL Server details get displayed as shown.


2.png
Click Next.
The authentication used is shown (Windows Authentication)

3.png
Click Next.
Default database and related configuration information is shown.


4.png
Click Next.
Some defaults used during DSN creation are shown.


5.png
Click Next.
Details of the ODBC data source are displayed.

6.png
Click Test and the test result is displayed.


7.png

Click OK and cancel all the open windows.
Launch the R-GUI from the short cut.
Connecting to ODBC Datasource
---------------------------
> library(RODBC)
> channel <-odbcConnect("Feb12-15")
Warning messages:
1: In odbcDriverConnect("DSN=Feb12-15") :
  [RODBC] ERROR: state IM002, code 0, message [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified 
2: In odbcDriverConnect("DSN=Feb12-15") : ODBC connection failed
    --an error due to a wrong DSN name, corrected in the next line.
> channel <-odbcConnect("Feb12_15")  -- DSN name is correct. If there are no errors an R prompt is returned
>
--------------------------------
Fetching a named table:
> sqlFetch(channel, "Customers")
   CustomerID                          CompanyName             ContactName
1       ALFKI                  Alfreds Futterkiste            Maria Anders
2       ANATR   Ana Trujillo Emparedados y helados            Ana Trujillo
3       ANTON              Antonio Moreno Taquería          Antonio Moreno
4       AROUT                      Around the Horn            Thomas Hardy
5       BERGS                   Berglunds snabbköp      Christina Berglund
6       BLAUS              Blauer See Delikatessen              Hanna Moos
7       BLONP             Blondesddsl père et fils      Frédérique Citeaux
8       BOLID            Bólido Comidas preparadas           Martín Sommer
(note: the full response is not copied here as there are too many columns to fit)
------------------------------------
Finding primary keys and running SQL queries
The following query is used as an example:


8.png
-----------
Finding Primary Keys
> sqlPrimaryKeys(channel, "Order Details")
  TABLE_CAT TABLE_SCHEM    TABLE_NAME COLUMN_NAME KEY_SEQ          PK_NAME
1  NORTHWND         dbo Order Details     OrderID       1 PK_Order_Details
2  NORTHWND         dbo Order Details   ProductID       2 PK_Order_Details
Submitting a query
> sqlQuery(channel, "Select FirstName, LastName, City from Employees")
  FirstName  LastName     City
1     Nancy   Davolio  Seattle
2    Andrew    Fuller   Tacoma
3     Janet Leverling Kirkland
4  Margaret   Peacock  Redmond
5    Steven  Buchanan   London
6   Michael    Suyama   London
7    Robert      King   London
8     Laura  Callahan  Seattle
9      Anne Dodsworth   London
Submitting a query with a WHERE clause
> sqlquery(channel, "Select FirstName, LastName from Employees where City='Seattle'")
Error: could not find function "sqlquery"
> sqlQuery(channel, "Select FirstName, LastName from Employees where City='Seattle'")
  FirstName LastName
1     Nancy  Davolio
2     Laura Callahan
>
--------------------
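The connect-and-query pattern shown in the transcript carries over to other scripting languages as well. Here is a self-contained sketch in Python, using the standard-library sqlite3 module as a stand-in for the SQL Server DSN; the Employees rows are a small subset mirroring the output above:

```python
# Self-contained sketch of the same connect-and-query pattern, with the
# standard-library sqlite3 module standing in for the SQL Server DSN.
# The Employees rows below mirror a few rows from the transcript.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (FirstName TEXT, LastName TEXT, City TEXT)")
conn.executemany(
    "INSERT INTO Employees VALUES (?, ?, ?)",
    [("Nancy", "Davolio", "Seattle"),
     ("Andrew", "Fuller", "Tacoma"),
     ("Laura", "Callahan", "Seattle")],
)

# Equivalent of sqlQuery(channel, "Select ... where City='Seattle'") in R
rows = conn.execute(
    "SELECT FirstName, LastName FROM Employees WHERE City = ?", ("Seattle",)
).fetchall()
print(rows)  # [('Nancy', 'Davolio'), ('Laura', 'Callahan')]
conn.close()
```

As in R, the result comes back as a tabular structure you can work with directly.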
This is a graphic from the R GUI (64-bit).


9.png
Read this important document.
https://cran.r-project.org/web/packages/RODBC/RODBC.pdf



Wednesday, October 7, 2015

U-SQL based on SCOPE is designed for Big Data

While SQL covered the RDBMS landscape, U-SQL covers a much larger data landscape.

At the same time as the announcement of Azure Data Lake Services, a new language under development at Microsoft, the U-SQL language was also announced. For the Azure Data Lake Service and what it means to business read here.

With the advent of Big Data and the task of mining all kinds of data, RDBMS suddenly found itself at a disadvantage. Structured Query Language (SQL) could only address what is in a relational data store. U-SQL was born to address this challenge posed by Big Data defined by volume, velocity and variety.

What is U-SQL
U-SQL deep dives into Big Data to extract the most relevant information. It is a powerful language (in the words of Microsoft):
  • Process any type of data. From analyzing BotNet attack patterns from security logs to extracting features from images and videos for machine learning, the language needs to enable you to work on any data.
  • Use custom code easily to express your complex, often proprietary business algorithms. The example scenarios above may all require custom processing that is often not easily expressed in standard query languages, ranging from user defined functions, to custom input and output formats.
  • Scale efficiently to any size of data without you focusing on scale-out topologies, plumbing code, or limitations of a specific distributed infrastructure.
Compared to Hive, a SQL-based language, U-SQL is flexible: it does not share Hive's limited ability to address the 'variety' in non-structured data, which requires schema generation prior to running queries. U-SQL should prove easier to use than Hive for complex scenarios.

U-SQL has been designed as a declarative, SQL-based language with native extensibility through user code in C#. This approach:
  • Unifies SQL and C#
  • Unifies structured and unstructured data
  • Unifies declarative and custom code
U-SQL is based on SCOPE, which in turn builds on existing languages: ANSI SQL, T-SQL and Hive. U-SQL should present a less steep learning curve for those who are using SQL already.

U-SQL is an important development that developers need to jump on.

Monday, October 5, 2015

If you know SQL then SCOPE is easy

SCOPE is an acronym for Structured Computations Optimized for Parallel Execution, a declarative language for working with large-scale data. It is still under development at Microsoft. If you know SQL then working with SCOPE will be quite easy as SCOPE builds on SQL.
The execution environment is different from that of RDBMS-oriented data.
Data is still modeled as rows. Every row has typed columns and every rowset has a well-defined schema. There is a SCOPE compiler that comes up with an optimized execution plan and a runtime execution plan.

Look at this QCount query in SCOPE:

SELECT query, COUNT(*) AS count
FROM "search.log" USING LogExtractor
GROUP BY query
HAVING count > 1000
ORDER BY count DESC;
OUTPUT TO "qcount.result";


You probably know most of it, and the rest you can guess.

In the above there is a built-in LogExtractor. You can follow it step by step, going line by line; each step's output is the input of the next step.
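To make the semantics of the QCount query concrete, here is a sketch of the same computation in plain Python over an in-memory stand-in for the search log; the sample entries and the threshold are illustrative, not taken from the original:

```python
# Sketch of the QCount query's semantics: group the log by query string,
# count occurrences, keep counts above a threshold, and sort descending.
# The sample log and the threshold are illustrative stand-ins.
from collections import Counter

search_log = ["sql", "scope", "sql", "azure", "sql", "scope"]
threshold = 1  # the SCOPE example uses HAVING count > 1000

counts = Counter(search_log)  # GROUP BY query, COUNT(*)
result = sorted(
    ((q, c) for q, c in counts.items() if c > threshold),  # HAVING count > threshold
    key=lambda qc: qc[1],
    reverse=True,  # ORDER BY count DESC
)
print(result)  # [('sql', 3), ('scope', 2)]
```

Each clause of the SCOPE query maps onto one step of this pipeline, which is what makes the language feel familiar to SQL users.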

SCOPE requires a software platform for storing and analyzing massive amounts of data, and Microsoft has one called 'Cosmos'. Here is a graphic of how SCOPE processing is carried out.

This post is based on the PDF document you will find here and the image is taken from the same PDF.

Sunday, October 4, 2015

Microsoft Azure expands storage options with Azure Data Lake

The recently announced Azure Data Lake addresses the big data 3V challenges: volume, velocity and variety. It is one more storage feature in addition to blobs and the SQL Azure database. Azure Data Lake (it should have been Azure Data Ocean, IMHO) is really all-encompassing. Just look at the key capabilities of Azure Data Lake:

Any Data
Native format, distributed data store. No need to pre-define schema information. From unstructured to structured data handling.

Any Size
Kilobytes to exabytes are OK. Ready for read/write.

At any scale
Scale to match your needs; high-volume data handling of small writes at low latency. It can address near real-time web analytics scenarios.

HDFS Compatible
Works out of the box with Hadoop, including services such as HDInsight

Full integration with Azure Active Directory
Supporting identity and access management over all of the data.

Azure Data Lake Store is therefore a hyper-scale HDFS repository designed specifically for big data analytics in the cloud. It is tailor-made for IoT and throughput-intensive analytics for high-volume data.

Read more here.
The graphic is from a Microsoft TechNet site.
I checked out the preview portal (https://portal.azure.com/), but I do not see it there yet. Possibly it will arrive by the end of the year.