Author Archive for Guest Blogger

18 Oct 2010

Chapter 38 — Successfully Implementing Kerberos Delegation

MVP Scott Stauffer does a fantastic job of explaining what Kerberos delegation is and the issues it solves, and he closes out the chapter with an extensive list of resources.

This is the best resource I have found for understanding Kerberos and how to implement it.  K. Brian Kelley has an excellent article on SQLServerCentral, but it is not quite as in-depth as this chapter.  Scott lays out the process logically, beginning with how and why he came to understand it himself, then taking you through what Kerberos is, what issues Kerberos delegation solves, and a business scenario that Kerberos delegation addresses.

Kerberos is a complex topic which Scott makes simple in his section Understanding Kerberos Delegation.  He explains SPNs (Service Principal Names), the permissions needed to work with SPNs, and the tools you will need.
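As a quick aside of my own (not something from the chapter): SPN work is usually done with the setspn utility, and from a PowerShell or command prompt it looks roughly like this. The domain, service account, and host name below are made up.

# List the SPNs currently registered for a (hypothetical) SQL Server service account
setspn -L MYDOMAIN\sqlsvc

# Register the SPN for the default instance on a (hypothetical) host, checking for duplicates first
setspn -S MSSQLSvc/sqlbox01.mydomain.com:1433 MYDOMAIN\sqlsvc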

Scott very clearly takes you through implementing Kerberos from beginning to end.  He includes great screenshots and code snippets, along with explaining WHY you need to go through each of the steps.

Scott also included a section on validating your delegation setup using a tool called DelegConfig.  This tool is run on the client and web tiers and will validate that Kerberos delegation is configured correctly and working. 

Finally, Scott provides three pages of resources for learning about and implementing Kerberos delegation, including tools, blog posts, TechNet articles, Microsoft Help & Support articles, white papers, and webcasts.

This chapter is really a one-stop shop for people who need to understand and implement Kerberos delegation. 

Jack is a SQL Server professional currently serving with New Tribes Mission in Sanford, FL. Jack has been working with SQL Server for over 10 years and has worked with versions 6.5 to 2008. Jack is active in the SQL Server Community as President of OPASS, a member of the 2010 PASS Summit Program Committee, and has spoken at several user groups and SQLSaturdays. Jack also blogs regularly about SQL Server and the SQL Server Community at http://wiseman-wiseguy.blogspot.com

02 Sep 2010

Chapter 31 — Practical issues in table partitioning

Over the past year I have had the opportunity to get deeply involved in database partitioning while developing a reporting system against my company's OLTP database.  As my data warehouse grew I quickly realized the benefits table partitioning could offer me. I appreciate SQL Perspectives allowing me the opportunity to post some of my thoughts on the benefits of partitioning. I enjoyed reading this chapter and I appreciate how Ron Talmage approached this subject. Partitioning can be a bear of a topic, and it is best approached one piece at a time.

Ron does a very good job of highlighting the benefit of the metadata operations available with partitioning. It is summed up best on page 419:

The Key: Avoid Data Movement    

“The key to loading and removing data quickly from partitioned tables is to use the metadata operations of SWITCH, SPLIT, and MERGE without incurring any data movement”.   

Any seasoned DBA will tell you what a nightmare it is to transfer large volumes of data between different tables. "Metadata operation" suddenly becomes two very beautiful words to a person confronted with this task. Taking advantage of metadata operations in table partitioning is huge. Below is a summary of the definitions from the chapter.

SWITCH   

  • 100% metadata operation; one side of the switch will always be empty.

SPLIT    

  • Safe if used to create new, empty partitions. SPLIT can be dangerous when splitting a partition loaded with data, because some of the data may move into a new partition depending on the range.

MERGE   

  • Safe if you merge an empty partition with a partition full of data. However, if you merge two partitions that both contain data, you are asking for a potentially very I/O-intensive operation.
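To make these three operations concrete, here is a minimal T-SQL sketch of my own (not from the chapter). It reuses the object names from the example at the end of this post, and dbo.SalesArchive is a hypothetical empty staging table with the same structure and filegroup placement as dbo.Sales.

-- SWITCH: move partition 1 out to an empty staging table (pure metadata, no data movement)
ALTER TABLE dbo.Sales SWITCH PARTITION 1 TO dbo.SalesArchive;

-- MERGE: remove the now-empty boundary so the empty partition disappears
ALTER PARTITION FUNCTION MonthlyDateRange() MERGE RANGE (1);

-- SPLIT: designate the next filegroup and add a new, empty partition for the next month
ALTER PARTITION SCHEME MonthScheme NEXT USED FG_FirstMonth;
ALTER PARTITION FUNCTION MonthlyDateRange() SPLIT RANGE (3);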

    

The above operations give us lots of power, but they can quickly turn very dangerous. Remember, with great power comes great responsibility. The truth of the matter is that table partitioning is very much an enterprise-level tool for use when designing a scalable database. If you are going to use it you must know what you are doing. Proper testing is the only way to ensure you actually get a performance benefit.

It is important to understand that adding a partition takes a schema modification lock on the table, so even a query running with a NOLOCK hint is going to be blocked. In certain circumstances the NOLOCK query will actually cause the partition creation to fail, and when that happens all of the benefits of partitioning go out the window: to fix it you have to SPLIT a partition that already contains data, which, as we discussed earlier, can be a very expensive operation. Something I have found that helps with this problem is the SET DEADLOCK_PRIORITY HIGH T-SQL statement. It tells SQL Server that this session's work is important and should be the last to be chosen as a deadlock victim. The last thing I want to see on my system is partitions failing to create, and the recovery cost (especially if it isn't caught right away) can be disastrous, so I do everything I can to prevent it. After we implemented this in our production system we rarely saw deadlock problems on partition creation.
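Here is a minimal sketch of that pattern as I use it, assuming the partition function and scheme created later in this post; the boundary value and filegroup are illustrative.

-- Make this session the least likely candidate to be chosen as a deadlock victim
SET DEADLOCK_PRIORITY HIGH;

-- Add the next partition (an empty SPLIT, so it stays a metadata-only operation)
ALTER PARTITION SCHEME MonthScheme NEXT USED FG_SecondMonth;
ALTER PARTITION FUNCTION MonthlyDateRange() SPLIT RANGE (3);

SET DEADLOCK_PRIORITY NORMAL;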

When designing a data warehouse from scratch there is a systematic process that, if followed, allows the database to have integrity, appropriate normalization/denormalization, and proper index creation for the foreseeable queries that will run against it. A couple of years ago my company assigned me the task of re-architecting our BI reporting system. Using the process mentioned above I created what I thought was a very manageable and, yes, very scalable reporting system. My reporting database hummed along for about 8 months without issue, other than the size of the database continuing to grow and grow, which is a good thing, right? It was shortly after this that a couple of the tables reached nearly 500 GB, and a few months later, as we brought on more customers, one of them grew to nearly a terabyte. It was at this point that I learned a painful lesson: my awesome reporting system had grown out of control, and I realized how non-scalable the database actually was. This was also about the time (as is always the case) management had learned to love and depend on the "new fabulous reports the data team created" that were based on my reporting system. Simply put, right or wrong, the reporting system was there to stay and I had to make it work.

Options? I quickly went through all of my procedures and tables, tuning and optimizing where I could. I did achieve some performance gains, but not enough to make me comfortable. I needed either more hardware resources on the box or a way to scale out the data. Obviously management doesn't like the idea of always throwing additional hardware at production problems.  Sometimes it is necessary, but you should always look at your process first and see if it isn't something the DBA/developer can optimize him/herself.

Our data team at my company has a saying: "Less data means less problems."  There is a need to scale the system by breaking up the data into smaller, more manageable chunks, which in turn leaves the queries with less data to sift through and improves their performance. I decided that I could either implement a version of my own partitioning by breaking up the data into smaller tables or take advantage of SQL Server's built-in partitioning capabilities. After taking Kimberly Tripp's database partitioning session at PASS I was obviously motivated to go the SQL Server route.

I spent about a week of serious research and dev time reading white papers and any good material on partitioning best practices. I then took a crack at redesigning the database schema to incorporate partitioned tables.  I saw huge performance increases such as queries finishing in a fraction of the time. However, these performance gains did not come initially. This leads me to the main point of this post.  There are about a million different ways to set up SQL Server table partitioning incorrectly and only a few ways to do it right.  Every step has to be done in the proper order to get the maximum benefit. The journey is tedious and as Ron mentioned on page 413 “table partitioning is a very involved topic and can quickly become complex”. However, when implemented correctly it is the difference between an awesome and scalable system and one that simply, in the end, disappoints.   

My experience taught me two facts:   

  1. Always align your indexes with the partition. It is truly an insane world when someone overlooks this step. An index that does not know about its partition is like a $100 bill stuck underneath the seat of your car: it has the potential to do a lot for you, but it goes unused. (See the sketch after this list.)
  2. Creating multiple filegroups multiplies the benefits of partitioning. Even though it is easier, don't just stick all of your data on the primary filegroup. You give up so much when this happens.
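Here is a minimal sketch of what point 1 looks like in practice, using the table and partition scheme from the example below; the index name and key columns are illustrative.

-- Creating the index on the partition scheme, with the partitioning column in the key,
-- keeps the index aligned with the table's partitions (and lets it participate in SWITCH)
CREATE NONCLUSTERED INDEX IX_Sales_ProductName
ON dbo.Sales (ProductName, SalesMonth)
ON MonthScheme (SalesMonth);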

    

Whenever I try to learn a new concept I try to find the most basic, yet complete, example of the topic at hand. Below I have listed the steps, in order, for creating a working partitioned table.

Below are the definitions and then the practical example.   

Create Filegroup(s)   

  • In this step we create the logical filegroup. Using multiple filegroups is the best way to implement table partitioning. Filegroups, for example, allow you to spread the table across different physical disks. Even in a VM environment you benefit from multithreading, assuming multiple CPUs are available to the OS.

Map the filegroup to a physical location   

  • In this step the logical filegroup is physically mapped to a specific database and file location.

    

Create Partition Function   

  • Defines the partition boundaries, or ranges.  A partition function has a one-to-many relationship with partition schemes.

Create Partition Scheme    

  • Defines on which filegroups the table’s multiple partitions will be stored.

Create Table on Partition Scheme   

  • In order for a table to be partitioned it has to be created on a particular partition scheme. In this step it is important to align the primary key of the table with the partition key (the partitioning column must be part of the clustered primary key).

Code Examples   

Create Filegroup(s)   

 
ALTER DATABASE DatabaseName ADD FILEGROUP FG_FirstMonth
ALTER DATABASE DatabaseName ADD FILEGROUP FG_SecondMonth

Map the filegroup to a physical location   

 
ALTER DATABASE DatabaseName ADD FILE
    (NAME = N'FG_FirstMonth',
     FILENAME = N'C:\PartitionFiles\MSSQL\DATA\FirstMonth.ndf')
TO FILEGROUP [FG_FirstMonth]

ALTER DATABASE DatabaseName ADD FILE
    (NAME = N'FG_SecondMonth',
     FILENAME = N'C:\PartitionFiles\MSSQL\DATA\SecondMonth.ndf')
TO FILEGROUP [FG_SecondMonth]

Create Partition Function   

 
CREATE PARTITION FUNCTION [MonthlyDateRange] (int)
AS RANGE LEFT FOR VALUES (1, 2)

Create Partition Scheme    

 
CREATE PARTITION SCHEME [MonthScheme]
AS PARTITION [MonthlyDateRange]
TO (FG_FirstMonth, FG_SecondMonth, [PRIMARY])

Create Table on Partition Scheme   

 
CREATE TABLE [dbo].[Sales]
(
    [RowID] [int] IDENTITY(1,1) NOT NULL,
    [ProductName] [int] NOT NULL,
    [CustomerName] [int] NOT NULL,  -- data type was missing in the original listing; int assumed here
    [SalesMonth] [int] NOT NULL,
    CONSTRAINT [PK_Sales] PRIMARY KEY CLUSTERED ([RowID] ASC, [SalesMonth] ASC)
)
ON [MonthScheme]([SalesMonth])

Thanks for the great chapter Ron.   

My name is Dustin Brown, and I have been working with SQL Server for the last 6 years. When I am not developing databases I enjoy spending time with my family and living the good life with my kids.  I live and work in Sonora, CA with my wife and 4 children. Please drop me a line at dustinkbrown@gmail.com

24 Aug 2010

PowerShell Review

I’ve been asked to write a review of the PS chapters in the Deep Dives book, so that’s what I’m going to do.  I’m not going to give a detailed analysis of each page or of each chapter individually, though.  Instead I’m going to concentrate on how PS was covered in the book as a whole.  And I’m going to give some alternate code samples for the ones offered by the authors.

For starters, I’m going to say that the first thing that went through my mind when I read these chapters is that the authors just don’t get it.  They clearly don’t understand what PS is all about, as is evidenced by their very complicated coding examples.  It’s clear to me that the point of both of these chapters was to introduce PS to the readers, because they both started off explaining PS from the ground up.  They were quick intros, but it’s still clear that the audience was the complete beginner.  And to that end, I think the chapters failed in their task.  Now before I go further let me say right here that both of these authors are smarter than I am.  I know Allen personally and I think I’ve met Richard (really can’t remember), but they’ve both been coding longer than I have so they’re both much better coders than I am.  And I’m not saying that to be flippant, or to be falsely modest.  I really mean both of these guys are smarter than me.  However, it’s their vast experience that hurts them when coding in something like PS, and honestly, if my first exposure to PS came from this book I probably wouldn’t pursue it any further.

The whole point of PS is to be able to do things with very small, succinct commands, and all of the examples here are very long and over-complicated.  The main reason to use PS is to shorten the code it takes to get something done.  So it’s usually the perfect replacement for T-SQL cursors and dynamic SQL.  Let’s look at a specific example though.  In ch. 27 Allen shows a way to back up all the databases on a server using SMO.  And while I use SMO all the time in PS, this isn’t one of those times when SMO is the best solution for the job.

For easy reference, here’s the script provided in the book. 

#backup.ps1
#Performs a Full backup followed by a transaction log backup on all user databases
param (
    [string]$srvname='MyServer\MyInstance'
)

# Load SMO assembly, and if we're running SQL 2008 DLLs load the SMOExtended and SQLWMIManagement libraries
$v = [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
$p = $v.FullName.Split(',')
$p1 = $p[1].Split('=')
$p2 = $p1[1].Split('.')
if ($p2[0] -ne '9') {
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended') | out-null
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SQLWMIManagement') | out-null
}

$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') $srvname
$bkdir = $s.Settings.BackupDirectory
$dbs = $s.Databases
foreach ($db in $dbs) {
    if ($db.IsSystemObject -eq $False -and $db.IsMirroringEnabled -eq $False) {
        $dbname = $db.Name
        $dt = get-date -format yyyyMMddHHmmss
        $bk = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
        $bk.Action = 'Database'
        $bk.BackupSetDescription = "Full backup of " + $dbname
        $bk.BackupSetName = $dbname + " Backup"
        $bk.Database = $dbname
        $bk.MediaDescription = "Disk"
        $bk.Devices.AddDevice($bkdir + "\" + $dbname + "_db_" + $dt + ".bak", 'File')
        $bk.SqlBackup($s)

        # Simple Recovery Model has a Value Property of 3
        # SQL Server 2008 doesn't recognize the enumerated value so the code is slightly different
        # Set a variable to run the transaction log backup, and if Simple, turn it off
        $trnbck = 1
        if ($p2[0] -eq '9')
        {
            if ($db.DatabaseOptions.RecoveryModel -eq 'Simple') { $trnbck = 0 }
        }
        else
        {
            if ($db.RecoveryModel.value__ -eq 3) { $trnbck = 0 }
        }
        if ($trnbck -eq 1) {
            $dt = get-date -format yyyyMMddHHmmss
            $trn = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
            $trn.Action = 'Log'
            $trn.BackupSetDescription = "Trans Log backup of " + $dbname
            $trn.BackupSetName = $dbname + " Backup"
            $trn.Database = $dbname
            $trn.MediaDescription = "Disk"
            $trn.Devices.AddDevice($bkdir + "\" + $dbname + "_tlog_" + $dt + ".trn", 'File')
            $trn.SqlBackup($s)
        }
    }
}

  

Now let’s look at a really simple script that will back up all the databases on a server without using SMO.  I haven’t put every element in there that Allen did, but it wouldn’t take much more code to duplicate it exactly, and it would still be much shorter and much easier to read.

# Run from the Databases node of the SQL Server provider (e.g. SQLSERVER:\SQL\MyServer\DEFAULT\Databases)
dir | %{$DB = $_.Name; $FileName = "C:\$DB.bak"; invoke-sqlcmd -Query "Backup Database [$DB] to disk = '$FileName'"}

So that’s really all the straight PS code it takes to back up every database on a server.  It not only demonstrates why PS is so popular, but it also makes it very easy to see why SMO isn’t the tool for this job.  And the biggest point here is: as a DBA, which code would you rather support and extend and troubleshoot?

And of course you can restore all the DBs just as easily.  And in true PS fashion, there are more ways than this to accomplish the same task.  One of the things I really like is to either mix PS with T-SQL or even mix it with straight DOS commands and T-SQL.  I’ll show you an example of that in a minute and you’ll see what I mean.
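For example, here is a rough sketch of my own (not from the book) of the same pattern for restores.  It assumes the .bak files created above are sitting in C:\ and that you are pointed at the same instance (either the local default instance or the provider path you were just in).

dir C:\*.bak | %{$DB = $_.BaseName; invoke-sqlcmd -Query "RESTORE DATABASE [$DB] FROM DISK = '$($_.FullName)' WITH REPLACE"}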

Creating a DB 

Creating a DB is another one of those tasks that I think is far too bloated in SMO.  Now, this isn’t Allen’s fault, it’s just how SMO works, but that’s why I say it’s not the right tool for this job.  Again, for your convenience I’ll provide the code from the book. 

#createdatabase.ps1
#Creates a new database using our specifications
param (
    [string]$srvname='MyServer\MyInstance',
    [string]$dbname='MyAppDB',
    [int]$datsize=25,
    [int]$maxsize=100,
    [int]$logsize=10
)

# Load SMO assembly, and if we're running SQL 2008 DLLs load the SMOExtended and SQLWMIManagement libraries
$v = [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
$p = $v.FullName.Split(',')
$p1 = $p[1].Split('=')
$p2 = $p1[1].Split('.')
if ($p2[0] -ne '9') {
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended') | out-null
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SQLWMIManagement') | out-null
}

$srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $srvname

# Instantiate the database object and add the filegroups
$db = new-object ('Microsoft.SqlServer.Management.Smo.Database') ($srv, $dbname)
$sysfg = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, 'PRIMARY')
$db.FileGroups.Add($sysfg)
$appfg = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, 'AppFG')
$db.FileGroups.Add($appfg)

# Create the file for the system tables
$dbdsysfile = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($sysfg, 'MyAppDB_SysData')
$sysfg.Files.Add($dbdsysfile)
$dbdsysfile.FileName = $srv.Information.MasterDBPath + '\MyAppDB_SysData.mdf'
$dbdsysfile.Size = [double](5.0 * 1024.0)
$dbdsysfile.GrowthType = 'KB'
$dbdsysfile.Growth = [double](5.0 * 1024.0)
$dbdsysfile.IsPrimaryFile = 'True'

# Create the file for the Application tables
$dbdappfile = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($appfg, 'MyAppDB_AppData')
$appfg.Files.Add($dbdappfile)
$dbdappfile.FileName = $srv.Information.MasterDBPath + '\MyAppDB_AppData.ndf'
$dbdappfile.Size = [double](25.0 * 1024.0)
$dbdappfile.GrowthType = 'KB'
$dbdappfile.Growth = [double]($datsize * 1024.0)
$dbdappfile.MaxSize = [double]($maxsize * 1024.0)

# Create the file for the log
$dblfile = new-object ('Microsoft.SqlServer.Management.Smo.LogFile') ($db, 'MyAppDB_Log')
$db.LogFiles.Add($dblfile)
$dblfile.FileName = $srv.Information.MasterDBLogPath + '\MyAppDB_Log.ldf'
$dblfile.Size = [double]($logsize * 1024.0)
$dblfile.GrowthType = 'KB'
$dblfile.Growth = [double]($logsize * 1024.0)

# Create the database
$db.Create()

# Set the default filegroup to AppFG
$appfg = $db.FileGroups['AppFG']
$appfg.IsDefault = $true
$appfg.Alter()
$db.Alter()

  

OK, wow, that’s a lot of code to create a DB, don’t you think?  This task can be accomplished much more easily and much faster by sticking to what you already know, and just using PS to make it more dynamic.  My thought process behind this is getting the job done quickly and in a way that’s easy to support even if you’re not a PS or coding god.  In other words, build on the skills you already have.  I have many solutions like this in my environment and the PS element is minimal.  Here I’m going to show you just one solution, but I can think of three other variations.  They all use PS to control the flow and the params, but they call the actual code slightly differently.  The reason PS should be minimal in this example is that you can’t beat T-SQL for this task.  And it’s hard to imagine a scenario where SMO has to be used to create a DB.  So don’t force PS or SMO into something they’re not supposed to do.  Just because there’s a method for it doesn’t mean you should use it.  That said, this could be useful if you needed to create the same DB on several servers, or if you wanted to create several DBs at once with different names.  You could easily modify this script (with MINIMAL coding) to accept a list of DBs and just cycle through them.

So here  I’m going to show you a “pure” PS version.  

$Server = "localhost"
$DBName = "MyDB"
$DataPath = "C:"
$LogPath = "C:"
$a = @"
CREATE DATABASE $DBName
ON
( NAME = $DBName`Data,
    FILENAME = '$DataPath\$DBName`Data.mdf',
    SIZE = 10,
    MAXSIZE = 50,
    FILEGROWTH = 5 )
LOG ON
( NAME = $DBName`Log,
    FILENAME = '$LogPath\$DBName`Log.ldf',
    SIZE = 5MB,
    MAXSIZE = 25MB,
    FILEGROWTH = 5MB )
"@
$a | out-file c:\DB.txt
invoke-sqlcmd -query $a -ServerInstance $Server

That wasn’t too bad, right?  What I did was keep the bulk of the code in T-SQL, where most DBAs are comfortable, and just use PS to control things.  This is a perfectly good way to do this kind of operation, and it has some advantages the straight SMO version doesn’t easily offer.  For starters, I can easily save the scripts created, keep a record of when and where and how they were created, and use them as T-SQL templates for other DB creations.  And more importantly, if there’s an error, I can easily jump into the T-SQL that was created and see where the issue is.  This code is also sharable among different processes.  Perhaps I just want to use the PS to create the script and have it run by something else.  Maybe I need to add it to my source control to have a record of how to re-create the DB in case of a disaster.  Whatever the reason, I now can easily have a record of what was created.

And of course, the biggest reason I love this method so much is that it is itself a template for anything else you want to do.  If you want to create PowerShell scripts that run whatever T-SQL you like, all you have to do is replace the T-SQL itself and change a couple of variables; the rest stays the same.  And of course changing this script to run on multiple servers is as easy as it can be.  So scripts like this don’t need SMO, and in fact they can be more effective without it.
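As a quick sketch of what I mean (the server names are made up), you can point the same here-string at a whole list of instances in one line:

'SQL01', 'SQL02', 'SQL03' | %{ invoke-sqlcmd -query $a -ServerInstance $_ }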

While I’m at it I’m going to jump into an example from ch.26.  Here Richard shows us how to connect to SQL Server from PS, but honestly, this is just cumbersome.  Here’s the code for your reference: 

$connString = "server=SQL1;Integrated Security=SSPI;database=adventureworks"
$cn = new-object "System.Data.SqlClient.SqlConnection" $connString
$cmd = new-object "System.Data.SqlClient.SqlCommand"
$cmd.CommandType = [System.Data.CommandType]"Text"
$cmd.CommandText = "SELECT * FROM Production.Product Where Color ='Silver'"
$cmd.Connection = $cn
$dt = new-object "System.Data.DataTable"
$cn.Open()
$rdr = $cmd.ExecuteReader()
$dt.Load($rdr)
$cn.Close()
$dt | Format-Table

I’m gonna go ahead and say in Richard’s defense that he does state that there are other ways to do this, but this is the way he knows works.  So I’ll give him props for at least saying that, but why would you make this the default way to connect to SQL from PS?  If I wanted to completely recreate this code for DBAs I would do it like this: 

$queryText = "SELECT * FROM Production.Product Where Color ='Silver'"
$a = invoke-sqlcmd -ServerInstance $ServerName -Query $queryText
$a | format-table

That’s it, 3 lines of code, and you can easily see I could have done it in 2, but I like to make the query text easy to get to.  Each method presented here has its merits, but I find that for everyday usage in a SQL Server environment, this one is the easiest to support, change and duplicate.  I don’t have to go searching for .NET code to reproduce this on the fly.  Of course, if you’re in a multi-platform environment and you’re constantly connecting to Oracle and other vendors, then Richard’s code may make more sense to you because .NET can easily handle that.  But that’s not the context given in the book, and it is a book for SQL guys, not network admins.  So here the assumption should be that you’re connecting to SQL Server in the easiest way possible and then show the harder methods… the exceptions, on down the road. 

Unfortunately, my honest opinion of the PS chapters in this book is that they mislead beginners into thinking that PS is harder than it is and that you have to jump through dozens of lines of code to perform simple tasks when that just isn’t the case.  I think both authors had a wonderful opportunity to show beginners how PS could make their lives much easier by shortening the code they have to write and support but instead they chose methods that were much longer than their traditional counterparts.  So those are my thoughts on the PS chapters in the MVP Deep Dives book.  Both of these authors are accomplished coders and these methods work for them.  What I wanted to do here was show you that there are alternatives for those of us who aren’t quite as good at those older, more complicated methods.  Either way you look at it though it really goes to show the awesomeness of PS because we can all code the way we like.  Allen and Richard can code the way they’re used to and I can code the way I’m used to and everybody’s happy.  It’s really the perfect framework because it knows how to get out of your way and just let you work.  And there are even more methods than I provided here.  So go explore your favorite ways to do these things and write a blog calling my code long-winded.  If you do, I’ll probably switch to your method.  

Sean McCown holds a double MCITP in SQL Server 2008 for both Administration and Development.  He is also a SQL Server MVP with over 15 years' experience with databases.  Sean is a contributing editor with InfoWorld and a frequent contributor to many community sites and forums.

Sean is also founder and co-owner of the popular database website MidnightDBA.com where he records free SQL Server training videos and co-hosts the popular webshow, DBAs@Midnight. 

Sean has also created content for TrainingSpot.com and TrainSignal, and moderates the Petri IT Knowledgebase SQL forums. Sean also speaks at various SQLSaturday events and is a board member of his local user group, the North Texas SQL Server User Group (NTSSUG).

16 Jun 2010

Guest Review by Tim Ford

Chapter 19 of the MVP Deep Dives book, written by SQL Server MVP Pawel Potasinski of Poland, covers custom keyboard shortcuts in SQL Server Management Studio (SSMS).  Pawel highlights the ability to assign keystroke combinations to a block of Transact-SQL code inside SSMS as a way to increase productivity by eliminating the need to retype code that we SQL Server DBAs use on a daily basis.

The process is simple: by navigating the menus you can launch a graphical user interface (GUI) that allows you to assign the code to the keystroke combination of your choosing.

 Pawel goes on to state that he has set up code shortcuts to:

  • Search for database objects by name
  • Script database objects
  • Select sample rows from a table

While it is a short chapter, it is interesting to see how others use productivity tools inside SQL Server Management Studio.
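As an illustration of my own (not Pawel's actual code), the first shortcut in that list could be something as simple as the query below bound to a keystroke; the search string is just a placeholder to swap out.

-- Find any object whose name contains a given string
SELECT name, type_desc, create_date
FROM sys.objects
WHERE name LIKE '%Customer%'
ORDER BY name;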

Tim Ford is a SQL Server MVP from Kalamazoo, Michigan and Principal SQL Server DBA and SME for Spectrum Health.  He’s been working with SQL Server in one form or another since the beginning of the century. (It sounds better than when he says 10 years.)  Tim is a volunteer for The Professional Association For SQL Server (PASS), the Chapter Leader for the West Michigan SQL Server Users Group, Blogger (thesqlagentman.com), Community Speaker, and Author.  Along with Brent Ozar – SQL Server MVP, MCM – he is the force behind SQL Cruise (SQLCruise.com) – specializing in Advanced SQL Server training in exotic locations.  Tim is active on Twitter as @SQLAgentMan and @SQL11.  What little free time he has is spent with his sons, wife, camera, and Zune.



