24 Aug 10

Powershell Review

I’ve been asked to write a review of the PS chapters in the Deep Dives book, so that’s what I’m going to do. I’m not going to give a detailed analysis of each page or of each chapter individually, though. Instead I’m going to concentrate on how PS was covered in the book as a whole. And I’m going to give some alternate code samples for the ones offered by the authors.

For starters, I’m going to say that the first thing that went through my mind when I read these chapters is that the authors just don’t get it. They clearly don’t understand what PS is all about, as is evidenced by their very complicated coding examples. It’s clear to me that the point of both of these chapters was to introduce PS to the readers, because they both started off explaining PS from the ground up. They were quick intros, but it’s still clear that the audience was the complete beginner. And to that end, I think the chapters failed in their task. Now before I go further, let me say right here that both of these authors are smarter than I am. I know Allen personally and I think I’ve met Richard (really can’t remember), but they’ve both been coding longer than I have, so they’re both much better coders than I am. And I’m not saying that to be flippant, or to be falsely modest. I really mean both of these guys are smarter than me. However, it’s their vast experience that hurts them when coding in something like PS, and honestly, if my first exposure to PS came from this book I probably wouldn’t pursue it any further.

The whole point of PS is to be able to do things with very small, succinct commands, and all of the examples here are very long and over-complicated. The main reason to use PS is to shorten the code it takes to get something done, so it’s usually the perfect replacement for T-SQL cursors and dynamic SQL. Let’s look at a specific example though. In ch.27 Allen shows a way to back up all the databases on a server using SMO, and while I use SMO all the time in PS, this isn’t one of those times when SMO is the best solution for the job.
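Before we dig into Allen’s script, here’s a quick taste of what I mean by succinct. This is a throwaway sketch, not production code: the server name and instance path are made up, and it assumes the SQL Server snap-in is loaded so dir works against the provider.

# Where T-SQL needs a cursor over sys.databases plus dynamic SQL, PS just pipes.
dir SQLSERVER:\SQL\MyServer\DEFAULT\Databases |
    %{ invoke-sqlcmd -ServerInstance "MyServer" -query "DBCC CHECKDB([$($_.Name)])" }

One pipeline replaces the whole cursor-and-EXEC dance.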

For easy reference, here’s the script provided in the book. 

#backup.ps1
#Performs a Full backup followed by a transaction log backup on all user databases
param (
    [string]$srvname='MyServer\MyInstance'
)
# Load SMO assembly, and if we're running SQL 2008 DLLs load the SMOExtended and SQLWMIManagement libraries
$v = [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
$p = $v.FullName.Split(',')
$p1 = $p[1].Split('=')
$p2 = $p1[1].Split('.')
if ($p2[0] -ne '9') {
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended') | out-null
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SQLWMIManagement') | out-null
}
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') $srvname
$bkdir = $s.Settings.BackupDirectory
$dbs = $s.Databases
foreach ($db in $dbs) {
    if ($db.IsSystemObject -eq $False -and $db.IsMirroringEnabled -eq $False) {
        $dbname = $db.Name
        $dt = get-date -format yyyyMMddHHmmss
        $bk = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
        $bk.Action = 'Database'
        $bk.BackupSetDescription = "Full backup of " + $dbname
        $bk.BackupSetName = $dbname + " Backup"
        $bk.Database = $dbname
        $bk.MediaDescription = "Disk"
        $bk.Devices.AddDevice($bkdir + "\" + $dbname + "_db_" + $dt + ".bak", 'File')
        $bk.SqlBackup($s)
        # Simple Recovery Model has a Value Property of 3
        # SQL Server 2008 doesn't recognize the enumerated value so the code is slightly different
        # Set a variable to run the transaction log backup, and if Simple, turn it off
        $trnbck = 1
        if ($p2[0] -eq '9')
        {
            if ($db.DatabaseOptions.RecoveryModel -eq 'Simple') { $trnbck = 0 }
        }
        else
        {
            if ($db.RecoveryModel.value__ -eq 3) { $trnbck = 0 }
        }
        if ($trnbck -eq 1) {
            $dt = get-date -format yyyyMMddHHmmss
            $trn = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
            $trn.Action = 'Log'
            $trn.BackupSetDescription = "Trans Log backup of " + $dbname
            $trn.BackupSetName = $dbname + " Backup"
            $trn.Database = $dbname
            $trn.MediaDescription = "Disk"
            $trn.Devices.AddDevice($bkdir + "\" + $dbname + "_tlog_" + $dt + ".trn", 'File')
            $trn.SqlBackup($s)
        }
    }
}

Now let’s look at a really simple script that will back up all the databases on a server without using SMO. It assumes you’re sitting in the Databases node of the SQL Server provider (for example, SQLSERVER:\SQL\MyServer\DEFAULT\Databases), so dir returns database objects. I haven’t put every element in there that Allen did, but it wouldn’t take much more code to duplicate it exactly, and it would still be much shorter and much easier to read.

Dir | %{$DB = $_.Name; $FileName = "C:\$DB.bak"; invoke-sqlcmd -query "Backup Database [$DB] to disk = '$FileName'"}

So that’s really all the straight PS code it takes to back up every database on a server. It not only demonstrates why PS is so popular, but it also makes it easy to see why SMO isn’t the right tool for this job. And the biggest point here is: as a DBA, which code would you rather support, extend, and troubleshoot?
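And if you want it to look more like Allen’s version, it still doesn’t take much. Here’s a rough sketch that adds timestamped filenames and explicitly skips anything marked as a system object. Same assumption as above: you’re sitting in the Databases node of the SQL Server provider, and the backup path is just a placeholder.

dir | ?{!$_.IsSystemObject} | %{
    $dt = get-date -format yyyyMMddHHmmss
    invoke-sqlcmd -query "Backup Database [$($_.Name)] to disk = 'C:\$($_.Name)_db_$dt.bak'"
}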

And of course you can restore all the DBs just as easily. And in true PS fashion, there are more ways than this to accomplish the same task. One of the things I really like is to either mix PS with T-SQL, or even mix it with straight DOS commands and T-SQL. I’ll show you an example of that in a minute and you’ll see what I mean.
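Just to prove the restore claim before moving on, here’s a rough sketch of a restore loop. The folder and the file-naming convention are made up, and in real life you’d add WITH MOVE options and make sure nobody’s in the databases, but the shape is the same:

# Restore every .bak in a folder, naming each DB after its file.
dir C:\SQLBackups\*.bak | %{
    $db = $_.BaseName
    invoke-sqlcmd -ServerInstance "MyServer" -query "Restore Database [$db] from disk = '$($_.FullName)' with replace"
}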

Creating a DB 

Creating a DB is another one of those tasks that I think is far too bloated in SMO.  Now, this isn’t Allen’s fault, it’s just how SMO works, but that’s why I say it’s not the right tool for this job.  Again, for your convenience I’ll provide the code from the book. 

#createdatabase.ps1
#Creates a new database using our specifications
param (
    [string]$srvname='MyServer\MyInstance',
    [string]$dbname='MyAppDB',
    [int]$datsize=25,
    [int]$maxsize=100,
    [int]$logsize=10
)
# Load SMO assembly, and if we're running SQL 2008 DLLs load the SMOExtended and SQLWMIManagement libraries
$v = [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
$p = $v.FullName.Split(',')
$p1 = $p[1].Split('=')
$p2 = $p1[1].Split('.')
if ($p2[0] -ne '9') {
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended') | out-null
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SQLWMIManagement') | out-null
}
$srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $srvname
# Instantiate the database object and add the filegroups
$db = new-object ('Microsoft.SqlServer.Management.Smo.Database') ($srv, $dbname)
$sysfg = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, 'PRIMARY')
$db.FileGroups.Add($sysfg)
$appfg = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, 'AppFG')
$db.FileGroups.Add($appfg)
# Create the file for the system tables
$dbdsysfile = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($sysfg, 'MyAppDB_SysData')
$sysfg.Files.Add($dbdsysfile)
$dbdsysfile.FileName = $srv.Information.MasterDBPath + '\MyAppDB_SysData.mdf'
$dbdsysfile.Size = [double](5.0 * 1024.0)
$dbdsysfile.GrowthType = 'KB'
$dbdsysfile.Growth = [double](5.0 * 1024.0)
$dbdsysfile.IsPrimaryFile = 'True'
# Create the file for the Application tables
$dbdappfile = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($appfg, 'MyAppDB_AppData')
$appfg.Files.Add($dbdappfile)
$dbdappfile.FileName = $srv.Information.MasterDBPath + '\MyAppDB_AppData.ndf'
$dbdappfile.Size = [double](25.0 * 1024.0)
$dbdappfile.GrowthType = 'KB'
$dbdappfile.Growth = [double]($datsize * 1024.0)
$dbdappfile.MaxSize = [double]($maxsize * 1024.0)
# Create the file for the log
$dblfile = new-object ('Microsoft.SqlServer.Management.Smo.LogFile') ($db, 'MyAppDB_Log')
$db.LogFiles.Add($dblfile)
$dblfile.FileName = $srv.Information.MasterDBLogPath + '\MyAppDB_Log.ldf'
$dblfile.Size = [double]($logsize * 1024.0)
$dblfile.GrowthType = 'KB'
$dblfile.Growth = [double]($logsize * 1024.0)
# Create the database
$db.Create()
# Set the default filegroup to AppFG
$appfg = $db.FileGroups['AppFG']
$appfg.IsDefault = $true
$appfg.Alter()
$db.Alter()

OK, wow, that’s a lot of code to create a DB, don’t you think? This task can be accomplished much more easily and much faster by sticking to what you already know, and using PS to make it more dynamic. My thought process here is getting the job done quickly, and in a way that’s easy to support even if you’re not a PS or coding god. In other words, build on the skills you already have. I have many solutions like this in my environment and the PS element is minimal. Here I’m going to show you just one solution, but I can think of about three other variations. They all use PS to control the flow and the params, but they call the actual code slightly differently. The reason PS should be minimal in this example is that you can’t beat T-SQL for this task, and it’s hard to imagine a scenario where SMO has to be used to create a DB. So don’t force PS or SMO into something they’re not supposed to do. Just because there’s a method for it doesn’t mean you should use it. That said, this could be useful if you needed to create the same DB on several servers, or if you wanted to create several DBs at once with different names. You could easily modify this script (with MINIMAL coding) to accept a list of DBs and just cycle through them; I’ll sketch the multi-server flavor of that a little further down.

So here  I’m going to show you a “pure” PS version.  

$Server = "localhost"
$DBName = "MyDB"
$DataPath = "C:"
$LogPath = "C:"

$a = @"
CREATE DATABASE $DBName
ON
( NAME = ${DBName}Data,
    FILENAME = '$DataPath\${DBName}Data.mdf',
    SIZE = 10MB,
    MAXSIZE = 50MB,
    FILEGROWTH = 5MB )
LOG ON
( NAME = ${DBName}Log,
    FILENAME = '$LogPath\${DBName}Log.ldf',
    SIZE = 5MB,
    MAXSIZE = 25MB,
    FILEGROWTH = 5MB )
"@

$a | out-file c:\DB.txt
invoke-sqlcmd -query $a -ServerInstance $Server

That wasn’t too bad, right? I kept the bulk of the code in T-SQL, where most DBAs are comfortable, and just used PS to control things. This is a perfectly good way to do this kind of operation, and it has some advantages the straight SMO version doesn’t easily come by. For starters, I can save the scripts created, keep a record of when and where and how they were created, and use them as T-SQL templates for other DB creations. More importantly, if there’s an error, I can easily jump into the T-SQL that was created and see where the issue is. This code is also sharable among different processes. Perhaps I just want to use the PS to create the script and have it run by something else. Maybe I need to add it to my source control to have a record of how to re-create the DB in case of a disaster. Whatever the reason, I now have an easy record of what was created.
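For example, instead of overwriting the same DB.txt every time, you could archive each generated script with a timestamp. The folder here is made up, just to show the idea:

# Keep a dated copy of every script this process generates.
$a | out-file "C:\DBScripts\${DBName}_$(get-date -format yyyyMMddHHmmss).sql"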

And of course, the biggest reason I love this method so much is that it’s a template for anything else you want to do. If you want to create PowerShell scripts that run whatever T-SQL you like, all you have to do is replace the T-SQL itself and change a couple of variables, and the rest stays the same. And changing this script to run on multiple servers is about as easy as it gets. So scripts like this don’t need SMO, and in fact they can be more effective without it.
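And just so you can see how little it takes, here’s a minimal sketch of the multi-server variation: the same here-string as above, just fed to a list of instances. The server names are made up, and a list of DB names feeding the here-string works exactly the same way.

# Run the generated T-SQL against every instance in the list.
$Servers = "SQLProd1", "SQLProd2", "SQLProd3"
foreach ($Server in $Servers) {
    invoke-sqlcmd -query $a -ServerInstance $Server
}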

While I’m at it I’m going to jump into an example from ch.26.  Here Richard shows us how to connect to SQL Server from PS, but honestly, this is just cumbersome.  Here’s the code for your reference: 

$connString = "server=SQL1;Integrated Security=SSPI;database=adventureworks"
$cn = new-object "System.Data.SqlClient.SqlConnection" $connString
$cmd = new-object "System.Data.SqlClient.SqlCommand"
$cmd.CommandType = [System.Data.CommandType]"Text"
$cmd.CommandText = "SELECT * FROM Production.Product Where Color ='Silver'"
$cmd.Connection = $cn
$dt = new-object "System.Data.DataTable"
$cn.Open()
$rdr = $cmd.ExecuteReader()
$dt.Load($rdr)
$cn.Close()
$dt | Format-Table

I’m gonna go ahead and say in Richard’s defense that he does state that there are other ways to do this, but this is the way he knows works.  So I’ll give him props for at least saying that, but why would you make this the default way to connect to SQL from PS?  If I wanted to completely recreate this code for DBAs I would do it like this: 

$queryText = "SELECT * FROM Production.Product Where Color ='Silver'"
$a = invoke-sqlcmd -ServerInstance $ServerName -Query $queryText
$a | format-table

That’s it, 3 lines of code, and you can easily see I could have done it in 2, but I like to make the query text easy to get to. Each method presented here has its merits, but I find that for everyday usage in a SQL Server environment, this one is the easiest to support, change, and duplicate. I don’t have to go searching for .NET code to reproduce this on the fly. Of course, if you’re in a multi-platform environment and you’re constantly connecting to Oracle and other vendors, then Richard’s code may make more sense to you, because .NET can easily handle that. But that’s not the context given in the book, and it is a book for SQL guys, not network admins. So here the assumption should be that you’re connecting to SQL Server in the easiest way possible, and then the harder methods, the exceptions, get shown down the road.
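Just to show what that portability looks like, here’s a rough ODBC flavor of the same ADO.NET pattern. The DSN and the table name are made up, and I haven’t run this against a real Oracle box, so treat it as a sketch of the shape rather than a recipe:

# Same ADO.NET pattern, different provider. DSN name and table are hypothetical.
$cn = new-object "System.Data.Odbc.OdbcConnection" "DSN=MyOracleDSN"
$cmd = new-object "System.Data.Odbc.OdbcCommand" ("SELECT * FROM products", $cn)
$dt = new-object "System.Data.DataTable"
$cn.Open()
$dt.Load($cmd.ExecuteReader())
$cn.Close()
$dt | Format-Table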

Unfortunately, my honest opinion of the PS chapters in this book is that they mislead beginners into thinking that PS is harder than it is, and that you have to jump through dozens of lines of code to perform simple tasks, when that just isn’t the case. I think both authors had a wonderful opportunity to show beginners how PS could make their lives much easier by shortening the code they have to write and support, but instead they chose methods that were much longer than their traditional counterparts. So those are my thoughts on the PS chapters in the MVP Deep Dives book. Both of these authors are accomplished coders and these methods work for them. What I wanted to do here was show you that there are alternatives for those of us who aren’t quite as good at those older, more complicated methods. Either way you look at it though, it really goes to show the awesomeness of PS, because we can all code the way we like. Allen and Richard can code the way they’re used to, I can code the way I’m used to, and everybody’s happy. It’s really the perfect framework because it knows how to get out of your way and just let you work. And there are even more methods than I provided here. So go explore your favorite ways to do these things and write a blog post calling my code long-winded. If you do, I’ll probably switch to your method.

Sean McCown holds a double MCITP in SQL Server 2008 for both Administration and Development. He is also a SQL Server MVP with over 15 years’ experience in databases. Sean is a contributing editor with InfoWorld and a frequent contributor to many community sites and forums.

Sean is also founder and co-owner of the popular database website MidnightDBA.com where he records free SQL Server training videos and co-hosts the popular webshow, DBAs@Midnight. 

Sean has also created content for TrainingSpot.com and TrainSignal, and moderates the Petri IT Knowledgebase SQL forums. He speaks at various SQLSaturday events and is a board member of his local user group, the North Texas SQL Server User Group (NTSSUG).


5 Responses to “Powershell Review”


  1. August 24, 2010 at 11:38 pm

    Sean, I’m really disappointed in your review of my chapter for one reason alone. The title of the chapter is “Automating SQL Server Management using Server Management Objects”. In other words, the chapter was about SMO, not how to do things quickly in PowerShell. When I first wrote the material I included examples in VB.Net, C# and PowerShell, but this took more space in the book than was available for any one chapter, so I cut it down to just the PowerShell examples. Since before SQL Server 2005 came out I’ve been writing about SMO and my focus has consistently been on SMO. (In fact, this chapter was written before SQL Server 2008 and the Invoke-Sqlcmd cmdlet were even available.)

    Allen

    • August 25, 2010 at 1:28 am

Allen, man you know I love you almost more than anyone else in this industry. I consider you a dear friend. And perhaps you’re right and I did get hung up in the PS aspect of the chapter instead of the SMO aspect. And for that I’ll capitulate and say you’re right. What you did was perfectly cromulent as far as SMO goes and I don’t think I even hinted otherwise. Even if the .NET code were left in the book, though, I still don’t think either of those is a good example of how to automate SQL. Because even though it’s how you would do it in SMO, that still doesn’t make it a good idea. I can’t think of a single scenario where that would be useful. And the reason is because it’s so long. To write that much code to perform a couple of simple tasks is beyond me. If I needed to back up a DB from .NET I would rather just connect through OLEDB and run a T-SQL statement instead. And the same goes for creating a DB. So you’re right in saying that I shouldn’t have lumped yours in with a pure PS review, but I also think I’m right in that they were both examples of something almost nobody would ever do because of their length.

And I said it in the review and I really meant it. You’re a much better coder than I am and I have trouble reading all that complicated pure SMO code. I just can’t think like a real coder like you can. And maybe that’s my trouble, but I’m not alone. People automating SQL are DBAs, not coders, so the code needs to be much simpler for us. So I still say that even though invoke-sqlcmd wasn’t available when you wrote this, osql, sqlcmd, and OLEDB are probably all much more succinct methods for performing these easy tasks. So if I read this chapter with my coding hat on I say it’s a fine piece of work because it shows examples of how SMO works. But if I look at it with my DBA hat (the one I use for reviews meant for DBAs), then I have to say I just don’t get it. And this chapter was clearly for DBAs.

  2. August 25, 2010 at 1:57 am

    So, what you’re saying is that SMO should remain undocumented, because it’s too hard for DBAs to use? I don’t agree with that. Since early 2005 I’ve set out to point out just how to do things using SMO, and where those methods fall short (as they sometimes do.) If I wanted to use T-SQL I’d use sqlcmd and execute T-SQL scripts. It may not be the easiest or quickest way to get something done, but it’s SQL Server’s management API, and it deserves to see the light of day.

    • August 25, 2010 at 10:44 pm

No, I’m not saying it should remain undocumented, but I’m saying if it’s that hard to find practical examples then maybe it should go away. It’s like creating tables in SMO. I’ve seen the code for it, but what’s the point? I can’t see a single use case for doing that instead of T-SQL being called from anywhere you can make a DB connection.
      I guess that’s what I’m saying… give me a use case where backing up DBs in SMO is preferable to the other methods we know. Because for a method to be viable, it has to be more than just possible. It should be the preferred method in some given scenario otherwise it’s just there for the sake of being there. What if they developed an even more verbose method for backing up DBs… one that took a minimum of 5 pages of code? And it had no real use case where it was actually better than the other methods out there… would anyone learn it?

      Guys like you got good at SMO because you were already very proficient while the rest of us were just beginning and it’s what you had at the time. Even still, I’m not sure why you used it instead of the much easier and shorter t-sql calls. Why is that? Why choose to write more code?

  3. August 26, 2010 at 8:08 pm

    It boils down to this – the focus of the chapter is SMO. SMO is the Management API for SQL Server, so it was put into the DBA section of the book. The chapter is not about “how to get things done fast in PowerShell”, it’s about SMO.

    Personally, I use SMO because I’ve gotten familiar with it, and it’s not that hard. I don’t build tables in SMO for the same reason I don’t build tables in the SSMS table designer – it’s painful and stupid. There are a lot of things I’d love for them to change and to fix in SMO, but that’s not my decision. I do like the granular control of properties you have in SMO, though, and that makes it extremely useful overall. There’s nothing wrong with interchanging T-SQL with SMO and getting the best of both worlds.

There is something wrong, however, with not providing a source of information so people can make their own decisions. And your review of my chapter implies that using the SMO methods is a waste of time, so they shouldn’t even be written about.

