Archive Page 2

27 Aug 10

Something You can Use

I don't want this to sound like it's all about me; it isn't. I hope that when you read this you see that I'm trying to make it about you, something you can use now or something you may need to use in the future. I think about it this way: when I go to a conference there are so many excellent sessions I would like to attend, some of them really deep and some more of an overview. By default I want the sessions where I think I am going to learn the most, that is the whole point, right? But then I have to come back to earth and realize that it's not enough for a session to be something I want to learn; it also has to be something I think I can use soon. So what does this mean for you?

Every DBA is going to have to deal with a space issue at some point, and before you can, you really need to sit down and free up as much space as possible so that you know what you are working with. There are considerations around simply freeing space that you may want to keep in mind, and Joe talks about these in his chapter. The bottom line: this is a chapter you should read and understand, so that when you are working with your databases you know how to reclaim and reuse the space that you can.

27 Aug 10

Reusing space

Reusing space in a table… While reading this chapter, I kept thinking about the TRIM functionality for SSDs (solid state drives), where a file is written to the drive and, when it is deleted, the OS recognizes that and releases the space so it can be used again (that's a high-level overview; it does a little more than that, but that is the basic function). SQL Server has a similar function when it comes to releasing previously used space. Reusing space is automatic, but in certain cases you will have to "force" SQL Server to free up that used space. Joe Webb is the author of this chapter, titled "Reusing space in a table". This is another great, informative read, because when it comes to data and releasing the space it was using, I had really never given it any thought. For the cases where SQL Server does not release the space on its own, the author covers the command that can be used to release it. Read this chapter and try the examples that are provided; it is a great way to see what is going on with your databases and the space they are using.
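I won't reproduce the book's examples here, but to give a flavor of this kind of cleanup, here is a rough sketch of my own using DBCC CLEANTABLE, which reclaims the space left behind when variable-length or LOB columns are dropped. The server, database, and table names are placeholders, and the T-SQL inside the here-string works on its own in SSMS if you'd rather skip the PowerShell wrapper.

# My own sketch, not one of the chapter's examples; names are placeholders
$q = @"
-- Reclaim space left behind by dropped variable-length / LOB columns
DBCC CLEANTABLE ('MyAppDB', 'dbo.MyTable') WITH NO_INFOMSGS;
"@
invoke-sqlcmd -Query $q -ServerInstance "MyServer\MyInstance"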

Thank you for reading and staying with us. Have a great week.

27 Aug 10

Chapter 30 – Reusing space in a table

It’s rare that I read the first paragraph in a chapter of a technical book and smile with fond memories. 

This is one of those rare times. If you have the book SQL Server MVP Deep Dives and have read this chapter, written by Joe Webb (Blog | Twitter), then you know what I'm talking about. If you don't have the book and you are reading this… well, uh, go. Get the book. Here. Today.

This is a cool chapter… while I have only used the functionality it covers on a couple of occasions, I can attest to its value. I can especially attest to its value for some C.O.T.S. packages out there that update their schema with a version upgrade and leave space that is not readily re-used. I'm not going to name names because that's not the point… the point is that if you are a developer for one of the C.O.T.S. implementations out there, please keep this in mind. It's an easy, easy thing to be on the lookout for, and it will keep grumpy DBAs such as myself from calling you.

More to the point, if you use SQL Server to store data in some of the larger data types (xml, text, image, etc.) and you have data that falls outside of the IN_ROW_DATA allocation space, then this chapter, and understanding how SQL Server does and doesn't clean up after itself, is very, very important.

The only other item I think bears mention is that the command will lock your table while the process is running. That's a pretty important thing to keep in mind if you are thinking about running it against your Transactions table in production. 🙂
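For what it's worth, DBCC CLEANTABLE (the usual command for reclaiming space from dropped variable-length columns, and I'm assuming that's what's in play here) takes an optional batch size, so the work can be done in smaller chunks instead of one long table-locking operation. A rough sketch with made-up object names:

$q = @"
-- Third argument is a batch size: process 5000 rows per transaction so the
-- table isn't tied up for the duration of one giant operation
DBCC CLEANTABLE ('MyAppDB', 'dbo.Transactions', 5000) WITH NO_INFOMSGS;
"@
invoke-sqlcmd -Query $q -ServerInstance "MyServer\MyInstance"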

Lastly, for those of you who haven’t frequented Joe’s blog, you should.  I spent about 45 minutes on it while trying to write this little post… he has some great content!  http://webbtechsolutions.com/blog/

24 Aug 10

Powershell Review

I’ve been asked to write a review of the PS chapters in the deep dives book so that’s what I’m going to do.  I’m not going to give detailed analysis of each page or of each chapter individually though.  Instead I’m going to concentrate on how PS was covered in the book as a whole.   And I’m going to give some alternate code samples for the ones offered by the authors. 

For starters, I'm going to say that the first thing that went through my mind when I read these chapters is that the authors just don't get it. They clearly don't understand what PS is all about, as evidenced by their very complicated coding examples. It's clear to me that the point of both of these chapters was to introduce PS to the readers, because they both started off explaining PS from the ground up. They were quick intros, but it's still clear that the audience was the complete beginner. And to that end, I think the chapters failed in their task. Now before I go further, let me say right here that both of these authors are smarter than I am. I know Allen personally and I think I've met Richard (I really can't remember), but they've both been coding longer than I have, so they're both much better coders than I am. And I'm not saying that to be flippant, or to be falsely modest. I really mean both of these guys are smarter than me. However, it's their vast experience that hurts them when coding in something like PS, and honestly, if my first exposure to PS came from this book I probably wouldn't pursue it any further.

The whole point of PS is to be able to do things with very small, succinct commands, and all of the examples here are very long and over-complicated. The main reason to use PS is to shorten the code it takes to get something done, so it's usually the perfect replacement for T-SQL cursors and dynamic SQL. Let's look at a specific example though. In ch. 27, Allen shows a way to back up all the databases on a server using SMO. And while I use SMO all the time in PS, this isn't one of those times when SMO is the best solution for the job.

For easy reference, here’s the script provided in the book. 

#backup.ps1
#Performs a Full backup followed by a transaction log backup on all user databases
param (
    [string]$srvname='MyServer\MyInstance'
)
# Load SMO assembly, and if we're running SQL 2008 DLLs load the SMOExtended and SQLWMIManagement libraries
$v = [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
$p = $v.FullName.Split(',')
$p1 = $p[1].Split('=')
$p2 = $p1[1].Split('.')
if ($p2[0] -ne '9') {
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended') | out-null
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SQLWMIManagement') | out-null
}
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') $srvname
$bkdir = $s.Settings.BackupDirectory
$dbs = $s.Databases
foreach ($db in $dbs) {
    if ($db.IsSystemObject -eq $False -and $db.IsMirroringEnabled -eq $False) {
        $dbname = $db.Name
        $dt = get-date -format yyyyMMddHHmmss
        $bk = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
        $bk.Action = 'Database'
        $bk.BackupSetDescription = "Full backup of " + $dbname
        $bk.BackupSetName = $dbname + " Backup"
        $bk.Database = $dbname
        $bk.MediaDescription = "Disk"
        $bk.Devices.AddDevice($bkdir + "\" + $dbname + "_db_" + $dt + ".bak", 'File')
        $bk.SqlBackup($s)
        # Simple Recovery Model has a Value Property of 3
        # SQL Server 2008 doesn't recognize the enumerated value so the code is slightly different
        # Set a variable to run the transaction log backup, and if Simple, turn it off
        $trnbck = 1
        if ($p2[0] -eq '9')
        {
            if ($db.DatabaseOptions.RecoveryModel -eq 'Simple') { $trnbck = 0 }
        }
        else
        {
            if ($db.RecoveryModel.value__ -eq 3) { $trnbck = 0 }
        }
        if ($trnbck -eq 1) {
            $dt = get-date -format yyyyMMddHHmmss
            $trn = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
            $trn.Action = 'Log'
            $trn.BackupSetDescription = "Trans Log backup of " + $dbname
            $trn.BackupSetName = $dbname + " Backup"
            $trn.Database = $dbname
            $trn.MediaDescription = "Disk"
            $trn.Devices.AddDevice($bkdir + "\" + $dbname + "_tlog_" + $dt + ".trn", 'File')
            $trn.SqlBackup($s)
        }
    }
}

Now let's look at a really simple script that will back up all the databases on a server without using SMO. I haven't put in every element that Allen did, but it wouldn't take much more code to duplicate it exactly, and it would still be much shorter and much easier to read.

# Assumes your current location is a Databases node in the SQL provider (see the note below)
Dir | %{$DB = $_.Name; $FileName = "C:\$DB.bak"; invoke-sqlcmd -Query "Backup Database [$DB] to disk = '$FileName'"}

So that's really all the straight PS code it takes to back up every database on a server. It not only demonstrates why PS is so popular, it also makes it easy to see why SMO isn't the right tool for this particular job. And the biggest point here is: as a DBA, which code would you rather support, extend, and troubleshoot?
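One practical note on the one-liner above: Dir only knows about your databases because it's being run from inside the SQL Server provider. Something along these lines gets you there (the snap-in names are the SQL 2008 ones, and the server and instance are placeholders):

Add-PSSnapin SqlServerCmdletSnapin100     # gives you invoke-sqlcmd
Add-PSSnapin SqlServerProviderSnapin100   # gives you the SQLSERVER: drive
cd SQLSERVER:\SQL\MyServer\DEFAULT\Databases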

And of course you can restore all the DBs just as easily. In true PS fashion, there are more ways than this to accomplish the same task. One of the things I really like is to mix PS with T-SQL, or even mix it with straight DOS commands and T-SQL. I'll show you an example of that in a minute and you'll see what I mean.
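Since I mentioned restores, here's a hedged sketch of the same idea in reverse. It assumes the C:\<dbname>.bak names from the backup one-liner, that you're still sitting in the Databases node of the provider, and that nothing is using the databases you're restoring; it's a sketch of the pattern, not something to point at production as-is.

# Restore every database from C:\<dbname>.bak; bump the timeout so long restores don't get cut off
Dir | %{$DB = $_.Name; invoke-sqlcmd -Query "Restore Database [$DB] from disk = 'C:\$DB.bak' with replace" -QueryTimeout 65535}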

Creating a DB 

Creating a DB is another one of those tasks that I think is far too bloated in SMO.  Now, this isn’t Allen’s fault, it’s just how SMO works, but that’s why I say it’s not the right tool for this job.  Again, for your convenience I’ll provide the code from the book. 

#createdatabase.ps1
#Creates a new database using our specifications
param (
    [string]$srvname='MyServer\MyInstance',
    [string]$dbname='MyAppDB',
    [int]$datsize=25,
    [int]$maxsize=100,
    [int]$logsize=10
)
# Load SMO assembly, and if we're running SQL 2008 DLLs load the SMOExtended and SQLWMIManagement libraries
$v = [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
$p = $v.FullName.Split(',')
$p1 = $p[1].Split('=')
$p2 = $p1[1].Split('.')
if ($p2[0] -ne '9') {
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended') | out-null
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SQLWMIManagement') | out-null
}
$srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $srvname
# Instantiate the database object and add the filegroups
$db = new-object ('Microsoft.SqlServer.Management.Smo.Database') ($srv, $dbname)
$sysfg = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, 'PRIMARY')
$db.FileGroups.Add($sysfg)
$appfg = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, 'AppFG')
$db.FileGroups.Add($appfg)
# Create the file for the system tables
$dbdsysfile = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($sysfg, 'MyAppDB_SysData')
$sysfg.Files.Add($dbdsysfile)
$dbdsysfile.FileName = $srv.Information.MasterDBPath + '\MyAppDB_SysData.mdf'
$dbdsysfile.Size = [double](5.0 * 1024.0)
$dbdsysfile.GrowthType = 'KB'
$dbdsysfile.Growth = [double](5.0 * 1024.0)
$dbdsysfile.IsPrimaryFile = 'True'
# Create the file for the Application tables
$dbdappfile = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($appfg, 'MyAppDB_AppData')
$appfg.Files.Add($dbdappfile)
$dbdappfile.FileName = $srv.Information.MasterDBPath + '\MyAppDB_AppData.ndf'
$dbdappfile.Size = [double](25.0 * 1024.0)
$dbdappfile.GrowthType = 'KB'
$dbdappfile.Growth = [double]($datsize * 1024.0)
$dbdappfile.MaxSize = [double]($maxsize * 1024.0)
# Create the file for the log
$dblfile = new-object ('Microsoft.SqlServer.Management.Smo.LogFile') ($db, 'MyAppDB_Log')
$db.LogFiles.Add($dblfile)
$dblfile.FileName = $srv.Information.MasterDBLogPath + '\MyAppDB_Log.ldf'
$dblfile.Size = [double]($logsize * 1024.0)
$dblfile.GrowthType = 'KB'
$dblfile.Growth = [double]($logsize * 1024.0)
# Create the database
$db.Create()
# Set the default filegroup to AppFG
$appfg = $db.FileGroups['AppFG']
$appfg.IsDefault = $true
$appfg.Alter()
$db.Alter()

OK, wow, that's a lot of code to create a DB, don't you think? This task can be accomplished so much easier and so much faster by sticking to what you already know and using PS to make it more dynamic. My thought process behind this is getting the job done quickly and in a way that's easy to support even if you're not a PS or coding god. In other words, build on the skills you already have. I have many solutions like this in my environment and the PS element is minimal. Here I'm going to show you just one solution, but I can think of about three other variations. They all use PS to control the flow and the params, but they call the actual code slightly differently. The reason PS should be minimal in this example is that you can't beat T-SQL for this task, and it's hard to imagine a scenario where SMO has to be used to create a DB. So don't force PS or SMO into something they're not supposed to do. Just because there's a method for it doesn't mean you should use it. That said, this could be useful if you needed to create the same DB on several servers, or if you wanted to create several DBs at once with different names. You could easily modify this script (with MINIMAL coding) to accept a list of DBs and just cycle through them.

So here  I’m going to show you a “pure” PS version.  

$Server = "localhost"
$DBName = "MyDB"
$DataPath = "C:"
$LogPath = "C:"

$a = @"
CREATE DATABASE $DBName
ON
( NAME = $DBName`Data,
    FILENAME = '$DataPath\$DBName`Data.mdf',
    SIZE = 10,
    MAXSIZE = 50,
    FILEGROWTH = 5 )
LOG ON
( NAME = $DBName`Log,
    FILENAME = '$LogPath\$DBName`Log.ldf',
    SIZE = 5MB,
    MAXSIZE = 25MB,
    FILEGROWTH = 5MB )
"@

# Save the generated script for the record, then run it
$a | out-file c:\DB.txt
invoke-sqlcmd -query $a -ServerInstance $Server

That wasn't too bad, right? What I did was keep the bulk of the code in T-SQL, where most DBAs are comfortable, and just use PS to control things. This is a perfectly good way to do this kind of operation, and it has some advantages the straight SMO version can't easily match. For starters, I can save the scripts that get created, keep a record of when and where and how they were created, and use them as T-SQL templates for other DB creations. More importantly, if there's an error, I can easily jump into the T-SQL that was created and see where the issue is. This code is also sharable among different processes. Perhaps I just want to use the PS to create the script and have it run by something else. Maybe I need to add it to my source control to have a record of how to re-create the DB in case of a disaster. Whatever the reason, I now have an easy record of what was created.

And of course, the biggest reason I love this method so much is that it is itself a template for anything else you want to do. If you want to create PowerShell scripts that run whatever T-SQL you like, all you have to do is replace the T-SQL itself and change a couple of variables; the rest stays the same. And changing this script to run on multiple servers is about as easy as it gets. So scripts like this don't need SMO, and in fact they can be more effective without it.
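To make the multiple-servers point concrete, here's a minimal sketch of how I'd wrap the same here-string in a loop. The server list is obviously made up, and the CREATE DATABASE is stripped down just to keep this short; drop the full here-string from above in its place.

$Servers = "Server1", "Server2\Inst1"   # placeholder list of servers
$DBName  = "MyDB"
$a = @"
CREATE DATABASE $DBName
"@
foreach ($Server in $Servers) {
    # same pattern as above: save a copy of the script, then run it on each server
    $a | out-file "c:\DB_$($Server -replace '\\','_').txt"
    invoke-sqlcmd -query $a -ServerInstance $Server
}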

While I’m at it I’m going to jump into an example from ch.26.  Here Richard shows us how to connect to SQL Server from PS, but honestly, this is just cumbersome.  Here’s the code for your reference: 

$connString = "server=SQL1;Integrated Security=SSPI;database=adventureworks"
$cn = new-object "System.Data.SqlClient.SqlConnection" $connString
$cmd = new-object "System.Data.SqlClient.SqlCommand"
$cmd.CommandType = [System.Data.CommandType]"Text"
$cmd.CommandText = "SELECT * FROM Production.Product Where Color ='Silver'"
$cmd.Connection = $cn
$dt = new-object "System.Data.DataTable"
$cn.Open()
$rdr = $cmd.ExecuteReader()
$dt.Load($rdr)
$cn.Close()
$dt | Format-Table

I’m gonna go ahead and say in Richard’s defense that he does state that there are other ways to do this, but this is the way he knows works.  So I’ll give him props for at least saying that, but why would you make this the default way to connect to SQL from PS?  If I wanted to completely recreate this code for DBAs I would do it like this: 

$queryText = "SELECT * FROM Production.Product Where Color ='Silver'"
$a = invoke-sqlcmd -ServerInstance $ServerName -Query $queryText   # $ServerName holds your server\instance name
$a | format-table

That’s it, 3 lines of code, and you can easily see I could have done it in 2, but I like to make the query text easy to get to.  Each method presented here has its merits, but I find that for everyday usage in a SQL Server environment, this one is the easiest to support, change and duplicate.  I don’t have to go searching for .NET code to reproduce this on the fly.  Of course, if you’re in a multi-platform environment and you’re constantly connecting to Oracle and other vendors, then Richard’s code may make more sense to you because .NET can easily handle that.  But that’s not the context given in the book, and it is a book for SQL guys, not network admins.  So here the assumption should be that you’re connecting to SQL Server in the easiest way possible and then show the harder methods… the exceptions, on down the road. 

Unfortunately, my honest opinion of the PS chapters in this book is that they mislead beginners into thinking that PS is harder than it is and that you have to jump through dozens of lines of code to perform simple tasks when that just isn’t the case.  I think both authors had a wonderful opportunity to show beginners how PS could make their lives much easier by shortening the code they have to write and support but instead they chose methods that were much longer than their traditional counterparts.  So those are my thoughts on the PS chapters in the MVP Deep Dives book.  Both of these authors are accomplished coders and these methods work for them.  What I wanted to do here was show you that there are alternatives for those of us who aren’t quite as good at those older, more complicated methods.  Either way you look at it though it really goes to show the awesomeness of PS because we can all code the way we like.  Allen and Richard can code the way they’re used to and I can code the way I’m used to and everybody’s happy.  It’s really the perfect framework because it knows how to get out of your way and just let you work.  And there are even more methods than I provided here.  So go explore your favorite ways to do these things and write a blog calling my code long-winded.  If you do, I’ll probably switch to your method.  

Sean McCown holds a double MCITP in SQL Server 2008 for both Administration and Development. He is also a SQL Server MVP with over 15 years' experience in databases. Sean is a contributing editor with InfoWorld and a frequent contributor to many community sites and forums.

Sean is also founder and co-owner of the popular database website MidnightDBA.com where he records free SQL Server training videos and co-hosts the popular webshow, DBAs@Midnight. 

Sean has also created content for TrainingSpot.com and TrainSignal, and moderates the Petri IT Knowledgebase SQL forums. Sean also speaks at various SQLSaturday events and is a board member of his local user group, the North Texas SQL Server User Group (NTSSUG).

20 Aug 10

Chapter 29 – My favorite DMV’s and why

This chapter is awesome. Aaron Bertrand (Blog | Twitter) takes the time to explain and provide code examples for 15 of the DMVs introduced in SQL Server 2005 and SQL Server 2008.

I've been using DMVs for a while now, and the amount of information you can glean from them about your environment is incredible. Coupled with the prior chapter on DDL and DML auditing, Microsoft is doing an excellent job of providing more information, and in a more intuitive fashion, than I can remember at any point since SQL Server 6.5. This is a very, very welcome change, as it gives DBAs, developers, and the like visibility into the HOW, the WHAT, the WHERE, the WHY, and the WHO of what is going on inside your databases.

One of my personal favorites is sys.dm_exec_sql_text. The reason I love this one so much is that it's typically used via a CROSS APPLY against other DMVs such as sys.dm_exec_query_stats. sys.dm_exec_query_stats provides statistics about cached query plans: specifically, how many times a plan has been executed and what it costs, and you can then apply sys.dm_exec_sql_text to that plan to actually SEE the T-SQL that's being run. This is incredibly valuable when you're checking whether you have a lot of recompiling going on, or a lot of dynamic T-SQL or LINQ-to-SQL that isn't reusing query plans. Addressing some of the big hitters here (from either a cost perspective or a number-of-recompiles perspective) can significantly improve your overall database and server performance.
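For anyone who wants to try that pairing without the book in hand, here's a quick hedged sketch (my query, not Aaron's, and the server name is a placeholder); the T-SQL in the here-string can be lifted straight into SSMS if you'd rather skip the PowerShell wrapper.

$q = @"
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time,
       qs.total_logical_reads,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
"@
invoke-sqlcmd -Query $q -ServerInstance "MyServer\MyInstance" | Format-Table -AutoSize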

Aaron goes on in this chapter to provide a few full-blown examples of his favorite DMVs in use and why he likes them. He provides all of the code necessary for an improved sp_who2, getting statistics for stored procedures, finding unused stored procedures, finding inefficient and unused indexes, finding inefficient queries, and finding missing indexes.

An outstanding chapter Aaron, thank you.

20 Aug 10

Practical Auditing

Practical auditing in SQL Server 2008… Capturing audit items in your system is, I think, absolutely necessary. It is necessary in my case because I sometimes get sidetracked (either by drive-bys or hot issues) and have to remember whether I ran this or that against a database. So, if I need to, I will query the DDL log to check myself. This next chapter, by Jasper Smith, covers SQL Server 2008 and how to set up auditing within it. He steps through how to set up and capture server audits, from the Windows Security log to the file targets, and he also covers setting up audit items at the database level, not to mention how to look up the data that you just captured. It's like being "big brother" on your systems, which I really don't mind. There is nothing wrong with spot-checking once in a while to see who or what is accessing your systems.
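As a rough illustration of the "look up the data you just captured" part, here's a hedged sketch of my own (not Jasper's); it assumes a file-target server audit like the ones he sets up, and the audit file path and server name are placeholders.

$q = @"
SELECT event_time, server_principal_name, database_name, object_name, statement
FROM sys.fn_get_audit_file('C:\Audits\*.sqlaudit', DEFAULT, DEFAULT)
ORDER BY event_time DESC;
"@
invoke-sqlcmd -Query $q -ServerInstance "MyServer\MyInstance" | Format-Table -AutoSize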

On a side note, in the summary of the chapter, the author's website, www.sqldbatips.com, is provided. I went over to check it out (as with the others who have provided their website info in this book) and I found some really cool items there. Spend a few minutes and take a look at the site.

I must apologize for missing last week. I was vacationing in Las Vegas and California and was having a grand ole’ time.

Thank you for reading this week. Have a good one.

20 Aug 10

Chapter 28 – Practical Auditing in SQL Server 2008

In chapter 28, Jasper Smith (Blog) introduces us to the built-in auditing features of SQL Server 2008. This.is.awesome. For those of you who have had to "build your own" DML and DDL auditing in the past, it was not intuitive, nor was it an easy "out of the box" solution.

A few years back (o.k., 9 years), I was tasked with creating a DML audit for all select, update, insert and delete statements against tables that contained personal information. This effort required one continual trace for DML and another for DDL. Once I had trace files being created, I then wrote additional T-SQL and bcp jobs to insert the data into a set of tables so that the security team could review who was looking at what data, where they were logged in from, etc. While it was a great exercise for me from a professional development standpoint, it wasn't as easy as the point-n-click implementation that Jasper walks through in this chapter.

I would highly, highly recommend that every DBA out there set up DDL auditing in their environments. It can help with so many issues that come up. I have several clients who run a lot of Commercial Off The Shelf (COTS) packages, and when upgrades happen in these environments and the database slows down or rapidly speeds up, many DBAs are left out in the cold wondering what happened. This most often means that a call has to be put in to the vendor, and the DBA spends a significant amount of time understanding what happened and why it had the effect it did on their systems. If DDL auditing had been set up, it would have been a quick check and might not even involve having to call the vendor… score!
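For reference, the T-SQL equivalent of the point-and-click setup Jasper walks through is short. This is a minimal hedged sketch of my own; the audit name, file path, and target database are placeholders, and keep in mind that SQL Server Audit is an Enterprise Edition feature in SQL Server 2008.

$q = @"
USE master;
CREATE SERVER AUDIT DDL_Audit
    TO FILE (FILEPATH = 'C:\Audits\');
ALTER SERVER AUDIT DDL_Audit WITH (STATE = ON);
GO
USE MyAppDB;
CREATE DATABASE AUDIT SPECIFICATION DDL_Changes
    FOR SERVER AUDIT DDL_Audit
    ADD (SCHEMA_OBJECT_CHANGE_GROUP)
    WITH (STATE = ON);
"@
invoke-sqlcmd -Query $q -ServerInstance "MyServer\MyInstance"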

Another common scenario: a new release of your custom software goes out, and during the deployment phase a piece of DML code updates too much data, or not enough, and the release has issues. Having the ability to know what ran, when, by whom, etc. is truly invaluable in these scenarios.

Thanks for a great chapter Jasper!



