Author Archive for Jeremy Lowell


Chapter 24 – What does it mean to be a good DBA?

The third part of the book SQL Server MVP Deep Dives focuses on Database Administration.  The first chapter in this part is written by Brad McGehee (Blog | Twitter) and is preceded by an introduction to the section by the editors Paul Randal (Blog | Twitter) and Kimberly Tripp (Blog | Twitter).  Let me just say that if I were asked to write a chapter in a book and was told that the editors would be Paul and Kimberly, I would… huh, not sure what I would do, but the point is that it would be an incredible honor and at the same time an incredibly brutal endeavor.  Of course, the brutality of it all would be self-inflicted, but even so, it’s still brutality, no?  At any rate, to the authors who submitted chapters in this section: kudos.  Big kudos.

What does it mean to be a good DBA?  Based on the chapter title, I was ready to begin scribbling notes to myself on how to become a better “DBA”.  This chapter surprised me, in a good way, by answering that question in the context of typical DBA tasks.  Brad defines 36 tasks that DBAs of varying titles find themselves responsible for.

Years ago, when I was a full-time employee, I often struggled with how to communicate what I did to others in my group, to my boss, and during the all-important yearly review process.  I believe the information laid out in these pages can help you articulate and define your role and responsibilities in your organization.  Heck, it’s written so well that I’m likely going to photocopy it, highlight the relevant sections, hand it to my mom and say, “Here, this is what I do.”

Over the years I’ve had the opportunity to interview many candidates for varying levels of DBA positions, and one of the more frustrating parts of the interview process is when candidates are narrow-minded about what it means to be a “DBA”.  This chapter makes it very clear that a DBA role consists of many different skills and responsibilities, many of which we may not have considered “DBA” tasks before.  This is an important consideration when you look at your role as the DBA in your company.  It’s an important consideration if you are a developer, a manager or someone who wants to become a DBA.  Only by truly understanding what the role *can* consist of can you, or anyone, define what the role will consist of in your position.

Another portion of the chapter speaks about “DBA Specialties”.  This part of the chapter is useful if you are looking to dive deeper into a specific area of SQL Server.  It also helps define what you might want your next job title to be.

All in all, another great chapter.


Chapter 23 – Mobile Data Strategies

If the purpose of this book is to continually teach me things I didn’t know, then it has succeeded yet again with this chapter by John Baird (Blog | Twitter). 

Mobile data strategies and SQL Server.  Much like my last post, I haven’t spent any time personally or professionally learning this side of SQL Server, and I am, yet again, impressed with the technology and the manner in which John laid out this chapter.  I suppose that as time goes on, and smartphones continue to support more memory and faster processors, more SQL Server applications will become part of the mainstream.

As I mentioned last week, this is the final installment of Part II, Database Development in the book.  I’d like to thank all of the authors who contributed to this portion of the book and I will re-visit some of the chapters as time goes on. 

Next week we will begin with Part III, Database Administration.  It contains 16 chapters which will see us through the month of October.  We are still seeking guest bloggers for any of the chapters.


Chapter 22 – Placing SQL Server in your pocket

A clever, catchy title for the chapter by Christopher Fairbairn (Blog | StackOverflow). 

I find myself spending the majority of my time working to scale, optimize, tune, develop and administer large SQL Server environments.  As a result, this chapter was a wake-up call for me.  It’s good to be reminded that there are entire editions of a product I spend countless hours working with every week.

With that said, there are a few things that really struck me in this chapter.  The first was the footprint.  Yeah, I knew that the CE (Compact Edition) of SQL Server was small… but 4MB?  Good grief.  I’ve deployed T-SQL scripts for products or customer releases that are larger than that.  It’s truly incredible that Microsoft can pack enough of the engine into a 4MB footprint to support merge replication, multiple users, locking (transactional consistency), cross-platform deployment and even encryption!  It’s like a lite beer with all the taste for… well, for nothing!

So, with the wake-up call to CE, I have some next steps.  One is to learn more about it and where it might fit in with my clients’ overall strategic goals, and, more importantly, to become familiar with it myself.  I haven’t yet had a project that required disconnected handheld devices to run an application and then sync back up to the network.  That said, there are clearly a ton of applications out there doing just that.

My mind can’t help but wander with this subject to Azure and how CE and Azure will co-exist.  There is probably some information out there on this already; I simply haven’t come across it.  If you have, leave a comment and help educate me and the folks following this blog.

Next week is the last chapter of Part 2, Database Development.  Part 3 will begin the broader subject of Database Administration.  If any of you have ideas or comments about the format of this site, the posts themselves or constructive criticism, please don’t hesitate to leave a comment or e-mail one or all of the three main authors of this blog.  Our e-mail addresses and the schedule for the remaining chapters of the book can be found here and here, respectively.

Happy 4th of July to those of you in the U.S.


Chapter 21 – Methods of tracking deprecated features

In most versions of SQL Server there are any number of routines, undocumented features or elements in the engine that Microsoft needs to migrate away from.  Oftentimes this is for the benefit of the product, for readability, or to better adhere to the ANSI standards.  The point is that functionality has been deprecated in the past and it will continue to be deprecated in the future.  In Chapter 21, Cristian Lefter (Blog | Twitter) takes on deprecated functionality and provides examples of how to monitor your environment for deprecated features.

With SQL Server 2008, the database engine makes it incredibly easy to identify the items that are going to be deprecated.  In the past, I can recall poring over lines of code, writing wildcard searches against the syscomments table, and otherwise spending more time than needed to ensure that the DDL or incoming T-SQL statements to an upgraded server would be successful.  More often than not, this involved modifying a few different classes of items.  Other times, it involved touching hundreds of stored procedures, modifying dynamic SQL (in the front-end application) and generally a great deal of turmoil.
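As a small illustration of the kind of monitoring Cristian covers (this is my own sketch, assuming SQL Server 2008 or later, not code from the chapter), the engine exposes a “Deprecated Features” performance counter object that tallies every use of a deprecated feature since the last service restart:

```sql
-- Deprecated feature usage since the last service restart.
-- instance_name identifies the feature; cntr_value is the usage count.
SELECT instance_name AS deprecated_feature,
       cntr_value    AS usage_count
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0
ORDER BY cntr_value DESC;
```

Because the counters reset when the service restarts, you’d want to capture this periodically to a table if you’re hunting for rarely executed code paths.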

This is a great chapter and a subject that falls under the category of “REQUIRED” if you are at all interested in taking your database career seriously.  Understanding what’s deprecated, and being able to measure your environment’s use of deprecated features and remove it, is, in my opinion, one of the fundamental checks that DBAs, DB developers, .NET developers and BI developers should take very seriously.  It will increase the longevity of your application and might make you aware of some of the NEW features of T-SQL and the database engine that you wouldn’t have bothered to look into previously.


Chapter 20 – Why every SQL developer needs a tools database

In Chapter 8, Denis Gobo (LessThanDot – blog,  SQLBlog,  Blogspot blog,  @DenisGobo – Twitter) answered the question “What makes a bulk insert a minimally logged operation?” and I dug that chapter.  In this chapter Denis tackles a subject that most DBAs haven’t yet implemented: creating a tools database.

With many of my clients, I find the same objects in many user databases.  I’ve seen multiple tables containing zip codes, states, etc. in different user databases, each of which has to be scrubbed, cleaned and kept up to date.  The methods provided by Denis in this chapter and the samples he provides are excellent and usable out of the box.

Other options that I include in my “tools” database are dynamic partitioning, re-indexing scripts, backup strategies and metadata management, among other things.  A database full of common functions (think CLR, string splitting, calendar data and other forms of “controlled data” such as zip codes, states, countries, time zones, lat/long boundaries, etc.) will enable you to re-use functionality, simplify your code and lower the management overhead of your database objects.  The list of options and great uses for a common or tools database exceeds what I have come up with over the years, and as time goes on I find myself adding new items to it.  Case in point: I’m still creating procs and views that reference DMVs, which answer all kinds of great questions for me when I’m in a new environment.
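To make the “common functions” idea concrete, here is a minimal string-splitting function of the sort that belongs in a tools database.  This is my own sketch, not one of Denis’s samples, and the name dbo.SplitString is just illustrative (on SQL Server 2016 and later, the built-in STRING_SPLIT covers the same ground):

```sql
-- A simple loop-based splitter: walks the string delimiter by delimiter
-- and returns one row per item, preserving the original order.
CREATE FUNCTION dbo.SplitString (@list nvarchar(max), @delim nchar(1))
RETURNS @items TABLE (position int IDENTITY(1,1), item nvarchar(4000))
AS
BEGIN
    DECLARE @pos int = CHARINDEX(@delim, @list);
    WHILE @pos > 0
    BEGIN
        INSERT @items (item) VALUES (LEFT(@list, @pos - 1));
        SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
        SET @pos = CHARINDEX(@delim, @list);
    END;
    INSERT @items (item) VALUES (@list);
    RETURN;
END;
```

Once it lives in the tools database, every user database can call it, e.g. SELECT item FROM Tools.dbo.SplitString(N'CO,UT,WY', N','), instead of re-implementing the same logic over and over.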

To sum it up: create your tools database by beginning with the excellent examples provided by Denis in this chapter, and start leveraging the value it can add to your productivity.


Chapter 19 – Extending your productivity in SSMS and Query Analyzer

Productivity.  In today’s economy, with its worries over jobs and overall economic stability, productivity is often the difference between those who excel and those who don’t.  This chapter is all about increasing your productivity and, as a result, the productivity of your employer or client.  All of which increases your value.

Pawel Potasinski (Blog | Twitter) has some great examples of how to create custom keyboard shortcuts, including utilities that can be poured onto your query screen with a single keystroke… genius!

Some of the other options/shortcuts in SSMS that I’ve found valuable over the years are simply changes to the default options.  For instance, I’m often asked how to keep a result set from truncating.  The setting can be found in SSMS under the Tools menu: navigate to Options –> Query Results –> Results to Text and modify “Maximum number of characters displayed in each column”.  This setting is incredibly valuable for those times when dynamic SQL is written to generate SQL statements.

Another option that I use a lot is found in the same section under “Results to Grid”.  The top two, “Include the query in the result set” and “Include column headers when copying or saving the results”, are both just awesome!  I can’t count the number of times I’ve written a very specific query for an audit or a business analyst and then saved it “in a safe place”, which of course is code for “never to be found again”.  This option saves me a great deal of time and greatly simplifies the organization required, as I can simply run the query and save off the results (with the query and column headers attached!).  Awesome!

Of course, one of the huge benefits of SSMS 2008 is IntelliSense.  I have to admit that IntelliSense drove me NUTS for the first few months; it took me a long time to adjust and learn how to use it well.  Now that I have stuck with it for a while, it’s brutal to be on a 2005 system without IntelliSense.  I find myself trying to remember how to spell simple object names like dbo.employee instead of typing d[tab].em[tab]… that simple example saves me 6 keystrokes.  That’s a savings of 50 percent!!!

Lastly, I just have to comment on this quote: “…during development of my database, which contains more than 100,000 objects at the moment…”  Dude, Pawel, 100,000 objects?  Wow.


Chapter 18 – Getting and staying connected–or not

There are moments in life and moments in careers that define who we are, who we become and who we once were.  This chapter about SQL Server connectivity actually brings to light two of those moments in my life. 

In no particular order, I will begin with career moment #1: meeting William Vaughn at DevConnections in Las Vegas and chatting with him for a while over a drink.  You know, there are some great folks with some great stories in this world… but when it comes to my career, listening to William Vaughn is akin to my three-year-old daughter having cookies and hot chocolate with Santa Claus at the North Pole.

Career moment #2:  I will never forget the conversation that I had with Microsoft around the proper and improper usage of the SA account in SQL Server.  For the record, I’m still right and William Vaughn agrees with me when he states:

WARNING: As a general rule of thumb, you should never expose the SA password nor create applications that use it. Do not permit developers to build applications using the SA account. Doing so masks any number of complex issues that will have to be addressed at some time in the future—hopefully before your data is compromised.

Microsoft still sells Great Plains (GP), aka Dynamics GP, and it STILL requires that the SA password be known in order to accomplish many of the administrative functions within the application.  Appalling.  Version after version this has continued.  I do hope that they invest the time, money and resources into providing a secure financial package.

So, back to getting and staying connected… This is a great chapter and one that I will be sure to share with developers.  There is a ton of good information here about the different ways to connect to SQL Server and the options therein.  The one that struck me the most was that in a connection pooling scenario, the connection remains dormant (server-side) for 4 to 8 minutes after the connection is “closed” via the Connection.Close method.
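You can actually watch that behavior from the server side.  The query below is my own sketch (assuming SQL Server 2005 or later, not code from the chapter); pooled connections that the client has “closed” show up as sleeping user sessions until the pool finally releases them:

```sql
-- Idle user sessions: pooled connections linger here as 'sleeping'
-- after the client calls Connection.Close().
SELECT s.session_id,
       s.login_name,
       s.program_name,
       DATEDIFF(SECOND, s.last_request_end_time, GETDATE()) AS idle_seconds
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
  AND s.status = 'sleeping'
ORDER BY idle_seconds DESC;
```

Run it a few minutes after an application has gone quiet and you’ll see the dormant window the chapter describes.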

In closing, the author states that “developers want solutions, not options”, and this chapter does indeed provide those solutions, along with excellent rationale for the solutions presented.

