Here's the link to download the sample scripts and meme-tastic slidedeck from my presentation "SQL Server Indexes in Your Ill-Begotten Databases".
Thanks to everyone who attended, presented, judged, and laughed!
Wednesday, November 20, 2013
Tuesday, November 05, 2013
Potential Backup Software Issues with Read-Only Databases in FULL Recovery
If you decide to park a database in read-only mode for a long period of time, go ahead and alter it into SIMPLE recovery mode too.
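A minimal T-SQL sketch of what I mean, assuming a placeholder database named foo (substitute your own database name):

-- "foo" is a placeholder database name for illustration only.
-- Switch to SIMPLE recovery first, then park the database read-only.
ALTER DATABASE foo SET RECOVERY SIMPLE;
ALTER DATABASE foo SET READ_ONLY WITH ROLLBACK IMMEDIATE;

Once it's in SIMPLE, remember to pull the database out of any scheduled transaction log backup jobs, because log backups are no longer possible (or necessary) for it.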
The message returned by SQL when you try to take a transaction log backup is:
"BACKUP LOG cannot modify the database because the database is read-only. The backup will continue, although subsequent backups will duplicate the work of this backup."
which is fairly self-explanatory to a human.
But to an automated system, such as a third-party enterprise backup platform, this message is not always interpreted correctly.
eVault, for example, does not detect the read-only condition or the message from SQL and instead returns this generic error message:
"A gap was detected in log files for database foo. You will need to re-run a Full backup."
That message is misleading and incorrect, and it pointed the backup administrators' troubleshooting efforts in the wrong direction.
Saturday, November 02, 2013
SQL Saturday #255 Dallas 2013 - SQL Server Permissions and Security Principals
500+ IT pros at UT-Arlington this weekend for SQL Saturday #255 Dallas!
Notes from today:
- A big shout-out to SQLSat Dallas organizer and 2013 PASSion Award Honorable Mention winner Ryan Adams, the South Central PASS Regional Manager, whose region includes the Baton Rouge User Group. Ryan has spoken at SQLSatBR in the past, and it's my pleasure to return the favor.
- I presented an introductory-level SQL security principals session at 1:30 in Track 1 Room 100. Thanks for attending! Click this link for the slidedeck and demo scripts from this presentation. Great crowds and amazing feedback today, great job everyone! Big thanks to the 40+ in attendance!
- Also, my favorite human resources professional presented on "Mastering your Resume & Interview: Tips to Get Hired" at 9:45AM in Track 6 Room 110 and "Interview Tips for Managers" at 1:30PM in the same room. Huge crowds in both, and great questions!
- It is totally Data Manipulation Language. I'm not above admitting I'm wrong or fulfilling my promise to admit it on my blog. :)
On a more serious note:
Someone at #SQLSat255 asked one of my female developer coworkers today if she was a "booth babe". Someone else assumed she was in "sales", not technical. Damnit, guys. Totally not cool. We can do much better.
Friday, November 01, 2013
PowerShell: Delete Unneeded Rows/Columns In Excel
I had a project to automate the import of a large number of Excel files via SSIS into SQL 2012. Many of the files had different headers, but they all had a column header row that preceded the data.
On some of the files, the header row started on row 2, 3, 4, or 5, and on some others, the first two or three columns were entirely blank. What a pain, right?
One saving grace I identified in this proprietary set of Excel files: each of the datasets begins with a column headed "Identifier", so that is the heading I want to end up in cell [A1]. That was key to making this work and to easily identifying where my header row began.
I automated cleaning up these spreadsheets with the following PowerShell script. The two while loops hold the customization for my particular set of problematic Excel spreadsheets; the general structure of the script is what you can adapt for your own files.

Param([string]$wbname)
Try
{
    $err1 = 0;
    #testing only
    #$wbname = "E:\foo.xls"

    $xl = New-Object -comobject Excel.Application
    $xl.DisplayAlerts = $False
    $wb = $xl.Workbooks.open($wbname)
    $sheet = $wb.Sheets.Item(1)

    #delete columns: strip leading blank columns until column A has data in its first three rows
    while( ($sheet.Cells.Item(1,1).Text -eq "") -and ($sheet.Cells.Item(2,1).Text -eq "") -and ($sheet.Cells.Item(3,1).Text -eq "") )
    {
        [void]$sheet.Cells.Item(1,1).EntireColumn.Delete()
    }

    #delete rows: strip leading rows until the "Identifier" header row lands in row 1
    while( ($sheet.Cells.Item(1,1).Value() -NotLike "Identifier") -and ($sheet.Cells.Item(1,2).Value() -NotLike "Identifier") )
    {
        [void]$sheet.Cells.Item(1,1).EntireRow.Delete()
    }

    #cleanup
    $wb.Close($true)
    $xl.Quit()
    [System.Runtime.Interopservices.Marshal]::ReleaseComObject($xl)
    Remove-Variable xl
}
Catch
{
    $err1 = 1;
}
Finally
{
    if($err1 -eq 1)
    {
        #Write-Output "99";
        #Write-Output $_.Exception.Message;
        $stream = [System.IO.StreamWriter] "e:\error.txt"
        $stream.WriteLine($Error[0].InvocationInfo.PositionMessage)
        $stream.WriteLine($wbname)
        $stream.close()
        [Environment]::Exit("99");
    }
    Else
    {
        #Write-Output "0";
        [Environment]::Exit("0");
    }
}
The four lines under the #cleanup comment close out Excel - while the last one (Remove-Variable) might not be very important, the ReleaseComObject call before it is pretty important to make sure the stuck Excel COM object actually goes away. Explained much better here: http://technet.microsoft.com/en-us/library/ff730962.aspx
In order to put this in sequence in my SSIS package, I used the Execute Process Task. Screenshots below:
Unfortunately, the client server I was working on here only had PowerShell 1.0 installed, but from what I understand, this should also apply to PowerShell 2.0 and 3.0. I'd welcome insight from anyone who has tried it on those versions.
The argument passed to the PowerShell script includes the full path to the Excel file. (I removed the paths from this actual client implementation.)
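For illustration only, here is roughly what that Execute Process Task configuration might look like; the script name and paths below are hypothetical placeholders, not the actual client locations:

Executable:   C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Arguments:    -NoProfile -Command "& 'E:\Scripts\CleanExcelHeaders.ps1' 'E:\Import\SomeWorkbook.xls'"
SuccessValue: 0

Because the script calls [Environment]::Exit with 0 on success and 99 on failure, leaving the task's SuccessValue at its default of 0 causes the task to fail whenever the spreadsheet cleanup errors out.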
I'm open to feedback on this, and I wonder if anyone else has encountered a similar issue. Obviously, the best solution would have been to modify the process that generates these maddeningly inconsistent Excel files.