
CreateSqlDBHelper – Tool to create and set up database from scripts

I’ve done it again: re-invented the wheel… uhm, no. But I did create a command-line tool that creates and sets up a new SQL Server database from scripts. This could be useful if you need to automate setting up a test database each time you run a new set of tests.

It works like this (command line options):

Usage: CreateSqlDBHelper.exe <server> <database> <data files path> <-d> <script1.sql [script2.sql …]>
Where
-d : drop database first

This allows you to specify a SQL server/instance name, a database name, the path where the database files will be located, plus any number of SQL script files that need to be run to set up the new database. Typically you’ll start with scripts that create tables, views and sprocs and insert the initial data rows etc. Optionally the tool can first ‘drop’ the existing database (but for now that requires manually pressing ‘Y’). It makes use of the .Net 4.0 client framework plus the SQL Server SMO libraries – which come with SQL Server…
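Under the hood it is little more than a handful of SMO calls. Here is a minimal sketch of the approach (not the tool’s actual source – argument validation and the ‘Y’ confirmation are omitted, and the names are illustrative):

using System;
using Microsoft.SqlServer.Management.Smo;

class CreateDbSketch
{
    static void Main(string[] args)
    {
        string serverName = args[0], dbName = args[1], dataFilesPath = args[2];
        Server server = new Server(serverName);

        // Optionally drop an existing database first (the -d option).
        if (server.Databases.Contains(dbName))
            server.KillDatabase(dbName); // drops the database, killing open connections

        // Create the database with its files in the requested location.
        Database db = new Database(server, dbName);
        FileGroup fg = new FileGroup(db, "PRIMARY");
        db.FileGroups.Add(fg);
        fg.Files.Add(new DataFile(fg, dbName + "_Data",
            System.IO.Path.Combine(dataFilesPath, dbName + ".mdf")));
        db.LogFiles.Add(new LogFile(db, dbName + "_Log",
            System.IO.Path.Combine(dataFilesPath, dbName + ".ldf")));
        db.Create();

        // Run each script in turn; SMO's ExecuteNonQuery copes with 'GO' separators.
        for (int i = 3; i < args.Length; i++)
            db.ExecuteNonQuery(System.IO.File.ReadAllText(args[i]));
    }
}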

You can find a copy of the tool here.

SQLScripter

I’ve been wanting to create something like this for myself for a while: a simple tool to quickly create a backup copy of a SQL Server database ‘structure’ (or schema if you will). It is a simple command-line tool that takes a few parameters like server\instance name, database, output directory and a prefix for the output file names.

It creates separate files for tables, views and stored procedures (which include UDFs).

Example usage:

SQLScripter.exe <-s:server> <-d:database> [-od:outputDirectory] [-p:prefixFileName]

Grab it here (plain exe in a zip file). It makes use of the .Net 4.0 client framework.
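For the curious: the scripting itself is straightforward with the SMO object model. A minimal sketch of how the tables file could be produced (not SQLScripter’s actual source; views and stored procedures work the same way via db.Views and db.StoredProcedures):

using System.Text;
using Microsoft.SqlServer.Management.Smo;

static string ScriptTables(string serverName, string dbName)
{
    Server server = new Server(serverName);
    Database db = server.Databases[dbName];
    StringBuilder sb = new StringBuilder();
    foreach (Table table in db.Tables)
    {
        if (table.IsSystemObject) continue;
        // Script() returns the CREATE statement(s) as a StringCollection.
        foreach (string line in table.Script())
        {
            sb.AppendLine(line);
            sb.AppendLine("GO");
        }
    }
    return sb.ToString(); // write this out to <prefix>Tables.sql etc.
}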

Create SQLCE database in code

This is not exactly something new or breaking any new boundaries, but it could be a useful tip for anyone who wants to make use of SQLCE databases from a small .Net application. There are some great tools and utilities to create a new SQL Compact Edition database through a UI, but what if you want to automate the process or allow a client app to create a new database ‘on the fly’ when needed?

It turns out it is not really that hard at all. All you need is a reference to the System.Data.SqlServerCe assembly – which is installed when you run the SQL Server Compact installer. Then you simply use the following code (example):

string connStr = string.Format("Data Source={0};File Mode=Shared Read;Persist Security Info=False", sdfFilePath);
using (System.Data.SqlServerCe.SqlCeEngine SQLCEDB = new System.Data.SqlServerCe.SqlCeEngine(connStr))
{
    SQLCEDB.CreateDatabase();
}

That will create a new ‘blank’ database. What if you want to also create tables inside it? That is also very simple. I created a small library to do just that – it takes an input script and runs each command one by one. The commands must be separated by a ‘GO’ on its own line.

e.g.

CREATE TABLE [TableA] (
[SomeID] int NOT NULL IDENTITY (1,1)
, [Desc] nvarchar(100) NOT NULL
);
GO
CREATE TABLE [TableB] (
[OtherID] int NOT NULL IDENTITY (1,1)
, [Desc] nvarchar(100) NOT NULL
);
GO

The class that handles it looks like this:

public delegate void CreateCmndResponseMsgDelegate(string createCMND, string message);

public class CreateSQLCEDB : BaseSQLCEDAL
{
    private List<string> createCMDs = new List<string>();

    public int ErrorCount { get; set; }
    public event CreateCmndResponseMsgDelegate CreateCmndResponseMsg;

    private void RaiseCreateCmndResponseMsg(string createCMND, string message)
    {
        if (CreateCmndResponseMsg != null)
        {
            CreateCmndResponseMsg(createCMND, message);
        }
    }

    // Split the script on 'GO' separators (each on its own line) into individual commands.
    public void CreateCommandsFromScript(string script)
    {
        string[] createCmds = script.Split(new string[] { "\r\nGO\r\n" }, StringSplitOptions.RemoveEmptyEntries);
        createCMDs.AddRange(createCmds);
    }

    // Run each command in turn, reporting any failures through the event.
    public bool RunCommands()
    {
        ErrorCount = 0;
        base.OpenConnection();
        foreach (string cmndStr in createCMDs)
        {
            try
            {
                base.ExecuteNonQuery(cmndStr);
            }
            catch (Exception ex)
            {
                ErrorCount++;
                RaiseCreateCmndResponseMsg(cmndStr, ex.Message);
            }
        }
        base.CloseConnection();
        return ErrorCount == 0;
    }
}

Note that the base class ‘BaseSQLCEDAL’ is simply a wrapper class to handle all interactions with SqlCeConnection, SqlCeCommand etc.
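Using it could look something like this (a rough sketch – the constructor arguments depend on BaseSQLCEDAL, which isn’t shown here, and the script file name is just an example):

CreateSQLCEDB createDb = new CreateSQLCEDB(/* connection details via BaseSQLCEDAL */);
createDb.CreateCmndResponseMsg += (cmnd, msg) =>
    Console.WriteLine("Command failed: {0}\r\n{1}", cmnd, msg);
createDb.CreateCommandsFromScript(System.IO.File.ReadAllText("CreateTables.sql"));
if (!createDb.RunCommands())
    Console.WriteLine("{0} command(s) failed", createDb.ErrorCount);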

See CreateSQLCEDB for full example code.

IP address to table SQL function

I had a requirement to do some work with a table that contains IP addresses – things like grouping per subnet etc. Doing this with straight T-SQL is tricky (if possible at all), so I created a SQL Server function that takes the IP address field and breaks it down into a set (table) of 4 values. It’s not perfect and brings a type of complexity of its own, but at least you can compare the separate parts of the IP address as distinct values. Also, it doesn’t really have any error checking built in.

CREATE FUNCTION [dbo].[ufn_IPAddressToTable]
(
    @IpAddress VARCHAR(15)
)
RETURNS @IpTable TABLE (part1 int, part2 int, part3 int, part4 int)
AS
BEGIN
    DECLARE @part1 int, @part2 int, @part3 int, @part4 int
    IF (NOT @IpAddress IS NULL)
    BEGIN
        IF (CHARINDEX('.', @IpAddress) > 0)
        BEGIN
            SET @part1 = CONVERT(int, SUBSTRING(@IpAddress, 0, CHARINDEX('.', @IpAddress)))
            SET @IpAddress = SUBSTRING(@IpAddress, CHARINDEX('.', @IpAddress) + 1, 100)
            IF (CHARINDEX('.', @IpAddress) > 0)
            BEGIN
                SET @part2 = CONVERT(int, SUBSTRING(@IpAddress, 0, CHARINDEX('.', @IpAddress)))
                SET @IpAddress = SUBSTRING(@IpAddress, CHARINDEX('.', @IpAddress) + 1, 100)
                IF (CHARINDEX('.', @IpAddress) > 0)
                BEGIN
                    SET @part3 = CONVERT(int, SUBSTRING(@IpAddress, 0, CHARINDEX('.', @IpAddress)))
                    SET @IpAddress = SUBSTRING(@IpAddress, CHARINDEX('.', @IpAddress) + 1, 100)
                    SET @part4 = CONVERT(int, @IpAddress)
                END
            END
        END
        IF (NOT (@part1 IS NULL OR @part2 IS NULL OR @part3 IS NULL OR @part4 IS NULL))
            INSERT @IpTable (part1, part2, part3, part4)
            VALUES (@part1, @part2, @part3, @part4)
    END
    RETURN
END

As an example you can use it like this:

declare @IpAddress varchar(15)
set @IpAddress = '127.0.0.1'
select * from dbo.ufn_IPAddressToTable(@IpAddress)

Or, if you have a table with IP addresses:

with ComputerIps (Area, IpPart1, IpPart2, IpPart3)
as
(
    select c.Area,
        (select top 1 part1 from dbo.[ufn_IPAddressToTable](c.IpAddress)) as IpPart1,
        (select top 1 part2 from dbo.[ufn_IPAddressToTable](c.IpAddress)) as IpPart2,
        (select top 1 part3 from dbo.[ufn_IPAddressToTable](c.IpAddress)) as IpPart3
    from Computers c
    where not (c.IpAddress is null) and LEN(c.IpAddress) > 0
)
select Area, IpPart1, IpPart2, IpPart3, COUNT(*) as [Computers]
from ComputerIps
group by Area, IpPart1, IpPart2, IpPart3
order by Area, IpPart1, IpPart2, IpPart3

Another problem is that it may be slow when the source table becomes large – the example above calls the function three times per row, once per part. At least it helps a bit. One alternative worth trying is sketched below.
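A CROSS APPLY version (an untested sketch) calls the function only once per row, though note that rows whose address fails to parse will drop out of the result instead of showing NULL parts:

select c.Area, ip.part1, ip.part2, ip.part3, COUNT(*) as [Computers]
from Computers c
cross apply dbo.[ufn_IPAddressToTable](c.IpAddress) ip
where not (c.IpAddress is null) and LEN(c.IpAddress) > 0
group by c.Area, ip.part1, ip.part2, ip.part3
order by c.Area, ip.part1, ip.part2, ip.part3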

Moving the SQL server databases of a BizTalk (2006) installation

This is hopefully not something you have to do often, and even less so for a production environment!

Firstly, if you don’t have any applications installed yet, or don’t mind re-installing everything, then rather go ahead and do a fresh install. Secondly, if you can recreate the SQL server with exactly the same name, that should be the safest option because you don’t need to make any changes to the BizTalk front-end servers. Lastly, you can try the ‘restore’ VB scripts located in the ‘C:\Program Files\Microsoft BizTalk Server 2006\Schema\Restore’ directory of the BizTalk server(s).

To use these scripts you have to perform the following steps (after making proper backups of the ‘whole’ environment):

1. Stop all host instances plus the SSO services.

2. Edit the sample xml file: replace all the placeholder names ‘SourceServer’ and ‘DestinationServer’, uncomment the sections you need, and save the file.

3. Run the 2 scripts ‘UpdateDatabase.vbs’ (once) and ‘UpdateRegistry.vbs’ (on each BizTalk server) – see the example commands after this list.

4. Start the SSO service (master first).

5. Start the BizTalk hosts.

6. If it worked, go buy yourself a drink… else start praying…
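For reference, running the scripts should look roughly like this (from memory – if I recall correctly the sample file is called SampleUpdateInfo.xml, so double-check the names in your Restore directory):

cscript UpdateDatabase.vbs SampleUpdateInfo.xml
cscript UpdateRegistry.vbs SampleUpdateInfo.xml

The first is run once against the databases; the second on each BizTalk server after copying the edited xml file to it.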

EventScavenger 4.2

I recently had the ‘pleasure’ of having my EventScavenger tool used to track the security event logs of a couple of domain controllers for a ‘full’ enterprise company. It may only be temporary, to help with a specific issue, but in the process I had to re-look at how the collector service (the heart of EventScavenger) performs under really high stress. In case you don’t fully appreciate it: with full auditing for successful and failed events these event logs fill up within an hour or so – and they have been set to something like 100MB+! Events literally flood into the logs at (sometimes) 10000-plus events in a single minute.

I had never had any performance problems with the collectors until now. I’ve had a single collector running on a plain old pc gathering around 50 logs or more, running 24×7 for 4-5 years. In all those cases each individual event log never had more than 20MB of data in it and was almost never so full that data got overwritten within an hour or so. The problem with the domain controller event logs is the sheer volume of events written ‘all the time’, plus the collector is running on a separate pc with the SQL database on a remote system shared with dozens of other systems. The end result is that events (1) cannot be read from the logs quickly enough and (2) cannot be written to the database quickly enough. The first problem is not something I can do anything about now – other than having a collector installed on each of the domain controllers, which is not feasible as it won’t be allowed and they are Windows Core machines anyway.

To address the second issue I did some research into how to submit batches of records to the database, since doing it one by one just isn’t fast enough. I know about table-valued parameters in newer versions of SQL Server, but unfortunately the server that was provided is only SQL 2005, which does not support them. Fortunately it does support the xml data type, even as a parameter. I found an article ‘Sending Multiple Rows to Database for Modification‘ on CodeProject that deals specifically with SQL 2005 and describes 2 ways to accomplish this: using a delimited string or the xml data type. The delimited option won’t work for me as it is limited to about 4000 characters and the amount of data I deal with is orders of magnitude more than that. The xml data type allows for up to 2GB in a single shot!

I did quite a few performance tests and there is a real increase in the number of records that can be processed using batches – despite the possible overhead of first converting the rows to xml and then SQL Server converting it back to a table (in memory). Interestingly, increasing the batch size beyond a certain point (1000 in my tests) does not increase the overall throughput; it stays more or less linear. The problem with bigger batches is that you increase the chance that a single failure caused by one row makes the whole batch fail. So the best is a batch size that is big enough to warrant the benefit but not so big that a failure becomes too costly.
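In the collector the batching itself then becomes trivial. Conceptually it looks something like this (an illustrative sketch only – ‘WriteBatchAsXml’ and ‘newEntries’ are made-up names, not the actual collector code):

const int BatchSize = 1000; // the sweet spot from my tests
List<EventLogEntry> batch = new List<EventLogEntry>(BatchSize);
foreach (EventLogEntry entry in newEntries)
{
    batch.Add(entry);
    if (batch.Count == BatchSize)
    {
        WriteBatchAsXml(batch); // builds the xml and calls the insert sproc
        batch.Clear();
    }
}
if (batch.Count > 0)
    WriteBatchAsXml(batch); // the remainder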

Example code

A basic example of the SQL xml handling:

Declare @xml XML
SET @xml = N'
<rs>
  <r cn="machineName" l="logName" etc />
</rs>'
SELECT
  T.Row.value('@cn', 'VARCHAR(255)') MachineName,
  T.Row.value('@l', 'VARCHAR(255)') LogName
FROM @xml.nodes('/rs/r') AS T(Row)
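The stored procedure on the database side then just wraps an INSERT around that SELECT. A sketch of what it might look like (the table name and column list here are simplified, not the actual EventScavenger schema):

CREATE PROCEDURE InsertEventLogEntries
  @xml XML
AS
BEGIN
  INSERT INTO EventLogEntries (MachineName, LogName) -- simplified column list
  SELECT
    T.Row.value('@cn', 'VARCHAR(255)'),
    T.Row.value('@l', 'VARCHAR(255)')
  FROM @xml.nodes('/rs/r') AS T(Row)
END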

To generate the xml on the client side (simplified code):

System.IO.MemoryStream outstream = new System.IO.MemoryStream();
using (System.Xml.XmlTextWriter wrt = new System.Xml.XmlTextWriter(outstream, Encoding.UTF8))
{
    wrt.Formatting = System.Xml.Formatting.Indented;
    wrt.Indentation = 1;
    wrt.IndentChar = ' ';
    wrt.WriteStartDocument();
    wrt.WriteStartElement("rs");

    foreach (EventLogEntry ele in eventEntries)
    {
        wrt.WriteStartElement("r");
        wrt.WriteAttributeString("cn", machine);
        wrt.WriteAttributeString("l", logName);
        // ... remaining attributes elided ...
        wrt.WriteEndElement(); // r
    }

    wrt.WriteEndElement(); // rs
    wrt.WriteEndDocument();
    wrt.Flush();

    outstream.Position = 0;
    InsertEventLogEntries(outstream);
}

public void InsertEventLogEntries(System.IO.Stream xml)
{
    string sql = "InsertEventLogEntries";
    System.Data.SqlTypes.SqlXml sxl = new System.Data.SqlTypes.SqlXml(xml);
    SqlParameter[] parms = new SqlParameter[] {
        new SqlParameter("@xml", SqlDbType.Xml) { Value = sxl }
    };
    using (SqlCommand cmnd = new SqlCommand(sql, insertConn))
    {
        cmnd.CommandType = CommandType.StoredProcedure;
        cmnd.CommandTimeout = CommandTimeout;
        cmnd.Parameters.AddRange(parms);
        cmnd.ExecuteNonQuery();
    }
}

So with this change it is possible to handle much larger sets of inserts into the database. Now if only there was a way to make reading an event log faster…