Tag Archives: Database

CreateSqlDBHelper – Tool to create and set up database from scripts

I’ve done it again… re-invented the wheel? Uhm, no. But I did create a tool that creates and sets up a new SQL Server database from scripts and that can be launched from the command line. This could be useful if you must automate the process of setting up a test database each time you need to run a new set of tests.

It works like this (command line options):

Usage: CreateSqlDBHelper.exe <server> <database> <data files path> <-d> <script1.sql [script2.sql …]>
Where
-d : drop database first

This allows you to specify a SQL server/instance name, a database name, the path where the database files will be located, plus any number of SQL script files that need to be run to set up the new database. Typically you’ll start by running scripts that create tables, views and sprocs, and also insert initial data rows etc. Optionally the tool can first ‘drop’ the existing database (but for now that requires a manual ‘press Y’ step). It makes use of the .Net 4.0 client framework plus the SQL Server SMO libraries – which come with SQL Server…

You can find a copy of the tool here.

SQLScripter

I’ve been wanting to create something like this for myself for a while: a simple tool to quickly create a backup copy of a SQL Server database ‘structure’ (or schema if you will). It is really a simple command line tool that takes a few parameters like server\instance name, database, output directory and a prefix name for the output files.

It creates separate files for Tables, Views and Stored Procedures (which include UDFs).

Example usage:

SQLScripter.exe <-s:server> <-d:database> [-od:outputDirectory] [-p:prefixFileName]

Grab it here (plain exe in a zip file). It makes use of the .Net 4.0 client framework.

GenDB

I’ve been involved for many years in an uncle of mine’s genealogical research – at least the ‘IT’ side of things. Over this time I developed more than one solution to house a database for storing all the relevant data gathered – first an Access database and later a stand-alone application using either the old Access database or the newer SQL Express. Lately I’ve been trying to find a way to get away from the clumsiness of having a separate piece of software installed just to get a database. It makes setting up and maintaining an application so much more difficult – since the type of person using these applications is usually very close to ‘computer-illiterate’ or just doesn’t care ‘how it works’ but rather just ‘that it works’.

So I’ve been working to create something that does not require a traditional database but is still easy to maintain and troubleshoot if needs be. I chose to use a plain and simple xml file. You might start to think that this is a bad idea for various reasons, like losing all the features built into professional database systems. But you would be wrong for the ‘wrong’ reasons 🙂 . Let me explain. For the purpose of this system the requirements are different from what you would expect when using a ‘traditional’ database. Consider this – for this little ‘system’ the following are applicable:

  • Single user, single machine use only
  • The volume of data is ‘low’. I can’t imagine someone wanting to put millions of people’s information in this little application. I have an actual working example with the information of 11000+ people and the performance is really good.
  • No need for fancy transactionality, relational integrity and other jargon users don’t understand anyway (at the database/file level). The app itself (dal/object level) takes care of it.
  • Backups are as easy as simply copying a single file to some other location.
  • Although xml is well structured and intended for ‘computer’ use, even a normal user can open the file and read it as plain text. Of course, they can also easily break it by editing it manually…
  • Fewer components to maintain. No security fixes/service packs for sub-components of which most parts are not even used at all. That is the ‘price’ you pay for using a technology that has to cater for lots of different possible ways to use it.
So the little application I created is a plain and simple C# Windows application. All it requires is the ‘client-side’ .Net 4 framework (not even the full thing) which is relatively small these days. It is still really a ‘proof-of-concept’ application but is fully functional and usable.
Summary of the features:
  • Enter/view details of persons (like names, surnames, dates and places of birth/christening/death, parents and children, marriages, history, photos and even unlimited separate notes)
  • View a family tree for a person – both predecessors and descendants – plus a summary of each person (birth/death/marriages). Up to 7 generations (up and down) are allowed but for practical reasons it is better to specify fewer.
  • There are several built-in reports (too many to name now). Since they are all html based they are easy to export and share with others.
  • Searching for persons (of course)
  • Comparing details of multiple persons (even a graphical view of living-years so you can see who lived around the same time)
  • Mark selected persons as ‘favourites’ so you can more easily find them again.
  • You can have multiple ‘databases’ (files) – simply go to settings and select another file.
  • The .gfdb file extension is associated with the application, meaning you can simply double-click such a file in Windows Explorer/Desktop and it will be opened with the application.
  • Since we mostly use the ‘de Villiers/Pama System‘ genealogical numbering system in South Africa there is some support to automatically generate these numbers based on a person marked as the progenitor.
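As an aside, in the de Villiers/Pama system the progenitor is ‘a’, his children are b1, b2, … in birth order, each following generation appends the next letter plus a birth-order number, and so on (so b2c1 is the first child of the progenitor’s second child). A rough sketch of how such numbers could be generated – using a hypothetical minimal tree type, not GenDB’s actual classes – might look like this:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical minimal tree node, purely for illustration.
public class TreeNode
{
    public string Name;
    public string GenealogicalNo;
    public List<TreeNode> Children = new List<TreeNode>();
}

public static class DvpNumbering
{
    // Assign de Villiers/Pama style numbers starting at the progenitor ("a").
    public static void Assign(TreeNode progenitor)
    {
        progenitor.GenealogicalNo = "a";
        AssignChildren(progenitor, 'b', "");
    }

    private static void AssignChildren(TreeNode parent, char generationLetter, string prefix)
    {
        for (int i = 0; i < parent.Children.Count; i++)
        {
            TreeNode child = parent.Children[i];
            // Each segment is the generation letter plus the child's birth order.
            child.GenealogicalNo = prefix + generationLetter + (i + 1);
            AssignChildren(child, (char)(generationLetter + 1), child.GenealogicalNo);
        }
    }
}
```

So the first child of the progenitor’s second child ends up as b2c1, and so on down the tree.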
It also features bilingual ‘run-time’ support, which is something you don’t find often in applications. At the moment it supports both English and Afrikaans.
Data format
I did consider GEDCOM and even GEDXML at some stage but both have ‘issues’ of their own. GEDCOM is a flat text file format that is not very user-friendly and is easy to ‘break’. It is very widely used though. GEDXML is not really a standard format at all. They started working on it and abandoned it after a while – not sure why.
‘My’ format is not really a standard on its own – I’m not claiming it to be super or fantastic. It is plain and simple and works for my purposes.
Installer

Grab a copy of the installer here and let me know what you think. The zip file contains an example gfdb file containing some (fictional) Star Wars character information to illustrate how the application works.

Update: Version 1.1 available here.

Another Update: Version 1.2 available here

Revisited – XML File as database

I’ve again relooked at the idea of using a plain XML file as a source of a ‘small’ database. The previous attempt resulted in me going down the path of using datasets (typed and non-typed).

This time I started with a set of very simple .Net/C# classes that, when serialized to XML, look very much like a small data store/structure I also happened to use for another import/export genealogical database app I created.

Effectively, through the use of ‘Generics’, I am able to create a simple ‘base’ class with which any such small in-memory database can be created, used and persisted to and from disk. At the ‘lowest’ level I have a class that does only 3 things: create the data store, load it from disk/file and save it to disk/file. It makes use of a small utility class that handles serialization to (xml) files.

public interface ISmallDataStore
{
    void CreateNewDataStore();
    void LoadDataStore(string source);
    void SaveDataStore(string destination);
}

public abstract class SDSXMLFileBase<T> : ISmallDataStore where T : class
{
    private string sourceFilePath;
    internal T DataContainer;

    #region ISDS Members
    abstract public void CreateNewDataStore();
    abstract internal void SetRelationObjects();
    abstract internal void RefreshCacheObjects();

    public void LoadDataStore(string source)
    {
        sourceFilePath = source;
        DataContainer = SerializationUtils.DeserializeXMLFile<T>(source);
        SetRelationObjects();
        RefreshCacheObjects();
    }

    public void SaveDataStore(string destination)
    {
        sourceFilePath = destination;
        SerializationUtils.SerializeXMLToFile<T>(destination, DataContainer);
    }
    #endregion
}
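The SerializationUtils class referenced above is not shown in the post; a minimal XmlSerializer-based sketch of what such a helper might look like (the method names match the calls above, everything else is an assumption):

```csharp
using System.IO;
using System.Xml.Serialization;

public static class SerializationUtils
{
    // Serialize any public type with a parameterless constructor to an xml file.
    public static void SerializeXMLToFile<T>(string filePath, T data) where T : class
    {
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        using (StreamWriter writer = new StreamWriter(filePath))
            serializer.Serialize(writer, data);
    }

    // Deserialize an xml file back into an instance of T.
    public static T DeserializeXMLFile<T>(string filePath) where T : class
    {
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        using (StreamReader reader = new StreamReader(filePath))
            return (T)serializer.Deserialize(reader);
    }
}

// Tiny sample payload type, only used to demonstrate a round-trip.
public class DemoItem
{
    [XmlAttribute("n")]
    public string Name { get; set; }
}
```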

Two abstract methods are added to facilitate functionality that will be explained later (SetRelationObjects and RefreshCacheObjects). They are optional, but I needed them for specific reasons.

The next ‘layer’ is a class that fills ‘DataContainer’ with the data structures you want to use as the data store. These data structures are what is going to be saved/serialized. The following is a small example of what it looks like:

public class SampleSDSImplementation : SDSXMLFileBase<SampleTDataContainer>
{
    public override void CreateNewDataStore()
    {
        DataContainer = new SampleTDataContainer();
    }
}

Through the use of serialization attributes you can define what the stored xml will look like. In my example the class SampleTDataContainer is effectively the data store and its fields are decorated with attributes like XmlElement, XmlAttribute etc.

The following is a small excerpt from a sample class:

[Serializable(), XmlType("data")]
public class SampleTDataContainer
{
    [XmlElement("p")]
    public List<Person> Persons = new List<Person>();
    [XmlElement("m")]
    public List<Marriage> Marriages = new List<Marriage>();
}

[Serializable(), XmlType("p")]
public class Person
{
    [XmlAttribute("id")]
    public int Id { get; set; }
    [XmlAttribute("fn")]
    public string FirstName { get; set; }
    [XmlAttribute("sn")]
    public string Surname { get; set; }
}

A basic output of the (xml) file will look something like this:

<?xml version="1.0" encoding="utf-16"?>
<data>
  <p id="1" fn="Anakin" sn="Skywalker" … />
  <p id="2" fn="Padmé" sn="Amidala" … />
  <m id="1" mno="1" hid="1" wid="2" … />
</data>

Now that is all good and well for very basic stuff, but what about a few more advanced requirements, like auto-generating the ‘id’s, having ‘references’ between the objects (the Marriage class will have two references to Person – husband and wife – and Person will have two – father and mother) and even reverse referencing (Person having a list of Marriage objects)?

Setting up those references in code is easy, but once the data has been serialized and then deserialized you get all kinds of funnies. For example, an instance of a Person class might have a reference to a Marriage, and that Marriage instance has a reference back to the same Person instance (the first time, when you set it up in code). The next time the data is loaded from file (deserialized), the Marriage instance’s reference will not point to the same instance as the Person instance (and vice versa). Ok, this is a generic explanation of the issue but hopefully you get the drift… Well, this is where one of those 2 methods I mentioned in the original base class comes in – SetRelationObjects. To ensure all referencing objects (at run-time) are actually referencing the correct objects you can do something like this:

internal override void SetRelationObjects()
{
    foreach (Marriage m in DataContainer.Marriages)
    {
        if (m.HusbandId > 0 && m.Husband == null)
            m.Husband = DataContainer.Persons.FirstOrDefault(h => h.Id == m.HusbandId);
        if (m.WifeId > 0 && m.Wife == null)
            m.Wife = DataContainer.Persons.FirstOrDefault(w => w.Id == m.WifeId);
    }
    foreach (Person p in DataContainer.Persons)
    {
        if (p.FatherId > 0 && p.Father == null)
            p.Father = DataContainer.Persons.FirstOrDefault(f => f.Id == p.FatherId);
        if (p.MotherId > 0 && p.Mother == null)
            p.Mother = DataContainer.Persons.FirstOrDefault(f => f.Id == p.MotherId);

        //Reassign union objects since deserialization created new/separated instances.
        for (int i = 0; i < p.Marriages.Count; i++)
        {
            Marriage m = p.Marriages[i];
            p.Marriages[i] = DataContainer.Marriages.FirstOrDefault(u => u.Id == m.Id);
        }
    }
}

The fields for things like (Person) Father and (Person) Mother were not shown in the class listing but you can probably guess what they should look like. These fields are ‘NOT’ serialized per se but are rather handled by adding (int) FatherId and (int) MotherId fields that ‘ARE’ serialized. This makes the xml file both easier to read and smaller to store. When deserializing, only the ‘id’ fields are restored, letting the SetRelationObjects method correct the referencing just after load. There may be other ways to do these kinds of things but I chose this one as it suits me at the moment.
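The instance-identity problem described above is easy to reproduce with a stripped-down example (simplified stand-in classes, not the real Person/Marriage). Serialize an object graph where the same person sits in the Persons list and inside a marriage, deserialize it again, and you get two separate instances with the same id:

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Serialization;

public class DemoPerson
{
    [XmlAttribute("id")]
    public int Id;
}

public class DemoMarriage
{
    // Serializing the husband as a nested element duplicates him in the xml...
    [XmlElement("husband")]
    public DemoPerson Husband;
}

public class DemoData
{
    [XmlElement("p")]
    public List<DemoPerson> Persons = new List<DemoPerson>();
    [XmlElement("m")]
    public List<DemoMarriage> Marriages = new List<DemoMarriage>();
}
```

After a save/load round-trip, Persons[0] and Marriages[0].Husband carry the same Id but are two different instances – exactly the ‘funnies’ that SetRelationObjects exists to repair.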

When designing classes for serialization it helps to know a couple of things about xml serialization attributes and hidden methods. Let’s say you have a public field in your class (like the Father/Mother ones) that you do not want exposed by serialization under certain conditions – you can add a public bool ShouldSerialize<FieldName>() method inside the class. e.g.

public bool ShouldSerializeFatherId()
{
    return FatherId > 0;
}
public bool ShouldSerializeMotherId()
{
    return MotherId > 0;
}

[XmlAttribute("fid")]
public int FatherId { get; set; }

[XmlAttribute("mid")]
public int MotherId { get; set; }

private Person father = null;
[XmlIgnore]
public Person Father
{
    get { return father; }
    set
    {
        if (value != null)
        {
            father = value;
            FatherId = father.Id;
        }
        else
        {
            father = null; //also clear the cached reference, not just the id
            FatherId = 0;
        }
    }
}

private Person mother = null;
[XmlIgnore]
public Person Mother
{
    get { return mother; }
    set
    {
        if (value != null)
        {
            mother = value;
            MotherId = mother.Id;
        }
        else
        {
            mother = null; //also clear the cached reference, not just the id
            MotherId = 0;
        }
    }
}
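A quick way to see the ShouldSerialize convention at work – a cut-down class (names hypothetical) serialized twice, once without and once with a father id:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class DemoChild
{
    [XmlAttribute("fid")]
    public int FatherId;

    // XmlSerializer finds this by naming convention (ShouldSerialize + member name)
    // and skips the member entirely when it returns false.
    public bool ShouldSerializeFatherId()
    {
        return FatherId > 0;
    }
}
```

Serializing an instance with FatherId = 0 produces no fid attribute at all; with a positive id the attribute appears – which is what keeps the xml small and readable.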

Helper methods for interacting with the data

It is possible to use the data container as is and directly manipulate the Person objects, but that means you could be duplicating a lot of code or functionality each time you use it. To help with this I added a few simple helper methods in the class that implements the base class (SampleSDSImplementation). For example:

public Person AddPerson(string firstName, string surname)
{
    int nextPersonId = 1;
    if (DataContainer.Persons.Count > 0)
        nextPersonId = (from p in DataContainer.Persons select p.Id).Max() + 1;

    Person newPerson = new Person() { FirstName = firstName, Surname = surname };
    newPerson.Id = nextPersonId;
    DataContainer.Persons.Add(newPerson);
    return newPerson;
}

public Person FindSinglePersonById(int id)
{
    return DataContainer.Persons.FirstOrDefault(p => p.Id == id);
}

As an example of how to use it look at the following:

string dataStoreLocation = System.IO.Path.Combine(System.Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "TestSDS.xml");
SampleSDSImplementation sdsSample = new SampleSDSImplementation();
sdsSample.CreateNewDataStore();

Person vader = sdsSample.AddPerson("Anakin", "Skywalker");
vader.IsMale = true;
vader.History = "The baddy of the story";
vader.NickName = "Darth Vader";
vader.PlaceOfBirth = "Tatooine";
vader.PlaceOfDeath = "Death star 2";

Person padme = sdsSample.AddPerson("Padmé", "Amidala");
padme.DateOfDeath.Year = 2005;
padme.PlaceOfDeath = "Polis Massa";
sdsSample.AddMarriage(vader, padme, 1, null, "Naboo", null, "Mustafar");

Person luke = sdsSample.AddPerson("Luke", "Skywalker");
luke.IsMale = true;
luke.ChildNo = 1;
luke.NickName = "Master Luke";
luke.PlaceOfBirth = "Polis Massa";
luke.Father = vader;
luke.Mother = padme;

sdsSample.SaveDataStore(dataStoreLocation);

Conclusion

This is a simple way to build a little database (don’t fool yourself that you can easily build huge databases with this hehe) with which you can run a simple system that requires a small database that fits into memory.

See example project here: SDSTest

 

Mail Aggregator Service update

Just a quick update. I’ve now added a ‘Sql aggregator’ as well. This means there are now 2 ways aggregated messages can be sourced – from text files or Sql server data.

The ‘sql aggregator’ can use any existing table with messages by simply adding a bit field that indicates whether or not the message (in the table) has already been sent. The aggregator calls one stored proc (or, if you want, a plain tsql statement can be used, but I don’t recommend it) to retrieve any messages that must be sent. This list must contain a message Id, ‘to’ address, subject and a body field. How you build those up – combining fields, text and other stuff – is up to the creator of the stored proc. Once a message is sent, the ‘sql aggregator’ calls a second stored proc, passing the message Id, to mark the message as sent. The sql parameter must be named ‘@Id’ and be an ‘int’.

An example sql script to modify the QuickMon sql notifier table plus two stored procedures are included in the sample project.

Example source: MailAggregator Source code

Moving the SQL server databases of a BizTalk (2006) installation

This is hopefully not something you have to do often and even more so for a production environment!

Firstly, if you don’t have any applications installed yet, or don’t mind re-installing everything, then rather go ahead and do a fresh install. Secondly, if you can recreate the sql server with exactly the same name – that should be the safest option because you don’t need to make any changes to the BizTalk front end servers. Lastly, you can try the ‘restore’ vb scripts that are located in the ‘C:\Program Files\Microsoft BizTalk Server 2006\Schema\Restore’ directory of the BizTalk server(s).

To use these scripts you have to perform the following steps (after making proper backups of the ‘whole’ environment):

1. Stop all host instances plus SSO services.

2. Edit the sample xml file and replace all the placeholder names ‘SourceServer’ and ‘DestinationServer’ plus uncomment the sections you need and save the file.

3. Run the 2 scripts ‘UpdateDatabase.vbs’ (once) and ‘UpdateRegistry.vbs’ (on each BizTalk server).

4. Start SSO service (master first)

5. Start BizTalk hosts

6. If it worked go buy yourself a drink… else start praying…

Xml file as a database – part 4

Today’s part will be a slight deviation from what I planned last time. I got side-tracked with trying to save the xml file in a compressed format and actually managed to achieve it.

DotNetZip library

Thanks to this open source library it is possible to output the xml generated by the DataSet straight to a zip file – it is not necessary to first save the xml file and then zip it. To achieve this all you need is the following:

private void WriteToZipFile(string zipFilePath, DataSet ds)
{
    MemoryStream mStream = new MemoryStream();
    ds.WriteXml(mStream, XmlWriteMode.WriteSchema);
    mStream.Seek(0, SeekOrigin.Begin);

    if (File.Exists(zipFilePath))
        File.Delete(zipFilePath);

    using (ZipFile zipFile = new ZipFile(zipFilePath))
    {
        zipFile.CompressionLevel = Ionic.Zlib.CompressionLevel.BestCompression;
        zipFile.AddEntry(ds.DataSetName + ".xml", mStream);
        zipFile.Save();
    }
}

private Stream ReadFromZipFile(string zipFilePath, string dsName)
{
    MemoryStream mStream = new MemoryStream();
    using (ZipFile zipFile = new ZipFile(zipFilePath))
    {
        ZipEntry ze = (from z in zipFile.Entries
                       where z.FileName == dsName + ".xml"
                       select z).FirstOrDefault();
        if (ze != null)
        {
            ze.Extract(mStream);
            mStream.Seek(0, SeekOrigin.Begin);
        }
    }
    return mStream;
}

Using it is as simple as this:

DatabaseTest dbt = new DatabaseTest(); // Typed DataSet

//Saving
WriteToZipFile("SomeFileName.zip", dbt);

//Loading
DatabaseTest dbt2 = new DatabaseTest();
dbt2.ReadXml(ReadFromZipFile("SomeFileName.zip", dbt.DataSetName), XmlReadMode.IgnoreSchema); //Important to 'exclude' the schema for a typed DataSet

Really simple hey? Since xml is very ‘compressible’ (like most text files) you can save quite a bit on disk space.
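To get a feel for the savings without taking the DotNetZip dependency, here is a small sketch using the framework’s built-in GZipStream on a repetitive XML-ish payload (the exact figures will of course vary with real data):

```csharp
using System.IO;
using System.IO.Compression;

public static class XmlCompressionDemo
{
    // Compress a byte buffer with GZip and return the compressed bytes.
    public static byte[] Compress(byte[] data)
    {
        using (MemoryStream output = new MemoryStream())
        {
            using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(data, 0, data.Length);
            return output.ToArray();
        }
    }
}
```

Feeding it a few hundred repeated <Person> rows typically shrinks the payload dramatically, because the element names repeat over and over.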

Find a copy of it here.


Xml file as a database – part 3

Last time I stopped with the sample code showing how to produce a DataSet in code and then save and load it from a file. This time I’ll just continue with the ‘typed’ DataSet version plus show what the output looks like. Interestingly, the output of both the untyped and typed versions looks (essentially) the same.

First I’ll show how the Typed DataSet version works. Just to refresh the old minds – to create a Typed DataSet you use Visual Studio’s new template for a DataSet xsd and then use the designer to add tables, columns and relationships. Then in code you can do the following (the DataSet is named ‘DatabaseTest’):

DatabaseTest dbt = new DatabaseTest();
DatabaseTest.SurnameRow surname = (DatabaseTest.SurnameRow)dbt.Surname.NewRow();
surname.Surname = "Henning";
surname.TS = DateTime.Now;
dbt.Surname.AddSurnameRow(surname);

surname = (DatabaseTest.SurnameRow)dbt.Surname.NewRow();
surname.Surname = "Van Der Merwe";
surname.TS = DateTime.Now;
dbt.Surname.AddSurnameRow(surname);

DatabaseTest.PersonRow pa = (DatabaseTest.PersonRow)dbt.Person.NewRow();
pa.FirstName = "Daddy Koos";
pa.SurnameId = 1;
pa.TS = DateTime.Now;
pa.YearOfBirth = 1945;
dbt.Person.AddPersonRow(pa);

DatabaseTest.PersonRow ma = (DatabaseTest.PersonRow)dbt.Person.NewRow();
ma.FirstName = "Mommy Johanna";
ma.SurnameId = 2;
ma.TS = DateTime.Now;
ma.YearOfBirth = 1950;
dbt.Person.AddPersonRow(ma);

DatabaseTest.PersonRow guy = (DatabaseTest.PersonRow)dbt.Person.NewRow();
guy.FirstName = "Little Koos";
guy.SurnameId = 1;
guy.TS = DateTime.Now;
guy.YearOfBirth = 1971;
guy.FatherID = 1;
guy.MotherID = 2;
guy.ChildNo = 1;
dbt.Person.AddPersonRow(guy);

if (saveFileDialogXml.ShowDialog() == System.Windows.Forms.DialogResult.OK)
{
    if (System.IO.File.Exists(saveFileDialogXml.FileName))
        System.IO.File.Delete(saveFileDialogXml.FileName);

    dbt.WriteXml(saveFileDialogXml.FileName, XmlWriteMode.WriteSchema);

    var people = (from p in dbt.Person
                  join s in dbt.Surname on p.SurnameId equals s.Id
                  select new
                  {
                      PersonId = p.Id,
                      FirstName = p.FirstName,
                      Surname = s.Surname
                  });
    StringBuilder sb = new StringBuilder();
    foreach (var p in people)
    {
        sb.AppendLine(string.Format("{0} {1}", p.FirstName, p.Surname));
    }
    MessageBox.Show(sb.ToString());

    System.Diagnostics.Process pro = new System.Diagnostics.Process();
    pro.StartInfo = new System.Diagnostics.ProcessStartInfo(System.IO.Path.Combine(System.Environment.GetFolderPath(Environment.SpecialFolder.System), "notepad.exe"));
    pro.StartInfo.Arguments = "\"" + saveFileDialogXml.FileName + "\"";
    pro.Start();
}

This provides the same functionality as before. The output looks something like this (sorry, I’m a bit too lazy to format/indent the output much now):

<?xml version="1.0" standalone="yes"?>
<DatabaseTest xmlns="http://tempuri.org/DatabaseTest.xsd">
<xs:schema id="DatabaseTest" targetNamespace="http://tempuri.org/DatabaseTest.xsd" xmlns:mstns="http://tempuri.org/DatabaseTest.xsd" xmlns="http://tempuri.org/DatabaseTest.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" attributeFormDefault="qualified" elementFormDefault="qualified">
<xs:element name="DatabaseTest" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
<xs:complexType>
<xs:choice minOccurs="0" maxOccurs="unbounded">
<xs:element name="Person">
<xs:complexType>
<xs:sequence>
<xs:element name="Id" msdata:AutoIncrement="true" msdata:AutoIncrementSeed="1" type="xs:int" />
<xs:element name="FirstName" type="xs:string" />
<xs:element name="SurnameId" type="xs:int" />
<xs:element name="Description" type="xs:string" minOccurs="0" />
<xs:element name="YearOfBirth" type="xs:int" minOccurs="0" />
<xs:element name="MonthOfBirth" type="xs:unsignedByte" minOccurs="0" />
<xs:element name="DayOfBirth" type="xs:unsignedByte" minOccurs="0" />
<xs:element name="TS" type="xs:dateTime" />
<xs:element name="FatherID" type="xs:int" minOccurs="0" />
<xs:element name="MotherID" type="xs:int" minOccurs="0" />
<xs:element name="PlaceOfBirthId" type="xs:int" minOccurs="0" />
<xs:element name="YearOfDeath" type="xs:int" minOccurs="0" />
<xs:element name="MonthOfDeath" type="xs:unsignedByte" minOccurs="0" />
<xs:element name="DayOfDeath" type="xs:unsignedByte" minOccurs="0" />
<xs:element name="PlaceOfDeathId" type="xs:int" minOccurs="0" />
<xs:element name="GenealogicalNo" type="xs:string" minOccurs="0" />
<xs:element name="ChildNo" type="xs:unsignedByte" default="0" minOccurs="0" />
<xs:element name="ProgenitorCode" type="xs:string" minOccurs="0" />
<xs:element name="IsProgenitor" type="xs:boolean" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="Surname">
<xs:complexType>
<xs:sequence>
<xs:element name="Id" msdata:AutoIncrement="true" msdata:AutoIncrementSeed="1" type="xs:int" />
<xs:element name="Surname" type="xs:string" />
<xs:element name="TS" type="xs:dateTime" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="Place">
<xs:complexType>
<xs:sequence>
<xs:element name="Id" msdata:AutoIncrement="true" msdata:AutoIncrementSeed="1" type="xs:int" />
<xs:element name="Place" type="xs:string" />
<xs:element name="TS" type="xs:dateTime" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="Marriage">
<xs:complexType>
<xs:sequence>
<xs:element name="Id" msdata:AutoIncrement="true" msdata:AutoIncrementSeed="1" type="xs:int" />
<xs:element name="No" type="xs:unsignedByte" default="0" minOccurs="0" />
<xs:element name="HusbandId" type="xs:int" />
<xs:element name="WifeId" type="xs:int" />
<xs:element name="YearOfMarriage" type="xs:int" minOccurs="0" />
<xs:element name="MonthOfMarriage" type="xs:unsignedByte" minOccurs="0" />
<xs:element name="DayOfMarriage" type="xs:unsignedByte" minOccurs="0" />
<xs:element name="PlaceOfMarriageId" type="xs:int" minOccurs="0" />
<xs:element name="YearOfDivorce" type="xs:int" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
<xs:unique name="Constraint1" msdata:PrimaryKey="true">
<xs:selector xpath=".//mstns:Person" />
<xs:field xpath="mstns:Id" />
</xs:unique>
<xs:unique name="Surname_Constraint1" msdata:ConstraintName="Constraint1" msdata:PrimaryKey="true">
<xs:selector xpath=".//mstns:Surname" />
<xs:field xpath="mstns:Id" />
</xs:unique>
<xs:unique name="Constraint2">
<xs:selector xpath=".//mstns:Surname" />
<xs:field xpath="mstns:Surname" />
</xs:unique>
<xs:unique name="Place_Constraint1" msdata:ConstraintName="Constraint1" msdata:PrimaryKey="true">
<xs:selector xpath=".//mstns:Place" />
<xs:field xpath="mstns:Id" />
</xs:unique>
<xs:unique name="Place_Constraint2" msdata:ConstraintName="Constraint2">
<xs:selector xpath=".//mstns:Place" />
<xs:field xpath="mstns:Place" />
</xs:unique>
<xs:unique name="Marriage_Constraint1" msdata:ConstraintName="Constraint1" msdata:PrimaryKey="true">
<xs:selector xpath=".//mstns:Marriage" />
<xs:field xpath="mstns:Id" />
</xs:unique>
<xs:keyref name="FK_Person_Marriage" refer="Constraint1">
<xs:selector xpath=".//mstns:Marriage" />
<xs:field xpath="mstns:HusbandId" />
</xs:keyref>
<xs:keyref name="FK_Person_Marriage1" refer="Constraint1">
<xs:selector xpath=".//mstns:Marriage" />
<xs:field xpath="mstns:WifeId" />
</xs:keyref>
<xs:keyref name="FK_Person_Person1" refer="Constraint1">
<xs:selector xpath=".//mstns:Person" />
<xs:field xpath="mstns:MotherID" />
</xs:keyref>
<xs:keyref name="FK_Person_Surname" refer="Surname_Constraint1">
<xs:selector xpath=".//mstns:Person" />
<xs:field xpath="mstns:SurnameId" />
</xs:keyref>
<xs:keyref name="FK_Place_Person" refer="Place_Constraint1">
<xs:selector xpath=".//mstns:Person" />
<xs:field xpath="mstns:PlaceOfBirthId" />
</xs:keyref>
<xs:keyref name="FK_Person_Person" refer="Constraint1">
<xs:selector xpath=".//mstns:Person" />
<xs:field xpath="mstns:FatherID" />
</xs:keyref>
</xs:element>
</xs:schema>

<Person>
<Id>1</Id>
<FirstName>Daddy Koos</FirstName>
<SurnameId>1</SurnameId>
<YearOfBirth>1945</YearOfBirth>
<TS>2011-06-06T09:15:58.0021555+02:00</TS>
<ChildNo>0</ChildNo>
</Person>
<Person>
<Id>2</Id>
<FirstName>Mommy Johanna</FirstName>
<SurnameId>2</SurnameId>
<YearOfBirth>1950</YearOfBirth>
<TS>2011-06-06T09:52:51.4151521+02:00</TS>
<ChildNo>0</ChildNo>
</Person>
<Person>
<Id>3</Id>
<FirstName>Little Koos</FirstName>
<SurnameId>1</SurnameId>
<YearOfBirth>1971</YearOfBirth>
<TS>2011-06-06T09:52:51.4151521+02:00</TS>
<FatherID>1</FatherID>
<MotherID>2</MotherID>
<ChildNo>1</ChildNo>
</Person>

<Surname>
<Id>1</Id>
<Surname>Henning</Surname>
<TS>2011-06-06T09:15:57.9771555+02:00</TS>
</Surname>
<Surname>
<Id>2</Id>
<Surname>Van Der Merwe</Surname>
<TS>2011-06-06T09:52:51.4121521+02:00</TS>
</Surname>

</DatabaseTest>

As you can see the xsd is actually embedded inside the xml file. This is helpful to see the structure of the data.
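The same write/read round-trip can be sketched with a plain untyped DataSet if you don’t have the generated DatabaseTest class at hand (the table and column names here simply mimic the output above):

```csharp
using System.Data;

public static class DataSetRoundTripDemo
{
    public static DataSet BuildAndReload(string path)
    {
        DataSet ds = new DataSet("DatabaseTest");
        DataTable surname = ds.Tables.Add("Surname");
        DataColumn id = surname.Columns.Add("Id", typeof(int));
        id.AutoIncrement = true;
        id.AutoIncrementSeed = 1;
        surname.Columns.Add("Surname", typeof(string));

        DataRow row = surname.NewRow();
        row["Surname"] = "Henning";
        surname.Rows.Add(row);

        // WriteSchema embeds the xsd inside the xml file, like the output above.
        ds.WriteXml(path, XmlWriteMode.WriteSchema);

        // A fresh DataSet rebuilds the tables and column types from the embedded schema.
        DataSet reloaded = new DataSet();
        reloaded.ReadXml(path);
        return reloaded;
    }
}
```

Because the schema travels with the data, the reloading side needs no prior knowledge of the structure at all.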

Next step

The next step is to create a Data Access Layer because dealing with a DataSet directly in code is so… old fashioned… and so on. Stay tuned for the next iteration.