Wednesday, May 13, 2009

MSDN-Style Documentation with Sandcastle and NAnt


Sandcastle is a code documentation utility, which you run against your compiled .NET assembly to produce “accurate, MSDN style, comprehensive documentation by reflecting over the source assemblies and optionally integrating XML Documentation Comments” [http://sandcastle.codeplex.com/]. NAnt is a .NET-based build automation utility, which allows you to set up build scripts to perform pretty much any task, from compiling assemblies, to file operations, to executing tasks [http://nant.sourceforge.net/]. Both are available for download, absolutely free. Consider the introduction complete.

While NAnt documentation and samples are readily available, Sandcastle does not have as much content devoted to it. Luckily, with the install, several examples are provided – both as MSBuild and batch files. I have used the MSBuild examples to create a single, reusable NAnt script for documenting any assembly (with some caveats which will be pointed out). This will create a CHM file, which will be our end result documentation.

Prerequisites

These examples require the following:

- Sandcastle, installed at its default location (the script assumes C:\Program Files\Sandcastle\)
- HTML Help Workshop, installed at its default location (the script assumes C:\Program Files\HTML Help Workshop\)
- NAnt (0.86 beta 1 is used throughout this article)
- A compiled .NET assembly, built with XML documentation comments enabled, with any referenced assemblies in the same directory

Overview

The script itself is fairly straightforward. First, we will make the property declarations. Then, we will have a series of tasks grouped into “targets”. Each target depends on the previous one completing before it will run.

<?xml version="1.0" encoding="utf-8" ?>

<project xmlns="http://nant.sf.net/release/0.86-beta1/nant.xsd"

         name="Document.Assembly" default="All">

    <!--

    <property name="project.name" value="MyAssembly"/>

    <property name="project.bin.dir" value="Path\"/>

    <property name="chm.destination.dir" value="Path\"/>

    -->

 

 

</project>

We’ll be setting up a NAnt script that expects three properties to be passed in: the project or assembly name (“project.name”), the directory in which the assembly is located (“project.bin.dir”), and the destination directory for the resulting CHM file (“chm.destination.dir”). The end result is that a file named projectname.chm will be placed in the destination directory.

Step 1: Property declarations

Here, we are assuming the DLL and XML comments file have the same name as the project. If necessary, you can change the script to expect additional inputs, declaring the comment file name and/or the assembly name.
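For example, one way to allow an optional override (a sketch of mine, not part of the original script) is to guard the property with an unless attribute, the same pattern the script uses for dx.presentation.style below. Callers would then only pass /D:project.xml.file=... when the comments file name differs from the project name:

    <!-- Hypothetical: only use the default comments file name if one was not passed in -->
    <property name="project.xml.file" value="${project.bin.dir}\${project.name}.xml"
              unless="${property::exists('project.xml.file')}" />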

The properties we declare in the NAnt script will make use of the three input properties, and also define environment- or process-specific details.

    <property name="project.dll.file" value="${project.bin.dir}\${project.name}.dll"/>

    <property name="project.xml.file" value="${project.bin.dir}\${project.name}.xml"/>

 

    <!-- Build environment -->

    <property name="build.root.dir" value="E:\Builds\_BuildArea\"/>

    <property name="build.docs.dir" value="${build.root.dir}Docs\${project.name}\"/>

    <property name="build.dxtmp.dir" value="${build.docs.dir}dxtmp\"/>

 

    <!-- Documentation ("dx") info -->

    <property name="dx.presentation.style" value="vs2005"

              unless="${property::exists('dx.presentation.style')}" />

    <property name="dx.reflection.base.xml.file" value="${build.dxtmp.dir}reflection_base.xml"/>

    <property name="dx.reflection.xml.file" value="${build.dxtmp.dir}reflection.xml"/>

    <property name="dx.manifest.xml.file" value="${build.dxtmp.dir}Output\manifest.xml"/>

    <property name="dx.toc.xml.file" value="${build.dxtmp.dir}Output\toc.xml"/>

 

    <!-- Tools / Utilities -->

    <property name="hhc.root.dir" value="C:\Program Files\HTML Help Workshop\"/>

    <property name="sc.root.dir" value="C:\Program Files\Sandcastle\"/>

    <property name="sc.tools.dir" value="${sc.root.dir}ProductionTools\"/>

    <property name="sc.transforms.dir" value="${sc.root.dir}ProductionTransforms\"/>

    <property name="sc.presentation.dir" value="${sc.root.dir}Presentation\${dx.presentation.style}\"/>

 

    <!-- NAnt-specific -->

    <property name="nant.onfailure" value="Failure"/>

    <property name="nant.onsuccess" value="Success"/>
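Because dx.presentation.style is only assigned when it does not already exist (the unless attribute above), you can switch presentation styles without editing the script by defining the property on the NAnt command line (see “Running the script” below), for example:

/D:dx.presentation.style=prototype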

Step 2: Preparing the environment

Once we have our properties defined, we’ll need to prepare the environment. This includes creating directories, and moving any files we need to.

    <target name="Prepare" description="Initializes working area">

        <!-- Remove directories (if needed) -->

        <delete file="${build.docs.dir}${project.name}.chm" failonerror="false"/>

        <delete dir="${build.dxtmp.dir}" failonerror="false" />

        <delete dir="${build.docs.dir}" failonerror="false" />

 

        <!-- Create directories -->

        <mkdir dir="${build.dxtmp.dir}"/>

        <mkdir dir="${build.dxtmp.dir}chm\"/>

        <mkdir dir="${build.dxtmp.dir}Intellisense\"/>

        <mkdir dir="${build.dxtmp.dir}Output\"/>

        <mkdir dir="${build.dxtmp.dir}Output\html\"/>

        <mkdir dir="${build.dxtmp.dir}Output\icons\"/>

        <mkdir dir="${build.dxtmp.dir}Output\media\"/>

        <mkdir dir="${build.dxtmp.dir}Output\scripts\"/>

        <mkdir dir="${build.dxtmp.dir}Output\styles\"/>

 

        <!-- Copy documentation content -->

        <copy todir="${build.dxtmp.dir}Output\icons\">

            <fileset basedir="${sc.presentation.dir}icons\">

                <include name="**.*"/>

            </fileset>

        </copy>

        <copy todir="${build.dxtmp.dir}Output\scripts\">

            <fileset basedir="${sc.presentation.dir}scripts\">

                <include name="**.*"/>

            </fileset>

        </copy>

        <copy todir="${build.dxtmp.dir}Output\styles\">

            <fileset basedir="${sc.presentation.dir}styles\">

                <include name="**.*"/>

            </fileset>

        </copy>

 

        <!-- Copy comments file to build area -->

        <copy file="${project.xml.file}"

              todir="${build.dxtmp.dir}" />

        <move file="${build.dxtmp.dir}${project.name}.xml"

              tofile="${build.dxtmp.dir}comments.xml" />

 

    </target>

Now we are ready to start executing the tasks which will generate our documentation. You will notice that each of the following steps uses <exec/> tasks with the working directory set. This is important! If you execute these programs manually from the command line, you must run them from the documentation working directory.

Step 3: Reflection file generation

This is where the real work begins. We must run a reflection tool against the assembly, which identifies types, classes, methods, and everything else. Once we’ve reflected the information, we will transform it according to our presentation style (in the properties, we’ve chosen “vs2005”).

    <target name="GenerateReflection" depends="Prepare" description="Generates reflection data">

        <exec program="${sc.tools.dir}MRefBuilder.exe"

              workingdir="${build.dxtmp.dir}">

            <arg value="${project.dll.file}" />

            <arg value="/out:${dx.reflection.base.xml.file}" />

        </exec>

 

        <exec program="${sc.tools.dir}XslTransform.exe"

              workingdir="${build.dxtmp.dir}"

              if="${dx.presentation.style == 'prototype'}">

            <arg value="/xsl:&quot;${sc.transforms.dir}ApplyPrototypeDocModel.xsl&quot;" />

            <arg value="/xsl:&quot;${sc.transforms.dir}AddGuidFilenames.xsl&quot;" />

            <arg value="${dx.reflection.base.xml.file}" />

            <arg value="/out:${dx.reflection.xml.file}" />

        </exec>

 

        <exec program="${sc.tools.dir}XslTransform.exe"

              workingdir="${build.dxtmp.dir}"

              if="${dx.presentation.style == 'vs2005'}">

            <arg value="/xsl:&quot;${sc.transforms.dir}ApplyVSDocModel.xsl&quot;" />

            <arg value="/xsl:&quot;${sc.transforms.dir}AddFriendlyFilenames.xsl&quot;" />

            <arg value="${dx.reflection.base.xml.file}" />

            <arg value="/out:${dx.reflection.xml.file}" />

            <arg value="/arg:IncludeAllMembersTopic=true" />

            <arg value="/arg:IncludeInheritedOverloadTopics=true" />

        </exec>

 

        <exec program="${sc.tools.dir}XslTransform.exe"

              workingdir="${build.dxtmp.dir}"

              if="${dx.presentation.style == 'hana'}">

            <arg value="/xsl:&quot;${sc.transforms.dir}ApplyVSDocModel.xsl&quot;" />

            <arg value="/xsl:&quot;${sc.transforms.dir}AddFriendlyFilenames.xsl&quot;" />

            <arg value="${dx.reflection.base.xml.file}" />

            <arg value="/out:${dx.reflection.xml.file}" />

            <arg value="/arg:IncludeAllMembersTopic=false" />

            <arg value="/arg:IncludeInheritedOverloadTopics=true" />

        </exec>

    </target>

*** Note ***

The reflection tool must be pointed at the assembly. As anyone familiar with reflection knows, the assembly’s references must be readily available to accurately reflect its contents. Because I want a reusable script, I cannot pass every project’s required references to the tool. Therefore, your assembly’s referenced DLLs must be in the same location as the assembly itself!
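If you were documenting a single, known assembly rather than keeping the script generic, my understanding is that MRefBuilder also accepts dependency assemblies via a /dep: switch, so references could live elsewhere. A hypothetical variation of the first <exec/> above (the E:\SharedLibs path is made up for illustration):

    <exec program="${sc.tools.dir}MRefBuilder.exe"
          workingdir="${build.dxtmp.dir}">
        <arg value="${project.dll.file}" />
        <arg value="/dep:E:\SharedLibs\ThirdParty.dll" />
        <arg value="/out:${dx.reflection.base.xml.file}" />
    </exec>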

Step 4: Generate the manifest

This transforms the reflection XML file into a manifest file, which will be used to generate the HTML, which will eventually be compiled into the CHM.

    <target name="GenerateManifest" depends="GenerateReflection" description="Generates manifest">

        <exec program="${sc.tools.dir}XslTransform.exe"

              workingdir="${build.dxtmp.dir}">

            <arg value="/xsl:&quot;${sc.transforms.dir}ReflectionToManifest.xsl&quot;"/>

            <arg value="${dx.reflection.xml.file}"/>

            <arg value="/out:${dx.manifest.xml.file}"/>

        </exec>

    </target>

Step 5: Generate the HTML

Since we have the manifest, we can then produce the HTML. This can be a time-consuming step, depending on the size of your assembly.

    <target name="GenerateHTML" depends="GenerateManifest" description="Generates HTML for CHM">

        <exec program="${sc.tools.dir}BuildAssembler.exe"

              workingdir="${build.dxtmp.dir}">

            <arg value="/config:&quot;${sc.presentation.dir}configuration\sandcastle.config&quot;" />

            <arg value="${dx.manifest.xml.file}" />

        </exec>

    </target>

Step 6: Generate the table of contents

Depending on the presentation style we’ve chosen, we will create a specific table of contents XML file, which the CHM requires for navigation.

    <target name="GenerateTOC" depends="GenerateHTML" description="Generates table of contents">

        <exec program="${sc.tools.dir}XslTransform.exe"

              workingdir="${build.dxtmp.dir}"

              if="${dx.presentation.style == 'prototype'}">

            <arg value="/xsl:&quot;${sc.transforms.dir}CreatePrototypeToc.xsl&quot;" />

            <arg value="${dx.reflection.xml.file}" />

            <arg value="/out:${dx.toc.xml.file}" />

        </exec>

 

        <exec program="${sc.tools.dir}XslTransform.exe"

              workingdir="${build.dxtmp.dir}"

              if="${dx.presentation.style != 'prototype'}">

            <arg value="/xsl:&quot;${sc.transforms.dir}CreateVSToc.xsl&quot;" />

            <arg value="${dx.reflection.xml.file}" />

            <arg value="/out:${dx.toc.xml.file}" />

        </exec>

    </target>

Step 7: Generate the CHM file

We have generated all necessary content for the CHM project to be created. We will now use Sandcastle tools to create the CHM project files, and the HTML Help Workshop to generate the CHM file itself.

    <target name="GenerateCHM" depends="GenerateTOC" description="Generates CHM file">

        <!-- Copy resources for CHM -->

        <copy todir="${build.dxtmp.dir}chm\icons\">

            <fileset basedir="${build.dxtmp.dir}Output\icons\">

                <include name="**.*" />

            </fileset>

        </copy>

        <copy todir="${build.dxtmp.dir}chm\scripts\">

            <fileset basedir="${build.dxtmp.dir}Output\scripts\">

                <include name="**.*" />

            </fileset>

        </copy>

        <copy todir="${build.dxtmp.dir}chm\styles\">

            <fileset basedir="${build.dxtmp.dir}Output\styles\">

                <include name="**.*" />

            </fileset>

        </copy>

 

        <!-- Create CHM -->

        <exec program="${sc.tools.dir}ChmBuilder.exe"

              workingdir="${build.dxtmp.dir}">

            <arg value="/project:${project.name}" />

            <arg value="/html:${build.dxtmp.dir}Output\html" />

            <arg value="/lcid:1033" />

            <arg value="/toc:${dx.toc.xml.file}" />

            <arg value="/out:${build.dxtmp.dir}chm\" />

        </exec>

 

        <exec program="${sc.tools.dir}DBCSFix.exe"

              workingdir="${build.dxtmp.dir}">

            <arg value="/d:${build.dxtmp.dir}chm\" />

            <arg value="/l:1033" />

        </exec>

 

        <exec program="${hhc.root.dir}hhc.exe"

              workingdir="${build.dxtmp.dir}"

              failonerror="false">

            <arg value="${build.dxtmp.dir}chm\${project.name}.hhp" />

        </exec>

    </target>

*** Notes ***

There are two items worth mentioning here. First, the HTML Help Workshop compiler (hhc.exe) returns a non-zero exit code even when it succeeds, so we indicate in the <exec/> task that we will not fail on error. This causes a potential headache with the second issue, which is odd…

Most assemblies are processed fine by the hhc.exe process, like “DotNetNuke.dll” or “DoyleITS.Samples.dll”. If your assembly name contains “.h”, the CHM will not be generated. “DotNetNuke.HttpModules.dll” will fail with numerous HHC3002 and HHC3004 errors and warnings, as image files used in the documentation are parsed for HTML. This has something to do with how the utility scans for .h* files (help files, HTML files, who knows).

Step 8: Success

Now that the CHM is created, we can copy it to our destination, and clean up the working area. This gets called by setting the NAnt property “nant.onsuccess” to the name of our target.

    <target name="Success" description="Cleans up after success">

        <copy file="${build.dxtmp.dir}chm\${project.name}.chm"

              todir="${chm.destination.dir}"/>

        <delete dir="${build.dxtmp.dir}" failonerror="false"/>

    </target>
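The property declarations also set nant.onfailure to a target named “Failure”, which is not shown above. A minimal version (my sketch, not from the original article) would simply clean up the working area without copying a CHM anywhere:

    <target name="Failure" description="Cleans up after failure">
        <delete dir="${build.dxtmp.dir}" failonerror="false"/>
    </target>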

The last thing you need is your initial target, which will either be the default target or get called externally.

    <target name="All" depends="GenerateCHM" description="Runs build" />

Running the script

The easiest way to execute the NAnt script is with a batch file, or a process to manage your builds like CruiseControl.NET [http://ccnet.thoughtworks.com]. Below is how you would execute the script at the command line:

"C:\Program Files\NAnt\nant-0.86-beta1\bin\nant.exe" /f:"E:\Visual Studio 2008 Projects\DoyleITS.Build\DoyleITS.Build.NAntScripts\Document.Assembly.build.xml" /D:project.name=DotNetNuke /D:project.bin.dir="E:\Downloads\DotNetNuke\DotNetNuke_Community_05.00.01_Source\Website\bin" /D:chm.destination.dir=E:\Builds\

The “/f” argument defines the NAnt script, while the “/D” arguments pass the expected input properties. The example uses my NAnt script “Document.Assembly.build.xml”, located at E:\Visual Studio 2008 Projects\DoyleITS.Build\DoyleITS.Build.NAntScripts\. I am documenting the DotNetNuke.dll assembly, which resides in the same directory as any dependent or referenced assemblies.
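To make this repeatable, the call can be wrapped in a small batch file (a sketch; the wrapper name and parameter order are my own, not from the original article):

@echo off
rem DocumentAssembly.bat  %1 = assembly/project name, %2 = bin directory, %3 = CHM destination
"C:\Program Files\NAnt\nant-0.86-beta1\bin\nant.exe" /f:"E:\Visual Studio 2008 Projects\DoyleITS.Build\DoyleITS.Build.NAntScripts\Document.Assembly.build.xml" /D:project.name=%1 /D:project.bin.dir="%~2" /D:chm.destination.dir="%~3"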

Closing thoughts

Overall, I am constantly impressed by the quality developer resources that others contribute to the landscape. Sandcastle will make your life simpler once you have a consistent, reusable approach, which I believe NAnt (or any other automation tool) can provide. API developers should really pay attention to what Sandcastle provides – exactly the documentation another developer needs to see!

Monday, May 11, 2009

Building, Branching, and Releasing

As I’ve developed DotNetNuke modules, I’ve hacked together a simple NAnt build process to handle assembling the DNN install files. This reduced the amount of time I spent building them manually, and drastically cut the “human error” factor. Inadvertently, I was doing a favor to myself by also creating Source zip files (in addition to my Standard and Enterprise license packages). Every build I released had a code snapshot I could later use. To understand how I benefitted, you’ll need some back-story…

My Zero-In store locator module has been very popular, but I get numerous requests for paid customizations or enhancements. One client I’ve done a lot of work with recently opted for Google Maps Premier licensing, something I had not accounted for in Zero-In. I added this to the latest version (baseline) codebase, but one of their sites was on a previous version. The challenge was implementing the change on a previous release without negatively affecting the DNN installation and module settings. There really wasn’t much of a challenge, though, since I had the source for every version since the module’s inception.

All I had to do was to install the version on a clean DNN instance, unzip the specific version’s Source zip file into the DesktopModules folder (where DNN modules are installed to), fire up the solution, and make the change. The client was happy with the DLLs, which dropped straight into the DotNetNuke bin folder. Still, I got to thinking there was probably a better way.

Branching in Source Control

I use Visual SourceSafe (VSS), since it came with Visual Studio. I don’t (yet) have a reason to go to Team Foundation Server (TFS), and although I’ve heard good things about SVN, I just haven’t bothered. Why fix what’s not broken, right? Anyway, since the previous-version change, I’ve revisited my source control and branching strategy.

My original source control strategy was extremely simple, and although it has worked, I’m making improvements which will benefit me more. Originally, I used the check-in-through-Visual-Studio method, where it creates the SolutionName.root VSS project, then places the solution and project beneath. My original need was just to have a source control repository I could back up, check-in/out, undo, etc.

My new strategy is a little more complex, but now that I’ve tailored my build scripts to it, it will potentially be a life saver. Beneath each module’s “root” project, there are three projects: Baseline, Branch, and Release. Beneath the Baseline project are the Application and Database projects, for the module codebase and database scripts, respectively. Under Release are versioned projects (01.02.00, 01.05.04, etc.) for each release, with the Application and Database projects under each. Branch will be similar to Release. My final structure looks like:
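In outline form, using Zero-In and the version numbers mentioned above as an illustration (the exact tree is my reconstruction of the structure described in this post):

$/Zero-In
    Baseline
        Application
        Database
            Queries
            Stored Procedures
            Tables
            User-Defined Functions
    Branch
    Release
        01.02.00
            Application
            Database
        01.05.04
            Application
            Database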

Obviously, I have to backfill the previous releases into source control, but I just haven’t yet. The bottom line is that I now will have all releases placed immediately into source control, and can readily branch from previous releases.

Closing Thoughts

I’ll cover the NAnt scripts I use in this process in a later article, because I think there’s some useful information to be had. This will include the file structure, NAnt and NAntContrib tasks, and all that fun stuff.

By the way, you probably noticed the Database project’s sub-projects – Queries, Stored Procedures, Tables, and User-Defined Functions. I’m using SQL Server Management Studio to develop my database procedures, and the SSMS solutions are a great and easy way to manage them. I’ll write something up on the tips and tricks I learned checking that stuff into VSS, too.

Sunday, May 3, 2009

Configuring the Enterprise Library Data Access Block

Since the advent of the Enterprise Library, the Patterns & Practices group at Microsoft has made the job of a .NET developer simpler in myriad ways. One of the most common jobs of someone building Line of Business applications is invariably data access, and the Data Access Application Block (DAAB) is absolutely one of these time-savers.

Although we could spend quite a while discussing the various features, my favorite feature has to be the ability to pass an object array as the parameters in the stored procedure or SQL statement, as shown in this example:

using System;

using System.Collections.Generic;

using System.Text;

using System.Data;

using System.Data.Common;

using Microsoft.Practices.EnterpriseLibrary.Data;

 

namespace DoyleITS.Samples.Data

{

    public class AdventureWorksDatabase

    {

        public IDataReader GetEmployeeManager(int EmployeeId)

        {

            Database db = DatabaseFactory.CreateDatabase("AdventureWorks");

            object[] parameters = { EmployeeId };

            DbCommand command = db.GetStoredProcCommand("dbo.uspGetEmployeeManagers", parameters);

            return db.ExecuteReader(command);

        }

    }

}

This block of code relies on the following information in the application configuration file:

<?xml version="1.0" encoding="utf-8" ?>

<configuration>

    <configSections />

    <connectionStrings>

        <add name="AdventureWorks"

             connectionString="Server=DOYLE002\SQL2K5;Database=AdventureWorks;Integrated Security=SSPI;"

             providerName="System.Data.SqlClient"/>

    </connectionStrings>

    <appSettings />

</configuration>

As you can see, it takes very little code to execute the stored procedure. The CreateDatabase() method of the DatabaseFactory determines the appropriate data access provider from the named connection string (“AdventureWorks”), and the resulting Database object discovers the stored procedure’s parameters and data types for you.

Using the data access class I’ve created is simple:

    AdventureWorksDatabase db = new AdventureWorksDatabase();

    IDataReader reader = db.GetEmployeeManager(1);

    while (reader.Read())

    {

        // do stuff

    }

Overall assumptions

The overall design of the DAAB seems to be that database connectivity information will always be provided in the application configuration file. This approach works great in websites, web services, and Windows services, where the process can run under a service account using integrated security. It also works well in a trusted environment using SQL authentication (knowing that credentials are stored in the configuration file). The underlying assumption, however, is that the connection information is known at build or deploy time.

Even in a non-trusted environment, or when you are deploying applications (and configuration files) to your user PCs, you can handle secure credentials. One method is to use the Configuration Application Block, also in the Enterprise Library, to encrypt configuration information. Again, this must be done at build or deploy time.

What if you don’t know your database connection (server or database name, or even credentials) until runtime?

Imagine you have an environment in which you host multiple instances of an identically-structured database, one for each customer. Then imagine you have created an application which must manage data within any or all of those databases. You certainly wouldn’t want to maintain all connections within the application configuration file. Why not? If you get a new customer, you will have to update the configuration file and redeploy the application. If you are using deployment methods such as ClickOnce, you will be pushing new versions simply because you gained another data source.

Regardless of how you store and retrieve the customer database information (that’s an architectural design outside the scope of this discussion), once you know where you need to go, how do you plug that into the DAAB?

The first approach could be the GenericDatabase class, which inherits from the DAAB’s abstract Database class. This object allows you to specify the connection string and the provider, as shown below:

using System;

using System.Collections.Generic;

using System.Configuration;

using System.Text;

using System.Data;

using System.Data.Common;

using DoyleITS.Samples.Common;

using Microsoft.Practices.EnterpriseLibrary.Data;

 

namespace DoyleITS.Samples.Data

{

    public class MultiCustomerDatabase

    {

        private string serverName;

        private string databaseName;

 

        public MultiCustomerDatabase(string ServerName, string DatabaseName)

        {

            this.serverName = ServerName;

            this.databaseName = DatabaseName;

        }

 

        public Database GetDatabase()

        {

            DbConnectionStringBuilder builder = new DbConnectionStringBuilder();

            builder.ConnectionString = ConfigurationManager.ConnectionStrings["MultiCustomerDB"].ConnectionString;

 

            if (builder.ContainsKey("Server"))

                builder["Server"] = this.serverName;

 

            if (builder.ContainsKey("Database"))

                builder["Database"] = this.databaseName;

 

            string providerName = ConfigurationManager.ConnectionStrings["MultiCustomerDB"].ProviderName;

            Database db = new GenericDatabase(builder.ConnectionString, DbProviderFactories.GetFactory(providerName));

 

            return db;

        }

 

    }

}

The concept here is that you can dynamically generate and use the GenericDatabase to make runtime decisions on database connectivity. To cover things you may have noticed, you would definitely want validation in the MultiCustomerDatabase constructor. Also, in the GetDatabase() method, you would want to implement a more robust process of identifying, updating, or creating the necessary connection string elements (e.g., “Database” versus “Initial Catalog”). 
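As a rough sketch of what those refinements might look like (hypothetical code of mine, not from the original sample), the constructor could fail fast on missing values, and a small helper could account for the “Initial Catalog” synonym:

        public MultiCustomerDatabase(string ServerName, string DatabaseName)
        {
            // Hypothetical validation: fail fast on missing connection details
            if (string.IsNullOrEmpty(ServerName))
                throw new ArgumentException("A server name is required.", "ServerName");
            if (string.IsNullOrEmpty(DatabaseName))
                throw new ArgumentException("A database name is required.", "DatabaseName");

            this.serverName = ServerName;
            this.databaseName = DatabaseName;
        }

        private void SetDatabaseName(DbConnectionStringBuilder builder)
        {
            // Use whichever key the template connection string actually contains
            if (builder.ContainsKey("Initial Catalog"))
                builder["Initial Catalog"] = this.databaseName;
            else
                builder["Database"] = this.databaseName;
        }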

The example above relies on a connection string, as shown below, but depending on your implementation, it is not completely necessary – again, another architectural design consideration.

        <add name="MultiCustomerDB"

             connectionString="Server=SPECIFY;Database=SPECIFY;Integrated Security=SSPI;"

             providerName="System.Data.SqlClient"/>

The GenericDatabase constructor accepts the connection string along with a provider factory, created from the providerName stored with your connection string. This basically informs the DAAB of the provider (e.g., SQL Client, OLEDB) you wish to use.

Any caveats?

Unfortunately, the GenericDatabase does not support parameter discovery, which means the following code will not work:

        public IDataReader GetEmployeeManager(int EmployeeId)

        {

            Database db = this.GetDatabase();

            object[] parameters = { EmployeeId };

            DbCommand command = db.GetStoredProcCommand("dbo.uspGetEmployeeManagers", parameters);

            return db.ExecuteReader(command);

        }

The above example uses my favorite DAAB feature, the parameter object array. A NotSupportedException will be thrown with the message, “Parameter discovery is not supported for connections using GenericDatabase. You must specify the parameters explicitly, or configure the connection to use a type deriving from Database that supports parameter discovery.” Yes, you would have to code every parameter and add it to the command object, before executing the stored procedure.
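For reference, here is roughly what the explicit version looks like, using the DAAB’s AddInParameter method (a sketch of mine; parameter-name prefix conventions vary by provider):

        public IDataReader GetEmployeeManager(int EmployeeId)
        {
            Database db = this.GetDatabase();
            DbCommand command = db.GetStoredProcCommand("dbo.uspGetEmployeeManagers");

            // With GenericDatabase, each parameter must be declared explicitly
            db.AddInParameter(command, "@EmployeeId", DbType.Int32, EmployeeId);

            return db.ExecuteReader(command);
        }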

We have some choices to make: we can go through all of our data access code and add the parameters to the command object manually, or we can alter our GetDatabase() logic.

Instantiating the correct Database

If I have only a few data access methods, it may be easiest to code the parameters. This would allow you to retain the database-agnostic approach. In many cases, you’ll have dozens of methods you would have to update, or worse yet, hundreds (if you were to use a base database class to handle database connectivity). The code sample below shows a modification to the GetDatabase() method to determine the appropriate Database implementation based on the provider name. My logic is that, in all reality, you will know which database technology or technologies you will support. If you decide to support more, like MySQL, you can update the switch statement.

First, I will add any using statements, to include technology-specific namespaces.

using Microsoft.Practices.EnterpriseLibrary.Data.Sql;

using Microsoft.Practices.EnterpriseLibrary.Data.Oracle;

Now, I will use the switch statement to instantiate the Database object.

        public Database GetDatabase()

        {

            DbConnectionStringBuilder builder = new DbConnectionStringBuilder();

            builder.ConnectionString = ConfigurationManager.ConnectionStrings["MultiCustomerDB"].ConnectionString;

 

            if (builder.ContainsKey("Server"))

                builder["Server"] = this.serverName;

 

            if (builder.ContainsKey("Database"))

                builder["Database"] = this.databaseName;

 

            string providerName = ConfigurationManager.ConnectionStrings["MultiCustomerDB"].ProviderName;

            Database db = null;

 

            switch (providerName)

            {

                case "System.Data.SqlClient":

                    db = new SqlDatabase(builder.ConnectionString);

                    break;

                default:

                    db = new GenericDatabase(builder.ConnectionString, DbProviderFactories.GetFactory(providerName));

                    break;

            }

 

            return db;

        }

Finally, here’s how I use it:

    string server = "";

    string database = "";

    // ... logic to determine server/database ...

    MultiCustomerDatabase db = new MultiCustomerDatabase(server, database);

    IDataReader reader = db.GetEmployeeManager(1);

    while (reader.Read())

    {

        // do stuff

    }

Closing thoughts

Sometimes, simplicity is best. I could spend time implementing a more “robust” solution that would handle any database technology, but again, I should know or assume which technologies I need to support.

In a real-world implementation, I would also configure the connection string elements according to the provider name, eliminate case-sensitivity, check for the existence of all required elements, and add any missing elements.
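As one possible direction (a sketch, assuming the SQL Client provider and a using System.Data.SqlClient directive in addition to those shown earlier), the provider-specific connection string builders already handle key synonyms and casing:

            // Hypothetical: when the provider is System.Data.SqlClient, let the
            // provider-specific builder normalize key names ("Server" vs. "Data Source",
            // "Database" vs. "Initial Catalog") and casing for us
            SqlConnectionStringBuilder sqlBuilder = new SqlConnectionStringBuilder(
                ConfigurationManager.ConnectionStrings["MultiCustomerDB"].ConnectionString);
            sqlBuilder.DataSource = this.serverName;
            sqlBuilder.InitialCatalog = this.databaseName;
            Database db = new SqlDatabase(sqlBuilder.ConnectionString);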
