Most of today’s business applications deal with relational databases, despite the push from some developers to abandon one of the most mature software technologies in the industry in favor of object databases. If you are using SQL Server as your relational database, you may run into a scenario where you want to execute a parameterized SQL statement but need the value of an identity column after it is generated. A simple way to do this is to use an output parameter and execute an extra statement after the insert that captures the identity value with the SCOPE_IDENTITY function.

/// <summary>
/// Insert a row in the data store and capture the generated identity value.
/// </summary>
/// <param name="dataSourceCommand">The command object</param>
/// <param name="item">The item to save</param>
private static void InsertRow(DbCommand dataSourceCommand, ItemClass item)
{
    // The parameter names must match the ones used in the SQL text below.
    const string SQL_INSERT =
        @"INSERT INTO Table(name,lastChangeDate)
          VALUES (@name,@lastChangeDate);
          SET @sak=SCOPE_IDENTITY();";

    dataSourceCommand.Parameters.Clear();

    DbParameter nameParm = dataSourceCommand.CreateParameter();
    nameParm.ParameterName = "@name";
    nameParm.DbType = System.Data.DbType.String;
    nameParm.Value = item.Name;
    dataSourceCommand.Parameters.Add(nameParm);

    DbParameter lastChangeDateParm = dataSourceCommand.CreateParameter();
    lastChangeDateParm.ParameterName = "@lastChangeDate";
    lastChangeDateParm.DbType = System.Data.DbType.DateTime;
    lastChangeDateParm.Value = item.LastChangeDate;
    dataSourceCommand.Parameters.Add(lastChangeDateParm);

    // The output parameter receives the SCOPE_IDENTITY() value after the insert.
    DbParameter identityParm = dataSourceCommand.CreateParameter();
    identityParm.ParameterName = "@sak";
    identityParm.DbType = System.Data.DbType.Int32;
    identityParm.Direction = System.Data.ParameterDirection.Output;
    dataSourceCommand.Parameters.Add(identityParm);

    dataSourceCommand.CommandText = SQL_INSERT;
    dataSourceCommand.ExecuteNonQuery();

    // SCOPE_IDENTITY() comes back as a SQL decimal, so convert rather than cast.
    item.Identity = Convert.ToInt32(identityParm.Value);
}
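If you would rather not manage an output parameter, an alternative is to append a SELECT of SCOPE_IDENTITY() to the batch and read the result back with ExecuteScalar. Here is a minimal sketch using the same hypothetical table and ItemClass as above:

```csharp
// Sketch: same hypothetical Table(name, lastChangeDate) and ItemClass as above.
private static int InsertRowScalar(DbCommand command, ItemClass item)
{
    command.Parameters.Clear();
    command.CommandText =
        @"INSERT INTO Table(name,lastChangeDate)
          VALUES (@name,@lastChangeDate);
          SELECT SCOPE_IDENTITY();";

    DbParameter nameParm = command.CreateParameter();
    nameParm.ParameterName = "@name";
    nameParm.Value = item.Name;
    command.Parameters.Add(nameParm);

    DbParameter dateParm = command.CreateParameter();
    dateParm.ParameterName = "@lastChangeDate";
    dateParm.Value = item.LastChangeDate;
    command.Parameters.Add(dateParm);

    // ExecuteScalar returns the first column of the first row of the SELECT;
    // SCOPE_IDENTITY() is a SQL decimal, so convert explicitly.
    return Convert.ToInt32(command.ExecuteScalar());
}
```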


I watched a good bit of the PDC 2009 webcasts last night and found there are some interesting technologies coming out. I don’t think it comes as much of a surprise that Microsoft is cramming Azure and Silverlight down developers’ throats. There were also talks of IE 9 (don’t worry, they are just starting development) and Visual Studio 2010. In my opinion Visual Studio 2010 is a game changer in development if they can get the performance up and the memory requirements down. The top feature, in my opinion, is the historical debugging capability in IntelliTrace, which allows a snapshot of the call stack to be taken in a different environment and then lets a developer open that snapshot and step through the source code right on their desktop.

All that aside, there seems to be an increased focus on data this year, along with a multitude of code-name and product-name changes. I want to provide an overview of my understanding of the recent items discussed at PDC 2009 relating to data.

  • Windows Azure Tables and Blobs are a way to store data in the cloud. When that wasn’t enough, Microsoft moved to Azure SQL Data Services. This technology allowed users to store data in SQL tables and was extremely scalable. Apparently the public doesn’t care about scale, because they want relational. So Microsoft has renamed the offering to SQL Azure. The primary differences are that you can now actually use SQL Server Management Studio to connect using the Tabular Data Stream (TDS) protocol, and you can use relational queries.
  • Oslo was a metadata framework for managing data and has now been boiled down to M and Quadrant and now falls under a larger umbrella of SQL Server Modeling Services. Honestly I can’t quite see the compelling argument for this yet unless you are a DSL (domain specific language) designer.
  • Astoria was a REST-based data access service. It was renamed to ADO.NET Data Services and recently changed to WCF Data Services. The new name change also introduced OData, which sounds like the new ADO.NET (yes, yet another data access technology).
  • Dallas is a new data service from Microsoft intended to create a market place for data, the new commodity (I guess this is Microsoft’s answer to Apple’s iTunes and App Store). There are some public data sets available already for free. It seems the intention here is to enable innovation using data.
  • Pivot from Microsoft labs was one of the more compelling things I saw. It is essentially a new Internet browser that has an extension for understanding another new data format called Collection XML (cxml). The viewer uses Silverlight and DeepZoom to visualize data.
  • LINQ to SQL was a compelling technology that abstracted developers from writing T-SQL code by using an ORM (object-relational mapper). However, it seems tying this technology to SQL Server was not popular with the real world, and Microsoft has been strongly advising against using it. In the 4.0 version of the .NET Framework, Microsoft placed this technology within the ADO.NET Entity Framework (EF) and calls it “Direct Mapping”. The EF also supports mapping at a more abstract level using an Entity Data Model (EDM) and a raw schema, should that be desired.
Copyright © Scott P. Rudy 2009 All Rights Reserved

I am currently involved in a project that uses ASP.NET. The project is on its second implementation. During the first implementation the environment was controlled by the project team and they chose to turn on persistent sessions for the load balancers. This allowed the system to store session data in memory for the application.

Now the project is being rolled out for a new client where the environment is no longer controlled by the team. The client has specified that a central session server must be used, so that the load balancers can run without persistent sessions, following the best practice recommended by the load-balancer vendor, F5 Networks.

I was tasked to assess the readiness of the product to be placed in this environment. After changing the web.config file to point to the new session server I immediately started seeing problems with pages not loading correctly. The problem was I couldn’t debug the issue because the .Net Framework was doing the work and I didn’t have source code or PDB files for the libraries. I guessed the issue dealt with serialization since objects that cannot be serialized will not go across the wire to the session server. So I took the first page the application hits and placed the following code into the Page_Load event handler (wrapped by a try…catch, of course):

// Requires: using System.Runtime.Serialization;
//           using System.Runtime.Serialization.Formatters.Binary;
IFormatter formatter = new BinaryFormatter();
using (System.IO.Stream stream = new System.IO.FileStream(
    "C:\\Users\\rudyscot\\desktop\\userstate.bin",
    System.IO.FileMode.Create, System.IO.FileAccess.Write,
    System.IO.FileShare.None))
{
    // Serialize throws if anything in the session graph is not serializable.
    formatter.Serialize(stream, Session);
}

Any issue with the serialization will throw an exception, so I was able to see what the problems were by looking at the Message property of the caught Exception. All of the issues turned out to be what I was expecting: classes not marked with the Serializable attribute. Unfortunately, one of the offending objects was a .NET Framework object, so code changes were needed to move the required properties of that control into an object that could be serialized.
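For the classes we owned, the fix was simply marking them with the Serializable attribute (and making sure every field in the object graph is serializable as well). As a hypothetical example:

```csharp
// Hypothetical session object: the attribute (plus serializable fields)
// is all BinaryFormatter needs to send it to the session server.
[Serializable]
public class UserState
{
    public string UserName { get; set; }
    public DateTime LastVisit { get; set; }

    // Non-serializable members must be excluded from serialization;
    // [NonSerialized] only applies to fields, not properties.
    [NonSerialized]
    private System.IO.Stream _cachedStream;
}
```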


I was writing some unit tests the other day and wanted a way to compare an entire object for a test assertion. I also wanted to be able to hand write an object and then use it to compare a result in a test. I immediately thought of XML serialization in .Net and started writing the code.

Unfortunately, it was then that I realized that the XML Serializer has a few limitations. First of all, you must have a default constructor declared for all of your classes. Now I normally use this convention (especially with the C# 3.0 initialization features), but there were several classes I didn’t write in the project I am working on.

The second limitation had to do with properties in a class that were declared with interface types (e.g. IList&lt;T&gt;). I don’t know why Microsoft didn’t support this in the serializer, but it is definitely missing.
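For example, a (hypothetical) class like the following is enough to trip up XmlSerializer, because it has no concrete type to instantiate for the interface-typed property when deserializing:

```csharp
public class Order
{
    // XmlSerializer rejects this member: IList<T> is an interface,
    // so the serializer cannot know what concrete collection to create.
    public IList<string> LineItems { get; set; }
}

// new XmlSerializer(typeof(Order)) throws at construction time,
// while new DataContractSerializer(typeof(Order)) handles the same type.
```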

I started thinking about WCF and serialization across the wire and realized this problem had to have been solved by the DataContractSerializer or people would be irate. So I created some code to use the DataContractSerializer instead. The code looked like this:

DataContractSerializer oDcs =
    new DataContractSerializer(typeof(TypeToBeSerialized));
XmlWriterSettings settings =
    new XmlWriterSettings() { Indent = true };
using (XmlWriter w = XmlWriter.Create(fileName, settings))
{
    oDcs.WriteObject(w, objectToSerialize);
}

That worked quite well.
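Reading the object back for the test comparison is the mirror image, using ReadObject (same hypothetical TypeToBeSerialized and fileName as above):

```csharp
DataContractSerializer oDcs =
    new DataContractSerializer(typeof(TypeToBeSerialized));
TypeToBeSerialized result;
using (XmlReader r = XmlReader.Create(fileName))
{
    // ReadObject returns object, so cast back to the declared type.
    result = (TypeToBeSerialized)oDcs.ReadObject(r);
}
```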


I haven’t been really keen on using the AJAX features within .NET to date. I had been using the XMLHTTP feature in IE for years and just didn’t see the need. However, the other day I had a need to collapse some data within a ListView and I really didn’t want to figure out how to render the script to do this myself (read: I wanted to do something easy). The ListView is a control deserving of its own post, as it gives complete control over how HTML is rendered, a big change from some of the data controls that shipped earlier in .NET. There are plenty of examples on the web of how to use the ListView control. However, when it came to the CollapsiblePanelExtender control, I had a bit of trouble finding any good examples. After spending about an hour with the CollapsiblePanelExtender, I feel some basic usage rules are missing from the control’s website. The properties of the control are well documented, but I really needed to know how to wire the control up. Essentially I wanted this:

Show:

[-] My Items

Here is the data I want to hide and show.

Hide:

[+] My Items

It turns out the simple way to get it set up is to use one Panel for the data you want to hide, and a second Panel to host the label that controls the show and hide functionality. In addition, you need a style to change the cursor when the user hovers over your label. So it would look something like this:

<style type="text/css">
    .ExpanderLabel {cursor:pointer;}
</style>
<cc1:CollapsiblePanelExtender ID="MyPanelExtender"
    runat="server" TargetControlID="MyItemsPanel"
    CollapsedText="[+] My Items"
    ExpandedText="[-] My Items"
    TextLabelID="MyItemsPanelLabel"
    ExpandControlID="MyItemsPanelControl"
    CollapseControlID="MyItemsPanelControl"
    SuppressPostBack="true"/>
<asp:Panel ID="MyItemsPanelControl" runat="server">
    <asp:Label Width="100%"
        CssClass="ExpanderLabel"
        ToolTip="Click to Show/Hide this section"
        ID="MyItemsPanelLabel" runat="server" />
</asp:Panel>
<asp:Panel ID="MyItemsPanel" runat="server">
    Here is the data I want to hide and show.
</asp:Panel>


So the CollapsiblePanelExtender references a TargetControlID, which identifies the panel containing the actual data I want to hide. It also references the ExpandControlID and the CollapseControlID, which I wanted to be the same control. The last set of references is the TextLabelID, CollapsedText, and ExpandedText; this set controls the text shown as a title bar for the data section.
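One wiring detail worth calling out: AJAX Control Toolkit extenders need a ScriptManager on the page, and the toolkit assembly has to be registered, which is where the cc1 tag prefix above comes from. Something like this at the top of the page (the exact assembly name depends on the toolkit version you installed):

```aspx
<%@ Register Assembly="AjaxControlToolkit"
    Namespace="AjaxControlToolkit" TagPrefix="cc1" %>

<asp:ScriptManager ID="PageScriptManager" runat="server" />
```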
