17 September 2008

Column layouts with XSL

Marking up a set of items so that they are laid out in a fixed number of columns from left to right can be tricky if the items vary in height. You need markup something like the example below to achieve this...

<div class="row">
 <div class="item"></div>
 <div class="item"></div>
 <div class="item"></div>
</div>
<div class="row">
 <div class="item"></div>
 <div class="item"></div>
</div>

...with CSS like this...

.item { float: left; width: 250px; }

Outputting this from XSL is fairly straightforward and requires relatively little "code", just some juggling of XSL and XPath. Here is some example XML:

<?xml version="1.0"?>
<items>
 <item title="Item 1"/>
 <item title="Item 2"/>
 <item title="Item 3"/>
 <item title="Item 4"/>
 <item title="Item 5"/>
</items>

The XSL to transform this into HTML of the format shown earlier looks like this:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <xsl:variable name="cols" select="3"/>
 <xsl:template match="/items">
  <xsl:apply-templates select="item[position() mod $cols = 1]" mode="row"/>
 </xsl:template>
 <xsl:template match="item" mode="row">
  <div class="row">
   <xsl:apply-templates select=".|./following-sibling::item[position() &lt; $cols]"/>
  </div>
 </xsl:template>
 <xsl:template match="item">
  <div class="item">
   <xsl:value-of select="@title"/>
  </div>
 </xsl:template>
</xsl:stylesheet>

First off, we've got a variable named "cols" for setting the number of columns to output. Next, inside the first template, matching our root element, we have an apply-templates which selects all the item elements whose position, when divided by the number of columns, leaves a remainder of one. This gives us items 1, 4, 7, 10 etc. and these are then passed to a template in mode "row".

This apply-templates will match the next template, whose job is to output the div element with class "row". Within this div there is another apply-templates, this time matching the current item element and a number of its following siblings: one less than the number of columns required, so in this example we get the context item plus its two following siblings. The template that matches this apply-templates is the last one, without a mode specified.

The last "item" matching template simply writes out the div with class item surrounding the content of the item itself.
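For comparison, the grouping that the two apply-templates achieve boils down to simple chunking: items 1 to cols go in row one, cols+1 to 2*cols in row two, and so on. Here's a minimal JavaScript sketch of the same logic (the function name is just illustrative):

```javascript
// Split a flat list of items into rows of at most `cols` items,
// mirroring the 1, 4, 7... row starts selected by position() mod $cols = 1.
function chunkRows(items, cols) {
 var rows = [];
 for (var i = 0; i < items.length; i += cols) {
  rows.push(items.slice(i, i + cols));
 }
 return rows;
}
```

With five items and three columns this gives two rows of three and two items, exactly the div structure shown above.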

21 August 2008

Function overloading in JavaScript

Trying to write a function so that it will handle different numbers and types of argument always seems to take a lot more code than it should. Today I ended up with a nightmarish if...else because I wanted to support:

  • Object, Object
  • Object, String
  • Object, Object, String

I remembered a post by John Resig from a while back on this, however his method doesn't handle different types. A quick Google found this snippet which does the job but seems a bit clunky, converting constructors to strings and the like. Satisfied that it could probably be improved upon, here is my go at it.


Here an Impl class holds the different method implementations against their signatures, defined as arrays of constructors. The add method maps a new signature to an implementation, the exec method identifies the correct implementation for the current arguments and executes it, and the compile method returns a function that can be used to overwrite the "daddy" function.

function Impl(){
 var _impl = {};
 return {
  add: function(aSignature, fImpl){
   _impl[aSignature] = fImpl;
  },
  exec: function(args, scope){
   var aArgs = Array.prototype.slice.call(args);
   var aCtors = [];
   for(var i = 0; i < aArgs.length; i++) aCtors.push(aArgs[i].constructor);
   return (_impl[aCtors] || function(){}).apply(scope, aArgs);
  },
  compile: function(){
   var impl = this;
   return function(){
    return impl.exec(arguments, this);
   };
  }
 };
}


Here's an example setup of a function stringify that will take two strings, two numbers or an array and do different things depending on what arguments are passed.

function stringify(){
 var someVar = 'Items: ';
 var impl = new Impl();
 impl.add([String, String], function(sLeft, sRight){
  return sLeft + ': ' + sRight;
 });
 impl.add([Number, Number], function(iLeft, iRight){
  return 'Sum: ' + (iLeft + iRight).toString();
 });
 impl.add([Array], function(aIn){
  return someVar + aIn.join(', ');
 });
 stringify = impl.compile();
 return stringify.apply(this, arguments);
}

Each implementation is added with an array of constructors, e.g. [Number, Number], representing the argument signature to match, and the implementation function. Notice that we don't need to do any type checking of the variables in the implementation function, as we must have matched the signature to get that far, and we can give them relevant names.
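Incidentally, the reason indexing _impl with an array of constructors works at all is that JavaScript object keys are strings: the array is coerced with toString(), so the same argument types always produce the same key. A quick illustration (variable names are just for the demo):

```javascript
// Object keys are strings, so an array of constructor functions used as a key
// is coerced via toString(), giving a stable "signature" string per type list.
var keyA = String([String, Number]);
var keyB = String([String, Number]);
var keyC = String([Number, String]);
// keyA and keyB are identical; keyC differs because the order differs
```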

Notice the last two lines are:

stringify = impl.compile();
return stringify.apply(this, arguments);

So on the first execution of the function it is overwritten with a "compiled" version and then that version is called on the next line. Subsequent calls to the function will go straight to the "compiled" version thus speeding things up. This could also be done without the compilation and overwriting bit by just calling exec directly thus:

return impl.exec(arguments, this);

Calling our function

stringify('foo', 'bar'); // => 'foo: bar'
stringify(26, 5); // => 'Sum: 31'
stringify( [ 1, 2, 3 ] ); // => 'Items: 1, 2, 3'

Our different numbers and types of argument are giving us the output we'd expect.

Excluding particular derived types in NHibernate queries

Today I came across an NHibernate problem where I needed to select every instance of a particular base type and all its derived types from a database, apart from one particular derived type. Here is a trivial example:

public class Mammal {}
public class Dog : Mammal {}
public class Cat : Mammal {}
public class DomesticCat : Cat {}

In this case the problem was equivalent to selecting every mammal that isn't a domestic cat.

We're using the table per class hierarchy inheritance model in NHibernate which uses values in a discriminator column to determine which type is held in a particular row of the table.

Selecting the whole hierarchy is done like this:

IQuery q = sess.CreateQuery("from Mammal");
IList mammals = q.List();

And in Criteria:

ICriteria crit = sess.CreateCriteria(typeof(Mammal));
IList mammals = crit.List();

I then needed to be able to effectively add a WHERE discriminator <> 'DomesticCat' to the end of the query. I had a quick search for this special discriminator property and for a Criteria Expression for excluding a particular type but couldn't find either.

The Solution

I finally found the solution on the WHERE clause page of the HQL chapter in the NHibernate reference. There is a special property called class which you can test against a type name in HQL or an actual type in Criteria queries e.g.

IQuery q = sess.CreateQuery("from Mammal m where m.class != 'DomesticCat'");
IList mammals = q.List();

And in Criteria:

ICriteria crit = sess.CreateCriteria(typeof(Mammal));
crit.Add( Expression.Not( Expression.Eq("class", typeof(DomesticCat)) ) );
IList mammals = crit.List();

04 August 2008

Generic collections and inheritance

Update - 07/2010

Covariance and contravariance support in .NET 4.0 takes care of this problem without the need for casting. Here's the relevant MSDN page: Covariance and Contravariance in Generics

I stumbled upon a small annoyance today when trying to use a generic collection of type B where a generic collection of type A is expected, B inheriting from A. With arrays this works fine and the elements are implicitly cast to the base type.

Here's a snippet compiler script which illustrates the problem - list-converter.txt.

In the script I'm using BaseType and InheritingType for A and B. The script initially shows the implicit cast taking place for an array of B objects on line 19. The ArrayTest method expects an array of A but is quite happy being called with an array of B.

public static void ArrayTest(BaseType[] bar) { ... }

InheritingType[] myArray = new InheritingType[]{ it1, it2 };
ArrayTest(myArray); // fine - the array is implicitly converted

If we now look at the ListTest method and try running this section of code...

public static void ListTest(List<BaseType> bar) { ... }

List<InheritingType> myList = new List<InheritingType>();
ListTest(myList); // compile error

...we get an error...

Argument '1': cannot convert from
'System.Collections.Generic.List<InheritingType>' to
'System.Collections.Generic.List<BaseType>'
We get the same result if we try an explicit cast.


Obviously allowing an implicit conversion for generics in general doesn't make a lot of sense, but for lists I think it does, and it's a pain to have to convert from one generic collection type to another.


Thankfully, with the help of some generics (of all things) and the ConvertAll method of List, there's quite an elegant solution to this problem: we can create ourselves a nice generic list converter. Here's an example:

public class ListConverter<TFrom, TTo> where TFrom : TTo
{
 public static IEnumerable<TTo> Convert(IEnumerable<TFrom> from)
 {
  return Convert(new List<TFrom>(from));
 }

 public static List<TTo> Convert(List<TFrom> from)
 {
  return from.ConvertAll<TTo>(new Converter<TFrom, TTo>(Convert));
 }

 public static TTo Convert(TFrom from)
 {
  return (TTo)from;
 }
}

We use two type arguments, the type we're converting from, which will be B from the example above, and the type we're converting to, A. Notice also the where TFrom : TTo which enforces that B must inherit from A.

As we need the ConvertAll method of List we have a method that takes an IEnumerable and creates a new List. We also have the method that does the main ConvertAll on the List and the delegate which is passed to ConvertAll.

This allows us to create a type converter as we code without needing to mess about e.g.

ListTest(ListConverter<InheritingType, BaseType>.Convert(myList));

16 July 2008

A quick go at CSS preprocessing in JavaScript

Chris Heilmann's post today about CSSPP, a PHP preprocessor for CSS, set the old brain going so I decided to knock up a quick proof of concept (FF only) in JavaScript doing some of the same stuff. Namely, it has very quickly hacked nested rule parsing and a form of the replacement variables feature.

Currently you have to specify your CSS for processing in a script tag with a particular type...

<script type="text/csspp">
.red {
 border: 2px solid red;
 h2 {
  background: #933;
 }
 p {
  background: $derek$;
 }
}
.blue {
 border: 2px solid blue;
 background: $other$;
}
</script>

...and the variables are retrieved from a JavaScript object by name...

<script type="text/javascript">
var cssVars = {
 derek: '#FCC',
 other: '#CCF'
};
</script>

Obviously a JavaScript implementation like this would slow down the user experience and wouldn't have any of the snazzy caching features of a server-side version but it might be useful if you're hacking a bit of CSS and don't have a PHP server handy.
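The variable-replacement step itself is straightforward to sketch. This isn't the actual proof-of-concept code (the function name is illustrative), but it shows the idea: swap each $name$ token for the matching value from the variables object, leaving unknown tokens alone.

```javascript
// Replace $name$ tokens in a CSS string with values from a variables object.
// Tokens with no matching entry are left untouched.
function replaceVars(css, vars) {
 return css.replace(/\$(\w+)\$/g, function (match, name) {
  return vars.hasOwnProperty(name) ? vars[name] : match;
 });
}
```

For example, replaceVars('background: $derek$;', cssVars) gives 'background: #FCC;' with the cssVars object above.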

19 June 2008

Comparison of "netbook" screen resolutions

I'm thinking about pushing the boat out and getting myself a "netbook" and was wondering whether I'd actually find the relatively low resolutions they offer usable, so I knocked up this quick comparison. To make them as usable as possible you'd probably have your browser in full screen mode, so that's what I've depicted here.

As you can see, both of the greater resolutions would be very usable for browsing, and the Mini-Note's resolution is brilliant for a 9" screen, being the same as you usually see on much larger machines.

The 7" EEE PC's 800 x 480 resolution isn't quite enough. You can see in the screen shot the horizontal scroll bar at the bottom and I think that would be a common problem as lots of sites are designed for 1024 pixels as minimum width rather than 800 pixels these days.

To sum up, the HP would almost be a viable alternative to a full size laptop if it were more powerful. For now though, it looks like the Acer Aspire One with its good resolution, 1.6GHz Intel Atom processor and excellent price of £199, would be the best one to go for.

02 June 2008

"duplicate association path" bug in NHibernate Criteria API

This problem exists in Hibernate itself as well and, contrary to some comments I've seen in the bug tracker, I believe it is a bug in the Criteria API and not in HQL.

A trivial example

I have a Store type representing a shop which has a collection, Products, containing Product types the shop stocks e.g. "Golf Balls", "Bananas", "Hats" etc. I want to get all the stores who stock both Golf Balls and Hats.

In HQL this would be :

FROM Store AS s 
INNER JOIN s.Products AS prod1
INNER JOIN s.Products AS prod2
WHERE prod1.Type = 'Golf Balls' 
   AND prod2.Type = 'Hats'

...pretty straightforward and works fine.

In Criteria API this would be:

IList stores = sess.CreateCriteria(typeof(Store))
   .CreateAlias("Products", "prod1")
   .CreateAlias("Products", "prod2")
   .Add( Expression.Eq("prod1.Type", "Golf Balls") )
   .Add( Expression.Eq("prod2.Type", "Hats") )
   .List();

...again straightforward and seems logical, but this produces an error...

NHibernate.QueryException: duplicate association path Products

As I said, it seems like a pretty solid candidate for a bug, and it's odd considering the whole point of CreateAlias is surely that I want to use the same association more than once and so need to alias it to different labels.

Unfortunately there's no way to get around this issue if you need distinct association joins like the above example, and looking at the NHibernate code it doesn't seem like an easy fix. If, however, you can apply your criteria or sorts to the same alias then there is a workaround.


Note: this only applies where you don't require distinct association joins.

If we take a look at this handy NHibernate API reference we see that the two implementing classes for ICriteria are NHibernate.Impl.CriteriaImpl and NHibernate.Impl.CriteriaImpl.Subcriteria.

CriteriaImpl is the root criteria you get by calling CreateCriteria on ISession, and Subcriteria is what you get with every call to CreateCriteria or CreateAlias on ICriteria.

First you need to retrieve the root CriteriaImpl for your working ICriteria. Your working ICriteria may be the root but if it isn't you need to recurse up through the Parent property until you reach the CriteriaImpl object.

CriteriaImpl has an IterateSubcriteria method which returns an IList of all its Subcriteria descendants. You can loop through this list checking the Parent and Path properties of each item. You check Parent because the value of Path is relative and you're only interested in what will be sibling Subcriteria to the one you're about to add.

If you find a match you can retrieve its alias from the Alias property, otherwise you can add a new alias to your working ICriteria.

Update - Jan 2014

It seems this bug is still not fixed in NHibernate (or Hibernate for that matter) and that it may also affect the LINQ provider. The relevant issue links are:

I'm very tempted to have a go at fixing this myself given there still seem to be a few people struggling with it. Will post another update if I get anywhere.

21 May 2008

Implementation of the Visitor pattern using .NET Generics

In a recent post I discussed using the Visitor pattern to solve a lazy initialization problem in NHibernate. The example Visitor class in that post is tied to the base class of the class hierarchy it is dealing with so everywhere you need to use a Visitor class you'd need to define at least one of these Visitor classes and then possibly inherit from it to implement alternative functionality.

A better solution is to use .NET Generics to create the Visitor e.g.:

// Visitor for base type TBase
public class Visitor<TBase>
{
 // Delegate for type TSub which can be any subclass of TBase
 // that takes a parameter of type TSub
 public delegate void VisitDelegate<TSub>(TSub u) where TSub : TBase;

 // Dictionary to contain our delegates
 Dictionary<Type, object> vDels = new Dictionary<Type, object>();

 // Method to add a delegate for type TSub which can be any subclass of TBase
 public void AddDelegate<TSub>(VisitDelegate<TSub> del) where TSub : TBase
 {
  vDels.Add(typeof(TSub), del);
 }

 // Visit method for type TSub which can be any subclass of TBase
 // takes one parameter of type TSub, picks the right delegate
 // and executes it passing the parameter to it
 public void Visit<TSub>(TSub x) where TSub : TBase
 {
  if (vDels.ContainsKey(typeof(TSub)))
   ((VisitDelegate<TSub>)vDels[typeof(TSub)])(x);
 }
}

I've knocked up a quick Snippet Compiler demo here. The key parts are the Accept methods in the classes of the hierarchy...

public class Cat : Mammal
{
 public override void Accept(Visitor<Mammal> visitor)
 {
  visitor.Visit(this);
 }
}

...and adding the actual work to be done by creating delegate for each of the types you want to "capture"...

// Our visitor on our base class which will do our type specific work
Visitor<Mammal> visitor = new Visitor<Mammal>();
string outerVar = "A variable from outside delegate";

// Add the work to be done for DomesticCat
visitor.AddDelegate<DomesticCat>(delegate(DomesticCat a){
 WL("Doing some DomesticCat specific work");
 WL(a.Age + ", " + a.Color + ", " + a.Name);
});

// Add the work to be done for Dog
visitor.AddDelegate<Dog>(delegate(Dog b){
 WL("Doing some Dog specific work");
 WL(b.Age + ", " + b.Color);
});

The visitor pattern is quite widely applicable in OO environments. While this solution may not be ideal where you need the visitor class to have a lot more information about the task it is to perform, it is certainly preferable to the large if...else if...else type construct you might otherwise use for small tasks.
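For what it's worth, the same dictionary-of-delegates dispatch can be sketched in JavaScript, with constructor functions standing in for typeof(TSub). This is just an illustrative sketch, not code from the demo:

```javascript
// A type-to-handler dictionary: addDelegate registers a handler against a
// constructor, visit looks up the handler for the value's exact constructor.
function Visitor() {
 var handlers = new Map();
 return {
  addDelegate: function (ctor, fn) { handlers.set(ctor, fn); },
  visit: function (x) {
   var fn = handlers.get(x.constructor);
   if (fn) return fn(x);
  }
 };
}
```

As in the C# version, the lookup matches the exact runtime type rather than walking the inheritance chain.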

20 May 2008

dp.SyntaxHighlighter jQuery plugin

I've just added syntax highlighting to the code samples on my blog using the excellent dp.SyntaxHighlighter. Adding a class name, e.g. "c#" or "xml", to your pre or textarea elements and calling HighlightAll magically highlights the content and turns the surrounding element into a fancy box complete with toolbar.

I tend to wrap my samples in <pre><code>...</code></pre> so applying the highlight on the pre tag would have broken because of the additional code element inside. A quick gander at the code, a bit of refactoring in shCore.js and a jQuery one-liner later, and I've got a nice(ish) plugin that will apply highlighting to the elements selected through jQuery thusly:

   $("code").syntaxHighlight({showGutter: false, firstLine: 5});

The settings are:

  • showGutter <bool>
  • showControls <bool>
  • collapseAll <bool>
  • firstLine <int>
  • showColumns <bool>

You can grab the files from here:

This mod is using the latest stable release (1.5.1) of dp.SyntaxHighlighter. It looks like version 1.6 is in the pipeline with some major changes so I'll keep an eye on that and maybe revisit this when it comes out.

16 May 2008

Implicit polymorphism and lazy collections in NHibernate

If you create a lazy loaded property or collection in NHibernate which can contain any type from a class hierarchy, for example by having mappings like this:

<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Mammal" table="Mammal">
    <discriminator column="mammal_type" type="String"/>
    <subclass name="Cat" discriminator-value="CAT">
      <subclass name="DomesticCat" discriminator-value="CAT-DOMESTIC"/>
    </subclass>
    <subclass name="Dog" discriminator-value="DOG"/>
  </class>
  <class name="Zoo" table="Zoo">
    <bag name="Animals">
      <key column="zoo_fkey"/>
      <one-to-many class="Mammal"/>
    </bag>
  </class>
</hibernate-mapping>

You'll find that on requesting your objects from your collection they will be of a special new type NHibernate has created, derived from your base class which in this case is "Mammal".

This is a real problem because it means that you can't perform is or as operations on it to determine which actual type it is and you can't cast it to access properties and methods of your derived types.

The solution is to use the Visitor pattern which is described in detail on this site with a couple of examples in C# on this site. Essentially it involves creating a class with a method which is overloaded for each of the types in your class hierarchy.

class MammalVisitor
{
  public void Visit(Mammal m) { /* ... base Mammal operations ... */ }
  public void Visit(Cat c) { /* ... Cat operations ... */ }
  public void Visit(DomesticCat dc) { /* ... DomesticCat operations ... */ }
  public void Visit(Dog d) { /* ... Dog operations ... */ }
}

This "visitor" object is then passed to a method defined on the base class of your hierarchy and then subsequently overridden on each derived type.

class Mammal
{
  public virtual void Accept(MammalVisitor mv) { mv.Visit(this); }
}

class Cat : Mammal
{
  public override void Accept(MammalVisitor mv) { mv.Visit(this); }
}

class Dog : Mammal
{
  public override void Accept(MammalVisitor mv) { mv.Visit(this); }
}

These methods simply call the visitor's method passing this to it which in turn will automatically execute the correct overload.

Mammal m; // some unknown derived type of mammal
MammalVisitor mv = new MammalVisitor();
m.Accept(mv); // executes the Visit overload matching m's actual type

These overloaded methods can then perform your type specific functions. In the example above you have no knowledge of the type of Mammal that you have however when you call Accept the relevant code is automatically executed. If Mammal happens to be a Cat type then the Cat overloaded Visit method is called.

13 May 2008

Creating Views using XSL in ASP.NET MVC

There's something about Web Forms that never felt quite right to me; it all seemed a bit too much like a bodge that created more problems than it solved. I've been having a tinker with ASP.NET MVC for the past few days now and I'm really liking the way it all fits together; giving you clean separation between the layers and a means of passing data between them.

The View layer in MVC uses a very Classic ASP-esque markup for mixing code and HTML. Seems rather dirty but by this point any major processing should have been done and all that you'll need to do is render HTML with maybe some looping.

That being said if all we'll need to do by this point is some looping at the most then why not use an XSL transform instead?

XSL is well defined and established now, with a load of tools out there for giving you a WYSIWYG view of your transform as you build it. Plus, as it is purely XML based, it will likely be a lot easier for designers (the folk who we really want to build Views) to get to grips with.

A quick proof of concept

Step 1 - Create a model class for your XSL ViewData

This simply has an XmlDocument to hold our serialized object and a string holding the path to our XSL file.

public class XslViewData
{
 private System.Xml.XmlDocument _doc;
 private string _xslPath;

 public XslViewData(System.Xml.XmlDocument doc, string xslPath)
 {
  _doc = doc;
  _xslPath = xslPath;
 }

 public System.Xml.XmlDocument Doc
 {
  get { return _doc; }
 }

 public string XslPath
 {
  get { return _xslPath; }
 }
}

Step 2 - Serialize your object to XML in your Controller

First make sure the type you want to render is ready for XML serialization. This involves adding some attributes to classes and properties so read up on that first before you carry on. It's important to get your type serializing out in a nice way as it will be easier to work with in the XSL.

using System.Xml;
using System.Xml.Serialization;

Product prod = ProductRepository.GetProduct(id);
XmlSerializer ser = new XmlSerializer(typeof(Product));
XmlDocument doc = new XmlDocument();
System.IO.MemoryStream ms = new System.IO.MemoryStream();
XmlWriter xw = XmlWriter.Create(ms);
ser.Serialize(xw, prod);
xw.Flush();
ms.Position = 0;
doc.Load(ms);
RenderView("Xsl", new XslViewData(doc, "/Content/prodDetail.xsl"));

Step 3 - Create a ViewPage or ViewUserControl to host your XSL

All you need in the aspx/ascx file is an Xml ASP.NET control...

<asp:Xml ID="Xml1" runat="server" onload="Xml1_Load"></asp:Xml>

...and in the code behind...

public partial class Xsl : ViewPage<XslViewData>
{
 protected void Xml1_Load(object sender, EventArgs e)
 {
  Xml1.Document = ViewData.Doc;
  Xml1.TransformSource = ViewData.XslPath;
 }
}

Step 4 - Create an XSL file

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <xsl:output method="xml" indent="yes" omit-xml-declaration="yes"/>
 <xsl:template match="/Product">
  <h2><xsl:value-of select="Name"/></h2>
  <xsl:apply-templates select="Image"/>
 </xsl:template>
 <xsl:template match="Image">
  <img>
   <xsl:attribute name="src">
    <xsl:value-of select="ImageUrl"/>
   </xsl:attribute>
  </img>
 </xsl:template>
 <xsl:template match="*"/>
</xsl:stylesheet>


Not only is this simple to implement, it has a lot of scope for improvement: compiling transforms, adding caching and configuration etc. This method also has the advantage that it requires no recompile to alter the template.


24 April 2008

Cross-browser data binding with jQuery

From version 4.0 Internet Explorer has had the ability to bind certain HTML elements to a client-side data source. Pretty much any data you can access with ADO you can use as a data source but the one we've used most commonly at work is XML. By adding some attributes to your HTML elements e.g.

<input type="text" datasrc="#dsoComposers" datafld="compsr_last"/>

you can get them to render out the data from the data source and, in the case of inputs, persist any changes back to the source. Additionally, if you have a multi-row data source and you bind it to a table the contents of the tbody tag are duplicated for each row.

This is very useful for making web applications because:

  • you only send the raw data along with a small amount of rendering HTML to the client
  • the client edits the data, adding/deleting rows without postbacks
  • the client posts back structured data rather than a ton of awkwardly named querystring variables
  • you need no presentation code for translating your data to or from HTML

All this considered I thought I'd have a go at creating a simple cross-browser version of this databinding functionality in jQuery with a limited feature set.

Features to implement

For this proof of concept I decided the features I would try and implement were:

  • Support for JSON as a data source
  • Repeat rendering of a table's tbody section for each item in an array of objects
  • CSS class name based way of tying properties of the objects to HTML elements
  • Rendering of property values within HTML elements innerHTML or value
  • Attaching of event handlers to persist data changes on inputs back to the array data source
  • Some ability to reflect programmatic changes to the underlying data source in the bound HTML elements
  • Doing the above without needing to fully refresh the HTML every time
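The CSS class name convention from the list above can be sketched as a couple of small helpers. These are hypothetical names, not the actual proof-of-concept code: one pulls the bound property name out of a class like field[origin], the other looks that property up in a data row.

```javascript
// Extract the bound property name from a class attribute like "field[origin]";
// returns null for elements that aren't bound to anything.
function fieldName(className) {
 var m = /field\[([^\]]+)\]/.exec(className || '');
 return m ? m[1] : null;
}

// Resolve the value an element should render for a given data row.
function boundValue(className, row) {
 var name = fieldName(className);
 return name !== null && name in row ? row[name] : '';
}
```

The databind step then just walks the template row's elements, calling boundValue for each and writing the result into innerHTML or value as appropriate.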


Some "template" HTML...

    <table id="info">
      <thead>
        <tr>
          <th>Common name</th>
          <th>First developed</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td class="field[name]"></td>
          <td><input type="text" class="field[origin]"/></td>
          <td class="field[developed]"></td>
          <td class="field[use]"></td>
          <td><button class="field[delete]">Delete Row</button></td>
        </tr>
        <tr>
          <td colspan="4" style="border-bottom: 2px solid #888;"><textarea class="field[comments]" style="width: 100%;"></textarea></td>
        </tr>
      </tbody>
      <tfoot>
        <tr>
          <td colspan="4"><button id="add">Add Item</button></td>
        </tr>
      </tfoot>
    </table>

...some data...

var data = [
   { name: 'Braeburn', origin: 'New Zealand', developed: '1950s, United States', comments: '', use: 'Eating' },
   { name: 'Bramley', origin: 'Southwell, Nottinghamshire, England', developed: 'about 1809', comments: '', use: 'Cooking' },
   { name: 'Cox\'s Orange Pippin', origin: 'Great Britain, New Zealand', developed: 'c. 1829', comments: '', use: 'Eating' },
   { name: 'Empire', origin: 'New York', developed: '1966', comments: 'Lovely white subacid flesh. Tangy taste.', use: 'Eating' },
   { name: 'Granny Smith', origin: 'Australia', developed: '1868, Australia', comments: 'This is the apple once used to represent Apple Records. Also noted as common pie apple.', use: 'Eating or cooking' }
];

...and a small amount of JavaScript...


The add button handler looks like this...

data.push({ name: '', origin: '', developed: '', comments: '', use: '' });

...any code making programmatic changes to the data must recall databind to refresh the bound HTML elements.


I've set up this quick proof of concept here. I haven't looked at how it performs with large arrays of data but I expect the answer would be badly.

Going forward there may be scope for turning this into a proper jQuery plugin with support for other types of data source and the ability to bind an arbitrary group of elements to some data rather than just a table.


14 March 2008

JSLinted and namespaced Mapstraction

I've been using Mapstraction in work over the past few months and now my current project has come to an end I've generated a few Subversion patches of my changes, which include:

  • JSLinting the source
  • Adding simple namespacing (read "hacking in") with a backward compatibility "pollute" mode
  • Adding ability to execute event handlers in a particular scope along with a general rework of the event implementation to allow easier extensibility
  • Fixing several bugs with the Multimap implementation
  • Adding new and refining existing JSDoc comments


Should probably point out that I'm not part of the team that maintain Mapstraction and these aren't "official" patches. Hopefully the bug fixes at least will make it back into the repository though.


The namespacing is on by default so to get it to work with an old page you do need to call mxn.activatePolluteMode() first. Additionally there are a few loose utility functions in the mapstraction file that are also within the namespacing. You can get to three of them (metresToLon, lonToMetres and loadScript) from the mxn.fn namespace so if you have been making use of those you'd have to change your code to mxn.fn.metresToLon() for example. I'm not sure which of the other functions in there people would be likely to make use of so I left it at those three to try and minimise surface area but it wouldn't be a problem to add them all to fn.

There is also some backward compatibility stuff in the events implementation. I modified events so that they pass an eventArgs object rather than the arguments individually, that way the signature of all handlers is the same. This is only activated when you pass the third "scope" argument to addEventListener so old implementations should work as before either being passed no argument for move or a LatLonPoint for click.

18 February 2008

Large capacity PMP roundup

After my considerable disappointment at no news of a 160GB iPod Touch from Apple at this year's Macworld I decided to have a look at what the competition is doing in the way of large capacity portable media players.

My criteria

  • 40GB or greater disk
  • 3" or larger screen (preferably 16:9)

The contenders

Model: 504
 Capacity (GB): 40, 80, 160
 Screen (Inches): 4.3"
 Formats: Video: MPEG-4; Audio: MP3, WMA, WAV
 Battery A/V (Hours): 17/5.5
 Price: £172.21, £258.99, £394.95

Model: 704 WiFi
 Capacity (GB): 40, 80
 Screen (Inches): 7"
 Formats: Video: MPEG-4, WMV; Audio: MP3, WMA, WAV
 Battery A/V (Hours): 25/5.5
 Price: £269.99, £368.23

Model: 605 WiFi
 Capacity (GB): 80, 160
 Screen (Inches): 4.3"
 Formats: Video: MPEG-4, WMV; Audio: MP3, WMA, WAV
 Battery A/V (Hours): 17/5.5
 Price: £214.99, £254.64

Model: 705 WiFi
 Capacity (GB): 80, 160
 Screen (Inches): 7"
 Formats: Video: MPEG-4, WMV; Audio: MP3, WMA, WAV
 Battery A/V (Hours): 25/5.5
 Price: £269.99, £368.23

Model: A3
 Capacity (GB): 60
 Screen (Inches): 4"
 Formats: Video: DivX, XviD, MPEG 1/2/4, WMV, H.264; Audio: MP3, WMA, FLAC, OGG, AAC, AC3, BSAC, True Audio, WavPack, G.726, CM
 Battery A/V (Hours): 25/5.5
 Price: £315.95

Model: Q5
 Capacity (GB): 60
 Screen (Inches): 5"
 Formats: Video: DivX, XviD, MPEG4, WMV
 Battery A/V (Hours): 13/7
 Price: £409.95

Model: PMC-140
 Capacity (GB): 40
 Screen (Inches): 3.5"
 Formats: Video: WMV; Audio: MP3, WMA

Model: PMP-140
 Capacity (GB): 40
 Screen (Inches): 3.5"
 Formats: Video: AVI, XviD, MPEG-4, MPEG-1; Audio: MP3, WMA, WAV, ASF

Archos have the most players meeting the criteria, with large disk versions up to 160GB, however they are let down by poor codec support, although you can add support for more by downloading codecs from their website (at extra cost).

The two Cowon players offer an impressive list of codecs out-of-the-box but they only come in a maximum 60 gigabyte version and are a lot more expensive than the Archos players in particular.

The iRiver players have small disks, small screens and poor codec support but I couldn't find anywhere stocking them anyway so it looks like I couldn't buy one even if I were interested.


The Archos players definitely seem to be the best deal if you're not bothered about having support for a large number of codecs. One big criticism of them, however, is that looking at the overview of, for example, the 605 WiFi on the Archos site, it seems like you're getting loads of features for your money. Reading the small print, though, will tell you that to use some of them you need to buy optional add-ons, and they're not cheap. The DVR dock for recording TV is an extra £60, the web browser is £20 and the video podcast plug-in (read iPod compatibility pack featuring H.264 and AAC codecs) is £15! I'm surprised they are allowed to sell the product based on features it doesn't have out of the box and it certainly puts a sting in the tail of an otherwise good deal.

16 February 2008

NHibernate in Visual Web Developer Express

If you fancy using NHibernate in VWD you'll have trouble because you can't compile your mapping files into the same assembly as your classes. As a result you can't add mappings to your session factory by assembly name or class name.

Thankfully there is a simple solution to this, which I'll demonstrate with the aid of the quickstart example in the NHibernate documentation.

Start by creating all your persistence class files in your App_Code folder with your mapping files (.hbm.xml) alongside. Next, make the alterations to your web.config as outlined in section 1.1 of the quickstart, but leave out this line:

<mapping assembly="QuickStart" />

In your mapping files, change the assembly attribute of the root hibernate-mapping element to "App_Code" and remove the namespace attribute if you're not using a namespace (the default behaviour of VWD), e.g.:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="App_Code">

Finally, copy the code for the NHibernateHelper class from section 1.4 and change its constructor from this:

static NHibernateHelper()
{
    sessionFactory = new Configuration().Configure().BuildSessionFactory();
}

to this:

static NHibernateHelper()
{
    Configuration cfg = new Configuration().Configure();
    cfg.AddDirectory(new System.IO.DirectoryInfo(HttpContext.Current.Server.MapPath(@"~/App_Code/")));
    sessionFactory = cfg.BuildSessionFactory();
}

The original constructor used the mapping element in the web.config to find out which mappings to load; here we're telling it to load every mapping file it finds in the App_Code folder. You can also use the AddFile method to add individual mapping files.

09 February 2008

Merge with "How to Code"

Initially I thought it was a good idea to write separate blogs because people interested in gadgets and codecs aren't necessarily interested in code and databases.

I've since realised three things; that the target audiences were pretty much the same, that dividing my traffic was a bad idea and that I don't really have enough to say or the time to fill two blogs.

So, having just finished copying all the posts over from Derek's How to Code, I'll be deleting it and maintaining just the one from now on.

What a shame I never got to use this snazzy bit of artwork...

18 January 2008

Processing instructions and Microsoft XML DOM

If you're using Microsoft's XML DOM you'll find it strips the encoding attribute out of your processing instruction (the <?xml ... ?> bit) when you get the value of the Xml property. This is because it always returns Unicode regardless of what the input encoding was. As Unicode is the default for the XML specification, no encoding attribute is required.

The Save method, which writes the contents of the DOM document to a file, maintains the original encoding. If you need your XML in the original encoding after some manipulation in the DOM then you can do this:

' Save first: the Save method preserves the original encoding
xmldoc.Save "c:\blah.xml"

' Then read the file back in as plain text
Set fs = Server.CreateObject("Scripting.FileSystemObject")
Set ts = fs.OpenTextFile("c:\blah.xml")
strXml = ts.ReadAll
ts.Close

Programmatically changing a processing instruction

Looks a bit dodgy but this is the way to do it:

Set pi = xmldoc.createProcessingInstruction("xml", "version=""1.0"" encoding=""UTF-16""")
xmldoc.replaceChild pi, xmldoc.childNodes.Item(0)

12 January 2008

Mapstraction "to do" list

I've been using Mapstraction to implement draggy maps in our platforms at work for the past month or so, and thought I'd share some of the problems I came across so that, should you decide to do the same, you know what to watch out for. I've fixed the issues below for the APIs I was working with, and I'll be submitting the changes back to the project in the near future.

I should say first that Mapstraction is an awesome piece of work that frees you from having to commit to one mapping provider. When a client doesn't want to pay for maps, so you give them Google, and then changes their mind when Google starts inserting advertisements, all you have to do is change one line of code.

The List

1. Patchy implementations

Mapstraction abstracts the lowest common denominator set of functionality for its supported APIs. I've been working primarily with Multimap and have discovered that, for this API at least, some API classes are missing methods or have non-working implementations.

In the Multimap case this is just a few methods here and there, but the thing I found annoying was that, rather than the method being empty or throwing a "not implemented" exception, it was populated with the code for one of the other APIs.

2. Incomplete events

The only events catered for by Mapstraction are "move end" and "click", which is severely limiting considering the vast array of events available through the native APIs. Events like "zoom", "open bubble" and "change map type" aren't there.

Arguably, abstracting events is the most difficult part of the process, as you have to take into account differences in when similar events fire, the order in which they fire and the event arguments that are passed back to the handler.
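To illustrate the shape of the problem, here's a minimal sketch of one way an event abstraction could work: a per-provider translation table mapping a common event name onto each native one and normalising the arguments. All of the names here (EVENT_MAP, Adapter, addListener) are hypothetical, not the Mapstraction API.

```javascript
// Hypothetical per-provider table: one common name ("zoom") mapped to
// each provider's native event name. Names invented for illustration.
var EVENT_MAP = {
  google:   { zoom: 'zoomend' },
  multimap: { zoom: 'changedZoomFactor' }
};

function Adapter(provider, nativeApi) {
  this.provider = provider;
  this.nativeApi = nativeApi; // assumed to expose addListener(name, fn)
}

// Register a handler against the common name; translate to the native
// name and wrap the native arguments in a normalised event object.
Adapter.prototype.addEvent = function (commonName, handler) {
  var nativeName = EVENT_MAP[this.provider][commonName];
  this.nativeApi.addListener(nativeName, function (nativeArgs) {
    handler({ type: commonName, raw: nativeArgs });
  });
};
```

Even in a toy like this you can see the pain points: the table has to know every provider's quirks, and the `raw` arguments still differ per provider until you normalise those too.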

The event implementation for the two events that do exist also doesn't cater for overriding the scope in which the handler function is executed (via the call method), which is essential if you're working with methods of JavaScript objects.
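This is plain JavaScript behaviour rather than anything Mapstraction-specific: a method passed around as a bare function loses its "this" binding, so a dispatcher needs to accept a scope and apply it with Function.prototype.call. A minimal illustration (fireEvent and counter are made up for the example):

```javascript
// An object whose method only works if "this" is the object itself.
var counter = {
  count: 0,
  onMapMove: function () { this.count++; }
};

// A dispatcher that takes an optional scope and applies the handler
// with call, so "this" inside the handler is what the caller expects.
function fireEvent(handler, scope) {
  handler.call(scope || null);
}

fireEvent(counter.onMapMove, counter); // counter.count is now 1
```

Without the scope argument, `this.count++` inside the handler would not touch `counter` at all, which is exactly the bug you hit when a library only accepts bare function references.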

3. No namespacing or prefixing

The Mapstraction code is not namespaced, scoped or prefixed in any way, so it is dangerous when used alongside a lot of JavaScript from other sources: you'll likely end up with name clashes and global variables being overwritten.
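The usual remedy, sketched below with an invented library name rather than anything Mapstraction actually does, is to hang everything off a single global object and keep working state private inside a closure, so only one name is ever exposed to the page:

```javascript
// One global (MyMapLib); everything else lives inside the closure and
// cannot clobber, or be clobbered by, other scripts' globals.
var MyMapLib = (function () {
  // Private state: invisible outside this function.
  var providerCount = 0;

  // Only what is returned here is publicly reachable.
  return {
    registerProvider: function (name) {
      providerCount++;
      return name + ' registered';
    },
    getProviderCount: function () {
      return providerCount;
    }
  };
})();
```

The pattern costs almost nothing and would remove this whole class of problem from the library.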

4. No standards enforced on the source code

There appear to be no coding standards enforced on the source. It's full of undeclared variables, dodgy constructs and some bits that just aren't formatted well, making the code difficult to follow.

The likes of Yahoo! run all their code through JSLint before it goes anywhere near deployment, and with a library of this type I think at least a loose linting is a must.

To sum up

If your integration is complex and requires a lot of event handling and dynamic manipulation of the map artifacts, then Mapstraction isn't quite there yet. For most other things, though, it's worth filling in any holes yourself for that freedom of API.