05 December 2010

Taking my music listening in a new direction

or, Why I'm cancelling my Spotify Premium subscription

I'm not entirely sure when I started using Spotify, but it was probably late 2008 or early 2009, and I've found it to be a revelation for music discovery. I've spent hours just clicking from one artist to another, exploring back catalogues and having a serious listen to full albums in a way that would be quite difficult without already having bought the album or "obtained" it from P2P. Previously, using a combination of Last.fm and Myspace, you could get quite close, but the Spotify desktop app made the whole experience much more seamless and enjoyable with full, consistent-quality tracks.

I've been a Premium subscriber since 1 Aug 2009, with several factors leading to my decision to pay up. The first was high-bitrate uninterrupted audio; having some decent audio kit at home, I wanted to make the most of it. The second was the Spotify for Android app I could use on my HTC Hero, which is hands down the most convenient means of getting music onto a mobile device. Put tracks in a playlist in the desktop app and they magically appear on the device – brilliant.

So, why am I quitting?

1. Cost

To date that's £169.83 in subscription fees: £9.99 a month for 17 months. I tend to buy CDs for £5 off Amazon, so that equates to about 33 CD albums, or about 2 albums a month. I've listened to a lot more albums than that during the time, but I doubt there would have been more than 33 that I would have considered buying a CD copy of. I've never paid for an MP3; I refuse to pay the same price as a CD for a lossy version. I paid for Spotify because the service does offer significantly more, especially when you use the mobile apps, but I'm just not sure it's worth £9.99 a month.

2. Quality

Spotify Premium ups the track bitrate from 160kbps to 320kbps. At least that's the idea; in practice it seems large portions of their library are only available in the lower quality, and I doubt that more than 10% of the tracks I've listened to recently have been high bitrate. There's also no way to identify "high quality" tracks in the app, so I'm seriously sceptical about whether I'm getting the high bitrates I'm paying for. The quality is certainly still miles off CD audio, and having made a return to CDs recently it's very noticeable that I've been missing out on audio clarity and making do with poor-quality audio whilst also paying for the privilege.

3. Nothing to show for it

It's a bitter pill to swallow but, worst of all, after all that cost I've just been renting the music. I don't get to keep the OGG tracks, I don't own any of it and, when I cancel, the app on my phone will just stop working.

What service would I be happy with?

I've been wondering about the kind of service I'd like to see and would be happy to pay for. Unlimited ad-supported listening to any track, for discovering new music, would be fine. I'd like to be able to buy albums, download them in full CD quality and stream them uninterrupted (no ads) at a reasonable bitrate to other computers and mobile devices. I'd also like to be able to register CDs I own with the service so those tracks are also available wherever I am.

The roll-your-own solution might be buying CDs, ripping them and paying $9.99 for a 50GB Dropbox to sync up my machines. Apparently the Dropbox for Android app can stream music and movies straight to the device, so maybe that's an option worth considering.

Lossless

In this day and age of high-definition video, broadband internet and huge hard disks there is no need for, and I don't want to pay for, low-bitrate music. It's rather interesting that the most widely available medium with the highest audio quality is the Blu-ray disc, in the form of Dolby TrueHD and DTS-HD. With video the soundtrack plays more of a supporting role, so lossy compression can be forgiven to some extent, but with music the audio is the main event; it should be CD quality at least. MP3 was great for portability, but it has a lot to answer for in terms of killing our appreciation of high-quality audio, and with it the market's desire to provide us with (and push) a high-definition medium solely for audio.

25 November 2010

Adding a design mode to your MVC app

When developing websites you'll likely have ended up in the situation where you need to make some styling changes to a page that's buried deep within the site. If that page is at the end of a process such as registration or checkout then it can be extremely time-consuming entering test data that passes validation just to navigate to the correct page. Add to that the complexity of maybe needing to log in, and having to do the same thing in multiple browsers, and things can get ridiculous. If you're using the WebForms view engine then you have limited design-time capability in Visual Studio, but this isn't satisfactory for ensuring cross-browser compatibility.

What's needed is a dumb version of the site which simply renders the views using a variety of data. Effectively you want to create a load of static pages, each with ViewData, Model etc. set up so that they represent a different step in one of the real processes on the site. Using this version you'd be able to get to the correct page straight away, refresh it quickly after making markup or CSS changes and visit the page in all your test browsers. Ideally this version of the site will require no authentication and won't have any external dependencies, like databases or web services, that must be set up or configured.

We can use a set of different controllers to do this, each having some hard-coded model data, for example:

// A real controller may look like this... 
public class PeopleController : Controller
{
	public ActionResult Index()
	{
		List<Person> people = GetListOfPeopleFromDatabase();
		return View(people);
	}

	private List<Person> GetListOfPeopleFromDatabase()
	{
		// Do some data access
		
		return new List<Person>
			{
				new Person{ Name = "Runtime Person 1" },
				new Person{ Name = "Runtime Person 2" },
				new Person{ Name = "Runtime Person 3" },
			};
	}
}


// And our design time controller like this...
public class PeopleController : Controller
{
	[Description("Empty people list page")]
	public ActionResult EmptyList()
	{
		return View("Index", new List<Person>{});
	}

	[Description("People list page with 5 random people")]
	public ActionResult ListWithFivePeople()
	{
		return View("Index", new List<Person>
			{
				new Person
				{
					Name = "John Smith"
				},
				new Person
				{
					Name = "Betty Davis"
				},
				new Person
				{
					Name = "Steve Jobs"
				},
				new Person
				{
					Name = "Bill Gates"
				},
				new Person
				{
					Name = "John Carmack"
				},
			});
	}
}

This will work best if your model classes, or the data entity classes you're passing on to your views, are dumb, i.e. they don't try to do any database access when the view renders. If you already have your controllers in a separate assembly then it should be a relatively simple task to swap in your design-time ones and use them instead. If, however, you have the standard MVC setup of controllers, views and models all in the same project and assembly then things are a bit more difficult.

At the very least we want our design-time controllers in a separate folder of our project, away from the real ones. The issue with this is that the default MVC controller factory will find them there anyway. Thankfully we don't need to implement an entire new factory; we can hide them from the default one by simply breaking with the convention it uses to identify controllers, the easiest way being not naming them "...Controller".
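
For example, the design-time PeopleController from earlier could simply be renamed, at which point the default factory will no longer resolve it:

// Hidden from the default controller factory because the class
// name doesn't end in "Controller"; it still inherits from
// Controller so it works as normal once design mode wires it up
public class PeopleDesigner : Controller
{
	[Description("Empty people list page")]
	public ActionResult EmptyList()
	{
		return View("Index", new List<Person>());
	}
}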

Home page

A nice-to-have in this "design" mode would be a default page which shows a list of links to all the actions of the design-time controllers, with a description of what each represents. This would be particularly useful when handing the markup and CSS over to a third party to be styled up as it allows them to quickly access each variation of each screen. You'd end up with something like this:

  • Products
    • List products
    • Search products
    • View product
    • Product category
  • Basket
    • Empty
    • Full
    • Saved
  • My Account
    • Addresses
    • Billing details
  • Home
  • Contact us

Variations

In addition to each individual view, the design-time functionality could also allow for variations of these pages, e.g. logged in / logged out views, special offer views, user-customised views etc. Variations could be defined on an action, a controller or the whole site and, rather than defining the particular data in each of these cases, a transform function could be defined which is called before the view renders. This function could do work along the lines of setting IsAuthenticated booleans for the logged in / logged out case, and possibly more complex operations otherwise.

This would allow a wide variety of viewable pages to be created without needing to specifically define data in all those cases.

Proof of concept

I've put a quick proof of concept up on Github here:
https://github.com/dezfowler/MvcDesignMode

There's the main MvcDesignMode library and an example MVC app, based on the standard template site, which has a few design-time controllers named "...Designer" rather than "...Controller". When not in design mode this should prevent them ever being accidentally accessed, provided you're using the default controller factory. I have the code to enable design mode in Application_Start in Global.asax.cs and it looks like this:

bool designMode = Convert.ToBoolean(ConfigurationManager.AppSettings["DesignMode"]);
if (designMode)
{
	DesignMode.Activate(typeof(HomeController));
}
else
{
	AreaRegistration.RegisterAllAreas();
	RegisterRoutes(RouteTable.Routes);
}

Here I'm just using a boolean configuration setting in web.config to turn the mode on and off, but how you choose to do it is up to you. If design mode is activated, the standard application startup stuff is skipped, mainly because design mode uses a standard set of routes. Any links in your pages built using custom routes won't work correctly, but the point of design mode isn't to be able to navigate around the site as normal; it's that you can jump straight to a particular page in one click. I'm passing a type in to the Activate method simply to serve as a pointer to the assembly where my design-time controllers reside.

Once in design mode, the design-time controller factory hunts down the special controllers ending with "...Designer" and effectively indexes them, pulling out action method names and the text from any Description attribute defined on the methods. Using this index it builds up a special site map listing each controller and its action methods as links.
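
That indexing boils down to a bit of reflection. A rough sketch of the idea (simplified compared to the actual MvcDesignMode code) might look like this:

using System;
using System.ComponentModel;
using System.Linq;
using System.Reflection;
using System.Web.Mvc;

public static class DesignerIndex
{
	// Finds all "...Designer" controller classes in an assembly and pairs
	// each with the descriptions of its public action methods
	public static ILookup<Type, string> Build(Assembly assembly)
	{
		return assembly.GetTypes()
			.Where(t => typeof(Controller).IsAssignableFrom(t) && t.Name.EndsWith("Designer"))
			.SelectMany(t => t
				.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
				.Where(m => typeof(ActionResult).IsAssignableFrom(m.ReturnType))
				.Select(m => new
				{
					Type = t,
					Description = m.GetCustomAttributes(typeof(DescriptionAttribute), false)
						.Cast<DescriptionAttribute>()
						.Select(d => d.Description)
						.FirstOrDefault() ?? m.Name
				}))
			.ToLookup(x => x.Type, x => x.Description);
	}
}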

Conclusion

Have a look at the solution on Github or have a go at implementing something similar yourself. On a number of recent projects I could see a setup like this saving a lot of time and effort, not just for styling and markup but probably for developing simple JavaScript stuff as well. I'll definitely be using it myself in all my future MVC projects.

18 November 2010

Pretty print hex dump in LINQPad

Was messing around with byte arrays a lot in LINQPad this week and really wanted a pretty hex print of the contents of the array, so I wrote this:

public static object HexDump(byte[] data)
{
	return data
		.Select((b, i) => new { Byte = b, Index = i })
		// chunk the bytes into rows of 16
		.GroupBy(o => o.Index / 16)
		.Select(g => 
			g
			.Aggregate(
				// build the hex and character representations of each row side by side
				new { Hex = new StringBuilder(), Chars = new StringBuilder() },
				(a, o) => { a.Hex.AppendFormat("{0:X2} ", o.Byte); a.Chars.Append(Convert.ToChar(o.Byte)); return a; },
				a => new { Hex = a.Hex.ToString(), Chars = a.Chars.ToString() }
			)
		)
		.ToList()
		.Dump();
}

You use it like this:

byte[] text = Encoding.UTF8.GetBytes("The quick brown fox jumps over the lazy dog");

HexDump(text);

...and it will produce output akin to:

Hex                                              Chars
54 68 65 20 71 75 69 63 6B 20 62 72 6F 77 6E 20 The quick brown 
66 6F 78 20 6A 75 6D 70 73 20 6F 76 65 72 20 74 fox jumps over t
68 65 20 6C 61 7A 79 20 64 6F 67                he lazy dog

02 October 2010

The Null Object pattern and the Maybe monad

Dmitri Nesteruk’s recent post Chained null checks and the Maybe monad struck a chord with me as I had messed about with something similar for performing a visitor-esque operation. I’ve glanced at a few posts about monads in the past however this is the first time I’ve had a proper look at one of them.

The purpose of the Maybe monad is essentially to remove the need for null reference checking. If you try to perform some function on an object which turns out to be null you might get a null reference exception. If, however, you perform the function on a Maybe then if the object is null the function is never called. It’s particularly useful if you’re performing a long chain of functions on an object, any of which may return null. In these cases when the null is encountered the remainder of the chain is skipped resulting in more robust, better performing code.

The implementations in .NET that I could find vary quite widely.

One aspect shared by most of these implementations, and which was pointed out in the comments on Dmitri's post, is that they still end up doing all the null checking; it's just hidden away. They treat the "nothing" state as a value, effectively creating a Nullable<T> which wraps reference types, and then check HasValue at the beginning of each method call. I think a more elegant solution is to use the Null Object pattern.

A Null Object is a special inert type derived from our real class or a common base class. Each method is overridden by a version which has no effect. By wrapping any non-null objects we encounter in an instance of our real type, and any nulls in an instance of our inert type, we can continually call the methods of these types without fear of null reference exceptions. Moreover, once one of the method calls hands us the inert type, all subsequent calls are made on that type, so we don't need null checks at the beginning of our methods; the inert implementations simply have no effect.

Example

// Simple testing class
class Node
{
	public int Number { get; set; }
	public Node Parent { get; set; }
}


// Arrange
Node node = new Node
{
	Number = 1,
	Parent = new Node
	{
		Number = 2,
		Parent = new Node
		{
			Number = 3
		}
	}
};

// Act
var third = node.Maybe()
	.Apply(n => n.Parent)
	.Apply(n => n.Parent)
	.Return();

// Assert
Assert.IsNotNull(third);
Assert.AreEqual(3, third.Number);

Here we've got a simple test class and object graph, and our code is trying to return the grandparent of node. First we use the Maybe extension method to create the Maybe object; after this we're calling methods on the Maybe object itself. The Apply method behaves like a Map method: it applies the supplied Func to the subject of the Maybe, returning its result as a new Maybe object. Return then unwraps the Maybe and returns the subject object if there is one. If any of the functions applied to the Maybe object return null we'll end up with a null coming back from Return.

Implementation

The basic structure is an abstract Maybe class with two derived classes: ActualMaybe, which contains the real implementation, and NothingMaybe, which is the Null Object type. The implicit operator on Maybe is where any null is handled.

public abstract class Maybe<T> where T : class
{
	public static readonly Maybe<T> Nothing = new NothingMaybe<T>();
 
	public static implicit operator Maybe<T>(T t)
	{
		return t == null ? Nothing : new ActualMaybe<T>(t);
	}
}

class ActualMaybe<T> : Maybe<T> where T : class
{
	readonly T _t;
	public ActualMaybe(T t)
	{
		if (t == null) throw new ArgumentNullException("t");
		_t = t;
	}
}

class NothingMaybe<T> : Maybe<T> where T : class
{

}

The implementation for Apply is as follows:

// Maybe<T> 
public abstract Maybe<TResult> Apply<TResult>(Func<T, TResult> func) where TResult : class;

// ActualMaybe<T>
public override Maybe<TResult> Apply<TResult>(Func<T, TResult> func)
{
	return func(_t);
}

// NothingMaybe<T>
public override Maybe<TResult> Apply<TResult>(Func<T, TResult> func)
{
	return Maybe<TResult>.Nothing;
}

Apply takes the map function func which operates on the type T and returns some other type TResult. Apply itself returns the Maybe of TResult.

The ActualMaybe implementation simply calls func passing _t, which is the contained object, and returns the result of func. There is more going on here though: first, _t can't be null because of the check in the ActualMaybe constructor so we don't need a null check; second, we return whatever comes out of func, but because the method returns a Maybe of TResult the implicit cast takes place and any null coming out of func is replaced with Nothing.

The NothingMaybe implementation ignores func altogether and just returns a NothingMaybe of TResult using the static readonly Nothing field on Maybe<T>.

The ActualMaybe implementation of Return returns _t while the NothingMaybe implementation always returns null.
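
In code, that plus the Maybe() extension method used in the example (neither is shown above) might look like this; the extension simply leans on the implicit conversion to handle the null case:

// Maybe<T>
public abstract T Return();

// ActualMaybe<T>
public override T Return()
{
	return _t;
}

// NothingMaybe<T>
public override T Return()
{
	return null;
}

public static class MaybeExtensions
{
	// Wraps any reference type in a Maybe; the implicit operator
	// substitutes NothingMaybe when t is null
	public static Maybe<T> Maybe<T>(this T t) where T : class
	{
		return t;
	}
}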

I’ve implemented a couple of other useful methods including Do(Action<T>), If(Predicate<T>), Cast<TResult>() and AsEnumerable() as well as several overloads.

Possibilities

I think this Null Object approach could be combined with the Visitor pattern to achieve some extensibility although I’m not entirely sure how it would work or whether it would even be necessary.

Another possible extension is some kind of Collect method which would allow you to cherry pick particular objects from a graph and then would return an IEnumerable over just those objects at the end.

Code

I’ve put the code up on Github here:
http://github.com/dezfowler/Monads

18 September 2010

ASP.NET custom error not shown in Azure

If you're using a custom error handler in an ASP.NET Azure web role, for example to return a branded error page, you may find the custom page isn't surfaced to the browser and instead you receive a standard IIS error.

When your handler sets the response status relevant to the type of error, e.g. 404, 500 etc., the default web role configuration means the error page content you supply will not be passed on. This is complicated by the DevFabric not using the same configuration, i.e. your custom error page will appear as expected when you're testing in the DevFabric.

The configuration setting requiring a tweak is in the system.webServer section: setting the httpErrors element's existingResponse attribute to "PassThrough" will ensure that, if any content is supplied with the ASP.NET error response, it is returned to the browser.

<configuration>
  <system.webServer>
    <httpErrors existingResponse="PassThrough"/>
  </system.webServer>
</configuration>

28 August 2010

Aggregate full outer join in LINQ

I’ve recently been working on adding a feature to Rob Ashton’s AutoPoco project, a framework which enables dynamic creation of Plain Old CLR Object test data sets using realistic ranges of values. Rather than explicitly defining sets of objects in code, loading them from a database or deserializing them from a file the framework allows you to pre-define the make-up of the data set and then automatically generates the objects to meet your criteria.

I had a requirement that, from some sets of possible values for particular properties of a type, I needed to create an instance for every variation of those values. Defining all the variations manually would take a long time, be difficult to maintain and be error-prone. Dynamic generation seemed the way to go and, after checking with Rob whether this was already a feature of AutoPoco and finding out it wasn't, I proceeded to have a go at implementing a GetAllVariations method.

The principal problem here is that we need to perform an operation analogous to a SQL full outer join on n sets of values. For example, given the following type:

public class Blah
{
	public int Integer { get; set; }
	public string StringA { get; set; }
	public string StringB { get; set; }
}

and the possible values:

Integer: [ 1, 2, 3 ]
StringA: [ "hello", "world" ]
StringB: [ "foo", "bar" ]

the output should be 12 objects with the following property values:

#    Integer  StringA  StringB
1    1        hello    foo
2    1        hello    bar
3    1        world    foo
4    1        world    bar
5    2        hello    foo
6    2        hello    bar
7    2        world    foo
8    2        world    bar
9    3        hello    foo
10   3        hello    bar
11   3        world    foo
12   3        world    bar

Achieving this using LINQ

A full outer join can be performed in LINQ as follows:

var A = new List<object>
	{
		1, 
		2,
		3,
	};

var B = new List<object>
	{
		"hello",
		"world",
	};

A.Join(B, r => 0, r => 0, (a, b) => new List<object>{ a, b }).Dump();

Note: I’m using the LINQPad Dump() extension method here.

Fairly straightforward: we just set the join keys to zero, which forces a set to be produced where every value in A is joined to every value in B. Ordinarily the join result selector would create a new anonymous type but I'm creating a new List here for reasons that will become obvious in a second.

We don't know in advance how many sets of values we're going to have; the user may want to set values for two or twenty properties. We need to be able to perform this same join for n sets, so we'll be working with a collection of these value sets. We can achieve this by combining the join with an aggregate operation e.g.

List<List<object>> sources = new List<List<object>>
{
	new List<object>
	{
		1, 
		2,
		3,
	},
	new List<object>
	{
		"hello",
		"world",
	},
	new List<object>
	{
		"foo",
		"bar",
	},
};

sources.Aggregate(
 	Enumerable.Repeat(new List<object>(), 1),
	(a, d) => a.Join(d, r => 0, r => 0, (f, g) => new List<object>(f) { g })
).Dump();

Here sources could contain any number of List objects, and those List objects, containing the raw property values, can also contain any number of items. The output of the operation will be an enumeration over every variation of the values in sources, each represented as a List (in this case containing three items, one for each of the sources). We seed the Aggregate with what we expect to get out, i.e. an IEnumerable of List objects. Our aggregating function is our join operation with a slight modification: the result selector creates a new List containing the result of the previous join (f) and then uses the collection initializer syntax to add one additional item (g) from the current set of values being joined on.
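
Turning each of those rows back into a Blah object is then just a projection. A sketch, assuming the result of the Aggregate call is held in a variable called variations and that the order of the lists in sources matches the property order:

var blahs = variations
	.Select(row => new Blah
	{
		// items come out in the same order as the source lists
		Integer = (int)row[0],
		StringA = (string)row[1],
		StringB = (string)row[2],
	})
	.ToList();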

A relatively complex operation reduced to, effectively, a one-liner using LINQ. Snazzy.

22 August 2010

Roll your own mocks with RealProxy

These days there are more than enough mocking frameworks to choose from, but if you need something a bit different, or just fancy having a go at the problem as an exercise, creating your own is easier than you might think. You don't need to go anywhere near IL generation for certain tasks as there are a couple of types in the Framework which can get us most of the way on their own.

.NET 4.0 has the DynamicObject class which can be used for this, as it allows you to provide custom implementations for any method or property. However, there is another class, in the Framework since 1.1, that can be used in a similar way.

RealProxy is meant for creating proxy classes for remoting, but there's no reason why we can't make use of its proxy capabilities and forget the remoting part, instead providing our own mocking implementation. Let's look at a simple example.

If it looks like a duck but can't walk it's a lame duck

If you're using dependency injection and are writing your code defensively you'll probably have constructors which look something like this:

public MyClass(ISupplyConfiguration config, ISupplyDomainInfo domain, ISupplyUserData userRepository)
{
 if(config == null) throw new ArgumentNullException("config");
 if(domain == null) throw new ArgumentNullException("domain");
 if(userRepository == null) throw new ArgumentNullException("userRepository");
 // ...assignments...
}

The unit test for whether this constructor correctly throws ArgumentNullExceptions when it's expected to will require at least some implementation of ISupplyConfiguration and ISupplyDomainInfo in order to successfully test the last check for userRepository.

All we need here is something that looks like the correct interface; it needn't be a concrete implementation, or even work, because for these tests all we need is for it to not be null. Here's how we could achieve this with RealProxy and relatively little code.

First we create a class inheriting from the abstract RealProxy:

public class RubbishProxy : System.Runtime.Remoting.Proxies.RealProxy
{
 public RubbishProxy(Type type) : base(type) {}

 public override System.Runtime.Remoting.Messaging.IMessage Invoke(System.Runtime.Remoting.Messaging.IMessage msg)
 {
  throw new NotImplementedException();
 }

 /// <summary>
 /// Creates a transparent proxy for type <typeparamref name="T"/> and 
 /// returns it.
 /// </summary>
 /// <typeparam name="T"></typeparam>
 /// <returns></returns>
 public static T Make<T>()
 {
  return (T)new RubbishProxy(typeof(T)).GetTransparentProxy();
 }
}

That's all; effectively it's just the boilerplate implementation code for the abstract class, with one constructor specified and a static generic method added for ease of use. We can then use it in our test methods like so:

[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void ExampleRealWorldTest_EnsureExceptionOnNullConfig()
{
 var myClass = new MyClass(null, null, null);
}

[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void ExampleRealWorldTest_EnsureExceptionOnNullDomain()
{
 var config = RubbishProxy.Make<ISupplyConfiguration>();
 var myClass = new MyClass(config, null, null);
}

[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void ExampleRealWorldTest_EnsureExceptionOnNullRepository()
{
 var config = RubbishProxy.Make<ISupplyConfiguration>();
 var domain = RubbishProxy.Make<ISupplyDomainInfo>();
 var myClass = new MyClass(config, domain, null);
}

Not bad for one line of code. How about something more complex?

Making a mockery of testing

The Invoke method we overrode in RubbishProxy can perform any action we like, including checking arguments, returning values and throwing exceptions. In mocking frameworks the most common way of setting up this behaviour is a fluent interface e.g.

[Test]
public void ReadOnlyPropertyReturnsCorrectValue()
{
	var mock = new Mock<IBlah>();
	mock.When(o => o.ReadOnly).Return("thing");
	var blah = mock.Object;
	Assert.AreEqual("thing", blah.ReadOnly);
}

Here the When call captures o.ReadOnly as an expression, determines which member was the invocation target and returns a Call object. The Call object is then used to set up a return value, as in the example above, to check the passed arguments (CheckArguments) or to throw an exception (Throw). It can also be set up to ignore the call or, in the case of a method call, to apply any one of those behaviours only when particular arguments are passed in.
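
Behind a fluent API like this the Invoke override does the dispatching. The real implementation is in the LiteMock source linked below, but a minimal sketch of the idea, using a hypothetical map from method name to canned return value, might be:

using System;
using System.Collections.Generic;
using System.Runtime.Remoting.Messaging;
using System.Runtime.Remoting.Proxies;

public class CannedProxy : RealProxy
{
 // Hypothetical map from method name to canned return value,
 // populated by whatever fluent setup you build on top
 readonly Dictionary<string, object> _returnValues = new Dictionary<string, object>();

 public CannedProxy(Type type) : base(type) {}

 public void SetReturn(string methodName, object value)
 {
  _returnValues[methodName] = value;
 }

 public override IMessage Invoke(IMessage msg)
 {
  var call = (IMethodCallMessage)msg;
  object returnValue;
  if (_returnValues.TryGetValue(call.MethodName, out returnValue))
  {
   // Hand back the canned value as if the real method had run
   return new ReturnMessage(returnValue, null, 0, call.LogicalCallContext, call);
  }
  throw new NotImplementedException(call.MethodName);
 }
}

Note that property getters arrive as methods named "get_...", so a canned value for IBlah.ReadOnly would be registered against "get_ReadOnly".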

[Test]
[ExpectedException(typeof(ForcedException))]
public void MethodCallThrows()
{
	var mock = new Mock<IBlah>();
	mock.When(o => o.GetThing()).Throw();
	var blah = mock.Object;
	int i = blah.GetThing();
}

[Test]
public void MethodCallValid()
{
	var mock = new Mock<IBlah>();
	mock.When(o => o.DoThing(5)).CheckArguments();
	var blah = mock.Object;
	blah.DoThing(5);
}

[Test]
[ExpectedException(typeof(MockException))]
public void MethodCallInvalid()
{
	var mock = new Mock<IBlah>();
	mock.When(o => o.DoThing(5)).CheckArguments();
	var blah = mock.Object;
	blah.DoThing(4);
}

Source code for the example mock framework is up on GitHub here:
http://github.com/dezfowler/LiteMock

11 August 2010

Model binding and localization in ASP.NET MVC2

When creating an MVC site catering for different cultures, one option for persisting the culture value from one page to the next is to use an extra route value containing some form of identifier for the locale e.g.

/en-gb/Home/Index
/en-us/Cart/Checkout
/it-it/Product/Detail/1234

Here I'm just using the Windows standard culture names based on RFC 4646, but you could use some other standard or your own custom codes. This method doesn't rely on sessions or cookies and also has the advantage that the site can be spidered in each supported language.

Creating a base controller class for your site allows you to override one of its methods in order to set the current culture. For example, if you amend your route configuration to "{locale}/{controller}/{action}/{id}" you could do the following:

string locale = RouteData.GetRequiredString("locale");
CultureInfo culture = CultureInfo.CreateSpecificCulture(locale);
Thread.CurrentThread.CurrentCulture = culture;
Thread.CurrentThread.CurrentUICulture = culture;
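
For reference, the amended route registration (inside RegisterRoutes) might look something like this, with the default values being illustrative:

routes.MapRoute(
    "Default",
    "{locale}/{controller}/{action}/{id}",
    new { locale = "en-gb", controller = "Home", action = "Index", id = UrlParameter.Optional }
);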

It's important to set both CurrentCulture and CurrentUICulture as ResourceManager, used for retrieving values from localized .resx files, refers to CurrentUICulture whereas most other formatting routines use CurrentCulture.

Once our culture is set, when we output values in our views ResourceManager can pick up our culture-specific text translations from the correct .resx file, and dates and currency values will be correctly formatted. String.Format("{0:d}", DateTime.Now), with "d" being the format string for a short date, will produce mm/dd/yyyy for en-US versus dd/mm/yyyy for en-GB.
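
A quick way to see the difference (short date patterns as shipped with Windows):

var date = new DateTime(2010, 1, 22);
date.ToString("d", CultureInfo.GetCultureInfo("en-US")); // "1/22/2010"
date.ToString("d", CultureInfo.GetCultureInfo("en-GB")); // "22/01/2010"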

This isn't the end of the story however; the problem arises of where in the controller to perform the culture setting. It can't happen in the constructor because the route data isn't yet available, so instead we could put it in an override of OnActionExecuting. This will seem to work fine for values output in your views, but you'll come across a gotcha with model binding. Create a textbox in a form which binds to a DateTime and you'll end up with the string value being parsed using the default culture of the server. Take the US and UK dates example, where your server's default culture is US but your site is currently set to UK: if you enter a date of 22/01/2010 you'll get a model validation error because it's being parsed as the US mm/dd/yyyy and 22 isn't a valid value for the month. Model binding happens before OnActionExecuting, so that's no good.

A bit of digging around in Reflector and the Initialize method comes out as probably the best candidate for this, as it's where the controller first receives route data and it occurs before model binding. We end up with something like this (exception handling omitted for brevity):

protected override void Initialize(RequestContext requestContext) 
{
    base.Initialize(requestContext);
    string locale = RouteData.GetRequiredString("locale");
    CultureInfo culture = CultureInfo.CreateSpecificCulture(locale);
    Thread.CurrentThread.CurrentCulture = culture;
    Thread.CurrentThread.CurrentUICulture = culture;
 }

Both model binding and output of values will now be using the correct culture.

18 July 2010

Creating a light-weight visitor, fluently in C#

In object-oriented programming a common problem is performing some conditional logic based on the type of an object at run-time. For example, one form you may come across is:

public void DoStuff(MemberInfo memberInfo)
{
 EventInfo eventInfo = memberInfo as EventInfo;
 if(eventInfo != null)
 {
  // do something
  return;
 }

 MethodInfo methodInfo = memberInfo as MethodInfo;
 if(methodInfo != null)
 {
  // do something
  return;
 }

 PropertyInfo propertyInfo = memberInfo as PropertyInfo;
 if(propertyInfo != null)
 {
  // do something
  return;
 }

 throw new Exception("Not supported.");
}

Drawbacks to this are that you have to wrap the whole thing in a method to make use of the "bomb out" return statements, and that it involves quite a lot of code repetition which, as I've talked about previously, I'm not a fan of. Another example is a dictionary type->operation lookup:

// set up some type to operation mappings
static readonly Dictionary<Type, Action<MemberInfo>> operations = new Dictionary<Type, Action<MemberInfo>>();

// probably inside the static constructor...
operations.Add(typeof(EventInfo), memberInfo => 
{
 EventInfo eventInfo = (EventInfo)memberInfo;
 // do something 
});
operations.Add(typeof(MethodInfo), memberInfo =>
{
 MethodInfo methodInfo = (MethodInfo)memberInfo;
 // do something
});
operations.Add(typeof(PropertyInfo), memberInfo =>
{
 PropertyInfo propertyInfo = (PropertyInfo)memberInfo;
 // do something
});

// use it like this...
Type type = memberInfo.GetType();
Type matchingType = operations.Keys.FirstOrDefault(t => t.IsAssignableFrom(type));
if(matchingType != null)
{
 operations[matchingType](memberInfo);
}

The major drawback with this method is that you have to use IsAssignableFrom, otherwise it doesn't match inherited types. In fact, the above example doesn't work if you just look up the type of memberInfo directly because we'll get types derived from EventInfo etc., not those types themselves. We also still need to cast to the type we want to work with ourselves, and enumerating the dictionary keys isn't ideal from a performance point of view.

The GoF pattern for solving this is the Visitor, which I've blogged about in the past; however, it's rather heavy-duty, especially if your "do something" is only one line. It is much more performant than the alternatives though, as it uses low-level logic inside the run-time to decide which method to call, so that should be a consideration.

The next best alternative to the proper visitor is the first ...as...if...return... form, but we can wrap it up quite nicely with a couple of extension methods to cut down on the amount of code we have to write. Here's a trivial example trying to retrieve the parameters for either a method or a property. Depending on the type we need to call a different method, so we identify that method using a fluent visitor:

private Type[] GetParamTypes(MemberInfo memberInfo)
{
 Func<ParameterInfo[]> paramGetter = null;

 memberInfo
  .As<MethodInfo>(method => paramGetter = method.GetParameters)
  .As<PropertyInfo>(property => paramGetter = property.GetIndexParameters)
  .As<Object>(o => { throw new Exception("Unsupported member type."); });

 return paramGetter().Select(pi => pi.ParameterType).ToArray();
}

The As extension attempts to cast "this" as the type specified by the type parameter T and, if successful, calls the supplied delegate. The overload used in the example above will skip the remaining As calls once one has been successful. There is a second overload which takes a Func<T, bool> rather than an Action<T> and will continue on to the next As if false is returned from the Func. The last As call, by specifying Object as the type, is a catch-all and allows providing a default implementation or catering for an error case as shown above. The extensions are implemented like so:

/// <summary>
/// Tries to cast an object as type <typeparamref name="T"/> and if successful 
/// calls <paramref name="operation"/>, passing it in.
/// </summary>
/// <typeparam name="T">Type to attempt to cast <paramref name="o"/> as</typeparam>
/// <param name="o"></param>
/// <param name="operation">Operation to be performed if cast is successful</param>
/// <returns>
/// Null if the object cast was successful, 
/// otherwise returns the object for chaining purposes.
/// </returns>
public static object As<T>(this object o, Action<T> operation)
 where T : class
{
 return o.As<T>(obj => { operation(obj); return true; });
}

/// <summary>
/// Tries to cast an object as type <typeparamref name="T"/> and if successful 
/// calls <paramref name="operation"/>, passing it in.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="o"></param>
/// <param name="operation">Operation to be performed if cast is successful, must return 
/// a boolean indicating whether the object was handled.</param>
/// <returns>
/// Null if the object cast was successful and <paramref name="operation"/> returned true, 
/// otherwise returns the object for chaining purposes.
/// </returns>
public static object As<T>(this object o, Func<T, bool> operation)
 where T : class
{
 if (Object.ReferenceEquals(o, null)) return null;

 T t = o as T;
 if (!Object.ReferenceEquals(t, null))
 {
  if (operation(t)) return null;
 }
 return o;
}

14 July 2010

UTC gotchas in .NET and SQL Server

After doing some work with DateTime recently I stumbled across the interesting behaviour that a DateTime whose Kind is DateTimeKind.Unspecified will be treated as DateTimeKind.Local whenever you perform some operation upon it. You get an "unspecified" DateTime whenever you don't explicitly say it is Utc or Local. This makes sense because, when you do the following, in most cases what you intended was to use local time:

DateTime d1 = new DateTime(2010, 07, 01, 12, 0, 0, 0);

If the current timezone is UTC +01:00 here's what I get when working with the DateTime created above:

d1.Kind; // => Unspecified
d1; // => 01/07/2010 12:00:00
d1.ToUniversalTime(); // => 01/07/2010 11:00:00
TimeZoneInfo.Local.GetUtcOffset(d1); // => 01:00:00

Note it has applied an offset when calculating the UTC value which, as we can see from the last line, is +1 hour.

If what we actually wanted was a UTC time we need to explicitly specify the kind e.g.

DateTime d2 = new DateTime(2010, 07, 01, 12, 0, 0, 0, DateTimeKind.Utc);
// or simply
DateTime d2 = DateTime.UtcNow;

If you need to work with timezones other than UTC or the system timezone then you'll want to use DateTimeOffset rather than DateTime.

SQL Server and SqlDataReader

Another interesting gotcha arising from this is that the SQL Server datetime data type is also timezone agnostic. Any datetime values retrieved through the SqlDataReader will come back as an "unspecified" kind DateTime. This means that, even if you're correctly using DateTime.UtcNow in C# or GETUTCDATE() in SQL to produce the values in the database, when you retrieve them they will be shifted incorrectly according to the local timezone. Yikes!

There are two ways to deal with this.

DateTime.SpecifyKind()

The first is in C# using DateTime.SpecifyKind():

DateTime d3 = DateTime.SpecifyKind(d1, DateTimeKind.Utc);
d3.Kind; // => Utc
d3; // => 01/07/2010 12:00:00
d3.ToUniversalTime(); // => 01/07/2010 12:00:00

Which could be wrapped up in an extension method for ease of use e.g.

public static class SqlDataReaderExtensions
{
 public static DateTime GetDateTimeUtc(this SqlDataReader reader, string name)
 {
  int fieldOrdinal = reader.GetOrdinal(name);
  DateTime unspecified = reader.GetDateTime(fieldOrdinal);
  return DateTime.SpecifyKind(unspecified, DateTimeKind.Utc);
 }
}

SQL Server 2008 datetimeoffset

If you're using SQL Server 2008 you have the option of using the datetimeoffset data type instead. This will store the +00:00 timezone internally and the SqlDataReader will then retrieve the value correctly as a DateTimeOffset. No need to muck about with Kind.
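
Reading one back is then straightforward (column name hypothetical):

// The reader hands back a DateTimeOffset, so no mucking about with Kind
DateTimeOffset created = (DateTimeOffset)reader["CreatedOn"];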

If you have an existing database using datetime you can CAST these as a datetimeoffset in your query which usefully uses an offset of +00:00 in this case. (It treats "unspecified" as UTC – tut!)

31 May 2010

JavaScript-style Substring in C#

One thing that really bugs me when writing code is having to use unnecessary extra constructs to avoid exceptions or useless default values emerging. One such situation is trimming a string to a particular length e.g.

string sentence = "The quick brown fox jumps over the lazy dog";
string firstFifty = sentence.Substring(0, 50);

I want the first 50 characters from the sentence but in this example we get an ArgumentOutOfRangeException because there aren’t 50 characters in sentence. Not too helpful and it's an easy mistake to make. To avoid the exception we have to do this:

firstFifty = sentence.Length < 50 ? sentence : sentence.Substring(0, 50);

Yikes! That’s a lot of extra rubbish when all I want is the equivalent of LEFT(sentence, 50) in SQL.

We can easily wrap this up in a "Left" method but chances are we’re going to need a “Right” too so instead we can go down the route JavaScript takes with its "slice" function. JavaScript’s string slice can take one integer argument which, if positive, returns characters from the start of the string and, if negative, returns characters from the end of the string. Adding an overload to allow it to take a padding character is probably sensible too. The end result looks like this:

firstFifty = sentence.Slice(50);
// "The quick brown fox jumps over the lazy dog"
	
string firstTen = sentence.Slice(10);
// "The quick "

string lastTen = sentence.Slice(-10);
// "e lazy dog"

firstFifty = sentence.Slice(50, '=');
// "The quick brown fox jumps over the lazy dog=============="

string lastFifty = sentence.Slice(-50, '=');
// "==============The quick brown fox jumps over the lazy dog"

A lot more concise and quite useful.

public static class StringExtensions
{
   /// <summary>
   /// Returns a portion of the String value. If value has Length longer than 
   /// maxLength then it is trimmed otherwise value is simply returned.
   /// </summary>
   /// <returns>
   /// String whose Length will be at most equal to maxLength.
   /// </returns>
   public static string Slice(this string value, int maxLength)
   {
      if (value == null) throw new ArgumentNullException("value");
      
      int start = 0;
      if (maxLength < 0)
      {
         start = value.Length + maxLength;
         maxLength = Math.Abs(maxLength);
      }
      return value.Length < maxLength ? value : value.Substring(start, maxLength);
   }
   
   /// <summary>
   /// Returns a portion of the String value. If value has Length longer than 
   /// length then it is trimmed otherwise value is padded to length with 
   /// shortfallPaddingChar.
   /// </summary>
   /// <returns>
   /// String whose Length will be equal to length.
   /// </returns>
   public static string Slice(this string value, int length, char shortfallPaddingChar)
   {
      if (value == null) throw new ArgumentNullException("value");
      
      string part = value.Slice(length);
      int abslen = Math.Abs(length);
      if(abslen > part.Length)
      {
         part = length < 0 ? part.PadLeft(abslen, shortfallPaddingChar) : part.PadRight(abslen, shortfallPaddingChar);
      }
      return part;
   }
}

25 May 2010

Silverlight 3 Behavior causing XAML error

A recent XAML error I received from a Silverlight Behavior had me going round in circles trying to find the cause for quite a while. I was getting an AG_E_PARSER_BAD_PROPERTY_VALUE in code similar to the following:

<Canvas x:Name="Blah">
   <i:Interaction.Behaviors>
      <myapp:SpecialBehavior Source="{Binding SomeProperty}" />
   </i:Interaction.Behaviors>
   ...
</Canvas>

The error identified the myapp:SpecialBehavior line as the culprit but didn't give me any further information, so I proceeded to try and debug the binding to see what was going wrong. This didn't shed any light on the cause; the binding was being created fine – the error was occurring later on.

This had me stumped for a couple of hours – I even tried setting up Framework source stepping only to find that the Silverlight 3 symbols weren’t yet available. In the end I stumbled upon the answer by chance – looking at the Canvas class in Reflector I noticed that it didn’t inherit from Control, only FrameworkElement via Panel. A quick check of my Behavior code and I found this:

public class SpecialBehavior : Behavior<Control>

It was the Behavior itself that was invalid in the Interaction.Behaviors property due to the incompatible type parameter. I changed Control to FrameworkElement and everything started working fine.

16 May 2010

Running UI operations sequentially in Silverlight

I've been playing around with Silverlight recently and have come across the requirement of needing to wait for the UI to do something before continuing. For example, I have a UI with elements such as an image and text bound to properties of a model object. When the model object changes the interface updates to reflect this change, but I need to perform an "unload" transition just before the model changes and a "load" transition just after it changes.

Instead of this:

[diagram "before": the model changing and the UI updating immediately, with no transitions]

I want this:

[diagram "after": the unload transition completing, then the model change, then the load transition]

The orange arrows in the diagrams represent the transitions.

I considered having BeforeChange and AfterChange events, hooking my transition storyboards up to them and then firing them in the model setter. The trouble with this is that the storyboards play asynchronously, so as soon as the BeforeChange one starts our code will have moved on and fired the AfterChange one. The result is that we'd never see the "before" transition, which would ruin the whole effect.

Mike Taulty posted about this same issue in 2008, highlighting that, to achieve the correct result, we end up needing to chain our code together using the Completed events of our storyboards. His solution used some classes to wrap this up and I've taken a similar approach, except that I have the sequence defined fluently and have included the option of using visual states rather than explicitly defined storyboards.

private Album CurrentAlbum
{
   get
   {
      return this.DataContext as Album;
   }
   set 
   {
      if (CurrentAlbum != value)
      {
         new Sequence()
            .GoTo(this, LayoutRoot, "VisualStateGroup", "AlbumUnloaded")
            .Execute(() =>
            {
               this.DataContext = value;
            })
            .GoTo(this, LayoutRoot, "VisualStateGroup", "AlbumLoaded")
            .Run();		
      }
   }
}

It ends up being a lot quicker to write the code and I think it's quite obvious by reading it what will happen. If the visual state group or states aren't defined then only the inner assignment occurs.

The source for the Sequence class is a bit big for this post so the gist is here: Sequence.cs 

Considerations

Deferred execution
The storyboard or visual state change Completed event we're waiting for may never happen - do we try to execute the next steps anyway? I've taken the approach of firing off the next step in the destructor of the class, however it may make more sense to set some arbitrary timeout so that if the transition hasn't completed after, say, 10 seconds we fire off the next step anyway.
Reuse
Should we allow a sequence to be created once and then reused many times? We could have an overload of Run() that takes a context object and passes it on to each of the steps, though we could run into issues with people using closures as I do in the example. I've stuck with single use in the class, throwing an exception if Run() is called a second time.

22 February 2010

DLNA in the real world

In my very first post of 2007 I talked about the promise of DLNA and that with compliant devices you could enjoy freedom to consume your digital media wherever or however you wanted. Unfortunately, as I have experienced first hand, the reality is far from the convenient ideal that DLNA professes to provide.

I currently have a Linksys Media Hub NMH410 and a Sony Bravia 32V5500 television connected to my network. Both are DLNA-compliant devices and support streaming pictures, audio and video over the network. The Media Hub runs Twonky Media Server which, to all intents and purposes, is the reference implementation of a DLNA server. The Bravia has a DLNA compatible renderer, similar to the PS3 which can also stream content over a network.

The trouble is that, while I can view photos from the Media Hub on the TV fine, both audio and video fail to work. I can browse audio files on the Media Hub however only the first 20 seconds of each file will play and after that I get a "Playback not available" error. Video files on the other hand are a non-starter and can't be browsed or played.

I suspected my network or the format of files I was trying to stream may have been to blame but after a (lengthy) process of elimination I took both out of the equation. Installing Twonky Media Manager on my PC, which acts as both a server and client, I was able to stream all the content types off the Media Hub fine and was also able to stream content to the TV fine. It was just the particular version of Twonky on the Media Hub and the renderer in the TV that didn't like each other.

Threads on a fair few forums across the net show that lots of folk are experiencing the same issues.

I found this all very odd as surely the DLNA badge on both devices ensures that they're compatible and will have no trouble streaming the content. This is apparently not the case, and Sony themselves confirmed it to me by e-mail, saying:

Unfortunately the KDL-32V5500 Media compatibility results with DLNA Media servers are as follows for Twonky Media 1.0.0.115 you can play back JPG formats. This information is the result of tests performed in Sony laboratories.

In other words, their DLNA compliance for that particular server only extends to pictures. Quite how they can call this compliance I don't know, but luckily they have a nice get-out clause:

Unfortunately the Twonky Media software versions you are currently using are not guaranteed to work, as third party software developers are susceptible to modify the features of their software.

So basically, "even though we're DLNA-compliant and you're using a DLNA-compliant server we give you no guarantees they'll work together whatsoever". If this is the way companies behave with their DLNA certification, if they can just get around any non-compliance issues by saying "it's the other guy's problem" then one really does wonder what the point of DLNA is.

16 February 2010

SQL Server Error Severity and .NET

I actually wrote this almost a year ago but forgot to post it.

The other day I noticed some oddness in the Messages window of SQL Server Management Studio when using RAISERROR with different error severities. Running the following:

PRINT 'Start.'
RAISERROR (N'Error.', 16, 1)
PRINT 'End.'

produces...

Start.
Msg 50000, Level 16, State 1, Line 2
Error.
End.

However if you increase the severity to 17 or 18 you get this...

Start.
End.
Msg 50000, Level 18, State 1, Line 2
Error.

Odd, as the error message has now moved from between the "Start" and "End" to after the "End". Wondering what the significance of the change from severity 16 to 17 was, and why Management Studio should treat them differently, I headed over to SQL Server Books Online, which says:

  • 0-10 are informational messages
  • 11-16 are errors that can be corrected by the user
  • 17-19 are application errors that the user can't correct
  • 20 and above are fatal errors

So there is a difference but that still doesn't explain Management Studio's behaviour. Management Studio uses the .NET SqlClient for running queries so a brief look at the docs shows the SqlConnection class has a FireInfoMessageEventOnUserErrors property which, when set to true, reports any errors of severity less than 17 to the InfoMessage handler rather than throwing a SqlException.

I've put together a quick Snippet Compiler script to test this out which you can download here. The script has a connection string at the top which looks for a local SQL Express instance with integrated security by default, so you may need to amend this.
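
The essential wiring looks something like this (connection string and query illustrative):

using System;
using System.Data.SqlClient;

class InfoMessageDemo
{
	static void Main()
	{
		using (var conn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Integrated Security=true"))
		{
			// Route errors of severity less than 17 to InfoMessage instead of throwing
			conn.FireInfoMessageEventOnUserErrors = true;
			conn.InfoMessage += (sender, e) => Console.WriteLine("Info message fired: " + e.Message);
			conn.Open();

			using (var cmd = new SqlCommand("PRINT 'Start.' RAISERROR (N'Error.', 16, 1) PRINT 'End.'", conn))
			{
				cmd.ExecuteNonQuery();
			}
		}
	}
}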

Results

Queries 3 and 4

These two queries show the differences observed in Management Studio quite nicely with the InfoMessage handler being fired for the severity 16 error but a SqlException being thrown for the severity 18.

-------------------------------------
Executing Query 3:


PRINT 'Start.'
RAISERROR (N'Error.', 16, 1)
PRINT 'End.'


Messages:

Info message fired: Start.
Info message fired: Error.
Info message fired: End.


Result:
Success.
-------------------------------------
Executing Query 4:


PRINT 'Start.'
RAISERROR (N'Error.', 18, 1)
PRINT 'End.'


Messages:

Info message fired: Start.
Info message fired: End.


Result:
SqlException.
Error severity: 18
Message: Error.
-------------------------------------

Query 5

Query 5 causing a severity 20 error (a fatal error) displays some slightly different behaviour as it both fires the InfoMessage handler and throws a SqlException.

-------------------------------------
Executing Query 5:


PRINT 'Start.'
RAISERROR (N'Error.', 20, 1) WITH LOG
PRINT 'End.'


Messages:

Info message fired: Start.
Info message fired: Process ID 51 has raised user error 50000, severity 20. SQL
Server is terminating this process.


Result:
SqlException.
Error severity: 20
Message: Error.
A severe error occurred on the current command.  The results, if any, should be
discarded.
-------------------------------------

Queries 6, 7 and 8

These show how the RAISERROR interacts with a SQL TRY...CATCH block. The RAISERROR within the TRY block only fires the InfoMessage handler in the severity 20 case with the RAISERROR in the CATCH block only firing the handler in the severity 16 case.

Conclusion

It seems like you could build some quite nice fine-grained logging into your SQL statements, stored procs etc. by hooking up a SqlConnection InfoMessage handler and setting FireInfoMessageEventOnUserErrors to true. What's more, you could write this information out to your application's log file along with the Debug or Trace calls from your code. Considering you may have quite a lot of logic in a stored procedure, it could turn out to be really helpful having all your debugging information in one place.

13 February 2010

Windows Update KB977165 causes BSoD

Had fun and games with Kath's laptop last night as it had mysteriously started blue screening with a PAGE_FAULT_IN_NONPAGED_AREA error on startup. Safe mode was also affected with the boot getting as far as mup.sys and then failing.

She's running Windows XP dual-boot on a MacBook however OS X was unaffected so a hardware issue seemed unlikely. I suspected a recent Windows Update may be to blame although I've never come across such a severe failure resulting from an update before.

After a quick Google search (using Safari) I found this thread on the Microsoft forums entitled "BLUE SCREEN, UNABLE TO BOOT AFTER WINDOWS XP UPDATE TODAY". The thread currently has 336 replies and 132,262 views so it's a fair bet a lot of people's machines have been affected.

Reading the replies to the post it seems only XP and Vista are affected by the BSoD, although the patch applied to all recent versions of Windows. Some replies speculate that it may be down to a previous virus infection, which Kath's laptop has had, so this could well be the case.

The culprit is one of Tuesday 9th February's Windows Updates, KB977165, and you need to boot into Recovery Console using an XP CD and run some commands in order to uninstall it - full details are in Kevin Hau's answer in the thread.

That's fine for people with an XP CD but pre-installed shop-bought PCs don't normally come with one and Netbooks don't even have a CD drive. Anyone without access to an XP CD will have to either buy a copy or consider upgrading to Windows 7. I suppose any bad publicity Microsoft may get from this won't concern them as they'll be busy watching their Win 7 uptake figures get a healthy bump this month. Of course, maybe people will decide to go with OS X or Ubuntu instead.
