27 December 2013

Windows 8.1 on high DPI

I’ve been working with Windows 8.1 on a Dell XPS 15 for about eight weeks now and I thought I’d share some of my experience of working with display scaling, as the Dell has a 3200x1800 display.

Being what Apple would term “Retina”, the display has a pixel density of almost 250 PPI, which is matched only by a handful of other Windows laptops at the moment. Until recently the limiting factor in this area has been that desktop operating systems have expected a display’s resolution to scale more or less linearly with its size meaning the pixels per inch didn’t change a great deal.

Using fonts as an example, 10 point text should be about 10/72 of an inch, or 3.5mm, high (1 point = 1/72 inch). Windows, by default, renders 10 point text about 13 pixels high which, if you do the math, assumes a PPI of 96. Some background on where this 96 comes from can be found in this MSDN article. In the case of printers, when you print 10 point text you get text that is 3.5mm high regardless of the printer’s DPI – the higher the DPI the crisper the text will appear, but the characters will be the same size. The same is not true for displays, however. This hasn’t been much of a problem up until now because average pixel density has sat between about 90 and 120 PPI, but now that we’re nearer to 250 pixels per inch that same 10 point text is only just over 1mm high, which is essentially impossible to read.
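As a quick back-of-the-envelope check of those numbers (my own sketch, not anything Windows does internally):

double pixels = 10.0 / 72 * 96;          // 10pt laid out at 96 PPI ≈ 13.3 pixels
double mmAt96 = pixels / 96 * 25.4;      // ≈ 3.5 mm on a 96 PPI panel
double mmAt250 = pixels / 250 * 25.4;    // ≈ 1.4 mm on a 250 PPI panel – far too small
double pixelsFor250 = 10.0 / 72 * 250;   // ≈ 34.7 pixels needed to get back to ~3.5 mm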

Obviously with high DPI displays some solution is needed for this, and for the sensible scaling of other elements on screen, so that we can have nice crisp visuals that aren’t comically tiny. The operating systems are finally catching up and Windows 8.1 includes some usable scaling options for a high DPI display, but it’s fair to say that, yet again, Apple have led the charge in this department with their Retina options in OS X.

In Windows 8.1 the scaling options are 100%, 125%, 150% and 200%. Set at 200% on the XPS 15, for example, this renders things like text at the size you would see on a 1600x900 display. The scaling happens differently depending on the application. For most classic desktop applications such as Chrome it simply does a crude resize – essentially rendering at 1600x900 and then blowing the image up, so you get a lot of pixelation and rough edges. For “Metro” apps and some desktop apps the scaling factor is passed to the app, which scales the sizes of the UI elements as appropriate but renders them using the full resolution of the display.

It’s reasonable but unfortunately far from perfect: there are still a lot of visual elements which don’t scale quite right, and every so often you encounter some custom rendered dialog that isn’t scaled at all and have to break out the magnifier tool.

Another oddity, which may be exclusive to the drivers for the XPS 15, is that coming out of standby mode loses the scaling option. It switches back to 100% scaling and you have to switch to external display mode and back to force it to pick up the scaling again.

Hopefully things will improve with updates to applications and subsequent revisions of Windows.

05 October 2012

NFC payments - it's not for you!

NFC payment terminals are becoming more common and all the credit/debit cards in my wallet have supported NFC for about six months, which is great as it's much more convenient, especially for buying a coffee or lunch.

NFC and, by extension, RFID are nothing new - I think I first saw a dog getting an RFID implant on Blue Peter in the early 90s, and next year NFC will have run the London Underground for 10 years in the form of the Oyster card. It's taken a long time for the banks to warm to this technology - maybe because there are a lot of security protocols to be determined and a lot of liability sums to be calculated.

I've had a Google Nexus S for about 18 months which was, from what I've read, the first NFC-enabled handset available in the UK. When I bought it Google were yet to release Wallet, their NFC payment app for Android, but there weren't many NFC payment terminals around anyway so it wasn't that big a deal.

The Wallet logo is quite a clever echo of the NFC payment logo

Wallet has since been released in the US and is supported by all the major credit card companies but that's where the good news ends. It seems Google have deals with particular networks, Sprint being the main one, meaning that even if you have an NFC-enabled Android handset you can only use Wallet if you're on one of the approved networks. What's worse is that it isn't available in the UK and there's no word from Google on when or if it will be.

What is particularly odd is that the Nexus 7 has no such restriction. I can only assume this is because it has no GSM modem so there is no deal to be made with a mobile network. This is particularly frustrating because I can see that, if you buy a phone from a particular carrier and that carrier doesn't have a deal with Google, you won't get Wallet but I bought my Nexus S SIM free from Carphone Warehouse so the phone itself has no network affiliation and yet I still can't use Wallet.

What is quite interesting and may shine some light on the whole delay in Wallet getting to the UK is the release of Quick Tap from Barclaycard and Orange. Although Orange sell 10 NFC-enabled handsets only 2 of them are "Quick Tap ready", both of which happen to be the Galaxy SIII, probably their most popular and expensive handset apart from the iPhone. I doubt there's technically anything special about the SIII that means it can be used for payments where the other handsets can't - all the others are cheaper so my guess is it's entirely about forcing people to buy a more expensive handset.

If the other UK networks and card companies are doing similar deals it's no wonder a service like Wallet is unavailable as there is money to be made and phones to be sold. All in all it's pretty rubbish for the early adopter and the consumer in general.

Surely the fact that a phone is PIN protected and the NFC is not always on actually makes it a more secure way of implementing NFC payments. People can't skim your phone the way they can the cards in your actual wallet.

Guess I'll just have to wait and see where this farcical endeavour goes. In the meantime I'll look forward to an Oyster app (which would be pretty ace) and scanning some NFC business cards, I suppose. Whoopee!

20 March 2012

Interfaces and IoC

If you want to use inversion of control, unit testing and adhere to SOLID principles in your C# code, this often means you have a lot of interfaces. Core considerations when dealing with interfaces are things like:

  • Where should the interface be defined – alongside the main implementation or in a separate assembly?
  • Should the interface be generic or not?
  • Am I breaking the interface segregation principle?

The one that sometimes falls by the wayside is:

  • Does the interface definition match my intended usage?

Example

A trivial example of this might be where you have a database containing a Log table of messages from an application where each has an ID of some kind, type, source, message and date/time recorded. The interface for the data access to this table might be:

public interface ILogRepository { IEnumerable<Log> GetLogs(); }

Innocuous enough, but what if all our usages of this interface and method require that the resulting IEnumerable is ordered by the recorded time of the log message? IEnumerable alone doesn’t guarantee anything about the order, and reordering the output at each point of use would be very inefficient, not to mention that the database would likely be a much better place to perform the ordering.

Attempt 1 – Be more descriptive

The simplest option is simply to bake the ordering information in to the interface definition e.g.

public interface ILogRepository { IEnumerable<Log> GetLogsOrderedByDate(); }

This way we are clear at the point of implementation and the point of use about what the ordering of the items should be. Of course, renaming a method still doesn’t guarantee the result will be ordered correctly but at least if an ordering is missing you have the additional information in the definition about what the correct order should be.

The major problem with this option is that we head towards violating the Open/Closed Principle, which says our API should be open for extension but closed for modification. If we need to change the order log items are returned in then we either have to rename the method (violating OCP) or add a new method which specifies a different ordering, potentially making the original completely redundant in the codebase.

Attempt 2 – Expose IQueryable instead

Another option is to swap from using IEnumerable to using IQueryable and allow the calling code to specify its own ordering e.g.

public interface ILogRepository { IQueryable<Log> GetLogs(); }

...

var logs = logRepo.GetLogs().OrderBy(l => l.DateTime);

This method would be more efficient, always performing the ordering in the database, but with this option we have to repeat the OrderBy part at every point of use to ensure our ordering will be correct. This gives us flexibility but isn’t particularly DRY and may be difficult to change.

It’s also somewhat of a leaky abstraction as we’re spilling data access innards into our other layers and losing control of the queries being executed on our database – calling code can do more with IQueryable than specify an order, which may not be desirable.
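To illustrate the leak, there’s nothing stopping calling code from doing something like this (an example of my own, assuming Log has string Type and Source properties as described earlier):

// Business-layer code quietly accumulating database query logic
var noisySources = logRepo.GetLogs()
   .Where(l => l.Type == "Error")
   .GroupBy(l => l.Source)
   .Where(g => g.Count() > 100)
   .Select(g => g.Key)
   .ToList();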

Attempt 3 – Allow ordering to be passed in

This is somewhat similar to option 2 however by allowing order to be passed in we can use a specified default ordering while also giving the calling code the ability to override it if necessary without exposing the all-powerful IQueryable.

public interface ILogRepository 
{
   IEnumerable<Log> GetLogs<TKey>(Expression<Func<Log, TKey>> ordering);
}
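For illustration, an implementation and its usage might look something like this (a sketch of my own, needing System.Linq and System.Linq.Expressions; SqlLogRepository and its backing IQueryable are hypothetical):

public class SqlLogRepository : ILogRepository
{
   private readonly IQueryable<Log> _logs; // e.g. a DbSet<Log> from an EF context

   public SqlLogRepository(IQueryable<Log> logs)
   {
      _logs = logs;
   }

   public IEnumerable<Log> GetLogs<TKey>(Expression<Func<Log, TKey>> ordering)
   {
      // Ordering is applied by the database, results materialised once here
      return _logs.OrderBy(ordering).ToList();
   }
}

...

// Calling code supplies whatever ordering it needs
var logs = logRepo.GetLogs(l => l.DateTime);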

Of course this option still has the potential for a lot of repetition of the desired ordering, and OCP may rear its head again if we need to expose some other IQueryable feature in a similarly controlled fashion. Another undesirable feature of this method is that the specified ordering cannot easily be validated; much like option 2 it may give the caller too much power.

Attempt 4 – Return IOrderedEnumerable

An interesting option is amending the interface definition so that the method returns IOrderedEnumerable instead of plain IEnumerable e.g.

public interface ILogRepository { IOrderedEnumerable<Log> GetLogs(); }

A very slight tweak to the definition with no specific ordering defined in the API but it provides a cue to the calling code that an ordering is being applied, should it care, and also makes it difficult for the interface implementation to accidentally miss out the ordering.

Obviously with this option we return to the problem of there being no particular guarantee of the specific ordering being applied, not to mention that it’s quite tricky to return IOrderedEnumerable in the first place.
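On that last point, about the only straightforward way I can see of satisfying the signature is to finish with an in-memory OrderBy, something like this (a sketch of my own; LoadLogsFromDatabase is a hypothetical helper returning IEnumerable<Log>):

public IOrderedEnumerable<Log> GetLogs()
{
   // Enumerable.OrderBy returns IOrderedEnumerable but Queryable.OrderBy returns
   // IOrderedQueryable, so to satisfy this signature the ordering ends up being
   // applied in memory after the rows have been fetched from the database.
   return LoadLogsFromDatabase().OrderBy(l => l.DateTime);
}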

Alternatives?

Perhaps a better question than:

  • Does the interface definition match my intended usage?

would be:

  • Can I describe my intended usage sufficiently with an interface?

It’s difficult to define this, and many other kinds of behaviours, using interfaces alone. A better approach in this case would probably be not to interface this class out at all and instead have the business code expect an instance of the concrete type as its dependency, thus providing a guarantee of order. The class would still be abstracted from the backing store such that it can itself be tested e.g.

// Data access code
internal interface ILogStore { IQueryable<Log> Logs { get; } }

public class LogRepository
{
   private ILogStore _store;   

   public LogRepository() : this(null) {}

   internal LogRepository(ILogStore store)
   {
      _store = store ?? new DatabaseLogStore();
   }

   public IEnumerable<Log> GetLogs()
   {
      return _store.Logs.OrderBy(l => l.DateTime);
   }
}

// Business code
public class LogReader
{
   private LogRepository _logRepo;

   public LogReader(LogRepository logRepo)
   {
      if (logRepo == null) throw new ArgumentNullException("logRepo");
      _logRepo = logRepo;
   }

   ...
}
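To show what I mean about the concrete LogRepository still being testable, a unit test could stub ILogStore with an in-memory implementation along these lines (a sketch of my own using NUnit, assuming the test assembly can see the internal members via InternalsVisibleTo and that Log is a simple class with a settable DateTime property):

// Test code
internal class FakeLogStore : ILogStore
{
   private readonly List<Log> _logs;

   public FakeLogStore(IEnumerable<Log> logs)
   {
      _logs = logs.ToList();
   }

   public IQueryable<Log> Logs
   {
      get { return _logs.AsQueryable(); }
   }
}

[TestFixture]
public class LogRepositoryTests
{
   [Test]
   public void GetLogs_ReturnsLogsOrderedByDate()
   {
      var store = new FakeLogStore(new[]
      {
         new Log { DateTime = new DateTime(2012, 3, 1) },
         new Log { DateTime = new DateTime(2012, 2, 1) },
      });

      var repo = new LogRepository(store);

      var result = repo.GetLogs().ToList();

      Assert.That(result.First().DateTime, Is.EqualTo(new DateTime(2012, 2, 1)));
   }
}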

13 September 2011

Fun with enum

If you’ve done any vaguely serious programming with a pre-4 version of the .NET Framework then chances are you’ve had to write an Enum.TryParse() method. You probably wrote something like this:

public static bool TryParse<TEnum>(string value, out TEnum enumValue)
{
 Type enumType = typeof(TEnum);
 if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
 
 enumValue = default(TEnum);
 
 if (Enum.IsDefined(enumType, value))
 {
  enumValue = (TEnum)Enum.Parse(enumType, value);
  return true;
 }
 
 return false;
}

Everything went fine until someone decided to pass in a string representing a value of the underlying type such as “0” at which point Enum.IsDefined() said no even though your enum looked like this:

public enum MyEnum
{
 Zero = 0, One, Two, Three
}

Enum.Parse() will accept “0” just fine but IsDefined() requires the value to be of the correct underlying type, so in this case you’d need 0 as an integer for it to return true. Doesn't that mean I now need to work out the underlying type and then call the appropriate Parse() method using reflection? Oh dear, looks like our nice generic solution may get rather complicated!
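A quick illustration of the mismatch (my own example, using the MyEnum above):

Enum.Parse(typeof(MyEnum), "0");         // MyEnum.Zero - numeric strings are fine here
Enum.IsDefined(typeof(MyEnum), "0");     // false - "0" is a string, not a name or an int
Enum.IsDefined(typeof(MyEnum), 0);       // true - the value is of the underlying type
Enum.IsDefined(typeof(MyEnum), "Zero");  // true - names as strings are fine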

Fear not. Because we know our input type is a string and there are a very limited number of underlying types we can have there’s a handy framework method we can use to sort this out – Convert.ChangeType().

public static bool IsUnderlyingDefined(Type enumType, string value)
{
 if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
 
 Type underlying = Enum.GetUnderlyingType(enumType);
 
 var val = Convert.ChangeType(value, underlying, CultureInfo.InvariantCulture);
  
 return Enum.IsDefined(enumType, val);
}

ChangeType() is effectively selecting the correct Parse method for us and calling it, passing in our string and returning a nice strongly typed underlying value which we can pass into Enum.IsDefined(). So our TryParse now looks like this:

public static bool TryParse<TEnum>(string value, out TEnum enumValue)
{
 Type enumType = typeof(TEnum);
 if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
 
 enumValue = default(TEnum);
 
 if (Enum.IsDefined(enumType, value) || IsUnderlyingDefined(enumType, value))
 {
  enumValue = (TEnum)Enum.Parse(enumType, value);
  return true;
 }
 
 return false;
}
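For completeness, a quick check of the finished method (my own illustrative calls, assuming both methods above live in the same static helper class):

MyEnum result;
TryParse("Two", out result);  // true, result == MyEnum.Two
TryParse("2", out result);    // true, result == MyEnum.Two
TryParse("7", out result);    // false - 7 isn't a defined value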

This exercise is somewhat contrived, especially now Enum.TryParse is part of .NET 4.0, but the synergy of ChangeType and IsDefined is quite nice and a technique worth pointing out nonetheless.


13 May 2011

Bulk upsert to SQL Server from .NET

or, “How inserting multiple records using an ORM should probably work”

Anyone familiar with .NET ORMs should know that one area where they’re lacking is when it comes to updating or inserting multiple objects at the same time. You end up with many individual UPDATE and INSERT statements being executed on the database, which can be very inefficient and often results in developers having to extend the ORM, or break out of it completely, in order to perform particular operations. An added complication is that, where identities are being used in tables, each INSERT command the ORM performs must immediately be followed by a SELECT SCOPE_IDENTITY() call to retrieve the identity value for the newly inserted row so that the CLR object can be updated.

It’s possible to drastically improve on this by making use of a couple of features already supported in the .NET Framework and SQL Server and I’m hoping that a similar solution will feature in future releases of the major ORMs.

  • The .NET Framework’s SqlBulkCopy class allowing you to take advantage of BULK operations supported by SQL Server.
  • SQL Server temporary tables.
  • SQL Server 2008’s MERGE command which allows upsert operations to be performed on a table and in particular its ability, using the OUTPUT clause, to return identities for inserted rows.

The process

The main steps of the process are as follows:

  1. Using ADO.NET create a temporary table in SQL Server whose schema mirrors your source data and whose column types match the types in the target table.
  2. Using SqlBulkCopy populate the temporary table with the source data.
  3. Execute a MERGE command via ADO.NET on the SQL Server which upserts data from the temporary table into the target table, outputting identities.
  4. Read the row set of inserted identities.
  5. Drop the temporary table.

So instead of n INSERT statements to insert n records that’s four SQL commands in all to insert or update n records.
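In ADO.NET terms the whole thing looks roughly like this (a condensed sketch of my own, not code from the SqlBulkUpsert library below; TargetTable, its columns, the open connection and the sourceDataTable are all made up for illustration and error handling is omitted):

using (var create = new SqlCommand(
 "CREATE TABLE #Source (ident int NULL, key_col int NOT NULL, value_col nvarchar(100) NULL)",
 connection))
{
 create.ExecuteNonQuery(); // 1. temp table mirroring the source data
}

using (var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "#Source" })
{
 bulkCopy.WriteToServer(sourceDataTable); // 2. bulk copy the source rows in
}

const string mergeSql = @"
MERGE TargetTable AS target
USING #Source AS source ON target.key_col = source.key_col
WHEN MATCHED THEN
 UPDATE SET value_col = source.value_col
WHEN NOT MATCHED THEN
 INSERT (key_col, value_col) VALUES (source.key_col, source.value_col)
OUTPUT $action, INSERTED.$IDENTITY, INSERTED.key_col;";

using (var merge = new SqlCommand(mergeSql, connection))
using (var reader = merge.ExecuteReader()) // 3. upsert from the temp table into the target
{
 while (reader.Read()) // 4. read back the identities
 {
  var action = reader.GetString(0);   // "INSERT" or "UPDATE"
  var identity = reader.GetValue(1);  // identity of the affected row
  var key = reader.GetValue(2);       // used to tie back to the original CLR object
 }
}

using (var drop = new SqlCommand("DROP TABLE #Source", connection))
{
 drop.ExecuteNonQuery(); // 5. drop the temp table
}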

There’s already a blog post by Kelias on this technique that goes into more detail, which you can read here. The only part missing from Kelias’ post is the piece utilising the OUTPUT clause to retrieve the inserted identities from the MERGE command. This is simply an additional line in the MERGE statement e.g.

OUTPUT $action, INSERTED.$IDENTITY

and the small matter of reading those returned identities out of a SqlDataReader.

This is the crucial piece, however, as it is this which allows us to tie the inserted row back to the original CLR “entity” item that formed part of our source data. Updating our CLR object with this identity will allow us to save subsequent changes away as an UPDATE to the now existing database row.

Performance

I did some brief testing to get rough timings of this technique versus individual INSERT calls using a parameterised ADO.NET command. Across a range of row counts from 100 to 10,000 and row sizes from roughly 1k to 10k, the upsert technique nearly always executed in less than half the time of the individual INSERT statements. For example, 1,000 rows of about 1k each took individual INSERTs an average of just over 500ms versus the bulk upsert’s 150ms on my quite old desktop with not very much RAM.

That’s pretty cool considering the upsert could be performing either an INSERT or an UPDATE in the same number of calls, whereas factoring that into the individual SQL statements method would mean a lot of extra commands to try an UPDATE and then check whether any rows had been affected, etc.

Github project

I decided to have a go at wrapping the upsert technique up in a library which would automatically generate the SQL necessary for creating the temporary table and running the MERGE. I pushed an initial version of this SqlBulkUpsert project to github which can be found here:
https://github.com/dezfowler/SqlBulkUpsert

Usage would be something like this:

using (var connection = DatabaseHelper.CreateAndOpenConnection())
{
 var targetSchema = SqlTableSchema.LoadFromDatabase(connection, "TestUpsert", "ident");

 var columnMappings = new Dictionary<string, Func<TestDto, object>>
       {
        {"ident", d => d.Ident},
        {"key_part_1", d => d.KeyPart1},
        {"key_part_2", d => d.KeyPart2},
        {"nullable_text", d => d.Text},
        {"nullable_number", d => d.Number},
        {"nullable_datetimeoffset", d => d.Date},
       };

 Action<TestDto, int> identUpdater = (d, i) => d.Ident = i;

 var upserter = new TypedUpserter<TestDto>(targetSchema, columnMappings, identUpdater);

 var items = new List<TestDto>();

 // Populate items with TestDto instances
 
 upserter.Upsert(connection, items);

 // Ident property of TestDto instances updated
}

with TestDto just being a simple class like this:

public class TestDto
{
 public int? Ident { get; set; }
 public string KeyPart1 { get; set; }
 public short KeyPart2 { get; set; }
 public string Text { get; set; }
 public int Number { get; set; }
 public DateTimeOffset Date { get; set; }
}

In this TypedUpserter example we:

  1. define the schema of the target table either in code or by loading it from the database (shown in the example)
  2. define mappings from column names of the target to a lambda retrieving the appropriate property value from the TestDto class
  3. define an action to be called to allow setting the new identity to a property of the DTO
  4. instantiate the Upserter and call Upsert() with a list of items and a database connection
  5. the identity properties of the TestDto instances will have been updated using the defined action so the CLR objects will now be consistent with the database rows.

Next step

The object model could probably do with some refinement and it needs a lot more tests adding, but it’s in pretty good shape, so next I’m going to look at integrating it into Mark Rendle’s Simple.Data project which should mean that, to my knowledge, it’s the only .NET ORM doing proper bulk loading of multiple records.

26 January 2011

Adding collections to a custom ConfigurationSection

The attributed model for creating custom ConfigurationSection types for use in your app.config or web.config file is quite verbose and examples are hard to come by. Collections in particular are a pain point: there is very little documentation around them and the examples all tend to follow the default add/remove/clear model, i.e. that used in <appSettings/>.

Three particular scenarios with collections which caused me problems while doing the same piece of work were:

  • When the items of a collection have a custom name e.g. "item" instead of add/remove/clear
  • When the items of a collection can have different element names representing different actions or subclasses e.g. the <allow/> and <deny/> elements used with <authorization/>
  • When the items of a collection don’t have an attribute which represents a unique key e.g. not having anything like the key attribute of an <add/> or <remove/> element

The first and last are relatively trivial to fix, the second less so, and it took me a bit of digging around in Reflector to work out how to set up something that worked.

Collection items with a custom element name

This scenario can be accomplished as follows.


public class MySpecialConfigurationSection : ConfigurationSection
{
 [ConfigurationProperty("", IsRequired = false, IsKey = false, IsDefaultCollection = true)]
 public ItemCollection Items
 {
  get { return ((ItemCollection) (base[""])); }
  set { base[""] = value; }
 }
}

[ConfigurationCollection(typeof(Item), CollectionType = ConfigurationElementCollectionType.BasicMapAlternate)]
public class ItemCollection : ConfigurationElementCollection
{
 internal const string ItemPropertyName = "item";

 public override ConfigurationElementCollectionType CollectionType
 {
  get { return ConfigurationElementCollectionType.BasicMapAlternate; }
 }

 protected override string ElementName
 {
  get { return ItemPropertyName; }
 }

 protected override bool IsElementName(string elementName)
 {
  return (elementName == ItemPropertyName);
 }

 protected override object GetElementKey(ConfigurationElement element)
 {
  return ((Item)element).Value;
 }

 protected override ConfigurationElement CreateNewElement()
 {
  return new Item();
 }

 public override bool IsReadOnly()
 {
  return false;
 }

}

public class Item : ConfigurationElement
{
 [ConfigurationProperty("value")]
 public string Value 
 {
  get { return (string)base["value"]; }
  set { base["value"] = value; }
 }
}

Which will allow us to specify our section like so:


<configSections>
  <section name="mySpecialSection" type="MyNamespace.MySpecialConfigurationSection, MyAssembly"/> 
</configSections>

...

<mySpecialSection>
 <item value="one"/>
 <item value="two"/>
 <item value="three"/>
</mySpecialSection>

First off we have a property representing our collection on our ConfigurationSection or ConfigurationElement whose type derives from ConfigurationElementCollection. This property is decorated with a ConfigurationProperty attribute. If the collection should be contained directly within the parent element then set IsDefaultCollection to true and leave the element name as an empty string (the base indexer is then keyed by that same empty string, as above). If the collection should be contained within a container element, specify an element name.

Next, the ConfigurationElementCollection derived type of the property should have a ConfigurationCollection attribute specifying element type and collection type. The collection type specifies the inheritance behaviour when the section appears in web.config files nested deeper in the folder structure for example.

For the collection type itself we do this:

  • Override ElementName to return the collection item element name
  • Override IsElementName to return true when encountering that element name
  • Override CreateNewElement() to new up an instance of your item type
  • Override GetElementKey(element) to return an object which uniquely identifies the item. This could be a property value, a combination of values as some hash, or the element itself
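Reading the section back at runtime is then the usual ConfigurationManager call, something like this (a quick sketch of my own; it needs a reference to System.Configuration):

var section = (MySpecialConfigurationSection)ConfigurationManager.GetSection("mySpecialSection");

foreach (Item item in section.Items)
{
 Console.WriteLine(item.Value); // "one", "two", "three"
}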

Collection items with varying element name


public class MySpecialConfigurationSection : ConfigurationSection
{
 [ConfigurationProperty("items", IsRequired = false, IsKey = false, IsDefaultCollection = false)]
 public ItemCollection Items
 {
  get { return ((ItemCollection) (base["items"])); }
  set { base["items"] = value; }
 }    
}
    
[ConfigurationCollection(typeof(Item), AddItemName = "apple,orange", CollectionType = ConfigurationElementCollectionType.BasicMapAlternate)]
public class ItemCollection : ConfigurationElementCollection
{
 public override ConfigurationElementCollectionType CollectionType
 {
  get { return ConfigurationElementCollectionType.BasicMapAlternate; }
 }

 protected override string ElementName
 {
  get { return string.Empty; }
 }

 protected override bool IsElementName(string elementName)
 {
  return (elementName == "apple" || elementName == "orange");
 }

 protected override object GetElementKey(ConfigurationElement element)
 {
  return element;
 }

 protected override ConfigurationElement CreateNewElement()
 {
  return new Item();
 }

 protected override ConfigurationElement CreateNewElement(string elementName)
 {
  var item = new Item();
  if (elementName == "apple")
  {
   item.Type = ItemType.Apple;
  }
  else if(elementName == "orange")
  {
   item.Type = ItemType.Orange;
  }
  return item;
 }
 
 public override bool IsReadOnly()
 {
  return false;
 }
}

public enum ItemType
{
 Apple,
 Orange
}

public class Item : ConfigurationElement
{
 public ItemType Type { get; set; }

 [ConfigurationProperty("value")]
 public string Value 
 {
  get { return (string)base["value"]; }
  set { base["value"] = value; }
 }
}

Which will allow us to specify our section like so:


<configSections>
  <section name="mySpecialSection" type="MyNamespace.MySpecialConfigurationSection, MyAssembly"/> 
</configSections>

...

<mySpecialSection>
 <items>
  <apple value="one"/>
  <apple value="two"/>
  <orange value="one"/>
 </items>
</mySpecialSection>

Notice that here we've specified two collection items with the value "one" which would have resulted in one overwriting the other in the previous example. To get around this, instead of returning the Value property we're returning the element itself as the unique key.

This time our ConfigurationElementCollection derived type's ConfigurationCollection attribute also specifies a comma delimited AddItemName e.g. "allow,deny". We override the methods of the base as follows:

  • Override ElementName to return an empty string
  • Override IsElementName to return true when encountering a correct element name
  • Override CreateNewElement() to new up an instance of your item type
  • Override CreateNewElement(elementName) to new up an instance of the correct item type for the particular element name, setting any relevant properties
  • Override GetElementKey(element) to return an object which uniquely identifies the item. This could be a property value, a combination of values as some hash, or the element itself

Caveat

While our varying element names will be readable the object model is read-only. I haven't covered support for writing changes back to the config file here as it involves taking charge of the serialization of the objects so really requires its own blog post.


05 December 2010

Taking my music listening in a new direction

or, Why I'm cancelling my Spotify Premium subscription

Not entirely sure when I started using Spotify but it was probably late 2008 / early 2009 and I've found it to be a revelation of music discovery. I've spent hours just clicking from one artist to another, exploring back catalogues and having a serious listen to full albums in a way that would be quite difficult without already having bought the album or "obtained" it from P2P. Previously, using a combination of Last.fm and Myspace you could get quite close but the Spotify desktop app made the whole experience so much more seamless and enjoyable with full, consistent quality tracks.

I've been a Premium subscriber since 1 Aug 2009 with several factors leading to my decision to pay up. The first being high-bitrate uninterrupted audio; having some decent audio kit at home I wanted to make the most of it. Second was the Spotify for Android app I could use on my HTC Hero which is hands down the most convenient means of getting music on a mobile device. Put tracks in a playlist in the desktop app and they magically appear on the device – brilliant.

So, why am I quitting?

1. Cost

To date that's £169.83 in subscription fees - £9.99 a month for 17 months. I tend to buy CDs for £5 off Amazon so that equates to about 33 CD albums or about 2 albums a month. I’ve listened to a lot more albums than that during the time but I doubt that there would have been more than 33 that I would have considered buying a CD copy of. I’ve never paid for an MP3, I refuse to pay the same price as a CD for a lossy version but I paid for Spotify as the service does offer significantly more especially when you use the mobile apps. I’m just not sure it’s worth £9.99 a month.

2. Quality

Spotify Premium ups the track bitrate from 160kbps to 320kbps. At least that’s the idea; in practice it seems large portions of their library are only available in the lower quality and I doubt that more than 10% of the tracks I’ve listened to recently have been high bitrate. There’s also no visibility of "high quality" tracks in the app so I’m seriously sceptical about whether I’m getting the high bitrates I’m paying for. The quality is certainly still miles off CD audio and, having made a return to CDs recently, it’s very noticeable that I’ve been missing out on audio clarity and making do with poor quality audio whilst also paying for the privilege.

3. Nothing to show for it

It’s a bitter pill to swallow but worst of all is the fact that after all the cost I’ve just been renting the music. I don’t get to keep the OGG tracks, I don’t own any of it and, when I cancel, the app on my phone will just stop working.

What service would I be happy with?

I’ve been wondering about the kind of service I’d like to see and that I’d be happy to pay for. Unlimited ad-supported listening of any tracks for discovering new music would be fine. I’d like to be able to buy albums, download them in full CD quality and stream them uninterrupted (no ads) in a reasonable bitrate to other computers and mobile devices. I’d also like to be able to register CDs I own with the service so those tracks are also available wherever I am.

The roll-your-own solution might be buying CDs, ripping them and paying $9.99 for a 50GB Dropbox to sync up my machines. Apparently the Dropbox for Android app has the ability to stream music and movies straight to the device so maybe that’s an option worth considering.

Lossless

In this day and age of high-def video, broadband internet and huge hard disks I don’t want to pay for, and there is no necessity for, low bitrate music. It’s rather interesting that the medium with the highest audio quality most widely available is Blu-ray disc in the form of Dolby TrueHD and DTS-HD. With video the soundtrack plays more of a supporting role so lossy compression can be forgiven to some extent, but with music the audio is the main event – it should be CD quality at least. MP3 was great for portability but it has a lot to answer for in terms of killing our appreciation of high quality audio and, therefore, the market’s desire to provide us with (and push) a high-definition medium solely for audio.