10 January 2018

Throttling with BlockingCollection

Recently I was working with a data processing pipeline where work items progressed through a number of different stages. The pipeline ran synchronously, fully completing one work item before picking up the next.

The work items were not related in any way, so processing them in parallel was an option. As the different pipeline stages took varying amounts of time, I decided to parallelise each stage separately, run different numbers of worker threads for each stage, and separate the stages with queues. The pipeline was running on a single machine with the worker threads all part of the same process, and the queues were just FIFO data structures sitting in RAM - a relatively simple setup.

The issue I encountered pretty quickly was that the stages of the pipeline processed the work items at different rates and, in a couple of cases, not in a predictable way that I could solve by tweaking the numbers of worker threads used for each stage. Where the stage acting as the consumer of a queue was going slower than the stage acting as the producer, the backlog of pending items built up and used up all the available memory pretty quickly.

I needed to be able to limit the number of pending items in each queue and block the publishers to that queue until the consumers caught up.

One way of achieving this is using semaphores to keep track of the number of "slots" used and have the producer threads block on the semaphore until a slot is available.
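A minimal sketch of that semaphore approach, pairing a plain ConcurrentQueue with a SemaphoreSlim that counts free slots (the type and member names here are illustrative, not from the original pipeline):

```csharp
using System.Collections.Concurrent;
using System.Threading;

// A bounded FIFO queue: the semaphore counts free slots, so producers
// block in Enqueue() until a consumer frees space via TryDequeue().
public class BoundedQueue<T>
{
    private readonly ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();
    private readonly SemaphoreSlim _freeSlots;

    public BoundedQueue(int capacity)
    {
        _freeSlots = new SemaphoreSlim(capacity, capacity);
    }

    public void Enqueue(T item)
    {
        _freeSlots.Wait(); // blocks the producer while no slots are free
        _queue.Enqueue(item);
    }

    public bool TryDequeue(out T item)
    {
        if (_queue.TryDequeue(out item))
        {
            _freeSlots.Release(); // a slot has been freed up for producers
            return true;
        }
        return false;
    }
}
```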

Another option is the underutilised TPL Dataflow library. Solutions built this way are relatively simple, and examples are out there on the web, such as this one on Stephen Cleary's blog where a BoundedCapacity is applied.
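As a sketch of that shape (assuming the System.Threading.Tasks.Dataflow NuGet package is referenced), an ActionBlock with BoundedCapacity set provides back-pressure to the producer:

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // NuGet: System.Threading.Tasks.Dataflow

public class Program
{
    public static async Task Main()
    {
        // BoundedCapacity caps the number of queued items; SendAsync only
        // completes once the block has space, throttling the producer.
        var stage = new ActionBlock<int>(
            async item =>
            {
                await Task.Delay(50); // simulate slow work
                Console.WriteLine("Processed {0}", item);
            },
            new ExecutionDataflowBlockOptions
            {
                BoundedCapacity = 4,
                MaxDegreeOfParallelism = 2
            });

        for (int i = 0; i < 10; i++)
        {
            await stage.SendAsync(i); // waits while the block is full
        }

        stage.Complete();
        await stage.Completion;
    }
}
```

A nice property of this approach is that the waiting is asynchronous - the producer awaits rather than blocking a thread.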

The option I went with was to wrap my ConcurrentQueue in a BlockingCollection with boundedCapacity specified. This has the effect of causing any Add operations on the collection to block until there is space available. Below is an example from MSDN slightly tweaked to introduce throttling to the producer Task.
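The original MSDN listing isn't reproduced here, but a minimal sketch of the same producer/consumer shape, with a deliberately slow consumer, looks like this:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class Program
{
    public static void Main()
    {
        // boundedCapacity of 2: Add blocks once two items are pending.
        using (var queue = new BlockingCollection<int>(new ConcurrentQueue<int>(), 2))
        {
            var producer = Task.Run(() =>
            {
                for (int i = 0; i < 5; i++)
                {
                    queue.Add(i); // blocks while the collection is full
                    Console.WriteLine("Produced {0}", i);
                }
                queue.CompleteAdding();
            });

            var consumer = Task.Run(() =>
            {
                foreach (var item in queue.GetConsumingEnumerable())
                {
                    Task.Delay(100).Wait(); // slow consumer forces the producer to wait
                    Console.WriteLine("Consumed {0}", item);
                }
            });

            Task.WaitAll(producer, consumer);
        }
    }
}
```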

You can see from the example output that, once the collection is at capacity, the producer is forced to wait for the consumer to free up space in the collection before it can add more items.

28 December 2014

Camera memory card backup on the go

I'm recently back from a two-and-a-bit week holiday in Peru and, before we went, the wife and I invested in new cameras to catalogue our adventures. As our cameras are both enthusiast/semi-pro level, they offer the option of shooting in RAW format to take advantage of greater post-processing capabilities; however, the huge file sizes involved can be a real problem. Both our cameras use SDHC cards and we bought four with pretty high capacity coupled with good performance. Even these large cards didn't leave us with much room for two weeks' worth of photos, and we really wanted to be able to back up photos easily while we were away in case we lost a card or one got corrupted.

Jobo Giga One 300
On previous holidays we took a portable hard disk with a built-in card reader, which worked really well, but you don't seem to be able to buy these any more. I'm guessing that, these days, with more portable PC options like netbooks and ultrabooks, a lot of people use those to back up, so demand for an alternative has dropped. An iPad with decent-sized internal storage and a Lightning to SD adapter would also be an option.

We didn't want to buy a small laptop just for backup purposes as netbooks are still quite expensive, and we were looking for a cheaper option, preferably one that made use of my Nexus 10 Android tablet.

The Nexus 10 has a micro USB port which you use for charging the device but when you plug in an OTG (on-the-go) cable it gives you a full size USB port into which you can plug many different types of USB device and the Nexus 10 will host them and use their capabilities. For example, plugging in a USB keyboard will allow you to input text as you would on a full PC. Plugging in a USB hub allows you to connect multiple devices at the same time as with any other PC. Lots of other Android tablets have a micro USB port and will work in the same way, not just the Nexus devices.

What we ended up taking with us was:
This hub and card reader have the advantage that they're both about 2" square, so they form quite a compact unit and you could, for example, wrap an elastic band around them to keep them together. We took a small (3" x 5" x 2") tupperware-style box and, tablet excluded, all this fitted inside along with a couple of spare camera batteries and SDHC cards.

The other piece of the puzzle to get it all working is the Nexus Media Importer app. Ignore the "Nexus" in the name; this app should work with any "Android 4.0+ devices with USB Host support". The app supports a variety of different media files (photos, video, audio) and allows you to preview files as well as perform file management operations (move, copy, delete, etc). Usefully, the app (or Android itself) has native support for all the major RAW file formats, so regardless of what make of camera you have you should be able to preview your photos right in the app.

Putting the pieces together

Using the USB hub means you can plug in the card reader and a memory stick (or a USB hard disk) at the same time - all connected together and plugged into the tablet as illustrated here:


Note that, if you're using a USB hard disk you'll probably need a powered USB hub unless the hard disk has its own power supply.

Once Nexus Media Importer is installed, when you connect a mass storage device you get a popup message asking if you want to open the app:

After you select OK and the app opens you'll be prompted to select the storage device you want to import from:

This is fine if you want to copy your photos onto the device's internal storage, but we want to copy from one external storage device (the SDHC card) onto another (the USB pen drive). To do that we switch into the app's "Advanced" mode by selecting it in the drop-down on the right that currently says "Importer".

Here we select our source and destination respectively and the app then switches to a view showing you the source file system on the left and the destination on the right.

Navigating to the correct folder is somewhat counter-intuitive at first: you tap the folder name to go into that folder, while tapping the folder icon to its left selects the folder, which means you can copy entire folders quite easily.

Once you've found the right folder - e.g. the folder your camera saves photos to on the left and the place you're backing up those photos to on the right - the app has a great feature allowing you to select only the new photos and copy just those.

Once you've made a selection the other options such as "Copy" and "Move" become available in the menu and you pick the one you want.

Select one and you'll get a prompt about the action you're about to perform - hit OK and the transfer begins. The transfer runs in the background, meaning you can swap to a different app while it's happening or even put the tablet into standby to save power.

Assuming the read and write speeds of the memory cards and memory sticks you're using are good, the transfers shouldn't take too long - the Transcend SDHC cards we bought had 90MB/s read speeds, which made backup nice and quick.

We made two backups of our photos onto the two USB sticks - my wife kept one and I kept the other - and then we just formatted and reused our SDHC cards as required. All in all, it was a fairly low-cost, space- and weight-efficient solution that I was really happy with and will be using on subsequent trips.

20 May 2014

DDD South West 5

Last Saturday I was at DDD South West in Bristol. Unlike 2012 I was marginally more organised (thanks to a timely prompt from @mjjames) so I was straight in rather than going via the waiting list.

As ever, this instalment maintained the high standards of organisation, the variety of quality sessions and the great weather (at least at the ones I’ve attended) that I've come to expect from DDD events.

This year's addition of the Pocket DDD web app, which allowed you to browse the agenda and submit session feedback, added an extra point of interaction which seemed to work really well. I look forward to seeing how the DDD guys utilise the app for other things in future – linking out to Twitter and pre-populating a session hashtag, maybe?


This time around I ended up only attending sessions from people I haven’t seen speak before. The ones I went to were:

​Continuous Integration, in an hour, on a shoestring; Phil Collins

I found this session to be a great, light-hearted opener to the day with much praying to the demo gods as Phil attempted to set up a complete CI environment and show it working end-to-end in an hour. He was successful.

Complexity => Simplicity; Ashic Mahtab

This session was broadly a look at Domain Driven Design and how, when exercising it, you need to change your way of thinking about problems to create a less coupled solution.

F# Eye for the C# Guy; Phil Trelford

This was one of those "mind blown" sessions and it provided a great introduction to the power of F#. I understand what @dantup has been banging on about now.

The amount of content covered I found to be ideal and Phil’s delivery was great – definitely a presenter I’ll look out for in future!

An introduction to Nancy; Mathew McLoughlin

Somehow I’ve managed to avoid talks about Nancy up to now and, although I’ve had cursory looks at the documentation for it in the past, I thought I’d attend Mat’s talk and actually see it in action to gain a better insight.

Mat managed to cover quite a lot in this session and it was interesting to see how it differed from ASP.NET MVC and Simple.Web which I’m more familiar with.

10 things I learnt about web application security being pen tested by banks; James Crowley

Security talks tend to have a habit of making you walk out incredibly worried about your products out in the wild and this one was no exception.

I’m pretty familiar with the standard vulnerabilities for web sites – things like the OWASP Top 10 – but there’s nothing like a really scary demo of exploiting them with some script kiddie tools to really hammer home how much of a security risk they represent.

James managed to pack a lot of good advice into the hour with demos where appropriate and this was a great end to the day.


Overall it was a very enjoyable day – organisation and catering were great, the sessions were of a very high standard and it was good to catch up with some folks I haven’t seen in a while. Big thanks to everyone involved.

27 December 2013

Windows 8.1 on high DPI

I’ve been working with Windows 8.1 on a Dell XPS 15 for about eight weeks now and I thought I’d share some of my experiences of working with display scaling as the Dell has a 3200x1800 display.

Being what Apple would term “Retina”, the display has a pixel density of almost 250 PPI, which is matched only by a handful of other Windows laptops at the moment. Until recently the limiting factor in this area has been that desktop operating systems have expected a display’s resolution to scale more or less linearly with its size meaning the pixels per inch didn’t change a great deal.

Using fonts as an example, 10 point text should be about 10/72 of an inch, or 3.5mm, high (1 point = 1/72 inch). Windows, by default, renders 10 point text about 13 pixels high which, if you do the math, assumes a PPI of 96. Some background on where this 96 comes from can be found in this MSDN article. In the case of printers, when you print 10 point text you get text that is 3.5mm high regardless of the printer’s DPI. The higher the DPI the crisper the text will appear, but the characters will be the same size. The same is not true for displays, however. This hasn’t been so much of a problem up until now because average pixel density has been between about 90 and 120, but now we’re nearer to 250 pixels per inch that same 10 point text is only about 1mm high, which is essentially impossible to read.
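The arithmetic above can be sketched as a pair of conversions (a small illustrative helper, not anything from Windows itself):

```csharp
using System;

public class Program
{
    // 1 point = 1/72 inch, so rendered height in pixels = points / 72 * PPI.
    public static double PointsToPixels(double points, double ppi)
    {
        return points / 72.0 * ppi;
    }

    // Physical height in millimetres of a run of pixels on a given panel.
    public static double PixelsToMillimetres(double pixels, double ppi)
    {
        return pixels / ppi * 25.4;
    }

    public static void Main()
    {
        // At Windows' assumed 96 PPI, 10pt renders ~13px high.
        Console.WriteLine(PointsToPixels(10, 96));       // ~13.3

        // Those same 13 pixels are ~3.4mm tall on a 96 PPI panel
        // but only ~1.3mm tall on a ~250 PPI panel.
        Console.WriteLine(PixelsToMillimetres(13, 96));  // ~3.44
        Console.WriteLine(PixelsToMillimetres(13, 250)); // ~1.32
    }
}
```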

Obviously with high DPI displays some solution to this, and to the reasonable scaling of other on-screen elements, is required so that we can have nice crisp visuals that aren’t comically tiny. The operating systems are finally catching up, and Windows 8.1 includes some usable scaling options for a high DPI display, but it’s fair to say that, yet again, Apple have led the charge in this department with their Retina options in OSX.

In Windows 8.1 the scaling options are 100%, 125%, 150% and 200%. Set at 200% on the XPS 15, for example, this renders things like text at the size you would see on a 1600x900 display. The scaling happens differently depending on the application. For most classic desktop applications, such as Chrome, it simply does a crude resize – essentially rendering at 1600x900 and then blowing the image up, so you get a lot of pixelation and rough edges. For “Metro” apps and some desktop apps the scaling factor is passed to the app, which scales the sizes of the UI elements as appropriate but renders them using the full resolution of the display.

It’s reasonable but it’s far from perfect unfortunately as there are still a lot of visual elements which don’t scale quite right and every so often you encounter some custom rendered dialog that isn’t scaled at all and you have to break out the magnifier tool.

Another oddity, which may be exclusive to the drivers for the XPS 15, is that coming out of standby mode loses the scaling option. It switches back to 100% scaling and you have to switch to external display mode and back to force it to pick up the scaling again.

Hopefully things will improve with updates to applications and subsequent revisions of Windows.

05 October 2012

NFC payments - it's not for you!

NFC payment terminals are becoming more common and all the credit/debit cards in my wallet have supported NFC for about 6 months which is great as it's much more convenient, especially for buying a coffee or lunch.

NFC, and by extension RFID, are nothing new - I think I first saw a dog getting an RFID implant on Blue Peter in the early 90s, and next year NFC will have been running on the London Underground for 10 years in the form of the Oyster card. It's taken a long time for the banks to warm to this technology - maybe because there are a lot of security protocols to be determined and a lot of liability sums to be calculated.

I've had a Google Nexus S for about 18 months which was, from what I've read, the first NFC-enabled handset available in the UK. When I bought it Google were yet to release Wallet, their NFC payment app for Android, and there weren't many NFC payment terminals around anyway, so it wasn't that big a deal.

The Wallet logo is quite a clever echo of the NFC payment logo

Wallet has since been released in the US and is supported by all the major credit card companies but that's where the good news ends. It seems Google have deals with particular networks, Sprint being the main one, meaning that even if you have an NFC-enabled Android handset you can only use Wallet if you're on one of the approved networks. What's worse is that it isn't available in the UK and there's no word from Google on when or if it will be.

What is particularly odd is that the Nexus 7 has no such restriction. I can only assume this is because it has no GSM modem so there is no deal to be made with a mobile network. This is particularly frustrating because I can see that, if you buy a phone from a particular carrier and that carrier doesn't have a deal with Google, you won't get Wallet but I bought my Nexus S SIM free from Carphone Warehouse so the phone itself has no network affiliation and yet I still can't use Wallet.

What is quite interesting, and may shine some light on the whole delay in Wallet getting to the UK, is the release of Quick Tap from Barclaycard and Orange. Although Orange sell 10 NFC-enabled handsets, only 2 of them are "Quick Tap ready", both of which happen to be the Galaxy SIII, probably their most popular and expensive handset apart from the iPhone. I doubt there's technically anything special about the SIII that means it can be used for payments where the other handsets can't - all the others are cheaper, so my guess is it's entirely about forcing people to buy a more expensive handset.

If the other UK networks and card companies are doing similar deals it's no wonder a service like Wallet is unavailable as there is money to be made and phones to be sold. All in all it's pretty rubbish for the early adopter and the consumer in general.

Surely the fact that a phone is PIN protected and the NFC is not always on actually makes it a more secure way of implementing NFC payments. People can't skim your phone the way they can the cards in your actual wallet.

Guess I'll just have to wait and see where this farcical endeavour goes. In the meantime I'll look forward to an Oyster app (which would be pretty ace) and scanning some NFC business cards, I suppose. Whoopee!

20 March 2012

Interfaces and IoC

If you want to use inversion of control, unit testing and adhere to SOLID principles in your C# code this often means you have a lot of interfaces. Core considerations when dealing with interfaces are things like:

  • Where should the interface be defined – alongside the main implementation or in a separate assembly?
  • Should the interface be generic or not?
  • Am I breaking the interface segregation principle?

The one that sometimes falls by the wayside is:

  • Does the interface definition match my intended usage?


A trivial example of this might be where you have a database containing a Log table of messages from an application where each has an ID of some kind, type, source, message and date/time recorded. The interface for the data access to this table might be:

public interface ILogRepository { IEnumerable<Log> GetLogs(); }

Innocuous enough; however, what if every usage of this interface and method requires that the resulting IEnumerable be ordered by the recorded time of the log message? IEnumerable alone doesn’t guarantee anything about the order, and reordering the output at each point of use would be very inefficient, not to mention that the database would likely be a much better place to perform the ordering.

Attempt 1 – Be more descriptive

The simplest option is simply to bake the ordering information in to the interface definition e.g.

public interface ILogRepository { IEnumerable<Log> GetLogsOrderedByDate(); }

This way we are clear at the point of implementation and the point of use about what the ordering of the items should be. Of course, renaming a method still doesn’t guarantee the result will be ordered correctly but at least if an ordering is missing you have the additional information in the definition about what the correct order should be.

The major problem with this option is that we head towards violating the Open/Closed principle, which says our API should be open for extension but closed for modification. If we need to change the order in which log items are returned then we have to rename the method (violating OCP) or add a new method specifying a different ordering, potentially making the original completely redundant in the codebase.

Attempt 2 – Expose IQueryable instead

Another option is to swap from using IEnumerable to using IQueryable and allow the calling code to specify its own ordering e.g.

public interface ILogRepository { IQueryable<Log> GetLogs(); }


var logs = logRepo.GetLogs().OrderBy(l => l.DateTime);

This method would be more efficient, always performing the ordering in the database, but with this option we have to repeat the OrderBy part at every point of use to ensure our ordering will be correct. This gives us flexibility but isn’t particularly DRY and may be difficult to change.

It’s also somewhat of a leaky abstraction, as we’re spilling data access innards into our other layers and losing control of the queries being executed on our database – calling code can do more with IQueryable than specify an order, which may not be desirable.

Attempt 3 – Allow ordering to be passed in

This is somewhat similar to option 2 however by allowing order to be passed in we can use a specified default ordering while also giving the calling code the ability to override it if necessary without exposing the all-powerful IQueryable.

public interface ILogRepository { IEnumerable<Log> GetLogs<TKey>(Expression<Func<Log, TKey>> ordering); }
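An implementation can hand the expression straight to the query provider, so against a real database the ordering happens server-side. A self-contained sketch (the Log and repository shapes mirror the article's but are redeclared here so the snippet compiles on its own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class Log
{
    public DateTime DateTime { get; set; }
    public string Message { get; set; }
}

public interface ILogRepository
{
    IEnumerable<Log> GetLogs<TKey>(Expression<Func<Log, TKey>> ordering);
}

public class LogRepository : ILogRepository
{
    private readonly IQueryable<Log> _logs;

    public LogRepository(IQueryable<Log> logs) { _logs = logs; }

    public IEnumerable<Log> GetLogs<TKey>(Expression<Func<Log, TKey>> ordering)
    {
        // The expression goes straight to the query provider, so with a
        // database-backed IQueryable the ORDER BY runs in the database.
        return _logs.OrderBy(ordering).ToList();
    }
}
```

Calling code then reads `repo.GetLogs(l => l.DateTime)` to get the default ordering, or passes a different key selector to override it.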

Of course this option still has the potential for a lot of repetition of the desired ordering, and OCP may rear its head again if we need to expose some other IQueryable feature in a similarly controlled fashion. Another undesirable feature of this method is that the specified ordering cannot be easily validated; much like option 2, it may give the caller too much power.

Attempt 4 – Return IOrderedEnumerable

An interesting option is amending the interface definition so that the method returns IOrderedEnumerable instead of plain IEnumerable e.g.

public interface ILogRepository { IOrderedEnumerable<Log> GetLogs(); }

A very slight tweak to the definition with no specific ordering defined in the API but it provides a cue to the calling code that an ordering is being applied, should it care, and also makes it difficult for the interface implementation to accidentally miss out the ordering.

Obviously with this option we return to the problem of there being no guarantee of the specific ordering applied, not to mention it being quite tricky to return IOrderedEnumerable in the first place.
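The trickiness comes from the fact that OrderBy on an IQueryable yields IOrderedQueryable, which does not implement IOrderedEnumerable, so you end up ordering in memory to satisfy the signature. A sketch of one way to do it (types redeclared so the snippet stands alone):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Log
{
    public DateTime DateTime { get; set; }
}

public interface ILogRepository { IOrderedEnumerable<Log> GetLogs(); }

public class LogRepository : ILogRepository
{
    private readonly IEnumerable<Log> _source;

    public LogRepository(IEnumerable<Log> source) { _source = source; }

    public IOrderedEnumerable<Log> GetLogs()
    {
        // OrderBy over IEnumerable returns IOrderedEnumerable, but note
        // the ordering now happens in memory rather than in the database.
        return _source.OrderBy(l => l.DateTime);
    }
}
```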


Perhaps a better question than:

  • Does the interface definition match my intended usage?

would be:

  • Can I describe my intended usage sufficiently with an interface?

It’s difficult to describe this, and many other kinds of behaviour, using interfaces alone. A better approach in this case would probably be to not interface this class out at all and have the business code expect an instance of the concrete type as its dependency, thus providing a guarantee of order. The class would still be abstracted from the backing store such that it can itself be tested e.g.

// Data access code
internal interface ILogStore { IQueryable<Log> Logs { get; } }

public class LogRepository
{
   private ILogStore _store;

   public LogRepository() : this(null) {}

   internal LogRepository(ILogStore store)
   {
      _store = store ?? new DatabaseLogStore();
   }

   public IEnumerable<Log> GetLogs()
   {
      return _store.Logs.OrderBy(l => l.DateTime);
   }
}

// Business code
public class LogReader
{
   private LogRepository _logRepo;

   public LogReader(LogRepository logRepo)
   {
      if (logRepo == null) throw new ArgumentNullException("logRepo");
      _logRepo = logRepo;
   }
}
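The internal ILogStore seam is what keeps LogRepository itself testable; a hypothetical in-memory fake might look like this (the article's types are redeclared, minus the DatabaseLogStore fallback, so the sketch stands alone):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Log
{
    public DateTime DateTime { get; set; }
}

internal interface ILogStore { IQueryable<Log> Logs { get; } }

// A fake store: an in-memory IQueryable standing in for the database.
internal class FakeLogStore : ILogStore
{
    private readonly List<Log> _logs;
    public FakeLogStore(IEnumerable<Log> logs) { _logs = logs.ToList(); }
    public IQueryable<Log> Logs { get { return _logs.AsQueryable(); } }
}

public class LogRepository
{
    private ILogStore _store;
    internal LogRepository(ILogStore store) { _store = store; }
    public IEnumerable<Log> GetLogs() { return _store.Logs.OrderBy(l => l.DateTime); }
}

public class Tests
{
    public static void Main()
    {
        var store = new FakeLogStore(new[]
        {
            new Log { DateTime = new DateTime(2012, 3, 20) },
            new Log { DateTime = new DateTime(2012, 3, 19) }
        });

        // The ordering guarantee can now be asserted without a database.
        var first = new LogRepository(store).GetLogs().First();
        Console.WriteLine(first.DateTime.Day); // 19 - the earlier entry comes back first
    }
}
```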


13 September 2011

Fun with enum

If you’ve done any vaguely serious programming with a pre-4 version of the .NET Framework then chances are you’ve had to write an Enum.TryParse() method. You probably wrote something like this:

public static bool TryParse<TEnum>(string value, out TEnum enumValue)
{
 Type enumType = typeof(TEnum);
 if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
 enumValue = default(TEnum);
 if (Enum.IsDefined(enumType, value))
 {
  enumValue = (TEnum)Enum.Parse(enumType, value);
  return true;
 }
 return false;
}

Everything went fine until someone decided to pass in a string representing a value of the underlying type, such as “0”, at which point Enum.IsDefined() said no even though your enum looked like this:

public enum MyEnum
{
 Zero = 0, One, Two, Three
}

Enum.Parse() will accept “0” just fine, but IsDefined() requires the value be of the correct underlying type, so in this case you’d need 0 as an integer for it to return true. Doesn't that mean I now need to work out the underlying type and then call the appropriate Parse() method using reflection? Oh dear, looks like our nice generic solution may get rather complicated!

Fear not. Because we know our input type is a string and there are a very limited number of underlying types we can have there’s a handy framework method we can use to sort this out – Convert.ChangeType().

public static bool IsUnderlyingDefined(Type enumType, string value)
{
 if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
 Type underlying = Enum.GetUnderlyingType(enumType);
 var val = Convert.ChangeType(value, underlying, CultureInfo.InvariantCulture);
 return Enum.IsDefined(enumType, val);
}

ChangeType() is effectively selecting the correct Parse method for us and calling it, passing in our string and returning a nice strongly typed underlying value which we can pass into Enum.IsDefined(). So our TryParse now looks like this:

public static bool TryParse<TEnum>(string value, out TEnum enumValue)
{
 Type enumType = typeof(TEnum);
 if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
 enumValue = default(TEnum);
 if (Enum.IsDefined(enumType, value) || IsUnderlyingDefined(enumType, value))
 {
  enumValue = (TEnum)Enum.Parse(enumType, value);
  return true;
 }
 return false;
}
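Putting it all together, here is a self-contained version exercising both the name path and the underlying-value path (the EnumHelper class name is just for illustration):

```csharp
using System;
using System.Globalization;

public enum MyEnum { Zero = 0, One, Two, Three }

public static class EnumHelper
{
    public static bool TryParse<TEnum>(string value, out TEnum enumValue)
    {
        Type enumType = typeof(TEnum);
        if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
        enumValue = default(TEnum);
        if (Enum.IsDefined(enumType, value) || IsUnderlyingDefined(enumType, value))
        {
            enumValue = (TEnum)Enum.Parse(enumType, value);
            return true;
        }
        return false;
    }

    public static bool IsUnderlyingDefined(Type enumType, string value)
    {
        if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
        Type underlying = Enum.GetUnderlyingType(enumType);
        var val = Convert.ChangeType(value, underlying, CultureInfo.InvariantCulture);
        return Enum.IsDefined(enumType, val);
    }
}

public class Program
{
    public static void Main()
    {
        MyEnum result;
        Console.WriteLine(EnumHelper.TryParse("One", out result)); // True - matched by name
        Console.WriteLine(EnumHelper.TryParse("0", out result));   // True - matched via underlying value
        Console.WriteLine(EnumHelper.TryParse("9", out result));   // False - not defined
    }
}
```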

This exercise is somewhat contrived, especially now that Enum.TryParse is part of .NET 4.0, but the synergy of ChangeType and IsDefined is quite nice and a technique worth pointing out nonetheless.