06 August 2019

Type checked "visitor" for discriminated unions using mapped types

Discriminated unions are one of the most useful features of TypeScript. After testing the discriminator value TypeScript can apply type checking based on the relevant member type of the union.

So, for a discriminated union of shapes, like this...
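
Something along these lines (the original snippet isn't shown here, so the property names are illustrative; the __typename values come from later in the post):

// Discriminated union of shapes; __typename is the discriminator.
interface Circle { __typename: 'Circle'; radius: number; }
interface Square { __typename: 'Square'; sideLength: number; }
interface Rectangle { __typename: 'Rectangle'; width: number; height: number; }
interface Triangle { __typename: 'Triangle'; base: number; height: number; }

type Shape = Circle | Square | Rectangle | Triangle;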

It's possible to write a calculateArea function using a switch on the discriminator (__typename in this case), like this...
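
A sketch of such a function, using the illustrative properties from above:

// Each case body is narrowed to the relevant member of the union.
function calculateArea(shape: Shape): number {
  switch (shape.__typename) {
    case 'Circle':
      return Math.PI * shape.radius ** 2;
    case 'Square':
      return shape.sideLength ** 2;
    case 'Rectangle':
      return shape.width * shape.height;
    case 'Triangle':
      return (shape.base * shape.height) / 2;
  }
}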

Note that you can use the correct properties for the relevant member type of the union in each case body, and you'll also get IntelliSense prompting you with the correct property names.


Exhaustiveness problem

However, depending on the TypeScript compiler options you have set, you may end up missing cases: a member type not covered by a case in the switch just falls through and the function returns undefined. Adding the --noImplicitReturns option causes the compiler to complain when this happens, but that may not be an option for you if you make use of implicit returns elsewhere. Another workaround is to assign your instance to never in the switch's default, e.g.
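
For example (a sketch along the same lines as above):

function calculateArea(shape: Shape): number {
  switch (shape.__typename) {
    case 'Circle':
      return Math.PI * shape.radius ** 2;
    case 'Square':
      return shape.sideLength ** 2;
    case 'Rectangle':
      return shape.width * shape.height;
    case 'Triangle':
      return (shape.base * shape.height) / 2;
    default: {
      // If a new member is added to Shape and not handled above, this assignment
      // stops compiling, surfacing the missing case.
      const unhandled: never = shape;
      return unhandled;
    }
  }
}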

This will force the compiler warning, but it's something you always need to remember to add when using a switch like this, so it's not exactly a "pit of success".

Mapped types method

There is an alternative to the switch that uses mapped types instead, which allows you to declare a map from the discriminator value to the correct operation for each member type, like this...
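
Something like this, assuming the UnionMap helper defined below (the extra result type parameter is an assumed detail of its signature):

const areaMap: UnionMap<Shape, '__typename', number> = {
  Circle: c => Math.PI * c.radius ** 2,
  Square: s => s.sideLength ** 2,
  Rectangle: r => r.width * r.height,
  Triangle: t => (t.base * t.height) / 2,
};

// TypeScript can't correlate the looked-up handler with the specific shape, hence the cast.
const calculateArea = (shape: Shape): number =>
  (areaMap[shape.__typename] as (s: Shape) => number)(shape);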

The UnionMap mapped type which makes this work is defined like this...
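
A minimal sketch of how these helpers could be defined:

// UnionKeys distributes over the union and collects the discriminator values.
type UnionKeys<U, D extends keyof U> = U extends unknown ? U[D] : never;

// UnionPartForKey does the "reverse lookup" from a discriminator value back to its member type.
type UnionPartForKey<U, D extends keyof U, V> = U extends Record<D, V> ? U : never;

// UnionMap requires one handler per discriminator value, typed with the matching member.
type UnionMap<U, D extends keyof U, R> = {
  [K in UnionKeys<U, D> & string]: (part: UnionPartForKey<U, D, K>) => R;
};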

It makes use of conditional types to enumerate the member types of the union in order to collate the discriminator values into a type e.g. UnionKeys<Shape, '__typename'> will be 'Circle' | 'Square' | 'Rectangle' | 'Triangle'. The mapped type UnionMap then requires a key for each of these string values to be present using [K in ...]: and it then does a "reverse lookup" of the original member type for that key using UnionPartForKey in order to provide the function argument type.

Using this method is cleaner and less error prone than the switch method. It will cause compile errors even with relaxed compiler options if a discriminator value is missed out of the map object.

Playground

I've created a TypeScript playground with mapped types here.


24 July 2019

TypeScript Gotchas: Type Assertions

TL;DR

In summary, unless you want lots of weird non-checking of your types, avoid mixing type assertions and literals. The critical thing to remember is that when you use a type assertion you're telling TypeScript that you know what the type is and it doesn't need to check. If there is genuinely no way TypeScript can check the value, i.e. it's a runtime value (maybe it came back from a web service), then it's valid to use the assertion (possibly after you do your own validation). But if you're writing a literal value in TypeScript then the compiler knows everything about the value and its context, so you shouldn't assert anything and should leave the compiler to do its job in peace.

The details...

I think TypeScript is the best thing to happen to JavaScript since Douglas Crockford's JavaScript: The Good Parts and the linters which checked your code against its recommendations. However, in many ways TypeScript is much more a security blanket than a safety net, as it's easy to break its type checking without realising. I'm going to dive into a few of the most common gotchas (in my limited experience using it) over a couple of blog posts. This first post is reserved for what I consider the worst offender: declaring literals using type assertions.
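
The contrast is between an explicit declaration and a type assertion, along these lines (Foo and its properties are placeholders):

const declared: Foo = { name: 'a', id: 1 };   // declaration: fully checked against Foo
const asserted = { name: 'a', id: 1 } as Foo; // assertion: "trust me, this is a Foo"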

There is very little difference in length between these two statements, but the behaviour in terms of type checking is very different. It is very common to see the assertion used as shorthand in places where you don't normally write an explicit declaration, such as the return value of a lambda, e.g.
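
Something like this (a hypothetical sketch, using the placeholder Foo from above):

// The literal in the arrow body gets asserted rather than properly checked.
const foos = [1, 2, 3].map(id => ({ name: 'new', id } as Foo));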

When you start writing literals in this way in one place in your code, you'll fall into a habit of using this everywhere.

I think one of the main reasons why this is so prevalent is that the behaviour of the IDE when using a type assertion like this is very similar to declaring a value the "right" way. You get IntelliSense prompting you with the property names of the asserted type...

...and when you start typing, if you provide an incorrect property name, you get a compiler warning...

These things are all pointing to the type checking on this value being no different to that on a standard declaration - it lulls you into a false sense of security.

A note on versions - this is true as of TypeScript 3.5.1.

I created a TypeScript playground with these examples in it so you can follow along.

Object literals

In the example above, rather than defining a literal of the target (asserted) type, what you're actually doing is defining a literal of an implicit type which you then assert as the target type, so it's the same as doing this...
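
That is, roughly (still using the placeholder Foo):

// The literal gets an implicit (inferred) type first...
const implicitFoo = { name: 'a', id: 1 };
// ...and is then asserted to the target type.
const foo = implicitFoo as Foo;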

For example, taking this trivial type with two required properties and one optional...
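
For the examples that follow, assume Foo looks something like this (other being the optional property referred to below; the required property names are illustrative):

interface Foo {
  name: string;
  id: number;
  other?: string;
}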

When processing the type assertion, TypeScript simply checks that any properties on the source type whose names match one of the target type's properties also have the same type as that property. The first gotcha is that, if there are no properties on the source type, it matches even if the target type has required properties...
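
For example (a sketch of the behaviour):

const empty = {} as Foo;   // compiles, despite Foo's required properties
const checked: Foo = {};   // error: 'name' and 'id' are missing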

If properties are declared on the source you need to supply all the required properties of the target, but after that any additional properties present on the source type are disregarded unless their name matches a target property. The gotcha here is that, if you have a typo in the name of an optional property, you won't know about it.
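
A sketch of the kind of example meant here:

const foo5 = { name: 'a', id: 1, othen: 'oops' } as Foo; // compiles: the misspelled property is silently ignored
const foo6: Foo = { name: 'a', id: 1, othen: 'oops' };   // error: 'othen' does not exist in type 'Foo'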

In `foo5` and `foo6` the property `other` is incorrectly spelled `othen` but only the literal without the assertion catches this.

Primitives

For primitives, type assertions error as you would expect when assigning a value of one type to a variable of a different type, albeit with a much more long-winded error message.
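
For example (a sketch; the variable names are illustrative):

const n1: number = 'not a number';   // error: Type 'string' is not assignable to type 'number'
const n2 = 'not a number' as number; // error too, but with the longer "neither type sufficiently
                                     // overlaps with the other" wording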

Things are a bit more interesting with union types of constant values, though. For example, with a union of three constant string values the assertion allows an incorrect string value but it will fail if given a value of another underlying type e.g. a number.
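
For example, with an illustrative union:

type Size = 'small' | 'medium' | 'large';

const s1: Size = 'huge';   // error: '"huge"' is not assignable to type 'Size'
const s2 = 'huge' as Size; // no error: the incorrect string value slips through
const s3 = 123 as Size;    // error: a number doesn't sufficiently overlap with Size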

Arrays

Arrays and type assertions don't agree at all. The assertion doesn't catch any wrong values added to the array...
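
For example (a sketch):

const nums1: number[] = [1, 2, 'three'];   // error: 'string' is not assignable to 'number'
const nums2 = [1, 2, 'three'] as number[]; // no error: the rogue string goes unnoticed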

Tuples

Tuples, however, are a different story: the assertion is a lot stricter and essentially behaves the same as a standard declaration.
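
For example (a sketch):

const pair1: [number, number] = [1, 'two'];   // error
const pair2 = [1, 'two'] as [number, number]; // also an error, just like the declaration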

Summary

As mentioned in the TL;DR - don't mix literals with type assertions. It is fraught with type checking peril and the use of assertions should be restricted to only the cases where TypeScript genuinely has no information about a value.

For the shorthand lambda return values, just ensure you define the return type on the function itself, e.g.
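
A sketch reworking the earlier lambda example:

// Declare the return type on the lambda instead of asserting the literal.
const foos = [1, 2, 3].map((id): Foo => ({ name: 'new', id }));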

For covariant return types writing the declarations out longhand is the safest option.

Up next time - why you shouldn't use parameters with your functions!

27 June 2019

Date timezone changes in Chrome 67

Beginning in Chrome 67 (released on 29 May 2018) there was a change to how timezones are handled in the JavaScript Date object. Historical dates will now have a historically accurate timezone offset applied to them which means that, if you were supplying a UTC date/time to one of the new Date() overloads and then retrieving local time, the value you get back may have changed from Chrome 66 to Chrome 67.

For example, with the machine timezone set to London, if you evaluate...

new Date(0).getHours()
  • 0 ... on Chrome 66
  • 1 ... on Chrome 67

0 here is treated as a milliseconds offset from Unix epoch time (1970-01-01T00:00:00Z) so the date is holding a value of 1970-01-01T00:00:00Z.

getHours returns a local timezone adjusted version of that value which, by today's timezone offset rules, is still 0, because on 1st January London is at UTC+0. The historical time adjusted by today's offset rules is what Chrome 66 gives us.

Chrome 67 applies the correct historical offset that was in effect on that date for the current system timezone. Weirdly, in 1969 and 1970 London didn't observe a daylight saving change and was at UTC+1 for the whole year, hence the value returned by getHours() is `1` because the local time was 1970-01-01T01:00:00+01:00.

A common use of the new Date(milliseconds) constructor overload is to take a small number of milliseconds and format it as a duration, e.g. "01:30:38 remaining", discarding the date part altogether. A similar problem was highlighted by Rik Driever in his post on the change [1].
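
The problematic pattern looks something like this (an illustrative sketch):

// Treat a duration in milliseconds as a Date and format it with the local getters.
const msRemaining = (1 * 60 * 60 + 30 * 60 + 38) * 1000; // 1h 30m 38s
const d = new Date(msRemaining);
const label = d.getHours() + ':' + d.getMinutes() + ':' + d.getSeconds() + ' remaining';
// With the system timezone set to London: "1:30:38 remaining" on Chrome 66, "2:30:38 remaining" on Chrome 67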

It was Rik's post that led me to the Chromium change which introduced this new behaviour [2] and the related change to the ECMA standard [3].

In his post Rik concludes that the issue is down to an incompatibility between JavaScript and .NET's JavaScriptSerializer, and he attempts various workarounds to account for the offset being applied to the Date object, without much success.

In fact, JavaScript and .NET are working together fine, and there are two easy ways to get your intended value back out of the Date.

Option 1 - Use the getUTC* methods instead

The millisecond value we're passing in is UTC, and what we really expect to get out is also UTC, so we should use the getUTC* methods instead, e.g. getUTCHours() and getUTCMinutes(). The fact that getHours() was returning the value we expected in Chrome 66 and before was a coincidence, and we should never have been using getHours() in the first place.
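
Reworking the duration sketch from above (illustrative):

const d = new Date(5438000); // 1h 30m 38s, i.e. 1970-01-01T01:30:38Z
const label = d.getUTCHours() + ':' + d.getUTCMinutes() + ':' + d.getUTCSeconds() + ' remaining';
// "1:30:38 remaining" in any timezone and any Chrome version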

Some code coincidentally giving you the right value, so you assume it's correct, is a very common cause of bugs and this is a great example. It's also a great example of why you should use constants as the expected values in unit tests: if you were to use a value returned by new Date(blah).getHours() as your expected value, your test would still pass even though the behaviour had changed.

Option 2 - Initialise the Date with a local time

If you want to keep using the local offset methods of the Date, e.g. getHours() and getMinutes(), then you can initialise the date slightly differently to get the result you expect:


new Date(1970, 0, 1, 0, 0, 0, milliseconds)

This overload of new Date() expects a local time, so doing this instead will initialise the date to midnight local time, and the constructor gracefully handles a millisecond value greater than 999 by incrementing the other parts of the date by the correct amount. So for the new Date(0) example the date being held is 1970-01-01T00:00:00 at the local offset, and getHours() will return 0 for any system timezone and any Chrome version.
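
For the duration sketch that looks something like this:

const milliseconds = 5438000; // 1h 30m 38s
const d = new Date(1970, 0, 1, 0, 0, 0, milliseconds); // midnight local time plus the duration
// d.getHours() === 1, d.getMinutes() === 30, d.getSeconds() === 38, whatever the historical offset rules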

[1] Rik Driever's post
https://medium.com/@rikdriever/javascript-date-issue-since-chrome-67-50aa555799d0
[2] Implement a new spec for timezone offset calculation
https://chromium-review.googlesource.com/c/v8/v8/+/572148
[3] The ECMA spec change
https://github.com/tc39/ecma262/pull/778

10 January 2018

Throttling with BlockingCollection

Recently I was working with a data processing pipeline where some work items progressed through a number of different stages. The pipeline was running synchronously so it would fully complete before picking up the next work item.

The work items were not related in any way so processing them in parallel was an option and, as the different pipeline stages took varying amounts of time, I decided to parallelize each stage separately, use different numbers of worker threads for each stage and separate the stages with queues. The pipeline was running on a single machine with the worker threads all part of the same process, and the queues were just FIFO data structures sitting in RAM - a relatively simple setup.

The issue I encountered pretty quickly was that the stages of the pipeline processed the work items at different rates and, in a couple of cases, not in a predictable way that I could solve by tweaking the number of worker threads used for each stage. Where the stage acting as the consumer of a queue was going slower than the stage acting as the producer, the backlog of pending items built up and used all the available memory pretty quickly.

I needed to be able to limit the number of pending items in each queue and block the publishers to that queue until the consumers caught up.

One way of achieving this is using semaphores to keep track of the number of "slots" used and have the producer threads block on the semaphore until a slot is available.

Another option is the underutilised TPL Dataflow library; solutions which work this way are relatively simple and examples are out there on the web, such as this one on Stephen Cleary's blog where a BoundedCapacity is applied.

The option I went with was to wrap my ConcurrentQueue in a BlockingCollection with boundedCapacity specified. This has the effect of causing any Add operations on the collection to block until there is space available. Below is an example from MSDN slightly tweaked to introduce throttling to the producer Task.

You can see from the example output that, once the collection is at capacity, the producer is forced to wait for the consumer to free up space in the collection before it can add more items.
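
The original C# example isn't reproduced here, but as a rough TypeScript sketch of the same idea, a bounded queue can throttle a fast producer by making its add operation wait until a slot is free:

// A bounded queue: add() resolves only when there is space, which is the same
// throttling effect a bounded BlockingCollection gives you in .NET.
class BoundedQueue<T> {
  private items: T[] = [];
  private waiters: Array<() => void> = [];

  constructor(private capacity: number) {}

  async add(item: T): Promise<void> {
    // Wait for a free slot; each take() wakes one waiting producer to re-check.
    while (this.items.length >= this.capacity) {
      await new Promise<void>(resolve => this.waiters.push(resolve));
    }
    this.items.push(item);
  }

  take(): T | undefined {
    const item = this.items.shift();
    this.waiters.shift()?.();
    return item;
  }
}

// e.g. const queue = new BoundedQueue<string>(100) sitting between two pipeline stages.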

28 December 2014

Camera memory card backup on the go

I'm recently back from a two-and-a-bit week holiday in Peru and, before we went, the wife and I invested in new cameras to catalogue our adventures. As our cameras are both enthusiast/semi-pro level you have the option of shooting in RAW format to take advantage of greater post-processing capabilities; however, the huge file sizes involved can be a real problem. Both our cameras use SDHC cards and we bought four with pretty high capacity and good performance. Even these large cards didn't leave us with much room for two weeks' worth of photos and we really wanted to be able to easily back up photos while we were away should we lose a card or if it got corrupted.

Jobo Giga One 300
On previous holidays we took a portable hard disk with a built-in card reader which worked really well; however, you don't seem to be able to buy these any more. I'm guessing that, these days, with more portable PC options like netbooks or ultrabooks, a lot of people use those to back up, so demand for an alternative has dropped. An iPad with a decent amount of internal storage and a Lightning to SD adapter would also be an option.

We didn't want to buy a small laptop just for backup purposes as netbooks are still quite expensive and we were looking for a cheaper option, preferably that made use of my Nexus 10 Android tablet.

The Nexus 10 has a micro USB port which you use for charging the device but when you plug in an OTG (on-the-go) cable it gives you a full size USB port into which you can plug many different types of USB device and the Nexus 10 will host them and use their capabilities. For example, plugging in a USB keyboard will allow you to input text as you would on a full PC. Plugging in a USB hub allows you to connect multiple devices at the same time as with any other PC. Lots of other Android tablets have a micro USB port and will work in the same way, not just the Nexus devices.

What we ended up taking with us was:
  • my Nexus 10 tablet
  • a micro USB OTG cable
  • a small USB hub
  • a USB SD card reader
  • two USB memory sticks to hold the backups

The hub and card reader have the advantage that they're both about 2" square so they form quite a compact unit and you could, for example, wrap an elastic band around them to keep them together. We took a small (3" x 5" x 2") tupperware-style box and, tablet excluded, all this fitted in along with a couple of spare camera batteries and SDHC cards.

The other piece of the puzzle to get it all working is the Nexus Media Importer app. Ignore the "Nexus" in the name; this app should work with any "Android 4.0+ devices with USB Host support". The app supports a variety of different media files (photos, video, audio) and allows you to preview files as well as perform file management operations (move, copy, delete, etc). Usefully, the app (or Android itself) has native support for all the major RAW file formats so, regardless of what make of camera you have, you should be able to preview your photos right in the app.

Putting the pieces together

Using the USB hub means you can plug in the card reader and a memory stick (or a USB hard disk) at the same time - they all connected together and plugged in to the tablet as illustrated here:


Note that, if you're using a USB hard disk you'll probably need a powered USB hub unless the hard disk has its own power supply.

Once Nexus Media Importer is installed, when you connect a mass storage device you get a popup message asking if you want to open the app:


After you select OK and the app opens you'll be prompted to select the storage device you want to import from:


This is fine if you want to copy your photos onto the device's internal storage, but we want to copy from one external device (the SDHC card) onto another (the USB pen drive), and to do that we switch into the app's "Advanced" mode by selecting it in the drop-down on the right that currently says "Importer".


Here we select our source and destination respectively and the app then switches to a view showing you the source file system on the left and the destination on the right.


Navigating to the correct folder is somewhat counter-intuitive at first as you have to tap the folder name to go into that folder; tapping the folder icon to its left selects the folder, which means you can copy entire folders quite easily.

Once you've found the right folders (the folder your camera saves photos to on the left and the place you're backing up those photos to on the right), the app has a great feature allowing you to select just the new photos and only copy those.


Once you've made a selection the other options such as "Copy" and "Move" become available in the menu and you pick the one you want.


Select one and you'll get a prompt about the action you're about to perform - hit OK and the transfer begins. The file transfer goes on in the background, meaning you can swap to a different app while the transfer is happening or even put the tablet into standby to save power.

Assuming the read and write speeds of the memory cards and memory sticks you're using are good, the transfers shouldn't take too long - the Transcend SDHC cards we bought had 90MB/s read speeds which made backup nice and quick.

We made two backups of our photos onto the two USB sticks, my wife kept one and I kept the other, and then we just formatted and reused our SDHC cards as required. All in all, it was a fairly low cost, space and weight efficient solution that I was really happy with and will be using on subsequent trips.

20 May 2014

DDD South West 5

Last Saturday I was at DDD South West in Bristol. Unlike 2012 I was marginally more organised (thanks to a timely prompt from @mjjames) so I was straight in rather than going via the waiting list.

As ever, this instalment maintained the high standards of organisation, variety of quality sessions and great weather (at least at the ones I've attended) that I've come to expect from DDD events.

This year's addition of the Pocket DDD web app, which allowed you to browse the agenda and give session feedback, added an extra point of interaction which seemed to work really well. I look forward to seeing how the DDD guys utilise the app for other things in future – linking out to Twitter and pre-populating a session hashtag, maybe?

Sessions

This time around I ended up only attending sessions from people I haven’t seen speak before. The ones I went to were:

​Continuous Integration, in an hour, on a shoestring; Phil Collins

I found this session to be a great, light-hearted opener to the day with much praying to the demo gods as Phil attempted to set up a complete CI environment and show it working end-to-end in an hour. He was successful.

Complexity => Simplicity; Ashic Mahtab

This session was broadly a look at Domain Driven Design and how, when exercising it, you need to change your way of thinking about problems to create a less coupled solution.

F# Eye for the C# Guy; Phil Trelford

This was one of those "mind blown" sessions and it provided a great introduction to the power of F#. I understand what @dantup has been banging on about now.

The amount of content covered I found to be ideal and Phil’s delivery was great – definitely a presenter I’ll look out for in future!

An introduction to Nancy; Mathew McLoughlin

Somehow I’ve managed to avoid talks about Nancy up to now and, although I’ve had cursory looks at the documentation for it in the past, I thought I’d attend Mat’s talk and actually see it in action to gain a better insight.

Mat managed to cover quite a lot in this session and it was interesting to see how it differed from ASP.NET MVC and Simple.Web which I’m more familiar with.

10 things I learnt about web application security being pen tested by banks; James Crowley

Security talks tend to have a habit of making you walk out incredibly worried about your products out in the wild and this one was no exception.

I’m pretty familiar with the standard vulnerabilities for web sites – things like the OWASP Top 10 – but there’s nothing like a really scary demo of exploiting them with some script kiddie tools to really hammer home how much of a security risk they represent.

James managed to pack a lot of good advice into the hour with demos where appropriate and this was a great end to the day.

 

Overall it was a very enjoyable day – organisation and catering were great, the sessions were of a very high standard and it was good to catch up with some folks I haven’t seen in a while. Big thanks to everyone involved.

27 December 2013

Windows 8.1 on high DPI

I’ve been working with Windows 8.1 on a Dell XPS 15 for about eight weeks now and I thought I’d share some of my experiences of working with display scaling as the Dell has a 3200x1800 display.

Being what Apple would term “Retina”, the display has a pixel density of almost 250 PPI, which is matched only by a handful of other Windows laptops at the moment. Until recently the limiting factor in this area has been that desktop operating systems have expected a display’s resolution to scale more or less linearly with its size meaning the pixels per inch didn’t change a great deal.

Using fonts as an example, 10 point text should be about 10/72 of an inch, or 3.5mm, high (1 point = 1/72 inch). Windows, by default, renders 10 point text about 13 pixels high which, if you do the math, assumes a PPI of 96. Some background on where this 96 comes from can be found in this MSDN article. In the case of printers, when you print 10 point text you will get text that is 3.5mm high regardless of the printer's DPI. The higher the DPI the crisper the text will appear, but the characters will be the same size. The same is not true for displays, however. This hasn't been so much of a problem up until now because average pixel density has been between about 90 and 120, but now that we're nearer to 250 pixels per inch that same 10 point text is only about 1mm high, which is essentially impossible to read.
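
A quick sanity check of that arithmetic (a sketch):

// 10 point text at Windows' assumed 96 PPI comes out at about 13 pixels...
const pixels = (points: number, ppi: number) => (points / 72) * ppi;
console.log(pixels(10, 96).toFixed(1));  // "13.3"

// ...which is roughly 3.5mm on a ~96 PPI panel but only just over 1mm on a ~250 PPI panel.
const mmHigh = (px: number, ppi: number) => (px / ppi) * 25.4;
console.log(mmHigh(13, 96).toFixed(1));  // "3.4"
console.log(mmHigh(13, 250).toFixed(1)); // "1.3"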

Obviously, with high DPI displays some solution to this, and to the reasonable scaling of other elements on screen, is required so that we can have nice crisp visuals that aren't comically tiny. The operating systems are finally catching up and Windows 8.1 has some usable scaling options for a high DPI display, but it's fair to say that, yet again, Apple have led the charge in this department with their Retina options in OSX.

In Windows 8.1 the scalings are 100%, 125%, 150% and 200%. Set at 200% on the XPS 15, for example, this renders things like text at the size you would see on a 1600x900 display. The scaling happens differently depending on the application. For most classic desktop applications such as Chrome it simply does a crude resize – essentially rendering at 1600x900 and then blowing the image up – so you get a lot of pixelation and rough edges. For "Metro" apps and some desktop apps the scaling factor is passed to the app, which scales the sizes of the UI elements as appropriate but renders them using the full resolution of the display.

It’s reasonable but it’s far from perfect unfortunately as there are still a lot of visual elements which don’t scale quite right and every so often you encounter some custom rendered dialog that isn’t scaled at all and you have to break out the magnifier tool.

Another oddity, which may be exclusive to the drivers for the XPS 15, is that coming out of standby mode loses the scaling option. It switches back to 100% scaling and you have to switch to external display mode and back to force it to pick up the scaling again.

Hopefully things will improve with updates to applications and subsequent revisions of Windows.