<h2>Type checked "visitor" for discriminated unions using mapped types</h2>
<p><em>Derek Fowler &middot; 6 August 2019</em></p>
<p>Discriminated unions are one of the most useful features of TypeScript. Once you've tested the discriminator value, TypeScript can apply type checking based on the relevant member type of the union.</p>
<p>So, for a discriminated union of shapes, like this...</p>
<script src="https://gist.github.com/dezfowler/cc5012581093080f54763ffe8e54988b.js?file=union.ts"></script>
<p>It's possible to write a <var>calculateArea</var> function using a <var>switch</var> on the discriminator (<var>__typename</var> in this case), like this...</p>
<script src="https://gist.github.com/dezfowler/cc5012581093080f54763ffe8e54988b.js?file=switch.ts"></script>
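The gists are embedded scripts, so here's a sketch of what the union and switch might look like; the discriminator values are from the post, but the member property names are my assumptions.

```typescript
// A discriminated union of shapes; "__typename" is the discriminator.
type Shape =
  | { __typename: "Circle"; radius: number }
  | { __typename: "Square"; side: number }
  | { __typename: "Rectangle"; width: number; height: number }
  | { __typename: "Triangle"; base: number; height: number };

function calculateArea(shape: Shape): number {
  // Testing the discriminator narrows "shape" in each case body,
  // so only that member's properties resolve.
  switch (shape.__typename) {
    case "Circle":
      return Math.PI * shape.radius * shape.radius;
    case "Square":
      return shape.side * shape.side;
    case "Rectangle":
      return shape.width * shape.height;
    case "Triangle":
      return (shape.base * shape.height) / 2;
  }
}
```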
<p>Note that you can use the correct properties for the relevant member type of the union in each <var>case</var> body and you'll also get Intellisense prompting you with the correct property names.</p>
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2lPdfHKuZzi0gbkRnHzpe46V7tKvPAcbHkYpax_pYLLYh9hI5k67YVjJQWc6ZeALOuAvB3iQQjorbLvaMxDIK98NSKKxRA7VJ-OTC2YIUmsnisqvnQf1oYKfI68bUQnyQ0SjP/s400/Screenshot+2019-08-06+at+14.11.30.png" width="400" height="155" data-original-width="343" data-original-height="133" />
<h4 id="exhaustiveness-problem">Exhaustiveness problem</h4>
<p>However, depending on the TypeScript compiler options you have set, you may miss cases: a type not covered by a <var>case</var> in the switch just falls through and the function returns <var>undefined</var>. Adding the <var>--noImplicitReturns</var> option makes the compiler complain in this situation, but that may not be an option for you if you make use of implicit returns elsewhere. Another workaround is to assign your instance to <var>never</var> in the switch's default e.g.</p>
<script src="https://gist.github.com/dezfowler/cc5012581093080f54763ffe8e54988b.js?file=safe-switch.ts"></script>
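A sketch of the <var>never</var>-default pattern, on a reduced two-member union for brevity (property names assumed):

```typescript
type Shape =
  | { __typename: "Circle"; radius: number }
  | { __typename: "Rectangle"; width: number; height: number };

function calculateArea(shape: Shape): number {
  switch (shape.__typename) {
    case "Circle":
      return Math.PI * shape.radius * shape.radius;
    case "Rectangle":
      return shape.width * shape.height;
    default: {
      // If a new member is added to Shape but not handled above,
      // "shape" is no longer narrowed to "never" here and this
      // assignment becomes a compile error.
      const exhaustivenessCheck: never = shape;
      throw new Error("Unhandled shape: " + JSON.stringify(exhaustivenessCheck));
    }
  }
}
```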
<p>This forces the compiler warning, but it's something you have to remember to add every time you write a switch like this, so it's not exactly a "pit of success".</p>
<h3 id="mapped-types-method">Mapped types method</h3>
<p>There is an alternative to the switch using mapped types instead which allows you to declare a map of the discriminator value to the correct operation for the type, like this...</p>
<script src="https://gist.github.com/dezfowler/cc5012581093080f54763ffe8e54988b.js?file=visitor.ts"></script>
<p>The <var>UnionMap</var> mapped type which makes this work is defined like this...</p>
<script src="https://gist.github.com/dezfowler/cc5012581093080f54763ffe8e54988b.js?file=union-map.ts"></script>
<p>It makes use of conditional types to enumerate the member types of the union and collate the discriminator values into a type, e.g. <var>UnionKeys&lt;Shape, '__typename'&gt;</var> is <var>'Circle' | 'Square' | 'Rectangle' | 'Triangle'</var>. The mapped type <var>UnionMap</var> then requires a key for each of these string values to be present using <var>[K in ...]:</var>, and does a "reverse lookup" of the original member type for each key using <var>UnionPartForKey</var> to provide the function argument type.</p>
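The post names <var>UnionKeys</var>, <var>UnionPartForKey</var> and <var>UnionMap</var> but their definitions live in the gist; the following is one plausible implementation, on a reduced two-member union for brevity. The dispatch cast in <var>calculateArea</var> is my addition, not necessarily how the gist does it.

```typescript
type Shape =
  | { __typename: "Circle"; radius: number }
  | { __typename: "Rectangle"; width: number; height: number };

// Collate the discriminator values into a string literal union,
// e.g. UnionKeys<Shape, "__typename"> is "Circle" | "Rectangle".
type UnionKeys<T, D extends keyof T> = T extends unknown ? T[D] : never;

// "Reverse lookup": recover the union member whose discriminator is K.
type UnionPartForKey<T, D extends keyof T, K> = T extends unknown
  ? T[D] extends K ? T : never
  : never;

// Require one correctly typed function per discriminator value.
type UnionMap<T, D extends keyof T, R> = {
  [K in UnionKeys<T, D> & string]: (part: UnionPartForKey<T, D, K>) => R;
};

// Omitting a key, or using the wrong property inside a handler,
// is a compile error even with relaxed compiler options.
const areaMap: UnionMap<Shape, "__typename", number> = {
  Circle: s => Math.PI * s.radius * s.radius,
  Rectangle: s => s.width * s.height,
};

function calculateArea(shape: Shape): number {
  // Indexing with a union key yields a union of function types;
  // the cast collapses it so the call type-checks.
  return (areaMap[shape.__typename] as (s: Shape) => number)(shape);
}
```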
<p>Using this method is cleaner and less error-prone than the switch method. It will cause compile errors, even with relaxed compiler options, if a discriminator value is missed out of the map object.</p>
<h3 id="playground">Playground</h3>
<p>I've created a TypeScript playground with mapped types <a href="http://www.typescriptlang.org/play/#code/PQKhCgAIUgVBVAdgSwPaMpr24AsCmkAriupAC4CeADvlDLACLIDOAxgE7IC2yiAhuVQcssApAAmrTjz6Dhkah1S0OVSAO51owcFVqQkaRAGl8lFgB4EpRABo4zdrIFCR+AB7l8iCS0gA1uaoAGZwRugAfJAAvJBQ4bYA2kzSLvIcALqQnt6+-izkXIgA5pAAPhpE3ABG+CKVLJS1qAA2kAD88dg2xilObOlu2QBcGvgAbvUA3ODgoBDQicZ4hCQr+toMA1y8rgpihFLsu3Juisqq6ppbjtKn+xwAavytRISHkvdDChOv7xRUJAOPgishJoRyOJ1mRqPw1PRdJtDLYAArw8gAMWEZkolgSvXQDgJAx+7i8Pj8gWCYUJ9m6dxOZJeb0IuUp-gipnMVjpDlSzj2GUiCWicQSmDpOQp+Ug-EQlE6EtEO2Z-zZMqpdP6aSFw06DJwyzIyswY0QEI4yvNltm8zA9GNGE+MIwm0dApkeoO4mOXrOCiUKnq134Wg9ACVQUQOBgo+QY26aIQQgooYRChxBPgSoqQiQ2ORjP5EXpkyjjABZfjUfGSrnEyWk73kvJUoKUUJOxtweOJ2KQCaoZASUUDgDeDKSJkgfAr6FxvIbjMGLcimTGAAo4Woxlz0WpsRxcdZl56yQ4TJEAJSxaKwPux+IAXztyIAwsgOGxWoQ4uOEgAfUAzYbjGAAiT9v1-cDZkwLMpCIFhzWqOoOFmV85mRKNC3lEpfwnICQOTMDIHAnDyDwmC4MgAB3EcoRQ2oZgSAhkBKXByCYtCMLfcsAGUAEciHhP9IAAzBgNAsN8AgoSRJBWCEhYEdZKqZj0PATCywMWAuCosSJMgKSSJkiC9OQAylMwGp+BYNTEFQljMDYjiuPUnitL4gx+NwGsxKgn9CEqeTRIqSAKIM8KLIMu02HQQo5RBfh31eNgiFaQRjD3Wxq1rXz-IcAByEzaBuIqHEcjSxXEhJAt-MYWD8gwYmiasoQAOlRABJJZNya-yOoQ5AkMgAA9SAACZrzsBJItKBrIAGlromW-AOvoiQoSWNaOtczjZswUKQUa5q-1Ws6OpUiRCAmybDrgfSFrUta70gfrLts+zIGAKbbxgXb9vILy5nixBErWsYCpa2rJOIsqzMgIr6vwCqEmGpCxgAVhB8AwcS0T+AHQnUtadLMqLdAkl20qfBkzIPv8684oStp1taVASk3QnmfAIA">here</a>.</p>
<h3 id="references">References</h3>
<ul>
<li><a href="https://www.typescriptlang.org/docs/handbook/advanced-types.html#discriminated-unions">Discriminated unions</a></li>
<li><a href="https://www.typescriptlang.org/docs/handbook/advanced-types.html#mapped-types">Mapped types</a></li>
<li><a href="https://www.typescriptlang.org/docs/handbook/advanced-types.html#conditional-types">Conditional types</a></li>
</ul>
<blockquote>
<p>Written with <a href="https://stackedit.io/">StackEdit</a>.</p>
</blockquote>
<h2>TypeScript Gotchas: Type Assertions</h2>
<p><em>Derek Fowler &middot; 24 July 2019</em></p>
<h3>TL;DR</h3>
<p>In summary, unless you want lots of weird non-checking of your types, avoid mixing type assertions and literals. The critical thing to remember is that when you use a type assertion you're telling TypeScript that you know what the type is and it doesn't need to check. If there is genuinely no way TypeScript can check the value (i.e. it's a runtime value; maybe it came back from a web service) then it's valid to use an assertion, possibly after you do your own validation. But if you're writing a literal value in TypeScript then the compiler knows everything about the value and the context, so you shouldn't <em>assert</em> anything and should leave the compiler to do its job in peace.</p>
<h3>The details...</h3>
<p>I think TypeScript is the best thing to happen to JavaScript since Douglas Crockford's <a href="https://www.amazon.co.uk/gp/product/0596517742/ref=as_li_tl?ie=UTF8&tag=dezfowler-21&camp=1634&creative=6738&linkCode=as2&creativeASIN=0596517742&linkId=5570853ed26b70f234c50be1649f1ab4">JavaScript: The Good Parts</a> and the linters which checked your code against its recommendations. However, in many ways TypeScript is much more a security blanket than a safety net, as it's easy to break its type checking without realising. I'm going to dive into a few of the most common gotchas (in my limited experience using it) in a couple of blog posts. This first post covers what I consider the worst offender: declaring literals using type assertions.</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=type-assert-example-1.ts"></script>
<p>There is only a one-character difference in length between these two statements, but the behaviour in terms of type checking is very different. It is very common to see the assertion used as shorthand in places where you wouldn't normally use an explicit declaration, such as the return value of a lambda, e.g.</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=type-assert-example-2.ts"></script>
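A minimal sketch of the two forms side by side; the <var>Foo</var> type and its properties are placeholders rather than the gist's actual code:

```typescript
interface Foo {
  name: string;
  size: number;
}

// Annotation: the compiler checks the literal against Foo in full.
const checked: Foo = { name: "a", size: 1 };

// Assertion: the compiler is told to trust us instead of checking,
// so the guarantees are far weaker.
const trusted = { name: "a", size: 1 } as Foo;
```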
<p>When you start writing literals in this way in one place in your code, you'll fall into a habit of using this everywhere.</p>
<p>I think one of the main reasons why this is so prevalent is that the behaviour of the IDE when using a type assertion like this is very similar to declaring a value the "right" way. You get intellisense prompting you with the property names of the asserted type...</p>
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjz5WfYKmVj-rF0ORDsbFTFXut8Ye2SRQyitfci1CefPL6o_3Vq8RuOxmR_vs-pKAcFA2EPusRjz3IgkouD87oV7bMZmD_byvecUEyTP7-qtqWJwLvdV-Jh-UDgHikhmhy81A8f/s320/Screenshot+2019-07-24+at+08.23.39.png" width="320" height="125" data-original-width="429" data-original-height="168" />
<p>...and when you start typing, if you provide an incorrect property name, you get a compiler warning...</p>
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjr9X-L4tQsrFgrumJlAnCapGQ2ugIx9T9u1t1kfsk7SmeN-NBt9_5cdVTB1_PpMaZMCskH3-OlZfkO5XU-EQ4Gg0DFOy6y7BDY5N7Ivc3LlpXmnYYQXXcjL0Cb62CF5nq_HU1d/s400/Screenshot+2019-07-24+at+12.37.51.png" width="400" height="133" data-original-width="1100" data-original-height="366" />
<p>These things are all pointing to the type checking on this value being no different to that on a standard declaration - it lulls you into a false sense of security.</p>
<p>A note on versions: this is true as of TypeScript 3.5.1.</p>
<p>I created a <a href="http://www.typescriptlang.org/play/#code/PTAEEYDpQeQIwFYFMDGAXUBDAztpAnNASwHsA7bAKEpFkVQzQE8AHJUAdyLQAtQ0OJUGXIBaEi2LlMAG1At8EgsSRVmbUADESQgLygA3vx5EyAcwBcobGnymzAGlAEAtlbIBXF3AIBuUCS8BAD8VjZ25qAAvr7UtACCuERmZC5IZBgkAGZYZM4ukkwB9OicJPgA1tigWaZI1ABumPg1OuCg+gA82iQAfEYxlE0tWToATB2GUVjVPbE0YAAiJPbGRNWmNkiYACY1mEQy1WhCKCQFh+xcvM74ivgLWHAkHhgu69irCkqERKpDzVaJAAzFYepMDINHgBZD6ibBsGQyVb4JAARw8RFRexYzUwLn2hyowyBABZJt0dP01uYrAByOlOVzYKwABmi81oAEkyFkCKiyCh2OQZEUUDxUFUsEjQKiMVikDi8Wk0ARqs12I9sJhiNhaorcntkiJUeqdjtuKQyLJ5MqkKr8NVsEJHphQO9cPDEcjIihMHlsMiWMZFB4zDwASMdABWCk9am8ez0xm3NygVlOQISsjJjlxMAwMii0A7JC1MgG75sX6qLCo6UyEgcA2mYrIUrIh2ySNAgBsYJ0EJpllADKZ+DTGYCQRzo7peceY2gABVWOwcHhfuRqtc+AoiC4dUQGux1P9aJoDkdONw+Pzyo9QE+nwBhcgnx1WgI5M+j8L2ecTn4NdR08bwCHnQ8ih8LBQEfZ93XWNBMAqdgfD9Dw8GEJBbwIYCNGwDwslqFA-gyYsSA-GRMBYHdbzg2gEOMYUgnwaAuR-EwdxwUBTFVDIrVkJwzjID9GAlZwAA8FFUT5yAYsAmKAukPDICoRA4Mh51qR00EgSgZHteR2i6MCfHwXoGU5MBLyJG8bnvB5GOfVcNDpAAidz53WYRAhmT4UkwOBDP4IRfzpMyIP0wyMBYMZ3C8czJis6hKF-eJkRSNIMmSk4WHnAAfUd3nNQzCtHZ40BOFw6X8fNQAAOSERz7L4UwmmRPYOo8dgIqUOke0wEzQE6DLjWytBLJENhavqt87gYYtHLom54Nfd81S-bJ8N6yL8EAsKQLpMasvSNBIMwaD1wUpiPWQ1DQHQzBMPYCtcJaX9COIohSLOiiqJola+DW5jpwlNjQA4tZuI2DIzsEmRhI2whQaQaTTTkvIQeU1T1KbLSaixGx9JJTAJi6E7UjO3oxms0BgWgeI7kuqhaCa257la3jBXKVFSjpSqTgJ7gkBcHtJOG0bMqpjIAG0AF1elluk8pTFWJDVwXyDpeW6fZlrdxLIhiIIM7QFU0t8FFVZutexKCHF8mRspiaFaV9X8qcD2UzGXW5t5xaimW1q1tc3r3K1sgvN46oRAwDdjSCkKgPCl2zrpEnAUk0FQDTuX5cmZXVa94uKsCIWdbp0kVw8FhDKoR5l1rkLTD5BbBXYHyXHKdh-1Kf09jgWQpXkoIbtMM4FtKW3WcUhCw9HCPy-IaOfLj0AzgKPFgtPQ63Lz87M5aSTyS6WWD6cA-FaLjWvcjyuQdABeIvt-aY98jBN9xfAk93nbRwPhncWsYz4X1ztLCa19vZOF9vMIAA">TypeScript playground</a> with these examples in for you to follow along.</p>
<h3>Object literals</h3>
<p>In the example above, rather than defining a literal of the target (asserted) type, what you're actually doing is defining a literal of an implicit type which you then assert as the target type, so it's the same as doing this...</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=type-assert-example-3.ts"></script>
<p>For example, taking this trivial type with two required properties and one optional...</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=type-assert-1.ts"></script>
<p>When processing the type assertion, TypeScript simply checks that any properties on the source type with a name matching one of the target type's properties also have the same type as that property. The first gotcha is that, if there are no properties on the source type, it matches even if the target type has required properties...</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=type-assert-2.ts"></script>
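A minimal reproduction of the empty-object gotcha, using a placeholder <var>Foo</var> type (the behaviour described is the post's TypeScript 3.5):

```typescript
interface Foo {
  name: string;
  size: number;
  other?: string;
}

// Compiles without complaint even though both required properties
// are missing; at runtime "empty.name" is simply undefined.
const empty = {} as Foo;

// A plain annotation would reject this:
// const checked: Foo = {}; // error: missing "name" and "size"
```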
<p>If properties <em>are</em> declared on the source you need to supply all the required properties of the target, but beyond that any additional properties on the source type are disregarded unless their name matches a target property. The gotcha arising here is that, if you have a typo in the name of an optional property, you won't know about it.</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=type-assert-3.ts"></script>
<p>In <var>foo5</var> and <var>foo6</var> the property <var>other</var> is incorrectly spelled <var>othen</var>, but only the literal without the assertion catches this.</p>
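The gist's <var>foo5</var>/<var>foo6</var> aren't shown inline; a minimal reproduction of the misspelled optional property, with names changed:

```typescript
interface Foo {
  name: string;
  size: number;
  other?: string;
}

// The required properties must be present, but the misspelled
// optional property "othen" slips through the assertion, so
// "foo.other" is silently undefined at runtime.
const foo = { name: "a", size: 1, othen: "oops" } as Foo;

// With an annotation the typo is a compile error:
// const bar: Foo = { name: "a", size: 1, othen: "oops" }; // error
```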
<h3>Primitives</h3>
<p>For primitives, a type assertion errors as you would expect when assigning a value of one type to a variable of a different type, albeit with a much more long-winded error message.</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=primitive-assert-1.ts"></script>
<p>Things are a bit more interesting with union types of constant values, though. For example, with a union of three constant string values, the assertion allows an incorrect string value but will fail if given a value of another underlying type, e.g. a number.</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=primitive-assert-2.ts"></script>
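A sketch of the string-union gotcha with a placeholder union; the value is routed through a <var>string</var> variable here so the behaviour is stable across TypeScript versions (the post is written against 3.5):

```typescript
type Size = "small" | "medium" | "large";

let label = "enormous"; // inferred as string, not a literal type

// string and Size overlap, so the assertion is accepted even
// though the runtime value is not a member of the union.
const wrong = label as Size;

// Asserting from an unrelated underlying type fails to compile:
// const bad = 5 as Size; // error
```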
<h3>Arrays</h3>
<p>Arrays and type assertions don't agree at all. The assertion doesn't catch any wrong values added to the array...</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=array-assert-1.ts"></script>
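A minimal sketch of the array gotcha with a placeholder element type:

```typescript
interface Foo {
  name: string;
  size: number;
}

// The second element has neither required property, yet the
// assertion on the array literal accepts it; at runtime foos[1]
// is just an empty object.
const foos = [{ name: "a", size: 1 }, {}] as Foo[];

// An annotation catches it:
// const checked: Foo[] = [{ name: "a", size: 1 }, {}]; // error
```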
<h3>Tuples</h3>
<p>Tuples, however, are a different story: the assertion is much stricter, behaving essentially the same as a standard declaration.</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=tuple-assert-1.ts"></script>
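A sketch of the tuple behaviour with a placeholder tuple type:

```typescript
type Pair = [string, number];

// With a tuple type the assertion behaves like a standard
// declaration: element types and length are both checked.
const ok = ["a", 1] as Pair;

// Both of these fail to compile, just as annotations would:
// const wrongOrder = [1, "a"] as Pair;
// const tooLong = ["a", 1, true] as Pair;
```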
<h2>Summary</h2>
<p>As mentioned in the TL;DR - don't mix literals with type assertions. It is fraught with type checking peril and the use of assertions should be restricted to only the cases where TypeScript genuinely has no information about a value.</p>
<p>For shorthand lambda return values, just ensure you define the return type on the function type itself e.g.</p>
<script src="https://gist.github.com/dezfowler/b81b210c888b19aeb4057384b95ddc12.js?file=lambda-alternative.ts"></script>
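A sketch of that fix, again with a placeholder <var>Foo</var> type:

```typescript
interface Foo {
  name: string;
  size: number;
}

// Annotating the function type means the returned literal is
// checked in full: no assertion on the literal is needed, and a
// missing or misspelled property is a compile error.
const makeFoo: () => Foo = () => ({ name: "a", size: 1 });

// Compare with the assertion form, which re-introduces the gotchas:
// const makeFoo2 = () => ({ name: "a" } as Foo); // compiles!
```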
<p>For covariant return types writing the declarations out longhand is the safest option.</p>
<p>Up next time: why you shouldn't use parameters with your functions!</p>
<h2>Date timezone changes in Chrome 67</h2>
<p><em>Derek Fowler &middot; 27 June 2019</em></p>
<p>Beginning in Chrome 67 (released on 29 May 2018) there was a change to how timezones are handled in the JavaScript Date object. Historical dates now have a historically accurate timezone offset applied to them, which means that, if you were supplying a UTC date/time to one of the <var>new Date()</var> overloads and then retrieving local time, the value you get back may have changed from Chrome 66 to Chrome 67.</p>
<p>For example, with the machine timezone set to London, if you evaluate...</p>
<pre><code>new Date(0).getHours()</code></pre>
<p>...you get:</p>
<ul>
<li>0 ... on Chrome 66</li>
<li>1 ... on Chrome 67</li>
</ul>
<p>0 here is treated as a milliseconds offset from Unix epoch time (1970-01-01T00:00:00Z) so the date is holding a value of 1970-01-01T00:00:00Z.</p>
<p><var>getHours()</var> returns a local-timezone-adjusted version of that value. By today's timezone offset rules that is still 0, because on 1<sup>st</sup> Jan London is at UTC+0, and the historical time adjusted by today's offset rules is what Chrome 66 gives us.</p>
<p>Chrome 67 applies the correct historical offset that was in effect on that date for the current system timezone. Unusually, in 1969 and 1970 London didn't observe a daylight saving change and was at UTC+1 for the whole year, hence <var>getHours()</var> returns <var>1</var>: the local time was 1970-01-01T01:00:00+01:00.</p>
<p>A common use of the <var>new Date(milliseconds)</var> constructor overload is formatting a duration from a small number of milliseconds, e.g. "01:30:38 remaining", discarding the date part altogether. A similar problem was highlighted by Rik Driever in his post on the change [1].</p>
<p>It was Rik's post that led me to the Chromium change which introduced this new behaviour [2] and the related change to the ECMA standard [3].</p>
<p>In his post, Rik concludes that his issue is down to an incompatibility between JavaScript and .NET's JavaScriptSerializer, and he attempts various workarounds to account for the offset being applied to the Date object, without much success.</p>
<p>In fact, JavaScript and .NET are working together fine, and there are two easy ways to get your intended value back out of the Date.</p>
<h4>Option 1 - Use the getUTC* methods instead</h4>
<p>The millisecond value we're passing in is UTC and what we really expect to get out is also UTC so we should use the getUTC* methods instead e.g. getUTCHours(), getUTCMinutes(). The fact that getHours() was returning the value we expected in Chrome 66 and before was a coincidence and we should never have been using getHours() in the first place.</p>
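For instance:

```typescript
// new Date(0) holds the instant 1970-01-01T00:00:00Z; the UTC
// accessors read that instant directly, so they return the same
// values in every timezone and every Chrome version.
const epoch = new Date(0);
const hours = epoch.getUTCHours();     // always 0
const minutes = epoch.getUTCMinutes(); // always 0
// epoch.getHours() is the call whose result depends on the
// historical local offset, and so changed in Chrome 67.
```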
<p>Some code coincidentally giving you the right value so you assume it's correct is a very common cause of bugs and this is a great example. It's also a great example of why you should use constants as the expected values in unit tests because if you were to use a value returned by new Date(blah).getHours() as your expected value your test would still pass.</p>
<h4>Option 2 - Initialise the Date with a local time</h4>
<p>If you want to keep using the local offset methods of the Date e.g. getHours(), getMinutes() then you can initialize the date slightly differently to get the result you expect:</p>
<pre><code>
new Date(1970, 0, 1, 0, 0, 0, milliseconds)
</code></pre>
<p>This overload of <var>new Date()</var> expects a local time, so doing this instead initialises the date to midnight local time; the constructor gracefully handles a millisecond value greater than 999 by incrementing the other parts of the date by the correct amount. So in this case the date being held is 1970-01-01T00:00:00&plusmn;&lt;offset&gt; and <var>getHours()</var> will return 0 for any system timezone and any Chrome version.</p>
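A sketch of a duration formatter built this way; it assumes no offset transition occurs in the system timezone within the formatted duration after local midnight on 1970-01-01 (true for practical purposes):

```typescript
// Initialise at local midnight and let the constructor roll a
// large millisecond value into hours/minutes/seconds, then read
// the parts back with the local-time accessors.
function formatDuration(milliseconds: number): string {
  const d = new Date(1970, 0, 1, 0, 0, 0, milliseconds);
  const pad = (n: number) => ("0" + n).slice(-2);
  return pad(d.getHours()) + ":" + pad(d.getMinutes()) + ":" + pad(d.getSeconds());
}
```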
<dl>
<dt>[1] Rik Driever's post</dt>
<dd><a href="https://medium.com/@rikdriever/javascript-date-issue-since-chrome-67-50aa555799d0">https://medium.com/@rikdriever/javascript-date-issue-since-chrome-67-50aa555799d0</a></dd>
<dt>[2] Implement a new spec for timezone offset calculation</dt>
<dd><a href="https://chromium-review.googlesource.com/c/v8/v8/+/572148">https://chromium-review.googlesource.com/c/v8/v8/+/572148</a></dd>
<dt>[3] The ECMA spec change</dt>
<dd><a href="https://github.com/tc39/ecma262/pull/778">https://github.com/tc39/ecma262/pull/778</a></dd>
</dl>
<h2>Throttling with BlockingCollection</h2>
<p><em>Derek Fowler &middot; 10 January 2018</em></p>
<p>Recently I was working with a data processing pipeline in which work items progressed through a number of different stages. The pipeline ran synchronously, fully completing one work item before picking up the next.</p>
<p>The work items were unrelated, so processing them in parallel was an option. As the pipeline stages took varying amounts of time, I decided to parallelize each stage separately, with a different number of worker threads per stage, and to separate the stages with queues. The pipeline ran on a single machine with the worker threads all part of the same process, and the queues were just FIFO data structures sitting in RAM: a relatively simple setup.</p>
<p>The issue I encountered pretty quickly was that the stages processed work items at different rates and, in a couple of cases, not in a predictable way I could solve by tweaking the number of worker threads per stage. Where the consumer stage of a queue ran slower than its producer stage, the list of pending items built up and exhausted the available memory pretty quickly.</p>
<p>I needed to be able to limit the number of pending items in each queue and block the publishers to that queue until the consumers caught up.</p>
<p>One way of achieving this is using semaphores to keep track of the number of "slots" used and have the producer threads block on the semaphore until a slot is available.</p>
<p>Another option is the underutilised TPL Dataflow library; solutions built on it are relatively simple, and examples are out there on the web, such as <a href="https://blog.stephencleary.com/2012/11/async-producerconsumer-queue-using.html" target="_blank">this one on Stephen Cleary's blog</a> where a BoundedCapacity is applied.</p>
<p>The option I went with was to wrap my ConcurrentQueue in a <a href="https://docs.microsoft.com/en-gb/dotnet/api/system.collections.concurrent.blockingcollection-1.-ctor?view=netframework-4.7.1#System_Collections_Concurrent_BlockingCollection_1__ctor_System_Collections_Concurrent_IProducerConsumerCollection__0__System_Int32_">BlockingCollection with boundedCapacity</a> specified. This has the effect of causing any Add operations on the collection to block until there is space available. Below is an example from MSDN slightly tweaked to introduce throttling to the producer Task.</p>
<script src="https://gist.github.com/dezfowler/192f84096037eccba36b01568988279f.js"></script>
<p>You can see from the example output that, once the collection is at capacity, the producer is forced to wait for the consumer to free up space in the collection before it can add more items.</p>
<ul>
<li><a href="https://docs.microsoft.com/en-gb/dotnet/api/system.collections.concurrent.blockingcollection-1?view=netframework-4.7.1">BlockingCollection documentation</a></li>
</ul>
<h2>Camera memory card backup on the go</h2>
<p><em>Derek Fowler &middot; 28 December 2014</em></p>
<div dir="ltr">
I'm recently back from a two-and-a-bit week holiday in Peru, and before we went the wife and I invested in new cameras to catalogue our adventures. As our cameras are both enthusiast/semi-pro level, we have the option of shooting in RAW format to take advantage of greater post-processing capabilities; however, the huge file sizes involved can be a real problem. Both our cameras use SDHC cards and we bought four with pretty high capacity and good performance. Even these large cards didn't leave us with much room for two weeks' worth of photos, and we really wanted to be able to back up photos easily while we were away in case we lost a card or it got corrupted.<br />
<br /></div>
<div dir="ltr">
<div style="text-align: right;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLaNHrIWXH95BU0eUNbMcK4LEorfwiGE3qk-Nz2uoqlIQuIzC5w71Cpb12CHHHk_F7BMomm6FkKSZdZ24kpZlIeZIRX2YjGxHFj8f_D80s2zLLuIBSydxaeGUTm9R0F7OVxyO8/s1600-h/Jobo_Giga_One_300%25255B4%25255D.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img align="right" alt="Jobo Giga One 300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkKPGC2kz5CepzFtZqKKSy8SFrQntyzTs2XmqEmk-BT7rJEy18ApRSU_CxyK1_oNvf6BQ2HLfpsdzYuyPcQjglgLnkRr9cJQH4_W1jNjWyF_kIWaulV3e8us5BiiZ9tdq_T73i/?imgmax=800" height="335" style="display: inline; float: right;" title="Jobo Giga One 300" width="300" /></a></div>
On previous holidays we took a portable hard disk with a built-in card reader, which worked really well; however, you don't seem to be able to buy these any more. I'm guessing that, these days, with more portable PC options like netbooks and ultrabooks, a lot of people use those to back up, so demand for an alternative has dropped. An iPad with a decent amount of internal storage and a <a asin="B00A6TOWXG" href="https://www.blogger.com/null" type="amzn">lightning to SD adapter</a> would also be an option.<br />
<br /></div>
<div dir="ltr">
We didn't want to buy a small laptop just for backup purposes as netbooks are still quite expensive and we were looking for a cheaper option, preferably that made use of my Nexus 10 Android tablet.<br />
<br /></div>
<div dir="ltr">
The Nexus 10 has a micro USB port which you use for charging the device but when you plug in an <a href="http://en.wikipedia.org/wiki/USB_On-The-Go" target="_blank">OTG (on-the-go) cable </a>it gives you a full size USB port into which you can plug many different types of USB device and the Nexus 10 will host them and use their capabilities. For example, plugging in a USB keyboard will allow you to input text as you would on a full PC. Plugging in a USB hub allows you to connect multiple devices at the same time as with any other PC. Lots of other Android tablets have a micro USB port and will work in the same way, not just the Nexus devices.<br />
<br /></div>
<div dir="ltr">
What we ended up taking with us was:</div>
<ul dir="ltr">
<li>My Nexus 10 </li>
<li><a asin="B0064GZAIQ" href="https://www.blogger.com/null" type="amzn">USB OTG cable</a> </li>
<li><a asin="B005GLDAVE" href="https://www.blogger.com/null" type="amzn">Small USB hub</a> </li>
<li><a asin="B003U8NZAQ" href="https://www.blogger.com/null" type="amzn">USB card reader</a> </li>
<li><a asin="B0084DFLLS" href="https://www.blogger.com/null" type="amzn">A couple of 64GB USB memory sticks</a> </li>
</ul>
<div dir="ltr">
This hub and card reader have the advantage that they’re both about 2” square so they form quite a compact unit and you could, for example, wrap an elastic band around them to keep them together. We took a small (3" x 5" x 2") tupperware-style box and, tablet excluded, all this fitted along with a couple of spare camera batteries and SDHC cards.<br />
<br /></div>
<div dir="ltr">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0osPAUZPDzLSnYbsTJRYOZTu61Ma3RxX13mqHb1J_rdux413VbWUt3JGiSJLk-mFFfudlK9j7kHHRHBGXQYJlKN65m4VJVyM_36Z0r_g6MjP0jyEWkCY_WNqB0sS8V_aNKhfK/s1600-h/nexus-media-importer%25255B3%25255D.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img align="right" alt="nexus-media-importer" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlO6oTtk6kUgdk6RVaPEQH4TIaREGtuz1A3zlNSTXE2u-xqldQppUQzGjmbVa_4GssT_VRFTXzvwDdFhwSnZ2jfSkLP2XaxoEP2CD22TJOgWirqB6AlEKdipLTe9zJjTZXxTp4/?imgmax=800" style="display: inline; float: right;" title="nexus-media-importer" /></a>The other piece of the puzzle to get it all working is the <a href="https://play.google.com/store/apps/details?id=com.homeysoft.nexususb.importer&hl=en_GB">Nexus Media Importer</a> app. Ignore the "Nexus" in the name, this app should work with any "Android 4.0+ devices with USB Host support". The app supports a variety of different media files (photos, video, audio) and allows you to preview files as well as perform file management (move, copy, delete, etc) operations. Usefully the app (or Android itself) has native support for all the major RAW file formats so regardless of what make of camera you have you should be able to preview your photos right in the app. </div>
<div dir="ltr">
</div>
<div dir="ltr">
<br style="clear: both;" />
<h4>
Putting the pieces together</h4>
Using the USB hub means you can plug in the card reader and a memory stick (or a USB hard disk) at the same time - they all connected together and plugged in to the tablet as illustrated here:<br />
<br />
<div dir="ltr">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIuhjbOVWPeb_i8cQbJT1A2DEAOdCXojW1YEggcapPSyCkbSttE-VwrLySU3DxeAx2RbuIsdahrdxuMXcx8sFLQbAwZnXYzt5IllleKXVaGNyjrTbEZaKeeLlvSDj9Ki5Gl2s2/s1600-h/DSC_3596%25255B5%25255D.jpg"><img alt="DSC_3596" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAroJ13qZR3KyyoLm_zsq9Yy_R-yVj7CGsP3q3XwHyxs2TJgKzrDWXfxbe0u9B9EhhUTR_s1iKJEc6Bu-UUG-nLlVXmghD4S48uxVGSAoF43JzFNthCQtO-CdDbVSgQDWzPI6L/?imgmax=800" height="401" style="display: block; float: none; margin-left: auto; margin-right: auto;" title="DSC_3596" width="600" /></a></div>
<br />
Note that, if you're using a USB hard disk you'll probably need a powered USB hub unless the hard disk has its own power supply.<br />
<br />
Once Nexus Media Importer is installed, when you connect a mass storage device you get a popup message asking if you want to open the app:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDNw5gh9UXVws2Lfs22pdtqLjJzVnObM-r_TKAXLZCmuFJ1wr8eXngM_IOmLtCVHR8P3o-ZdD6IZayh2iDjVHIjlWHwuryqPjea_W7VCmGwH3a01norPfeA99xsCR9gzaGAlqt/s1600/Screenshot_2014-10-28-20-35-22.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDNw5gh9UXVws2Lfs22pdtqLjJzVnObM-r_TKAXLZCmuFJ1wr8eXngM_IOmLtCVHR8P3o-ZdD6IZayh2iDjVHIjlWHwuryqPjea_W7VCmGwH3a01norPfeA99xsCR9gzaGAlqt/s1600/Screenshot_2014-10-28-20-35-22.png" height="400" width="640" /></a></div>
<br />
After you select OK and the app opens you'll be prompted to select the storage device you want to import from:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtryOFlQ-EuQLAJNVI505Dxtoz_fdPk9UOF3gk1CKoeGynK3cay8s-6rhYtRzsm94GAiPQAFxPdmpyPyt4JNweucKFdvXe4SFG-AtI2TrMQunvXmJh_MxDe9KoxXu3D2JXUc6K/s1600/Screenshot_2014-10-28-20-35-54.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtryOFlQ-EuQLAJNVI505Dxtoz_fdPk9UOF3gk1CKoeGynK3cay8s-6rhYtRzsm94GAiPQAFxPdmpyPyt4JNweucKFdvXe4SFG-AtI2TrMQunvXmJh_MxDe9KoxXu3D2JXUc6K/s1600/Screenshot_2014-10-28-20-35-54.png" height="400" width="640" /></a></div>
<br />
This is fine if you want to copy your photos onto the device's internal storage, but we want to copy from one external storage device (the SDHC card) onto another (the USB pen drive). To do that, we switch into the app's "Advanced" mode by selecting it in the drop-down on the right that currently says "Importer".<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0y_4JZ_2oAMJAh9bVDki62oAFV9bVMjXcXdJ-VRNmGY8HOKGMjjZCPgMNCRWDfiaM8gqC-Qh0hNIkPcjw78CAAX2dPnbDHI89TAx0xvmbqNrK8cAtEW4SmQEh63rJ-XySXIJX/s1600/Screenshot_2014-10-28-20-35-43.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0y_4JZ_2oAMJAh9bVDki62oAFV9bVMjXcXdJ-VRNmGY8HOKGMjjZCPgMNCRWDfiaM8gqC-Qh0hNIkPcjw78CAAX2dPnbDHI89TAx0xvmbqNrK8cAtEW4SmQEh63rJ-XySXIJX/s1600/Screenshot_2014-10-28-20-35-43.png" height="400" width="640" /></a></div>
<br />
Here we select our source and destination respectively and the app then switches to a view showing you the source file system on the left and the destination on the right.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmmseZE7tkuq-1_k5XpZxKordhO5amr-4HA4PVv4K89nkAWaWG7aFJyS1lU5lYAArHOcryr-lEyuZVHI6kuft2x27ovp1hb-w2FVpMVNcE-vhjsqnMQLbtX74z_jikBHsYKdaa/s1600/Screenshot_2014-10-28-20-36-24.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmmseZE7tkuq-1_k5XpZxKordhO5amr-4HA4PVv4K89nkAWaWG7aFJyS1lU5lYAArHOcryr-lEyuZVHI6kuft2x27ovp1hb-w2FVpMVNcE-vhjsqnMQLbtX74z_jikBHsYKdaa/s1600/Screenshot_2014-10-28-20-36-24.png" height="400" width="640" /></a></div>
<br />
Navigating to the correct folder is somewhat counter-intuitive at first: you tap the folder name to go into that folder, while tapping the folder icon to its left selects the folder itself, which means you can copy entire folders quite easily.<br />
<br />
Once you've found the right folders, e.g. the folder your camera saves photos to on the left and the place you're backing those photos up to on the right, the app has a great feature allowing you to select any new photos and copy only those.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZGbDySsL7Dq_5Q80tUtjfepN-9YXdqDdQL0nA9nO4YbwxbbsZyOV6LKa1f8_D9TBkTkU0iRW2_hYIaV_gAM0oS13sI8GoJLR6ngJq7GpOHkouCt1bYsbCCcDGN4bqQEu9XOBW/s1600/Screenshot_2014-10-28-20-38-16.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZGbDySsL7Dq_5Q80tUtjfepN-9YXdqDdQL0nA9nO4YbwxbbsZyOV6LKa1f8_D9TBkTkU0iRW2_hYIaV_gAM0oS13sI8GoJLR6ngJq7GpOHkouCt1bYsbCCcDGN4bqQEu9XOBW/s1600/Screenshot_2014-10-28-20-38-16.png" height="400" width="640" /></a></div>
<br />
Once you've made a selection the other options such as "Copy" and "Move" become available in the menu and you pick the one you want.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj21Q6Les4be4e3w_JenMGT5JgSnVWmjO9OIqnTOAUf9LY294utkUM8pLFIb8jctu3ka5554U3YjzqbRUbcIfE73D_LLOEyWWd33Ltv1u13W2sd1Dwdx2Q0eUfsjHieEaivD-A3/s1600/Screenshot_2014-10-28-20-38-37.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj21Q6Les4be4e3w_JenMGT5JgSnVWmjO9OIqnTOAUf9LY294utkUM8pLFIb8jctu3ka5554U3YjzqbRUbcIfE73D_LLOEyWWd33Ltv1u13W2sd1Dwdx2Q0eUfsjHieEaivD-A3/s1600/Screenshot_2014-10-28-20-38-37.png" height="400" width="640" /></a></div>
<br />
Select one and you'll get a prompt about the action you're about to perform - hit OK and the transfer begins. The file transfer goes on in the background, meaning you can swap to a different app while it's happening or even put the tablet into standby to save power.<br />
<br />
Assuming the read and write speeds of the memory cards and sticks you're using are good, the transfers shouldn't take too long - the <a asin="B008CVHLT2" href="https://www.blogger.com/null" type="amzn">Transcend SDHC cards</a> we bought had 90MB/s read speeds which made backup nice and quick.<br />
<br />
We made two backups of our photos onto the two USB sticks, my wife kept one and I kept the other, and then we just formatted and reused our SDHC cards as required. All in all, it was a fairly low-cost, space- and weight-efficient solution that I was really happy with and will be using on subsequent trips.</div>
<div dir="ltr">
</div>
Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-85929093057961146652014-05-20T09:16:00.001+01:002014-05-23T19:02:39.664+01:00DDD South West 5<p dir="ltr">Last Saturday I was at DDD South West in Bristol. Unlike 2012 I was marginally more organised (thanks to a timely prompt from <a href="https://twitter.com/mjjames">@mjjames</a>) so I was straight in rather than going via the waiting list.</p> <p dir="ltr">As ever, this instalment maintained the high standards of organisation, variety of quality sessions and great weather (at the ones I’ve attended, at least) that I've come to expect from DDD events. </p> <p dir="ltr">This year's addition of the Pocket DDD web app, which allowed you to browse the agenda and collect session feedback, added an extra point of interaction which seemed to work really well. I look forward to seeing how the DDD guys utilise the app for other things in future – linking out to Twitter and pre-populating a session hashtag, maybe?</p> <h3>Sessions</h3> <p>This time around I ended up only attending sessions from people I hadn’t seen speak before. The ones I went to were:</p> <h4>Continuous Integration, in an hour, on a shoestring; Phil Collins</h4> <p>I found this session to be a great, light-hearted opener to the day with much praying to the demo gods as Phil attempted to set up a complete CI environment and show it working end-to-end in an hour. He was successful.</p> <h4>Complexity => Simplicity; Ashic Mahtab</h4> <p>This session was broadly a look at Domain Driven Design and how, when exercising it, you need to change your way of thinking about problems to create a less coupled solution. </p> <h4>F# Eye for the C# Guy; Phil Trelford</h4> <p dir="ltr">This was one of those "mind blown" sessions and it provided a great introduction to the power of F#. 
I understand what <a href="https://twitter.com/dantup">@dantup</a> has been banging on about now.</p> <p dir="ltr">The amount of content covered was ideal and Phil’s delivery was great – definitely a presenter I’ll look out for in future!</p> <h4>An introduction to Nancy; Mathew McLoughlin</h4> <p>Somehow I’ve managed to avoid talks about Nancy up to now and, although I’ve had cursory looks at its documentation in the past, I thought I’d attend Mat’s talk and actually see it in action to gain a better insight.</p> <p>Mat managed to cover quite a lot in this session and it was interesting to see how it differed from ASP.NET MVC and Simple.Web, which I’m more familiar with.</p> <h4>10 things I learnt about web application security being pen tested by banks; James Crowley</h4> <p>Security talks tend to have a habit of making you walk out incredibly worried about your products out in the wild and this one was no exception.</p> <p>I’m pretty familiar with the standard vulnerabilities for web sites – things like the OWASP Top 10 – but there’s nothing like a really scary demo of exploiting them with some script kiddie tools to really hammer home how much of a security risk they represent.</p> <p>James managed to pack a lot of good advice into the hour with demos where appropriate and this was a great end to the day.</p> <p>Overall it was a very enjoyable day – organisation and catering were great, the sessions were of a very high standard and it was good to catch up with some folks I haven’t seen in a while. 
Big thanks to everyone involved.</p> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-45988566987595026602013-12-27T18:21:00.002+00:002013-12-27T18:21:28.459+00:00Windows 8.1 on high DPI<p>I’ve been working with Windows 8.1 on a Dell XPS 15 for about eight weeks now and I thought I’d share some of my experiences of working with display scaling as the Dell has a 3200x1800 display.</p>
<p>Being what Apple would term “Retina”, the display has a pixel density of almost 250 PPI, which is matched by only a handful of other Windows laptops at the moment. Until recently the limiting factor in this area has been that desktop operating systems expected a display’s resolution to scale more or less linearly with its size, meaning the pixels per inch didn’t change a great deal.</p>
<p>Using fonts as an example, 10 point text should be about 10/72 of an inch, or 3.5mm, high (1 point = 1/72 inch). Windows, by default, renders 10 point text about 13 pixels high which, if you do the math, assumes a PPI of 96. Some background on where this 96 comes from can be found in <a href="http://blogs.msdn.com/b/fontblog/archive/2005/11/08/490490.aspx">this MSDN article</a>. Printers behave as you’d expect: when you print 10 point text you get text that is 3.5mm high regardless of the printer’s DPI. The higher the DPI the crisper the text will appear but the characters will be the same size. The same is not true for displays, however. This hasn’t been so much of a problem up until now because average pixel density has been between <a href="http://en.wikipedia.org/wiki/Dot_pitch">about 90 and 120</a> PPI, but now we’re nearer to 250 pixels per inch that same 10 point text is only about 1mm high, which is essentially impossible to read.</p>
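<p>The arithmetic here is easy to sanity check with a few lines of code. A quick sketch (the 96 and 250 PPI figures are the ones used above):</p>
<pre><code class="c#">using System;

static class FontMetrics
{
    // 1 point = 1/72 inch, so pixel height = points / 72 * PPI
    public static double PointsToPixels(double points, double ppi)
    {
        return points / 72.0 * ppi;
    }

    // Physical height is independent of PPI: 1 inch = 25.4mm
    public static double PointsToMillimetres(double points)
    {
        return points / 72.0 * 25.4;
    }

    static void Main()
    {
        Console.WriteLine(PointsToPixels(10, 96));   // ~13.3 - the "about 13 pixels" Windows default
        Console.WriteLine(PointsToPixels(10, 250));  // ~34.7 - what the same physical size needs at 250 PPI
        Console.WriteLine(PointsToMillimetres(10));  // ~3.5mm regardless of PPI
    }
}</code></pre>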
<p>Obviously with high DPI displays some solution to this and the reasonable scaling of other elements on screen is required so that we can have nice crisp visuals that aren’t comically tiny. The operating systems are finally catching up and in Windows 8.1 are some usable scaling options for a high DPI display but it’s fair to say that, yet again, Apple have led the charge in this department with their Retina options in OSX.</p>
<p>In Windows 8.1 the scaling options are 100%, 125%, 150% and 200%. Set to 200% on the XPS 15, for example, Windows renders things like text at the size you would see on a 1600x900 display. The scaling happens differently depending on the application. For most classic desktop applications, such as Chrome, it simply does a crude resize – essentially rendering at 1600x900 and then blowing the image up, so you get a lot of pixelation and rough edges. For “Metro” apps and some desktop apps the scaling factor is passed to the app, which scales the sizes of the UI elements as appropriate but renders them using the full resolution of the display.</p>
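<p>To put numbers on that, the effective layout resolution at each scale factor is just the physical resolution divided by the scale. A sketch of the arithmetic for the XPS 15's 3200x1800 panel (not an API call):</p>
<pre><code class="c#">using System;

static class ScalingDemo
{
    // UI is laid out as if the display were (physical / scale) in each dimension
    public static Tuple<int, int> EffectiveResolution(int width, int height, double scale)
    {
        return Tuple.Create((int)Math.Round(width / scale), (int)Math.Round(height / scale));
    }

    static void Main()
    {
        foreach (var scale in new[] { 1.0, 1.25, 1.5, 2.0 })
        {
            var size = EffectiveResolution(3200, 1800, scale);
            Console.WriteLine("{0:P0}: {1}x{2}", scale, size.Item1, size.Item2);
        }
        // 200% comes out at 1600x900, matching the example above
    }
}</code></pre>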
<p>It’s reasonable but unfortunately far from perfect: there are still a lot of visual elements which don’t scale quite right, and every so often you encounter some custom-rendered dialog that isn’t scaled at all and have to break out the magnifier tool.</p>
<p>Another oddity, which may be exclusive to the drivers for the XPS 15, is that coming out of standby mode loses the scaling option. It switches back to 100% scaling and you have to switch to external display mode and back to force it to pick up the scaling again.</p>
<p>Hopefully things will improve with updates to applications and subsequent revisions of Windows.</p>Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com3tag:blogger.com,1999:blog-36201586.post-33681241392139654352012-10-05T00:19:00.000+01:002012-10-05T00:19:21.145+01:00NFC payments - it's not for you!NFC payment terminals are becoming more common and all the credit/debit cards in my wallet have supported NFC for about 6 months which is great as it's much more convenient, especially for buying a coffee or lunch.<br />
<br />
NFC, and the RFID technology it builds on, is nothing new - I think I first saw a dog getting an <a href="http://en.wikipedia.org/wiki/Microchip_implant_(animal)">RFID implant</a> put in on Blue Peter in the early 90's and next year contactless technology will have been running on the London Underground for 10 years in the form of the <a href="http://en.wikipedia.org/wiki/Oyster_card">Oyster card</a>. It's taken a long time for the banks to warm to this technology - maybe because there are a lot of security protocols to be determined and a lot of liability sums to be calculated etc.<br />
<br />
I've had a Google Nexus S for about 18 months which was, from what I've read, the first NFC-enabled handset available in the UK. When I bought it Google had yet to release <a href="http://www.google.com/wallet/">Wallet</a>, their NFC payment app for Android, but there weren't many NFC payment terminals around then either so it wasn't that big a deal.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-bottom: 0.5em; margin-left: auto; margin-right: auto; padding-bottom: 6px; padding-left: 6px; padding-right: 6px; padding-top: 6px; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img alt="" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYPC5jI8R9f53YZVHYRLuu7-hx179tuKL1R1_r2bT7mosuH726Feu9vmrkGqIRDZJZEegsno3kFOGTTQF18aDS9eUAyHYSTcCl3W0SGSJGcW12XV6oL5vNgziZIQiFrFFzETza/s1600/wwallet.png" style="clear: right; cursor: move; margin-bottom: 1em; margin-left: auto; margin-right: auto;" title="The Wallet logo is quite a clever echo of the NFC payment logo" /></td></tr>
<tr><td class="tr-caption" style="font-size: 13px; padding-top: 4px; text-align: center;">The Wallet logo is quite a clever echo of the NFC payment logo</td></tr>
</tbody></table>
<br />
Wallet has since been released in the US and is supported by all the major credit card companies but that's where the good news ends. It seems Google have deals with particular networks, Sprint being the main one, meaning that even if you have an NFC-enabled Android handset you can only use Wallet if you're on one of the approved networks. What's worse is that it isn't available in the UK and there's no word from Google on when or if it will be.<br />
<br />
What is particularly odd is that the Nexus 7 has no such restriction. I can only assume this is because it has no GSM modem so there is no deal to be made with a mobile network. This is particularly frustrating because I can see that, if you buy a phone from a particular carrier and that carrier doesn't have a deal with Google, you won't get Wallet but I bought my Nexus S SIM free from Carphone Warehouse so the phone itself has no network affiliation and yet I still can't use Wallet.<br />
<br />
What is quite interesting and may shine some light on the whole delay in Wallet getting to the UK is the release of Quick Tap from Barclaycard and Orange. Although Orange sell 10 NFC-enabled handsets, only 2 of them are "Quick Tap ready", both of which happen to be the Galaxy SIII, probably their most popular and expensive handset apart from the iPhone. I doubt there's technically anything special about the SIII that means it can be used for payments where the other handsets can't - all the others are cheaper, so my guess is it's entirely about forcing people to buy a more expensive handset.<br />
<br />
If the other UK networks and card companies are doing similar deals it's no wonder a service like Wallet is unavailable as there is money to be made and phones to be sold. All in all it's pretty rubbish for the early adopter and the consumer in general.<br />
<br />
Surely the fact that a phone is PIN-protected and its NFC is not always on actually makes it a more secure way of implementing NFC payments. People can't skim your phone the way they can the cards in your actual wallet.<br />
<br />
Guess I'll just have to wait and see where this farcical endeavour goes. In the meantime I'll look forward to an Oyster app (which would be pretty ace) and scanning some NFC business cards, I suppose. Whoopee!<br />
<br />Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-80657178954490556982012-03-20T21:37:00.000+00:002012-03-20T21:37:57.018+00:00Interfaces and IoC<p>If you want to use inversion of control and unit testing and adhere to SOLID principles in your C# code, this often means you have a lot of interfaces. Core considerations when dealing with interfaces are things like:</p> <ul> <li>Where should the interface be defined – alongside the main implementation or in a separate assembly? </li> <li>Should the interface be generic or not? </li> <li>Am I breaking the interface segregation principle? </li> </ul> <p>The one that sometimes falls by the wayside is:</p> <ul> <li>Does the interface definition match my intended usage? </li> </ul> <h3>Example</h3> <p>A trivial example of this might be where you have a database containing a Log table of messages from an application, where each row has an ID of some kind, type, source, message and date/time recorded. The interface for the data access to this table might be:</p> <pre><code class="c#">public interface ILogRepository { IEnumerable<Log> GetLogs(); }</code></pre>
<p>Innocuous enough; however, what if all our usages of this interface require that the resulting IEnumerable be ordered by the recorded time of the log message? IEnumerable alone guarantees nothing about the order, reordering the output at each point of use would be very inefficient, and the database would likely be a much better place to perform the ordering anyway.</p>
<h3>Attempt 1 – Be more descriptive</h3>
<p>The simplest option is simply to bake the ordering information in to the interface definition e.g.</p>
<pre><code class="c#">public interface ILogRepository { IEnumerable<Log> GetLogsOrderedByDate(); }</code></pre>
<p>This way we are clear at the point of implementation and the point of use about what the ordering of the items should be. Of course, renaming a method still doesn’t guarantee the result will be ordered correctly but at least if an ordering is missing you have the additional information in the definition about what the correct order should be.</p>
<p>The major problem with this option is that we head towards violating the Open/Closed principle, which says our API should be open for extension but closed for modification. If we need to change the order log items are returned in then we have to rename the method (violating OCP) or add a new method which specifies a different ordering, potentially making the original completely redundant in the codebase.</p>
<h3>Attempt 2 – Expose IQueryable instead</h3>
<p>Another option is to swap from using IEnumerable to using IQueryable and allow the calling code to specify its own ordering e.g.</p>
<pre><code class="c#">public interface ILogRepository { IQueryable<Log> GetLogs(); }
...
var logs = logRepo.GetLogs().OrderBy(l => l.DateTime);</code></pre>
<p>This method would be more efficient, always performing the ordering in the database, but with this option we have to repeat the OrderBy part at every point of use to ensure our ordering will be correct. This gives us flexibility but isn’t particularly DRY and may be difficult to change.</p>
<p>It’s also somewhat of a leaky abstraction as we’re spilling data access innards into our other layers and losing control of the queries being executed on our database – calling code can do more with IQueryable than specify an order, which may not be desirable.</p>
<h3>Attempt 3 – Allow ordering to be passed in</h3>
<p>This is somewhat similar to option 2 however by allowing order to be passed in we can use a specified default ordering while also giving the calling code the ability to override it if necessary without exposing the all-powerful IQueryable.</p>
<pre><code class="c#">public interface ILogRepository
{
IEnumerable<Log> GetLogs<TKey>(Expressions<Func<Log, TKey>> ordering);
}</code></pre>
<p>Of course this option still has the potential for a lot of repetition of the desired ordering, and OCP may rear its head again if we need to expose some other IQueryable feature in a similarly controlled fashion. Another undesirable feature of this method is that the specified ordering cannot be easily validated; much like option 2, it may provide the caller with too much power.</p>
<h3>Attempt 4 – Return IOrderedEnumerable</h3>
<p>An interesting option is amending the interface definition so that the method returns IOrderedEnumerable instead of plain IEnumerable e.g.</p>
<pre><code class="c#">public interface ILogRepository { IOrderedEnumerable<Log> GetLogs(); }</code></pre>
<p>A very slight tweak to the definition with no specific ordering defined in the API but it provides a cue to the calling code that an ordering is being applied, should it care, and also makes it difficult for the interface implementation to accidentally miss out the ordering.</p>
<p>Obviously with this option we return to the problem of there being no particular guarantee of the specific ordering being applied, not to mention it being quite tricky to return IOrderedEnumerable in the first place.</p>
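<p>That said, returning IOrderedEnumerable<Log> is less awkward than it might appear when the implementation is LINQ-based, because OrderBy already returns one. A minimal sketch (the Log shape and in-memory store here are illustrative):</p>
<pre><code class="c#">using System;
using System.Collections.Generic;
using System.Linq;

public class Log
{
    public DateTime DateTime { get; set; }
    public string Message { get; set; }
}

public interface ILogRepository { IOrderedEnumerable<Log> GetLogs(); }

public class InMemoryLogRepository : ILogRepository
{
    private readonly List<Log> _logs;

    public InMemoryLogRepository(IEnumerable<Log> logs)
    {
        _logs = logs.ToList();
    }

    // OrderBy returns IOrderedEnumerable<Log> directly, so the return type
    // makes it hard for an implementation to forget the ordering
    public IOrderedEnumerable<Log> GetLogs()
    {
        return _logs.OrderBy(l => l.DateTime);
    }
}</code></pre>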
<h3>Alternatives?</h3>
<p>Perhaps a better question than:</p>
<ul>
<li>Does the interface definition match my intended usage? </li>
</ul>
<p>would be:</p>
<ul>
<li>Can I describe my intended usage sufficiently with an interface? </li>
</ul>
<p>It’s difficult to define this, and many other kinds of behaviours, using interfaces alone. A better approach in this case would probably be to not interface this class out at all and have the business code expect an instance of the concrete type as its dependency thus providing a guarantee of order. The class would still be abstracted from the backing store such that it can itself be tested e.g.</p>
<pre><code class="c#">// Data access code
internal interface ILogStore { IQueryable<Log> Logs { get; } }
public class LogRepository
{
private ILogStore _store;
public LogRepository() : this(null) {}
internal LogRepository(ILogStore store)
{
_store = store ?? new DatabaseLogStore();
}
public IEnumerable<Log> GetLogs()
{
return _store.Logs.OrderBy(l => l.DateTime);
}
}
// Business code
public class LogReader
{
private LogRepository _logRepo;
public LogReader(LogRepository logRepo)
{
if (logRepo == null) throw new ArgumentNullException("logRepo");
_logRepo = logRepo;
}
...
}
</code></pre>Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com2tag:blogger.com,1999:blog-36201586.post-30468069220498285902011-09-13T00:14:00.002+01:002011-09-13T00:16:27.292+01:00Fun with enum<p>If you’ve done any vaguely serious programming with a pre-4 version of the .NET Framework then chances are you’ve had to write an Enum.TryParse() method. You probably wrote something like this:</p> <pre><code class="c#">public static bool TryParse<TEnum>(string value, out TEnum enumValue)
{
    Type enumType = typeof(TEnum);
    if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
    enumValue = default(TEnum);
    if (Enum.IsDefined(enumType, value))
    {
        enumValue = (TEnum)Enum.Parse(enumType, value);
        return true;
    }
    return false;
}</code></pre>
<p>Everything went fine until someone decided to pass in a string representing a value of the underlying type such as “0” at which point Enum.IsDefined() said no even though your enum looked like this:</p>
<pre><code class="c#">public enum MyEnum
{
    Zero = 0, One, Two, Three
}</code></pre>
<p>Enum.Parse() will accept “0” just fine but IsDefined() requires the value to be of the correct underlying type, so in this case you’d need 0 as an integer for it to return true. Doesn't that mean I now need to work out the underlying type and then call the appropriate Parse() method using reflection? Oh dear, looks like our nice generic solution may get rather complicated!</p>
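<p>The mismatch is easy to demonstrate with the MyEnum defined above:</p>
<pre><code class="c#">using System;

public enum MyEnum
{
    Zero = 0, One, Two, Three
}

static class Demo
{
    static void Main()
    {
        // Parse happily accepts the underlying value as a string...
        var parsed = (MyEnum)Enum.Parse(typeof(MyEnum), "0");
        Console.WriteLine(parsed); // Zero

        // ...but IsDefined compares a string against the member *names*,
        // so "0" doesn't match; it wants the value as the underlying type
        Console.WriteLine(Enum.IsDefined(typeof(MyEnum), "0")); // False
        Console.WriteLine(Enum.IsDefined(typeof(MyEnum), 0));   // True
    }
}</code></pre>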
<p>Fear not. Because we know our input type is a string and there are a very limited number of underlying types we can have there’s a handy framework method we can use to sort this out – Convert.ChangeType().</p>
<pre><code class="c#">public static bool IsUnderlyingDefined(Type enumType, string value)
{
    if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
    Type underlying = Enum.GetUnderlyingType(enumType);
    var val = Convert.ChangeType(value, underlying, CultureInfo.InvariantCulture);
    return Enum.IsDefined(enumType, val);
}</code></pre>
<p>ChangeType() is effectively selecting the correct Parse method for us and calling it, passing in our string and returning a nice strongly typed underlying value which we can pass into Enum.IsDefined(). So our TryParse now looks like this:</p>
<pre><code class="c#">public static bool TryParse<TEnum>(string value, out TEnum enumValue)
{
    Type enumType = typeof(TEnum);
    if (!enumType.IsEnum) throw new ArgumentException("Type is not an enum.");
    enumValue = default(TEnum);
    if (Enum.IsDefined(enumType, value) || IsUnderlyingDefined(enumType, value))
    {
        enumValue = (TEnum)Enum.Parse(enumType, value);
        return true;
    }
    return false;
}</code></pre>
<p>This exercise is somewhat contrived, especially now that Enum.TryParse is part of .NET 4.0, but the synergy of ChangeType and IsDefined is quite nice and a technique worth pointing out nonetheless.</p>
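<p>For completeness, the built-in .NET 4 Enum.TryParse behaves slightly differently again - it will parse <em>any</em> numeric string, defined or not, so you may still want an IsDefined check afterwards:</p>
<pre><code class="c#">using System;

public enum MyEnum
{
    Zero = 0, One, Two, Three
}

static class Demo
{
    static void Main()
    {
        MyEnum value;

        Console.WriteLine(Enum.TryParse("Two", out value)); // True, value = Two
        Console.WriteLine(Enum.TryParse("0", out value));   // True, value = Zero

        // Numeric strings outside the defined values still "succeed"...
        Console.WriteLine(Enum.TryParse("42", out value));  // True, value = (MyEnum)42

        // ...so follow up with IsDefined if that matters to you
        Console.WriteLine(Enum.IsDefined(typeof(MyEnum), value)); // False
    }
}</code></pre>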
<h4>Links</h4>
<ul>
<li><a href="http://msdn.microsoft.com/en-us/library/ms130977.aspx">Convert.ChangeType() on MSDN</a> </li>
<li><a href="http://msdn.microsoft.com/en-us/library/system.enum.isdefined.aspx">Enum.IsDefined() on MSDN</a> </li>
</ul>Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-22136846241610801392011-05-13T19:24:00.001+01:002012-10-03T00:57:52.876+01:00Bulk upsert to SQL Server from .NET<p>or, “How inserting multiple records using an ORM should probably work”</p><p>Anyone familiar with .NET ORMs should know that one area where they’re lacking is updating or inserting multiple objects at the same time. You end up with many individual UPDATE and INSERT statements being executed on the database which can be very inefficient and often results in developers having to extend the ORM or break out of it completely in order to perform particular operations. An added complication is that, where identities are being used in tables, each INSERT command the ORM performs must immediately be followed by a SELECT SCOPE_IDENTITY() call to retrieve the identity value for the newly inserted row so that the CLR object may be amended.</p><p>It’s possible to drastically improve on this by making use of a couple of features already supported in the .NET Framework and SQL Server and I’m hoping that a similar solution will feature in future releases of the major ORMs.</p><ul><li>The .NET Framework’s <a href="http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx" target="_blank">SqlBulkCopy</a> class allowing you to take advantage of BULK operations supported by SQL Server. </li>
<li>SQL Server temporary tables. </li>
<li>SQL Server 2008’s <a href="http://msdn.microsoft.com/en-us/library/bb510625.aspx" target="_blank">MERGE</a> command which allows upsert operations to be performed on a table and in particular its ability, using the OUTPUT command, to return identities for inserted rows. </li>
</ul><h3>The process</h3><p>The main steps of the process are as follows:</p><ol><li>Using ADO.NET create a temporary table in SQL Server whose schema mirrors your source data and whose column types match the types in the target table. </li>
<li>Using SqlBulkCopy populate the temporary table with the source data. </li>
<li>Execute a MERGE command via ADO.NET on the SQL Server which upserts data from the temporary table into the target table, outputting identities. </li>
<li>Read the row set of inserted identities. </li>
<li>Drop the temporary table. </li>
</ol><p>So instead of <em>n</em> INSERT statements to insert <em>n</em> records that’s four SQL commands in all to insert <strong>or</strong> update <em>n</em> records.</p><p>There’s already a blog post by Kelias on this technique that goes into more detail, which you can read <a href="http://www.jarloo.com/c-bulk-upsert-to-sql-server-tutorial/" target="_blank">here</a>. The only part missing from Kelias’ post is the piece utilising the OUTPUT modifier to retrieve the inserted identities from the MERGE command. This is simply an additional line in the merge command e.g.</p><pre><code class="sql">OUTPUT $action, INSERTED.$IDENTITY</code></pre><p>and the small matter of reading those returned identities out of a SqlDataReader.</p><p>This is the crucial piece, however, as it is what allows us to tie the inserted row back to the original CLR “entity” item that formed part of our source data. Updating our CLR object with this identity will allow us to save subsequent changes away as an UPDATE to the now-existing database row.</p><h3>Performance</h3><p>I did some brief testing to get rough timings of this technique versus individual INSERT calls using a parameterised ADO.NET command. With row counts from 100 to 10,000 and row sizes from roughly 1k to 10k, the upsert technique nearly always executed in less than half the time of the individual INSERT statements. 
For example, 1,000 rows of about 1k each took individual INSERTs an average of just over 500ms versus bulk upsert’s 150ms on my quite old desktop with not very much RAM.</p><p>That’s pretty cool considering the upsert could be performing either an INSERT or an UPDATE in the same number of calls, whereas factoring that into the individual-statements method would mean a lot of extra commands to try an UPDATE and then check whether any rows had been affected, and so on.</p><h3>Github project</h3><p>I decided to have a go at wrapping the upsert technique up in a library which would automatically generate the SQL necessary for creating the temporary table and running the MERGE. I pushed an initial version of this SqlBulkUpsert project to GitHub which can be found here: <br />
<a title="https://github.com/dezfowler/SqlBulkUpsert" href="https://github.com/dezfowler/SqlBulkUpsert">https://github.com/dezfowler/SqlBulkUpsert</a></p><p>Usage would be something like this:</p><pre><code class="c#">using (var connection = DatabaseHelper.CreateAndOpenConnection())
{
    var targetSchema = SqlTableSchema.LoadFromDatabase(connection, "TestUpsert", "ident");

    var columnMappings = new Dictionary<string, Func<TestDto, object>>
    {
        {"ident", d => d.Ident},
        {"key_part_1", d => d.KeyPart1},
        {"key_part_2", d => d.KeyPart2},
        {"nullable_text", d => d.Text},
        {"nullable_number", d => d.Number},
        {"nullable_datetimeoffset", d => d.Date},
    };

    Action<TestDto, int> identUpdater = (d, i) => d.Ident = i;

    var upserter = new TypedUpserter<TestDto>(targetSchema, columnMappings, identUpdater);

    var items = new List<TestDto>();
    // Populate items with TestDto instances

    upserter.Upsert(connection, items);
    // Ident property of TestDto instances updated
}</code></pre><p>with TestDto just being a simple class like this:</p><pre><code class="c#">public class TestDto
{
    public int? Ident { get; set; }
    public string KeyPart1 { get; set; }
    public short KeyPart2 { get; set; }
    public string Text { get; set; }
    public int Number { get; set; }
    public DateTimeOffset Date { get; set; }
}</code></pre><p>In this TypedUpserter example we:</p><ol><li>define the schema of the target table either in code or by loading it from the database (shown in the example) </li>
<li>define mappings from column names of the target to a lambda retrieving the appropriate property value from the TestDto class </li>
<li>define an action to be called to allow setting the new identity to a property of the DTO </li>
<li>instantiate the Upserter and call Upsert() with a list of items and a database connection </li>
<li>the identity properties of the TestDto instances will have been updated using the defined action so the CLR objects will now be consistent with the database rows. </li>
</ol><h3>Next step</h3><p>The object model could probably do with some refinement and it needs lots more tests adding but it’s in pretty good shape so next I’m going to look at integrating it into <a href="https://github.com/markrendle/Simple.Data" target="_blank">Mark Rendle’s Simple.Data project</a> which should mean that, to my knowledge, it’s the only .NET ORM doing proper bulk loading of multiple records.</p>Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com7tag:blogger.com,1999:blog-36201586.post-78815661210655752752011-01-26T23:09:00.002+00:002012-03-25T18:53:27.286+01:00Adding collections to a custom ConfigurationSection<p>The attributed model for creating custom ConfigurationSection types for use in your app.config or web.config file is quite verbose and examples are hard to come by. Collections in particular are a pain point, there is very little documentation around them and the examples all tend to follow the default add/remove/clear model i.e. that used in <appSettings/>.</p>
<p>Three scenarios involving collections caused me particular problems while doing this piece of work:</p>
<ul>
<li>When the items of a collection have a custom name e.g. "item" instead of add/remove/clear</li>
<li>When the items of a collection can have different element names representing different actions or subclasses e.g. the <allow/> and <deny/> elements used with <authorization/> </li>
<li>When the items of a collection don’t have an attribute which represents a unique key e.g. not having anything like the key attribute of an <add/> or <remove/> element </li>
</ul>
<p>The first and last are relatively trivial to fix; the second less so, and it took me some digging around in Reflector to work out how to set up something that worked.</p>
<h3>Collection items with a custom element name</h3>
<p>This scenario can be accomplished as follows.</p>
<pre><code class="c#">
public class MySpecialConfigurationSection : ConfigurationSection
{
[ConfigurationProperty("", IsRequired = false, IsKey = false, IsDefaultCollection = true)]
public ItemCollection Items
{
// the property name is the empty string because this is the default collection
get { return ((ItemCollection)(base[""])); }
set { base[""] = value; }
}
}
[ConfigurationCollection(typeof(Item), CollectionType = ConfigurationElementCollectionType.BasicMapAlternate)]
public class ItemCollection : ConfigurationElementCollection
{
internal const string ItemPropertyName = "item";
public override ConfigurationElementCollectionType CollectionType
{
get { return ConfigurationElementCollectionType.BasicMapAlternate; }
}
protected override string ElementName
{
get { return ItemPropertyName; }
}
protected override bool IsElementName(string elementName)
{
return (elementName == ItemPropertyName);
}
protected override object GetElementKey(ConfigurationElement element)
{
return ((Item)element).Value;
}
protected override ConfigurationElement CreateNewElement()
{
return new Item();
}
public override bool IsReadOnly()
{
return false;
}
}
public class Item : ConfigurationElement
{
[ConfigurationProperty("value")]
public string Value
{
get { return (string)base["value"]; }
set { base["value"] = value; }
}
}
</code></pre>
<p>Which will allow us to specify our section like so:</p>
<pre><code class="xml">
<configSections>
<section name="mySpecialSection" type="MyNamespace.MySpecialConfigurationSection, MyAssembly"/>
</configSections>
...
<mySpecialSection>
<item value="one"/>
<item value="two"/>
<item value="three"/>
</mySpecialSection>
</code></pre>
<p>First off we have a property representing our collection on our ConfigurationSection or ConfigurationElement whose type derives from ConfigurationElementCollection. This property is decorated with a ConfigurationProperty attribute. If the collection should be contained directly within the parent element, set IsDefaultCollection to true and leave the element name as an empty string. If the collection should be contained within a container element, specify an element name.</p>
<p>Next, the ConfigurationElementCollection derived type of the property should have a ConfigurationCollection attribute specifying element type and collection type. The collection type specifies the inheritance behaviour when the section appears in web.config files nested deeper in the folder structure for example.</p>
<p>For the collection type itself we do this:</p>
<ul>
<li>Override ElementName to return the collection item element name </li>
<li>Override IsElementName to return true when encountering that element name </li>
<li>Override CreateNewElement() to new up an instance of your item type </li>
<li>Override GetElementKey(element) to return an object which uniquely identifies the item. This could be a property value, a combination of values as some hash or the element itself </li>
</ul>
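<p>To consume the section at runtime you retrieve it with ConfigurationManager.GetSection and enumerate the collection. A minimal sketch (the section name matches the config above; note that ConfigurationElementCollection only implements the non-generic IEnumerable, hence the explicit element type in the foreach):</p>
<pre><code class="c#">// Requires a reference to System.Configuration
var section = (MySpecialConfigurationSection)ConfigurationManager.GetSection("mySpecialSection");
foreach (Item item in section.Items)
{
Console.WriteLine(item.Value);
}</code></pre>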
<h3>Collection items with varying element name</h3>
<pre><code class="c#">
public class MySpecialConfigurationSection : ConfigurationSection
{
[ConfigurationProperty("items", IsRequired = false, IsKey = false, IsDefaultCollection = false)]
public ItemCollection Items
{
get { return ((ItemCollection) (base["items"])); }
set { base["items"] = value; }
}
}
[ConfigurationCollection(typeof(Item), AddItemName = "apple,orange", CollectionType = ConfigurationElementCollectionType.BasicMapAlternate)]
public class ItemCollection : ConfigurationElementCollection
{
public override ConfigurationElementCollectionType CollectionType
{
get { return ConfigurationElementCollectionType.BasicMapAlternate; }
}
protected override string ElementName
{
get { return string.Empty; }
}
protected override bool IsElementName(string elementName)
{
return (elementName == "apple" || elementName == "orange");
}
protected override object GetElementKey(ConfigurationElement element)
{
return element;
}
protected override ConfigurationElement CreateNewElement()
{
return new Item();
}
protected override ConfigurationElement CreateNewElement(string elementName)
{
var item = new Item();
if (elementName == "apple")
{
item.Type = ItemType.Apple;
}
else if(elementName == "orange")
{
item.Type = ItemType.Orange;
}
return item;
}
public override bool IsReadOnly()
{
return false;
}
}
public enum ItemType
{
Apple,
Orange
}
public class Item : ConfigurationElement
{
public ItemType Type { get; set; }
[ConfigurationProperty("value")]
public string Value
{
get { return (string)base["value"]; }
set { base["value"] = value; }
}
}
</code></pre>
<p>Which will allow us to specify our section like so:</p>
<pre><code class="xml">
<configSections>
<section name="mySpecialSection" type="MyNamespace.MySpecialConfigurationSection, MyAssembly"/>
</configSections>
...
<mySpecialSection>
<items>
<apple value="one"/>
<apple value="two"/>
<orange value="one"/>
</items>
</mySpecialSection>
</code></pre>
<p>Notice that here we've specified two collection items with the value "one", which would have resulted in one overwriting the other in the previous example. To get around this, instead of returning the Value property we're returning the element itself as the unique key.</p>
<p>This time our ConfigurationElementCollection derived type's ConfigurationCollection attribute also specifies a comma delimited AddItemName e.g. "allow,deny". We override the methods of the base as follows:</p>
<ul>
<li>Override ElementName to return empty string </li>
<li>Override IsElementName to return true when encountering a correct element name</li>
<li>Override CreateNewElement() to new up an instance of your item type</li>
<li>Override CreateNewElement(elementName) to new up an instance of the correct item type for the particular element name, setting any relevant properties</li>
<li>Override GetElementKey(element) to return an object which uniquely identifies the item. This could be a property value, a combination of values as some hash or the element itself </li>
</ul>
<h4>Caveat</h4>
<p>While our varying element names will be readable, the object model is read-only. I haven't covered support for writing changes back to the config file here as it involves taking charge of the serialization of the objects, so really requires its own blog post.</p>
<h3>Links</h3>
<ul>
<li><a href="http://msdn.microsoft.com/en-us/library/system.configuration.configurationsection.aspx" target="_blank">ConfigurationSection</a></li>
<li><a href="http://msdn.microsoft.com/en-us/library/system.configuration.configurationelement.aspx" target="_blank">ConfigurationElement</a></li>
<li><a href="http://msdn.microsoft.com/en-us/library/system.configuration.configurationelementcollection.aspx" target="_blank">ConfigurationElementCollection</a></li>
<li><a href="http://msdn.microsoft.com/en-us/library/system.configuration.configurationpropertyattribute.aspx" target="_blank">ConfigurationPropertyAttribute</a></li>
<li><a href="http://msdn.microsoft.com/en-us/library/system.configuration.configurationcollectionattribute.aspx" target="_blank">ConfigurationCollectionAttribute</a></li>
</ul>Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com8tag:blogger.com,1999:blog-36201586.post-44023632244351069832010-12-05T23:28:00.000+00:002010-12-06T01:30:02.354+00:00Taking my music listening in a new direction<p>or, Why I'm cancelling my Spotify Premium subscription</p> <p>Not entirely sure when I started using <a href="http://www.spotify.com" target="_blank">Spotify</a> but it was probably late 2008 / early 2009 and I've found it to be a revelation of music discovery. I've spent hours just clicking from one artist to another, exploring back catalogues and having a serious listen to full albums in a way that would be quite difficult without already having bought the album or "obtained" it from P2P. Previously, using a combination of <a href="http://www.last.fm" target="_blank">Last.fm</a> and <a href="http://www.myspace.com/" target="_blank">Myspace</a> you could get quite close but the Spotify desktop app made the whole experience so much more seamless and enjoyable with full, consistent quality tracks.</p> <p>I've been a Premium subscriber since 1 Aug 2009 with several factors leading to my decision to pay up. The first being high-bitrate uninterrupted audio; having some decent audio kit at home I wanted to make the most of it. Second was the Spotify for Android app I could use on my HTC Hero which is hands down the most convenient means of getting music on a mobile device. Put tracks in a playlist in the desktop app and they magically appear on the device – brilliant.</p> <h3>So, why am I quitting?</h3> <h4>1. Cost</h4> <p>To date that's £169.83 in subscription fees - £9.99 a month for 17 months. I tend to buy CDs for £5 off Amazon so that equates to about 33 CD albums or about 2 albums a month. I’ve listened to a lot more albums than that during the time but I doubt that there would have been more than 33 that I would have considered buying a CD copy of. 
I’ve never paid for an MP3; I refuse to pay the same price as a CD for a lossy version, but I paid for Spotify as the service does offer significantly more, especially when you use the mobile apps. I’m just not sure it’s worth £9.99 a month.</p> <h4>2. Quality</h4> <p>Spotify Premium ups the track bitrate from 160kbps to 320kbps. At least that’s the idea; in practice it seems large portions of their library are only available in the lower quality and I doubt that more than 10% of the tracks I’ve listened to recently have been high bitrate. There’s also no visibility on "high quality" tracks in the app so I’m seriously sceptical about whether I’m getting the high bitrates I’m paying for. The quality is certainly still miles off CD audio and having made a return to CDs recently it’s very noticeable that I’m missing out on audio clarity and have been making do with poor quality audio whilst also paying for the privilege.</p> <h4>3. Nothing to show for it</h4> <p>It’s a bitter pill to swallow but worst of all is the fact that after all the cost I’ve just been renting the music. I don’t get to keep the OGG tracks, I don’t own any of it and, when I cancel, the app on my phone will just stop working.</p> <h3>What service would I be happy with?</h3> <p>I’ve been wondering about the kind of service I’d like to see and that I’d be happy to pay for. Unlimited ad-supported listening of any tracks for discovering new music would be fine. I’d like to be able to buy albums, download them in full CD quality and stream them uninterrupted (no ads) in a reasonable bitrate to other computers and mobile devices. I’d also like to be able to register CDs I own with the service so those tracks are also available wherever I am.</p> <p>The roll-your-own solution might be buying CDs, ripping them and paying $9.99 for a 50GB Dropbox to sync up my machines.
Apparently the Dropbox for Android app has the ability to stream music and movies straight to the device so maybe that’s an option worth considering.</p> <h4>Lossless</h4> <p>In this day and age of high-def video, broadband internet and huge hard disks I don’t want to pay for, and there is no necessity for, low bitrate music. It’s rather interesting that the medium with the highest audio quality most widely available is <a href="http://en.wikipedia.org/wiki/Blu-ray_Disc#Audio" target="_blank">Blu-ray disc</a> in the form of Dolby TrueHD and DTS-HD. With video the soundtrack is more of a supporting role so lossy compression can be forgiven to some extent, but with music the audio is the main event; it should be CD quality at least. MP3 was great for portability but it has a lot to answer for in terms of killing our appreciation of high quality audio and therefore the market’s desire to provide us with (and push) a high-definition medium solely for audio.</p> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-21987274896470240172010-11-25T23:37:00.000+00:002010-11-25T23:38:07.706+00:00Adding a design mode to your MVC app<p>When developing websites you'll likely have ended up in the situation where you need to make some styling changes to a page that's buried deep within the site. If that page is at the end of a process such as registration or checkout then it can be extremely time-consuming entering test data that passes validation in order to navigate to the correct page. Add to that the complexity of maybe needing to log in and also having to do the same thing on multiple browsers and things can get ridiculous. If you’re using the WebForms view engine then you have limited design time capability in Visual Studio but this isn’t satisfactory for ensuring cross-browser compatibility.</p> <p>What's needed is a dumb version of the site which simply renders the views using a variety of data.
Effectively you want to create a load of static pages, each with ViewData, Model etc set up so that they represent a different step in one of the real processes on the site. Using this version you’d be able to get to the correct page straight away, be able to refresh it quickly after making markup or CSS changes and be able to visit the page in all your test browsers. Ideally using this version of the site will require no authentication and it won't have any external dependencies like databases or web services that must be set up or configured.</p> <p>We can use a set of different controllers to do this, each having some hard-coded model data, for example:</p> <pre><code class="c#">// A real controller may look like this...
public class PeopleController : Controller
{
public ActionResult Index()
{
List<Person> people = GetListOfPeopleFromDatabase();
return View(people);
}
private List<Person> GetListOfPeopleFromDatabase()
{
// Do some data access
return new List<Person>
{
new Person{ Name = "Runtime Person 1" },
new Person{ Name = "Runtime Person 2" },
new Person{ Name = "Runtime Person 3" },
};
}
}
// And our design time controller like this...
public class PeopleController : Controller
{
[Description("Empty people list page")]
public ActionResult EmptyList()
{
return View("Index", new List<Person>{});
}
[Description("People list page with 5 random people")]
public ActionResult ListWithFivePeople()
{
return View("Index", new List<Person>
{
new Person
{
Name = "John Smith"
},
new Person
{
Name = "Betty Davis"
},
new Person
{
Name = "Steve Jobs"
},
new Person
{
Name = "Bill Gates"
},
new Person
{
Name = "John Carmack"
},
});
}
}</code></pre>
<p>This will work best if your model classes, or the data entity classes you're passing on to your views, are dumb, i.e. they don't try to do any database access when the view renders. If you already have your controllers in a separate assembly then it should be a relatively simple task to swap your design time ones in and use them instead. If, however, you have the standard MVC setup of controllers, views and models all in the same project and assembly then things are a bit more difficult.</p>
<p>At the very least we want our design time controllers in a separate folder of our project, away from the real ones. The issue with this is that the default MVC controller factory will find them there anyway. Thankfully we don't need to implement an entirely new factory; we can hide them from the default one by simply breaking the convention it uses to identify them, the easiest way being not naming them "...Controller".</p>
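<p>Breaking the convention can be as simple as picking a different class name suffix. A sketch (the "Designer" suffix is just an arbitrary choice):</p>
<pre><code class="c#">// Not discovered by the DefaultControllerFactory because the
// class name doesn't end in "Controller"
public class PeopleDesigner : Controller
{
[Description("Empty people list page")]
public ActionResult EmptyList()
{
return View("Index", new List<Person>());
}
}</code></pre>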
<h3>Home page</h3>
<p>A nice to have in this "design" mode would be a default page which shows a list of links to all the actions of the design time controllers with descriptions for what each represents. This would be particularly useful when handing the markup and CSS over to a third party to be styled up as it allows them to quickly access each variation of each screen. You'd end up with something like this:</p>
<ul>
<li>Products
<ul>
<li>List products </li>
<li>Search products </li>
<li>View product </li>
<li>Product category </li>
</ul>
</li>
<li>Basket
<ul>
<li>Empty </li>
<li>Full </li>
<li>Saved </li>
</ul>
</li>
<li>My Account
<ul>
<li>Addresses </li>
<li>Billing details </li>
</ul>
</li>
<li>Home </li>
<li>Contact us </li>
</ul>
<h3>Variations</h3>
<p>In addition to each individual view the design time functionality could also allow for variations of these pages e.g. logged in / logged out  views, special offer views, user customised views etc. Variations could be  defined on an action, a controller or on the whole site and rather than defining the particular data in each of these cases a transform function could be defined which is called before view render. This function could do work along the lines of setting IsAuthenticated booleans for the logged in / logged out case and possibly more complex operations otherwise.</p>
<p>This would allow a wide variety of viewable pages to be created without  needing to specifically define data in all those cases.</p>
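<p>As a rough sketch of the idea (the types here are hypothetical, not part of the proof of concept), a variation might pair a description with a transform applied to the controller context before the view renders:</p>
<pre><code class="c#">// Hypothetical shape for a variation
public class Variation
{
public string Description { get; set; }
public Action<ControllerContext> Transform { get; set; }
}
// e.g. a site-wide "logged in" variation might stub out the current user
var loggedIn = new Variation
{
Description = "Logged in",
Transform = ctx => ctx.HttpContext.User =
new GenericPrincipal(new GenericIdentity("testuser"), new string[0])
};</code></pre>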
<h3>Proof of concept</h3>
<p>I've put a quick proof of concept up on Github here:
<br /><a href="https://github.com/dezfowler/MvcDesignMode">https://github.com/dezfowler/MvcDesignMode</a></p>
<p>There's the main MvcDesignMode library and an example MVC app based on the standard template site which has a few design time controllers named "...Designer" rather than "...Controller". When not in design mode this should prevent them ever being accidentally accessed, provided you're using the default controller factory. I have the code to enable design mode in the App_Start of Global.asax.cs and it looks like this:</p>
<pre><code class="c#">bool designMode = Convert.ToBoolean(ConfigurationManager.AppSettings["DesignMode"]);
if (designMode)
{
DesignMode.Activate(typeof(HomeController));
}
else
{
AreaRegistration.RegisterAllAreas();
RegisterRoutes(RouteTable.Routes);
}</code></pre>
<p>Here I'm just using a boolean configuration setting in web.config to turn the mode on and off, but how you choose to do it is up to you. If design mode is activated the standard application startup stuff is skipped, mainly because design mode uses a standard set of routes. Any links in your pages built using custom routes won't work correctly, but the point of design mode isn't to be able to navigate around the site as normal; it's that you can jump straight to a particular page in one click. I’m passing a type in to the Activate method simply to serve as a pointer to the assembly where my design time controllers reside.</p>
<p>Once in design mode the design time controller factory hunts down the special controllers ending with "...Designer" and effectively indexes them, pulling out action method names and the text from any Description attribute defined on the methods. Using this index it builds up a special site map listing each controller and its action methods as links.</p>
<h3>Conclusion</h3>
<p>Have a look at the solution on Github or have a go implementing something similar yourself. On a number of recent projects I could see having a setup like this saving a lot of time and effort, not just for styling and markup but probably for developing simple JavaScript as well. I'll definitely be using it myself in all my future MVC projects.</p> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com1tag:blogger.com,1999:blog-36201586.post-32976444445107327442010-11-18T22:07:00.001+00:002010-11-18T22:07:23.572+00:00Pretty print hex dump in LINQPad<p>Was messing around with byte arrays a lot in LINQPad this week and really wanted a pretty hex print of the contents of the array so I wrote this:</p> <pre><code class="c#">public static object HexDump(byte[] data)
{
return data
.Select((b, i) => new { Byte = b, Index = i })
.GroupBy(o => o.Index / 16)
.Select(g =>
g
.Aggregate(
new { Hex = new StringBuilder(), Chars = new StringBuilder() },
(a, o) => {a.Hex.AppendFormat("{0:X2} ", o.Byte); a.Chars.Append(Convert.ToChar(o.Byte)); return a;},
a => new { Hex = a.Hex.ToString(), Chars = a.Chars.ToString() }
)
)
.ToList()
.Dump();
}</code></pre>
<p>You use it like this:</p>
<pre><code class="c#">byte[] text = Encoding.UTF8.GetBytes("The quick brown fox jumps over the lazy dog");
HexDump(text);</code></pre>
<p>...and it will produce output akin to:</p>
<p><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="hexdump" border="0" alt="hexdump" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoDCVPod4w25dwAeKoolOpZpJ0eqPDDUjAyD0OD3-LjkjslYmYSbUSt0MrlIL1bEz7kexlB2nUXeYWWTmha91ULjFGPbeQbWcWYGPEUPWVrfcXrEC43jVtJ6NZm9BE2xKJW88T/?imgmax=800" width="476" height="109" /></p> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com1tag:blogger.com,1999:blog-36201586.post-46427400321932680092010-10-02T23:51:00.000+01:002010-10-02T23:52:39.633+01:00The Null Object pattern and the Maybe monad<p>Dmitri Nesteruk’s recent post <a href="http://www.codeproject.com/KB/cs/maybemonads.aspx" target="_blank">Chained null checks and the Maybe monad</a> struck a chord with me as I had messed about with <a href="http://derek-says.blogspot.com/2010/07/creating-light-weight-visitor-fluently.html" target="_blank">something similar</a> for performing a visitor-esque operation. I’ve glanced at a few posts about monads in the past however this is the first time I’ve had a proper look at one of them.</p> <p>The purpose of the <a href="http://www.haskell.org/all_about_monads/html/maybemonad.html" target="_blank">Maybe monad</a> is essentially to remove the need for null reference checking. If you try to perform some function on an object which turns out to be null you might get a null reference exception. If, however, you perform the function on a Maybe then if the object is null the function is never called. It’s particularly useful if you’re performing a long chain of functions on an object, any of which may return null. 
In these cases when the null is encountered the remainder of the chain is skipped resulting in more robust, better performing code.</p> <p>The implementations in .NET that I could find vary quite widely:</p> <ul> <li><a href="http://maybe.codeplex.com/" target="_blank">Maybe project</a> </li> <li><a href="http://code.google.com/p/lokad-shared-libraries/" target="_blank">Maybe monad in Lokad shared libraries</a> </li> <li><a href="http://sharpmalib.codeplex.com/" target="_blank">M<’a> Lib</a> </li> <li><a href="http://weblogs.asp.net/zowens/archive/2009/09/04/maybe-monad-my-c-version.aspx" target="_blank">Zack Owens’ version</a> </li> <li><a href="http://stackoverflow.com/questions/1196031/evil-use-of-maybe-monad-and-extension-methods-in-c" target="_blank">Random one from Stack Overflow (Judah Himango)</a> </li> </ul> <p>One aspect shared by most of these implementations, and which was pointed out in the comments of Dmitri’s post, is that they still end up doing all the null checking, it’s just hidden away. They are treating the “nothing” state as a value, effectively just creating a Nullable<T> which wraps reference types and then checking the HasValue at the beginning of each method call. I think a more elegant solution to this is to use the <a href="http://en.wikipedia.org/wiki/Null_Object_pattern" target="_blank">Null Object pattern</a>.</p> <p>A Null Object is a special inert type derived from our real class or a common base class. Each method is overridden by a version which has no effect. By wrapping any non-null objects we encounter in an instance of our real type and any nulls in an instance of our inert type we can continually call the methods of these types without fear of null reference exceptions occurring. 
Moreover, once we receive our inert type from one of the method calls, subsequent calls are made on that type, so we don’t need null checks at the beginning of our methods: the implementations we’re calling simply have no effect.</p> <h3>Example</h3> <pre><code class="c#">// Simple testing class
class Node
{
public int Number { get; set; }
public Node Parent { get; set; }
}
// Arrange
Node node = new Node
{
Number = 1,
Parent = new Node
{
Number = 2,
Parent = new Node
{
Number = 3
}
}
};
// Act
var third = node.Maybe()
.Apply(n => n.Parent)
.Apply(n => n.Parent)
.Return();
// Assert
Assert.IsNotNull(third);
Assert.AreEqual(3, third.Number);</code></pre>
<p>Here we've got a simple test class and object graph, and our code is trying to return the grandparent of node. First we use the Maybe extension method to create the Maybe object; after this we're calling methods on the Maybe object itself. The Apply method behaves like a Map method and applies the supplied Func to the subject of the Maybe, returning its result as a new Maybe object. Return then unwraps the Maybe and returns the subject object if there is one. If any of the functions applied along the chain return null, we'll end up with null coming back from Return.</p>
<h3>Implementation</h3>
<p>The basic structure is an abstract Maybe class with two derived classes; ActualMaybe which contains the real implementation and NothingMaybe which is the Null Object type. The implicit operator on Maybe is where any null is handled.</p>
<pre><code class="c#">public abstract class Maybe<T> where T : class
{
public static readonly Maybe<T> Nothing = new NothingMaybe<T>();
public static implicit operator Maybe<T>(T t)
{
return t == null ? Nothing : new ActualMaybe<T>(t);
}
}
class ActualMaybe<T> : Maybe<T> where T : class
{
readonly T _t;
public ActualMaybe(T t)
{
if (t == null) throw new ArgumentNullException("t");
_t = t;
}
}
class NothingMaybe<T> : Maybe<T> where T : class
{
}</code></pre>
<p>The implementation for Apply is as follows:</p>
<pre><code class="c#">// Maybe<T>
public abstract Maybe<TResult> Apply<TResult>(Func<T, TResult> func) where TResult : class;
// ActualMaybe<T>
public override Maybe<TResult> Apply<TResult>(Func<T, TResult> func)
{
return func(_t);
}
// NothingMaybe<T>
public override Maybe<TResult> Apply<TResult>(Func<T, TResult> func)
{
return Maybe<TResult>.Nothing;
}</code></pre>
<p>Apply takes the map function <var>func</var> which operates on the type T and returns some other type TResult. Apply itself returns the Maybe of TResult. </p>
<p>The ActualMaybe implementation simply calls func passing _t, which is the contained object, and returns the result of func. There is more going on here though: first, _t can't be null because of the check in the ActualMaybe constructor, so we don't need a null check; second, we return whatever comes out of func, but because the method returns a Maybe of TResult the implicit conversion takes place and any null coming out of func is replaced with Nothing.</p>
<p>The NothingMaybe implementation ignores func altogether and just returns a NothingMaybe of TResult using the static readonly Nothing field on Maybe<T>.</p>
<p>The ActualMaybe implementation of Return returns _t while the NothingMaybe implementation always returns null.</p>
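<p>Following the same pattern as Apply, the Return implementations described above look like this:</p>
<pre><code class="c#">// Maybe<T>
public abstract T Return();
// ActualMaybe<T>
public override T Return()
{
return _t;
}
// NothingMaybe<T>
public override T Return()
{
// T is constrained to class so null is valid here
return null;
}</code></pre>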
<p>I’ve implemented a couple of other useful methods including Do(Action<T>), If(Predicate<T>), Cast<TResult>() and AsEnumerable() as well as several overloads.</p>
<h3>Possibilities</h3>
<p>I think this Null Object approach could be combined with the <a href="http://en.wikipedia.org/wiki/Visitor_pattern" target="_blank">Visitor pattern</a> to achieve some extensibility although I’m not entirely sure how it would work or whether it would even be necessary.</p>
<p>Another possible extension is some kind of Collect method which would allow you to cherry pick particular objects from a graph and then would return an IEnumerable over just those objects at the end.</p>
<h3>Code</h3>
<p>I’ve put the code up on Github here:
<br /><a href="http://github.com/dezfowler/Monads" target="_blank">http://github.com/dezfowler/Monads</a></p> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-64563764019387963342010-09-18T14:11:00.001+01:002010-09-18T14:11:50.756+01:00ASP.NET custom error not shown in Azure<p>If you’re using a custom error handler in an ASP.NET Azure web role, for example, to return a branded error page you may find the custom page isn’t surfaced to the browser and instead you receive a standard IIS error. </p> <p>When your handler is setting the correct response status, relevant to the type of error e.g. 404, 500 etc, the default web role configuration means the error page content you supply will not be passed on. This is complicated by the DevFabric not using the same configuration, i.e. your custom error page will appear as expected when you’re testing in DevFabric. </p> <p>The configuration setting requiring tweaking is in the system.webServer section; setting httpErrors’ <a href="http://msdn.microsoft.com/en-us/library/ms690497(VS.90).aspx" target="_blank">existingResponse</a> attribute to “PassThrough” will ensure that, if any content is supplied with the ASP.NET error response, it is returned to the browser.</p> <pre><code class="xml"><configuration>
<system.webServer>
<httpErrors existingResponse="PassThrough"/>
</system.webServer>
</configuration></code></pre> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com1tag:blogger.com,1999:blog-36201586.post-40840904590430118922010-08-28T19:27:00.001+01:002010-08-28T19:27:52.625+01:00Aggregate full outer join in LINQ<p>I’ve recently been working on adding a feature to <a href="http://codeofrob.com/" target="_blank">Rob Ashton</a>’s <a href="http://autopoco.codeplex.com/" target="_blank">AutoPoco</a> project, a framework which enables dynamic creation of Plain Old CLR Object test data sets using realistic ranges of values. Rather than explicitly defining sets of objects in code, loading them from a database or deserializing them from a file the framework allows you to pre-define the make-up of the data set and then automatically generates the objects to meet your criteria.</p> <p>I had a requirement that, from some sets of possible values for particular properties of a type, I needed to create an instance for every variation of those values. Defining all the variations manually would take a long time, be difficult to maintain and error-prone. Dynamic generation seemed the way to go and after checking with Rob whether this was already a feature of AutoPoco and finding out it wasn’t, I proceeded to have a go at implementing a GetAllVariations method.</p> <p>The principal problem here is that we need to perform an operation analogous to a SQL full outer join on <em>n</em> sets of values. For example, given the following type:</p> <pre><code class="c#">public class Blah
{
public int Integer { get; set; }
public string StringA { get; set; }
public string StringB { get; set; }
}</code></pre>
<p>and the possible values:</p>
<pre>Integer: [ 1, 2, 3 ]
StringA: [ "hello", "world" ]
StringB: [ "foo", "bar" ]</pre>
<p>the output should be 12 objects with the following property values:</p>
<table><thead>
<tr>
<th>#</th>
<th>Integer</th>
<th>StringA</th>
<th>StringB</th>
</tr>
</thead><tbody>
<tr>
<th>1</th>
<td>1</td>
<td>hello</td>
<td>foo</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>hello</td>
<td>bar</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>world</td>
<td>foo</td>
</tr>
<tr>
<th>4</th>
<td>1</td>
<td>world</td>
<td>bar</td>
</tr>
<tr>
<th>5</th>
<td>2</td>
<td>hello</td>
<td>foo</td>
</tr>
<tr>
<th>6</th>
<td>2</td>
<td>hello</td>
<td>bar</td>
</tr>
<tr>
<th>7</th>
<td>2</td>
<td>world</td>
<td>foo</td>
</tr>
<tr>
<th>8</th>
<td>2</td>
<td>world</td>
<td>bar</td>
</tr>
<tr>
<th>9</th>
<td>3</td>
<td>hello</td>
<td>foo</td>
</tr>
<tr>
<th>10</th>
<td>3</td>
<td>hello</td>
<td>bar</td>
</tr>
<tr>
<th>11</th>
<td>3</td>
<td>world</td>
<td>foo</td>
</tr>
<tr>
<th>12</th>
<td>3</td>
<td>world</td>
<td>bar</td>
</tr>
</tbody></table>
<h3>Achieving this using LINQ</h3>
<p>A full outer join can be performed in LINQ as follows:</p>
<pre><code class="c#">var A = new List<object>
{
1,
2,
3,
};
var B = new List<object>
{
"hello",
"world",
};
A.Join(B, r => 0, r => 0, (a, b) => new List<object>{ a, b }).Dump();</code></pre>
<p>Note: I’m using the LINQPad Dump() extension method here.</p>
<p>Fairly straightforward: we just set both join keys to zero, which forces a set to be produced where every value in A is joined to every value in B. Ordinarily the join result selector would create a new anonymous type but I’m creating a new List here for reasons that will become obvious in a second.</p>
<p>We don’t know in advance how many sets of values we’re going to have; the user may want to set values for two or twenty properties. We need to be able to perform this same join for <em>n</em> sets, so we’ll be working with a collection of these value sets. We can achieve this by combining the join with an aggregate operation e.g.</p>
<pre><code class="c#">List<List<object>> sources = new List<List<object>>
{
new List<object>
{
1,
2,
3,
},
new List<object>
{
"hello",
"world",
},
new List<object>
{
"foo",
"bar",
},
};
sources.Aggregate(
Enumerable.Repeat(new List<object>(), 1),
(a, d) => a.Join(d, r => 0, r => 0, (f, g) => new List<object>(f) { g })
).Dump();</code></pre>
<p>Here <var>sources</var> could contain any number of List objects and those List objects, containing the raw property values, can also contain any number of items. The output of the operation will be an enumeration over every variation of the values in sources, each represented as a List (in this case containing three items, one for each of the sources). We seed the Aggregate with what we expect to get out, i.e. an IEnumerable of List objects. Our aggregating function is our join operation with a slight modification: our result selector creates a new List containing the result of the previous join (<var>f</var>) and then uses the collection initializer syntax to add one additional item (<var>g</var>) from the current set of values being joined on.</p>
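<p>To turn each variation back into an object we just project each inner List onto the target type. As a minimal sketch (not AutoPoco’s actual implementation), assuming the Blah type above and the same ordering of sources:</p>
<pre><code class="c#">using System.Collections.Generic;
using System.Linq;

public static class VariationExample
{
    // Builds one Blah per variation produced by the aggregate join above.
    // The casts assume sources are ordered Integer, StringA, StringB.
    public static List<Blah> BuildVariations(List<List<object>> sources)
    {
        return sources.Aggregate(
                Enumerable.Repeat(new List<object>(), 1),
                (a, d) => a.Join(d, r => 0, r => 0, (f, g) => new List<object>(f) { g }))
            .Select(values => new Blah
            {
                Integer = (int)values[0],
                StringA = (string)values[1],
                StringB = (string)values[2],
            })
            .ToList();
    }
}</code></pre>
<p>With the three sources above this yields the twelve instances from the table, in the same order.</p>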
<p>A relatively complex operation reduced to, effectively, a one-liner using LINQ. Snazzy.</p> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com2tag:blogger.com,1999:blog-36201586.post-44291802091912372752010-08-22T22:57:00.000+01:002010-08-23T00:22:34.906+01:00Roll your own mocks with RealProxy<p>These days there are more than enough mocking frameworks to choose from but if you need something a bit different, or just fancy having a go at the problem as an exercise, creating your own is easier than you might think. You don’t need to go anywhere near IL generation for certain tasks as there are a couple of types in the Framework which can get us most of the way on their own. </p> <p>.NET 4.0 has the <a href="http://msdn.microsoft.com/en-us/library/system.dynamic.dynamicobject.aspx" target="_blank">DynamicObject</a> class which can be used for this as it allows you to provide custom implementations for any method or property. However there is another class which has been in the Framework since 1.1 that can be used in a similar way.</p> <p><a href="http://msdn.microsoft.com/en-us/library/system.runtime.remoting.proxies.realproxy.aspx" target="_blank">RealProxy</a> is meant for creating proxy classes for remoting; however there’s no reason why we can’t make use of its proxy capabilities and forget the remoting part, instead providing our own mocking implementation. Let’s look at a simple example.</p> <h3>If it looks like a duck but can't walk it's a lame duck</h3> <p>If you're using dependency injection and are writing your code defensively you'll probably have constructors which look something like this:</p> <pre><code class="c#">public MyClass(ISupplyConfiguration config, ISupplyDomainInfo domain, ISupplyUserData userRepository)
{
if(config == null) throw new ArgumentNullException("config");
if(domain == null) throw new ArgumentNullException("domain");
if(userRepository == null) throw new ArgumentNullException("userRepository");
// ...assignments...
}</code></pre>
<p>The unit test for whether this constructor correctly throws ArgumentNullExceptions when it's expected to will require at least some implementation of ISupplyConfiguration and ISupplyDomainInfo in order to successfully test the last check for userRepository.</p>
<p>All we need here is something that looks like the correct interface; it needn't be a concrete implementation, or even work, because for these tests all we need is for it not to be null. Here’s how we could achieve this with RealProxy and relatively little code.</p>
<p>First we create a class inheriting from the abstract RealProxy:</p>
<pre><code class="c#">public class RubbishProxy : System.Runtime.Remoting.Proxies.RealProxy
{
public RubbishProxy(Type type) : base(type) {}
public override System.Runtime.Remoting.Messaging.IMessage Invoke(System.Runtime.Remoting.Messaging.IMessage msg)
{
throw new NotImplementedException();
}
/// <summary>
/// Creates a transparent proxy for type <typeparamref name="T"/> and
/// returns it.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <returns></returns>
public static T Make<T>()
{
return (T)new RubbishProxy(typeof(T)).GetTransparentProxy();
}
}</code></pre>
<p>That's all: effectively just the boilerplate implementation code for the abstract class, with one constructor specified and a static generic method for ease of use. We can then use it in our test methods like so:</p>
<pre><code class="c#">[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void ExampleRealWorldTest_EnsureExceptionOnNullConfig()
{
var myClass = new MyClass(null, null, null);
}
[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void ExampleRealWorldTest_EnsureExceptionOnNullDomain()
{
var config = RubbishProxy.Make<ISupplyConfiguration>();
var myClass = new MyClass(config, null, null);
}
[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void ExampleRealWorldTest_EnsureExceptionOnNullRepository()
{
var config = RubbishProxy.Make<ISupplyConfiguration>();
var domain = RubbishProxy.Make<ISupplyDomainInfo>();
var myClass = new MyClass(config, domain, null);
}</code></pre>
<p>Not bad for one line of code. How about something more complex?</p>
<h3>Making a mockery of testing</h3>
<p>The <a href="http://msdn.microsoft.com/en-us/library/system.runtime.remoting.proxies.realproxy.invoke.aspx" target="_blank">Invoke</a> method we overrode in RubbishProxy can perform any action we like including checking arguments, returning values and throwing exceptions. In mocking frameworks, the most common method of setting up this behaviour is using a fluent interface e.g.</p>
<pre><code class="c#">[Test]
public void ReadOnlyPropertyReturnsCorrectValue()
{
var mock = new Mock<IBlah>();
mock.When(o => o.ReadOnly).Return("thing");
var blah = mock.Object;
Assert.AreEqual("thing", blah.ReadOnly);
}</code></pre>
<p>Here the When call captures <var>o.ReadOnly</var> as an expression, determining which member was the invocation target and returning a Call object. The Call object is then used to set up a return value as in the example above, or to check the passed arguments (CheckArguments) or throw an exception (Throw). It can also be set up to ignore the call or, in the case of a method call, to apply any of those behaviours only when particular arguments are passed in.</p>
<pre><code class="c#">[Test]
[ExpectedException(typeof(ForcedException))]
public void MethodCallThrows()
{
var mock = new Mock<IBlah>();
mock.When(o => o.GetThing()).Throw();
var blah = mock.Object;
int i = blah.GetThing();
}
[Test]
public void MethodCallValid()
{
var mock = new Mock<IBlah>();
mock.When(o => o.DoThing(5)).CheckArguments();
var blah = mock.Object;
blah.DoThing(5);
}
[Test]
[ExpectedException(typeof(MockException))]
public void MethodCallInvalid()
{
var mock = new Mock<IBlah>();
mock.When(o => o.DoThing(5)).CheckArguments();
var blah = mock.Object;
blah.DoThing(4);
}</code></pre>
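<p>For a flavour of how the dispatch works, here's a heavily stripped-down sketch (not LiteMock's actual code): the Invoke override receives an IMethodCallMessage identifying the member and its arguments, looks up any configured behaviour and replies with a ReturnMessage. Note that property reads arrive as calls named "get_PropertyName".</p>
<pre><code class="c#">using System;
using System.Collections.Generic;
using System.Runtime.Remoting.Messaging;
using System.Runtime.Remoting.Proxies;

public class SketchMockProxy : RealProxy
{
    // Canned return values keyed by member name, set up before use.
    private readonly Dictionary<string, object> _returnValues = new Dictionary<string, object>();

    public SketchMockProxy(Type type) : base(type) { }

    public void SetReturn(string memberName, object value)
    {
        _returnValues[memberName] = value;
    }

    public override IMessage Invoke(IMessage msg)
    {
        var call = (IMethodCallMessage)msg;
        // A fuller implementation would match arguments, throw configured
        // exceptions and so on; here we just reply with any canned value.
        object result;
        _returnValues.TryGetValue(call.MethodName, out result);
        return new ReturnMessage(result, null, 0, call.LogicalCallContext, call);
    }
}</code></pre>
<p>So <var>mock.When(o => o.ReadOnly).Return("thing")</var> ultimately boils down to storing a value against "get_ReadOnly" and handing it back from Invoke.</p>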
<p>Source code for the example mock framework is up on GitHub here:
<br /><a href="http://github.com/dezfowler/LiteMock">http://github.com/dezfowler/LiteMock</a></p> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-23437765750941026662010-08-11T23:32:00.000+01:002010-08-12T01:20:02.338+01:00Model binding and localization in ASP.NET MVC2<p>When creating an MVC site catering for different cultures, one option for persisting the culture value from one page to the next is by using an extra route value containing some form of identifier for the locale e.g.</p> <p>/en-gb/Home/Index <br />/en-us/Cart/Checkout <br />/it-it/Product/Detail/1234 </p> <p>Here I'm just using the Windows standard culture names based on RFC 4646 but you could use some other standard or your own custom codes. This method doesn’t rely on sessions or cookies and also has the advantage that the site can be spidered in each supported language.</p> <p>Creating a base controller class for your site allows you to override one of its methods in order to set your current culture. For example, if you amend your route configuration to <var>"{locale}/{controller}/{action}/{id}"</var> you could do the following:</p> <pre><code class="c#">string locale = RouteData.GetRequiredString("locale");
CultureInfo culture = CultureInfo.CreateSpecificCulture(locale);
Thread.CurrentThread.CurrentCulture = culture;
Thread.CurrentThread.CurrentUICulture = culture;</code></pre>
<p>It's important to set both CurrentCulture and CurrentUICulture because ResourceManager, used for retrieving values from localized .resx files, will refer to CurrentUICulture whereas most other formatting routines use CurrentCulture.</p>
<p>Once our culture is set, when we output values in our views ResourceManager can pick up our culture-specific text translations from the correct .resx file, and dates and currency values will be correctly formatted. <var>String.Format("{0:d}", DateTime.Now)</var>, with "d" being the standard format string for a short date, will produce mm/dd/yyyy for en-US versus dd/mm/yyyy for en-GB.</p>
<p>This isn't the end of the story, however; the question arises of where in the controller to perform the culture setting. It can't happen in the constructor because the route data isn't yet available, so instead we could put it in an override of OnActionExecuting. This will seem to work fine for values output in your views but you'll come across a gotcha with model binding. Create a textbox in a form which binds to a DateTime and you'll end up with the string value being parsed using the default culture of the server. Take the US and UK dates example, where your server's default culture is US but your site is currently set to UK. If you try to enter a date of 22/01/2010 you'll get a model validation error because it's being parsed as the US mm/dd/yyyy and 22 isn't a valid value for the month. Model binding happens before OnActionExecuting, so that's no good.</p>
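<p>For reference, the route registration this relies on would look something like the following in Global.asax.cs (a sketch; the default locale value is illustrative):</p>
<pre><code class="c#">using System.Web.Mvc;
using System.Web.Routing;

public static class LocaleRoutes
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // The locale segment comes first so every URL carries the culture.
        routes.MapRoute(
            "Default",
            "{locale}/{controller}/{action}/{id}",
            new { locale = "en-gb", controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}</code></pre>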
<p>A bit of digging around in Reflector and the Initialize method comes out as probably the best candidate for this as it is where the controller first receives route data and it occurs before model binding. We end up with something like (exception handling omitted for brevity):</p>
<pre><code class="c#">protected override void Initialize(RequestContext requestContext)
{
base.Initialize(requestContext);
string locale = RouteData.GetRequiredString("locale");
CultureInfo culture = CultureInfo.CreateSpecificCulture(locale);
Thread.CurrentThread.CurrentCulture = culture;
Thread.CurrentThread.CurrentUICulture = culture;
}</code></pre>
<p>Both model binding and output of values will now be using the correct culture.</p>Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com1tag:blogger.com,1999:blog-36201586.post-27259145097669364382010-07-18T11:30:00.001+01:002010-07-18T23:52:00.515+01:00Creating a light-weight visitor, fluently in C#<p>In object-oriented programming a common problem is performing some conditional logic based on the type of an object at run-time. For example, one form you may come across is:</p> <pre><code class="c#">public void DoStuff(MemberInfo memberInfo)
{
EventInfo eventInfo = memberInfo as EventInfo;
if(eventInfo != null)
{
// do something
return;
}
MethodInfo methodInfo = memberInfo as MethodInfo;
if(methodInfo != null)
{
// do something
return;
}
PropertyInfo propertyInfo = memberInfo as PropertyInfo;
if(propertyInfo != null)
{
// do something
return;
}
throw new Exception("Not supported.");
}</code></pre>
<p>The drawbacks to this are that you have to wrap the whole thing in a method to make use of the "bomb out" return statements, and that it involves quite a lot of code repetition which, as I’ve talked about previously, I'm not a fan of. Another example is a dictionary type->operation lookup:</p>
<pre><code class="c#">// set up some type to operation mappings
static readonly Dictionary<Type, Action<MemberInfo>> operations = new Dictionary<Type, Action<MemberInfo>>();
// probably inside the static constructor...
operations.Add(typeof(EventInfo), memberInfo =>
{
EventInfo eventInfo = (EventInfo)memberInfo;
// do something
});
operations.Add(typeof(MethodInfo), memberInfo =>
{
MethodInfo methodInfo = (MethodInfo)memberInfo;
// do something
});
operations.Add(typeof(PropertyInfo), memberInfo =>
{
PropertyInfo propertyInfo = (PropertyInfo)memberInfo;
// do something
});
// use it like this...
Type type = memberInfo.GetType();
Type matchingType = operations.Keys.FirstOrDefault(t => t.IsAssignableFrom(type));
if(matchingType != null)
{
operations[matchingType](memberInfo);
}</code></pre>
<p>The major drawback with this method is that you have to use IsAssignableFrom, otherwise it doesn't match derived types. In fact, the above example doesn't work if you just look up the type of memberInfo directly because we'll get types derived from EventInfo etc., not those types themselves. We also still need to cast to the type we want to work with ourselves, and enumerating the dictionary isn’t ideal from a performance point of view.</p>
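<p>That behaviour is easy to demonstrate: reflection hands back internal runtime-derived types, so an exact type comparison never matches the abstract base.</p>
<pre><code class="c#">using System;
using System.Reflection;

MethodInfo method = typeof(string).GetMethod("Trim", Type.EmptyTypes);

// The concrete type is an internal runtime subclass, not MethodInfo itself,
// so a dictionary keyed on typeof(MethodInfo) would miss it...
Console.WriteLine(method.GetType() == typeof(MethodInfo)); // False

// ...whereas IsAssignableFrom walks the inheritance chain and matches.
Console.WriteLine(typeof(MethodInfo).IsAssignableFrom(method.GetType())); // True</code></pre>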
<p>The GoF pattern for solving this is the <a href="http://en.wikipedia.org/wiki/Visitor_pattern">visitor</a> which I’ve <a href="http://derek-says.blogspot.com/2008/05/implicit-polymorphism-and-lazy.html">blogged about</a> in the past however this is rather heavy duty, especially if your "do something" is only one line. It is much more performant than the alternatives though, as it’s using low level logic inside the run-time to make the decision about which method to call, so that should be a consideration.</p>
<p>The next best alternative to the proper visitor is the first ...as...if...return... form, but we can wrap it up quite nicely with a couple of extension methods to cut down on the amount of code we have to write. Here’s a trivial example trying to retrieve the parameters for either a method or a property. Depending on the type we need to call a different method, so we identify that method using a fluent visitor:</p>
<pre><code class="c#">private Type[] GetParamTypes(MemberInfo memberInfo)
{
Func<ParameterInfo[]> paramGetter = null;
memberInfo
.As<MethodInfo>(method => paramGetter = method.GetParameters)
.As<PropertyInfo>(property => paramGetter = property.GetIndexParameters)
.As<Object>(o => { throw new Exception("Unsupported member type."); });
return paramGetter().Select(pi => pi.ParameterType).ToArray();
}</code></pre>
<p>The As extension attempts to cast “this” as the type specified by the type parameter T and if successful calls the supplied delegate. The overload used in the example above will skip the remaining As calls once one has been successful. There is a second overload which takes a Func<T, bool> rather than an Action<T> and will continue to try the next As if false is returned from the Func. The last As call, by specifying Object as the type, is a catch all and allows providing a default implementation or catering for an error case as shown above. The extensions are implemented like so:</p>
<pre><code class="c#">/// <summary>
/// Tries to cast an object as type <typeparamref name="T"/> and if successful
/// calls <paramref name="operation"/>, passing it in.
/// </summary>
/// <typeparam name="T">Type to attempt to cast <paramref name="o"/> as</typeparam>
/// <param name="o"></param>
/// <param name="operation">Operation to be performed if cast is successful</param>
/// <returns>
/// Null if the object cast was successful,
/// otherwise returns the object for chaining purposes.
/// </returns>
public static object As<T>(this object o, Action<T> operation)
where T : class
{
return o.As<T>(obj => { operation(obj); return true; });
}
/// <summary>
/// Tries to cast an object as type <typeparamref name="T"/> and if successful
/// calls <paramref name="operation"/>, passing it in.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="o"></param>
/// <param name="operation">Operation to be performed if cast is successful, must return
/// a boolean indicating whether the object was handled.</param>
/// <returns>
/// Null if the object cast was successful and <paramref name="operation"/> returned true,
/// otherwise returns the object for chaining purposes.
/// </returns>
public static object As<T>(this object o, Func<T, bool> operation)
where T : class
{
if (Object.ReferenceEquals(o, null)) return null;
T t = o as T;
if (!Object.ReferenceEquals(t, null))
{
if (operation(t)) return null;
}
return o;
}</code></pre>Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-69553994481712949442010-07-14T23:37:00.001+01:002010-07-15T00:47:15.315+01:00UTC gotchas in .NET and SQL Server<p>After doing some work with <a href="http://msdn.microsoft.com/en-us/library/system.datetime.aspx" target="_blank">DateTime</a> recently I stumbled across the interesting behaviour that a DateTime which is DateTimeKind.Unspecified will be treated as a DateTimeKind.Local whenever you try to perform some operation upon it. You get an “unspecified” DateTime whenever you don’t explicitly say it is Utc or Local. This makes sense because, when you do the following, in most cases what you intended was to use local time:</p> <pre><code class="c#">DateTime d1 = new DateTime(2010, 07, 01, 12, 0 ,0, 0);</code></pre>
<p>If the current timezone is UTC +01:00 here's what I get when working with the DateTime created above:</p>
<pre><code class="c#">d1.Kind; // => Unspecified
d1; // => 01/07/2010 12:00:00
d1.ToUniversalTime(); // => 01/07/2010 11:00:00
TimeZoneInfo.Local.GetUtcOffset(d1); // => 01:00:00</code></pre>
<p>Note it’s applied an offset when calculating the UTC value which, as the final GetUtcOffset call confirms, is +1 hour.</p>
<p>If what we actually wanted was a UTC time we need to explicitly specify the kind e.g.</p>
<pre><code class="c#">DateTime d2 = new DateTime(2010, 07, 01, 12, 0, 0, 0, DateTimeKind.Utc);
// or
DateTime d3 = DateTime.UtcNow;</code></pre>
<p>If you need to work with timezones other than UTC or the system timezone then you'll want to use <a href="http://msdn.microsoft.com/en-us/library/system.datetimeoffset.aspx" target="_blank">DateTimeOffset</a> rather than DateTime.</p>
<h3>SQL Server and SqlDataReader</h3>
<p>Another interesting gotcha arising from this is that the SQL Server <a href="http://msdn.microsoft.com/en-us/library/ms187819.aspx" target="_blank">datetime</a> data type is also timezone agnostic. Any datetime values retrieved through the SqlDataReader will be an “unspecified” kind DateTime. This means that, even if you're correctly using the C# DateTime.UtcNow or the SQL GETUTCDATE() to produce the values in the database, any conversion you perform on them after retrieval will shift them incorrectly according to the local timezone. Yikes!</p>
<p>There are two ways to deal with this.</p>
<p></p>
<h4>DateTime.SpecifyKind()</h4>
<p>The first is in C# using <a href="http://msdn.microsoft.com/en-us/library/system.datetime.specifykind.aspx" target="_blank">DateTime.SpecifyKind()</a>:</p>
<pre><code class="c#">DateTime d3 = DateTime.SpecifyKind(d1, DateTimeKind.Utc);
d3.Kind; // => Utc
d3; // => 01/07/2010 12:00:00
d3.ToUniversalTime(); // => 01/07/2010 12:00:00</code></pre>
<p>Which could be wrapped up in an extension method for ease of use e.g.</p>
<pre><code class="c#">public static class SqlDataReaderExtensions
{
public static DateTime GetDateTimeUtc(this SqlDataReader reader, string name)
{
int fieldOrdinal = reader.GetOrdinal(name);
DateTime unspecified = reader.GetDateTime(fieldOrdinal);
return DateTime.SpecifyKind(unspecified, DateTimeKind.Utc);
}
}</code></pre>
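<p>Usage then looks like any other reader call. Everything here other than the extension method itself is made up for illustration (connection string, table and column names):</p>
<pre><code class="c#">using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class OrderRepository
{
    // Assumes the SqlDataReaderExtensions class above is in scope.
    public static List<DateTime> ReadCreatedDatesUtc(string connectionString)
    {
        var results = new List<DateTime>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT CreatedDate FROM Orders", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Comes back with Kind == Utc so later conversions behave.
                    results.Add(reader.GetDateTimeUtc("CreatedDate"));
                }
            }
        }
        return results;
    }
}</code></pre>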
<h4>SQL Server 2008 datetimeoffset</h4>
<p>If you're using SQL Server 2008 you have the option of using the <a href="http://msdn.microsoft.com/en-us/library/bb630289.aspx" target="_blank">datetimeoffset</a> data type instead. This will store the +00:00 timezone internally and the SqlDataReader will then retrieve the value correctly as a DateTimeOffset. No need to muck about with Kind.</p>
<p>If you have an existing database using datetime you can CAST these as a datetimeoffset in your query which usefully uses an offset of +00:00 in this case. (It treats "unspecified" as UTC – tut!)</p>Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com2tag:blogger.com,1999:blog-36201586.post-24083354729327802062010-05-31T16:54:00.000+01:002010-05-31T17:13:00.057+01:00JavaScript-style Substring in C#<p>One thing that really bugs me when writing code is having to use unnecessary extra constructs to avoid exceptions or useless default values emerging. One such situation is trimming a string to a particular length e.g.</p> <pre><code class="c#">string sentence = "The quick brown fox jumps over the lazy dog";
string firstFifty = sentence.Substring(0, 50);</code></pre>
<p>I want the first 50 characters from the sentence but in this example we get an ArgumentOutOfRangeException because there aren’t 50 characters in sentence. Not too helpful and it's an easy mistake to make. To avoid the exception we have to do this:</p>
<pre><code class="c#">firstFifty = sentence.Length < 50 ? sentence : sentence.Substring(0, 50);</code></pre>
<p>Yikes! That’s a lot of extra rubbish when all I want is the equivalent of LEFT(sentence, 50) in SQL.</p>
<p>We can easily wrap this up in a "Left" method but chances are we’re going to need a “Right” too so instead we can go down the route JavaScript takes with its "slice" function. JavaScript’s string slice can take one integer argument which, if positive, returns characters from the start of the string and, if negative, returns characters from the end of the string. Adding an overload to allow it to take a padding character is probably sensible too. The end result looks like this:</p>
<pre><code class="c#">firstFifty = sentence.Slice(50);
// "The quick brown fox jumps over the lazy dog"
string firstTen = sentence.Slice(10);
// "The quick "
string lastTen = sentence.Slice(-10);
// "e lazy dog"
firstFifty = sentence.Slice(50, '=');
// "The quick brown fox jumps over the lazy dog======="
string lastFifty = sentence.Slice(-50, '=');
// "=======The quick brown fox jumps over the lazy dog"
<p>A lot more concise and quite useful.</p>
<pre><code class="c#">public static class StringExtensions
{
/// <summary>
/// Returns a portion of the String value. If value has Length longer than
/// maxLength then it is trimmed, otherwise value is simply returned. A negative
/// maxLength counts back from the end of the string.
/// </summary>
/// <returns>
/// String whose Length will be at most equal to maxLength.
/// </returns>
public static string Slice(this string value, int maxLength)
{
if (value == null) throw new ArgumentNullException("value");
int start = 0;
if (maxLength < 0)
{
start = value.Length + maxLength;
maxLength = Math.Abs(maxLength);
}
return value.Length < maxLength ? value : value.Substring(start, maxLength);
}
/// <summary>
/// Returns a portion of the String value. If value has Length longer than
/// length then it is trimmed otherwise value is padded to length with
/// shortfallPaddingChar.
/// </summary>
/// <returns>
/// String whose Length will be equal to length.
/// </returns>
public static string Slice(this string value, int length, char shortfallPaddingChar)
{
if (value == null) throw new ArgumentNullException("value");
string part = value.Slice(length);
int abslen = Math.Abs(length);
if(abslen > part.Length)
{
part = length < 0 ? part.PadLeft(abslen, shortfallPaddingChar) : part.PadRight(abslen, shortfallPaddingChar);
}
return part;
}
}</code></pre> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-40931083024095897842010-05-25T21:50:00.000+01:002010-05-25T00:31:51.911+01:00Silverlight 3 Behavior causing XAML error<p>A recent XAML error I received from a Silverlight Behavior had me going round in circles trying to find the cause for quite a while. I was getting an AG_E_PARSER_BAD_PROPERTY_VALUE in code similar to the following:</p> <pre><code class="xml"><Canvas x:Name="Blah">
<i:Interaction.Behaviors>
<myapp:SpecialBehavior Source="{Binding SomeProperty}" />
</i:Interaction.Behaviors>
...
</Canvas></code></pre>
<p>The error identified the myapp:SpecialBehavior line as the culprit but didn't give me any further information, so I proceeded to try and debug the binding to see what was going wrong. This didn’t shed any light on the cause; the binding was being created fine – the error was occurring later on.</p>
<p>This had me stumped for a couple of hours – I even tried setting up Framework source stepping only to find that the Silverlight 3 symbols weren’t yet available. In the end I stumbled upon the answer by chance – looking at the Canvas class in Reflector I noticed that it didn’t inherit from Control, only FrameworkElement via Panel. A quick check of my Behavior code and I found this:</p>
<pre><code class="c#">public class SpecialBehavior : Behavior<Control></code></pre>
<p>It was the Behavior itself that was invalid in the Interaction.Behaviors property due to the incompatible type parameter. I changed Control to FrameworkElement and everything started working fine.</p>Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0tag:blogger.com,1999:blog-36201586.post-37911895825901315972010-05-16T01:25:00.001+01:002010-05-16T01:25:43.585+01:00Running UI operations sequentially in Silverlight<p>I've been playing around with Silverlight recently and have come across a requirement to wait for the UI to do something before continuing. For example I have a UI with elements such as an image and text bound to properties of a model object. When the model object changes the interface updates to reflect this change but I need to perform an "unload" transition just before the model changes and a "load" transition just after it has changed.</p> <p>Instead of this:</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgg4E2TGqKB-G1aGiXS3nkCWPNJWwLK411KKO8c8Bp5ELDPv74EfXoSudsxvPg1hCgv-HmYFAUn9A4MH45voxc26Hm37DVFxrarui0W6tAXGjTGU9ZXX7cqRjxA1oj1VoinMjTl/s1600-h/before%5B6%5D.png"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="before" border="0" alt="before" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCniqLODSJo2eKVf7-kER3wbuwQBO3XhgnJeCDX5oc_FKyAnWkk7ELyicr5xaT3u9lonKOT9Gb8lq4bYhy76XrwemwarCskUK5RnAG_cta7_mSSiX_Xfg1bJ0Bu20qugfe42Cb/?imgmax=800" width="322" height="94" /></a> I want this:</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigNW3KPPz-YkfsG2P9H-aHO8G9euaNdPWP8x-xW6Obc_PL6gqUeRpVUoqhXYBOTkNihs7lTPg4fpNAyNSUIO0j4DwWK3cKDwq7UuhGWRoJO9k1v1GvxxYx-SBOlV2X94PtKmtS/s1600-h/after%5B8%5D.png"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; 
margin-left: auto; border-left-width: 0px; margin-right: auto" title="after" border="0" alt="after" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSTo3T23RoXInB1FIxQ1ZPFcL34zyoabpGgMD5-zsYf_LWAGSmcW_TInx4p-wC9SWPPAwk0Wyg3_V8Xg7aaDhvZupaxVv1Pk9f6-P1u3c_hbfKGYZe3cUjWQGJa4bU75vOIzdO/?imgmax=800" width="485" height="206" /></a> </p> <p>The orange arrows represent the transitions.</p> <p>I considered having BeforeChange and AfterChange events, hooking my transition storyboards up to them and then firing them in the model setter. The trouble with this is that the storyboards will be playing asynchronously so as soon as the BeforeChange one starts our code will have moved on and fired the AfterChange one. The result will be that we'll never see the "before" transition which will ruin the whole effect.</p> <p>Mike Taulty <a href="http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2008/04/13/10328.aspx">posted</a> about this same issue in 2008 highlighting that, to achieve the correct result, we end up needing to chain our code together using the Completed events of our storyboards. His solution was using some classes to wrap this up and I've taken a similar approach, except that I have the sequence defined fluently and have included the option of using visual states rather than explicitly defined storyboards.</p> <pre><code class="c#">private Album CurrentAlbum
{
get
{
return this.DataContext as Album;
}
set
{
if (CurrentAlbum != value)
{
new Sequence()
.GoTo(this, LayoutRoot, "VisualStateGroup", "AlbumUnloaded")
.Execute(() =>
{
this.DataContext = value;
})
.GoTo(this, LayoutRoot, "VisualStateGroup", "AlbumLoaded")
.Run();
}
}
}</code></pre>
<p>It ends up being a lot quicker to write the code and I think it's quite obvious by reading it what will happen. If the visual state group or states aren't defined then only the inner assignment occurs.</p>
<p>The source for the Sequence class is a bit big for this post so the gist is here: <a href="http://gist.github.com/402514" target="_blank">Sequence.cs</a> </p>
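<p>The core idea, though, fits in a few lines: each step is an action that receives a "next" continuation to invoke when it finishes (for example from a storyboard's Completed handler), and Run() chains them together. A simplified sketch, not the linked Sequence.cs:</p>
<pre><code class="c#">using System;
using System.Collections.Generic;

public class SequenceSketch
{
    // Each step calls the supplied continuation when it has finished.
    private readonly Queue<Action<Action>> steps = new Queue<Action<Action>>();
    private bool hasRun;

    public SequenceSketch Step(Action<Action> step)
    {
        steps.Enqueue(step);
        return this;
    }

    // Synchronous work is wrapped so it completes immediately.
    public SequenceSketch Execute(Action action)
    {
        return Step(next => { action(); next(); });
    }

    public void Run()
    {
        if (hasRun) throw new InvalidOperationException("Sequence already run.");
        hasRun = true;
        RunNext();
    }

    private void RunNext()
    {
        if (steps.Count == 0) return;
        steps.Dequeue()(RunNext);
    }
}</code></pre>
<p>A GoTo step would then wrap VisualStateManager.GoToState, invoking the continuation from the state group's CurrentStateChanged handler (or immediately if the state doesn't exist).</p>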
<h4>Considerations</h4>
<dl><dt>Deferred execution </dt><dd>The storyboard or visual state change Completed event we're waiting for may never happen - do we try to execute the next steps anyway? I’ve taken the approach of firing off the next step in the destructor of the class however it may make more sense to set some arbitrary timeout so if the transition hasn’t completed after say 10 seconds we fire off the next step anyway.</dd><dt>Reuse </dt><dd>Should we allow a sequence to be created once and then reused many times - we could have an overload of Run() that takes a context object and passes it on to each of the steps. Could run into issues with people using closures like I do in the example. I’ve stuck with single use in the class, throwing an exception if Run() is called a second time.</dd></dl> Derek Fowlerhttp://www.blogger.com/profile/09963865123124577525noreply@blogger.com0