Parsing a Comma Delimited List in SQL Server Scripts

There’s an external guy we use for really complex SQL. We have a report which has been the ruin of many a developer, and after many failures, it became apparent that our developers could not do this. That’s when you need to call in a proper SQL merchant: someone who thinks in terms of sets, rather than while loops.

But I digress. In one of his scripts, I saw something I found very interesting. He wrote a few short lines of code which parse a comma-delimited list (passed in as a parameter). Now, I’ve seen many a function created for this, and they all look roughly like what I would write myself. But this guy… This is how he wrote it (inserting the result into a table variable called @HospitalIDTable):

	declare @HospitalIds CHAR(13) = '1,2,3,4,5,6,9' -- pretend this is a parameter
	declare @XML as xml
	declare @Delimiter as char(1) = ','

	set @XML = CAST(('<X>' + REPLACE(@HospitalIds, @Delimiter, '</X><X>') + '</X>') as xml)

	declare @HospitalIDTable table (HospitalId int)

	insert into @HospitalIDTable
	select N.value('.', 'INT') AS HospitalId
	from   @XML.nodes('X') AS T(N)

If you output the @XML variable, you will see:

	<X>1</X><X>2</X><X>3</X><X>4</X><X>5</X><X>6</X><X>9</X>
Now, the final SELECT statement – I didn’t even understand it at first. I know N is the prefix which makes a string literal unicode, but value??? I need to research that. Back in a tick.

OK. So value is one of the XML data type methods, and it takes an XQuery expression. I should have known that, because I’ve used it before. What threw me was the N prefix – except it isn’t the unicode prefix at all. N here is the column alias declared in the nodes clause, AS T(N), so N.value('.', 'INT') reads each X node’s text as an int. It is as cool as heck, and if you do a SELECT on the @HospitalIDTable table variable, you get your list as a column of data.

Very concise and cool.

Experts like this guy make light work of complex SQL. We’ve had several devs look at his scripts, and everyone agrees that their relationship with SQL is not on the same level as this guy’s. We’re front-end/middle-tier devs. What did you expect?!

Why Cat?

Sometimes, when newcomers come to functional programming, they see things which may seem strange to them. Things which don’t make sense from an object-oriented perspective. I’m going to take an example of this and show you what I mean. Let’s say we take the following Javascript function cat, which takes multiple arrays and merges them all into one (I am using the Ramda library to assist; anything with the R prefix is a Ramda function):

var cat = function () {
  var head = R.head(arguments);
  if (head) {
    return head.concat.apply(head, R.tail(arguments));
  }
  return [];
};

The question may be posed: why bother with that function when Javascript has concat built in as a method on the Array prototype? The big difference is that concat is called on the object (the instance array), whereas cat operates on whatever is passed to it:

// concat
[2, 1].concat([7, 6, 3, 4], [9, 6, 8]);
// returns [2, 1, 7, 6, 3, 4, 9, 6, 8]

// cat
cat([2, 1], [7, 6, 3, 4], [9, 6, 8])
// also returns [2, 1, 7, 6, 3, 4, 9, 6, 8]

It’s all about composability. Functional programming is a very “composable” paradigm. Functions (like cat) operate on whatever is passed to them as parameters. They do not operate on objects upon which they are methods. They are “free-standing” in nature. Let’s see cat get composed as part of a bigger pipeline. Here, we have a function called splat. splat basically spreads an array out into the arguments of the wrapped function, similar in spirit to LINQ’s SelectMany:

var splat = function (fun) {
  return function (array) {
    return fun.apply(null, array);
  };
};
Now, let’s say we want to use cat to merge some arrays. But the arrays are buried inside a larger array. Here’s how we could solve that:

var flatCat = splat(cat);
var arraysToMerge = [[7, 6, 3, 4], [2, 1], [77, 2, 45]]; // the arrays to merge are the elements of this larger array
var result = flatCat(arraysToMerge);
// result will be [7, 6, 3, 4, 2, 1, 77, 2, 45];

This works nicely with cat, but you cannot pass Array.prototype.concat into splat. It just won’t work. Its object-focused nature is its undoing: apply hands it null as the this value, and concat needs an array to be called on. Run it yourself with concat, instead of cat, to see it bomb out, and you’ll understand why it failed.
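To see the whole thing run end to end, here is a self-contained sketch. I’ve swapped the Ramda helpers for a plain slice on arguments so you can paste it straight into Node; cat and splat are otherwise the same as above:

```javascript
// cat without Ramda: R.head/R.tail replaced with slicing the arguments object
function cat() {
  var args = Array.prototype.slice.call(arguments);
  var head = args[0];
  if (head) {
    return head.concat.apply(head, args.slice(1));
  }
  return [];
}

// splat spreads an array out as the arguments of the wrapped function
function splat(fun) {
  return function (array) {
    return fun.apply(null, array);
  };
}

var flatCat = splat(cat);
console.log(flatCat([[7, 6, 3, 4], [2, 1], [77, 2, 45]]));
// [ 7, 6, 3, 4, 2, 1, 77, 2, 45 ]

// And here is concat bombing out: apply passes null as `this`,
// and Array.prototype.concat cannot be called on null
try {
  splat(Array.prototype.concat)([[2, 1], [7, 6]]);
} catch (e) {
  console.log(e.name); // TypeError
}
```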

So, whilst our first glance at cat seemed to size it up as a pointless refactoring to achieve the same thing as Array.prototype.concat, I hope I’ve demonstrated that by tweaking the concept into a function which stands alone (separate from any “owning” object), we can use it as a composable part in a larger operation. Just one cog in the functional pipeline.

Just remember, cat operates on whatever is passed to it.

Consuming a WebAPI Endpoint on Localhost from the Visual Studio Android Emulator

It is highly likely that any Xamarin Android application that you build will be consuming a RESTful API. And when you are developing the application, you don’t want to have to deploy that API to the Internet somewhere. You just want to run it up on localhost. It will make for a much faster development experience, as you will be able to iterate a lot quicker.

There’s not an abundance of help from the community on this and it took me about a day to figure it all out. So, hopefully this post will short-circuit the task for you.

My first port of call was this great post by the formidable Brian Noyes.

But it did not get me the whole way there. He left out a few important details, such as the fact that you need to run Visual Studio (“VS”) with elevated privileges to host the API on an IP address other than localhost. Otherwise, you will get the following error message: Unable to connect to web server ‘IIS Express’.

So let’s get to it…

You’ll need to install the Visual Studio Emulator for Android, which you can read about here. Note that it uses Hyper-V. This is great, but has a consequence: if you have VMware Player on your machine, you can forget about running any VMware VMs, as they are incompatible with Hyper-V. It is either one virtualization technology or the other. As the Visual Studio Emulator for Android is so good and fast, I ditched VMware on my system.

So, here are the steps you need to do and I’ll assume you have already created an ASP.NET Core WebAPI project running on the full .NET framework (and not .NET Core).

  1. In this step, you need to configure your WebAPI project to serve up the API on a particular IP address (which is not localhost). Your Xamarin app needs to hit that IP address. (Obviously, when production-ready, it will hit the full URL of your production deployment of the Web API application. And I’m guessing you have already worked out that the localhost of your Android app is the Android device itself, so localhost won’t help you.) To host the Web API on such an IP address, open the applicationhost.config file, which lives in the hidden folder called .vs in the root directory of your ASP.NET Core solution (in a subfolder called config). Look for the bindings element inside the site element for your site and add a second binding; something like:
           <binding protocol="http" bindingInformation="*:57971:localhost" />
           <binding protocol="http" bindingInformation="*:57971:" />

    Note that the second binding leaves the host part empty, so IIS Express will answer on the machine’s IP addresses as well as localhost – I will come back to the IP address in the next step. Remember, you have to run Visual Studio with elevated privileges to host the Web API on an address other than localhost. So when you fire up Visual Studio, do it with Run as Administrator.

  2. From Noyes’ post: “With the VS Android Emulator running, click on the double caret >> button in the side toolbar. This opens up the settings UI, one of which is for Network.” Now, look through the Desktop Adapters for the one carrying the IP address of the emulator’s internal switch. For me, that was Desktop Adapter #4. It may have a different number for you, but whichever adapter has that IP address is the one you have to use. So, this step is really more to confirm which of the Desktop Adapters has the IP address. The mistake I made was that Noyes’ article stressed the usage of Desktop Adapter #2. But it is the IP address which is important, not which adapter number it has. Once you have confirmed which adapter has the IP address, you can move to the next step. Oh, and obviously all calls to your local WebAPI in your Xamarin app should be made to that IP address.
  3. The last thing you need to do is open a port in your dev machine’s firewall for the port on which your WebAPI project is listening. So, if you are running it on IIS Express on port 57543, then you need to create an inbound firewall rule permitting TCP connections to that port. This makes perfect sense, because the emulator is, to all intents and purposes, a different machine. It’s a standalone virtual machine running in Hyper-V, and it is calling into your dev machine’s localhost services externally.

There aren’t many steps, but the trick is figuring them out! Hopefully this saves you some time!

NDepend v2017

With NDepend v2017 being released recently, I thought I had to give it a test drive. Whilst I’m not an architect, sometimes I have to do a bit of architecture. Especially with home projects. It’s not like I can call Architects-R-Us and have them send an Architect around to produce an architecture for every app idea I have. And so I have a slowly growing interest in this space. This time I thought I’d write more of a tutorial about how to get going with NDepend.

Once you’ve downloaded NDepend and got the exe on your hard drive, you fire it up and point it at the sln file for a Visual Studio solution which you want to profile. It will pick up the projects in your solution and the dialog will look a little like this:

Figure 1 – Analyze Assemblies dialog

Click the Analyse 10 .NET Assemblies button and NDepend will perform its analysis and come back with another dialog:

Figure 2 – Analysis Complete dialog

The other thing it did was to launch an html NDepend Report Summary, because I had the Build Report checkbox ticked in the first dialog. More on that report summary later. So I clicked on the Show NDepend Interactive Graph button:

Figure 3 – Assemblies Graph

A dependency graph is displayed, and it is interactive! I clicked on the UI node (far left) and it lit up the map with the dependent assemblies. More importantly, it showed a couple of windows, yellow in colour, positioned at the left and foot of the screen. The window at the foot has help topics relevant to the current context. The window on the left is the most interesting: it is scoped to the selected graph node and provides a heap of at-a-glance information about that node and its descendants. Let’s try again and click a node lower in the graph.

Figure 4 – Assemblies Graph with Descendant Selected

Figure 5 – The Dashboard

You can see that the ancestor nodes in the graph are green and the descendant nodes are blue. And again, the pop-up dialog (this time on the right) contains information scoped to the selected graph node (in this case, HomeLibrary.UiModel, a project which contains my DTOs). Despite a couple of issues in the code, I’m happy with the information which this graph is telling me. The dependencies are pretty much what I intended, and I’m getting a good 50,000ft view of them. So let’s poke around a bit more. I’ll click on the Dashboard tab at the foot of the main pane. This shows me a plethora of information, and the thing I really like about it is the use of colour. Red – bad. Yellow – warn. Green – pass. At a glance, I can get a feel for the health of the application.

Figure 6 – Quality Gates

Figure 7 – Rule Violation

I can see I have 2 Quality Gate fails. That basically means I cannot release my application – not until that number goes down to 0. Lucky this is just a sample application. So, let’s take a look at what I need to fix! After clicking on the little red “2” link, I found that 1 of the 2 quality gates was a collection of 6 rules which I had transgressed.
There was one which immediately interested me, so I investigated further. It was called Attribute class name should be suffixed with ‘Attribute’. Absolutely! I would not want to do otherwise. So, I double-clicked that rule to discover the culprit:

Looks like NoReorder was a class generated by JetBrains (probably ReSharper). Being a generated class, I feel like I’m cool with this. And so here is where things get interesting. I’m going to try something here. If you look at the image immediately above, you can see some text which looks like a LINQ query. That’s called CQLinq, and it is what NDepend uses to return the results which show whether rules have been violated or not. As an extensibility point, NDepend allows you to modify these queries and save them – and it saves them per project. So, I made a quick and small modification to the query, taking advantage of the intellisense which is available in that context:

	// <Name>Attribute class name should be suffixed with 'Attribute'</Name>
	warnif count > 0 from t in Application.Types where
	   t.IsAttributeClass &&
	  !t.NameLike (@"Attribute$") &&
	   t.FullName != "KesselRun.HomeLibrary.UiModel.Annotations.NoReorder"
	select new {
	   Debt = 5.ToMinutes().ToDebt(),
	   Severity = Severity.Major
	}

I added the line of code t.FullName != "KesselRun.HomeLibrary.UiModel.Annotations.NoReorder". When I saved the query, that rule was no longer violated. I’m not sure if that is NDepend’s recommended way of handling it, but it is pretty cool that you can have such fine-grained control.

Moving on from Quality Gates, I want to take a look at a new feature in NDepend 2017 called Technical Debt. They’ve taken this well-known concept and applied it in an innovative way in NDepend. You can see from the Dashboard screenshot above a section called Debt. Scoped to the overall solution, my app is receiving a B. But you can also see how this value changes depending on where your context is scoped. If you take a look at the 2 dependency graph screenshots above which have the yellow information windows, you will have noticed a Debt Rating item, scoped to whatever I had clicked. So what is this?

In a nutshell, they have used some algorithms to quantify the technical debt for each issue and rule violation. And that makes sense: if NDepend has found a problem, presumably there will be a time associated with fixing it. Even cooler, they have added an interest component: every year the problem remains unfixed, the “time to fix” grows. The debt grows with interest! That’s darn smart, because that’s how it works in the real world. The longer a code smell or bad design hangs around, the harder it becomes to fix. Quantifying debt like this is great, because you can use it to buttress the business case for fixing bad code (which, no doubt, you have inherited :)).

As with all things NDepend, the Debt data can be viewed from many different perspectives:

Figure 8 – Technical Debt Representations

The last feature I want to mention is the html report that you can generate with NDepend when it performs its analysis. The easiest way to do this is to check the Build Report checkbox (under the Analyze button – see Figure 1 at the top of this post for that checkbox):

Figure 9 – Build Report

The report is kinda like the dashboard, but in html format. Here’s an online example of a report from an analysis that was run on NodaTime. In addition to the sections of the report (such as Diagrams, Quality Gates Summary, Rules Summary) on the front page, there is a fly-out menu from the left which gives you even more information. This includes items such as Dead Code, Hot Spots and even Trend Charts. This report is something you can host on a server, so it can easily be made available to stakeholders etc. in your organisation.

As an overall observation, I should also point out that the UX in NDepend is really innovative and clever. When you have that much information coming at you, it would be very daunting and confusing if the user interface were not good. But the NDepend user interface is excellent. It’s very intuitive: navigating around is so easy, and you only have to use it for half an hour or so before you feel really comfortable with all that data. That is a small amount of time in which to get on top of a code analysis tool like NDepend. As time goes by, you become even more comfortable with it and stumble upon more of its hidden gems. And that is all before even referring to the documentation on the NDepend site, which is very detailed and complete.

There’s so much more about this tool which could be covered, such as Continuous Integration, Visual Studio Team Services build integration, Code Diffs… I’ve only really scratched the surface.

For people like me, whose role slowly bleeds its way into the architecture space, we need all the help we can get. Tools like NDepend really help with this. Not only is it a hands-on tool which gives a great overview of the health of your codebase, it also acts as a tutor: it gives advice and teaches with problem descriptions and suggestions. ReSharper improved my code skills. NDepend will improve my design/architecture skills.

Dates in Javascript and UTC in the Browser

So dates are hard. Computer languages have varying success in implementing them. Take C#. If dates in C# were awesome, Jon Skeet would have had no cause to write Noda Time. I’m going to write a couple of posts about Javascript and dates, as they have caused me quite a bit of pain in recent months. Today, I will kick off with the UTC story (or lack thereof) for dates in Javascript (in the browser, anyway).

I’ll use C# to make a few comparisons in my analysis, just to highlight what is lacking.

As a really brief examination of what C# can do with dates in the context of UTC, just run the following:

var nowUTC = DateTime.UtcNow;
Console.WriteLine(nowUTC.ToLongDateString() + " " + nowUTC.ToLongTimeString() + " " + nowUTC.Kind);

You’ll see that you have a DateTime value which represents that point in time in UTC and has a Kind of Utc (not Local). A DateTime can’t tell you what timezone it is in; it just knows whether it is local or UTC. Timezones (as distinct from offsets) are political and shift with the policies of governments, so it is not trivial to build them into a computer language. Note: DateTimeOffset is generally a better struct than DateTime, as it contains the UTC time and the offset from UTC. But the point is, here we have a UTC value representing a point in time, and what that time is with a +0 UTC offset.

Now, lets turn our mind to Javascript. If you were to create a Date object, using the new keyword, and write that out to the console, it would look something like this:
Fri Jan 27 2017 22:04:36 GMT+1030 (Cen. Australia Daylight Time).

You can see there’s a time offset and even a helpful addition of the name of the timezone in brackets. Yay for Javascript. Now try and create a Javascript date object that is UTC (making sure you are located in a timezone other than the Zulu timezone, of course). Here’s a hint. You can’t. Not really, anyway.

But what about all those things you read on Stack Overflow, like toISOString, and methods on Date instances such as getUTCFullYear etc.? Yes, let’s talk about that.

First, I will dispense with toISOString. Some may say that to get UTC, you can just write:

var now = new Date();
var utcnow = now.toISOString();

The fact is, toISOString returns a string representation of UTC, not a Date object. Now you have a string. So my assertion holds.

What about a bit of sneaky epoch shifting? Say, we wrote something like this:

var now = new Date();
var now_utc = new Date(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate(),  now.getUTCHours(), now.getUTCMinutes(), now.getUTCSeconds());

Have we achieved a UTC Date in Javascript yet? I would say no. If you log that Date object out to the console, it will look a little like this:
Fri Jan 27 2017 11:34:36 GMT+1030 (Cen. Australia Daylight Time)
Look familiar? Sure, it’s 10.5 hours before the actual local time, but it still has the same local offset and timezone of the browser. So what happens if you now give that to a component which shifts the epoch again based on the offset? Now we are behind UTC, and we still have a Date object with the same offset. UTC has turned into a moving target, always leaping back in time by 10.5 hours.
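You can watch that moving target in action. Here’s a small sketch (the helper name toFakeUtc is mine) that applies the getUTC* shift twice; in any zone other than UTC, each application steps the Date back by the local offset again:

```javascript
// toFakeUtc is a hypothetical helper wrapping the getUTC* trick above
function toFakeUtc(d) {
  return new Date(
    d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate(),
    d.getUTCHours(), d.getUTCMinutes(), d.getUTCSeconds()
  );
}

var now = new Date();
var once = toFakeUtc(now);   // looks like UTC, but still carries the local offset
var twice = toFakeUtc(once); // shifted back by the offset a second time

// Outside the UTC zone, each line drifts further from the real time:
console.log(now.toString());
console.log(once.toString());
console.log(twice.toString());
```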

But what about the static UTC method of the Date constructor? Something like:
var utc = Date.UTC(1900, 8, 18, 0, 0, 0);
I hope you like milliseconds, because that is what that method returns: the number of milliseconds from Jan 1, 1970 (a negative number here, since 1900 predates the epoch). And you can always pass that into a Date constructor, but you’ll just get a Date object with the browser offset and timezone.

And there’s one more scenario I’d like to cover. What if we call the Date constructor as a function? That is, without the new keyword?

var nowFromFunction = Date();

Sorry to disappoint, but that just returns a string, and the string is local time as well. So, not a Date object and not UTC.
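Pulling those dead ends together, here is a quick sketch you can run to confirm what each approach actually hands back:

```javascript
var now = new Date();

// toISOString: a string representation of UTC, not a Date
console.log(typeof now.toISOString()); // string

// Date.UTC: milliseconds since Jan 1, 1970, not a Date
console.log(typeof Date.UTC(1900, 8, 18, 0, 0, 0)); // number

// Date called as a function: a string in local time, not a Date
console.log(typeof Date()); // string

// Feeding Date.UTC's milliseconds back into the constructor just gives
// a Date object stamped with the browser's own offset and timezone again
var roundTripped = new Date(Date.UTC(1900, 8, 18, 0, 0, 0));
console.log(roundTripped instanceof Date); // true
```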

Now, it is true that Javascript dates track UTC internally (a Date is really just a count of milliseconds since the Unix epoch). However, that is internals, and components such as the various HTML 5/Javascript datepickers don’t process internals.

Thus, unless someone can prove me wrong, I am asserting that you cannot create a UTC Date object in Javascript. Not like the way you can in C#.

Here’s a plunk to play with: Date plunk.

Oh and by the way, I was wondering why I did not pick this up when I learnt Javascript. So, I went back to the source – the Flanagan book. There is not a peep in there about how dates behave in the browser, vis-a-vis always stamping them with the local timezone and offset.

As I said at the outset, Dates are hard.