Hiring Software Engineers in Cambridge

Thanks for taking a look. As you’ll have gathered from the headline, I’m hiring software engineers to join me and David Priddle in our new Cambridge office. I’ve put a few salient details below and, if you’d like to find out more, I’d love to hear from you. Please drop me an email at [email protected].

The company is MIG Global, a smallish, London-based market research company that has been operating since the mid-noughties under the name Morar Consulting. The team we’re building in Cambridge is concerned with two of their products:

  • The first is a long-standing survey platform that is being rebuilt and enhanced,
  • The second, and the one I’m hiring for here, is a brand new data visualisation and analysis product.

We’re very much aiming to build a best-in-class offering so, to that end, I’m looking for two mid- to senior-level software engineers. Ideally you’ll have some full-stack experience, and you’ll work with me and a UX engineer to deliver our new product.

Technology-wise, there are still some decisions to be made, but the overall platform will be C# – and very likely F# – with ASP.NET Core, TypeScript, D3, HTML5, SQL Server, RabbitMQ, and possibly a NoSQL store for caching. Wonderfully, we are not encumbered by the need to support older versions of Internet Explorer.

The office itself will be a small, friendly affair on the Science Park. We’re hoping to be in by the end of this week, at which point you’re welcome to stop by for a tour.

I mentioned that MIG is headquartered in London, so naturally you’re probably wondering whether you’ll need to spend any time there. The answer is yes, but it’s likely to average out at once every week or two.

Before I finish, I should also mention the package. Salary will obviously depend on your experience, but I can tell you with certainty that, for Cambridge, we are extremely competitive. We also offer good benefits and flexible working.

I’m very happy to provide more detail on any of the above so if you are interested drop me a line. Once again, my email address is [email protected].

If you have a website or a LinkedIn profile, please feel free to include a link, but don’t worry about a CV unless you already have one to hand. Mostly the email is just so I can arrange a time to speak with you, so short and sweet is fine.

Thanks for taking the time to read and consider!

Follow-up: a happy ending to the Visual Studio story – Microsoft team steps in to help

A couple of days ago I published a long post documenting the challenging experience I’d had trying to buy a new Visual Studio cloud subscription:

My sorry tale of trying and failing to buy a Visual Studio cloud subscription

Well, I’m happy to report that, after the efforts of a number of awesome people at Microsoft, I’ve managed to successfully activate my Visual Studio subscription and I’m now up and running again with Windows 10, Visual Studio, and (shortly) SQL Server installed and functioning correctly.

So, this time around, let me tell a happier tale…

It starts when, having seen my plight, John Montgomery got in touch via twitter, looping in Buck Hodges:

John Montgomery and Buck Hodges of Microsoft see my plight on twitter and kindly reach out.

These two are both heavy hitters in Visual Studio and .NET in Redmond. John is Partner Director of Program Management, and Buck Hodges is a Partner Director Software Engineer. Having these guys on the case is already reassuring.

Buck’s initial suggestion didn’t quite work out but, after I got back in touch, he asked me to drop him an email so he could expedite the process. Things then started happening quite quickly.

Buck immediately looped in Andrew Brenner, Mike Tayebi, and Marc Paine to help. Marc is a Principal Software Engineer Manager, and Andrew is a Senior Program Manager.

Marc and Andrew got to work on finding a fix and, later in the evening, Marc emailed me instructions for a workaround they’d come up with. Due to timezone differences, and meetings the following morning, I couldn’t immediately try it out. As soon as I could I gave it a try and was overjoyed to find that I was now able to assign the subscription to myself via https://manage.visualstudio.com/ in a private browsing session:

I can now see and assign my Visual Studio Professional subscription to myself

I’m not quite home and dry yet but this, in itself, is serious progress. A few minutes later I received the following welcome email to activate my subscription:

Yes! Welcome email from Microsoft at last.

I click the Activate my subscription button (actually I copy the link into another private browsing session) and I’m able to successfully activate my subscription.

Now, when I log in to https://my.visualstudio.com/ I can access all my benefits:

Visual Studio subscription downloads.

Visual Studio subscription product keys.

(I’m loving the fact there’s an entry in there for Office 95 Professional, btw.)

I’m able to download and install both Windows 10 and Visual Studio 2015:

Success! BOOM!

That’s both of the above installed and running in a Parallels VM. I’m extremely happy. I’m also extremely impressed with the speed of the Windows 10 install – I didn’t time it, but it really was only a few minutes. Very cool.

Marc also tells me that they’ve figured out why I couldn’t see or manage any subscriptions and are discussing a solution so that in future the workaround won’t be necessary, as well as investigating some other failure points I identified. Andrew also spent time going through my previous post creating a list of issues that various teams need to address to avoid other people having a similar experience.

Honestly, I’m so impressed with the way these guys stepped up and helped out. I’d particularly like to thank John, Buck, Marc, and Andrew for all their work and time in getting me unblocked, and for taking ownership of the process.

This is absolutely consistent with my previous experience dealing with people who work for Microsoft. Once you find an in to the right person or group of people, past the seemingly impenetrable corporate exterior, what you find are smart people who really care about what they do and about delivering a great experience to customers, and who will go above and beyond to do that. I know they’re going to find and implement solutions for all the problems I had.

I’d also like to thank the UK licensing support team who, whilst they weren’t equipped to handle these kinds of problems, did try and help out as much as they could, as well as Jeff Lambert (Escalation Engineer), and Trevor Hancock (Senior Escalation Engineer), who got in touch to try and help, and followed up to see how I was getting on.

Lastly, I’d like to thank my friend Elisabeth Blees, who is a Program Manager in the Visual Studio team, and who checked in to see how I was getting on, followed up with Buck and his team, and updated me on what they’d been doing.

So I’m pleased to say I’m up and running, and help from Buck and his team really couldn’t have come at a better time: I’m giving a talk on performance tuning .NET and SQL Server web apps at tomorrow’s DDD event at Microsoft’s UK headquarters in Reading, and now I have everything I need to do that.

Thanks again to all!

My sorry tale of trying and failing to buy a Visual Studio cloud subscription

UPDATE: this story now has a happy ending – a bunch of awesome Microsoft people helped me activate my subscription so I could get up and running.

Let me start off by saying this isn’t a rant against Microsoft in general, or me saying they suck, or that I hate them.

Microsoft don’t suck.

I don’t hate them.

They make some of the best products in the world: Windows 10 is great, Visual Studio is fantastic, SQL Server is awesome, Azure is likewise brilliant, and the Xbox is a staggering achievement, especially considering where Microsoft started out in games.

They’ve had a bit of a miserable time in the mobile market but at least nowadays Windows 10 Mobile is a decent product and I’m sure, if they play the long game as they usually do, they stand a decent chance of building a good business around mobile, just like they’ve done in gaming, and just like they’re doing with cloud.

So what is this? Simply an expression of extreme frustration at the overwhelming crappiness of a specific process: that of buying a Visual Studio subscription.

I feel a bit bad about posting it actually. I started out building this document as a way to keep track of everything I’d tried and to make it easier to explain to Microsoft’s support team what was going on, but after 28 hours and no progress I feel like I need to make my voice heard perhaps a little more loudly. I also really need everything written down where anyone who’s involved can see and understand what I’m seeing as I work through this process.

By the way, it’s not that people at Microsoft haven’t tried to help (see below), but none of it has worked. I still don’t have access to subscriber downloads and product keys, which is blocking at least some of my work.

Some Background

I used to work for a company called Red Gate, up until the end of 2013 – a fact that will become relevant later. Back in those days I did some work on Node Tools for Visual Studio, which I carried on into 2014. As a result of this Microsoft kindly gave me an MSDN subscription, which only recently expired.

After a few months of doing other things I now need to start working with Visual Studio and Windows again for some projects, along with upcoming speaking engagements over the next three months. I therefore need to buy a Visual Studio subscription.

I don’t mind paying at all, and here’s why: a Visual Studio cloud subscription costs $539/yr. Given it includes SQL Server, plus Windows operating systems for dev and test environments, to me this represents excellent value for money.

So no big deal, or at least it shouldn’t be. You just visit visualstudio.com, pay for a subscription, and you can download the products you need and crack on.

Reality Bites: The Purchase Process

I’m not enough of an idiot to try buying Visual Studio direct from microsoft.com. Microsoft are a massive company with sprawling web properties, so I start out with a web search. It doesn’t matter whether you use Bing or Google, if you search for “buy visual studio” you pretty quickly end up at:

https://www.visualstudio.com/products/how-to-buy-vs

The page itself looks really promising, and there are links to a page that describes the benefits included with the standard and annual cloud subscriptions. From my perspective the important point is that I get access to all the Microsoft products I need to develop software, along with some free Azure hosting, which is great.

How to buy Visual Studio page.

Looking at this, it’s a pretty easy sell for me to go with the annual cloud subscription at $539/year because it includes everything I need at a lower TCO than the standard perpetual license.

So I click on the Buy now button for the annual subscription and end up at:

https://marketplace.visualstudio.com/items?itemName=ms.vs-professional-annual

Visual Studio buy annual cloud subscription page.

This is marginally irritating to me because I’ve already said I want to buy the thing so I either want it in my shopping cart, or to be taken somewhere where I can pay for it, not to another page full of information I already know/don’t care about. I don’t need to be marketed to at this point.

Still, there’s an obvious Buy button, so I click that one too.

Now I end up at https://login.microsoftonline.com/common/oauth2/authorize?client_id=499b84ac-1321-427f-aa17-267ca6975798&site_id=501454&response_mode=query&response_type=code+id_token&redirect_uri=https%3A%2F%2Fapp.vssps.visualstudio.com%2F_signedin&nonce=1a6ab4f3-53be-4b72-b9f1-839effd426a9&state=realm%3Dapp.vssps.visualstudio.com%26allow_passthrough%3DTrue%26ctpm%3DMarketPlace%26request_silent_aad_profile%3DTrue%26reply_to%3Dhttps%253A%252F%252Fmarketplace.visualstudio.com%252Fitems%253FitemName%253Dms.vs-professional-annual%2526workflowId%253Dmarketplace%2526wt.mc_id%253Do%25257emsft%25257emarketplace%25257einstall%2526install%253Dtrue%2526auth_redirect%253DTrue%26nonce%3D1a6ab4f3-53be-4b72-b9f1-839effd426a9&resource=https%3A%2F%2Fmanagement.core.windows.net%2F

(No, I’m not making that URL up.)

Visual Studio login page.

I briefly debate whether to create a new Microsoft account against my business email address (for which I’m the company owner) rather than reuse my existing account against my personal email address.

In the end I decide to create the new account, so I type in my business email address and, as expected, I get an error because I’ve never signed up for a Microsoft account using this address:

Expected error after using new email address so I can create a new Microsoft account.

Like I say, this is fine, because it gives me what I want: a way to create a new Microsoft account. Arguably this should have been an option on the previous screen but whatever – the main thing is I have a way to do it.

This is where things start to get ugly though, because clicking “get a new Microsoft account” just opens a new tab pointed at the original login screen:

Visual Studio login page.

This was in Chrome. In Firefox it’s even worse: it opens up two new tabs, both on this page. I tried it in private browsing, in case Microsoft have banjaxed their cookies, but still no luck.

Much as I might want to, I can’t use IE or Edge because I’m running OSX. I’d need to create a Windows 10 VM via Parallels. Unfortunately I can’t do that without the Visual Studio subscription I’m trying to buy (!), and I don’t have any old Windows 8.x VMs lying around that I could use for this purpose.

OK, well, no worries. I’ll just use my existing Microsoft account attached to my personal email address instead. I realise this is probably a better idea anyway because my website at www.bartread.com is hosted in Azure and is tied to this account. I can just buy a new subscription and change the email address for my account, right?

Right?

So I enter the email address for the account I already have, against which the expired MSDN subscription is attached, and after a few seconds this redirects me to live.com to log in:

https://login.live.com/oauth20_authorize.srf?response_type=code&client_id=51483342-085c-4d86-bf88-cf50c7252078&scope=openid+profile+email+offline_access&response_mode=form_post&redirect_uri=https%3a%2f%2flogin.microsoftonline.com%2fcommon%2ffederation%2foauth2&state=REDACTED&estsfed=1&uaid=b5e16781b9884bf4b19e7705f138abc0&pcexp=&username=REDACTED&popupui=

(Again, no, I’m not kidding with that URL.)

Live.com page to login to existing Microsoft account.

So I enter my password, select Keep me signed in, and click Sign in. It doesn’t work. I’m redirected and end up on a page at https://login.microsoftonline.com/common/federation/oauth2 with the following error:

Basic login error after login attempt on live.com.

There’s some more detailed information at the bottom of the page:

More detailed login error information.

I try this several times with the same result. Suspecting cookies I switch from Chrome to Firefox, go through the same process, up to the login page, and try to log in again.

This time it works. I suspect it would also have worked if I’d used a private browsing window in Chrome. Clearly something about trying to create a new account hosed the cookies (I use ‘cookies’ in the loosest sense of the word – they could be using local storage on the client for all I know; I haven’t looked).

Now I’m at:

https://marketplace.visualstudio.com/items?itemName=ms.vs-professional-annual&workflowId=marketplace&wt.mc_id=o~msft~marketplace~install&install=true

This is a page I’ve been on before, but now I have a popup that looks like it’s going to take me through the purchase process:

Popup to purchase Visual Studio subscription.

There are no other options in the dropdown so I hit Continue. This goes through a couple of redirects, lands back on the same page with a different URL (https://marketplace.visualstudio.com/items?itemName=ms.vs-professional-annual&install=true&subscriptionId=REDACTED), says it’s checking my subscription, then gives me an error and greys out the Continue button:

Error saying I am not an admin or co-admin of the subscription.

(Later it becomes clear to me that this error is factually correct, if unhelpful. The Windows Azure MSDN – Visual Studio Ultimate item refers to an old, expired, MSDN subscription that I had through Red Gate. At this point I don’t realise this though.)

In desperation I click Create new Azure subscription but it takes me to a page on windowsazure.com that doesn’t seem immediately relevant to what I’m doing because it’s talking about Pay-As-You-Go Azure subscriptions, whereas I want to buy a Visual Studio subscription, so I ignore it.

In passing I notice that Firefox has blocked Flash on the site and briefly wonder why the site uses Flash at all:

Firefox has blocked Flash on windowsazure.com.

Anyway, as I said, the page doesn’t seem relevant so I ignore it and move on. Later it will become apparent that this was a mistake and, I suppose, my bad.

Frustrated, I decide to log in to my Microsoft account from elsewhere, in a private browsing session, to see if I can figure out what’s going on.

I end up back at https://login.live.com/:

Here I am, back at live.com trying to log in again.

Eventually, via some tediously circuitous route that I’m unable to fully recollect, I end up at https://app.vsaex.visualstudio.com/me?mkt=en-US&campaign=o~msft~vscom~signin. Oddly I note that there appear to be two profiles attached to my account, one named “Microsoft Account”, and the other named “Red Gate Software Ltd”. (Switching between these profiles involves 3 redirects taking around 4 seconds in total.)

The "Microsoft account" profile attached to my account.

The "Red Gate Software Ltd" profile attached to my account.

I have access only to Visual Studio Dev Essentials benefits (i.e., all the free stuff), but this isn’t surprising since I left Red Gate 3 years ago and, as I’ve already said, the MSDN subscription that Microsoft kindly gave me expired recently.

There’s nothing here that shows me definitively what the problem might be but I am a bit suspicious about this Red Gate Software Ltd profile. I’m at Red Gate for the day so I pay a visit to their Information Systems team to see if they have any insight into what’s happening. Not surprisingly, I’m no longer in the list of subscribers they manage, so no dice there.

In the end I call the relevant Microsoft UK support line on 0800 051 7215 and select option 3 for Visual Studio subscriptions.

I talk to a guy there, whose name I didn’t get on this occasion, who, after a few minutes on hold whilst he investigates, tells me that I need to create a new Pay-As-You-Go Azure subscription, and then I’ll be able to attach my Visual Studio cloud subscription to that. There was a bit more to it than that because I didn’t immediately grasp what he was telling me to do or why, or where I had to go to do it. He gave me a number to call for Azure support, who would be able to help me.

Not really wanting to waste any more time on the phone, I did a bit of googling, and ended up back where I was after I’d clicked Create a new Azure subscription above, only now I understood that this was the right place to be.

I went through the process of creating a new subscription, supplying new payment information along the way (my old payment method had long since expired), which went fairly smoothly and I ended up in the Azure portal, as expected, with all my stuff (most of it entirely uninteresting):

Successfully created an Azure Pay-As-You-Go subscription and logged in to Azure portal.

OK. So about an hour and a half has passed since I first tried to buy Visual Studio. Bear in mind that buying goods on Amazon takes me about 10 seconds once I’ve decided what to buy.

Now I go back through the process to buy Visual Studio for the third time, and end up back on this page again, only now my new Pay-As-You-Go subscription is showing up:

Now I can buy Visual Studio with my Pay-As-You-Go subscription.

This time when I click Continue the subscription check succeeds and I can go ahead and select a quantity, confirm my acceptance of the terms and conditions, and click Confirm. I get a message indicating that my purchase has been successful. (I don’t have screenshots of this because I can’t get back to this point – more below.)

This all looks good but I can’t immediately figure out how to download my software. When I go to the benefits section of my account, all I have are the essentials, and both the Subscriptions and Downloads sections of my account are still blank:

Blank subscriptions section in my visualstudio.com profile after purchasing.

At this point I’m stumped so it’s time to call support again. I speak to a guy called Adam.

He tells me I should have received a welcome email. I check my email and discover that I haven’t received the expected email.

He also asks me to send him screenshots of my subscriptions area, and my manage.visualstudio.com area, which also shows nothing:

MSDN administration blank subscriptions area and error.

Adam’s a nice guy and he’s trying to be helpful. In fact everyone I’ve spoken to at Microsoft is nice and trying to be helpful; it’s just that I don’t think they’re really equipped to actually help. This is frustrating. By now about 2 hours have passed since I first tried to buy Visual Studio and I’m getting chippy. I don’t yell at or insult Adam, because it’s not his fault, but I do ask him to pass on some fairly blunt feedback about the purchase process to his manager.

Adam tells me there’s no record of my purchase and I should try again. However, now when I go back through the purchase process the button I need to click to make a purchase is greyed out and I can’t get any further:

Greyed out button that I'd previously used to make a purchase.

(Interestingly, and I hadn’t realized this at the time, this seems like the process I’d go through to add more subscriptions, since if I increase the quantity to 2 the Update button enables. Of course, that adds to the cost so I don’t go through with it because I don’t want to be billed for two subscriptions.)

He doesn’t know what to do about this so he needs to pass the details along to technical support and will get back to me. This is fine. I have a meeting, then I have lunch.

Roughly another two hours have passed and I’ve heard nothing back about my attempted purchase, and still no welcome email.

I call support again and end up talking to Adam again. I feel a bit sorry for him at this point because I’m really not happy by now and it’s really not his fault. He tells me that my enquiry will be dealt with. I’m sure that’s true but I ask him for a timescale because it’s now getting in the way of what I need to do in the afternoon. He says he’ll pursue it as soon as we’ve finished talking and get back to me.

About half an hour later I get an email from Adam (contact details redacted):

 

On 30 Aug 2016, at 15:03, [email protected] wrote:

Dear Bart,

Please be informed that your predicament needs further investigation.

I have forwarded your issue with greyed out button ‘update’ to our technical support.

Once there is a feedback in your case we will inform you accordingly.

My apologies for this inconvenience.

Thank you for cooperation and understanding in advance.

Kind Regards,

Adam

 

So basically I’m back where I was before with an enquiry forwarded to technical support but no progress.

For the hell of it I try buying again with a completely fresh private browsing session in Firefox but end up at the same roadblock.

I log in to Azure again and have a look at my subscriptions at https://account.windowsazure.com/Subscriptions because I’m wondering if the subscription I created on Azure even worked if my Visual Studio subscription purchase didn’t. Here’s what I see:

List of Azure subscriptions.

At the bottom you can see the old Windows Azure MSDN – Visual Studio Ultimate subscription, which is still active even though the associated MSDN subscription is long gone. (I’m not complaining, btw: it just strikes me as a bit odd.)

Then at the top there’s the new Pay-As-You-Go subscription I just created. If I click on the new subscription I see this:

Summary information for my Pay-As-You-Go subscription.

So, according to this I clearly have a Visual Studio annual cloud subscription, but there’s no record of it shown on visualstudio.com. It confirms my suspicion about the disabled Update button: it’s disabled because I’ve already bought one subscription and it’s not going to enable unless I want to add more subscriptions to what I already have.

Still, it doesn’t really help me because I can’t download Visual Studio 2015, SQL Server, or Windows 10. If I go to marketplace it shows I haven’t made any purchases, and if I go to downloads it’s all SDKs and command line tools for working with Azure. This stuff is cool and I’ll probably use some of it, but it’s not what I need right now.

Anyway, I go to the pub, go home, sleep, and by the time I’ve got up the next morning I’ve now heard directly from two different Microsoft employees kindly trying to help me out. I’m really grateful to both of these gentlemen – I think it’s great when people take the initiative to reach out and help:

John Montgomery and Buck Hodges of Microsoft see my plight on twitter and kindly reach out.

I go with Buck’s suggestion since, at this point, I’m blocked, I’m getting no progress elsewhere, and I’ll try anything to get some help. He’s asked me to open a free support case at https://www.visualstudio.com/en-us/support/cloud-services-assisted-support-vs.

It’s worth pointing out that when I tried this first thing in the morning I got a 404 from the page where I had to select the problem type, but I tried again later – after a long meeting – and it seems to be working properly.

Cloud Services Assisted Support page suggested by Buck.

This seems pretty straightforward so I just click on Basic Support which, after a few redirects, takes me to https://support.microsoft.com/en-us/getsupport?tenant=ClassicCommercial&locale=en-us&supportregion=en-us&pesid=15339&oaspworkflow=start_1.0.0.0&ccsid=636082386553595594.

Choosing a problem type to create an incident.

I select Account Administration from the dropdown. This shows me a category list:

Incident category list.

Since I’m having trouble buying a Visual Studio subscription I select License assignment and purchasing. Now a section appears that allows me to contact Microsoft:

Now we get to start filing a support request.

I click Start request. This takes me to https://support.microsoft.com/en-us/getsupport?tenant=ClassicCommercial&locale=en-us&supportregion=en-us&pesid=15339&oaspworkflow=start_1.0.0.0&ccsid=636082386952800863, and it’s here that I start to worry that something isn’t right.

Form to create an incident.

The problem is in the top right hand corner. Note the product name, which is Visual Studio Team Services Preview. This doesn’t seem right to me at all: all I’m interested in is getting access to the subscriber benefits for my Visual Studio cloud subscription, which to me sounds like a different product.

Worried I might be about to disappear down another irrelevant rabbit hole I start a new private browsing session and try to go through the same process but this time I log in to visualstudio.com first. It makes no difference and I end up back in the same place, only it looks like it signed me out somewhere along the way. I decide to fill in the form anyway.

The last page of this form includes a severity rating, which serves only to aggravate me since I am entirely blocked on two separate projects because I can’t access my subscriber benefits and download Visual Studio:

The rather presumptuous severity rating.

Sure, from Microsoft’s perspective this might be “Severity C (Minimum business impact)” but not from mine, and who are they to make the decision about how much of a problem this really is? Remember that more than 24 hours has now elapsed since my initial purchase attempt.

At least on this occasion my submission is successful:

I've successfully submitted a support request.

In the meantime I hear from somebody called Michal at eu.subservices.com to say that they are still waiting on a response from the technical support department:

 

Dear Bart,

Thank you for your e-mail.

Let me kindly inform you that we are still waiting for response from the responsible department.

We will contact you as soon as any new information is available.

Kind Regards,

Michal REDACTED

 

Yesterday I was also in touch with @MicrosoftUK on twitter, where a guy called Tom asked me to DM him to talk about MSDN admin subscriptions. Having done so I’ve now heard back and had a brief exchange with him via DM:

Twitter DM conversation with Tom on the MicrosoftUK account.

At this point you can probably tell I’m getting a bit frustrated again, although I’m trying to be polite.

In case Tom’s on to something here I go back to Red Gate again and verify that they are not administering any current or past MSDN or Visual Studio subscriptions attached to my Microsoft account. They verify that this is the case – I do not appear in their subscriber management list, can’t be found via search, nor by any other means.

Where does all of this leave me?

Well, after 30+ hours of wrangling there appears to be nothing I can do except wait. As much as people are clearly trying to help I don’t feel like anyone I’ve dealt with so far is in a position to resolve the problem.

As a result, I’ve still got no Visual Studio download, let alone a working copy installed, nor any other software downloads or license keys available to me. As I’ve said, this now means I’m blocked on a couple of projects, which is frustrating and shortly likely to become extremely problematic.

Those of you with long memories might remember Bill Gates’ rant from 2003 about his experience of trying to purchase Moviemaker and the Digital Plus pack: http://blog.seattlepi.com/microsoft/2008/06/24/full-text-an-epic-bill-gates-e-mail-rant/.

It’s not a terribly British turn of phrase but, as it echoes down the years, I can’t help but agree with him when he says, “I am quite disappointed at how Windows Usability has been going backwards and the program management groups don’t drive usability issues.”

Yeah, that pretty much sums it up. There isn’t one part of this process where something didn’t break, often in a way that blocked me completely or left me confused.

And, further on that point…

A few observations about my experience of using Microsoft’s various websites during this episode

Firstly, and most obviously, so much is broken. If you just go to read content you’ll probably be fine, but the moment you want to interact with anything at all, whether it works or not seems to be a complete crap-shoot. Compounding this, much of what is and isn’t broken seems confusing or non-obvious.

Microsoft sites use way too many redirects. I mean JUST WAY TOO MANY. So many actions result in three or four redirects back and forth between different properties. Not only does this slow everything down, but it mostly breaks the browser’s Back button. This means if you do need to go back it’s often easier, or your only option, to start the entire process again.

Microsoft websites are SLOW. Most pages take several seconds to load and, as already noted, the redirects really don’t help.

This isn’t universally true, but usability and user experience are, overall, pretty poor. The next action you take often isn’t obvious. You end up just experimentally clicking around to find what you need. Moreover there’s very little consistency across sites. The broken functionality only serves to weaken an already poor experience in these cases.

Navigation in general is haphazard. For example, depending on how you go about getting there, if you want to manage your Azure account you might end up in the old Azure management console (with a prompt encouraging you to use the new portal), or you might go direct to the new portal. I still can’t figure out what I did to end up going down each different route but I can assure you it happened.

It’s really hard to find stuff. You end up having to use Google or Bing. Even then you might well end up with incorrect or irrelevant information. For example, if you search for “visual studio subscriber downloads” you get pushed towards the old MSDN Subscriber Downloads page. This is fine if what you have is an old but still live MSDN subscription. However, if you have a Visual Studio subscription you need to download from a different page, which doesn’t appear in the search results.

Really horrible errors are commonplace. Much of this seems to be due to extremely undisciplined use of cookies (or related client-side storage technologies). Sometimes these errors will appear in the page all nicely formatted even if they are pretty uninformative. Sometimes you’ll just get a raw response like this back from the server:

{"$id":"1","innerException":null,"message":"TF400898: An Internal Error Occurred. Activity Id: bc208830-44cd-435b-a79f-7e5e8db87730.","typeName":"System.FormatException, mscorlib","typeKey":"FormatException","errorCode":0,"eventId":0}

Indeed. A blob of incomprehensible JSON in my browser window. Classy.

In case you’re wondering, this came from https://manage.visualstudio.com/_apis/Home/DetermineRedirectDestination?redirectSource=Commerce&azureSubscriptionId=6d4a66cf-2c81-473b-a0a2-3691a4797515&galleryId=ms.vs-professional-annual&destination=Subscriptions

I feel like I’m being exposed to a sort of distorted external manifestation of Microsoft’s internal processes and architecture when I move between these sites and interact with their support teams. Whilst this is some mixture of interesting and infuriating, mostly the latter at this point, on a very basic level it’s not what I care about as a user: I just want to be able to buy, download, and use the software, and that’s it.

It’s not like everything is universally bad though. There are in fact many good individual bits:

  • Pages on the visualstudio.com site seem on the whole simple, well-designed, and easy to use. Maybe there’s the odd extra click in there that you don’t need, but that’s nitpicking, and it feels like, if the purchase process actually worked properly, it would be fairly simple to use.
  • There’s a lot of information in the new Azure portal but, on the whole, it’s well laid out and fairly easy to find your way around. I wasn’t even looking particularly hard and managed to find one setting that suddenly made my website fly (which I think was new compared with the old management portal). Honestly, even the old management portal was pretty good. I certainly never had any issues figuring out what I needed to do.

The problem is that, as a whole, the experience just kind of sucks because there are too many overcomplicated and apparently poorly tested dependencies between the different sites. The plumbing that joins everything together doesn’t seem to work properly. There’s way too much that’s brittle, or that simply doesn’t work under any circumstance (that I can realistically achieve).

Like I say, this isn’t a rant against Microsoft in general, just an account of the frustrations of trying to do one particular thing: buy and download some software. The fact that this is so difficult, well… from one of the world’s largest and most successful software companies it’s all rather unsatisfactory.

Maybe I’m doing something wrong, but it’s not clear what. Maybe there’s something wrong with my account. Again, it’s not clear what that might be.

I’ll update when I have more.

A final postscript

I just received another update from the support team that isn’t saying anything useful, other than somebody’s looking at the problem. Notably, no timescales:

Dear Bart,

Thank you for your e-mail in regards to subscription access.

Please be informed that I placed your case in our internal department hands.

As soon as they provide me with the feedback we will inform you accordingly.

I hope the above information is of assistance.

Should you have any further issues, please do not hesitate to contact us.

Regards,          

Darius REDACTED

Update: A final postscript and a happy ending

I’m overjoyed to say that this tale, like all the best stories, has a happy ending. A group of fantastic people from Microsoft – notably John Montgomery, Buck Hodges, Marc Paine, and Andrew Brenner – kindly stepped in and helped me out.

They were able to provide a workaround to activate my subscription and are discussing solutions for the problems I encountered with a view to fixing them so nobody else runs into these problems.

Read the full story here.

Current talk list 2016: web and database performance

It’s that time of year when, for me, talk proposals get submitted. I also tend to take it as an opportunity to refresh and rework talks.

This year I’ve submitted talks for DDD, DDD North, and NDC London (this one’s a bit of a long shot), and am keeping my eye out for other opportunities. I’ll also be giving talks at the Derbyshire .NET User Group, and DDD Nights in Cambridge in the autumn.

Voting for both DDD and DDD North is now open so, if you’re interested in any of the talks I’ve listed below, please do vote for them at the following links:

Here are my talks. If you’d like me to give any of them at a user group, meetup, or conference you run, please do get in touch.

Talk Title: How to speed up .NET and SQL Server web apps

Performance is a critical aspect of modern web applications. Recent developments in hardware, software, infrastructure, bandwidth, and connectivity have raised expectations about how the web should perform.

Increasingly this attitude is applied to internal line of business apps, and niche sites, as much as to large public-facing sites. Google even bases your search ranking in part on how well your site performs. Being slow is no longer an option.

Unfortunately, problems can occur at all layers and in all components of an application: database, back-end code, systems integrations, local and third party services, infrastructure, and even – increasingly – the client.

Complex apps often have problems in multiple areas. How do you go about tracking them down and fixing them? Where do you begin?

The answer is you deploy the right tools and techniques. The good news is that generally you can do this without changing your development process. Using a number of case studies I’m going to show you how to track down and fix performance issues. We’ll talk about the tools I used to find them, and the fixes that resulted.

That being said, prevention is better than cure, so I’ll also talk about how you can go about catching problems before they make it to production, and monitor to get earlier notification of trouble brewing.

By the end you should have a plethora of tools and techniques at your disposal that you can use in any performance analysis situation that might confront you.

Talk Title: Premature promotion produces poor performance: memory management in the CLR and JavaScript runtimes

The CLR, JVM, and well-known JavaScript runtimes provide automatic memory management with garbage collection. Developers are encouraged to write their code and forget about memory management entirely. But whilst ignorance is bliss, it can also lead to a host of problems further down the line.

With web applications becoming ever more interactive, and the meteoric rise in popularity of mobile browsers, the kind of performance and resource usage issues that once only concerned back-end developers have now become common currency on the client as well.

In this session we’ll look at how these runtimes manage memory and how you can get the best out of them. We’ll discuss the “classic” blunders that can trip you up, and how you can avoid them. We’ll also look at the tools that can help you if and when you do run into trouble, both on the client and the server.

You should come away from this session with a good understanding of managed memory, particularly as it relates to the CLR and JavaScript, and how you can write code that works with the runtimes rather than against them.

Talk Title: Optimizing client-side performance in interactive web applications

Web applications are becoming increasingly interactive. As a result, more code is shifting to the client, and JavaScript performance has become a key factor for many web applications, both on desktop and mobile. Just look at this still ongoing discussion kicked off by Jeff Atwood’s “The State of JavaScript on Android in 2015 is… poor” post: https://meta.discourse.org/t/the-state-of-javascript-on-android-in-2015-is-poor/33889/240.

Devices nowadays offer a wide variety of form factors and capabilities. On top of this, connectivity – whilst widely available across many markets – varies considerably in quality and speed. This presents a huge challenge to anyone who wants to offer a great user experience across the board, along with a need to carefully consider what actually constitutes “the board”.

In this session I’m going to show you how to optimize the client experience. We’ll take an in depth look at Chrome Dev Tools, and how the suite of debugging, data collection and diagnostic tools it provides can help you diagnose and fix performance issues on the desktop and Android mobile devices. We’ll also take a look at using Safari to analyse and debug web applications running on iOS.

Throughout I’ll use examples from https://arcade.ly to illustrate. Arcade.ly is an HTML5, JavaScript, and CSS games site. Currently it hosts a version of Star Castle, called Star Citadel, but I’m also working on versions of Asteroids (Space Rawks!) and Space Invaders (yet to find an even close to decent name). It supports both desktop and mobile play. Whilst this site hosts games, the topics I cover will be relevant for any web app featuring a high level of interactivity on the client.

Talk Title: Complex objects and microORMs: an introduction to the Dapper.SimpleLoad and Dapper.SimpleSave extensions for StackExchange’s Dapper microORM

Dapper (https://github.com/StackExchange/dapper-dot-net) is a popular microORM for the .NET framework that provides a simple way to map database rows to objects. It’s a great alternative when speed is of the essence, and when you just don’t need the functionality offered by EF.

But what happens when you want to do something a bit more complicated? What happens if you want to join across multiple tables into a hierarchy composed of different types of object? Well, then you can use Dapper’s multi-mapping functionality… but that can quickly turn into an awful lot of code to maintain, especially if you make heavy use of Dapper.

Step in Dapper.SimpleLoad (https://github.com/Paymentsense/Dapper.SimpleLoad), which handles the multi-mapping code for you and, if you want it to, the SQL generation as well.

So far so good, but what happens when you want to save your objects back to the database?

With Dapper it’s pretty easy to write an INSERT, UPDATE, or DELETE statement and pass in your object as the parameter source. But if you’ve got a complex object this, again, can quickly turn into a lot of code.
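
To make that concrete, here’s a minimal sketch of the kind of parameterised statement you’d hand to Dapper (the Customers table and its columns are invented for illustration). Dapper binds each @-parameter to the property of the same name on the object you pass in as the parameter source:

-- Dapper substitutes @Name, @Email, and @CustomerId from the matching
-- properties of the object supplied alongside this statement.
UPDATE Customers
SET Name = @Name,
    Email = @Email
WHERE CustomerId = @CustomerId;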

Step in Dapper.SimpleSave (https://github.com/Paymentsense/Dapper.SimpleSave), which you can use to save changes to complex objects without the need to worry about saving each object individually. And, again, you don’t need to write any SQL.

I’ll give you a good overview of both Dapper.SimpleLoad and Dapper.SimpleSave, with a liberal serving of examples. I’ll also explain their benefits and drawbacks, and when you might consider using them in preference to more heavyweight options, such as EF.

Aide Memoire: The “In Search of Stupidity” reading list

I’ve recently been reading Rick Chapman’s excellent In Search of Stupidity: Over 20 Years of High Tech Marketing Disasters on Kindle.

Towards the end of the book there’s a chapter on the avoidance of stupidity. In it he makes the point that an intense study of industry history by corporate management is a necessary, but not sufficient, pre-requisite for successful execution of business programmes. Or, as the relevant section is entitled, “You Shall Study the Past, and the Past Will Make You Less Stupid.”

Now, bearing in mind that the most recent version of In Search of Stupidity as of this writing (mid-2016) was published in 2008, some of the recommended reading may be less relevant than it was 8 years ago. Still, it’s history, and the past doesn’t change, and I certainly felt like all of the books in Rick’s list would make worthwhile reading.

I’ve therefore been scouring Amazon for them, and have managed to get hold of all of them, one way or another. For my own reference as much as anything else, I wanted an easy-to-find copy of the list, along with an indication of which format each book is available in and which I’ve bought.

  • Where available, I’ve bought the Kindle edition, regardless of cost – I work away from home a lot so it’s both more convenient and probably better for the environment,
  • Some of the books are (or appear to be) out of print – in these cases I’ve bought them used; most are available at a reasonable price even if the sensibly priced offers aren’t the first in Amazon’s search results,
  • Of the remainder there are good offers on used copies – again, buying used is probably better for the environment, but you also have to consider the issue of author/publisher reimbursement. Regardless, if you’re a student on a budget or whatever, I’d recommend used.

Anyway, here’s the reading list and, by the way, I don’t make any money off these recommendations (in case it would bother you if I did).

Must Reads

Title and Author | In Print | Kindle
Apple: The Inside Story of Intrigue, Egomania, and Business Blunders by Jim Carlton | No | No
Big Blues: The Unmaking of IBM by Paul Carroll | No | No
The Dream Machine: J. C. R. Licklider and the Revolution That Made Computing Personal by M. Mitchell Waldrop | Yes | Yes
Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews | Yes | Yes
Hackers: Heroes of the Computer Revolution – 25th Anniversary Edition by Steven Levy | Yes | Yes
Joel on Software* by Joel Spolsky | Yes | No
Marketing High Technology: An Insider’s View by William H. Davidow | Yes | Yes
The Reckoning by David Halberstam | Uncertain | Yes
Selling Air by Dan Herchenroether | No | No

*There is also a less well received follow-up.
 

Recommended Reading

Title and Author | In Print | Kindle
Beer Blast: The Inside Story of the Brewing Industry’s Bizarre Battles for Your Money by Philip Van Munching | Uncertain | No
On the Firing Line: My 500 Days at Apple by Gil Amelio & William L. Simon | No | No
Open Source: The Unauthorized White Papers (Professional Mindware) by Donald K. Rosenberg | No** | No
Odyssey: Pepsi to Apple: A Journey of Adventure, Ideas, and the Future by John Sculley with John A. Byrne | No*** | No
The Product Marketing Handbook for Software by Merrill R. (Rick) Chapman | No | No
The Second Coming of Steve Jobs by Alan Deutschman | Yes | Yes
iCon Steve Jobs: The Greatest Second Act in the History of Business by Jeffrey S. Young & William L. Simon | No | No
Once Upon a Time in Computerland: The Amazing, Billion-Dollar Tale of Bill Millard by Jonathan Littman | No | No

**And used copies are fudging expensive, suggesting either a very limited print run, or this really is worth reading.

***And used copies are not expensive at all which, given the 1987 publication date, suggests this may be an interesting cautionary tale on how you shouldn’t write an autobiographical text about how awesome you are until well after the outcomes are known. Of course, I could be wrong, but either way I can’t wait to get my grubby mitts on this.

By the way, if I’ve marked in print availability as uncertain it means that, whilst you might be able to get hold of a new copy of that book, my suspicion is that it’s probably new old stock and that the book may nevertheless be out of print.

Hope you find this useful. If I get time I may post some quick reviews.

Discovered a new tool for working with MongoDB: MongoChef from 3T Software Labs

I thought this was worth sharing. A former colleague of mine from Red Gate put me in touch with an awesome company called 3T Software Labs, who have a suite of tools for working with MongoDB, including a great shell for MongoDB called MongoChef. I was fortunate enough to be able to spend some time this afternoon running through a usability test on MongoChef with one of 3T’s co-founders, Thomas Zahn.

Up until now I’ve used both RoboMongo and MongoVUE for this kind of work, both of which have their strengths and weaknesses, and inevitably I’ve ended up using them both for different purposes.

MongoChef seems to offer the best of both whilst also being more capable. It’s being rapidly developed on a very short release cycle, so the improvements are coming thick and fast, and I suspect it’ll replace them both from now on.

Anyway, I really just wanted to run through some of the functionality very quickly…

First off, download and install MongoChef. If you want to use the integrated shell, make sure you’ve also got the MongoDB binaries installed.

Connecting to a MongoDB instance is dead simple. Just click the big green Connect button on the welcome screen, or use the corresponding toolbar button:

MongoChef welcome screen Connect button

This opens up the Connection Manager:

MongoChef Connection Manager dialog

Now click New Connection, and give your connection a name. The really nice thing is that it’s easy to populate this dialog from a URI, for example, one that you use in your web.config, using the From URL button:

Creating a new MongoDB connection in MongoChef

Just paste in your URI, click OK, and you’re good to go:

Populating the New Connection dialog from a URI

Likewise, if you need a URI as a connection string, you can grab that using the To URI button:

Exporting a URI from connection settings in MongoChef's New Connection dialog

Just double-click on any connection in the Connection Manager to connect to that instance.

Active connections are shown in the treeview on the left of the main window. You can drill in to view databases and collections:

Treeview showing active MongoDB connections

You can double-click on a collection to open it. Pagination is customisable and you can drill into documents:

MongoChef's collection view

There are also three different views for documents: Tree View, Table View, and JSON View.

Different view styles for MongoChef's collection view

My favourites are definitely Tree View and JSON View. Table View works particularly well for very flat documents.

Tree View works best with the Query Builder, which is a handy way to quickly throw together queries without having to scratch your head over the syntax. It’s particularly helpful if, like me, you’re from a SQL background, so tend to know what data you want back (and how you’d like it structured), but struggle a little to express that in JavaScript rather than SQL.

To use the Query Builder just hit the corresponding button in the top-right of the collection view:

Click the button to open the Query Builder

To build your query, just:

  • drag the fields you want to query against into the Query section, and set the query criteria for each field,
  • drag the fields you’d like to sort by into the Sort section,
  • and drag the fields you’d like returned into the Projection* section.

*A projection defines the subset or shape of data you’d like to receive back from MongoDB. It’s analogous to the column list after SELECT in a SQL query.
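
To make the analogy concrete, here’s a hypothetical example (the people collection/table and its fields are invented): the MongoDB query in the comment asks for roughly the same shape of result as the SQL beneath it.

-- MongoDB shell query with a projection (shown here as a comment):
--   db.people.find({ country: "UK" }, { name: 1, email: 1, _id: 0 })
-- Rough SQL equivalent:
SELECT name, email
FROM people
WHERE country = 'UK';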

Then, to run your query, just click the Run button. Here’s an example:

Using the Query Builder

The one thing missing here is a script view that would display the JavaScript for the query, which would be a handy teaching tool, but Thomas assures me this is coming – they’re working on it right now in fact.

Editing documents is just as easy as querying. Just double-click on the value you want to edit and type in something else. This even works with projections:

Editing a document in the collection view

If you prefer to get a bit lower level and want to bash out queries directly in JavaScript you can access the integrated shell via the Shell toolbar button:

Use the Shell toolbar button to open the integrated shell

Enter your query in the pane at the bottom of the Shell, which includes intellisense/autocomplete to help you – absolutely invaluable. If it takes too long to appear, just use the standard CTRL+SPACE shortcut to force it:

Editing a query in MongoChef's integrated shell with intellisense/autocomplete

You can hit ENTER to execute your query. Results appear in the pane at the top of the Shell.

Query results in the integrated shell

If this isn’t the behaviour you want, just uncheck Enter Executes.

If you’ve ever installed MongoDB from scratch you’ll know that setting up users can be a royal pain in the backside. Fortunately MongoChef provides help with user management too.

Just select the database you want and click Users on the toolbar:

Accessing the user management view in MongoChef

You can drill into each user to see which roles they have. Click Add to add a new user:

MongoChef's user management view

Assign a username and password to your new user. If you want them to be able to do anything, you’ll also need to assign one or more roles to them. Click Grant Roles to do this:

Adding a user in MongoChef

To assign a single role, just double-click on it. To assign multiple roles, use CTRL+Click to multi-select, then click Grant:

Assigning roles to a user

Back in the Add User dialog box just click the Add User button, and you’re done.

This is by no means a comprehensive look at all the features available in MongoChef, but hopefully it’s given you a flavour of what the tool can do. I’d strongly recommend you try it yourself though, so here’s that download link again.

MongoChef is available for Windows, OSX, and Linux, and is compatible with all recent 2.x versions of MongoDB, along with the latest 3.0 release. It’s free for personal, non-commercial use, and a snip at US$69 + VAT for a commercial license.

Enjoy!

How to quickly convert all NTEXT columns to NVARCHAR(MAX) in a SQL Server database

I was at a client’s earlier today and the question came up of how to convert all NTEXT columns to NVARCHAR(MAX) in their SQL Server databases, and it turns out they have rather a lot of them.

There are a couple of obvious advantages to this conversion:

  1. Online index rebuilds with SQL Server Enterprise Edition become a possibility (see the sketch after this list),
  2. Values are stored in row by default, potentially yielding performance gains.
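
As a sketch of the first point: once a table no longer has any NTEXT columns you can rebuild its indexes online, assuming Enterprise Edition (the table name here is hypothetical):

-- Online rebuilds aren't available while the table still uses the legacy
-- NTEXT/TEXT/IMAGE types; after conversion this becomes an option.
ALTER INDEX ALL ON dbo.Articles REBUILD WITH (ONLINE = ON);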

My response to this was, “Yeah, sure: I can write a script to do that.” Two seconds after I said this I thought, “Hmm, I bet 30 seconds of Googling will provide a script because this must have come up a zillion times before.”

Sure enough, there are some pretty reasonable hits. For example, http://stackoverflow.com/questions/18789810/how-can-i-easily-convert-all-ntext-fields-to-nvarcharmax-in-sql-query.

Buuuuuuuuuuuut as ever, you’d be naive indeed to think that you can just copy and paste code from StackOverflow and have it work first time. Moreover, even with modification, you need to go over it with a fine-toothed comb to make sure you’ve squashed every last bug.

For example, this boned me earlier because I wasn’t paying proper attention:

CASE WHEN is_nullable = 1 THEN 'NOT' ELSE '' END

You can see the logic is reversed from what it should be.
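
For reference, the corrected expression, as used in my script below (with a variable in place of the column), flips the branches:

CASE WHEN @isNullable = 1 THEN '' ELSE 'NOT' END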

So, anyway, I ended up concocting my own, of which you can find the latest version at https://github.com/bartread/sqlscripts/blob/master/scripts/AlterAllNtextColumnsInDbToNvarcharMax.sql.

There are essentially three phases:

  1. Change the datatype of the NTEXT columns to NVARCHAR(MAX) using ALTER TABLE statements
  2. Pull any values small enough to fit in row out of LOB storage and back into rows using UPDATE statements. Thanks to John Conwell for pointing out the necessity of doing that to realise any performance increase with existing data.
  3. Refresh the metadata for any views using sp_refreshview – this makes sure that, for example, columns listed for them in sys.columns have the correct data type.

Phases 1 and 2 are actually done together in a loop, handling each NTEXT column in turn, whilst phase 3 is done in a separate loop at the end. I just refresh the metadata for all the views because, although I could work out which views depend on the affected tables, it's simpler to do them all and doesn't take that long. Of course, if you have thousands of views and a relatively small number of NTEXT columns you might want to rethink this. In my case the numbers of tables, views, and NTEXT columns are all of the same order of magnitude, so a simple script is fine.
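
Incidentally, if you want to sanity-check phase 2 afterwards – i.e. see how a table's pages break down between in-row and LOB storage – something along these lines should do it (dbo.YourTable is just a placeholder; substitute a table of your own):

SELECT  index_id, alloc_unit_type_desc, page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID(N'dbo.YourTable'), NULL, NULL, 'DETAILED');

Values too large to fit in row will always stay in LOB storage, but for everything else you'd expect the IN_ROW_DATA page count to dominate once phase 2 has run.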

For those of you who don’t have git installed, or aren’t comfortable with DVCS, here’s the full script:

USE _YOUR_DATABASE_NAME_
GO

SET NOCOUNT ON;

-- Set this to 0 to actually run commands, 1 to only print them.
DECLARE @printCommandsOnly BIT = 1;

-- Migrate columns NTEXT -> NVARCHAR(MAX)

DECLARE @object_id INT,
        @columnName SYSNAME,
        @isNullable BIT;

DECLARE @command NVARCHAR(MAX);

DECLARE @ntextColumnInfo TABLE (
    object_id INT,
    ColumnName SYSNAME,
    IsNullable BIT
);

INSERT INTO @ntextColumnInfo ( object_id, ColumnName, IsNullable )
    SELECT  c.object_id, c.name, c.is_nullable
    FROM    sys.columns AS c
    INNER JOIN sys.objects AS o
        ON c.object_id = o.object_id
    WHERE   o.type = 'U' AND c.system_type_id = 99;

DECLARE col_cursor CURSOR FAST_FORWARD FOR
    SELECT object_id, ColumnName, IsNullable FROM @ntextColumnInfo;

OPEN col_cursor;
FETCH NEXT FROM col_cursor INTO @object_id, @columnName, @isNullable;

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @command =
        'ALTER TABLE '
        + QUOTENAME(OBJECT_SCHEMA_NAME(@object_id))
        + '.' + QUOTENAME(OBJECT_NAME(@object_id))
        + ' ALTER COLUMN '
        + QUOTENAME(@columnName)
        + ' NVARCHAR(MAX) '
        + CASE
              WHEN @isNullable = 1 THEN ''
              ELSE 'NOT'
          END
        + ' NULL;';

    PRINT @command;

    IF @printCommandsOnly = 0
    BEGIN
        EXECUTE sp_executesql @command;
    END

    SELECT @command =
        'UPDATE '
        + QUOTENAME(OBJECT_SCHEMA_NAME(@object_id))
        + '.' + QUOTENAME(OBJECT_NAME(@object_id))
        + ' SET '
        + QUOTENAME(@columnName)
        + ' = '
        + QUOTENAME(@columnName)
        + ';';

    PRINT @command;

    IF @printCommandsOnly = 0
    BEGIN
        EXECUTE sp_executesql @command;
    END

    FETCH NEXT FROM col_cursor INTO @object_id, @columnName, @isNullable;
END

CLOSE col_cursor;
DEALLOCATE col_cursor;

-- Now refresh the view metadata for all the views in the database
-- (We may not need to do them all but it won't hurt.)

DECLARE @viewObjectIds TABLE (
    object_id INT
);

INSERT INTO @viewObjectIds
    SELECT o.object_id
    FROM sys.objects AS o
    WHERE o.type = 'V';

DECLARE view_cursor CURSOR FAST_FORWARD FOR
    SELECT object_id FROM @viewObjectIds;

OPEN view_cursor;
FETCH NEXT FROM view_cursor INTO @object_id;

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @command =
        'EXECUTE sp_refreshview '''
        + QUOTENAME(OBJECT_SCHEMA_NAME(@object_id)) + '.' + QUOTENAME(OBJECT_NAME(@object_id))
        + ''';';

    PRINT @command;

    IF @printCommandsOnly = 0
    BEGIN
        EXECUTE sp_executesql @command;
    END

    FETCH NEXT FROM view_cursor INTO @object_id;
END

CLOSE view_cursor;
DEALLOCATE view_cursor;
GO

NOTE: this won’t work where views are created WITH SCHEMABINDING. It will fail at ALTER TABLE for any table upon which schemabound views depend. Instead, to make it work, you have to DROP the views, then do the ALTERs and UPDATEs, then re-CREATE the views. Bit of a PITA but there’s no way around it unfortunately. I didn’t need to worry about this because my client doesn’t use schemabound views.
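
If you're not sure whether schemabound views are going to be a problem for you, a quick check along these lines – sys.sql_modules flags schema binding – should tell you before the script falls over halfway through:

SELECT  OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
        OBJECT_NAME(m.object_id) AS ViewName
FROM    sys.sql_modules AS m
INNER JOIN sys.objects AS o
    ON  m.object_id = o.object_id
WHERE   o.type = 'V' AND m.is_schema_bound = 1;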

Of course it goes without saying that you should back up your database before you run any script like this!
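
If you want a belt-and-braces starting point, something like the following will take a copy-only backup that doesn't interfere with your normal backup chain (the path and file name are obviously placeholders):

BACKUP DATABASE _YOUR_DATABASE_NAME_
TO DISK = N'D:\Backups\YourDatabase_PreNtextMigration.bak'
WITH COPY_ONLY, INIT, CHECKSUM;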

To use it you just need to substitute the name of your database where it says _YOUR_DATABASE_NAME_ at the top of the script.

Also, as with many automation tasks in SQL Server, dynamic SQL is a necessity. It's a bit of a pain in the backside, so a @printCommandsOnly mode is advisable for debugging purposes, and I've switched this on by default. You can copy and paste the generated commands into a query window, parse them, or even execute them to ensure they work as expected.

Once you’re happy this script does what you want set the value of @printCommandsOnly to 0 and rerun it to actually execute the commands it generates.

You might wonder why I’ve written this imperatively rather than in set-based fashion. Well, it’s not just because I’m a programmer rather than a DBA. In fact the original version, which you can still see if you look at the file’s history, was set-based. It looked pretty much like this:

USE _YOUR_DATABASE_NAME_
GO

-- Migrate columns NTEXT -> NVARCHAR(MAX)

DECLARE @alterColumns NVARCHAR(MAX) = '';
SELECT  @alterColumns = @alterColumns
    + 'ALTER TABLE '
    + QUOTENAME(OBJECT_SCHEMA_NAME(c.object_id)) + '.' + QUOTENAME(OBJECT_NAME(c.object_id))
    + ' ALTER COLUMN '
    + QUOTENAME(c.Name)
    + ' NVARCHAR(MAX) '
    + CASE WHEN c.is_nullable = 1 THEN '' ELSE 'NOT' END + ' NULL;'
    + CHAR(13)
    + 'UPDATE '
    + QUOTENAME(OBJECT_SCHEMA_NAME(c.object_id)) + '.' + QUOTENAME(OBJECT_NAME(c.object_id))
    + ' SET '
    + QUOTENAME(c.Name)
    + ' = '
    + QUOTENAME(c.Name)
    + ';' + CHAR(13) + CHAR(13)
FROM    sys.columns AS c
INNER JOIN sys.objects AS o
    ON c.object_id = o.object_id
WHERE   o.type = 'U' AND c.system_type_id = 99; --NTEXT

PRINT @alterColumns;

EXECUTE sp_executesql @alterColumns;
GO

-- Update VIEW metadata

DECLARE @updateViews NVARCHAR(MAX) = '';
SELECT @updateViews = @updateViews
    + 'EXECUTE sp_refreshview '''
    + QUOTENAME(OBJECT_SCHEMA_NAME(o.object_id)) + '.' + QUOTENAME(OBJECT_NAME(o.object_id))
    + ''';' + CHAR(13)
FROM sys.objects AS o
WHERE o.type = 'V';

PRINT @updateViews;

EXECUTE sp_executesql @updateViews;
GO

It’s certainly a lot less code, which is nice. And it doesn’t use CURSORs, which is also nice.

However, it does have problems:

  • The PRINT statement in T-SQL truncates output if it goes beyond a certain length. I don’t know exactly what this length is off the top of my head, but my generated scripts were more than long enough to reach it (there’s a quick demonstration just after this list).
  • The result of this is you can’t copy and paste the complete generated script into another query window, so it might make debugging a bit trickier in some instances.
  • The really problematic thing is that, when something goes wrong, you can’t necessarily relate it back to the exact command that failed, whereas the imperative version makes this easy since each generated command is executed individually.
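
If you want to see that truncation for yourself, here’s a minimal demonstration – as far as I’m aware the cap is 8,000 bytes, which works out at 4,000 characters for NVARCHAR:

DECLARE @s NVARCHAR(MAX) = REPLICATE(CAST(N'x' AS NVARCHAR(MAX)), 5000);
SELECT LEN(@s) AS ActualLength; -- 5000
PRINT @s;                       -- printed output stops at 4,000 characters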

So I threw away the ostensibly "cleverer" and more "elegant" set-based version in favour of the longer, clunkier, but easier to debug, imperative version.

I hope you find it useful and please feel free to submit patches, pull requests, bug reports, feature requests via the main project GitHub page at https://github.com/bartread/sqlscripts. (Turns out I’m building up a small library of handy scripts so I’ll be pushing a few more items into this repo in due course.)

The Spike Board: A Quick Agile Solution for Managing and Visualising Tech Spikes and Bug Bashes

I’ve recently been fortunate enough to start working with comparethemarket.com as one of my clients, specifically with their home team, who deal with their home insurance site. These guys are fully awesome, and I’m having a really good time.

What I want to do with this post is share how we reorganised one of our agile boards, in a way that you might find helpful.

CTM make heavy use of agile techniques, to the point where bugs are often not filed in any kind of tracker (we use Mingle, when we use anything at all), but instead appear as cards on the appropriate board. The past couple of days we’ve been doing a bug bash in the run up to a release of some new functionality, and we’d written all the areas that needed testing as a big list on one of the boards.

People would pick areas for testing, write their initials by them, and mark them as completed either by ticking or crossing them off the list. Bugs were written in a separate list at the bottom of the board in the order they were encountered. After some discussion there were also some items – both test areas and bugs – that we decided we didn’t care about, or postponed until later.

As you can imagine, the board was starting to look pretty messy by this point. Not a problem for those of us who’d been around the whole time, but a couple of the team had been out and, at the standup this morning, it became clear that our board wasn’t really doing a great job of communicating:

  • what we’d done
  • what was left to do
  • what we’d decided not to do
  • what each of the items (test areas and bugs) actually meant

Lightweight is good but we’d probably gone a bit too far in that direction and, in fact, there was quite a bit of confusion.

The net result is we had to go through each item one by one. It didn’t take absolutely ages, but it was somewhat time-consuming.

So… we decided to rework the board to make it clearer to anyone and everyone what was happening and where we were in the process.

Here’s what we came up with for the “work item” board, where a work item is either an area for test, or a bug.

The proposed work item board

The basic idea is that work items are written on cards and start in the top left under proposed. They then migrate either to rejected or done on the bottom right. Obviously cards can skip over stages – so they can move directly from proposed to accepted, for example.

Note that rejected doesn’t mean rejected for all time: it just means we’ve chosen not to do something in this tech spike.

Bug prioritisation was another issue so we came up with this, although we haven’t yet needed it. In future though, when bugs are found we can write them on cards and stick them on another board that looks like this:

The proposed bug board

The axes are severity on the left (high or low) and incidence (alternatively hit probability) at the bottom. Priorities are shown in red – we just pick off the bugs in priority order. It’s rough and ready but should make it easy to prioritise.

You can obviously choose different axes that are more relevant for you if you like. Likewise, if you have different states for your work items than we use, or you have more or fewer of them, go ahead and use those instead.

Bugs that we’re fixing then become work items (on different coloured cards) that go back on the work item board, probably going straight into accepted. We probably lift them directly from the bugs board and place them on the work item board – thus the bugs board only contains live bugs we’re not actively working on.

Work item cards look like this:

The proposed work item card layout

Everything apart from the title and name(s) is optional, to keep it as lightweight as possible. We could just use avatars instead of names – we all have little stickies with our chosen avatar on, which we add to any cards we’re working on. For things that are done it might be handy to use names, so we don’t need to create loads of avatar stickies for everyone.

The cards on the bug board would be similar, but just with a title and description (optional). Potentially we could transfer them from the bug board to the work item board when we start working on them so that (i) we’re not duplicating cards, and (ii) it’s easy to see how many outstanding bugs there are.

Here’s what our work item board now looks like:

The reorganised work item board

(Note that we decided not to add everything we’d already done to the new board, which comprised around two thirds of the total work items, but we took a photo as a backup so we have a record of the areas we need to test for future releases, and we’ll use the new board layout in future instead of the vanilla list.)

As you can see, it’s easy to understand:

  • the state of work items
  • how much WIP we have
  • how much is done
  • how much is left to do

Hopefully some of you will find this helpful as well.

Is there more to life than increasing its speed? Web performance: how fast does your website need to be?

How fast does your website need to be?

Web performance is a hot button topic so that question is pretty much guaranteed to start an argument. Perhaps this is more because of the answer – which is, “it depends” – than the question. But it’s fair to say that if much of your business either arrives, or is transacted, online then the answer is pretty darned fast. (It’s also fair to say if the speed of your website is the only differentiator you have from your competitors, you may have bigger problems.)

In this post I want to cover the following:

  • The relationship between web performance and
    • Key business metrics such as retention, conversion rates, and revenue
    • Mobile computing
    • SEO
  • Ideal benchmark web performance
  • How to improve web performance

That’s obviously quite a lot of ground to cover, so let’s get cracking.

Web Performance & Key Business Metrics

It’s a couple of years old now but Tammy Everts’ excellent post on the web “performance poverty line” still rings true. You can find a more recent reworking here, although the graphs are the same.

I’m not going to rehash everything she said because there’s really no point, but is it honestly beyond the bounds of possibility that if she were to redraw the graphs for 2014 then the lines might fall something like this?

Landing page speed versus bounce rate. Landing page speed versus pages-per-visit fall-off. Landing page speed versus conversion rate fall-off.

No, I don’t think so either. Nobody’s become any more tolerant of slow websites in the last two years.

It’s worth pointing out that the performance poverty line is NOT an absolute line for all websites, in contrast to the way I’ve sometimes seen it presented. Tammy took data for 5 companies that were Strangeloop customers and suggests that you should collect your own data from your own site to find where your performance poverty line is. Nevertheless, I think the line at 8 seconds is a good ballpark figure.

What it means is that for page loads over 8 seconds, relatively small improvements in performance will make little or no difference to key business metrics because you’ve already lost people. For example, you’re unlikely to see any improvement in bounce rate, pages per visit, or conversion rate if you just improve your loading time from 10 seconds to 8 seconds. You need to halve your page load time, or better, to see any real improvement.

Companies like Amazon and Facebook take this very seriously, and have hard numbers for the negative effect poor performance can have on both revenue and engagement.

In 2006 Amazon announced that revenue increased by 1% for every 100ms they were able to shave off page load times: a claim that you can find on slide 10 of their 2009 Make Data Useful presentation. Strangeloop went on to create an infographic illustrating this for Amazon, along with several other major websites:

Illustration of performance findings across different websites from Strangeloop.

(Click to see a larger version. NB. They’re happy for people to reproduce this.)

To summarise:

  • Shopzilla saw a 12% revenue increase after improving average page load times from 6 seconds to 1.2 seconds.
  • Amazon saw 1% revenue increase for every 100ms improvement.
  • AOL found that visitors in the fastest 10% by site speed viewed 50% more pages than visitors in the slowest 10%.
  • Yahoo increased traffic by 9% for every 400ms improvement.
  • Mozilla estimated 60 million more Firefox downloads as a result of making page loads 2.2 seconds faster.

I also mentioned Facebook. They’re far from my favourite site, but back in 2010 at Velocity China they revealed that an extra 500ms on page load times led to a 3% drop-off in traffic, and an extra 1000ms led to a 6% drop-off. One suspects that as page loads get slower still that nice linear relationship probably turns into a cliff drop.

And the evidence goes back even further. Remember how, in the late 90s, that search engine nobody had heard of – Google – managed to trounce all opposition? One of the major reasons for that (apart from better search results) was that the homepage was incredibly sparse, such that it loaded very quickly even over the slowest of dial-up connections. This was in stark contrast to the (relatively – remember, slow connections) bloated and content laden homepages of sites such as AltaVista and Yahoo. Here’s AltaVista’s homepage on January 17th, 1999. Ironically they were doing a better job back in 1996.

I’m not seriously suggesting that in the case of your site you’ll definitely lose 1% of revenue for every extra 100ms on page load time. Amazon has an extraordinarily broad customer base, whereas in a niche you might not suffer as badly… alternatively, you might do even worse. If you collect performance metrics from your site you should be able to figure out the real impact for yourself.

What’s true is that you’ll lose out to faster competitors. You need to be amongst the best of them; ideally you want to beat them. (Unfortunately for any business involved in some kind of online retail activity, unless you’re particularly nichey, one of your competitors probably is Amazon. This is a colossal pain in the backside because their page load times are VERY fast.)

Anyway, to summarise: a faster website leads to higher conversion rates and more revenue. Win!

(Btw, I don’t rate AdSense as an income source but, if you do, a faster site should mean higher bids, which means more money for both you and Google.)

Web Performance & Mobile Computing

I’ve touched on this briefly in my aside above but mobile devices, unless they’re being used with WiFi, are notorious for suffering slow, choppy connections. In theory this gets better with 4G, and particularly with LTE-Advanced (see my previous post). In reality bandwidth caps and contention may make the additional speed and reduced latency of 4G a moot point, so don’t bank on better performance just because the headline figures suggest it’s available.

If you expect a lot of customers to access your site from a mobile device, you should make sure you test on these devices, and make any changes necessary to give users a good experience. DON’T test exclusively on the latest greatest hardware. I realise it’s tiresome but make sure you use the kind of low-end/mid-range smartphones that are common currency. There are still plenty of iPhone 3GSs and 4s, along with a gazillion veteran and scuffed Android devices doing good service.

Web Performance & SEO

SEO’s a bit of a tricky topic, because I (sort of) don’t believe in it. I’m not saying it doesn’t work but the problem is, if overdone, it can backfire quite badly. These days it seems barely a month goes by where I don’t read about another legitimate outfit who’ve been boned by a drop in traffic as Google update their search index filters. MetaFilter springs immediately to mind just because it’s been on HN the past few days, but there are others. (That particular story is sad because it’s had such a severe effect that they’ve had to let staff go, but I digress…)

The point is that nowadays the performance of your website does have an effect on its ranking in search results. The faster your site, the higher it will rank, and vice versa. A faster site is one SEO trick that Google won’t penalize you for, so take advantage of it!

Ideal Benchmark Web Performance

This is another slightly tricky area. Some people will give you a hard figure for this as though it’s holy writ, but I don’t necessarily think that’s helpful. Also, whilst it’s important that you get landing page performance right, you shouldn’t focus on that to the exclusion of your site as a whole. If you offer people a crappy experience once they’ve got past the landing page they’re still going to bail.

You need to benchmark against competitors, ideally over a variety of connection speeds, but at the very least check how you fare against them over a low latency connection to get a good idea of baseline performance. If you need to, set up a VM on Azure or EC2 and remote desktop into it, then check speeds from there. You don’t necessarily need to be the fastest site on the web, but you want to be amongst the fastest (or better if you can) as compared to your competitors.

You can use services such as Neustar for more systematic testing under load from a variety of locations. You can even use them on your competitors, but I wouldn’t recommend it because they probably won’t be very happy with you, and may lawyer up.

If you really want some figures to aim at, Amazon’s numbers aren’t a bad target:

  • <200ms time to first byte,
  • <500ms to render above the fold content,
  • <2000ms for a complete page load

(NOTE: these measurements were taken on a connection with ~5ms latency. You won’t see this performance over, for example, a home broadband connection, or 3G. The effect of a slower connection compounds on slower sites though, often because of roundtripping. You should test your site over the kinds of connections your target audience will use, and on the kinds of devices they use, especially low-end laptops, cheap tablets, mobiles with no 4G connectivity, etc.)

They actually aren’t that hard to achieve. One situation in which you may find them more of a struggle is if you’re using a CMS: optimisation could require customisation, but you’ll often find plugins that can help you. WordPress, for example, offers plenty.

You want to improve the average page load, so make sure you load test under circumstances that emulate your anticipated usage patterns. This used to be a hassle but nowadays services such as the aforementioned Neustar make it pretty straightforward.

How To Improve Web Performance

There are two key areas for improvement:

  • Time to first byte (server-side optimization)
  • Client-side processing, loading and rendering

Taking latency out of the picture, time to first byte (TTFB) is a function of how much work you have to do on the server before you start returning page data. Lots of data retrieval or dynamic generation on the server side can have a devastating effect on time to first byte. Web servers are never faster than when serving static content so you want to get as close to this as possible, particularly for landing pages.

For example, if you need to present a lot of user-specific information, instead of executing half a dozen SQL queries to retrieve the data, consider storing a blob of JSON in a key-value store so you can quickly look it up and return it by user ID. You can even use caching and indexing software, such as Endeca, to help if you feel the complexity is warranted. Selective denormalisation of data can really improve performance. You can also defer work until after the page load by retrieving data asynchronously via AJAX or similar; this will improve the perceived performance of your site even if some page elements aren’t completely rendered immediately (you can often insert placeholder information to help as well).
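
As a contrived sketch of that idea in SQL Server terms – the table and column names here are invented purely for illustration – the page request ends up doing a single primary key seek against a pre-built blob rather than half a dozen joins:

-- Populated offline, on a schedule, or whenever the underlying data changes
CREATE TABLE dbo.UserDashboardCache (
    UserId        INT           NOT NULL PRIMARY KEY,
    DashboardJson NVARCHAR(MAX) NOT NULL,
    RefreshedAt   DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
);

-- What the page request actually runs:
DECLARE @userId INT = 42; -- example value
SELECT DashboardJson
FROM dbo.UserDashboardCache
WHERE UserId = @userId;

Keeping the blob fresh then becomes a background concern rather than something that happens on the critical path of the page load.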

Note that TTFB is a concept that applies to any request sent over HTTP, so it’s as applicable to any AJAX/web service requests made within your page as it is to the initial page load. Make sure you pay attention to both!

Client side performance is about minimising the payload you deliver (image sizes, CSS and JS minification, etc), and the number of requests. It’s also about minimising blocking so move JavaScript loading to the end of the page. JavaScript loading always blocks because your browser has to assume there’s code in there it might need to execute. You want to make sure that nothing slows the rendering of the above the fold portions of your pages, and moving <script> tags further down the page is one very good way of doing so.

Services such as Google PageSpeed Insights and Yahoo! YSlow can help you do this by telling you exactly what you need to optimise. Just point them at the appropriate URL, or install the extension in the case of YSlow, and set them off.

They’ll often tell you to put static resources, like images, on a CDN but this can be a mixed blessing. You might realise a bit of extra speed, but you’ll also lose out on SEO juice if people post links to these resources because they’ll be linking to files on a CDN, not on your website. (Yeah, I know, I know: I’m supposed to be uncomfortable with SEO, but you do need to give it some consideration.)

All of this is time, effort, and money so, if you’re desperate or lazy (and even if you’re neither) you can cheat…

Google PageSpeed Service claims to be able to improve website performance by 20-60%. Whether you believe that or not you lose nothing by at least giving it a go, even if you’re actively working on other optimisations.

To test it out, visit webpagetest.org and hand over the URL of one of your landing pages. It’ll queue up your test and, when it’s finished, present you with results like this:

Basic results for Google PageSpeed Service test, including video comparison.

(Sorry Autotrader, I’m not picking on you: I’ve just been looking at motorbikes recently and noticed your site could be a bit faster.)

The video comparison is kind of cool. You can see that with www.autotrader.co.uk (which I tested from Dublin, Ireland), the above the fold content on the optimized page appears much more quickly. However, there’s nothing quite like hard numbers, so I like the filmstrip comparison, and this sequence really highlights the differences in above the fold performance:

Timeline showing start of above the fold rendering at 0.6 seconds for optimized page. Timeline showing start of above the fold rendering for unoptimized page. Completion of above the fold rendering for optimized page. Completion of above the fold rendering for unoptimized page.

(You can click through the thumbnails for a larger view.)

I’ve switched to a Thumbnail Interval of 0.1 seconds, which shows that above the fold content begins to render at 0.6 seconds for the optimized version, as opposed to 2.2 seconds for unoptimized. That’s a full 1.6 second improvement, which is massive. Unfortunately it still doesn’t complete until 4.7 seconds, which isn’t great, but still better than 5.4 seconds for the original.

The total load time is only about 10% better for the optimized version – 4.9 seconds vs. 5.5 seconds – but the improvement in above the fold performance is key, because that’s what defines the user’s experience.

So how does this work? Google basically proxies your site. It sits between your server and your users, optimizes your pages and serves the optimized versions, instead of the versions on your servers. It is smart though so it will retrieve dynamic content from your servers whenever it’s needed. The only hassle is that to use it for real you’ll obviously need to update your DNS configuration.

As I say, they claim a 20%-60% improvement, but for dynamic sites you should realistically expect to achieve something at the lower end of that range. Also, what it often can’t overcome is a very poor TTFB because it’s not as if it can make your server any faster. Things will probably be a little better but if you have big problems you’re going to have to do some work yourself (or you could get in touch and hire me to do it for you!).

One surprising outcome of using PageSpeed Service is that sometimes overall page load times can increase. That might sound bad but, as I’ve already said, it’s the user experience that really counts: if above the fold render performance improves you’re still onto a winner.

Another reason you may not see the speed gains you hope for is that non-cacheable resources cannot be proxied by PageSpeed Service. For some resources you won’t be able to do anything about this, but you should make sure any resources that can be set cacheable are.

Final point on PageSpeed Service: you’re probably wondering about cost. Companies like Akamai offer similar services for serious $$$$ but, for now, the good news is that PageSpeed Service is free. Google do plan to charge for it, but they’ve said you’ll get 30 days notice before you have to start forking over cash, and can cancel within that period.

Conclusion

Hopefully it’s clear by now that a focus on performance leads to improvements in key business metrics related to both engagement and revenue. You also understand the need to consider mobile computing, and the potential for improved search ranking through higher performance. Finally you should have a pretty good idea of exactly what you’re aiming for performance-wise, and how to get there, by focussing on specific areas of improvement on both server and client.

Timing is everything in the performance tuning game: learn to choose the right metrics to hunt down bottlenecks

So much of life is about timing. Just ask David Davis. He was arrested after getting into a scuffle whilst having his hair cut:

David Davis with half a haircut in his police mugshot.

Bad timing, right?

But that’s not really the kind of timing I’m talking about. When you’re performance tuning an application an understanding of timing is crucial to success – it can reveal truth that would otherwise remain masked. In this post I want to cover three topics:

  • The different types of timing data you can collect, and the best way to use them,
  • Absolute versus relative timing measures, and
  • The effect of profiling method (instrumentation versus sampling) on the timing data you collect.

Let’s start off with the first…

Regardless of your processor architecture, operating system, or technology platform most (good) performance profiling software will use the most accurate timer supported by your hardware and OS. On x86 and x64 processors this is the Time Stamp Counter, but most other architectures have an equivalent.

From this timer it’s possible to derive a couple of important metrics of your app’s performance:

  • CPU time – that is, the amount of time the processor(s) spend executing code in threads that are part of your process ONLY – i.e., exclusive of I/O, network activity (e.g., web service or web API calls), database calls, child process execution, etc.
  • Wall clock time – the actual amount of time elapsed executing a particular piece of code, such as a method, including I/O, network activity, etc.
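
Incidentally, you don’t need a profiler to see the difference between these two metrics: SQL Server reports both if you turn on SET STATISTICS TIME. Here’s a trivial illustration – run it in a query window and compare the CPU time and elapsed time figures it prints:

SET STATISTICS TIME ON;

-- Burns CPU: CPU time and elapsed time come out roughly similar
SELECT COUNT(*)
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;

-- Just waits: elapsed time is around 2,000ms, CPU time is close to zero
WAITFOR DELAY '00:00:02';

SET STATISTICS TIME OFF;

The WAITFOR statement reports a couple of seconds of elapsed time but essentially no CPU time – which is exactly the kind of gap the wall clock view exposes in a profiler.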

Different products might use slightly different terminology, or offer subtly differing flavours of these two metrics, but the underlying principles are the same. For this post I’ll show the examples using ANTS Performance Profiler but you’ll find that everything I say is also applicable to other performance tools, such as DotTrace, the Visual Studio Profiling Tools, and JProfiler, so hopefully you’ll find it useful.

The really simple sequence diagram below illustrates the differences between CPU time and wall clock time for executing a method called SomeMethod(), which we’ll assume is in a .NET app, that queries a SQL Server database.

Sequence diagram illustrating the difference between wall clock and CPU time.

The time spent actually executing code in SomeMethod() is represented by regions A and C. This is the CPU time for the method. The time spent executing code in SomeMethod() plus retrieving data from SQL Server is represented by regions A, B, and C. This represents the wall clock time – the total time elapsed whilst executing SomeMethod(). Note that, for simplicity’s sake:

  • I’ve excluded any calls SomeMethod() might make to other methods in your code, into the .NET framework class libraries, or any other .NET libraries. Were they included these would form part of the CPU time measurement because this is all code executing on the same thread within your process.
  • I’ve excluded network latency from the diagram, which would form part of the wall clock time measurement.

Most good performance profilers will allow you to switch between CPU and wall clock time. All the profilers I mentioned above support this. Here’s what the options look like in ANTS Performance Profiler; other products are similar:

Timing options in Red Gate's ANTS Performance Profiler

There’s also the issue of time in method vs. time with children. Again the terminology varies a little by product but the basics are:

  • Time in method represents the time spent executing only code within the method being profiled. It does not include callees (or child methods), or any time spent sleeping, suspended, or out of process (network, database, etc.). It follows from this that the absolute value of time in method will be the same regardless of whether you’re looking at CPU time, or wall clock time.
  • Time with children includes time spent executing all callees (or child methods). When viewing wall clock time it also includes time spent sleeping, suspended, and out of process (network, database, etc.).

OK, let’s take a look at an example. Here’s a method call with CPU time selected:

CPU times for method

And here’s the same method call with wall clock time selected:

Wall clock times for method

Note how in both cases Time (ms), which represents time in method, is the same at 0.497ms, but that with wall clock time selected the time with children is over 40 seconds as opposed to less than half a second. We’ll take a look at why that is in a minute. For now all you need to understand is that this is time spent out of process, and it’s the kind of problem that can easily be masked if you look at only CPU time.

All right, so how do you know whether to look at CPU time or wall clock time? And are there situations where you might need to use both?

Many tools will give you some form of real-time performance data as you use them to profile your apps. ANTS Performance Profiler has the timeline; other tools have a “telemetry” view, which shows performance metrics. The key is to use this, along with what you know about the app to gain clues as to where to look for trouble.

The two screengrabs above are from a real example on the ASP.NET MVC back-end support systems for a large B2B ecommerce site. They relate to the user clicking on an invoice link from the customer order page. As you’d expect this takes the user to a page containing the invoice information, but the page load was around 45 seconds, which is obviously far too long.

Here’s what the timeline for that looked like in ANTS Performance Profiler:

ANTS Performance Profiler timeline for navigating from order to invoice page on internal support site.

(Note that I’ve bookmarked such a long time period not because the profiler adds that much overhead, but because somebody asked me a question whilst I was collecting the data, so there was a delay before I clicked Stop Live Bookmark!)

As you can see, there’s very little CPU activity associated with the worker process running the site; just one small spike over to the left.

This tells you straight away that the time isn’t being spent on lots of CPU intensive activity in the website code. Look at this:

Call tree viewing CPU time - doesn't look like there's much amiss.

We’re viewing CPU time and there’s nothing particularly horrendous in the call tree. Sure, there’s probably some room for optimisation, but absolutely nothing that would account for the observed 45 second page load.

Switch to wall clock time and the picture changes:

Call graph looking at wall clock time - now we're getting somewhere!

Hmm, looks like the problem might be those two SQL queries, particularly the top one! Maybe we should optimise those*.

Do you see how looking at the “wrong” timing metric masked the problem? In reality you’ll want to use both metrics to see what each can reveal and you’ll quickly get to know which works best in different scenarios as you do more performance tuning.

By the way: for those of you working with Java, JProfiler has absolutely great database support with multiple providers for different RDBMSs. I would highly recommend you check it out.

You may have noticed that throughout the above examples I’ve been looking at absolute measurements of time, in this case milliseconds. Ticks and seconds are often also available, but many tools offer relative measurements – generally percentages – in some cases as the default unit.

I find relative values often work well when looking at CPU time but that, generally, absolute values are a better bet for wall clock time. The reason for this is pretty simple: wall clock time includes sleeping, waiting, suspension, etc., and so often your biggest “bottleneck” can appear to be a single thread that mostly sleeps, or waits for a lock (e.g., the Waiting for synchronization item in the above screenshots). This will often be something like the GC thread and the problem is, without looking at absolute values, you’ve no real idea how significant the amounts of time spent in other call stacks really are. Switching to milliseconds or (for really gross problems – the above would qualify) seconds can really help.

Let’s talk about instrumentation versus sampling profiling and the effect this has on timings.

Instrumentation is the more traditional of the two methods. It actually modifies the running code to insert extra instructions that collect timing values throughout the code. For example, instructions will be inserted at the start and end of methods and, depending upon the level of detail selected, at branch points in the code, or at points which mark the boundaries between lines in the original source. Smarter profilers need only instrument branch points to accurately calculate line level timings and will therefore impose less overhead in use.

Back in the day this modification would be carried out on the source code, and this method may still be used with C++ applications. The code is modified as part of the preprocessor step. Alternatively it can be modified after compilation but before linking.

Nowadays, with VM languages, such as those that run in the JVM or the .NET CLR, the instrumentation is generally done at runtime just before the code is JITed. This has a big advantage: you don’t need a special build of your app in order to diagnose performance problems, which can be a major headache with older systems such as Purify.

Sampling is available in more modern tools and is a much lower overhead, albeit less detailed, method of collecting performance data. The way it works is that the profiler periodically takes a snapshot of the stack trace of every thread running in the application. It’ll generally do this many times a second – often up to 1,000 times per second. It can then combine the results from the different samples to work out where most time is spent in the application.

Obviously this is only good for method level timings. Moreover methods that execute very quickly often won’t appear in the results at all, or will have somewhat skewed timings (generally on the high side) if they do. Timings for all methods are necessarily relative and any absolute timings are estimates based on the number of samples containing each stack trace relative to the overall length of the selected time period.

Furthermore most tools cannot integrate ancillary data with sampling. For example, ANTS Performance Profiler will not give information about database calls, or HTTP requests, in sampling mode since this data is collected using instrumentation, which is how it is able to tell you – for example – exactly where queries were executed.

Despite these disadvantages, because of its low overhead, and because it doesn’t require modification of app code, sampling can often be used on a live process without the need for a restart before and after profiling, so can often be a good option for apps in production.

The effect of all of this on timing measurements if you’ve opted for sampling rather than instrumentation profiling is that the choice of wall clock time or CPU time becomes irrelevant. This is because whilst your profiler knows the call stack for each thread in every sample, it probably won’t know whether or not the thread was running (i.e., it could have been sleeping, suspended, etc.) – figuring this out could introduce unacceptable overhead whilst collecting data. As a result you’ll always be looking at wall clock time with sampling, rather than have the choice as you do with instrumentation.

Hopefully you’re now equipped to better understand and use the different kinds of timing data your performance profiler will show you. Please do feel free to chime in with questions or comments below – feedback is always much appreciated and if you need help I’d love to hear from you.

*Optimising SQL is beyond the scope of this post but I will cover it, using a similar example, in the future. For now I want to focus on the different timing metrics and what they mean to help you understand how to get the best out of your performance profiler. That being said, your tool might give you a handy hint so it’s not even as if you need to do that much thinking for yourself (but you’ll still look whip sharp in front of your colleagues)…

ANTS Performance Profiler hinting that the problem may be SQL-related.

Just don’t let them get a good look at your screen!