It’s that time of year when talk proposals are submitted, and I tend to take it as an opportunity to refresh and rework my talks.
This year I’ve submitted talks for DDD, DDD North, and NDC London (this one’s a bit of a long shot), and am keeping my eye out for other opportunities. I’ll also be giving talks at the Derbyshire .NET User Group, and DDD Nights in Cambridge in the autumn.
Voting for both DDD and DDD North is now open so, if you’re interested in any of the talks I’ve listed below, please do vote for them at the following links:
Here are my talks. If you’d like me to give any of them at a user group, meetup, or conference you run, please do get in touch.
Talk Title: How to speed up .NET and SQL Server web apps
Performance is a critical aspect of modern web applications. Recent developments in hardware, software, infrastructure, bandwidth, and connectivity have raised expectations about how the web should perform.
Increasingly this attitude applies as much to internal line-of-business apps and niche sites as to large public-facing sites. Google even bases your search ranking in part on how well your site performs. Being slow is no longer an option.
Unfortunately, problems can occur at all layers and in all components of an application: database, back-end code, systems integrations, local and third party services, infrastructure, and even – increasingly – the client.
Complex apps often have problems in multiple areas. How do you go about tracking them down and fixing them? Where do you begin?
The answer is you deploy the right tools and techniques. The good news is that generally you can do this without changing your development process. Using a number of case studies I’m going to show you how to track down and fix performance issues. We’ll talk about the tools I used to find them, and the fixes that resulted.
That being said, prevention is better than cure, so I’ll also talk about how you can go about catching problems before they make it to production, and monitor to get earlier notification of trouble brewing.
By the end you should have a plethora of tools and techniques at your disposal that you can use in any performance analysis situation that might confront you.
With web applications becoming ever more interactive, and the meteoric rise in popularity of mobile browsers, the kind of performance and resource usage issues that once only concerned back-end developers have now become common currency on the client as well.
In this session we’ll look at how the runtimes involved manage memory and how you can get the best out of them. We’ll discuss the “classic” blunders that can trip you up, and how you can avoid them. We’ll also look at the tools that can help you if and when you do run into trouble, both on the client and the server.
Talk Title: Optimizing client-side performance in interactive web applications
Devices nowadays offer a wide variety of form factors and capabilities. On top of this, connectivity – whilst widely available across many markets – varies considerably in quality and speed. This presents a huge challenge to anyone who wants to offer a great user experience across the board, along with a need to carefully consider what actually constitutes “the board”.
In this session I’m going to show you how to optimize the client experience. We’ll take an in-depth look at Chrome Dev Tools, and at how the suite of debugging, data collection, and diagnostic tools it provides can help you diagnose and fix performance issues on the desktop and on Android mobile devices. We’ll also take a look at using Safari to analyse and debug web applications running on iOS.
Talk Title: Complex objects and microORMs: an introduction to the Dapper.SimpleLoad and Dapper.SimpleSave extensions for StackExchange’s Dapper microORM
Dapper (https://github.com/StackExchange/dapper-dot-net) is a popular microORM for the .NET framework that provides a simple way to map database rows to objects. It’s a great alternative when speed is of the essence, and when you just don’t need the functionality offered by Entity Framework (EF).
But what happens when you want to do something a bit more complicated? What happens if you want to join across multiple tables into a hierarchy composed of different types of object? Well, then you can use Dapper’s multi-mapping functionality… but that can quickly turn into an awful lot of code to maintain, especially if you make heavy use of Dapper. That’s the problem Dapper.SimpleLoad is designed to solve: it takes care of the multi-mapping for you.
So far so good, but what happens when you want to save your objects back to the database?
With Dapper it’s pretty easy to write an INSERT, UPDATE, or DELETE statement and pass in your object as the parameter source. But if you’ve got a complex object this, again, can quickly turn into a lot of code.
I’ll give you a good overview of both Dapper.SimpleLoad and Dapper.SimpleSave, with a liberal serving of examples. I’ll also explain their benefits and drawbacks, and when you might consider using them in preference to more heavyweight options, such as EF.
Web performance is a hot-button topic, so the question of how fast your website should be is pretty much guaranteed to start an argument. Perhaps this is more because of the answer – which is, “it depends” – than the question. But it’s fair to say that if much of your business either arrives, or is transacted, online then the answer is pretty darned fast. (It’s also fair to say that if the speed of your website is the only differentiator you have from your competitors, you may have bigger problems.)
In this post I want to cover the following:
The relationship between web performance and key business metrics such as retention, conversion rates, and revenue
Ideal benchmark web performance
How to improve web performance
That’s obviously quite a lot of ground to cover, so let’s get cracking.
I’m not going to rehash everything Tammy said because there’s really no point, but is it honestly beyond the bounds of possibility that, if she were to redraw the graphs for 2014, the lines might fall something like this?
No, I don’t think so either. Nobody’s become any more tolerant of slow websites in the last two years.
It’s worth pointing out that the performance poverty line is NOT an absolute line for all websites, in contrast to the way I’ve sometimes seen it presented. Tammy took data from five companies that were Strangeloop customers, and suggests you should collect data from your own site to find out where your performance poverty line lies. Nevertheless, I think the line at 8 seconds is a good ballpark figure.
What it means is that for page loads over 8 seconds, relatively small improvements in performance will make little or no difference to key business metrics because you’ve already lost people. For example, you’re unlikely to see any improvement in bounce rate, pages per visit, or conversion rate if you just improve your loading time from 10 seconds to 8 seconds. You need to halve your page load time, or better, to see any real improvement.
Companies like Amazon and Facebook take this very seriously, and have hard numbers for the negative effect poor performance can have on both revenue and engagement.
Shopzilla saw a 12% revenue increase after improving average page load times from 6 seconds to 1.2 seconds.
Amazon saw a 1% revenue increase for every 100ms improvement.
AOL found that visitors in the fastest ten percent by site speed viewed 50% more pages than visitors in the slowest ten percent.
Yahoo increased traffic by 9% for every 400ms improvement.
Mozilla estimated 60 million more Firefox downloads as a result of making page loads 2.2 seconds faster.
I also mentioned Facebook. They’re far from my favourite site but, back in 2010 at Velocity China, they revealed that an extra 500ms on page load times led to a 3% drop-off in traffic, and an extra 1,000ms led to a 6% drop-off. One suspects that as page loads get slower still, that nice linear relationship probably turns into a cliff drop.
And the evidence goes back even further. Remember how, in the late 90s, that search engine nobody had heard of – Google – managed to trounce all opposition? One of the major reasons for that (apart from better search results) was that the homepage was incredibly sparse, such that it loaded very quickly even over the slowest of dial-up connections. This was in stark contrast to the bloated, content-laden (relatively speaking – remember, connections were slow) homepages of sites such as AltaVista and Yahoo. Here’s AltaVista’s homepage on January 17th, 1999. Ironically they were doing a better job back in 1996.
I’m not seriously suggesting that in the case of your site you’ll definitely lose 1% of revenue for every extra 100ms on page load time. Amazon has an extraordinarily broad customer base, whereas in a niche you might not suffer as badly… alternatively, you might do even worse. If you collect performance metrics from your site you should be able to figure out the real impact for yourself.
What’s true is that you’ll lose out to faster competitors. You need to be amongst the best of them; ideally you want to beat them. (Unfortunately for any business involved in some kind of online retail activity, unless you’re particularly nichey, one of your competitors probably is Amazon. This is a colossal pain in the backside because their page load times are VERY fast.)
Anyway, to summarize: a faster website leads to higher conversion rates and more revenue. Win!
(Btw, I don’t rate AdSense as an income source but, if you do, a faster site should mean higher bids, which means more money for both you and Google.)
Web Performance & Mobile Computing
I’ve touched on this briefly in my aside above but mobile devices, unless they’re being used with WiFi, are notorious for suffering slow, choppy connections. In theory this gets better with 4G, and particularly with LTE-Advanced (see my previous post). In reality bandwidth caps and contention may make the additional speed and reduced latency of 4G a moot point, so don’t bank on better performance just because the headline figures suggest it’s available.
If you expect a lot of customers to access your site from a mobile device, you should make sure you test on these devices, and make any changes necessary to give users a good experience. DON’T test exclusively on the latest greatest hardware. I realise it’s tiresome but make sure you use the kind of low-end/mid-range smartphones that are common currency. There are still plenty of iPhone 3GSs and 4s, along with a gazillion veteran and scuffed Android devices doing good service.
Web Performance & SEO
SEO’s a bit of a tricky topic, because I (sort of) don’t believe in it. I’m not saying it doesn’t work but the problem is, if overdone, it can backfire quite badly. These days it seems barely a month goes by where I don’t read about another legitimate outfit who’ve been boned by a drop in traffic as Google update their search index filters. MetaFilter springs immediately to mind just because it’s been on HN the past few days, but there are others. (That particular story is sad because it’s had such a severe effect that they’ve had to let staff go, but I digress…)
The point is that nowadays the performance of your website does have an effect on its ranking in search results. The faster your site, the higher it will rank, and vice versa. A faster site is one SEO trick that Google won’t penalize you for, so take advantage of it!
Ideal Benchmark Web Performance
This is another slightly tricky area. Some people will give you a hard figure for this as though it’s holy writ, but I don’t necessarily think that’s helpful. Also, whilst it’s important that you get landing page performance right, you shouldn’t focus on that to the exclusion of your site as a whole. If you offer people a crappy experience once they’ve got past the landing page they’re still going to bail.
You need to benchmark against competitors, ideally over a variety of connection speeds, but at the very least check how you fare against them over a low latency connection to get a good idea of baseline performance. If you need to, set up a VM on Azure or EC2 and remote desktop into it, then check speeds from there. You don’t necessarily need to be the fastest site on the web, but you want to be amongst the fastest (or better if you can) as compared to your competitors.
You can use services such as Neustar for more systematic testing under load from a variety of locations. You could even point them at your competitors, but I wouldn’t recommend it: they probably won’t be very happy with you, and may lawyer up.
If you really want some figures to aim at, Amazon’s numbers aren’t a bad target:
<200ms time to first byte,
<500ms to render above the fold content,
<2000ms for a complete page load
(NOTE: these measurements were taken on a connection with ~5ms latency. You won’t see this performance over, for example, a home broadband connection, or 3G. The effect of a slower connection compounds on slower sites though, often because of roundtripping. You should test your site over the kinds of connections your target audience will use, and on the kinds of devices they use, especially low-end laptops, cheap tablets, mobiles with no 4G connectivity, etc.)
They actually aren’t that hard to achieve. One situation in which you may find them more of a struggle is if you’re using a CMS: optimisation could require customisation, but you’ll often find plugins that can help you. WordPress, for example, offers plenty.
You want to improve the average page load, so make sure you load test under circumstances that emulate your anticipated usage patterns. This used to be a hassle but nowadays services such as the aforementioned Neustar make it pretty straightforward.
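If you want to keep an eye on numbers like these yourself, here’s a minimal sketch of how you might measure TTFB and total load time using PHP’s cURL extension. The URL is a placeholder, and bear in mind this captures only the raw HTTP timings for the HTML document, not rendering:

<?php
// Minimal sketch: measure TTFB and total download time for a page.
// Run it from somewhere with latency similar to your users' connections.
$ch = curl_init('https://www.example.com/'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // buffer the body rather than echoing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any redirects
curl_exec($ch);
$ttfb = curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME); // seconds until first byte
$total = curl_getinfo($ch, CURLINFO_TOTAL_TIME); // seconds for the full response
curl_close($ch);
printf("TTFB: %.0fms, total: %.0fms\n", $ttfb * 1000, $total * 1000);

Run it against a few competitors as well as your own pages and you’ll quickly see where you stand on time to first byte.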
How To Improve Web Performance
There are two key areas for improvement:
Time to first byte (server-side optimization)
Client-side processing, loading and rendering
Taking latency out of the picture, time to first byte (TTFB) is a function of how much work you have to do on the server before you start returning page data. Lots of data retrieval or dynamic generation on the server side can have a devastating effect on time to first byte. Web servers are never faster than when serving static content so you want to get as close to this as possible, particularly for landing pages.
For example, if you need to present a lot of user-specific information, instead of executing half a dozen SQL queries to retrieve the data, consider storing a blob of JSON in a key-value store so you can quickly look it up and return it by user ID. You can even use caching and indexing software, such as Endeca, to help if you feel the complexity is warranted. Selective denormalisation of data can really improve performance. You can also defer work until after the page load by retrieving data asynchronously via AJAX or similar; this will improve the perceived performance of your site even if some page elements aren’t completely rendered immediately (you can often insert placeholder information to help as well).
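To make that concrete, here’s a minimal sketch using PHP’s Memcached extension. The key scheme and the load_user_profile_from_sql() helper are hypothetical, and in a real app you’d also rebuild or invalidate the blob whenever the underlying data changes:

<?php
// Hypothetical sketch: return a pre-built JSON blob of user data from a
// key-value store, falling back to the slow SQL path on a cache miss.
$cache = new Memcached();
$cache->addServer('localhost', 11211);

function get_user_profile(Memcached $cache, $userId) {
    $json = $cache->get("user_profile:$userId");
    if ($json === false) {
        // Cache miss: run the expensive queries once, then store the result.
        $profile = load_user_profile_from_sql($userId); // hypothetical helper
        $json = json_encode($profile);
        $cache->set("user_profile:$userId", $json, 300); // expire after 5 minutes
    }
    return $json; // ready to emit straight into the page, or an AJAX response
}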
Note that TTFB is a concept that applies to any request sent over HTTP, so it’s as applicable to any AJAX/web service requests made within your page as it is to the initial page load. Make sure you pay attention to both!
Services such as Google PageSpeed Insights and Yahoo! YSlow can help you optimise the client side by telling you exactly what you need to fix. Just point them at the appropriate URL, or install the extension in the case of YSlow, and set them off.
They’ll often tell you to put static resources, like images, on a CDN but this can be a mixed blessing. You might realise a bit of extra speed, but you’ll also lose out on SEO juice if people post links to these resources because they’ll be linking to files on a CDN, not on your website. (Yeah, I know, I know: I’m supposed to be uncomfortable with SEO, but you do need to give it some consideration.)
All of this is time, effort, and money so, if you’re desperate or lazy (and even if you’re neither) you can cheat…
Google PageSpeed Service claims to be able to improve website performance by 20-60%. Whether you believe that or not you lose nothing by at least giving it a go, even if you’re actively working on other optimisations.
To test it out, visit webpagetest.org and hand over the URL of one of your landing pages. It’ll queue up your test and, when it’s finished, present you with results like this:
(Sorry Autotrader, I’m not picking on you: I’ve just been looking at motorbikes recently and noticed your site could be a bit faster.)
The video comparison is kind of cool. You can see that with www.autotrader.co.uk (which I tested from Dublin, Ireland), the above the fold content on the optimized page appears much more quickly. However, there’s nothing quite like hard numbers, so I like the filmstrip comparison, and this sequence really highlights the differences in above the fold performance:
I’ve switched to a Thumbnail Interval of 0.1 seconds, which shows that above the fold content begins to render at 0.6 seconds for the optimized version, as opposed to 2.2 seconds for unoptimized. That’s a full 1.6 second improvement, which is massive. Unfortunately it still doesn’t complete until 4.7 seconds, which isn’t great, but still better than 5.4 seconds for the original.
The total load time is only about 10% better for the optimized version – 4.9 seconds vs. 5.5 seconds – but the improvement in above the fold performance is key, because that’s what defines the user’s experience.
So how does this work? Google essentially proxies your site: it sits between your server and your users, optimizes your pages, and serves the optimized versions instead of the versions on your servers. It is smart, though, so it will retrieve dynamic content from your servers whenever it’s needed. The only hassle is that to use it for real you’ll obviously need to update your DNS configuration.
As I say, they claim a 20%-60% improvement, but for dynamic sites you should realistically expect to achieve something at the lower end of that range. Also, what it often can’t overcome is a very poor TTFB because it’s not as if it can make your server any faster. Things will probably be a little better but if you have big problems you’re going to have to do some work yourself (or you could get in touch and hire me to do it for you!).
One surprising outcome of using PageSpeed Service is that sometimes overall page load times can increase. That might sound bad but, as I’ve already said, it’s the user experience that really counts: if above the fold render performance improves you’re still onto a winner.
Another reason you may not see the speed gains you hope for is that non-cacheable resources cannot be proxied by PageSpeed Service. For some resources you won’t be able to do anything about this, but you should make sure any resources that can be set cacheable are.
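Setting cacheability is often just a matter of web server configuration but, for anything generated dynamically, something along these lines will do the job in PHP (the 24-hour lifetime is an arbitrary example):

<?php
// Mark a dynamically generated resource as publicly cacheable so that
// browsers and proxies such as PageSpeed Service can cache it.
$lifetime = 86400; // 24 hours, purely as an example
header('Cache-Control: public, max-age=' . $lifetime);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $lifetime) . ' GMT');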
Final point on PageSpeed Service: you’re probably wondering about cost. Companies like Akamai offer similar services for serious $$$$ but, for now, the good news is that PageSpeed Service is free. Google do plan to charge for it, but they’ve said you’ll get 30 days notice before you have to start forking over cash, and can cancel within that period.
Hopefully it’s clear by now that a focus on performance leads to improvements in key business metrics related to both engagement and revenue. You also understand the need to consider mobile computing, and the potential for improved search ranking through higher performance. Finally you should have a pretty good idea of exactly what you’re aiming for performance-wise, and how to get there, by focussing on specific areas of improvement on both server and client.
Often you’d use this kind of detection code to redirect the user to a mobile version of your site. However, my site uses the responsive Twenty Twelve WordPress theme, which works well on mobile browsers, so I don’t really want or need to redirect the user.
Unfortunately, there was one particular piece of content that was banjaxxing the page width on mobile devices, and it needed to be replaced. Normally I’d do this on the client side, which is what the theme does, but in this case the content’s Ts & Cs meant I wasn’t allowed to. I could, however, do it on the server.
You probably know that you can’t get the client’s screen resolution or browser window dimensions on the server. However, you can use the user-agent string to detect mobile browsers. This is exactly what the code at detectmobilebrowsers.com does. They’ve written and tested code that will reliably detect mobile browsers across most devices. That isn’t something I wanted to attempt myself (how would I test it?), and neither should you.
WordPress runs on PHP so I now have this particularly minging, but highly functional, lump of code in one of my template scripts (the first two lines grab the user-agent string that the regexes are then tested against):
<?php
$useragent = $_SERVER['HTTP_USER_AGENT'];
if(preg_match('/(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows (ce|phone)|xda|xiino/i',$useragent)||preg_match('/1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-/i',substr($useragent,0,4))) : ?>
<div style="padding-bottom: 16px; margin-left: -24px; margin-top: -24px;">
<!-- Content displayed on mobile (but not tablet) browsers here -->
</div>
<?php else : ?>
<span style="float: right; clear: right; padding-bottom: 8px;">
<!-- Content displayed on all other browsers here -->
</span>
<?php endif; ?>
DO NOT copy this code. New devices are being released all the time so make sure you grab the latest version from detectmobilebrowsers.com. That way your code will work with the most recent generation of devices.
To edit your WordPress templates, open up your dashboard and click Appearance > Editor.
(Oh, and if I ever need that check in more than one place I’ll refactor it into a method. For now I can live with it sitting in the one template where I need it.)
As my comment in the code suggests, this won’t work for tablets. If you need special support for them, add the code snippet under the Tablets section here to the first regex. You can also separate the checks for mobiles and tablets, or even different models of mobile/tablet, if you need more fine-grained control over site behaviour.
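By way of illustration, separated checks might be structured like this. $phone_pattern and $tablet_pattern are placeholders for the regexes from detectmobilebrowsers.com, which I haven’t reproduced here for the reason given above:

<?php
// Hypothetical structure: test for tablets before phones so each class
// of device can get its own markup. The patterns are placeholders for
// the regexes from detectmobilebrowsers.com.
$useragent = $_SERVER['HTTP_USER_AGENT'];
if (preg_match($tablet_pattern, $useragent)) : ?>
<!-- Tablet-specific content here -->
<?php elseif (preg_match($phone_pattern, $useragent)) : ?>
<!-- Phone-specific content here -->
<?php else : ?>
<!-- Desktop content here -->
<?php endif; ?>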
The stats for LTE-Advanced are pretty impressive: the aim is to offer 1Gbps download, and 500Mbps upload. In a recent test in London’s Tech City, EE were able to achieve a 294Mbps download speed on a “category 6” device. More on what “category 6” means in a moment although, suffice it to say, for now there are no such devices available on the market.
Chandru’s question for us, as mobile developers, was, “What would YOU do with 300Mbps bandwidth?” Specifically, what kind of apps would we build? How would we use that much bandwidth?
Whilst some joker commented that “we’d waste it”, and I have no doubt that badly written apps will do exactly that, it does raise many exciting possibilities.
The obvious candidate for high bandwidth is video, but it’s not just downloading video: that 150Mbps upload speed means you can also upload video in real time. In fact, those of you already used to 4G service may have noticed a much better Skype experience than you get over home broadband for precisely this reason. Even the crappiest ADSL connection usually has enough download bandwidth to stream video. But upload? Not so much. Still, the current 4G LTE services offer speeds of up to 100Mbps download and 50Mbps upload with LTE Release 8, more than sufficient for video calls, at least until you run into your bandwidth cap.
But I wonder if we might see new classes of apps emerging where media is being streamed effectively peer to peer in real time. Social and crowdsourced video could be interesting for travel (exactly how bad is that queue on the M4?), event management, and gaming – urban team nerf assassins with real-time video, anyone? (I suppose it might also increase the risk of being arrested for antisocial and disruptive behaviour, so use it responsibly. 🙂 )
On the downside it also raises the possibility of even more intrusive and disruptive forms of advertising but I suppose we’re all used to that by now, and will find ways to deal with it. I’m much more interested in the positive and creative uses.
All of this led to a lot of questions from the group and I felt a bit bad for Chandru because I’m pretty sure he wasn’t able to get through his whole talk. That being said he was very happy to take questions, and very open about answering them, so it was a good session.
Before I get into that, it might be worth a look at what LTE-Advanced offers.
LTE-Advanced focuses improvements on seven different areas, including:
Peak data rate
Latency
Cell edge user throughput
Spectrum efficiency and flexibility
Mobility
I’ve already talked about the peak data rates, but let’s look at some of these other areas.
Latency
The aim is to reduce the idle-to-connected transition to 50ms (this is what the standard requires, and it looks like EE are already beating it in their tests), which is significantly faster than 3G, and even faster than current 4G LTE.
With round-trip times reduced, using apps should feel much more like being connected via a hard line.
Spectrum Efficiency & Flexibility
Earlier I mentioned “category 6” devices, which aren’t currently available. Current devices tend to be categories 3 and 4. One of the things the category determines is the number of antennae a device contains; currently two is quite common. Category 10 devices may have eight antennae, although it’s difficult to see how these could be made small enough to fit in your pocket.
The number of antennae is one of the factors that enables devices to offer such high speeds: they can use multiple antennae simultaneously to transmit and receive over multiple bands in the frequency spectrum. This is obviously complex but that complexity is hidden at the application level where you simply see one high-speed connection.
Cell-Edge User Throughput
This is somewhat related to the above. With LTE-Advanced it’s possible to transmit and receive data through multiple cells simultaneously. This means that, unlike 3G, where you see a significant drop-off in performance at the edges of cells, service quality and performance is maintained to a much higher level at cell edges.
These improvements are enabled by increased bandwidth plus the capability to request guaranteed bandwidth.
Mobility
Improvements to mobility will enable handovers between cells at speeds of up to 350km/h and, for some frequencies, up to 500km/h – useful when travelling on high-speed trains. LTE-Advanced also supports seamless handover between cellular and WiFi networks. The latter should be possible with devices available today, but will require software upgrades both on handsets and on core network infrastructure.
All of this seems great, and should mean that a lot of the problems commonly associated with mobile apps – high latency, flaky connectivity, poor perceived performance – may disappear, or at least be significantly reduced. Mind you, the group were quick to raise potential issues, which are worth looking at…
Issues for App Developers
What about bandwidth caps?
300Mbps is all very well but, with bandwidth caps like we have today, what’s the point? 5GB isn’t going to last you very long: at 300Mbps you could, in theory, burn through it in a little over two minutes.
I can personally attest to this. Back in August I was in Seattle and Redmond with my friend Kevin and we were using his phone with a 2GB AT&T SIM that we managed to rip through in about four days with the following use: downloading emails (both his and mine with WiFi tethering), some light web usage, maps (mostly offline with Nokia maps), and one 20-30 minute Skype video call.
Bandwidth caps are either going to have to go, or be very significantly increased. People in general are annoyed by them so the former would seem like the way forward. All of this leads to another question…
Will the core network be able to cope with that amount of data flying back and forth?
Chandru said there’s an arms race between what basestations and handsets support over the radio network, and what the network backhaul can handle, and that they are having to continually upgrade it to keep up.
Your device might be able to support these higher speeds, but how many devices running bandwidth-hungry apps can the base-stations support? Will we end up in the same situation as we have with DSL, particularly in rural areas – i.e., nominally 8Mbps connections that at peak times can perform worse than the 56k dial-up connections of 15 years ago?
This is very frustrating for users, and the problem will be made worse if apps are developed on the assumption that the network connection will be fast.
The issue really comes down to the cell density in a given area.
Most areas are covered by macro-cells. These are large cells that provide coverage over a wide area. They tend to be supported by base stations with transmitters placed as high as possible, with good line of sight over the widest possible area. However, they can only support a limited number of users, and therefore tend to be used as a backup in more populated areas.
Urban, and more populated areas in general, tend to be primarily supported by much smaller cells, with therefore many more base-stations covering a given area. These allow large numbers of users to be supported in areas of higher population density.
If there are a sufficient number of smaller cells then contention should mostly become a non-issue, although I would expect that for large gatherings (concerts, New Year celebrations, etc.) we’ll continue to see the kind of network flakiness we still sometimes experience. This does also depend on what people are using their devices for.
What about pricing?
The rollout in London’s Tech City is currently non-commercial. If you’ve got a device that supports LTE-A, you can go ahead and use it for free at the moment. This is likely to change, and there will be a price premium on LTE-A services, as there is for 4G currently. As with all things mobile you should expect to see that disappear in a couple of years.
Will this supersede fixed line broadband?
Fibre is fast, at least compared to what many people (in the UK, at any rate) are currently used to, but it’s only a third as fast as LTE-A, so why bother with the cost of installing the infrastructure?
In many developing countries, India being one example, this is exactly what’s happening: they’re leapfrogging fixed line infrastructure in favour of cellular because it’s much cheaper to install. No digging up streets, laying cables, etc.
On the other hand fixed line can have certain advantages:
It’s more secure,
In theory at least it’s harder (or perhaps just more risky) to take the network down whereas, with the right equipment, I can easily disrupt cellular networks in my local area,
It’s possible to get guaranteed bandwidth, at least with a leased line, which is why leased lines are popular with businesses, although they’re still far too pricey for all but the wealthiest of consumers. (There are mechanisms for requesting guaranteed bandwidth over LTE-A too but, of course, if too many users need it simultaneously it will degrade – I’m not sure how gracefully.)
Fixed line isn’t likely to go away completely, even in developing countries, but in 5-10 years’ time it will no longer be the default option for a lot of people.
What effect does LTE-A have on battery life?
Chandru commented that when EE ran their LTE-Advanced test the device was plugged into the mains and became quite warm, so it sounds like it’s probably quite power-hungry.
One of the other attendees pointed out that Wireless AC (802.11ac WiFi) already delivers LTE-Advanced-class speeds without significant battery drain, so it may not be that bad. But then WiFi operates over much shorter distances and requires lower transmission power, so I’d say it doesn’t sound like a great benchmark.
What will 5G offer?
The notable item Chandru mentioned here was consistency of bandwidth: the aim is to offer 1Gbps across devices. In the meantime, therefore, it seems we shouldn’t assume too much in the way of bandwidth, and there were a lot of questions about how you detect what type and speed of connection you have. iOS largely hides this, although it’s possible to tell whether you’re on a WiFi or cellular network. Android’s APIs are a little more accommodating and offer more detail, if you need it.
That’s it for now. Thanks to Chandru, for speaking, Tony Short, for organising, and Red Gate, for sponsoring the event.