NL-ix has been present in London since 2015, a decade now! When we were expanding further into Europe, London, as one of the largest European interconnectivity markets, was an obvious choice. Until now, however, London has always been a challenging location for us, and we have unfortunately had to turn down prospects because of those challenges. This article explains the background of those challenges and how we have improved our builds to be better prepared for the future and further growth.
Our initial London build was done in 2015. Back then, traffic levels were significantly lower and 100GE equipment was expensive, so it's no surprise that our first subsea backbone consisted of two links of 4x10GE each.
For maximum reach in London we started out with four locations: Telehouse North and Equinix LD5 as "core" locations where the backbone connections toward Amsterdam landed, and Telecity Hex6/7 (now Equinix LD8) and Digital Realty LHR20 as local satellite sites.
Within the city, we of course also deployed Cloud on-ramps for our Enterprise customers.
During Covid we had to expand our network in London to add more capacity. At that point we had just figured out how to get equipment in and out of the UK after Brexit, which was a challenge of its own. And then Covid hit, right when our London capacity started running out. To keep the lights on we had to expand our backbone capacity without being able to go on-site ourselves, due to travel restrictions. This was new for us, as we've always built everything ourselves, except for some standard remote hands work like patching.
The changes required to increase capacity were a bit more hands-on than patching: physical capacity had to be expanded first. The remote hands engineers were of course not familiar with our racks, which added a lot of risk to the expansion project. Early on in the project we therefore decided not to perform any changes on existing connectivity, to limit the impact of mistakes.
Instead, we built a second backbone network next to the original one, so that the remote hands engineers only had to build new capacity, leaving the original capacity in place untouched. This wasn't very cost efficient, but it was Covid and we were happy that we could at least continue to support our customers.
In the end, after a lot of time and effort, we completed the project and expanded our backbone. The downside of all of this was that our London PoPs were now held together with duct tape. Not ideal!
It was clear that our situation in London had to be improved. With our migration to new Nokia routers happening at full force, London was next in line, and that was a natural moment to reflect on the future architecture of our London PoPs. There are a number of noteworthy decisions we made along the way, and we'll explain them below.
Slough vs London:
if you're familiar with the London datacenter campuses, you'll know that there's a very large Equinix campus in Slough, outside of London, while the other large campuses are in the city itself. To improve the resilience of our network we decided to create separate islands for Slough and London. The positive effect is that we can manage the backbone capacity for each of those locations separately.
Reach:
when building out a metro, NL-ix aims to reach at least 80% of all networks in that metro with a minimum number of devices, to limit cost and moving parts in the network: maximum impact with a minimal footprint. This was already the goal when we first built London, so we ended up staying in the locations where we were already present: Equinix LD5, Equinix LD8, Telehouse North and Digital Realty LHR20.
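To give a feel for that trade-off: picking the fewest sites that still reach most networks is essentially a set-cover problem. Below is a minimal sketch of the greedy approach; the per-site network lists are entirely hypothetical (our real planning uses actual peering data):

```python
# Illustrative sketch: choosing the fewest sites that reach >= 80% of
# the networks in a metro. The site/network data below is hypothetical.

def pick_sites(site_networks: dict[str, set[str]], target: float = 0.8) -> list[str]:
    all_networks = set().union(*site_networks.values())
    needed = target * len(all_networks)
    covered: set[str] = set()
    chosen: list[str] = []
    while len(covered) < needed:
        # Greedy: take the site that adds the most not-yet-covered networks.
        site = max(site_networks, key=lambda s: len(site_networks[s] - covered))
        if not site_networks[site] - covered:
            break  # no remaining site adds coverage
        chosen.append(site)
        covered |= site_networks.pop(site)
    return chosen

# Hypothetical example: which networks are present at each site.
sites = {
    "Telehouse North":      {"AS1", "AS2", "AS3", "AS4"},
    "Equinix LD5":          {"AS3", "AS5", "AS6"},
    "Equinix LD8":          {"AS2", "AS7"},
    "Digital Realty LHR20": {"AS8"},
}
print(pick_sites(sites))  # ['Telehouse North', 'Equinix LD5', 'Equinix LD8']
```

The greedy pass stops as soon as the coverage target is met, which is exactly the "minimum number of devices" idea in miniature.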
Capacity:
to ensure sufficient backbone capacity, we're transitioning our network to use only 400GE connections for the backbone, skipping 100GE altogether. Slough and London each have 2x400GE, connected redundantly over non-overlapping paths. You might be familiar with our 2.5x redundancy rule: here we use the other island as the 0.5x, as it is completely redundant.
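As a back-of-the-envelope check, here is our reading of that rule in a few lines of Python; the interpretation (an island's own links cover the 2x share, the other island covers the final 0.5x) is spelled out in the comments:

```python
# Back-of-the-envelope check of the 2.5x redundancy rule, as we read it:
# an island's own backbone links must cover 2x its peak traffic, and the
# other island stands in for the remaining 0.5x. Figures are illustrative.

LINK_GBPS = 400        # every backbone link is 400GE, no 100GE anymore
LINKS_PER_ISLAND = 2   # landed over non-overlapping paths

def max_peak_gbps() -> float:
    own_capacity = LINKS_PER_ISLAND * LINK_GBPS  # 800 Gbps per island
    # The island's own links provide the 2x share of the 2.5x rule; the
    # other island covers the last 0.5x, so peak traffic <= own / 2.
    return own_capacity / 2.0

print(max_peak_gbps())  # 400.0 -> roughly 400 Gbps of peak per island
```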
A/B resilience:
NL-ix operates two separate exchange fabrics: Fabric A runs on Nokia and Fabric B on Juniper. The important rule in this multi-fabric design is that each fabric requires a different hardware vendor, different facilities and separate backbone connections. London will get a Fabric B PoP in Digital Realty LON19 (the Fabric A PoP is in Digital Realty LON20).
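That design rule can be treated as a simple invariant: the A and B PoPs in a metro may not share a vendor, a facility or a backbone path. A minimal sketch of such a check, with the backbone path names as hypothetical placeholders:

```python
# Illustrative invariant check for the multi-fabric design: Fabric A and
# Fabric B must share no hardware vendor, facility, or backbone path.
# The path names here are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class FabricPop:
    vendor: str
    facility: str
    backbone_paths: frozenset[str]

def independent(a: FabricPop, b: FabricPop) -> bool:
    return (
        a.vendor != b.vendor
        and a.facility != b.facility
        and not (a.backbone_paths & b.backbone_paths)  # no shared path
    )

fabric_a = FabricPop("Nokia", "Digital Realty LON20", frozenset({"path-1", "path-2"}))
fabric_b = FabricPop("Juniper", "Digital Realty LON19", frozenset({"path-3", "path-4"}))
assert independent(fabric_a, fabric_b)
```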
The Telehouse campus:
this is an interesting one, as their patching policy has a reputation. The campus is composed of multiple buildings and houses a large number of networks. The standard patching model Telehouse provides is cheap intra-building cross connects and considerably more expensive inter-building cross connects.
As NL-ix wants to provide our services to networks in all buildings, this was a challenge: either we pay a lot of money for presence in every building, or our customers pay a lot of money for cross connects. We weren't happy with this status quo and found a customer-friendly alternative: on the Telehouse campus, NL-ix provides the inter-building cross connect along with our service. This requires a slight markup on the price, but is more cost effective than arranging the cross connect yourself.
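To illustrate the trade-off with purely hypothetical numbers (none of these prices are real, they only show the shape of the comparison):

```python
# Hypothetical cost comparison for the Telehouse campus model.
# All amounts are made up for illustration only.

XCONNECT_DIRECT = 300  # monthly cost if the customer orders the
                       # inter-building cross connect themselves
NLIX_MARKUP = 100      # monthly markup when NL-ix bundles it

def monthly_cost(bundled: bool, port_fee: int = 1000) -> int:
    # Bundled: port fee plus our markup; direct: port fee plus the
    # full inter-building cross connect billed to the customer.
    return port_fee + (NLIX_MARKUP if bundled else XCONNECT_DIRECT)

print(monthly_cost(bundled=True), monthly_cost(bundled=False))  # 1100 1300
```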
Backbone connections:
the real challenge is designing the backbone network, in this case for two fabrics, primarily via subsea cables, as London is separated from mainland Europe by the sea.
We decided on four backbone links: one via the Eurotunnel, which is resilient against all subsea interference; a second via the Scylla cable, which is buried beneath the seabed to protect it against anchors; a third via the new Concerto cable, which provides a route diverse from Scylla; and a last link going to Paris across the Channel.
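One way to reason about this mix is to tag every link with the risk categories it is exposed to and look at the worst single event; the tags below are our own simplification of the actual route engineering:

```python
# Illustrative diversity check for the four London backbone links: how
# many links can a single risk category take down? The risk tags are a
# simplification of the real route engineering.

links = {
    "Eurotunnel": {"tunnel-outage"},             # no subsea exposure
    "Scylla":     {"subsea-event"},              # buried -> anchor-safe
    "Concerto":   {"subsea-event", "anchor"},    # route diverse from Scylla
    "via Paris":  {"subsea-event", "anchor"},    # crosses the Channel
}

def worst_case(links: dict[str, set[str]]) -> tuple[str, int]:
    """Return the risk tag that takes down the most links, and how many."""
    tags = set().union(*links.values())
    hit = {tag: sum(tag in risks for risks in links.values()) for tag in tags}
    worst = max(hit, key=hit.get)
    return worst, hit[worst]

tag, down = worst_case(links)
print(f"worst single event '{tag}' takes down {down} of {len(links)} links")
```

Even a broad subsea event leaves the Eurotunnel link up, which is exactly why it is part of the mix.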
The result of this rebuild is that we now have enough capacity for the time to come. We can provide up to 10 Tbps of connected capacity at each physical PoP and up to 3 Tbps of customer traffic. Backbone capacity is simple to expand, and our London network is more resilient than ever.
If there's any time to hop on the boat, it's now!