In part one of this series, I framed the original strategy using Roger Martin’s Playing To Win framework. In part two of this retrospective series, I will cover the choices made during the RFP process and for hosting the site.

The RFP

The RFP for the website was put out to market in 2004. It was broken into four parts, and vendors could bid for as many as they wanted: design, content management, hosting, and live streaming.

Some vendors pitched for all four, but used third parties to provide parts they did not offer directly. Some of those third parties pitched their specialist service directly. If I recall correctly there were 23 respondents.

In the end we chose four different providers: 

  • Alto in Wellington for design 
  • MySource Matrix (provided by Squiz Australia) as the Content Management System (CMS)
  • ICONZ in Auckland for hosting
  • Richard Naylor’s R2 streaming company and CityLink for audio hosting and live streaming.

Having chosen specialist providers in each field, we worked closely with them all to make sure that the pieces fitted together at the end of the process, and could be delivered on time. This would ensure that we got the most value, but also a solution that was not tied to any single vendor or technology, allowing us to change pieces out as needed later.

We did have proposals from vendors offering all four parts. The owner of a large agency in Auckland phoned me and told me it was impossible to deliver a site for the money suggested; they declined to bid, but wished me the best of luck. Their proposal was a top-to-bottom integrated solution, built on a single vendor’s technology stack.

After assessing similar RFP responses, it was very evident that this approach would have locked us into the entire solution, making any changes or upgrades a whole-of-platform affair, limiting flexibility and driving up the total cost of ownership.

The total cost of ownership of particular approaches is hardly ever considered when choosing vendors or technology, which is a shame, as many businesses end up with ‘unexpected costs’ due to poor project practices and unrealistic expectations. I credit writers like Paul Strassmann (The Squandered Computer) for introducing me to these concepts.

The modular approach we took was one of the most important choices we made as it allowed us to continually update, upgrade and improve parts of the technical platform. This was done incrementally as needed, and in a way that was both low cost and seamless to visitors. It also allowed the system as a whole to be optimised and fine-tuned in a number of important areas, such as site load times, content publishing times, accessibility and search engine optimisation (SEO).

Hosting, Servers & Internet

As I said, the first iteration of the site was hosted at ICONZ in Auckland, with a managed connection to the internet. A single four-core HP server with a RAID array was purchased to host the site. Backups were taken once a day and stored off-site.

This single point of failure (a single server, at a single location) will seem shocking to readers in 2023, but the reality in 2005 was that anything more complex was orders of magnitude more expensive. By keeping things simple, and being prepared to wear some risk, we could spend that money on publishing content.

And yes, we did have some outages, but mostly because we grew faster than the CMS could keep up with.

One unplanned outage happened due to human error, right in the middle of the day. I got an alert to say the site was down, and having confirmed this I called the ICONZ support line. They were also perplexed as there were no network or power issues evident. One of their team went to the server room to investigate. 

What he found was that another customer had come in to do some work on one of their servers. They had removed their server from the rack and sat it on a chair, and in the process had unplugged ours. The ICONZ team plugged our server back in, and after an investigation they offered to move our equipment to another shared rack with no other customers in it, which they agreed not to fill initially.

If you were the person who unplugged our server back in 2006 you are forgiven.

Traffic from ICONZ was passed over peering links—these are direct managed private connections—to other New Zealand ISPs.

The audio was all hosted on R2 servers at CityLink facilities in San Francisco, Auckland and Wellington. As an aside, these servers were also used by TVNZ, and some government agencies, and served about 150 megabits per second back to New Zealand, all day, every day. This seems bizarre, but is the result of arcane internet connectivity politics.

We moved to dual servers, one for the database and one for the webserver, around 2007/8.

We upgraded to eight 16-core servers around 2010, and in 2012 I decided to move hosting of the CMS and audio in-house. RNZ has near carrier-grade equipment rooms in Auckland and Wellington, and it seemed pointless to be paying someone else to provide power and an internet connection when we could do this at no extra cost.

Server rack at ICONZ, showing six servers and a couple of routers.

As part of that change, we got our own IP address range and went to market for internet connectivity. This was a fascinating exercise given the original problems we had getting reasonable rates for bandwidth in 2005. There were about a dozen respondents, and while I’ll preserve confidentiality, the results were surprising.

One of the country’s largest providers declined to bid. Another large provider completely misread what we wanted, passed the RFP to an implementation partner, and we were offered their standard office internet package of 10 megabits per second.

Other providers offered very competitive pricing, in some cases cheaper than the upstream provider (that had also bid) that they used for their underlying service!

FX Networks took a very different approach. They had looked carefully at our proposal and realised that the majority of our traffic was outgoing. Most of their customers were net data consumers, so as a net data provider we could offer some balance on what were under-utilised two-way data circuits.

FX Networks also had direct connections with every other ISP in New Zealand, often in more than one place, and this contributed significantly to the good speed profile of the site.

A good deal was struck, and we stuck with them, and then with Vocus when FX was later acquired.

The cloud was starting to mature over this period, and I would run the numbers every year to see if it was cheap enough to move away from self-hosting. The last time I did this was in 2016 and the cost was still significantly higher. 

There were two main reasons. The first was that RNZ servers were depreciated over five years, and we purchased extended support to take them out to eight years. The only things that ever needed replacing were the odd hard drive (RAID, so hot-swapped) and one motherboard, due to a fault that left its built-in fans running flat-out. We had duplicate set-ups running in Wellington and Auckland, so the impact of a hardware failure in one location was low. We had a contract with Catalyst IT to support the software side of things, keeping things up to date, and so on.

Taking just capital expenditure and depreciation into account, moving to the cloud was more costly. The second reason was bandwidth, which was significantly more expensive in the cloud: more than 10 times our existing fixed monthly cost, and uncapped.
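The shape of that comparison is easy to sketch. The figures below are invented for illustration (they are not RNZ’s actual numbers), but they show why metered, uncapped egress dominates the equation for a site that mostly pushes data out:

```python
# Hypothetical back-of-the-envelope TCO comparison.
# All dollar amounts and data volumes are invented for illustration;
# only the structure (depreciation + support + fixed bandwidth vs
# instances + metered egress) reflects the comparison described above.

def annual_self_hosted(capex, depreciation_years, support_per_year, bandwidth_per_year):
    """Annualised cost of owned servers: straight-line depreciation,
    a support contract, and a fixed-rate internet circuit."""
    return capex / depreciation_years + support_per_year + bandwidth_per_year

def annual_cloud(instances_per_year, egress_per_gb, gb_out_per_year):
    """Annualised cloud cost: instance charges plus metered, uncapped egress."""
    return instances_per_year + egress_per_gb * gb_out_per_year

self_hosted = annual_self_hosted(
    capex=80_000,             # eight servers (hypothetical price)
    depreciation_years=8,     # five years, extended to eight with support
    support_per_year=15_000,
    bandwidth_per_year=12_000,  # fixed monthly circuit cost x 12
)
cloud = annual_cloud(
    instances_per_year=40_000,
    egress_per_gb=0.12,       # metered egress rate (hypothetical)
    gb_out_per_year=1_000_000,  # a streaming-heavy site is a net data producer
)

print(f"self-hosted: ${self_hosted:,.0f}/yr  cloud: ${cloud:,.0f}/yr")
```

With numbers anything like these, egress charges alone dwarf a fixed circuit, which is the trap the 2016 analysis kept flagging.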

The passage of time has since driven down the costs of the cloud, and there are now techniques for better network management and mitigating a lot of the problems that stopped us moving in this direction in 2016. I would still carefully run the numbers though…

In the next part I will look at the design of the site, and how that evolved.
