
Under the hood at radionz.co.nz

A couple of weeks ago I pushed the button on the new radionz.co.nz, and since then I have had a string of questions about what tech is behind the site. (Yes Rowan, I did play that song.)

This post will answer those questions.

Hardware

We have eight 16-core HP servers, each with 16 GB of RAM and 15k RPM discs. The discs are in RAID 1+0 for performance, with hot spares for extra resilience. Cold spares are kept on-site for quick replacement.

The servers are split between hosting facilities in Wellington and Auckland. One set is live and the other is on standby, and radionz.co.nz and thewireless.co.nz can each run in either location independently.

Both sites have gigabit fibre, which allows us to easily meet traffic demands at peak times.

We do not use a CDN. Everything is hosted in New Zealand as 85% of traffic is local, and I want the best performance for our primary audience. Even for international audiences, our site still loads faster than other NZ media websites.

Why aren’t we in the cloud? At this time we have fixed-term contracts for hardware support and bandwidth that are significantly cheaper than equivalent cloud offerings. Our requirements for geographically diverse hosting, low-latency paths to our primary audience (NZ), and high peak traffic capacity have so far tipped the scales in favour of continuing our current hosting regime. Our hosting and bandwidth costs have actually gone down slightly over the last six years, while we now do six times the page impressions and ten times the number of images served, so no complaints from me on that score.

Software

The entire stack is Free and Open Source software. Not free as in cost, free as in freedom to modify.

Ubuntu is the operating system of choice, and we use virtualisation to allow easier management of machines in the cluster as a whole. Catalyst IT is responsible for the day-to-day upkeep of our core systems, including monitoring, patching, and failover.

At the front end we use Varnish, a high-speed cache. A few years ago RNZ part-funded a feature we wanted (asynchronous grace mode), and this helps keep things fast for visitors to the site. We also have a micro-caching regime for pages and parts of pages, which reduces build times for pages that are stale in Varnish.
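
As a rough illustration (not ELF’s actual code), micro caching in Rails can be as simple as wrapping the expensive part of a page in a short-lived cache fetch, so that even when Varnish has to rebuild a stale page, the slow pieces come straight from the cache. The method, key, and partial names here are illustrative only:

    # Minimal sketch, assuming Rails fragment caching backed by a shared cache store.
    # The fragment is rebuilt at most once every 30 seconds.
    def recent_audio_html
      Rails.cache.fetch('home/recent-audio', expires_in: 30.seconds) do
        render_to_string(partial: 'audio/item', collection: recent_audio_items)
      end
    end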

The CMS is bespoke. Built using Ruby on Rails, it is known as ELF, and I have written about its development on this blog in the past. It started out on Rails 2.3.5 in 2010, and 13,000 code commits later we are on the latest version of Rails. Fifteen people have contributed code to ELF in that time; the biggest contributor is me, with 9,500 commits and over 1 million lines of code changed.

The benefits of having our own CMS are significant, and of particular importance to a media company. We can quickly adapt the system to new requirements, and we can fine-tune its performance to meet our specific needs. Having three radio stations, National, Concert, and International, can result in very peaky traffic. When Kathryn Ryan says ‘go and look at the pictures on our website’ to her 250,000+ listeners, that’s both an opportunity and a challenge. Our average traffic figures don’t reflect the peaks we can get on an hourly basis.

This flexibility has also allowed us to fine-tune our technical SEO to a very high degree. We use standards-compliant semantic markup, Open Graph, Schema.org, and sitemaps (static and news-only) extensively to ensure social sites and search engines get the best possible information. This is constantly under review in order to keep pace with significant market changes.
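
To give a sense of what that markup involves, here is a hedged sketch of a Rails helper emitting Open Graph tags and Schema.org NewsArticle JSON-LD for a story page. The helper, model, and field names are assumptions for illustration, not ELF’s actual code:

    # Hypothetical helper sketch; 'story' and its fields are assumed names.
    module MetaTagsHelper
      def open_graph_tags(story)
        safe_join([
          tag.meta(property: 'og:title',       content: story.title),
          tag.meta(property: 'og:description', content: story.summary),
          tag.meta(property: 'og:image',       content: story.image_url),
          tag.meta(property: 'og:type',        content: 'article')
        ], "\n")
      end

      def news_article_json_ld(story)
        {
          '@context'      => 'https://schema.org',
          '@type'         => 'NewsArticle',
          'headline'      => story.title,
          'datePublished' => story.published_at.iso8601,
          'image'         => [story.image_url]
        }.to_json
      end
    end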

The search technology is Apache Solr and we use the Percona fork of MySQL. We might change to Postgres at some stage, mostly because that’s what we use for The Wireless and a number of internal RNZ systems.
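
Talking to Solr from Ruby is straightforward. A hedged sketch using the rsolr gem follows; whether ELF uses rsolr directly or a higher-level library is an assumption, and the core and field names are illustrative:

    require 'rsolr'

    # Connect to an assumed 'news' core and run a basic keyword query.
    solr = RSolr.connect(url: 'http://localhost:8983/solr/news')
    response = solr.get('select', params: { q: 'title_text:budget', rows: 10 })
    response['response']['docs'].each { |doc| puts doc['id'] }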

We use Git for code management.

My overall approach is to balance flexibility, cost, risk, and performance, and I do strongly believe the choice of software has had a huge impact on my ability to keep things running effectively and efficiently. Enough of the self-promotion.

The New Site

The new site was built alongside the old, using exactly the same technology.

The admin section of the site is independent from the front-end, which made it simple to update alongside the existing public site.

The new design was developed on a separate code branch and tested on a staging server. Any changes to the core of the system (admin, new assets required for the new site) were cherry-picked into our production branch and deployed immediately. This meant that new assets (such as banner images and branding) could be uploaded into our admin section prior to the launch. (The staging server ran on a clone of the production server’s data.)

On the day, it would then be a simple matter (and it was) to deploy the stable head of our development branch into production. This is my standard plan for big changes as it’s faster to roll back – I simply deploy the old production code.
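
In git terms, the kind of flow described above looks roughly like the sketch below; the branch names and the deploy command are assumptions for illustration rather than our actual tooling:

    # Day-to-day: shared fixes go from the redesign branch to production immediately.
    git checkout production
    git cherry-pick <sha-of-admin-change>
    cap production deploy        # assumed deploy command; substitute your own

    # Launch day: fast-forward production to the stable head of the redesign branch.
    git checkout production
    git merge --ff-only redesign
    cap production deploy

    # Rolling back is just deploying the previous production commit again.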

To test this I deployed the current production code to our staging server, and then the new development branch over that. The switch between the two was completely seamless, confirming we’d be able to do the same on our production code.

The search was also updated along with the new design. Actually, the major changes – about ten lines of configuration code – were made two weeks prior, and the search was reindexed. I changed only one line of config after launch, as it turns out people favour currency over pure relevance in their results. This was another advantage of having our own system – it is faster to evaluate the impact of changes, and faster to make changes in response to feedback. And before you ask, yes, we are going to move to Elasticsearch in the future, mostly because it is simpler to manage across multiple sites.
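
For the curious, a currency tweak like that can be a single boost function on the query. Reusing the rsolr connection from the earlier sketch, the edismax ‘bf’ parameter and the ‘published_at’ field below are assumptions for illustration, not our actual configuration:

    # Decay scores with document age so fresher results rank higher.
    solr.get('select', params: {
      q:       'election',
      defType: 'edismax',
      bf:      'recip(ms(NOW,published_at),3.16e-11,1,1)'
    })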

There were a few performance regressions on the day of launch, but nothing that impacted the public’s view of the site, and nothing we were not able to solve (or mitigate) on the day. And obviously we will keep tuning things as the new design beds in.

There is still room for much growth.

Releases of new code have continued at pace since the relaunch. In the 10 days since launch I have deployed 90 updates.

We’ve had a lot of feedback from the public, and there are still some issues to fix. But the team is working hard through the list, and there is some more exciting stuff to come, so stay tuned here and keep visiting radionz.co.nz.
