We do apologize for the extended maintenance – no one enjoys having the site down, we’re users too! Some of our backend storage acted cranky, pushing us past our original time budget. I would like to note that this maintenance window was particularly satisfying for the Site-Ops team, marking the end of a long cage move…
The best analogy for servers and their location tends to be housing. A shared server is about the same as sharing a locker at the gym – usually temporary, of questionable cleanliness, and very cheap. The next step up would likely be a dedicated server, similar to renting a room in a house with roommates. The downside here is the lack of control over anything outside the physical server – the network routing, power infrastructure, backend storage, etc. After filling enough physical real estate with servers, routers and storage, folks tend to graduate to a “cage”. These are private suites within a colocation facility that allow the tenant to customize the layout, power, cooling and network connections. VideoBox has lived in various cages over the years – Wednesday afternoon finalized our most recent move, up a single floor!
Unlike moving homes or even office buildings, large website moves are extremely complicated. In this case, planning began in the summer of 2007, starting with contract renegotiations. The last 2 months have seen a number of slow and methodical changes, particularly in the last 3 weeks: growth work such as more racks, additional network capacity, more storage and numerous configuration changes to allow more redundancy in our systems. Most of this work was done in the background, with minimal user disruption.
A great example of the talent at VideoBox is software deployments. On average the development team deploys new code every two weeks, constantly fixing bugs and adding features (sometimes mixing them up too). These changes are done 90% of the time with no disruption to downloads or site navigation. This is extremely rare for high-traffic web sites, whose systems usually require complete service outages for upgrades.
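For the curious, one common way sites pull off deploys without an outage is the atomic symlink swap: the new code is staged in its own directory, then a “current” symlink is renamed over in a single atomic step. This is a generic sketch of that technique – the paths and layout here are made up for illustration, not a description of VideoBox’s actual tooling (it uses a scratch directory so the sketch runs anywhere):

```shell
set -eu

# Scratch root so the sketch is runnable anywhere; a real deploy
# would use something like an app root on the web servers.
BASE=$(mktemp -d)
RELEASE="$BASE/releases/$(date +%Y%m%d%H%M%S)"

# Stage the new code in its own timestamped directory.
mkdir -p "$RELEASE"
# ... unpack/rsync the new code into "$RELEASE" here ...

# Point a temporary symlink at the new release, then rename it over
# "current". rename(2) is atomic, so in-flight requests always see
# either the old tree or the new one – never a half-copied mix.
ln -sfn "$RELEASE" "$BASE/current.tmp"
mv -T "$BASE/current.tmp" "$BASE/current"   # -T is GNU mv: treat dest as a plain file

echo "live release: $(readlink "$BASE/current")"
```

Rolling back is the same trick in reverse – repoint the symlink at the previous release directory – which is a big part of why this pattern is popular.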
Both the Site-Ops and Development teams always love to hear the geek shout-outs and half-cocked theories on what software we use, how big our tubes are or which vendors we prefer. The public-facing servers are Linux, with some BSD systems thrown in for good measure (Canadians are known to be tight security dweebs too). Both MySQL and PostgreSQL are utilized, each with its own benefits. As for the small hand-crank lift, those come in handy when one’s storage systems pack 40+ hard drives in a single chassis.
Out of curiosity, what other sorts of technical, geeky posts would you like to see? Cheers.