Author Archive

Evan Stone LOVES VideoBox

Walking down the hallway out of the convention center, we spotted Evan Stone chatting with fans. As soon as he realized the camera was on him, he launched into this:

See Dee N take 3


We’re on our third CDN trial; as noted before, this one is focused on improving image load times across the site. As usual, testing was done before flipping the switch. We’re particularly interested in whether European users are seeing faster page loads with this change. Please comment with your feedback.

As mentioned in the last trial post, we’ve had a number of well-known image-delivery optimizations in place for some time now. These include spreading the load across four hostnames (as commenter dweazy pointed out), setting proper HTTP cache response headers, and stripping JPG metadata comments.
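
For the curious, here’s a rough Python sketch of the hostname-spreading idea. The hostnames, paths and header values are made up for illustration – not our actual config – but hashing the image path is what keeps a given image pinned to one hostname, so browser caching still works while requests spread across all four.

    import hashlib

    # Hypothetical image hostnames -- illustration only, not the real config.
    IMAGE_HOSTS = ["img1.example.com", "img2.example.com",
                   "img3.example.com", "img4.example.com"]

    def sharded_image_url(path):
        """Hash the image path to pick one of four hostnames, so the same
        image always maps to the same host and stays cacheable."""
        digest = hashlib.md5(path.encode("utf-8")).hexdigest()
        host = IMAGE_HOSTS[int(digest, 16) % len(IMAGE_HOSTS)]
        return "http://%s%s" % (host, path)

    # Long-lived cache headers of the sort we set on image responses.
    CACHE_HEADERS = {
        "Cache-Control": "public, max-age=2592000",  # 30 days
        "Expires": "Thu, 31 Dec 2037 23:55:55 GMT",
    }

    print(sharded_image_url("/galleries/1234/thumb_01.jpg"))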

Securing Them Gigglebits

Today we’re launching SSL (HTTPS) support for a number of pages on the site, including the login page, the change-password form and other user-specific sections. You’re probably familiar with the “lock” or secure icon at the bottom of your browser window. This means the data transmitted is encrypted from your computer all the way to our web servers, so folks in your office, dorm or other shared environments can’t “sniff” your login information. This is especially valuable if you’re browsing on unsecured WiFi or other shared networks.
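
For the curious, here’s a minimal Python/WSGI-style sketch of the general idea – bounce any plain-HTTP request for a sensitive page over to its HTTPS twin. The paths and structure are hypothetical, not a description of our actual stack.

    # Hypothetical sensitive paths -- illustration only, not our real URL layout.
    SECURE_PREFIXES = ("/login", "/account", "/change_password")

    def force_https(app):
        """Wrap a WSGI app so plain-HTTP requests for sensitive pages
        get a 301 redirect to the HTTPS version of the same URL."""
        def middleware(environ, start_response):
            path = environ.get("PATH_INFO", "")
            if environ.get("wsgi.url_scheme") == "http" and path.startswith(SECURE_PREFIXES):
                location = "https://" + environ.get("HTTP_HOST", "") + path
                start_response("301 Moved Permanently", [("Location", location)])
                return [b""]
            return app(environ, start_response)
        return middleware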

For clarification, our billing partners have always received your credit card billing information in this fashion. Extensive testing has been done for the past few weeks to minimize problems. As always, please escalate any issues via the tech support request form.

P.S. I’m sure the uber geeks will point out that HTTPS doesn’t prevent MITM (man-in-the-middle) attacks. You pervs!


Riding ‘Em Lightly

The Site-Ops and Development teams are always looking at ways to improve site performance. Most of these engineering efforts happen “behind the scenes”. For the next two weeks, however, we’re test-driving a very user-facing change: CDN’ing our images.

CDNs are Content Delivery Networks: these companies place thousands of servers in geographically diverse locations around the Interwebs. The idea is that those servers can dish out images and other navigation elements (such as CSS or JavaScript files) faster than our single colo in San Francisco. Our goal in this experiment is to decrease page load time for members both domestically and internationally.
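
If you want to eyeball the difference yourself, a quick-and-dirty Python timing script looks something like this. The URLs are placeholders – the point is simply fetching the same image from our colo and from the CDN hostname and comparing the clock.

    import time
    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2

    # Placeholder URLs: the same image served from our colo vs. a CDN edge.
    URLS = {
        "origin": "http://www.example.com/images/sample_thumb.jpg",
        "cdn":    "http://cdn.example.com/images/sample_thumb.jpg",
    }

    for label, url in sorted(URLS.items()):
        start = time.time()
        size = len(urlopen(url).read())
        print("%-6s %7d bytes in %.3f s" % (label, size, time.time() - start))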

Please feel free to leave a comment if you’re noticing faster or slower load times, along with the name of your ISP and your location. Thanks!

Nerd Stuff

We do apologize for the extended maintenance; no one enjoys having the site down – we’re users too! Some of our backend storage acted cranky, pushing us past our original time budget. I would like to note that this maintenance window was particularly satisfying for the Site-Ops team, as it marks the end of a long cage move…

The best analogy for servers and where they live tends to be housing. A shared server is about the same as sharing a locker at the gym – usually temporary, of unknown sanitary standards and very cheap. The next step up is a dedicated server, similar to renting a room in a house with roommates. The downside here is the lack of control over anything outside the physical server – the network routing, power infrastructure, backend storage, etc. After hogging up enough physical real estate with servers, routers and storage, folks tend to graduate to a “cage”: a private suite within a colocation facility that lets the tenant customize the layout, power, cooling and network connections. VideoBox has lived in various cages over the years – Wednesday afternoon finalized our most recent move, up a single floor!

Unlike moving homes or even office buildings, large website moves are extremely complicated. In this case, planning began in the summer of 2007, starting with contract renegotiation. The last two months have seen a number of slow and methodical changes, particularly over the last three weeks: growth support such as more racks, additional network capacity, more storage and numerous configuration changes to add redundancy to our systems. Most of this work was done in the background, with minimal user disruption.

A great example of the talent at VideoBox is software deployments. On average the development team deploys new code every two weeks, constantly fixing bugs and adding features (and sometimes mixing the two up). These changes are made 90% of the time with no disruption to downloads or site navigation. That’s extremely rare for high-traffic web sites, whose systems usually require complete service outages for upgrades.
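
I won’t spill the details of our deploy tooling, but one common trick for no-disruption releases (not necessarily what we do) is the atomic symlink swap: unpack the new code alongside the old, then repoint a single link. A rough Python sketch with made-up paths:

    import os

    RELEASES_DIR = "/var/www/releases"     # made-up layout for illustration
    CURRENT_LINK = "/var/www/current"      # web servers serve whatever this points at

    def activate_release(release_name):
        """Repoint the 'current' symlink at a new release directory.
        os.rename() is atomic on POSIX, so requests never see a half-deployed site."""
        target = os.path.join(RELEASES_DIR, release_name)
        tmp_link = CURRENT_LINK + ".tmp"
        os.symlink(target, tmp_link)       # stage the new link beside the old one
        os.rename(tmp_link, CURRENT_LINK)  # atomically replace the old link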

Both the Site-Ops and Development teams always love to hear the geek shout-outs and half-cocked theories about what software we use, how big our tubes are or which vendors we prefer. The public-facing servers are Linux, with some BSD systems thrown in for good measure (Canadians are known to be tight security dweebs too). Both MySQL and PostgreSQL are in use, each with its own benefits. As for the small hand-crank lift, those come in handy when your storage systems pack 40+ hard drives into a single chassis.

Out of curiosity, what other sorts of technical geeky posts would you like to see? Cheers.

(Another) Planned Site Outage Today: 10:30am PST

The Site-Ops team is completing the final phase of database movement today. To facilitate this we’ve decided to put the site into maintenance mode for the duration of the move, scheduled from 10:30am until noon PST. The window is longer than we expect the tasks to take. We apologize for any inconvenience this may cause.

For our fellow geeks out there, here are a few teaser shots:
[Photos: internets.jpg, internets2.jpg, internets3.jpg, internets4.jpg]

Planned Site Outage Today: 10:30am PST

The Site-Ops team is moving one of our databases today. To facilitate this we’ve decided to put the site into maintenance mode for the duration of the move. Estimated outage time is 30 minutes or less, beginning at approximately 10:30am PST. We apologize for any inconvenience this may cause.

Bandwidth Update

[Graph: 20071105_outage.png]
This morning we experienced an outage with one of our network providers. During this time a small portion of downloads were slower than our average speed – as a rule of thumb, we shoot for at least 250Kb/s for any single customer download. Our backup ISPs worked well during this window. The circuit in question was down from 10:00am to 3:32pm PST, just a little over 5½ hours.
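
To put that rule of thumb in perspective, here’s a back-of-the-envelope Python snippet. The file size is a made-up example, and it prints both readings since “250Kb/s” gets interpreted as kilobytes or kilobits depending on who you ask.

    # Hypothetical full-length download; both interpretations of the 250Kb/s floor.
    size_mb = 700.0
    size_bytes = size_mb * 1024 * 1024

    for label, bytes_per_sec in (("250 KB/s (kilobytes)", 250 * 1024),
                                 ("250 kb/s (kilobits)", 250 * 1000 / 8.0)):
        minutes = size_bytes / bytes_per_sec / 60
        print("%-22s -> %6.1f minutes for a %.0f MB file" % (label, minutes, size_mb))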

Typically with a large bandwidth commitment, say in the >100Mb/s range, providers will issue an “RFO” or Reason For Outage whenever downtime occurs… AKA “How we fucked up”. I’m sure we all wish this sort of contractual obligation existed for our home DSL or cable connections. We consider such a lengthy outage unacceptable and will be grilling the provider during a conference call scheduled for tomorrow. Apologies for the degradation, have a good night and resume that downloading.