Posted by: paragon | November 4, 2006

The Secret Behind MediaTemple’s Grid-Server


Aggregation: ParagonHost, LLC (11/04/06)


22 Oct ’06 – 06:14 by benr

There has been a lot of buzz around (mt) MediaTemple’s latest offering this week: (gs) Grid Server. I listened to a podcast at TechCrunch and was really sucked into the marketing speak about the offering. But as a SysAdmin I wanted to know how it worked. The key to the product is that you set up your environment once and it’s “automatically deployed on the grid”, so that even your little site benefits from the collective resources of the grid.

I had to know how it worked. I called MediaTemple but they wouldn’t tell me anything… frankly, I don’t think the guy I talked to even knew. So I bought an account to look for myself and found something very interesting. The real story isn’t MediaTemple’s Grid Server; it’s actually BlueArc.

The secret behind (gs) isn’t revolutionary, but it is clever. Basically they have, at last check, 17 systems running Debian. Thanks to a BlueArc press release I know they bought a Titan around the middle of this year. An HP success story (found via Google) shows they have a relationship with HP, which leads me to believe they are still using HP systems; specifically, based on data from /proc, I think they are using HP ProLiant DL360 G5 servers at 2.00GHz. Interestingly, there are 4 Xeon 2GHz cores but only 2GB of memory per system. There is no local storage; instead, the systems boot a root filesystem via NFS, and user storage is also mounted over NFS.
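
For anyone who wants to repeat this kind of snooping from a shell account, a couple of standard commands get you most of the way; this is a generic sketch, not the exact commands I ran:

```shell
# CPU model and core count, straight from the kernel
grep 'model name' /proc/cpuinfo | sort -u
grep -c '^processor' /proc/cpuinfo

# Total memory
grep MemTotal /proc/meminfo

# Any NFS mounts? (on (gs), root and home both show up here)
mount | grep -i nfs || echo "no NFS mounts visible"
```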

The Grid magic is this: store all user data on NFS so that no matter which system you connect to, you can access the data. Then spread your vhost configuration to all hosts in the “grid”, so that any system can serve your data. This design is highly scalable, because adding a node to the “grid” is trivial, and reliable, because if one system dies, big deal. But it means you need two things to make it work: really good load balancers and really good NFS storage. And by good I mean very reliable and extremely fast.
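
To make that concrete, here is roughly what the per-customer vhost stanza replicated to every node might look like, assuming Apache and an NFS-mounted docroot (the hostname and paths are invented for illustration):

```apache
# Same stanza pushed to every node in the grid; the docroot lives on the
# BlueArc, so whichever node receives the request can serve it.
<VirtualHost *:80>
    ServerName example-customer.com
    DocumentRoot /nfs/home/example-customer/html
    CustomLog /nfs/home/example-customer/logs/access_log combined
</VirtualHost>
```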

And that’s where the BlueArc Titan fits into this story… without the performance it offers, the Grid Server concept just can’t work and becomes a disaster. Putting all user data on the Titan is a big vote of confidence, but putting all the root filesystems on it says something even more telling. No doubt the idea of putting root filesystems on NFS was not to reduce components in the servers but to facilitate provisioning and change management by cloning a “golden root” and rebooting each machine.
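
The “golden root” workflow is worth spelling out, since it’s the part that makes adding node 18 trivial. A minimal sketch, with all paths and hostnames my own invention, demonstrated under /tmp so it can be run anywhere:

```shell
# Stand-in for the NFS-exported master root image
rm -rf /tmp/roots
mkdir -p /tmp/roots/golden/etc
echo 'template' > /tmp/roots/golden/etc/hostname

# Provision a new node: clone the golden root, give the copy its own
# identity, then point the node's NFS-root boot entry at it and reboot
cp -a /tmp/roots/golden /tmp/roots/node18
echo 'node18' > /tmp/roots/node18/etc/hostname

cat /tmp/roots/node18/etc/hostname   # → node18
```

Change management works the same way in reverse: patch the golden root once, re-clone, and reboot each node against the fresh copy.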

I have no idea what load balancer they are using. Apparently whoever it is isn’t putting their name in a press release. In a setup like this I’d only choose to go with F5 BigIP, but who knows. They do have Pound installed on each node but I can’t imagine that they’d spend money on systems and storage but not on load balancers.
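
For what it’s worth, Pound’s configuration for a setup like this is tiny; a minimal sketch (the addresses are invented, and I have no idea how (mt) actually configures theirs):

```
# Listen on the node's public interface, spread requests over backends
ListenHTTP
    Address 0.0.0.0
    Port    80

    Service
        BackEnd
            Address 10.0.0.11
            Port    80
        End
        BackEnd
            Address 10.0.0.12
            Port    80
        End
    End
End
```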

Of course, this leaves one problem, especially if you’re a Ruby on Rails developer: PHP can be served by any host via Apache, but Rails apps use their own webservers (WEBrick, Mongrel, or lighttpd). That’s where the (mt) Containers come in. I’m less sure about how those work, and frankly less interested. Basically you create a little container (64MB in the low-end account) within which you set up your Mongrels, and that then starts the binaries on some number of grid nodes. This applies to any application that requires running binaries, so Java developers aren’t welcome (until they design Tomcat/Geronimo containers). If you’re a developer, look before you leap: (gs) might be great for static content and Apache CGI, but otherwise look elsewhere. These are by no means to be confused with real containers, or with what many call “Virtual Private Servers” (VPS) or even “Virtual Dedicated Servers”.

Back to BlueArc, the real story here: I’m impressed that (mt) trusted their solution to them. It’s a testament to the reputation BlueArc is building in the industry. I am a little curious about the configuration in terms of performance, because with 8K blocks I measured 102MB/s in a TextDrive Container (NFS on Thumper) vs 72MB/s in a MediaTemple Grid Server (NFS on Titan). Shocked, actually; I would expect the BlueArc to blow away Thumper, but I’m withholding judgement for now. What I’ll be watching is how the performance changes over time, as (mt) moves more customers (new and old) onto the “grid”. If I do a benchmark in 6 months, will I see the same performance or reduced? When there is maintenance or a failure on the Titan (unlikely as that might be), will it take down the entire site? It shouldn’t, of course, but that depends on whether (mt) bought a redundant configuration. In short, the fate of (mt) rests squarely on that device… let’s see how things go.
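
For the curious, those throughput numbers came from a plain sequential-write test, something along these lines (the file name is arbitrary, and this version writes to /tmp rather than an NFS-mounted home directory):

```shell
# Write 100 MiB in 8 KB blocks and let dd report the throughput.
# On a (gs) account the target file would live on the NFS home dir.
dd if=/dev/zero of=/tmp/bench.dat bs=8k count=12800 conv=fsync 2> /tmp/bench.log
tail -1 /tmp/bench.log
rm -f /tmp/bench.dat
```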

– – C O M M E N T S – –
hardware load balancers: eh, the problem there is often configuration. shuffling http requests to the least-loaded of several identical http servers is a first-year programming problem, so it’s been solved many times. proprietary “black boxes” often make it difficult to use more complex configurations of which requests go where. That, and ‘black boxes’ often like to charge a per-host fee, which gets expensive fast.
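
The “first-year programming problem” version really is about this small; a toy least-connections pick over made-up backend connection counts:

```shell
# Pick the backend with the fewest active connections (counts invented)
printf 'web1 12\nweb2 4\nweb3 9\n' |
  sort -k2 -n | head -1 | awk '{print $1}'   # → web2
```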

As for the performance of the Titan – I don’t want to talk shit about a product I haven’t used, but NFS in general is, well, kinda mediocre. really, you can’t even touch 1G FC speeds until you use 10G ethernet, and that costs real money. Now, there are some very old implementations, and you can hire any SysAdmin off the street and he will be able to configure NFS, but there are many other SAN technologies that would enable similar clustering, and most of them offer better performance than NFS. The advantage of NFS is not performance – it’s stability and interoperability of the implementations.

Because of this simplicity, if MT’s NFS appliance suddenly breaks, it really won’t be that hard to restore from backup onto a couple of external SCSI arrays and serve with FreeBSD boxes. NFS is NFS – you’d take a speed hit but it would mostly work.

I’m just some guy with a couple used brocades, so my opinion is probably worth about what you paid for it, but that’s what I think.

luke crawford (Email) (URL) – 22 October ’06 – 23:28

also, how do you do disk benchmarks? I’m a scsi nazi, so of course, I think the dd metric is completely useless for most things. (It’s nice when it comes time to restore from backup, but in my world, sequential access is something of a myth.) but that might be mostly because with the dd metric any dork with a couple sata drives can keep up with me; I look much better when it comes to measuring latency on heavily parallel loads.

luke crawford (Email) (URL) – 22 October ’06 – 23:33

Something that comes to my mind is how many spindles they have behind the BlueArc. If it’s not equivalent to the 48 spindles y’all have on a Thumper, then that would explain the performance lag despite the FPGA-based controller on the BlueArc.

Jason Williams (Email) – 23 October ’06 – 23:28

BlueArc’s performance and reliability do seem impressive, but what MT is doing is really no different than what many sites have been doing for years with multiple identical hosts connected to just about any modern disk array with a cluster filesystem. I designed and implemented one such filesystem in 1999-2001, and it wasn’t the only one or even the first. I was at a site using another just two weeks ago.

Basically NFS is only one way to present a shared coherent filesystem to multiple hosts, and not even the best way. It’s just the easiest way for many people because they understand NFS and LANs better than cluster filesystems and SANs.

Platypus (Email) (URL) – 24 October ’06 – 07:17

Thanks for this detailed post. I was considering a (gs) account but now I’m not so sure. Waiting for MediaTemple to whip together “containers” for every new binary server platform doesn’t sound like it’s worth it.

Alex Payne (Email) (URL) – 24 October ’06 – 10:42

My company has been offering a similar service for years; it’s a nice public relations play.

Dataracks (Email) (URL) – 29 October ’06 – 09:59

There’s a thread on WHT about it and the issues customers have seen.


Larry Ludwig (Email) (URL) – 29 October ’06 – 17:08

BlueArc had a headache yesterday:

and shit in MT’s new cadillac. 🙂

taco (Email) – 29 October ’06 – 18:10

First of all, I still think it’s a nice offer, but:
so there are 17 hosts? at times I’ve been running a lot more compute power at home – I don’t know why anyone is even discussing the offer, unless it’s technically brilliant.

I like the BlueArc part, but obviously (see last post) they lack the competence to run it. You don’t want to be late applying patches against data loss, you shouldn’t need to summon onsite techs to apply patches, and you especially don’t want to be late on more than one patch like they were.

BUT I do like the bit about the BlueArc system noticing the fault. That’s what I’d like to know more about. Seems BlueArc is safe; (mt) is just what it looks like: a little inexperienced ISP.

darkfader – 31 October ’06 – 13:56


  1. Check out the recent chit chat on Tech Crunch:

    Media Temple launched a major new hosting service this morning called Grid Server. It matches low end shared hosting services in pricing ($20/month) but promises to grow along with the site, manage huge short term traffic spikes without a disruption in service or performance and avoid the “bad neighbor” problem common with shared hosting services. The basic $20 package includes 100 GB of storage, 1 TB of bandwidth and up to 100 individual sites.

    I spoke to the Grid Server team yesterday. The podcast of the conversation is up at TalkCrunch.

    Media Temple’s Grid-Server is a completely new hosting platform that replaces yesterday’s obsolete shared server technology. We’ve eliminated roadblocks and single points of failure by using hundreds of servers working in tandem for your site, applications, and email. The Grid’s on-demand scalability means you’ll always be ready for intense bursts of traffic; and the growing audience resulting from your online success. All of this power, controlled through our brand new AccountCenter, is available today for a price point unmatched by any competing service.

    Customer sites are not hosted on a single (dedicated, shared or virtual) machine. Instead, they are managed by hundreds of clustered servers, and Media Temple monitors the health of the entire grid as well as individual sites. If a site spikes in traffic, performance is unaffected and the site owner will simply be charged for overage on bandwidth and CPU usage. If the grid begins to get stressed, Media Temple simply adds more machines.

    Overage pricing hasn’t been put up on the site at the time of writing this post (and it’s important of course), although the company says that the basic package specs compare very favorably with low end dedicated server hosting at $200/month.

    They’ve also added a number of other features to make hosting setup and maintenance as easy as possible for the novice, including one-click setup of WordPress, Drupal, Gallery, ZenCart and other applications.

    Mosso (part of Rackspace) is an existing competing service that is comparable to much of what Media Temple is doing with Grid Server; however, Mosso starts at 5x the price, $100/month. The basic Mosso package offers slightly less storage and twice the bandwidth offered by Grid Server.

    Grid Server can also be compared to Amazon’s new EC2 utility computing service, which we discussed in the podcast. The Media Temple team was quick to point out that EC2 isn’t really designed to deal with permanent virtual server configurations, and lacks customer service and the auto burst capabilities of Grid Server.

    As a disclosure, we use Media Temple for some of our hosting (we have a couple of dedicated servers with them). Frankly Grid Server may be a better choice for us. We have a ton of excess capacity to handle traffic spikes, which we pay for whether or not we use.

    This entry was posted on Tuesday, October 17th, 2006 at 8:00 am and is filed under Company & Product Profiles.

  2. Podcast via TechCrunch:

    *** Check out the podcast via TechCrunch at the above URL




  3. Above podcast with the following MT principals:

    Michael Arrington spoke with Demian Sellfors (CEO), Chris Leah (Director of Technology), Alex Capehart (Director Marketing) and David Feinberg (Product Manager) for 30 minutes yesterday about the new product. The podcast is enclosed.

    Download the podcast here….

    [audio src="" /]

  4. Media Temple’s Grid Platform appears to use ’s mail services…

    Click the demo and select “Simple Ajax” as the interface… The IE 6 Advanced interface blows… it uses pop-ups for everything, but the Ajax development is not bad:



  5. *** As noted above,

    Keep on top of the thread at about Media Temple’s Grid Platform service offerings…

    Here’s what clients are saying about the service:

    Interesting info.



