Friday, March 7, 2008

speed saves

So thinking about web projects, I'm curious what people actually do with their hardware. From a development point of view, most developers kinda throw it over the wall to the operations folks to make it go fast. Most of the architects I know give some basic consideration to making a solution scalable, but few look at it as carving more cycles out of the existing hardware versus throwing more money and hardware at the situation. Is this endemic to all large companies? By comparison, the Web companies I read about have grown out of very, very cheap servers with some great strategies for increasing capacity, decreasing latency, and keeping costs down. Google's tech talk on Dapper, their RPC monitoring system, shone a light on how their site has grown over time: from a set of borrowed machines in 1997 to warehouses of servers today.

At work our standard is a sub-6-second response time for most apps. Google would not discuss their threshold for response time, but the impression is that data needed to leave the datacenter in under 1 second, and all of their development was geared toward that goal.
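To make those budgets concrete, here's a minimal sketch of how you might check a handler against a latency budget. The budget values and the `within_budget` helper are my own illustration, not anything from the Dapper talk:

```python
import time

# Hypothetical budgets, in seconds (assumed values for illustration).
INTERNAL_BUDGET = 6.0   # the sub-6-second internal-app target
DATACENTER_BUDGET = 1.0  # the rumored sub-1-second target

def within_budget(handler, budget):
    """Run a request handler and report whether it met its latency budget."""
    start = time.perf_counter()
    result = handler()
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed < budget

# Stand-in for a real request handler.
def slow_page():
    time.sleep(0.05)  # simulate 50 ms of work
    return "<html>ok</html>"

body, elapsed, ok = within_budget(slow_page, DATACENTER_BUDGET)
print(ok)  # a 50 ms handler easily fits a 1-second budget
```

Real systems like Dapper do this across RPC boundaries rather than around a single function, but the idea is the same: instrument first, then decide where the cycles are going.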

I think I need to spend a bit more time figuring out guidelines for better performance. I should restate that. I need to study the existing performance guidelines and learn to apply them better to my work.
