As forward-thinking tech companies sometimes do, Facebook VP of Technical Operations Jonathan Heiliger has opened up a bit and shared some of the inner workings of Facebook's data center. To serve around 350 million users, who generate some 25 terabytes of data per day, Facebook runs over 30,000 servers and is adding roughly 10,000 more every 18 months. At this scale, monitoring data center performance is a critical operation for the social networking giant, and it has developed its own benchmarking software, called 'Dyno' (after the dynamometer performance car tuners use), to help optimize its applications and squeeze more out of its hardware. Heiliger's team at Facebook has written up a brief whitepaper
on their server monitoring accomplishments, concluding that other data centers would also be wise to weigh Total Cost of Ownership (TCO) before deploying new software or server infrastructure. That's not exactly a world-changing revelation, but publicizing this peek into Facebook's operations burnishes the company's reputation and may signal to hardware vendors where real-world improvements actually pay off.
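The TCO point is easy to make concrete. A minimal sketch of per-server TCO arithmetic follows; all the dollar, wattage, and PUE figures here are hypothetical placeholders for illustration, not numbers from Facebook or the whitepaper:

```python
# Illustrative per-server TCO sketch with hypothetical figures
# (not Facebook's numbers): TCO = hardware cost + electricity over
# the server's service life, scaled by PUE to account for cooling
# and power-distribution overhead.

def server_tco(hardware_cost, watts, years, cents_per_kwh=10, pue=1.5):
    """Rough total cost of ownership for one server, in dollars."""
    kwh = watts / 1000 * 24 * 365 * years   # energy drawn by the server itself
    energy_cost = kwh * pue * cents_per_kwh / 100
    return hardware_cost + energy_cost

# Example: a $2,000 server drawing 300 W over a 3-year service life.
print(round(server_tco(2000, 300, 3)))  # → 3183
```

Even with these made-up numbers, the point stands: over a few years, power and cooling rival the purchase price, which is why software efficiency shows up directly in TCO.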
The downside to this exposure is that when you operate a massive data center, the responsibility to reduce waste grows with the number of servers you run. Environmental concerns arise when inefficient operations are centralized and possibly controlled by a single organization, so Facebook's data centers draw extra scrutiny over programming efficiency, especially when some programmers claim that 75% of Facebook's servers could be eliminated
by switching from PHP to C++
to generate its webpages. Presumably, if such huge savings were really practical, there would be greater demand for more efficient programming tools and for programmers who can use them. Will assembly language programmers come back into fashion, or should all PHP coders really start brushing up on their C++?
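Taking the claim at face value, the arithmetic is easy to run. A quick sketch using the 30,000-server figure cited above (the 75% reduction is the programmers' claim, not a measured result):

```python
# Back-of-the-envelope check of the "75% fewer servers" claim.
# 30,000 is the server count cited above; 0.75 is the claimed
# reduction, not a measured figure.
total_servers = 30_000
claimed_reduction = 0.75

eliminated = int(total_servers * claimed_reduction)
remaining = total_servers - eliminated
# Serving the same load on the remaining machines implies each one
# must do this many times the work, i.e. the implied C++-over-PHP speedup.
implied_speedup = total_servers / remaining

print(eliminated, remaining, implied_speedup)  # → 22500 7500 4.0
```

In other words, the claim amounts to saying C++ page generation would be about 4x more efficient than PHP for Facebook's workload, which is the real number to be skeptical about.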