Server CPU Performance Pre and Post Cloud Computing – [Infographic]

Remember When CPU Performance was Guaranteed?

Since the launch of the PC in the early 1980’s, CPU performance has been defined by the clock speed and front-side bus of the processor. It was simple and easy to understand, and Intel and AMD battled through the MHz and GHz wars on those terms for years.

Server CPU Speeds

This infographic looks at server-class CPUs from the Pentium era of 1993 to today. Along the way, new technologies transformed the way we look at CPU performance. Beginning with the advent of x86 server virtualization in 2001, VMware and other virtualization technologies allowed multiple operating systems, and their related processes, to run in parallel on a single CPU. Instead of relying on the old model of “one server, one application,” which led to under-utilized resources, virtual resources could be dynamically applied to meet business needs without any excess “fat.”

In 2006, the compute world changed again with the introduction of Amazon’s EC2 and Cloud Computing 1.0. Amazon redefined CPU capacity in terms of an abstract unit of its own making (the EC2 Compute Unit), making it even harder for customers to know what they are actually getting.

Cloud Computing 2.0, introduced in 2012, restores transparency and consistency to CPU performance. This second-generation Cloud offers dedicated CPU cores (up to 62 per virtual machine) and dedicated RAM (up to 240 GB per virtual machine), ushering in an era of flexible, granular CPU/RAM configuration on a server-by-server basis.
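For illustration, here is a minimal Python sketch of what granular, per-server CPU/RAM configuration looks like in practice. The class, the limit checks, and the provision() stub are hypothetical and do not represent any particular provider’s API; only the 62-core and 240 GB ceilings come from the figures above.

# A minimal sketch (not a real provider API) of per-server CPU/RAM sizing.
# The VirtualServer class and provision() stub are hypothetical.

from dataclasses import dataclass

MAX_CORES = 62    # dedicated cores per virtual machine (per the text above)
MAX_RAM_GB = 240  # dedicated RAM per virtual machine (per the text above)

@dataclass
class VirtualServer:
    name: str
    cores: int   # dedicated CPU cores, chosen per server
    ram_gb: int  # dedicated RAM in GB, chosen per server

    def __post_init__(self):
        if not 1 <= self.cores <= MAX_CORES:
            raise ValueError(f"cores must be between 1 and {MAX_CORES}")
        if not 1 <= self.ram_gb <= MAX_RAM_GB:
            raise ValueError(f"ram_gb must be between 1 and {MAX_RAM_GB}")

def provision(server: VirtualServer) -> None:
    """Stand-in for a real provisioning call; here it only prints the request."""
    print(f"Provisioning '{server.name}': {server.cores} cores, {server.ram_gb} GB RAM")

# Each server gets its own CPU/RAM mix rather than a fixed instance size.
provision(VirtualServer("web-frontend", cores=4, ram_gb=8))
provision(VirtualServer("database", cores=16, ram_gb=96))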

The infographic below takes a historical look at the performance of server CPU chips.

ProfitBricks Infographic: CPU to Cloud Computing


A full-size version of this infographic, CPU to Cloud, is available here.
