Servers for the Post-Moore Era
Server architecture is entering an age of heterogeneity, as silicon performance scaling approaches its limits and economies of scale enable cost-effective deployment of custom silicon in the datacenter. Traditionally, customized components have been deployed as discrete expansion boards, a choice that reduces cost and design complexity while ensuring compatibility with rigidly designed CPU silicon and its surrounding infrastructure. A prime example of this pattern is the tethering of today’s Remote Direct Memory Access (RDMA) network interface cards to commodity PCIe interconnects.
Although using a commodity I/O interconnect has enabled RDMA to be deployed at large scale in today’s datacenters, our prior work on the “Scale-Out NUMA” project has shown that judiciously integrating architectural support directly on the CPU silicon provides significant benefits: integration affords lower RDMA latency and enables richer operations, such as atomic accesses to software objects and remote procedure calls.
The HARNESS project therefore aims to co-design server silicon with software to support the performance-critical primitives of the datacenter, in particular those pertaining to networked systems and storage stacks.