Traditional Data Processing at a Crossroads
The exponential growth of data and the need to quickly process and act on that information are stretching traditional computing infrastructures beyond their limits. As a result, enterprises are struggling to keep up and meet customer demand for improved performance, functionality, and security.
Data processing has typically required multiple copy operations to place data in locations for processing and analysis. As datasets grow, copy operations consume significant processor bandwidth and create processing latencies. These latencies increase with added CPU cores, load/store demands, and increased loaded request depths at the memory controller.
Overprovisioning CPU-attached memory for worst-case workloads drives up server cost, and those resources often sit underutilized, wasting data center space and increasing deployment costs. Underprovisioning, on the other hand, can mean reduced data processing performance as workloads fall back to slow storage elements. In other words, you spend a lot of money only to end up with an underperforming data center.
Gen-Z: Scaling Beyond the Limits
In a standard box-to-box configuration, scaling is limited by the distance a signal can travel effectively and the amount of data that can be received. A Gen-Z memory fabric can span multiple racks and rows, allowing a substantial number of devices to be connected for data scaling and processing.
IntelliProp’s fabric implementation – which consists of a Gen-Z host bridge, a Gen-Z switch, and Gen-Z memory modules – features shared memory operating between server nodes. Database operations run on data residing in the modules, and fabric management is used to configure the Gen-Z fabric resources.
With data residing in a single location, performance challenges are reduced: multiple-copy operations are eliminated, the allocated servers and accelerators can rapidly access the same data, and server configuration can be managed precisely. Memory is allocated on an “as-needed” basis, avoiding both the financial cost of overprovisioning and the slow performance of underprovisioning.
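To make the provisioning trade-off concrete, here is a toy sketch (not IntelliProp’s implementation – all names and sizes are hypothetical) contrasting statically overprovisioning every node for its worst case with drawing memory on demand from a shared fabric-attached pool:

```python
# Toy model of "as-needed" allocation from a shared fabric memory pool.
class FabricMemoryPool:
    def __init__(self, total_gib: int):
        self.total_gib = total_gib
        self.allocations = {}  # node name -> GiB currently allocated

    def allocate(self, node: str, gib: int) -> bool:
        """Grant memory from the shared pool if capacity remains."""
        if self.used() + gib > self.total_gib:
            return False
        self.allocations[node] = self.allocations.get(node, 0) + gib
        return True

    def release(self, node: str, gib: int) -> None:
        """Return memory to the pool when a workload finishes."""
        self.allocations[node] = max(0, self.allocations.get(node, 0) - gib)

    def used(self) -> int:
        return sum(self.allocations.values())

# Static overprovisioning: every node sized for its own worst case.
worst_case_per_node, nodes = 512, 8
static_gib = worst_case_per_node * nodes  # memory purchased up front

# Pooled: size for the aggregate peak, not the sum of per-node peaks.
pool = FabricMemoryPool(total_gib=1024)
pool.allocate("node0", 512)  # one node hits its worst case
pool.allocate("node1", 128)  # others run at typical demand
```

The static design buys 4 TiB to cover peaks that rarely coincide; the pooled design covers the same peaks with a quarter of the capacity, which is the cost argument the paragraph above makes.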
This well-defined fabric management operating system – which uses industry-standard Redfish objects for configuration – reduces total system implementation cost by allowing data processing accelerators to be shared. Additionally, low-latency memory access lets existing software run unmodified, while software can also be optimized to take advantage of the Gen-Z-defined atomic data operations.
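For a sense of what Redfish-based configuration looks like, the sketch below parses a Redfish-style Fabric resource of the kind a Gen-Z fabric manager could expose. The payload here is illustrative – the specific IDs and paths are hypothetical – but the object shapes follow the Redfish Fabric schema, which includes Gen-Z among its fabric types:

```python
import json

# Hypothetical Redfish Fabric payload a management client might GET.
SAMPLE_FABRIC = json.loads("""
{
  "@odata.id": "/redfish/v1/Fabrics/GenZ1",
  "Id": "GenZ1",
  "FabricType": "GenZ",
  "Switches":  {"@odata.id": "/redfish/v1/Fabrics/GenZ1/Switches"},
  "Endpoints": {"@odata.id": "/redfish/v1/Fabrics/GenZ1/Endpoints"}
}
""")

def subresource_paths(fabric: dict) -> dict:
    """Collect the @odata.id links a client would follow to configure
    the fabric's switches, endpoints, and other subresources."""
    return {
        key: value["@odata.id"]
        for key, value in fabric.items()
        if isinstance(value, dict) and "@odata.id" in value
    }
```

A management client would walk these links to enumerate switches and endpoints, then PATCH the corresponding resources to assign memory to server nodes.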
Multi-box scaling represents the next phase of Gen-Z technology. We look forward to sharing more in upcoming webinars, blog posts, and in-person at SC21 in November.
Want to speak to someone about Gen-Z multi-box scaling? Contact us today!