Qualcomm Solution for Inferencing
Leveraging the G292 design, the G292-Z43 is built for AI inference applications using the Qualcomm Cloud AI 100 accelerator.
Delivering up to 400 TOPS per card at a low 75 W of power consumption in a single-slot card, the Qualcomm Cloud AI 100 suits edge, telco, and datacenter deployments.
Its low-profile, half-height half-length (HHHL) design allows for exceptional accelerator density over PCIe Gen4 lanes: the G292-Z43 supports up to 16 cards in x8 slots for high throughput, as the arithmetic below illustrates.
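As a back-of-the-envelope check on that density claim, here is a minimal sketch using only the per-card figures quoted above. Note these are peak TOPS ratings; sustained throughput on real workloads will land lower.

```python
# Density math for a fully populated G292-Z43, from the per-card
# figures quoted above (peak ratings, not sustained throughput).
CARDS = 16
TOPS_PER_CARD = 400   # peak TOPS per Cloud AI 100 card
WATTS_PER_CARD = 75   # per-card power consumption

total_tops = CARDS * TOPS_PER_CARD            # 6400 TOPS aggregate
total_watts = CARDS * WATTS_PER_CARD          # 1200 W for the accelerators alone
efficiency = TOPS_PER_CARD / WATTS_PER_CARD   # ~5.3 TOPS per watt per card

print(f"{total_tops} TOPS across {CARDS} cards at {total_watts} W "
      f"(~{efficiency:.1f} TOPS/W per card)")
```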
Strong power efficiency translates into lower TCO for edge and telco computing, and the AI 100 adapts readily to a range of deployment environments.
In MLPerf v1.1 benchmarks, the Cloud AI 100 showed leading performance in inferences per second per watt, as well as leadership in offline inferences per second.
Major frameworks are supported natively, with over 50 ML models available for computer vision and natural language processing, plus a Python toolchain for development.
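In practice, the hand-off from those frameworks to an accelerator toolchain typically starts with an exported model. The sketch below shows the common first step of exporting a PyTorch model to ONNX; the ResNet-50 choice is illustrative only, and the subsequent compile-and-run step is SDK-specific, so it is not shown here.

```python
# Minimal sketch: export a framework model to ONNX, a common hand-off
# format before compiling for an inference accelerator. Assumes torch
# and torchvision are installed; the model choice is illustrative.
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()   # stand-in computer vision model
dummy = torch.randn(1, 3, 224, 224)            # NCHW input expected by ResNet

torch.onnx.export(
    model, dummy, "resnet50.onnx",
    input_names=["input"], output_names=["logits"],
    opset_version=13,
)
# The vendor SDK would then compile resnet50.onnx into a binary for the
# card; that step is accelerator-specific and omitted here.
```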
The G292-Z43 supports sixteen PCIe Gen4 x8 slots for AI 100 cards, plus an additional two low-profile Gen4 x16 slots for faster networking.
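On a Linux host, one way to sanity-check that each card trained to its expected Gen4 x8 link is to walk PCIe sysfs attributes. This is a generic sketch, not vendor tooling; the VENDOR_ID value is a placeholder assumption to be replaced with the accelerator's actual PCI vendor ID as reported by `lspci -nn`.

```python
# Sketch: report the negotiated PCIe link speed/width of installed cards
# by reading standard Linux sysfs attributes.
from pathlib import Path

VENDOR_ID = "0x17cb"  # placeholder vendor ID; confirm with `lspci -nn`

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        if (dev / "vendor").read_text().strip() != VENDOR_ID:
            continue
        speed = (dev / "current_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
        print(f"{dev.name}: {speed}, x{width}")  # expect 16 GT/s (Gen4), x8
    except OSError:
        pass  # some PCI entries lack link attributes; skip them
```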
Inference applications at leading hyperscale companies span heavy AI inference workloads for natural language processing, recommendation systems, and prediction engines.
Reshaping transportation and smart cities with massive MIMO (multiple-input, multiple-output) antennas.
Transforming the shopping experience, public safety, manufacturing (tracking defects), and agriculture.
HPC/AI Server - AMD EPYC™ 7002 - 2U DP 16 x PCIe Gen4 GPUs (Broadcom solution) | Applications: AI, AI Training, AI Inference & HPC