
New IBM Storage Scale focuses on AI, HPC

Adding to the growing list of storage systems aimed at AI, IBM Storage Scale System 6000 brings the vendor's parallel file system and nearly a ronnabyte of capacity to a single cluster.

IBM has released a new storage system designed to handle high-performance computing and AI workloads.

IBM Storage Scale System 6000 is a software-defined storage system that includes a parallel file system to support file and object data, as well as the two commonly used network-attached storage (NAS) protocols, Network File System (NFS) and Server Message Block (SMB). It is aimed at intensive workloads such as AI and high-performance computing (HPC).

The system uses high-speed interconnects, GPUDirect software and the upcoming version of FlashCore Modules (IBM's custom-built, all-NVMe flash storage modules due out in the second half of 2024) to deliver up to 256 GBps of read performance. The new version of Storage Scale System, formerly Elastic Storage System, can be extended to thousands of nodes for up to 633 yottabytes of capacity in a single cluster.

IBM has taken an HPC system and made an on-premises version that can be managed by IT as opposed to research scientists, according to Randy Kerns, an analyst at Futurum Group. IBM Storage Scale System 6000 comes with HPC properties including extreme scaling and linear performance at scale. The product is evidence of HPC expanding its reach beyond academia and research and into the enterprise, where unstructured data continues to grow and generative AI is taking root, he said.

But the 6000 is also evidence of another trend: the increasing applicability of NAS, which traditionally was used for storing and sharing files such as documents and videos but is now finding broader use cases, according to Brent Ellis, an analyst at Forrester Research.

"The trend is toward object and file storage, getting higher and higher performance and being used for more primary workloads," he said.

Applied to AI

The IBM Storage Scale System 6000 supports Nvidia Magnum IO GPUDirect Storage, software that bypasses CPUs and lets data be transferred directly between storage and GPU memory. This cuts down on data-loading delays that can leave GPUs sitting idle and interrupt AI model training, an issue that Nvidia said negatively affects AI and HPC application performance.
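
As a rough illustration of the idea, the sketch below -- assuming NVIDIA's kvikio Python bindings for GPUDirect Storage and a hypothetical file path on a Scale mount -- shows a training shard being read straight into GPU memory rather than staged through host RAM first.

import cupy
import kvikio

# Hypothetical path to a dataset shard on a GPUDirect-capable file system mount.
shard_path = "/gpfs/fs1/training/shard-0001.bin"

# Allocate the destination buffer directly in GPU memory (256 MiB).
gpu_buffer = cupy.empty(256 * 1024 * 1024, dtype=cupy.uint8)

# When the driver stack supports GPUDirect Storage, the read goes from
# storage to GPU memory without a bounce buffer in host RAM; otherwise
# kvikio falls back to a POSIX read plus a host-to-device copy.
with kvikio.CuFile(shard_path, "r") as f:
    bytes_read = f.read(gpu_buffer)

print(f"Loaded {bytes_read} bytes into GPU memory")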

IBM has a native implementation of GPUDirect with RDMA over Converged Ethernet (RoCE) and InfiniBand, both high-performance, low-latency interconnect technologies, Kerns said.

"This storage element could feed data to a data platform very quickly," he said, adding that it could be IBM's data platform, Watsonx, or other data platforms.

The 6000 would compete against other storage aimed at AI, such as Vast Data and Weka, as well as high-performance NAS offerings from Pure Storage and NetApp, Kerns said.

AI is currently a hot topic, and any product or service that can be used for AI will be presented as such, according to Forrester Research's Ellis.

Ellis believes many enterprises will outsource AI model training to established model training vendors because of the infrastructure requirements. However, inferencing -- using the trained model to perform the task it was trained on -- will be done in-house. For either training or inferencing, Ellis said the 6000 is a capable file system, even if all of its benefits can't yet be fully realized.

"It's a little cart before the horse, as the new FlashCore Modules that incorporate [some of the benefits] don't come out until the second half of 2024," Ellis said.

A NAS for IBM

IBM is positioning the 6000 as storage for semistructured and unstructured data, such as video, imagery, text and instrumentation data -- use cases typically handled by NAS devices. However, Ellis said, the technology goes beyond the tasks for which users would typically turn to a NAS.

"This is meant to interact with, say, containers or data analytic workloads or high-performance files like SAP," he said.

Because Storage Scale is embedded in the system, users can connect cloud-based file systems as part of the directory tree -- the hierarchical structure representing where files are stored -- which improves portability and caching, Ellis said.
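
A minimal sketch of what that looks like to an application, assuming a hypothetical Scale mount point with a cloud-backed fileset cached beneath it: the remote data appears as just another directory, so plain file-system calls work without an object-store SDK.

import os

scale_mount = "/gpfs/fs1"                                  # hypothetical mount point
cloud_fileset = os.path.join(scale_mount, "cloud-cache")   # hypothetical cloud-backed fileset

# Walk the cached fileset like any other directory in the tree.
for dirpath, dirnames, filenames in os.walk(cloud_fileset):
    for name in filenames:
        path = os.path.join(dirpath, name)
        print(path, os.path.getsize(path), "bytes")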

But focusing on NAS use cases also addresses a potential gap in IBM's offerings, Kerns said. For several years, IBM was in an OEM relationship with NetApp, before that deal ended in 2014.

"It is a file system that supports NFS and SMB access," he said. "It also has a POSIX-compliant driver that is not typically seen in enterprises."

Portable Operating System Interface-compliant drivers are more often seen in HPC technology than NAS systems, Kerns said.
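
To make the distinction concrete, here is a minimal sketch, with hypothetical paths and offsets, of the kind of low-level POSIX I/O an HPC application would issue against a natively mounted file system rather than through an NFS or SMB gateway.

import os

checkpoint = "/gpfs/fs1/checkpoints/model-step-1000.bin"   # hypothetical path

# Standard POSIX calls: open the file and read a 4 MiB slice at a fixed
# offset, the way an HPC reader might pull its portion of a checkpoint.
fd = os.open(checkpoint, os.O_RDONLY)
try:
    chunk = os.pread(fd, 4 * 1024 * 1024, 128 * 1024 * 1024)
finally:
    os.close(fd)

print(f"Read {len(chunk)} bytes")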

While the 6000 would be overkill for home directories or second-tier storage, it could be used for those cases, as well as high-end HPC and AI use cases, according to Kerns.

Data platforms connect to different systems, particularly in the public cloud, and all of those connections have to be managed, Kerns said. If all those systems were placed on premises on the 6000, that would not only provide performance and other advantages but also simplify management.

Adam Armstrong is a TechTarget Editorial news writer covering file and block storage hardware, and private clouds. He previously worked at StorageReview.com.
