Dell Technologies has expanded its AI data infrastructure portfolio by integrating the NVIDIA AI Data Platform into the Dell AI Data Platform. The goal of the combined offering is to help enterprises turn distributed and siloed data into more usable inputs for AI development, training, and production workloads.

The Dell AI Data Platform—part of the broader Dell AI Factory strategy—aims to provide an open, modular architecture that decouples data storage from compute. This design is intended to reduce data bottlenecks and support a range of AI tasks, including training, fine-tuning, retrieval-augmented generation (RAG), and inference.

The four core building blocks are:

  • Storage engines for smart data placement and seamless data movement
  • Data engines that turn data into actionable insights
  • Built-in cyber resiliency
  • Data management services

Storage Engines: PowerScale and ObjectScale

Dell highlights its PowerScale NAS platform and ObjectScale object storage as the core storage engines supporting the updated architecture.

PowerScale is positioned as a NAS system optimized for parallel performance across AI training, fine-tuning, inference, and RAG pipelines. Dell reports new integrations with NVIDIA GB200 and GB300 NVL72 systems, alongside software updates designed to simplify large-scale management and improve compatibility with AI application stacks.

The company also notes that its PowerScale F710 system has received NVIDIA Cloud Partner (NCP) certification for high-performance storage. According to Dell, the system can support GPU clusters at 16,000+ GPU scale while requiring less rack space, fewer network switches, and less power than unnamed competing systems.

ObjectScale

ObjectScale is described as Dell’s high-performance, S3-native object storage platform for large-scale AI datasets. The platform is available either as an appliance or as software on Dell PowerEdge servers; Dell reports that the software-defined version delivers up to eight times the performance of its previous-generation all-flash object storage.

New enhancements include:

  • S3 over RDMA (tech preview): Dell projects improvements such as 230% higher throughput, 80% lower latency, and significant reductions in CPU use relative to traditional S3 implementations.
  • Small-object optimization: For large deployments, the company cites up to 19% higher throughput and up to 18% lower latency for 10 KB objects.
  • Extended AWS S3 alignment: Deeper interoperability and bucket-level compression to assist developers and data scientists in handling large data volumes.

Data Engines: Search, Analytics, and Vector Search Integration

Dell is also expanding its portfolio of “data engines,” software components designed to prepare, organize, query, and activate enterprise data for AI workflows. These tools are built in collaboration with partners including NVIDIA, Elastic, and Starburst.

Data Search Engine (with Elastic)

The new Data Search Engine enables natural-language querying of enterprise data for use cases such as RAG, semantic search, and generative AI pipelines. Integrated with Dell’s MetadataIQ, it can search large file sets on PowerScale and ObjectScale using detailed metadata. Developers can also integrate with frameworks like LangChain while leveraging incremental ingestion to reduce compute overhead.
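The incremental-ingestion idea mentioned above — re-indexing only files whose content has changed since the last pass, rather than re-embedding an entire file set — can be sketched in a few lines of plain Python. This is an illustrative sketch of the general technique, not Dell's or Elastic's implementation; the function names and the hash-based change detection are assumptions for the example:

```python
import hashlib
from pathlib import Path

def file_fingerprint(path: Path) -> str:
    """Content hash used to detect changes between ingestion runs."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def incremental_ingest(paths, seen: dict) -> list:
    """Return only the files whose content changed since the last run.

    `seen` maps path -> fingerprint from the previous ingestion pass;
    it is updated in place so the next run skips unchanged files.
    """
    changed = []
    for path in map(Path, paths):
        fp = file_fingerprint(path)
        if seen.get(str(path)) != fp:
            seen[str(path)] = fp
            changed.append(path)  # only these need re-embedding/re-indexing
    return changed
```

On a large PowerScale or ObjectScale file set, skipping unchanged files in this way is what keeps embedding compute proportional to the change rate rather than the corpus size.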

Data Analytics Engine (with Starburst)

This engine allows federated querying across multiple enterprise data sources, including spreadsheets, databases, cloud warehouses, and lakehouses. A new “Agentic Layer” uses large language models to automate documentation, derive insights, and embed AI into SQL workflows. It also unifies access to vector stores (e.g., Iceberg, PostgreSQL with PGVector, Dell’s Data Search Engine) to support RAG and search tasks. Dell has added model monitoring and governance features and introduced an MCP Server to support multi-agent and AI application development.
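The "unified access to vector stores" described above can be illustrated with a small Python sketch: each backend exposes the same search interface, so a RAG pipeline can query different stores interchangeably and merge the results. The class and method names below are hypothetical stand-ins — a real adapter would wrap PGVector, Iceberg, or the Data Search Engine rather than an in-memory dict:

```python
from typing import Protocol

class VectorStore(Protocol):
    """Minimal common interface each backend adapter implements."""
    def search(self, query_vector: list, k: int) -> list: ...

class InMemoryStore:
    """Toy backend mapping doc IDs to embedding vectors."""
    def __init__(self, docs: dict):
        self.docs = docs

    def search(self, query_vector, k):
        # Rank documents by squared Euclidean distance to the query.
        def dist(vec):
            return sum((a - b) ** 2 for a, b in zip(query_vector, vec))
        ranked = sorted(self.docs, key=lambda d: dist(self.docs[d]))
        return ranked[:k]

def federated_search(stores, query_vector, k=3):
    """Query every registered store and concatenate the hits."""
    hits = []
    for store in stores:
        hits.extend(store.search(query_vector, k))
    return hits
```

The design point is the shared interface: once every store satisfies `VectorStore`, the pipeline above it does not need to know which backend holds which data.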

NVIDIA cuVS Integration

The Dell AI Data Platform now integrates NVIDIA’s cuVS to accelerate vector search workloads. The combined solution brings GPU-accelerated hybrid search (keyword + vector) to the Data Search Engine, with the aim of delivering higher-performance enterprise search in an on-premises environment. Dell positions this integration as a turnkey, GPU-powered option for organizations deploying or scaling vector-search-driven AI applications.
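Hybrid search of the kind described above combines a lexical (keyword) ranking with a vector-similarity ranking; one common way to merge the two result lists is reciprocal rank fusion (RRF). The sketch below shows the fusion step only, in pure Python — cuVS performs the GPU-accelerated similarity search itself, and nothing here uses Dell's or NVIDIA's APIs:

```python
def reciprocal_rank_fusion(rankings, k: int = 60) -> list:
    """Merge several ranked result lists into one.

    Each document scores 1 / (k + rank) for every list it appears in,
    so documents ranked highly by multiple retrievers rise to the top.
    k=60 is the constant commonly used with RRF.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical inputs: a BM25-style keyword ranking and a
# nearest-neighbor vector ranking over the same corpus.
keyword_hits = ["doc_a", "doc_c", "doc_b"]
vector_hits = ["doc_b", "doc_a", "doc_d"]
merged = reciprocal_rank_fusion([keyword_hits, vector_hits])
```

Here `doc_a` wins because both retrievers rank it near the top, which is the behavior hybrid search is after: agreement between lexical and semantic signals outweighs a high rank from either one alone.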

