
From IO500 #3 to AI: What Our Core42 Collaboration Proved

At Supercomputing (SC25) in Atlanta, November 2025, the IO500 10-Node Production list came out with DAOS-core systems in all three top spots. Entry #3, submitted by Core42 on their Maximus-01 system, ran on standard TCP networking. No RDMA. No exotic interconnects. Just Ethernet and NVMe.

We worked closely with Core42 to validate the Enakta Storage Platform at scale on their infrastructure.

- IO500 10-Node Production: DAOS core sweeps all three top spots; we hold #3 on TCP
- Core42's #3 entry achieved on Ethernet, with no RDMA
- The top two DAOS entries outscore the next 30 systems combined

What the benchmark actually showed

The IO500 measures both bandwidth (IOR) and metadata (mdtest) across a balanced mix of workloads. The overall score is a geometric mean of both, so you can't get to the top on metadata alone. Placing #3 in the Production list on just 10 nodes, over TCP, puts this configuration ahead of systems running on significantly more hardware with significantly more expensive networking.
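To make that scoring rule concrete, here is a minimal sketch of how an IO500-style overall score is assembled: a geometric mean over the IOR bandwidth phases, a geometric mean over the mdtest metadata phases, and a geometric mean of those two sub-scores. The phase values below are made-up placeholders for illustration, not Core42's actual results.

```python
from math import prod

def geometric_mean(values):
    """nth root of the product of n positive values."""
    return prod(values) ** (1 / len(values))

# Placeholder phase results, NOT real IO500 numbers.
bandwidth_gibs = [40.0, 12.0, 35.0, 9.0]        # IOR phases, GiB/s
metadata_kiops = [800.0, 300.0, 650.0, 250.0]   # mdtest phases, kIOPS

bw = geometric_mean(bandwidth_gibs)
md = geometric_mean(metadata_kiops)

# Overall score: geometric mean of the bandwidth and metadata sub-scores.
score = (bw * md) ** 0.5

print(f"BW {bw:.2f} GiB/s, MD {md:.2f} kIOPS, score {score:.2f}")
```

Because every phase multiplies into the result, a single weak phase drags the whole score down, which is why a system can't climb the list on metadata (or bandwidth) alone.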

The IO500 Production list is specifically for configurations representative of real deployments, not one-off lab setups. As StorageNewsletter summarised, the DAOS core appears to be the winner across all the IO500 lists. The Register noted that the top two DAOS core entries alone have four times the combined benchmark score of the next 30 storage systems.

Why Core42 matters

Core42, the AI infrastructure arm of Abu Dhabi's G42 group, operates some of the largest GPU clusters in the Middle East. They chose the DAOS core for Maximus because at the scale they're building, every percentage point of storage efficiency translates to real money in GPU utilisation.

See Core42's announcement: Core42 on X · Raghu Cherukupalli on LinkedIn

The journey to get here

When we founded Enakta Labs in 2023, we bet on the DAOS core. User-space I/O, no kernel overhead, true distributed metadata, native NVMe access. The architecture looked right, but an engine alone isn't a product.

Version 1.3 brought our SMB and S3 protocol integrations, layering both onto the same high-performance engine and making it accessible to Windows and macOS workstations, not just HPC clusters. We published a validated reference architecture with KIOXIA over a year ago. We built sub-10-minute failed-node recovery and tooling that gets a cluster running in under an hour. The Core42 results proved the whole stack at a scale that actually matters.

From storage to AI

AI was the natural next step. We developed the DAOS core's native PyTorch integration for Google and open-sourced it. That same integration now underpins their Parallelstore service for AI/ML workloads. It gives PyTorch applications direct access to the DAOS core, bypassing POSIX overhead entirely for training data loading and checkpoint I/O.

FlashActivate builds on that work. Instead of serving models through a conventional filesystem, it uses the Enakta Platform's native throughput to activate model weights. We're targeting sub-second activation — the same architecture that earned IO500 #3, applied to model serving. FlashActivate is in development.

What's next

On the storage side, we're working to get the Enakta Storage Platform into the hands of teams that actually need this. Media studios drowning in 8K footage. HPC labs pushing simulation boundaries. Neoclouds building differentiated infrastructure. Enterprises replacing ageing NAS with something that scales. Version 1.3 is GA with SMB, S3, and PyTorch native access. If your current storage is the bottleneck, we'd love to show you what the DAOS core can do.

On the AI side, we're building the Enakta Labs AI Platform to bring that same storage performance to bare-metal GPU operators running inference at scale. FlashActivate, our model activation layer, is where all of that I/O throughput translates into sub-second model cold starts.

One platform, expanding from storage into AI. We're proud of what the team has built so far, and there's a lot more coming.

Interested?

Whether you need high-performance storage today or want to learn more about the AI Platform, we'd love to hear from you.