MinIO at Cloud Field Day 23: Four Key Takeaways for Enterprise IT

I’ll be honest – MinIO wasn’t initially at the top of my list of must-see presentations at Cloud Field Day 23. But recent conversations in the IT community around their decision to remove the web-based management UI from their Community Edition had piqued my interest. The move generated quite a bit of discussion about open source sustainability and commercial strategies, and I was curious to hear their side of the story.

My interest was also personal. A developer I knew indirectly had been working on an interesting proof of concept using MinIO to store and serve 3D models generated from scans of animal cadavers and organs for veterinary education. The project could require massive storage capacity for detailed anatomical models, and there was also a desire to move smaller objects out of a SQL Server database and into object storage. It was exactly the kind of use case that showcases why object storage matters beyond simple file archiving – and why performance and scalability decisions have real-world implications for research and education.

What I got instead was a deep dive into AIStor, MinIO’s commercial offering, which represents their evolution from a simple S3-compatible storage solution into what they’re positioning as a comprehensive AI data platform. AB Periasamy, Jason Nadeau, and Dil Radhakrishnan walked us through the product, designed specifically for AI and analytics workloads and complete with features I hadn’t expected to see from a storage vendor.

Here are the four key takeaways that stood out to me:

1. Object-Native vs. Gateway Storage: Why Architecture Matters for AI Workloads

Not gonna lie – when I first heard MinIO’s Jason Nadeau talk about “object-native architecture,” my initial reaction was “here we go with another vendor trying to differentiate their storage with fancy terminology.” But as he walked through the comparison between their approach and traditional object gateway solutions, it started making a lot more sense, especially for anyone who’s spent time dealing with the performance headaches that come from bolting new capabilities onto existing infrastructure.

The reality is that many enterprise environments have been down this road before. Legacy SAN and NAS systems get extended and retrofitted for years because ripping and replacing storage infrastructure isn’t exactly a trivial decision. But what MinIO demonstrated is why that approach fundamentally doesn’t work when you’re talking about AI workloads that need to move massive amounts of data quickly and consistently. Their gateway-free, stateless, direct-attached architecture eliminates the translation layers that create bottlenecks – and anyone who’s ever tried to troubleshoot performance issues through multiple abstraction layers knows exactly what I’m talking about.

What makes this architectural difference even more compelling is how it enables features like PromptObject – AIStor’s ability to query unstructured data directly through the S3 API using natural language prompts. During Dil Radhakrishnan’s demo, you could literally ask a PDF or image to return structured JSON data without building complex RAG pipelines or maintaining separate vector databases. For known single-object queries, PromptObject removes the need for those components entirely—but it can also complement a RAG pipeline when broader inference or contextual chaining is required.
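To make that concrete, here’s roughly what that interaction looks like from the client side. MinIO didn’t spell out the exact request format during the session, so treat the endpoint, the query parameter, and the JSON fields below as illustrative assumptions rather than AIStor’s documented API – the point is simply that the prompt travels over the same S3-style HTTP channel as any other object operation.

```python
# Hypothetical sketch only: the real PromptObject endpoint, query parameter,
# and response schema were not shown in detail, so the names below are
# illustrative assumptions, not AIStor's documented API.
import requests

AISTOR_ENDPOINT = "https://aistor.example.internal"   # assumed endpoint
BUCKET, OBJECT = "vet-anatomy", "canine-heart-scan.pdf"

response = requests.post(
    f"{AISTOR_ENDPOINT}/{BUCKET}/{OBJECT}?prompt",    # assumed S3 API extension
    json={
        "prompt": "List the anatomical structures referenced in this document "
                  "and return them as JSON with name and page fields."
    },
    # real requests would carry standard S3-style authentication (omitted here)
    timeout=60,
)
print(response.json())   # structured JSON derived from the unstructured object
```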

When AB Periasamy talked about deployments with more than 60,000 drives across multiple racks, all needing atomic operations across multiple drives simultaneously, it hit home why traditional storage architectures break down. AI training and inference demand a level of performance and consistency that wasn’t even on the radar when most current storage infrastructure was designed. And increasingly, they also demand the kind of intelligent interaction with data that PromptObject represents – turning storage from a passive repository into an active participant in AI workflows.

MinIO also demonstrated something called the Model Context Protocol (MCP) – which, frankly, sounds like yet another acronym to keep track of, but actually does something useful. It’s Anthropic’s spec that MinIO has adopted to let AI agents talk directly to storage systems. So instead of pulling data out, processing it somewhere else, and shoving it back, an AI agent can just ask MinIO to list buckets, tag objects, or even build dashboards on the fly. It’s the kind of direct integration that makes sense once you see it in action, even if the name makes it sound more complicated than it needs to be.
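For the curious, the wire format is just JSON-RPC 2.0 as defined by the MCP spec. The envelope in this sketch is standard MCP; the tool name and arguments are hypothetical stand-ins, since MinIO’s actual MCP server may expose different tools – but it shows how small the ask really is for an agent to talk to storage directly.

```python
# Shape of an MCP "tools/call" request (JSON-RPC 2.0, per Anthropic's spec).
# The tool name and arguments are hypothetical -- MinIO's MCP server may
# expose different tools -- but the envelope itself is standard MCP.
import json

tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_buckets",             # hypothetical tool name
        "arguments": {"prefix": "vet-"},    # hypothetical argument
    },
}

# An agent's MCP client would send this over stdio or HTTP to the MCP server
# fronting the object store and get the bucket list back as the result.
print(json.dumps(tool_call, indent=2))
```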

2. S3 Express API: What Amazon Learned About AI Storage Performance

AB Periasamy’s explanation of S3 Express was particularly interesting. Amazon’s decision to strip away certain features from their general-purpose API to optimize for AI workloads reveals where the real performance bottlenecks live.

The changes Amazon made tell a story about practical performance optimization. Getting rid of MD5 sum computations makes perfect sense – anyone who’s dealt with large file transfers knows that checksum calculation can be a significant CPU hit, especially when you’re talking about the massive datasets AI workloads require. Same goes for eliminating directory sorting on list operations. When you’re dealing with billions of objects, sorting is just a waste of compute resources that AI applications don’t actually need.
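To put a rough number on that checksum overhead, here’s a trivial, storage-agnostic sketch – nothing AIStor-specific, just hashing a 256 MiB buffer to show why paying for MD5 on every transfer adds up at AI-dataset scale.

```python
# Back-of-the-envelope illustration of checksum cost: hashing a modest
# 256 MiB buffer already takes measurable CPU time, and that cost scales
# linearly with the petabytes an AI pipeline moves.
import hashlib
import time

payload = b"\x00" * (256 * 1024 * 1024)   # 256 MiB of dummy data

start = time.perf_counter()
digest = hashlib.md5(payload).hexdigest()
elapsed = time.perf_counter() - start

print(f"MD5 of 256 MiB took {elapsed:.2f}s "
      f"(~{256 / elapsed:.0f} MiB/s on this CPU core)")
```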

What’s particularly interesting from an enterprise IT perspective is that MinIO implemented S3 Express compatibility in AIStor, giving you the choice between regular S3 API and S3 Express without requiring any data format changes. You can literally restart the server and switch between APIs. That kind of flexibility is exactly what organizations need when they’re constantly balancing performance requirements with operational simplicity and budget constraints.

3. GPU Direct Storage: Why Your CPU is the New Bottleneck

Here’s something that really made me rethink how modern compute infrastructure should be architected: AB’s explanation of how GPUs have become the main processor and CPUs have essentially become co-processors for AI workloads. For those of us who’ve spent years optimizing CPU and memory utilization, this represents a significant architectural shift.

The bottleneck isn’t the GPU processing power – it’s how fast you can get data to the GPU memory. Traditional architectures require data to flow from storage through the CPU and system memory before reaching the GPU, creating a chokepoint that limits the performance of expensive GPU hardware. GPU Direct Storage bypasses all that by using RDMA to move data directly from storage to GPU memory, with HTTP as the control plane and RDMA as the data channel.
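MinIO’s own implementation speaks S3 over RDMA, and they didn’t walk through client code, but the “skip the CPU bounce buffer” idea is the same one behind NVIDIA’s cuFile API for local NVMe. The sketch below uses the KvikIO Python bindings purely as an analogue of that direct-to-GPU read path – it is not AIStor’s API, and it assumes a GDS-capable host with kvikio and cupy installed.

```python
# Illustrative only: a local GPUDirect Storage read via NVIDIA cuFile (KvikIO
# bindings), shown as an analogue of the direct storage-to-GPU-memory path.
# NOT AIStor's S3-over-RDMA API; assumes a GDS-capable host and a made-up path.
import cupy
import kvikio

# 256 MiB buffer allocated in GPU memory
gpu_buffer = cupy.empty(256 * 1024 * 1024, dtype=cupy.uint8)

# cuFile moves bytes from NVMe into GPU memory without staging them in a
# CPU-side bounce buffer -- conceptually the same chokepoint MinIO's
# S3-over-RDMA path removes at network scale.
f = kvikio.CuFile("/data/training-shard-0000.bin", "r")   # hypothetical path
nbytes = f.read(gpu_buffer)
f.close()

print(f"read {nbytes} bytes directly into GPU memory")
```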

 

What caught my attention during the Q&A was the practical implementation details. You need Mellanox ConnectX-5 or newer network cards, and there are real trade-offs around encryption (you basically lose the RDMA performance benefits if you need to decrypt on the client side). These are the kinds of infrastructure requirements that need to be planned for now if organizations are serious about supporting AI workloads. The performance gains are significant, but you’re looking at specific hardware requirements and architectural decisions that affect entire network fabrics.

4. From 30PB to 50PB Overnight: Scaling Storage for AI at Enterprise Scale

One of the most eye-opening parts of the presentation was hearing about real customer deployments – like the fintech client that scales from 30 petabytes to 50 petabytes based on market volatility, or the autonomous vehicle manufacturer storing over an exabyte of data. These aren’t theoretical use cases; these are production environments dealing with the kind of explosive data growth that keeps storage administrators up at night (and honestly, makes me grateful for our more modest data growth challenges).

What really resonated was the discussion around failure planning. MinIO built AIStor with erasure coding parity levels of eight, assuming that hardware will break and planning accordingly. In environments where equipment often runs longer than ideal due to budget constraints (I once maintained a set of IBM servers nearly a decade past their initial warranty), this kind of resilience planning is crucial. When you’re talking about exabyte-scale deployments, hardware failure isn’t a possibility – it’s a constant reality.
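The arithmetic behind that parity choice is worth spelling out, because it’s the trade-off every storage admin ends up explaining to a budget committee. A minimal sketch, assuming a 16-drive erasure set for illustration (AIStor’s actual set sizing and write-quorum rules add nuance):

```python
# Minimal sketch of the capacity/durability trade-off behind a high parity
# level. The 16-drive erasure set is an illustrative assumption; actual
# erasure-set sizing and write-quorum rules in AIStor add nuance.
def erasure_overview(set_size: int, parity: int) -> None:
    data_drives = set_size - parity
    usable_fraction = data_drives / set_size
    print(f"{set_size} drives at EC:{parity}: "
          f"tolerates up to {parity} drive failures, "
          f"{usable_fraction:.0%} of raw capacity usable")

erasure_overview(set_size=16, parity=8)   # 50% usable, very failure-tolerant
erasure_overview(set_size=16, parity=4)   # 75% usable, less failure headroom
```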

The implications for higher education are significant. Research institutions are increasingly dealing with AI and machine learning workloads that generate massive datasets. The traditional approach of scaling up conventional storage solutions isn’t going to cut it when a single research project can generate petabytes of data. Organizations need to start thinking about storage infrastructure that’s designed from the ground up for these workloads, not retrofitted to handle them.

Final Thoughts

What struck me most about MinIO’s presentation was AB Periasamy’s technical candor and depth of knowledge. This was my second experience at a Tech Field Day event where I found myself genuinely impressed by a CEO’s ability to dive into the technical weeds and provide substantive answers to challenging delegate questions. AB didn’t shy away from discussing the limitations and trade-offs of their approach – whether it was acknowledging the encryption challenges with GPU Direct Storage or explaining why certain hardware requirements are non-negotiable.

The removal of the Community Edition GUI, which initially brought MinIO to my attention for this event, makes more sense in the context of their broader strategy. They’re clearly betting that the future of storage isn’t about pretty management interfaces, but about APIs, automation, and intelligent data interaction. Whether that bet pays off remains to be seen, but their technical approach to solving real AI infrastructure challenges is compelling.

For organizations serious about AI workloads, MinIO’s AIStor represents a thoughtful approach to the storage infrastructure challenges that traditional vendors are still trying to solve by bolting AI capabilities onto legacy architectures. The question isn’t whether AI will transform how we think about storage – it’s whether we’ll build infrastructure designed for that transformation, or continue retrofitting solutions that were never meant for these workloads.

 

To watch all the videos of MinIO’s presentations at Cloud Field Day 23, head over to Tech Field Day’s site.

 

Cloud ERP on Your Terms: SAP, HPE GreenLake, and the Private Cloud Middle Ground

I participated this week as a delegate at Cloud Field Day 23, and one of the most candid sessions so far came from HPE GreenLake and SAP. The focus? SAP Cloud ERP – formerly known as RISE – and their joint approach to helping legacy SAP ERP customers make the leap to their private cloud platform.

An early slide (highlights mine) hit with a stat that landed harder than I expected: as of the end of 2024, only 39% of legacy SAP ERP customers had actually purchased S/4HANA licenses. That’s not migration complete—that’s just licenses purchased. And that is for a product that goes End of Support in 2027. For a platform as mission-critical and sprawling as SAP ERP, it’s not hard to see why inertia reigns.

SAP and HPE’s proposed answer for hesitant customers? A hybrid approach called Customer Data Center (CDC) private cloud ERP. Think of it as SaaS, but running in your data center, on HPE hardware, maintained by both SAP and HPE. Customers get cloud operations and SAP support continuity while keeping their workloads and their data close to home. It’s designed to help customers avoid falling off the end-of-support cliff while buying time to transition on their terms.

The session also included a customer perspective from Energy Transfer, a US firm with 130,000 miles of pipeline in 44 states and one of the early adopters of this CDC model. They were refreshingly transparent. Yes, there were “sticks and carrots” involved in the decision, but the biggest carrot for them was the promise of access to Joule – SAP’s agentic AI platform. Joule is only available in SAP’s public SaaS offering or this private CDC model, making it a compelling draw. Energy Transfer’s non-negotiable condition? The transition had to be cost-neutral.

SAP also described how they structure their engagement model to support projects of this magnitude. Given how many ERP projects fail or flounder due to continuity issues, I asked a question during the session about team depth. Specifically, how do they manage institutional knowledge when key personnel inevitably move on? SAP’s response was pragmatic: their named project teams are regional, and roles are built with intentional overlap. Each team member is flanked by colleagues one level above and one level below who are kept in the loop, smoothing transitions if (when) someone leaves. As someone who has had to step into the gap when colleagues take other opportunities and now manages a team, that struck me as both smart and necessary.

HPE and SAP didn’t shy away from the business reality underpinning all of this. The perpetual license model is dying, and subscription-based models are now the norm. While some customers still pine for the days of CapEx and perpetuals, HPE and SAP are incentivizing the move to recurring revenue models in a way that’s clearly designed to align better with how modern IT is financed and measured.

Bottom line? Public Cloud ERP isn’t one-size-fits-all, and by SAP’s own admission it isn’t ready for many of their complex and customized customer environments. This hybrid CDC approach acknowledges that reality. Not every enterprise is ready to go all-in on SaaS, and some may never be. SAP and HPE GreenLake seem to understand that, and the CDC model looks like a pragmatic (and carrot-laced) middle path.