The Under-Discussed Database Market Gives SO MUCH POWER to AWS

Acquired · Sep 7, 2022 · 4 min

Ben Gilbert (host), David Rosenthal (host)

Topics: Database market size and growth · Database software stickiness · Data growth vs bandwidth constraints · AWS Snowball and Snowmobile migration tools · Cloud database vendor lock-in dynamics · Amazon’s Oracle-to-AWS migration timeline · AWS proprietary vs hosted open-source databases

In this episode of Acquired, hosts Ben Gilbert and David Rosenthal explore why databases are huge, sticky, and deepen AWS’s cloud lock-in power.

Databases are huge, sticky, and deepen AWS’s cloud lock-in power

The database software market is roughly $100B, growing about 10% annually because virtually all computing requires persistent data storage.

Database software is unusually “sticky” because moving large, mission-critical datasets is slow, risky, and operationally painful.

Data creation and storage have grown faster than internet bandwidth, making physical data transfer (e.g., AWS Snowball/Snowmobile) sometimes the most practical migration path.

AWS designed services like Snowball and Snowmobile to address data-migration bottlenecks, lowering barriers to initial cloud adoption while reinforcing long-term retention.

Even Amazon took until 2019—13 years after AWS launched—to fully migrate off Oracle, underscoring how difficult database switching is even for the most capable operator.

Key Takeaways

Databases are a massive, expanding profit pool.

At ~$100B and ~10% annual growth, databases represent one of the largest recurring infrastructure-software categories, driven by the simple reality that every application needs a system of record.
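A quick back-of-envelope sketch shows how fast that pool compounds. It uses only the episode's figures (~$100B base, ~10% annual growth); the projection horizon is illustrative, not a forecast:

```python
# Project the database software market forward, assuming the episode's
# figures: ~$100B today, growing ~10% per year (simple compounding).
def project_market(base_billions: float, growth: float, years: int) -> float:
    """Market size in $B after compounding `growth` for `years` years."""
    return base_billions * (1 + growth) ** years

print(round(project_market(100, 0.10, 5), 1))   # ~161.1 ($B after 5 years)
print(round(project_market(100, 0.10, 10), 1))  # ~259.4 ($B after 10 years)
```

At that rate the category more than doubles in a decade, which is why it attracts so much vendor attention.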

Database lock-in is both technical and physical.

Switching isn’t just changing software APIs—it can mean relocating petabytes/exabytes of data, revalidating performance, and absorbing downtime and risk, creating exceptionally high switching costs.

Bandwidth is the hidden constraint that shapes cloud adoption.

Because data volumes have grown faster than network speeds, uploading over the internet can be impractical; physical shipment can be faster than WAN transfer at extreme scale.
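The "shipping beats uploading" claim is easy to sanity-check with arithmetic. The link speeds below are illustrative assumptions, and they are ideal sustained rates; real WAN transfers add protocol overhead, contention, and retries, so actual times would be longer:

```python
# Rough WAN transfer times for a large dataset, illustrating why
# physical shipment (Snowball-style) can win at petabyte scale.
def transfer_days(dataset_bytes: float, link_bps: float) -> float:
    """Days to move `dataset_bytes` over an ideal link of `link_bps` bits/sec."""
    return dataset_bytes * 8 / link_bps / 86_400  # bits / (bits/sec) / sec-per-day

PETABYTE = 10**15
for label, bps in [("100 Mbps", 100e6), ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
    print(f"1 PB over {label}: {transfer_days(PETABYTE, bps):,.0f} days")
```

Even over a dedicated gigabit link, a single petabyte takes roughly three months to upload, while a shipped appliance round-trips in days.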

AWS migration hardware is a strategic wedge into the enterprise.

Snowball and Snowmobile solve the “how do we move it?” ...

Even best-in-class teams struggle to switch database foundations.

Amazon’s own multi-year journey off Oracle (completed in 2019) demonstrates that database migrations are long, complex programs rather than quick platform swaps.

Once data is in a vendor’s cloud database, retention becomes the default outcome.

The practical difficulty of moving large datasets means initial adoption decisions can lock in years of spend, amplifying AWS’s durable power in cloud infrastructure.

Notable Quotes

The global market size for database software is one hundred billion dollars, and it is growing at ten percent per year.

Ben Gilbert

Database software may be the stickiest software of all time.

Ben Gilbert

Never underestimate the bandwidth of a semi-truck moving down the highway.

David Rosenthal

Amazon.com did not finish their migration off of Oracle databases and onto AWS products until twenty nineteen.

Ben Gilbert

It’s still hard to migrate within the company.

David Rosenthal

Questions Answered in This Episode

What specific technical and organizational factors make databases “stickier” than other core infrastructure (e.g., compute or networking)?

How do Snowball and Snowmobile change the economics and timeline of cloud migration compared to pure internet transfer at petabyte/exabyte scale?

If Amazon took 13 years to migrate off Oracle, what should a typical enterprise realistically plan for in timeline, cost, and risk?

Where is the boundary between healthy switching costs and anti-competitive lock-in in managed cloud databases?

What strategies (architecture patterns, data formats, multi-cloud designs) can reduce database lock-in without giving up managed-service benefits?

Transcript Preview

Ben Gilbert

There's two properties of the database market people just don't think about but are incredible. One, the global market size for database software is one hundred billion dollars, and it is growing at ten percent per year. Because everything you do with computing, you need to store it in a database. You need databases, and you can't get away from them. It's big, and it's growing fast. Two, database software may be the stickiest software of all time.

David Rosenthal

Especially at the scale that people are producing data now. It's actually worth contextualizing this a little bit. So there's all these stats all the time, which are something like last year, more data was produced and stored than in the entire decade before and in the entire century before that. And that's not the exact stat, but there's eleven different variants of it, which we all sort of intuitively know because we're storing data on our phones. But when you have two things exponentially growing, it's hard to intuit the difference between those two things. And so we sort of know this about data. We also know this about the internet. Like when you talk about dial-up back in the day, and then when people got their first cable or T1 line, and meanwhile, I'm here podcasting, and David, I am seeing you in gigabit down directly into my computer, and it's unbelievable. So you think, "Wow, these two things have the same phenomena," except that they're actually moving at very different rates. The internet has not gotten faster at the rate that data storage has increased. So this is best illustrated in some of the AWS re:Invent talks. They're like, "Hey, a lot of you wanna shift to the cloud, but you have a petabyte of data, or some of you have an exabyte of data in your data center. So what do we do about that?" And they first released this thing that was a hundred-terabyte, super-secure appliance they would ship to your office called the Snowball, and you'd plug it in. It would automatically get all your data. It had a Kindle on it, so it would actually display a custom shipping label, and you could track it all the way back, and it would arrive in the Amazon data center, and they would auto... It was like tamper-proof, bulletproof. It was this amazing thing. And they've released a few other generations of them now. There's even some with compute on them for field applications. And then the curves kept going. 
The internet kept getting a little bit faster, but our data storage kept getting a lot more significant. And there's some stat that Andy gives on stage in a keynote in twenty sixteen, seventeen, somewhere in there, where they announce Amazon Snowmobile. And he's like, "Hey," 'cause all of us are sitting here on computers that have a terabyte or two terabytes or four-terabyte hard drive. You're like, "A hundred terabytes is not that meaningful." And so then they're like, "We will send a Snowmobile to your data center," which is a semi-truck full of Snowballs, effectively, so that you can get the data to us. And even with this "never underestimate the bandwidth of a semi-truck moving down the highway" type of solution, it can still take six months to migrate all of your data into the cloud, whereas it would have taken you years and years and years and years, I don't know, the better part of a century to actually upload it over the wide area network, over the internet. And so that, I think, illustrates pretty heavily your point about once you decide to put all of your enterprise data into a database hosted in some specific vendor's cloud, there's pretty meaningful lock-in there. There are very practical concerns with moving.
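The semi-truck line from the transcript can be put into numbers. The 100 PB capacity matches AWS's publicly stated Snowmobile figure, but the two-week trip time below is purely an illustrative assumption:

```python
# Back-of-envelope "bandwidth of a semi-truck": average delivery rate of
# a shipped 100 PB Snowmobile vs. uploading the same data over 1 Gbps.
# Trip time is an illustrative assumption, not an AWS specification.
def effective_gbps(capacity_bytes: float, trip_seconds: float) -> float:
    """Average delivery rate of a physical shipment, in gigabits/sec."""
    return capacity_bytes * 8 / trip_seconds / 1e9

PETABYTE = 10**15
truck_gbps = effective_gbps(100 * PETABYTE, 14 * 86_400)      # 100 PB, ~2-week trip
upload_years = 100 * PETABYTE * 8 / 1e9 / (86_400 * 365)      # same data at 1 Gbps

print(f"Truck: ~{truck_gbps:,.0f} Gbps effective")  # ~661 Gbps
print(f"1 Gbps upload: ~{upload_years:.1f} years")  # ~25.4 years
```

Under these assumptions the truck delivers the equivalent of a multi-hundred-gigabit link, while the same transfer over a gigabit connection would take decades, which is the episode's point about why physical migration hardware exists at all.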
