Building an AI-Native Software Company With Legora CEO Max Junestrand | Ep. 44

Max Junestrand (guest), Jack Altman (host)

Why not to fine-tune legal-specific models

Application-layer differentiation: RAG, citations, context, trust

AI-native org design: engineer-led, minimal product roadmap

Killing features as models improve (low-ego culture)

Due diligence at scale: tabular/matrix review paradigm

Competitive pilots, forward-deployed legal engineers, migrations

Europe-first global expansion and Stockholm culture

In this episode of Uncapped with Jack Altman, host Jack Altman talks with Legora CEO Max Junestrand about building an AI-native legal software juggernaut fast.

Legora CEO on building an AI-native legal software juggernaut fast

Legora bet early that foundation models would rapidly improve, so differentiation would come from application-layer workflows, guardrails, and enterprise-grade trust rather than training bespoke models.

The company built an engineering- and research-led organization that expects features to become obsolete quickly and optimizes for speed, low ego, and continuous re-learning.

Legora’s initial breakthrough came from narrowing focus to a few high-value legal workflows (e.g., tabular review and deep Word/Outlook integration), leading to rapid revenue doubling and U.S. expansion.

Go-to-market relied on frictionless competitive pilots and forward-deployed “legal engineers” to deliver immediate, measurable value that created strong usage-based stickiness.

Starting in Sweden forced multi-country and multi-language readiness from day one, enabling parallel global expansion while maintaining a distinctive Stockholm-centered culture (#blodsmak).

Key Takeaways

In vertical AI, assume foundation models will outpace bespoke tuning.

Legora rejected the 2023–2024 “train your own model” playbook, arguing fine-tuning didn’t work well enough at their scale and that real value was in compliance, ingestion, parsing, citations, and workflow design around fast-improving models.

Differentiation shifts from model IQ to trustworthy execution environments.

Max notes a “flip” where models are no longer the bottleneck; the hard work is building software that lets models execute with the right tools and lets humans review outputs with confidence (auditability, UI, workflow controls).

Build for both human users and agent users.

Legora treats new features as dual-use: humans may initiate or review, while agents increasingly operate tools like tabular review and document editing directly—changing product requirements and interface priorities.

An AI-native company must be comfortable deleting major work.

They expect capabilities to become native to the next model release, so the organization optimizes for humility and adaptability—shipping fast, then ripping out scaffolding when the model makes it obsolete.

Roadmaps compress when models “flip”; execution cadence becomes daily.

Instead of long-term planning, Legora recalibrates continuously as new model behavior unlocks new product primitives; this requires strong evaluation infrastructure and fast decision loops.

Win adoption with “usage stickiness,” not implementation lock-in.

Legora’s stickiness comes from lawyers building workflows and using them frequently, not from hard-to-remove data integrations—making it easier to displace incumbents with better pilots and a dedicated migration team.

Global scale can be achieved in parallel when you ignore old SaaS playbooks.

Legora ran pilots and sold wherever demand appeared (e. ...

Notable Quotes

For two reasons we were like, 'Fuck that,' right?

Max Junestrand

If AI can do something, it will do it.

Max Junestrand

The models are now intelligent enough where they're no longer the bottleneck.

Max Junestrand

Legora has two users. It's human users and agent users.

Max Junestrand

At Legora, we wake up with a metallic taste of blood in our mouths.

Max Junestrand

Questions Answered in This Episode

Legora’s early view was “don’t fine-tune”: what specific tasks (if any) have since changed your mind about domain adaptation, synthetic data, or retrieval-augmented fine-tuning?

You mention models aren’t the bottleneck anymore—what are the top 3 “trust and review” primitives you think legal AI platforms must nail (citations, provenance, audit logs, redlining, etc.)?

Tabular Review solved due diligence scale issues; what other legal workflows break in a chat interface, and what UI paradigms replace chat for those tasks?

How do you decide when to delete internal scaffolding versus keep it as a safety layer, especially when customers demand reliability over novelty?

What does your evaluation stack look like in practice (gold data creation, customer-contributed tasks, pass/fail thresholds), and how do you prevent eval overfitting?

Transcript Preview

Max Junestrand

I remember doing this interview in Swedish. There's a saying, like, blodsmak, and you know, like, you taste the... like, you taste the blood because you're- you worked so hard.

Jack Altman

Yeah.

Max Junestrand

She publishes the article in English.

Jack Altman

[laughs]

Max Junestrand

"At Legora, we wake up with a metallic taste of blood in our mouths."

Jack Altman

[laughs]

Max Junestrand

And people in the company go, "Holy shit. Is Max a vampire, or does he just floss badly?" Like-

Jack Altman

[laughs]

Max Junestrand

... what's going on?

Jack Altman

It's not, like, a culture that I think would quite work in San Francisco. Like, I don't know-

Max Junestrand

No

Jack Altman

... if that's something that you can do.

Max Junestrand

Well, when we open our San Francisco office, they're gonna taste the blood.

Jack Altman

[laughs]

Max Junestrand

Yeah. They're gonna taste the blood.

Jack Altman

I love it. Well, this is gonna be a cool new format. I'm here with my new partner, Chetan and Max. And, uh, Max, you're the founder, CEO of Legora, which is an amazing legal tech company that Chetan sits on the board of. And so I just feel really lucky to be doing this with both of you. So you guys, thank you for, thank you for making this happen.

Max Junestrand

Thank you so much, Jack. It's great to be here.

Jack Altman

Okay. So I wanna start with the topic of competition. And Chetan, when you invested in the company, there were already competitors out there. This was, I think... It was only two year- It's crazy because Legora's like-

Max Junestrand

Yeah. Two years ago

Jack Altman

... it's a big company already.

Max Junestrand

Yeah.

Jack Altman

But it's-

Max Junestrand

Almost 400, 400 people

Jack Altman

... you know, but this, this, the seed was two years ago. And at the time of the seed, it was an early market, but there were competitors out there. And so I actually wanna start with you, Chetan. Like, what was in your head at the moment you invested? Were you thinking about the landscape around? Like, were you just, "Max is so special that I don't care"? Like, what was going through your head when you did that?

Chetan

The first meeting that we had was with Max, was with me, Peter, and Max actually in the other room. And interestingly, I had invested in two other legal software companies pre-AI.

Jack Altman

Oh, wow.

Chetan

And so there was a shape of the legal market that I intuitively understood because I participated in the market, and so I sort of understood the different kinds of lawyers. Who buys software? Do in-house lawyers buy software? Do law firms buy so- How they... Sort of there was an intuitive understanding that I had. And there's sort of, like, two things that happen when you've, like, sold into an industry before. Either you end up hating it-

Jack Altman

[laughs]

Chetan

... or, or you have some strong bias against it.

Jack Altman

Totally.

Chetan

So there was always this idea that there's opportunity for AI in the legal market, and you know, there was a player in the market that had already raised at a billion-dollar valuation. And when Max came in to chat with me and Peter, the thing that immediately jumped out was the clarity of thought that Max had on why the general foundation models had a lot of room to grow-
