Static experiences are ending
Tomas Måsviken
March 27, 2026
Every product team is investing in AI. New tools, faster workflows, copilots in every editor. The way we build has changed dramatically in the last eighteen months.
But look at what we're actually shipping. The same static pages. The same flows. The same content – served identically to every user, regardless of who they are, what they need, or where they are in their journey.
That's the elephant in the room.
Most of the AI conversation in product and design is about efficiency. Ship faster. Generate more. Automate the pipeline. And that's real – we've seen it firsthand. Projects that would have taken 6–12 months and millions in IT investment are now scoped and delivered in weeks.
But efficiency is table stakes. The bigger shift is that products can now adapt. Not through expensive middleware or marketing-team-managed A/B tests bolted on after the fact – but as a fundamental property of how the product works.
A hotel page that knows you're a business traveler and surfaces what matters to you. A product card that emphasises sustainability for one shopper and price for another. Not because someone manually created those variants, but because the product understands who it's talking to.
This isn't futuristic. The data already exists – in CDPs, CRMs, booking engines, analytics. The design system already defines every visual element. What's missing is the layer that connects them.
If contextual experiences are so obviously better, why is virtually everything still static?
Because the toolchain made dynamic too expensive. Building a contextual experience with today's stack means designing a variant of each component for every persona, engineering the corresponding code paths, managing the content sets, testing all N×M persona-component combinations, and maintaining all of it over time. The cost scales multiplicatively. So we build static – not because it's right, but because it's affordable.
The alternative has been enterprise personalisation platforms – $100K–500K/year, owned by marketing, sitting outside the design system and outside the codebase. They work for banner swaps and campaign overlays. They don't change how the product fundamentally works. Designers aren't in control. The product team is watching from the sidelines while a separate tool runs experiments on their surfaces.
That's why the old way persists. Not because people prefer static. Because the cost of contextual has been too high, and the "solutions" live in the wrong place.
We've spent the past few months building and testing an architecture where contextual costs the same as static. The core idea is a clean separation into three layers:
The design system – components as context-free shells. A card, a hero, a feature block. They define what you can say, not what you should say. Most organisations already have this.
Context – what you know about the user and the situation. Persona descriptions that both designers and AI can read, plus structured signals from the systems you already have. Most organisations have this data too – it's just scattered across tools.
Composition – the thin layer that connects understanding to vocabulary. For high-stakes content, the designer determines exact copy per context. For everything else, the designer describes the intent – "a description adapted to what matters most to this guest" – and AI resolves it against the persona.
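The composition layer described above can be sketched in a few lines. This is an illustrative sketch, not our implementation: all type names and the `resolveWithModel` stub are hypothetical, and a real system would call an actual model and plug into your persona store and design system.

```typescript
// Hypothetical types for illustration only.
type Persona = { id: string; description: string };

// A slot either carries designer-authored exact copy per persona
// (high-stakes content) or an intent string for a model to resolve.
type Slot =
  | { kind: "exact"; copy: Record<string, string> }
  | { kind: "intent"; intent: string };

// Stand-in for a model call: combines intent and persona
// deterministically so the sketch stays runnable.
function resolveWithModel(intent: string, persona: Persona): string {
  return `[${intent} | for: ${persona.description}]`;
}

function compose(slot: Slot, persona: Persona): string {
  if (slot.kind === "exact") {
    // High-stakes content: the designer's exact copy wins.
    return slot.copy[persona.id];
  }
  // Everything else: the designer's intent is resolved against
  // the persona description at render time.
  return resolveWithModel(slot.intent, persona);
}

const business: Persona = {
  id: "business",
  description: "business traveler, values fast wifi and late checkout",
};

const headline: Slot = {
  kind: "exact",
  copy: { business: "Work-ready rooms in the city centre" },
};

const blurb: Slot = {
  kind: "intent",
  intent: "a description adapted to what matters most to this guest",
};

console.log(compose(headline, business));
console.log(compose(blurb, business));
```

The key design point is the discriminated union: the designer chooses, per slot, whether adaptation is fully authored or delegated to the model, and the component itself stays a context-free shell.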
The cost of being contextual drops from N×M (a variant for every persona-component combination) to N+M (one description per persona, one intent per component). For 4 personas and 10 components, that's 40 variants vs 14 descriptions. At 10 personas and 50 components, it's 500 vs 60. The more you grow, the bigger the advantage.
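The arithmetic behind that claim is simple enough to check directly. A minimal sketch, using the numbers from the text (function names are illustrative):

```typescript
// Static approach: one variant per persona-component combination.
function staticVariants(personas: number, components: number): number {
  return personas * components;
}

// Contextual approach: one description per persona,
// plus one intent per component.
function contextualDescriptions(personas: number, components: number): number {
  return personas + components;
}

console.log(staticVariants(4, 10), contextualDescriptions(4, 10));   // 40 vs 14
console.log(staticVariants(10, 50), contextualDescriptions(10, 50)); // 500 vs 60
```

Because one side grows multiplicatively and the other additively, the gap widens with every persona or component added.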
Here's the part that matters most to us: this approach keeps designers in control of how experiences adapt. Not ML teams. Not marketing tools. Designers.
The persona descriptions are authored in Figma. The slot intents are written by the person who designed the component. The output is reviewed in Storybook, alongside the design system. It's governed, version-controlled, and brand-consistent – because it lives inside the same tools and workflows the team already uses.
This is fundamentally different from handing adaptation over to a separate platform and a separate team.
We've been running workshops and deep dives with product teams across several industries – hospitality, messaging, fashion. The pattern is consistent: once you build on their actual surfaces, with real data and real components, it clicks fast. Teams go from curious to committed.
We're also seeing this beyond contextual experiences. We've started building workflow automation that, just a few years ago, would have meant massive IT programmes. The economics of what's possible to build have shifted faster than most organisations realise.
The most AI-forward teams aren't prototyping in lightweight tools anymore. They're building deployable product experiences with real code and real data – and their teams are learning through building, not through decks. That builds autonomy and reduces long-term dependency on external partners, including us.
Every product decision is a choice between static and contextual now. Most teams are still choosing static by default – not deliberately, but because they haven't seen an alternative that's affordable and designer-led.
The question isn't whether products will become contextual. It's which surfaces you start with.
Tomas Måsviken is the founder of Samsen, a design studio where every designer ships with AI.