From Monolith to Micro-Frontends: Lessons from the Trenches
Two years ago, I led the migration of a 400k-line React monolith into micro-frontends. It was one of the most challenging projects of my career — and I’d do it again, but very differently.
This isn’t a tutorial. It’s the messy truth about what actually happens when you try to split a frontend that’s been growing unchecked for five years.
Why We Even Considered It
Our codebase had a familiar problem: six teams, one repo, one deploy pipeline. A CSS change in the billing dashboard could break the onboarding flow. Deploys took 45 minutes. Nobody wanted to touch shared components because they were load-bearing walls in a house of cards.
We’d tried module boundaries, CODEOWNERS files, stricter PR reviews — all the “discipline” solutions. They helped for a month and then eroded. The structural problem was that everything shipped together, so everything was coupled together.
Micro-frontends promised independent deploys, team autonomy, and the ability to adopt new tools incrementally. That pitch was mostly true. The parts they leave out of the conference talks are what I want to focus on.
How We Split It
We went with Module Federation (Webpack 5 at the time) and carved the app along team/domain boundaries: billing, onboarding, admin, and a shared “shell” that handled routing and auth.
Each micro-frontend got its own repo, its own CI pipeline, and its own deploy. The shell app loaded them at runtime:
```tsx
import React, { Suspense } from "react";
import { Routes, Route } from "react-router-dom";

// Each remote is loaded at runtime from that team's independently deployed bundle.
const OnboardingApp = React.lazy(() => import("onboarding/App"));
const BillingApp = React.lazy(() => import("billing/App"));
const AdminApp = React.lazy(() => import("admin/App"));

function AppRouter() {
  return (
    // ShellSkeleton is the shell's loading placeholder, defined elsewhere.
    <Suspense fallback={<ShellSkeleton />}>
      <Routes>
        <Route path="/onboarding/*" element={<OnboardingApp />} />
        <Route path="/billing/*" element={<BillingApp />} />
        <Route path="/admin/*" element={<AdminApp />} />
      </Routes>
    </Suspense>
  );
}
```
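Behind those dynamic imports sits the shell's Module Federation config, which maps each remote name to a deployed entry file. A sketch of what that side looks like (the CDN URLs here are hypothetical, not our actual infrastructure):

```ts
// shell/webpack.config.ts (sketch; remote URLs are hypothetical)
import { container } from "webpack";

export const federationPlugin = new container.ModuleFederationPlugin({
  name: "shell",
  remotes: {
    // "onboarding/App" in the router resolves through these entries.
    onboarding: "onboarding@https://cdn.example.com/onboarding/remoteEntry.js",
    billing: "billing@https://cdn.example.com/billing/remoteEntry.js",
    admin: "admin@https://cdn.example.com/admin/remoteEntry.js",
  },
});
```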
Simple enough on a slide. In practice, this is where the pain starts.
The Challenges Nobody Warns You About
Shared State Is the Hard Part
The moment two micro-frontends need to read the same state — say, the current user object — you have an architecture decision that will haunt you.
We tried three approaches:
- Global Redux store in the shell. Worked initially, then became the exact coupling we were trying to escape. Teams were afraid to change the store shape because other apps depended on it.
- Event bus. We built a custom pub/sub system. It was flexible but impossible to debug. Events flew around with no type safety and no clear ownership.
- Shared context via a lightweight contract. This is what stuck. The shell exposed a typed API surface, and each micro-frontend consumed it through a thin SDK:
```ts
// shared-contracts/src/user.ts
export interface ShellContext {
  user: User;
  permissions: Permission[];
  featureFlags: Record<string, boolean>;
  navigate: (path: string) => void;
}
```

```ts
// Inside each micro-frontend
const { user, featureFlags } = useShellContext();
```
The key insight: treat the shell like a backend API. Version it. Document it. Don’t let consumers reach into its internals.
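"Version it" can be as simple as a version number in the contract plus a fail-fast check in the SDK. A minimal sketch of the idea (the names and shape here are illustrative, not our actual contract):

```typescript
// Sketch: version the shell contract like a backend API.
export const CONTRACT_VERSION = 2;

export interface ShellContract {
  version: number; // bumped on every breaking change to the contract
  user: { id: string; name: string };
  featureFlags: Record<string, boolean>;
  navigate(path: string): void;
}

// Thin SDK entry point in each micro-frontend: refuse to start against an
// incompatible shell instead of crashing deep inside a render.
export function connectToShell(ctx: ShellContract): ShellContract {
  if (ctx.version !== CONTRACT_VERSION) {
    throw new Error(
      `Shell contract v${ctx.version} is incompatible; this app expects v${CONTRACT_VERSION}`
    );
  }
  return ctx;
}
```

Failing loudly at mount time turns a subtle cross-team breakage into an obvious, attributable error.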
Authentication Was a Nightmare
Our auth flow assumed a single SPA. Tokens were stored in memory after login, and every API call grabbed them from a React context. When you split the app into multiple bundles loaded at runtime, that context doesn’t exist yet when child apps initialize.
We burned two weeks on a race condition where the billing app would mount before the auth token was available, fire off API calls, get 401s, and redirect to login — which would then succeed and redirect back, creating an infinite loop for users on slow connections.
The fix was boring but effective: the shell became the sole owner of auth state and exposed it through a promise-based API that micro-frontends awaited before rendering.
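The mechanism can be sketched in a few lines (all names here are hypothetical, not our actual API). The shell resolves a single promise once login completes; micro-frontends await it before rendering, so the mount-before-token race cannot occur:

```typescript
// Sketch: the shell is the sole owner of auth state.
export interface AuthState {
  token: string;
  userId: string;
}

let resolveAuth!: (state: AuthState) => void;
const authReady: Promise<AuthState> = new Promise((res) => {
  resolveAuth = res;
});

// Shell side: called exactly once, when login finishes.
export function publishAuth(state: AuthState): void {
  resolveAuth(state);
}

// Micro-frontend side: await before rendering anything that calls the API.
// Awaiting after resolution returns the cached state immediately.
export function waitForAuth(): Promise<AuthState> {
  return authReady;
}
```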
The Design System Versioning Problem
We had a shared component library. When it was a monolith, everyone was on the same version by definition. With micro-frontends, teams drifted. One team was on v2.3 of the design system, another on v2.7. Buttons looked different across pages. Users noticed.
We eventually enforced a policy: shared dependencies (React, the design system, our HTTP client) were provided by the shell as singletons. Micro-frontends declared them as externals. This solved the consistency problem but meant upgrades were coordinated — which partially defeated the “independent teams” promise.
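In federation terms, that policy looks roughly like this in each micro-frontend's config (package names and version ranges here are illustrative):

```ts
// billing/webpack.config.ts (sketch)
import { container } from "webpack";

export const federationPlugin = new container.ModuleFederationPlugin({
  name: "billing",
  exposes: { "./App": "./src/App" },
  shared: {
    // singleton: true means only one copy ever loads at runtime: the shell's.
    react: { singleton: true, requiredVersion: "^18.2.0" },
    "react-dom": { singleton: true, requiredVersion: "^18.2.0" },
    "@acme/design-system": { singleton: true },
  },
});
```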
There’s no clean answer here. Pick your tradeoff and be honest about it.
Developer Experience Degraded Before It Improved
For about three months, local development was painful. Running the full app meant starting four dev servers. Hot reload sometimes worked across boundaries, sometimes didn’t. New engineers spent their first week just getting the environment running.
We invested heavily in a CLI tool that orchestrated local development and let you run one micro-frontend against production versions of the others. That tool took a month to build and was the single best investment of the entire migration.
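The core trick, stripped down to a sketch: any remote you are actively working on resolves to localhost, and everything else resolves to its deployed bundle (the CDN domain and ports are invented for illustration):

```typescript
// Sketch of what the dev CLI did under the hood.
// A micro-frontend is "local" if the developer listed it, e.g. via an
// (hypothetical) env var: LOCAL_REMOTES=billing dev-cli start
export function remoteUrl(
  name: string,
  port: number,
  localRemotes: string[]
): string {
  return localRemotes.includes(name)
    ? `${name}@http://localhost:${port}/remoteEntry.js`
    : `${name}@https://cdn.example.com/${name}/remoteEntry.js`;
}
```

The CLI fed these URLs into the shell's `remotes` map, so running one micro-frontend against production versions of the others required starting exactly one dev server.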
What I’d Do Differently
Start with a strangler pattern, not a big bang. We tried to extract all four domains simultaneously. It was chaos. I’d pick one low-risk domain, extract it fully, learn from the pain, and then move to the next.
Invest in contracts first. We defined the shell’s API as we went, which meant we redesigned it three times. If I did this again, I’d spend the first two weeks just designing the contract layer — types, versioning strategy, error handling — before writing a single line of federation config.
Consider whether you actually need micro-frontends. Seriously. If your problem is “deploys are slow and teams step on each other,” a well-configured monorepo with Turborepo or Nx, good module boundaries, and independent deploy pipelines per package might get you 80% of the benefit at 20% of the complexity. We probably could have tried that first.
Don’t underestimate the cultural shift. Micro-frontends aren’t just an architecture change — they’re an ownership change. Teams that are used to “someone else will fix it” now own their deploy, their errors, and their performance budget. Some teams thrived. Others struggled. Plan for that.
Was It Worth It?
Honestly? Yes — but barely, and only because our monolith was genuinely unworkable. Deploy times dropped from 45 minutes to under 5. Teams shipped independently. Incidents were isolated instead of cascading.
But if someone told me their 50k-line app “needs micro-frontends,” I’d push back hard. The operational overhead is real. The complexity is real. It’s a tool for a specific scale of problem, not a default architecture.
Build the monolith well first. Split it when — and only when — you’ve exhausted simpler options.