How I Cut 89% of Unused JavaScript from a Production AI App

Performance · Next.js · JavaScript · Web Vitals · Optimization

A deep dive into how I eliminated 241 KiB of dead JavaScript and reduced main-thread blocking time by 70% at Kumari.ai without rewriting a single feature.

6 min read

When I joined Kumari.ai as a frontend intern, the bundle analysis dashboard had a number I couldn't ignore: 271 KiB of unused JavaScript being shipped to users on every page load. By the time I was done, it was under 30 KiB. Here's exactly how I got there.

The Starting Point

Kumari.ai is an AI assistant platform: real-time multimodal generation, agentic task flows, conversation routing. Rich features mean rich dependencies. The problem is that "rich dependencies" often means "everything loaded upfront, even if the user never sees it."

Running next build and opening the bundle analyzer told the story clearly:

  • A charting library imported in one file, bundled everywhere
  • Heavy animation libraries included in the initial JS payload
  • Several components that mounted on interaction but were fully hydrated on page load
  • Large icon sets where we used maybe 8 icons out of 500+

Total unused JS hitting users on first load: 271 KiB.

Fix 1: optimizePackageImports in next.config

The quickest win. Next.js 13.5+ has a config option that tree-shakes specific packages at the framework level:

```js
// next.config.js
module.exports = {
  experimental: {
    optimizePackageImports: [
      "lucide-react",
      "react-icons",
      "@radix-ui/react-icons",
      "framer-motion",
    ],
  },
};
```

Before this, `import { Home } from 'lucide-react'` would pull in the entire lucide bundle. After, only the Home icon. This alone accounted for roughly 60 KiB of the reduction.

The rule of thumb: any package that exports many named exports from a single barrel file benefits from this config.
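Conceptually, the optimization rewrites a barrel import into a direct per-module import, as if you had written the deep path by hand (the path below is illustrative, not lucide-react's actual file layout):

```js
// What you write:
import { Home } from "lucide-react";

// What the build effectively resolves it to (illustrative path):
import Home from "lucide-react/icons/home";
```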

Fix 2: dynamic() with ssr: false on heavy components

Some components are only ever seen by authenticated users after an interaction. There's no reason to include them in the initial JS parse. I audited every import and marked anything fitting this pattern:

```ts
// Before
import { RichTextEditor } from "@/components/RichTextEditor";

// After
import dynamic from "next/dynamic";

const RichTextEditor = dynamic(() => import("@/components/RichTextEditor"), {
  ssr: false,
});
```

The ssr: false flag is important here. These components depended on browser APIs (window, document, ResizeObserver) and had no meaningful server-rendered output anyway. Skipping SSR means Next.js doesn't include them in the server bundle at all, and they're only fetched client-side when the component actually mounts.
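When a deferred component takes a moment to fetch, `dynamic()` also accepts a `loading` option so users see a placeholder instead of a blank gap. A minimal sketch (the component path here is illustrative):

```tsx
import dynamic from "next/dynamic";

// Shows a lightweight placeholder while the chunk downloads.
const PdfPreview = dynamic(() => import("@/components/PdfPreview"), {
  ssr: false,
  loading: () => <p>Loading preview…</p>,
});
```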

I applied this pattern to:

  • The rich text editor (heavy Tiptap dependency)
  • A code syntax highlighter used in one specific view
  • A PDF preview component
  • Several dashboard widgets behind feature flags

Combined impact: ~110 KiB removed from the initial payload.

Fix 3: Strategic dependency pruning

This one required the most judgment. Some packages had lighter alternatives that covered our actual use case.

The most notable: we were using a full date-formatting library for displaying relative timestamps ("2 hours ago"). That was replaced with Intl.RelativeTimeFormat, which is native to modern browsers:

```ts
// Before: ~20 KiB dependency
import { formatDistanceToNow } from "date-fns";
const label = formatDistanceToNow(date, { addSuffix: true });

// After: 0 KiB, built into the browser
function relativeTime(date: Date): string {
  const diff = (date.getTime() - Date.now()) / 1000;
  const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });

  if (Math.abs(diff) < 60) return rtf.format(Math.round(diff), "second");
  if (Math.abs(diff) < 3600) return rtf.format(Math.round(diff / 60), "minute");
  if (Math.abs(diff) < 86400)
    return rtf.format(Math.round(diff / 3600), "hour");
  return rtf.format(Math.round(diff / 86400), "day");
}
```

I ran the same audit on every package.json entry used only in one or two files. If the native browser API or a small inline utility could replace it, it went.

Total from pruning: ~50 KiB.

The Result

| Metric    | Before  | After   |
| --------- | ------- | ------- |
| Unused JS | 271 KiB | <30 KiB |

An 89% reduction.

The Lighthouse score went from 71 → 91 on mobile. Time to Interactive dropped noticeably. But unused bundle size is a proxy metric: the real impact showed up in the main-thread profiling, which I tackled next.


Cutting Main-Thread Blocking Time by 70%

With the bundle lean, the next bottleneck was Total Blocking Time (TBT): the total time the main thread is blocked by long tasks during page load, where each task over 50ms contributes its time beyond that threshold. We were at 2.6 seconds on a mid-range mobile device.
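As a toy model of the definition (not the team's measurement code), each task over 50ms contributes only its excess over the threshold, which is why a few very long tasks dominate the score:

```js
// Toy illustration of how TBT is computed from main-thread task durations:
// each task longer than 50 ms contributes (duration - 50) to the total.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .filter((d) => d > 50)
    .reduce((sum, d) => sum + (d - 50), 0);
}

// Tasks of 30, 120, and 400 ms block for 0 + 70 + 350 = 420 ms.
```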

The problem: everything hydrated at once on mount. React was spinning up every component simultaneously, and the main thread couldn't handle user input during that window.

IntersectionObserver-based hydration deferral

The key insight: components below the fold don't need to hydrate immediately. The user can't see or interact with them yet.

I built a small wrapper:

"use client";
 
import { useEffect, useRef, useState } from "react";
 
interface DeferredProps {
  children: React.ReactNode;
  fallback?: React.ReactNode;
  rootMargin?: string;
}
 
export function DeferUntilVisible({
  children,
  fallback = null,
  rootMargin = "200px",
}: DeferredProps) {
  const ref = useRef<HTMLDivElement>(null);
  const [visible, setVisible] = useState(false);
 
  useEffect(() => {
    const observer = new IntersectionObserver(
      ([entry]) => {
        if (entry.isIntersecting) {
          setVisible(true);
          observer.disconnect();
        }
      },
      { rootMargin },
    );
 
    if (ref.current) observer.observe(ref.current);
    return () => observer.disconnect();
  }, [rootMargin]);
 
  return <div ref={ref}>{visible ? children : fallback}</div>;
}

Usage was straightforward:

```tsx
// Before: everything mounts at once
<ConversationHistory />
<SuggestedPrompts />
<UsageMetrics />

// After: below-fold sections wait until visible
<ConversationHistory />
<DeferUntilVisible>
  <SuggestedPrompts />
</DeferUntilVisible>
<DeferUntilVisible rootMargin="100px">
  <UsageMetrics />
</DeferUntilVisible>
```

The `rootMargin: "200px"` means hydration starts when the element is 200px away from the viewport: close enough that the user never sees a loading state, but far enough that the main-thread work is spread out over time as they scroll.

The result

| Metric                | Before | After |
| --------------------- | ------ | ----- |
| Main-thread execution | 2.6s   | 1.1s  |

TBT dropped by roughly 70%.

Chrome DevTools' Performance panel made this measurable: before, there was one massive long task on load. After, many smaller tasks spread across the scroll session, none long enough to block input.


What I'd Do Differently

Profile first, always. I got lucky that the bundle analysis was the right starting point, but in general you should open the Performance tab, record a load, and look at the actual long tasks before deciding what to fix.
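One way to surface those long tasks without opening DevTools is the Long Tasks API. A browser-only sketch (supported in Chromium-based browsers):

```js
// Log each main-thread task that ran longer than 50 ms.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)} ms`);
  }
});
longTaskObserver.observe({ type: "longtask", buffered: true });
```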

ssr: false is not free. Each dynamic import with ssr: false creates a waterfall on the client: parse initial JS → discover dynamic import → fetch chunk → parse chunk → render. If a component is above the fold and users see it immediately, ssr: false can hurt. I made sure every component I deferred was truly off the initial viewport.

Bundle size ≠ performance. A 10 KiB synchronous script that blocks parsing is worse than a 100 KiB async chunk that loads in parallel. The metrics that actually matter are TBT, TTI, and LCP, not raw bundle KiB.


The entire optimization took about three weeks spread across the internship. Most of the gains came in the first three days from optimizePackageImports and the dynamic imports audit. The TBT work came later as a separate investigation.

The takeaway: production codebases accumulate weight gradually. Periodic bundle audits, even just running `npx @next/bundle-analyzer` and sorting by size, catch the low-hanging fruit before it compounds.
