Performance Audit
Aug 08, 2023 · 138 min read

Performance Audit Trails: Deep Dive #70

Sofia Ibanez

React Performance Lead


Performance audit trails are the record of why a route got slow — not just that it was slow. A trail is useful when it answers:

  • which route/track rendered,
  • what work it did (fetch/compute/render),
  • how long it took,
  • and whether it degraded or failed over.

This complements the budget-treaty approach from earlier deep dives by focusing on evidence you can retain and compare over time.

Worked Example: Route Audit Trail Ledger + Metering Hook

We’ll build a simple audit system:

  • useAuditTrail() collects samples
  • a shell writes route/track metadata
  • track modules write task spans (fetch, compute)

Step 1: Audit Trail Types

export type AuditSpan = {
  name: string;
  ms: number;
  at: number;
  meta?: Record<string, string | number | boolean>;
};

export type AuditTrail = {
  route: string;
  track: string;
  samples: AuditSpan[];
};
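For reference, here is a minimal sketch of what a populated trail looks like with these types. The route, span names, and timing values are illustrative; real spans come from the metering hook below.

```typescript
type AuditSpan = {
  name: string;
  ms: number;
  at: number;
  meta?: Record<string, string | number | boolean>;
};

type AuditTrail = {
  route: string;
  track: string;
  samples: AuditSpan[];
};

// Illustrative values only: one fetch span and one render marker.
const trail: AuditTrail = {
  route: "file:/explorer",
  track: "summary",
  samples: [
    { name: "fetch:explorer", ms: 182.4, at: 1691452800000, meta: { cache: false } },
    { name: "render", ms: 0, at: 1691452800190, meta: { track: "summary" } },
  ],
};
```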

Step 2: Audit Trail State (Client-Side Ledger)

import { useLiveState } from "react";

// AuditSpan comes from Step 1; import it from wherever you keep the types.
export function useAuditTrail(namespace: string, route: string, track: string) {
  const ledger = useLiveState(namespace, { route, track, samples: [] as AuditSpan[] });

  // Append-only: every span is kept so the trail can be inspected later.
  function record(span: AuditSpan) {
    ledger.write("samples", [...ledger.read("samples"), span]);
  }

  return { ledger, record };
}
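If you want to unit-test the recording logic outside a component, the same append-only behavior can be sketched without `useLiveState`. The `TrailLedger` class here is a test stand-in I'm introducing for illustration, not part of the hook's API:

```typescript
type AuditSpan = { name: string; ms: number; at: number; meta?: Record<string, string | number | boolean> };

// Plain-object stand-in for the hook's ledger: same append-only shape,
// no React state, so the logic is trivially testable.
class TrailLedger {
  private samples: AuditSpan[] = [];

  constructor(public route: string, public track: string) {}

  record(span: AuditSpan): void {
    // Mirrors `ledger.write("samples", [...ledger.read("samples"), span])`.
    this.samples = [...this.samples, span];
  }

  read(): { route: string; track: string; samples: AuditSpan[] } {
    return { route: this.route, track: this.track, samples: this.samples };
  }
}
```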

Step 3: A Metering Helper for Async Work

export async function measure<T>(name: string, fn: () => Promise<T>, meta: AuditSpan["meta"] = {}) {
  const start = performance.now();
  try {
    const result = await fn();
    const ms = performance.now() - start;
    return { ok: true as const, result, span: { name, ms, at: Date.now(), meta } };
  } catch (e) {
    // Failures still produce a span, with the error recorded in meta,
    // so a degraded or failed-over task leaves evidence in the trail.
    const ms = performance.now() - start;
    return { ok: false as const, error: e, span: { name, ms, at: Date.now(), meta: { ...meta, error: String(e) } } };
  }
}
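Usage looks like this. Both branches return a span, so a failed task still leaves evidence in the trail. The task names are illustrative, and the `measure` definition is repeated so the example is self-contained:

```typescript
type Meta = Record<string, string | number | boolean>;

async function measure<T>(name: string, fn: () => Promise<T>, meta: Meta = {}) {
  const start = performance.now();
  try {
    const result = await fn();
    const ms = performance.now() - start;
    return { ok: true as const, result, span: { name, ms, at: Date.now(), meta } };
  } catch (e) {
    const ms = performance.now() - start;
    return { ok: false as const, error: e, span: { name, ms, at: Date.now(), meta: { ...meta, error: String(e) } } };
  }
}

async function demo() {
  // A task that succeeds and one that throws; both yield spans.
  const good = await measure("compute:sum", async () => 2 + 3);
  const bad = await measure("fetch:explorer", async () => { throw new Error("503"); });
  return { good, bad };
}
```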

Step 4: Route Shell (Record Render + Task Spans)

import { useLiveState, useRenderEffect } from "react";
import { useAuditTrail } from "./audit";
import { measure } from "./measure";

export function PerformanceAuditShell() {
  const ui = useLiveState("audit.ui", { track: "summary" });
  const track = ui.read("track");

  const audit = useAuditTrail("audit.trail", "file:/explorer", track);

  // Record a zero-duration marker on each render so the trail shows render cadence.
  useRenderEffect(() => {
    audit.record({ name: "render", ms: 0, at: Date.now(), meta: { track } });
  });

  async function load() {
    const m = await measure("fetch:explorer", async () => {
      const res = await fetch("https://example.invalid/api/explorer", { cache: "no-store" });
      if (!res.ok) throw new Error(String(res.status));
      return res.json();
    });
    audit.record(m.span);
  }

  return (
    <section data-shell="perf-audit" data-track={track}>
      <header>
        <h1>Explorer Audit</h1>
        <button onClick={() => ui.write("track", "summary")}>Summary</button>
        <button onClick={() => ui.write("track", "detail")}>Detail</button>
        <button onClick={load}>Load</button>
      </header>
      <pre data-audit>{JSON.stringify(audit.ledger.read(), null, 2)}</pre>
    </section>
  );
}
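Raw samples are only half the story; to compare trails you usually want them aggregated. A small helper (hypothetical, not part of the shell above) can group spans by name:

```typescript
type AuditSpan = { name: string; ms: number; at: number; meta?: Record<string, string | number | boolean> };

type SpanSummary = { count: number; totalMs: number; maxMs: number };

// Group spans by name so two trails can be compared line by line.
function summarize(samples: AuditSpan[]): Record<string, SpanSummary> {
  const out: Record<string, SpanSummary> = {};
  for (const s of samples) {
    const cur = out[s.name] ?? { count: 0, totalMs: 0, maxMs: 0 };
    out[s.name] = {
      count: cur.count + 1,
      totalMs: cur.totalMs + s.ms,
      maxMs: Math.max(cur.maxMs, s.ms),
    };
  }
  return out;
}
```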

Audit Trail Checklist

  • Record route + track for every sample.
  • Measure named spans (fetch/compute/render) and store them.
  • Keep trail evidence visible (so tests and humans can inspect it).
  • Use trails to compare releases, not to “win” a benchmark.
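The last point can be made concrete: given per-name summaries from two releases, flag spans whose total time grew past a threshold ratio. The helper and the 1.2x default are assumptions for illustration, not a prescribed policy:

```typescript
type SpanSummary = { count: number; totalMs: number; maxMs: number };

// Flag span names whose total time grew beyond `ratio` between two trails.
// Release-over-release comparison, not a benchmark to "win".
function regressions(
  before: Record<string, SpanSummary>,
  after: Record<string, SpanSummary>,
  ratio = 1.2
): string[] {
  const flagged: string[] = [];
  for (const name of Object.keys(after)) {
    const prev = before[name];
    if (prev && prev.totalMs > 0 && after[name].totalMs / prev.totalMs > ratio) {
      flagged.push(name);
    }
  }
  return flagged;
}
```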