---
title: "Performance"
description: "Optimize your SvelteKit application's performance with caching strategies, bundle optimization, and runtime improvements."
canonical_url: "https://vercel.com/academy/svelte-on-vercel/performance"
md_url: "https://vercel.com/academy/svelte-on-vercel/performance.md"
docset_id: "vercel-academy"
doc_version: "1.0"
last_updated: "2026-04-11T16:26:46.714Z"
content_type: "lesson"
course: "svelte-on-vercel"
course_title: "Svelte on Vercel"
prerequisites: []
---

<agent-instructions>
Vercel Academy — structured learning, not reference docs.
Lessons are sequenced.
Adapt commands to the human's actual environment (OS, package manager, shell, editor) — detect from project context or ask, don't assume.
The lesson shows one path; if the human's project diverges, adapt concepts to their setup.
Preserve the learning goal over literal steps.
Quizzes are pedagogical — engage, don't spoil.
Quiz answers are included for your reference.
</agent-instructions>

# Performance

The ski-alerts app works. But "works" and "fast" are different things. This lesson covers the patterns already in your codebase that make it fast, and adds caching headers to your API routes so repeated requests don't hit the weather API unnecessarily.

## Outcome

Add `Cache-Control` headers to API endpoints and understand the parallel fetching patterns already in the app.

## Fast Track

1. Add `Cache-Control` headers to the evaluate endpoint
2. Understand the `Promise.all` pattern in `fetchAllConditions`
3. Know when to cache and when not to

## What's Already Fast

The ski-alerts app has two performance patterns built in. Let's look at them.

### Parallel Data Fetching

The `fetchAllConditions` function in `src/lib/services/weather.ts` fetches weather for all 5 resorts in parallel:

```typescript title="src/lib/services/weather.ts" {2-5}
export async function fetchAllConditions(resorts: Resort[]): Promise<ResortConditions[]> {
  const results = await Promise.all(
    resorts.map(async (resort) => {
      const weather = await fetchWeather(resort);
      return { resort, weather };
    })
  );
  return results;
}
```

Without `Promise.all`, fetching 5 resorts sequentially takes ~2.5 seconds (5 × 500ms each). With `Promise.all`, all 5 requests happen concurrently and the total time is ~500ms, the duration of the slowest single request.

```
Sequential: resort1 (500ms) → resort2 (500ms) → resort3 (500ms) → ... = ~2.5s
Parallel:   resort1 (500ms)
            resort2 (500ms)  = ~500ms total
            resort3 (500ms)
            resort4 (500ms)
            resort5 (500ms)
```

### ISR on the Dashboard

From lesson 4.1, the dashboard uses ISR with a 5-minute expiration. Most visitors get an instant cached response instead of waiting for weather API calls.

## Hands-on exercise 4.3

Let's add caching headers to the evaluate endpoint and review the app's performance patterns:

**Requirements:**

1. Add `Cache-Control` headers to `GET` responses from the evaluate endpoint
2. Ensure `POST` endpoints are never cached (they have side effects)
3. Review and understand the parallel fetching pattern in the weather service

**Implementation hints:**

- Use `Cache-Control: public, s-maxage=60, stale-while-revalidate=300` for the evaluate endpoint. This caches for 1 minute on the CDN and serves stale for up to 5 minutes while revalidating
- `s-maxage` controls CDN/edge caching; `max-age` controls browser caching
- `stale-while-revalidate` serves the cached version while fetching a fresh one in the background
- POST requests should never be cached, but you don't need to set headers because browsers and CDNs don't cache POST by default

**What NOT to cache:**

- `/api/chat`: streaming responses are unique per user
- `/api/workflow`: side effects (evaluating and marking alerts)
- POST `/api/evaluate`: the current implementation is POST, so it's not cached by default

## Try It

1. **Add a GET handler to the evaluate endpoint for cacheable responses:**

   You can add a simple GET endpoint that returns conditions for all resorts:

   ```typescript title="src/routes/api/evaluate/+server.ts"
   export const GET: RequestHandler = async () => {
     const conditions = await fetchAllConditions(resorts);

     return json(
       {
         resorts: conditions.map(({ resort, weather }) => ({
           id: resort.id,
           name: resort.name,
           conditions: weather.conditions,
           temperature: weather.temperature,
           snowfall: weather.snowfall24h
         })),
         fetchedAt: new Date().toISOString()
       },
       {
         headers: {
           'Cache-Control':
             'public, s-maxage=60, stale-while-revalidate=300'
         }
       }
     );
   };
   ```

2. **Deploy and test caching:**

   ```bash
   $ curl -I https://your-app.vercel.app/api/evaluate
   ```

   Response headers:

   ```
   cache-control: public, s-maxage=60, stale-while-revalidate=300
   x-vercel-cache: MISS  (first request)
   ```

   Second request:

   ```
   x-vercel-cache: HIT  (served from edge cache)
   ```

3. **Measure the difference:**
   - First request: ~500ms (weather API calls)
   - Cached request: ~5ms (edge cache hit)

## Commit

```bash
git add -A
git commit -m "feat(perf): add caching headers to evaluate endpoint"
git push
```

## Done-When

- [ ] GET `/api/evaluate` returns data with `Cache-Control` headers
- [ ] Subsequent requests show `x-vercel-cache: HIT`
- [ ] POST endpoints remain uncached
- [ ] You understand the parallel fetching pattern in `fetchAllConditions`

## Solution

```typescript title="src/routes/api/evaluate/+server.ts" {8-30}
import { json } from '@sveltejs/kit';
import { resorts, getResort } from '$lib/data/resorts';
import { fetchWeather, fetchAllConditions } from '$lib/services/weather';
import { evaluateCondition } from '$lib/services/alerts';
import type { Alert } from '$lib/schemas/alert';
import type { RequestHandler } from './$types';

interface EvaluationResult {
  alertId: string;
  resortId: string;
  resortName: string;
  triggered: boolean;
  condition: Alert['condition'];
  weather: {
    temperature: number;
    snowfall: number;
    conditions: string;
  };
}

// Cacheable GET endpoint for current conditions
export const GET: RequestHandler = async () => {
  const conditions = await fetchAllConditions(resorts);

  return json(
    {
      resorts: conditions.map(({ resort, weather }) => ({
        id: resort.id,
        name: resort.name,
        conditions: weather.conditions,
        temperature: weather.temperature,
        snowfall: weather.snowfall24h
      })),
      fetchedAt: new Date().toISOString()
    },
    {
      headers: {
        'Cache-Control':
          'public, s-maxage=60, stale-while-revalidate=300'
      }
    }
  );
};

// POST handler for evaluating specific alerts (not cached)
export const POST: RequestHandler = async ({ request }) => {
  const requestId = crypto.randomUUID().slice(0, 8);
  const startTime = Date.now();
  const { alerts } = (await request.json()) as { alerts: Alert[] };

  if (!alerts || !Array.isArray(alerts)) {
    return json({ error: 'alerts array required' }, { status: 400 });
  }

  const results: EvaluationResult[] = [];

  const alertsByResort = new Map<string, Alert[]>();
  for (const alert of alerts) {
    const existing = alertsByResort.get(alert.resortId) || [];
    existing.push(alert);
    alertsByResort.set(alert.resortId, existing);
  }

  console.log(`[Evaluate] Started`, {
    requestId,
    alertCount: alerts.length,
    resorts: [...alertsByResort.keys()]
  });

  for (const [resortId, resortAlerts] of alertsByResort) {
    const resort = getResort(resortId);
    if (!resort) {
      console.warn(`[Evaluate] Resort not found: ${resortId}`, { requestId });
      continue;
    }

    try {
      const weather = await fetchWeather(resort);
      for (const alert of resortAlerts) {
        const triggered = evaluateCondition(alert.condition, weather);
        results.push({
          alertId: alert.id,
          resortId: alert.resortId,
          resortName: resort.name,
          triggered,
          condition: alert.condition,
          weather: {
            temperature: weather.temperature,
            snowfall: weather.snowfall24h,
            conditions: weather.conditions
          }
        });
      }
    } catch (error) {
      console.error(`[Evaluate] Weather fetch failed for ${resort.name}:`, {
        requestId,
        error: String(error)
      });
    }
  }

  console.log(`[Evaluate] Completed`, {
    requestId,
    duration: Date.now() - startTime,
    evaluated: results.length,
    triggered: results.filter((r) => r.triggered).length
  });

  return json({
    evaluated: results.length,
    triggered: results.filter((r) => r.triggered).length,
    results
  });
};
```

## Troubleshooting

**Warning: Cache-Control header not appearing in response**

Check that you're returning the headers in the second argument to `json()`. The syntax is `json(data, { headers: { 'Cache-Control': '...' } })`. If you put the headers object inside the data, they won't be set on the HTTP response.

**Warning: GET endpoint returns 405 Method Not Allowed**

Make sure you exported a named `GET` constant, not a default export. SvelteKit expects `export const GET: RequestHandler = async () => { ... }`. Also verify the file is `+server.ts`, not `+page.server.ts`. Page server files don't support custom HTTP method handlers.

## Cache-Control Cheat Sheet

| Header                       | Where it caches                   | Duration        |
| ---------------------------- | --------------------------------- | --------------- |
| `max-age=60`                 | Browser                           | 60 seconds      |
| `s-maxage=60`                | CDN/Edge                          | 60 seconds      |
| `stale-while-revalidate=300` | CDN serves stale while refreshing | Up to 5 minutes |
| `no-store`                   | Nowhere                           | Never cached    |
| `private`                    | Browser only                      | Not on CDN      |

For the ski-alerts app:

- **Dashboard page**: ISR handles caching (lesson 4.1)
- **GET /api/evaluate**: CDN-cached with `s-maxage=60`
- **POST endpoints**: Not cached (default for POST)
- **Streaming endpoints**: Not cacheable (unique per request)

## Advanced: Vercel Speed Insights

For real user performance monitoring, add Vercel Speed Insights to track Core Web Vitals (LCP, INP, CLS) across your deployed app. See the [Speed Insights docs](https://vercel.com/docs/speed-insights) for setup instructions.

The key metrics to watch for ski-alerts:

- **LCP (Largest Contentful Paint)**: How fast the conditions dashboard renders. ISR keeps this fast
- **INP (Interaction to Next Paint)**: How quickly the UI responds to interactions like sending a chat message. Streaming helps here
- **CLS (Cumulative Layout Shift)**: Whether the page shifts as data loads. The fixed layout prevents this
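
Setup is a one-line injection in the root layout. The sketch below assumes the `@vercel/speed-insights` package and its SvelteKit entry point; check the docs linked above in case the API has changed:

```typescript
// Sketch: inside the <script> block of src/routes/+layout.svelte.
// Install first with your package manager, e.g. npm install @vercel/speed-insights
import { injectSpeedInsights } from '@vercel/speed-insights/sveltekit';

// Starts collecting Core Web Vitals from real users on Vercel deployments.
injectSpeedInsights();
```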


---

[Full course index](/academy/llms.txt) · [Sitemap](/academy/sitemap.md)
