  • Anomaly alert configuration now available

    You can now granularly configure anomaly alerts to define exactly which unexpected spikes and errors matter to your application. Alert rules give you detection-level control, allowing you to customize which projects, alert types, metrics, HTTP status codes, and specific routes you monitor for anomalies.

    For the anomalies you choose to track, Vercel automatically investigates your logs and metrics to identify the root cause, routing those findings to distinct destinations like dedicated Slack channels or individual emails.

    When you configure a rule to silence a specific pattern, anomaly detection skips that traffic entirely, preventing anomalies from appearing in your dashboard and stopping notifications before they trigger.

    This feature is available for teams using Observability Plus at no additional cost.

    Try it out or learn more about alert rules.

    Fabio Benedetti, Malavika Tadeusz

  • Zero-configuration Django support


    Django, one of the most popular high-level Python web frameworks for rapid development, is now supported with zero configuration. You can now instantly deploy full-stack Django apps or APIs on Vercel.

    Vercel now recognizes Django applications automatically, removing the need for redirects in vercel.json or an /api folder.

    All applications on Vercel use Fluid compute with Active CPU pricing by default. Static files are served by the Vercel CDN.

    Deploy Django on Vercel or visit the Django on Vercel documentation.

    Example Django app

    CLI entry point

    manage.py
    import os
    import sys
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")
    if __name__ == "__main__":
        from django.core.management import execute_from_command_line
        execute_from_command_line(sys.argv)

    Project settings

    app/settings.py
    SECRET_KEY = "my-secret-key"
    DEBUG = False
    ALLOWED_HOSTS = ["localhost", "127.0.0.1", ".vercel.app"]
    ROOT_URLCONF = "app.urls"
    WSGI_APPLICATION = "app.wsgi.application"
    INSTALLED_APPS = ["app"]

    URL routing table

    app/urls.py
    from django.urls import path
    from app.views import index
    urlpatterns = [
        path("", index),
    ]

    Request handlers

    app/views.py
    from django.http import HttpResponse
    def index(request):
        return HttpResponse("<html><body>hello, world!</body></html>")

    WSGI entry point

    app/wsgi.py
    import os
    from django.core.wsgi import get_wsgi_application
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")
    application = get_wsgi_application()
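
    Dependency manifest

    For the example to build on Vercel, the project's Python dependencies must also be declared. Assuming Django is the only dependency (the version pin below is illustrative), a minimal requirements.txt could look like:

    requirements.txt
    Django>=5.0,<6.0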

  • Use and manage Vercel Sandbox directly from the Vercel CLI

    Vercel Sandboxes can now be used and managed directly from the Vercel CLI, through the vercel sandbox subcommand.

    This eliminates the need to install and maintain a separate command-line tool, and removes the friction of switching contexts. Your entire Sandbox workflow now lives exactly where you already work, keeping your development experience unified and fast.

    Run pnpm i -g vercel@latest to update to the latest Vercel CLI (at least v50.42.0).

  • Summary of CVE-2026-23869

    Summary

    A high-severity vulnerability (CVSS 7.5) in React Server Components can lead to Denial of Service.

    We created new rules to address this vulnerability and deployed them to the Vercel WAF to automatically protect all projects hosted on Vercel at no cost. However, do not rely on the WAF alone for full protection. Upgrade to a patched version immediately.

    Impact

    A specially crafted HTTP request can be sent to any App Router Server Function endpoint that, when deserialized, may trigger excessive CPU usage. This can result in denial of service in unpatched environments.

    This vulnerability is present in Next.js 13.x, 14.x, 15.x, and 16.x, and in affected packages using the App Router. The issue is tracked upstream as CVE-2026-23869.

    Resolution

    After creating mitigations to address this vulnerability, we deployed them across our globally-distributed platform to protect our customers. We still recommend upgrading to the latest patched version.

    Updated releases of React and affected downstream frameworks include fixes to prevent this issue. All users should upgrade to a patched version as soon as possible.

    Fixed In

    • >= 15.0.0: to be fixed in 15.5.15

    • >= 16.0.0: to be fixed in 16.2.3

  • Vercel Sandbox now supports up to 32 vCPU + 64 GB RAM configurations

    Vercel Sandbox now supports creating sandboxes with up to 32 vCPUs and 64 GB of RAM for Enterprise customers. This enables running large, resource-intensive applications that are CPU-bound or require a large amount of memory.

    Get started by setting the resources.vcpus option in the SDK:

    import { Sandbox } from "@vercel/sandbox";
    const sandbox = await Sandbox.create({
      resources: { vcpus: 32 },
    });

    Or using the --vcpus option in the CLI:

    vercel sandbox create --connect --vcpus 32

    Learn more about Sandbox in the docs.

  • Chat SDK adds Liveblocks support

    Chat SDK now supports Liveblocks, enabling bots to read and respond in Liveblocks Comments threads with the new Liveblocks adapter. This is an official vendor adapter built and maintained by the Liveblocks team.

    Teams can build bots that post, edit, and delete comments, react with emojis, and resolve @mentions within Liveblocks rooms.

    Try the Liveblocks adapter today:

    import { Chat } from "chat";
    import { createLiveblocksAdapter } from "@liveblocks/chat-sdk-adapter";
    const bot = new Chat({
      userName: "mybot",
      adapters: {
        liveblocks: createLiveblocksAdapter({
          apiKey: "sk_...",
          webhookSecret: "whsec_...",
          botUserId: "my-bot-user",
          botUserName: "MyBot",
        }),
      },
    });
    bot.onNewMention(async (thread, message) => {
      await thread.post(`You said: ${message.text}`);
    });

    Read the documentation to get started, browse the directory, or build your own adapter.

  • Opus 4.6 Fast Mode available on AI Gateway

    Fast mode support for Claude Opus 4.6 is now available on AI Gateway.

    Fast mode is a premium high-speed option that delivers 2.5x faster output token speeds with the same model intelligence. This is an early, experimental feature.

    Fast mode's increased output token speeds enable new use cases, especially for human-in-the-loop workflows. Run large coding tasks without needing to context switch and get planning results without extended waits.

    To enable fast mode, pass speed: 'fast' in the anthropic provider options in AI SDK:

    import { streamText } from "ai";
    const result = streamText({
      model: 'anthropic/claude-opus-4.6',
      prompt:
        `Analyze this codebase structure and create a step-by-step plan
        to add user authentication.`,
      providerOptions: {
        anthropic: {
          speed: 'fast',
        },
      },
    });

    You can use fast mode with Claude Code via AI Gateway by setting "fastMode": true in your settings.json.

    {
      "model": "opus[1m]",
      "fastMode": true
    }

    Try fast mode directly in the AI Gateway playground for Opus 4.6.

    Fast mode is priced at 6x standard Opus rates.

    Standard: $5 / 1M input tokens, $25 / 1M output tokens
    Fast Mode: $30 / 1M input tokens, $150 / 1M output tokens

    All standard pricing multipliers (e.g., prompt caching) apply on top of these rates.
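
    As a sanity check on the rates above, fast mode is exactly 6x standard. A quick sketch of a per-request estimate (the request size here is hypothetical, and this ignores caching multipliers):

```python
# Per-1M-token rates from the table above (USD).
STANDARD = {"input": 5.00, "output": 25.00}
FAST = {k: rate * 6 for k, rate in STANDARD.items()}  # fast mode = 6x standard

def cost_usd(rates, input_tokens, output_tokens):
    # Estimated cost of one request, before any pricing multipliers.
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Hypothetical request: 200k input tokens, 50k output tokens.
print(cost_usd(STANDARD, 200_000, 50_000))  # 2.25
print(cost_usd(FAST, 200_000, 50_000))      # 13.5
```

    At those hypothetical token counts, the same request costs $2.25 at standard rates and $13.50 in fast mode.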

  • AI Gateway: Track top AI models by usage

    The AI Gateway model leaderboard ranks the most-used models over time by total token volume across all traffic through the Gateway. It updates regularly.

    View the leaderboard
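
    The ranking is simple in principle: sum token volume per model, then sort descending. A minimal sketch (the model names and token counts below are illustrative, not actual Gateway data):

```python
from collections import Counter

# Illustrative usage records: (model, total tokens for one request).
requests = [
    ("anthropic/claude-opus-4.6", 120_000),
    ("zai/glm-5.1", 300_000),
    ("anthropic/claude-opus-4.6", 80_000),
]

# Aggregate token volume per model.
totals = Counter()
for model, tokens in requests:
    totals[model] += tokens

# most_common() returns models ranked by total token volume, highest first.
leaderboard = totals.most_common()
```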

  • GLM 5.1 on AI Gateway

    GLM 5.1 from Z.ai is now available on Vercel AI Gateway.

    Designed for long-horizon autonomous tasks, GLM-5.1 can work continuously on a single task for extended periods, handling planning, execution, testing, and iterative refinement in a closed loop. Rather than one-shot code generation, it runs an autonomous cycle of benchmarking, identifying bottlenecks, and optimizing across many iterations, with particular strength in sustained multi-step engineering workflows.

    Beyond agentic coding, GLM-5.1 improves on general conversation, creative writing, front-end prototyping, and office productivity tasks like generating PowerPoint, Word, and Excel documents.

    To use GLM 5.1, set model to zai/glm-5.1 in the AI SDK.

    import { streamText } from 'ai';
    const result = streamText({
      model: 'zai/glm-5.1',
      prompt:
        `Refactor the data ingestion pipeline to support streaming,
        add error recovery, and benchmark throughput against the
        current implementation.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.