
Debugging Failed Deployments

Varity Team · Core Contributors · Updated April 2026

When a deployment fails, the fastest path to recovery is: get the logs → identify the error → fix or rollback. This page walks through each step.

Every deployment produces a build log. Always read it before guessing at a cause.

Re-run the deploy with --verbose to stream full output:

```sh
varitykit app deploy --verbose
```

If the deployment already ran, retrieve its logs by ID:

```sh
varitykit app info deploy-1737492000
```

If you’re in an AI editor with the Varity MCP server connected:

"Show me the build logs for deploy-1737492000"

This calls varity_deploy_logs:

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `deployment_id` | string | Yes | The deployment ID from `varitykit app list` |
| `limit` | number | No | Max log lines (default 100) |
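A call with both parameters filled in might look like this (using the example deployment ID from this page):

```json
{
  "deployment_id": "deploy-1737492000",
  "limit": 100
}
```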

“Build failed”

The build command (npm run build) failed.

  1. Reproduce locally:

     ```sh
     npm run build
     ```

  2. Fix TypeScript or lint errors:

     ```sh
     npx tsc --noEmit
     ```

  3. Ensure all dependencies are installed:

     ```sh
     npm install
     ```

  4. Re-deploy:

     ```sh
     varitykit app deploy
     ```

“Upload failed” / “Network timeout”


The build succeeded but the upload to Varity’s infrastructure timed out or was interrupted.

This is usually transient. Re-try the deployment:

```sh
varitykit app deploy
```

If it fails repeatedly:

```sh
varitykit doctor
```

This checks connectivity to Varity’s services and reports any reachability issues.


Dynamic Deployment: “Lease acquisition failed”


For dynamic deployments (--hosting dynamic), the compute infrastructure must acquire resources before launching your container. This error means no lease could be established.

Common causes:

| Cause | What to check |
| --- | --- |
| No available compute capacity | Try deploying again — capacity fluctuates |
| Container image too large | Optimize your Docker layers or reduce dependencies |
| Resource request too high | Check your compute config in varity.config.json |
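If you suspect the resource request, the relevant block in varity.config.json might look roughly like this. The field names below are illustrative guesses, not the documented schema; check your generated config for the actual keys:

```json
{
  "_note": "illustrative field names, not the documented schema",
  "compute": {
    "cpu": "500m",
    "memory": "512Mi"
  }
}
```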

Recovery:

```sh
# Retry the deployment
varitykit app deploy --hosting dynamic

# Check current status
varitykit app status
```

If the lease consistently fails, fall back to static hosting while you investigate:

```sh
varitykit app deploy --hosting static
```

Dynamic Deployment: “Container failed to start”


The lease was acquired but the container exited immediately after launch.

Common causes:

  1. Missing environment variables — the app crashed on startup because a required env var is absent. Check Environment Variables.

  2. Port mismatch — the container isn’t listening on the expected port. Confirm your app listens on 0.0.0.0, not 127.0.0.1.

  3. Out-of-memory (OOM) — the process was killed by the kernel during startup. Check your package.json for memory-intensive build steps.

Get more detail from logs:

```sh
varitykit app info deploy-1737492000
```

Or via MCP: varity_deploy_logs with limit: 200 for more context.


Dynamic Deployment: “Deployment timed out”


The deployment didn’t become healthy within the expected window.

Typical causes:

  • Long-running migrations that block app startup
  • Slow cold-start on first container launch (normal for first deploy — retry once)
  • App not responding to health check on the expected port

Fix:

  1. Check logs for slow startup output
  2. Move database migrations out of the startup path if possible
  3. Retry — first-launch cold-starts often succeed on the second attempt:

     ```sh
     varitykit app deploy --hosting dynamic
     ```

“SDL configuration error” / “Invalid manifest”


The container deployment manifest has a structural problem.

This error is rare and typically means an internal configuration was generated incorrectly. Run:

```sh
varitykit doctor
```

If doctor passes, re-deploy. If the error recurs, report it to support@varity.so with the full output of varitykit app info <deployment-id>.


“App deployed but shows blank page or 404”


The deployment succeeded from Varity’s perspective but the app isn’t serving correctly.

  1. Verify the build output directory exists locally:

     • Next.js: .next/ or out/ (for static export)
     • React (Vite): dist/
     • React (CRA): build/

  2. Confirm the correct output path is used:

     ```sh
     npm run build
     ls dist/   # or out/, build/, .next/ depending on framework
     ```

  3. For Next.js static export, ensure output: 'export' is set in next.config.js

  4. Re-deploy after verifying the output:

     ```sh
     varitykit app deploy
     ```

If you can’t diagnose the issue quickly, roll back to the last working version to restore service:

```sh
# Find the last working deployment
varitykit app list

# Roll back to it
varitykit app rollback deploy-1737491000
```

See the full Rollback Guide for details on deployment IDs, caveats with database state, and verifying the restore worked.


Before chasing logs, run a full environment diagnostic:

```sh
varitykit doctor
```

This checks:

| Check | What it verifies |
| --- | --- |
| CLI installed | varitykit is on PATH and up to date |
| Authentication | You’re logged in with valid credentials |
| Network | Varity services are reachable |
| Project structure | package.json and build config exist |

If you’re still stuck after reading logs and running diagnostics:

  1. Discord: #support channel — fastest response
  2. GitHub Issues: varity-labs/varity-sdk — attach the output of varitykit app info <deployment-id>
  3. Email: support@varity.so

When reporting an issue, include:

  • Full output of varitykit app info <deployment-id>
  • Output of varitykit doctor
  • Your OS, Node.js version, and Python version