
Site Import API

The Site Import API creates a complete site — project, database, application, environment variables, and domain — in a single API call. It's designed for migration scripts and automation tools that need to provision sites programmatically.

Prerequisites

  • An API key (Settings → API Keys in the dashboard)
  • Your organization ID (visible in the URL when viewing your org)
  • A configured Git Provider (Settings → Git Providers) if using GitHub/GitLab source
  • A container registry — either configured in Settings → Registry, or provided by the platform administrator as a default

Quick Start

export API_KEY="your-api-key"
export KUPLOY_URL="https://your-kuploy-instance.com"

# Preview (dry run)
curl -X POST $KUPLOY_URL/api/site-import/preview \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "org_xxx",
    "name": "mysite",
    "domain": "mysite.com",
    "database": {"type": "mariadb", "name": "mysite", "user": "mysite", "password": "secret"},
    "source": {"type": "github", "repo": "myorg/sites", "branch": "main", "buildPath": "/mysite", "buildType": "dockerfile", "githubId": "gh_xxx"}
  }'

# Import (creates everything)
curl -X POST $KUPLOY_URL/api/site-import \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "org_xxx",
    "name": "mysite",
    "domain": "mysite.com",
    "database": {"type": "mariadb", "name": "mysite", "user": "mysite", "password": "secret"},
    "source": {"type": "github", "repo": "myorg/sites", "branch": "main", "buildPath": "/mysite", "buildType": "dockerfile", "githubId": "gh_xxx"},
    "envVars": {"APP_KEY": "base64:xxx", "APP_ENV": "production"},
    "port": 80
  }'

# Check status
curl $KUPLOY_URL/api/site-import/imp_xxx \
  -H "Authorization: Bearer $API_KEY"

Authentication

Generate an API key from Settings → API Keys. Pass it via either header:

Authorization: Bearer <your-api-key>
x-api-key: <your-api-key>

API keys inherit your permissions. If you're an owner of an organization, the key can create resources in that organization.

Tip: When generating the API key, set the organization in the key's metadata so it's scoped to the right org.

Endpoints

POST /api/site-import — Import a site

Creates project, database, application, environment variables, and domain in one call.

Request body:

Field | Type | Required | Description
organizationId | string | Yes | Target organization
name | string | Yes | Project and app name
domain | string | No | Custom domain (e.g. mysite.com)
database | object | No | Database configuration
database.type | enum | Yes (if database) | mariadb, mysql, postgres, mongodb
database.name | string | Yes (if database) | Database name
database.user | string | Yes (if database) | Database username
database.password | string | Yes (if database) | Database password
source | object | Yes | Application source configuration
source.type | enum | Yes | github, gitlab, docker, git
source.repo | string | If github/gitlab | Repository in owner/name format
source.branch | string | No | Branch (default: main)
source.buildPath | string | No | Subdirectory for build context (default: /)
source.buildType | enum | No | dockerfile or nixpacks (default: dockerfile)
source.dockerImage | string | If docker | Docker image to deploy
source.githubId | string | If github | ID of your configured GitHub Git Provider
source.gitlabId | string | If gitlab | ID of your configured GitLab Git Provider
source.customGitUrl | string | If git | HTTPS or SSH clone URL
source.customGitSSHKeyId | string | No | SSH key ID for private git repos
envVars | object | No | Key-value map of environment variables
port | number | No | Application port (default: 80)
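Only organizationId, name, and source are required, so a minimal request — for example deploying a pre-built image with no database or custom domain — can be as small as this (the image name is just a placeholder):

```json
{
  "organizationId": "org_xxx",
  "name": "mysite",
  "source": {
    "type": "docker",
    "dockerImage": "nginx:latest"
  }
}
```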

Response:

{
  "importId": "abc123",
  "status": "success",
  "projectId": "proj_xxx",
  "applicationId": "app_xxx",
  "databaseId": "db_xxx",
  "databaseType": "mariadb",
  "databaseServiceHost": "mysite-db-a2b3c4",
  "domainId": "dom_xxx"
}

Status values: success, failed, partial (some resources created before failure).
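In a script, the usual move is to branch on the status field of the response. A minimal sketch with jq, using a canned partial-import response in place of the real API call (field names match the response above):

```shell
# Canned response; in practice this is the body returned by
# POST /api/site-import (or GET /api/site-import/:id)
response='{"importId": "abc123", "status": "partial", "projectId": "proj_xxx"}'

status=$(echo "$response" | jq -r '.status')
case "$status" in
  success) echo "import complete" ;;
  partial) echo "partial import -- retry or rollback" >&2 ;;
  failed)  echo "import failed, no resources created" >&2 ;;
esac
```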

POST /api/site-import/preview — Preview

Same input as import. Returns validation results without creating anything:

{
  "valid": true,
  "resources": {
    "project": { "name": "mysite" },
    "database": { "type": "mariadb", "name": "mysite" },
    "application": { "name": "mysite", "sourceType": "github", "buildType": "dockerfile" },
    "domain": { "host": "mysite.com" }
  },
  "warnings": [],
  "errors": []
}

Use this to validate before importing; the preview checks for name conflicts, domain availability, and source configuration problems.
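A common pattern is to gate the import on the preview's valid flag. This sketch uses a canned preview response (with a hypothetical error message) in place of the real call to POST /api/site-import/preview:

```shell
# Canned preview response standing in for the real API call
preview='{"valid": false, "warnings": [], "errors": ["name already in use"]}'

valid=$(echo "$preview" | jq -r '.valid')
if [ "$valid" = "true" ]; then
  echo "preview passed, safe to import"
else
  # Surface each validation error before aborting the import
  echo "preview failed:" >&2
  echo "$preview" | jq -r '.errors[]' >&2
fi
```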

GET /api/site-import — List imports

Returns import history for your organization.

Query parameters: limit (default 50, max 100), offset (default 0).

GET /api/site-import/:id — Get import

Returns details of a specific import including status, created resource IDs, and error information.
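Because an import can sit in pending while resources are created, a small polling helper is useful. This is a sketch, assuming KUPLOY_URL and API_KEY are set; fetch_status wraps the GET endpoint and is kept separate so it can be stubbed, and the 5-second interval is arbitrary:

```shell
# Return the status field for one import ID
fetch_status() {
  curl -s "$KUPLOY_URL/api/site-import/$1" \
    -H "Authorization: Bearer $API_KEY" | jq -r '.status'
}

# Block until the import leaves "pending", then print the final status
wait_for_import() {
  _id="$1"
  while :; do
    _status=$(fetch_status "$_id")
    [ "$_status" != "pending" ] && break
    sleep 5
  done
  echo "$_status"
}
```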

Source Types

GitHub

Requires a configured GitHub Git Provider in your org. Find the provider ID in Settings → Git Providers.

{
  "source": {
    "type": "github",
    "repo": "myorg/myrepo",
    "branch": "main",
    "buildPath": "/mysite",
    "buildType": "dockerfile",
    "githubId": "gh_xxx"
  }
}

Custom Git (HTTPS/SSH)

For repos that aren't connected via OAuth. Works with any Git host.

{
  "source": {
    "type": "git",
    "customGitUrl": "https://github.com/myorg/myrepo.git",
    "branch": "main",
    "buildPath": "/mysite",
    "buildType": "dockerfile"
  }
}

For private repos, configure an SSH key in Settings → SSH Keys and pass its ID:

{
  "source": {
    "type": "git",
    "customGitUrl": "git@github.com:myorg/myrepo.git",
    "branch": "main",
    "buildPath": "/mysite",
    "buildType": "dockerfile",
    "customGitSSHKeyId": "ssh_xxx"
  }
}

Docker Image

Deploy a pre-built image without a build step:

{
  "source": {
    "type": "docker",
    "dockerImage": "myregistry/myapp:latest"
  }
}

Automatic Environment Variables

When a database is included, the API automatically sets database connection environment variables on the application:

All database types:

Variable | Value
DB_HOST | Internal K8s service hostname
DB_DATABASE | Database name
DB_USERNAME | Database user
DB_PASSWORD | Database password

Additional vars by database type:

Database | Variable | Value
MySQL/MariaDB | DB_PORT | 3306
MySQL/MariaDB | DB_CONNECTION | mysql or mariadb
PostgreSQL | DATABASE_URL | postgresql://user:pass@host:5432/dbname
MongoDB | MONGO_URL | mongodb://user:pass@host:27017/dbname

Your envVars are merged after these — you can override any of the auto-generated values.
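Because your envVars win the merge, a request can override any auto-generated value in place. For example, assuming a MariaDB database block elsewhere in the same request, this keeps the generated DB_HOST and DB_PASSWORD but pins the connection name:

```json
{
  "envVars": {
    "DB_CONNECTION": "mysql",
    "APP_ENV": "production"
  }
}
```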

Error Handling and Retry

Import statuses

Status | Meaning
pending | Import is in progress
success | All resources created and deployment triggered
partial | Some resources were created before the failure — retry or rollback
failed | No resources were created

Retrying a failed import

If an import fails or partially completes, you can retry it directly from the Import Sites page:

  1. Find the failed import card in the Import History list
  2. Click the Retry button — a spinning indicator shows the retry is in progress
  3. The card auto-refreshes every few seconds so you can watch the status change from pending to success

Retry is idempotent — it updates the existing import record in-place rather than creating a duplicate. The flow is:

  1. Cleans up any partial resources from the previous attempt (deletes the project, which cascades to app, database, domain)
  2. Resets the import record to pending
  3. Re-runs the full import using the stored input configuration

You can retry as many times as needed — the import history stays clean with one record per import.

Rolling back a partial import

If an import is partial (some resources were created), you can also Rollback instead of retrying. This deletes all created resources and removes the import record.

Retrying via API

# Via tRPC (from the dashboard or programmatically)
siteImport.retry({ siteImportId: "abc123" })

What "success" means

A success status means all resources (project, database, application, domain) were created and the deployment was triggered. However, the build and deploy are asynchronous — the application may still be building or the deployment may fail after the import completes. Check the application's deployment logs in Projects → [your project] → Application → Deployments for build status.

Similarly, if a domain was configured with HTTPS, the SSL certificate (Let's Encrypt) is provisioned asynchronously. Ensure your domain's DNS points to your cluster's ingress IP for the certificate to be issued.

Plan Limits

Site imports count against your plan's resource quotas (projects, applications, databases, domains). If you exceed a limit, the import fails with a clear error message indicating which quota was reached.

Import via the Dashboard

You can import sites visually from the Import Sites page in the Kuploy dashboard.

Steps

  1. Navigate to Import Sites in the sidebar (or go to /import-sites)
  2. Click Import Site
  3. Fill in the source first:
    • Source Type — GitHub, GitLab, Custom Git URL, or Docker Image
    • Git Provider — select your connected provider (required for GitHub/GitLab). Configure one in Settings → Git Providers if none is available.
    • Repository — in owner/repo format (e.g. myorg/virtualmin-sites)
    • Build Path — subdirectory containing the Dockerfile (e.g. /mysite)
  4. Click "Auto-fill from site.json" — if the repo contains a site.json file at the build path, it auto-fills:
    • Site name, domain, database type/name/user
    • Build path (from the site name)
    • If the repo isn't configured yet, it opens a local file picker instead
  5. Fill in remaining fields:
    • Domain (optional) — custom domain with automatic SSL
    • Branch — defaults to main
    • Build Type — Dockerfile or Nixpacks
    • Port — application port (default 80)
  6. If Include database is checked, fill in:
    • Database type (MariaDB, MySQL, PostgreSQL, MongoDB)
    • Database name, user, and password (password is never in site.json — fill manually)
  7. Optionally add Environment Variables (one per line, KEY=VALUE format)
  8. Click Preview to validate — shows what will be created and any warnings
  9. Click Import Site to create all resources

After importing, the project, application, database, and domain are created. A deployment is triggered automatically. If a database was included, DB connection environment variables (DB_HOST, DB_DATABASE, DB_USERNAME, DB_PASSWORD) are set on the application automatically.

Import History

The Import Sites page shows all past imports with their status:

  • Success — all resources created, deployment triggered
  • Partial — some resources created before a failure
  • Failed — no resources created (or rolled back)

Failed and partial imports can be retried from the history view.

Example: Virtualmin Migration (CLI)

After running the kuploy-migrate script, automate the Kuploy setup:

#!/bin/bash
# Import all sites from a virtualmin-sites repo

API_KEY="your-api-key"
KUPLOY_URL="https://console.example.com"
ORG_ID="org_xxx"
GITHUB_ID="gh_xxx"
REPO="myorg/virtualmin-sites"

for site_dir in /tmp/virtualmin-migrate/repo/*/; do
  site=$(basename "$site_dir")

  # Read site.json
  domain=$(jq -r '.domain' "$site_dir/site.json")
  db_name=$(jq -r '.db_mysql' "$site_dir/site.json")
  db_user=$(jq -r '.mysql_user' "$site_dir/site.json")

  # Read password from credentials file
  db_pass=$(grep "^$site " /tmp/virtualmin-migrate/credentials.txt | awk '{print $4}')

  echo "Importing $site ($domain)..."

  curl -s -X POST "$KUPLOY_URL/api/site-import" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -d "{
      \"organizationId\": \"$ORG_ID\",
      \"name\": \"$site\",
      \"domain\": \"$domain\",
      \"database\": {
        \"type\": \"mariadb\",
        \"name\": \"$db_name\",
        \"user\": \"$db_user\",
        \"password\": \"$db_pass\"
      },
      \"source\": {
        \"type\": \"github\",
        \"repo\": \"$REPO\",
        \"branch\": \"main\",
        \"buildPath\": \"/$site\",
        \"buildType\": \"dockerfile\",
        \"githubId\": \"$GITHUB_ID\"
      },
      \"port\": 80
    }" | jq .

  echo ""
done

After importing, you still need to import database dumps via kubectl — the API creates the empty database container, but SQL import stays external.
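A hedged sketch of that last step, restoring a dump into the MariaDB pod behind the databaseServiceHost returned by the import. The deployment name, client binary, credentials, and dump path below are all assumptions to verify against your cluster; the restore only runs when kubectl and the dump file are actually present:

```shell
SITE="mysite"
DB_SERVICE="mysite-db-a2b3c4"   # databaseServiceHost from the import response
DUMP="/tmp/virtualmin-migrate/dumps/${SITE}.sql"
DB_USER="mysite"; DB_PASS="secret"; DB_NAME="mysite"

# Only attempt the restore when kubectl and the dump are available
if command -v kubectl >/dev/null 2>&1 && [ -f "$DUMP" ]; then
  # Pipe the SQL dump into the MariaDB client inside the database pod
  kubectl exec -i "deploy/${DB_SERVICE}" -- \
    mariadb -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < "$DUMP"
else
  echo "skipped: kubectl or ${DUMP} not available" >&2
fi
```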