Getting Started
Installation
Cipi installs a complete Laravel-ready production stack on any Ubuntu 24.04+ VPS with a single
command. The installer takes roughly 10 minutes and sets up Nginx, MariaDB, PHP, Redis, Supervisor,
Fail2ban, UFW, Certbot, Deployer, and the cipi CLI itself.
During installation, the wizard asks for your SSH public key (accepted formats:
ssh-rsa, ssh-ed25519, ecdsa) before any package is
installed. Cipi creates a dedicated cipi Linux user for admin SSH
access
and applies SSH hardening: root login is disabled, the cipi user is public-key only (group
cipi-ssh), and app users (group cipi-apps) can connect with a password. Login
attempts are limited to 3 with a 20-second grace period, and X11 forwarding is disabled.
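The hardening described above corresponds roughly to an sshd_config fragment like this (a hypothetical illustration; the file path and exact directives are assumptions, not Cipi's actual output):

```
# /etc/ssh/sshd_config.d/cipi.conf (hypothetical fragment)
PermitRootLogin no
MaxAuthTries 3
LoginGraceTime 20
X11Forwarding no

Match Group cipi-ssh
    PasswordAuthentication no     # admin user: public-key only

Match Group cipi-apps
    PasswordAuthentication yes    # app users: password allowed
```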
The installer also generates a random 40-character root password, stores it in
/etc/cipi/server.json, and displays it in the installation summary. Pasted SSH keys are
automatically sanitized — comments, carriage returns, and excess whitespace are stripped before
validation.
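The sanitization step can be sketched in shell as follows; this is a hypothetical equivalent for illustration, not Cipi's installer code:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the key sanitization described above: strip
# carriage returns, collapse excess whitespace, drop the trailing comment
# field, then validate the key type prefix.
sanitize_pubkey() {
  local key="$1"
  key=$(printf '%s' "$key" | tr -d '\r' | tr -s ' \t' ' ')   # CRs + excess whitespace
  key=$(printf '%s' "$key" | sed 's/^ *//; s/ *$//')          # trim edges
  key=$(printf '%s' "$key" | awk '{print $1, $2}')            # keep "<type> <base64>", drop comment
  case "$key" in
    ssh-rsa\ *|ssh-ed25519\ *|ecdsa-*) printf '%s\n' "$key" ;;
    *) echo "unsupported key type" >&2; return 1 ;;
  esac
}

sanitize_pubkey $'  ssh-ed25519 AAAAC3NzaC1lZDI1NTE5 user@laptop\r'
# → ssh-ed25519 AAAAC3NzaC1lZDI1NTE5
```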
Standard installation
$ wget -O - https://cipi.sh/setup.sh | bash
Non-interactive installation
For automated setups, pass your SSH public key via the SSH_PUBKEY environment variable
to skip the interactive prompt:
$ SSH_PUBKEY="ssh-ed25519 AAAA..." wget -O - https://cipi.sh/setup.sh | bash
If you do not have an SSH key pair yet, generate one first:
$ ssh-keygen -t rsa -b 4096
AWS (root login disabled by default)
$ ssh ubuntu@your-server-ip
$ sudo -s
$ wget -O - https://cipi.sh/setup.sh | bash
At the end of the installation you will see a summary screen with the SSH access details, the
auto-generated root password, and the MariaDB root password.
Save them immediately — they are shown only once. Credentials are stored in
/etc/cipi/server.json, which — like all Cipi configuration files — is
encrypted at rest using AES-256-CBC via the built-in Vault
system. Credentials, SSH keys, .env files, and database dumps are protected both on
disk
and during transfer (via Sync encrypted archives). Cipi also enforces
GDPR-compliant log retention with automatic rotation policies for application,
security, and HTTP logs — see Log retention for details.
Post-installation access
After installation, admin access uses public-key authentication as the
cipi user (root login is disabled). App users can SSH directly with
ssh myapp@server-ip and the password generated at app creation. To run
cipi commands or perform any administrative task, connect as cipi and
then escalate:
# 1. connect to the server (key-based auth only)
$ ssh cipi@your-server-ip

# 2. escalate to root to run cipi commands
cipi@server:~$ sudo -s

# 3. now you can use all cipi commands
root@server:~# cipi status
root@server:~# cipi app list

# 4. to work as an app user, switch with su
root@server:~# su - myapp
Requirements
- Ubuntu 24.04 LTS or higher
- Root access (or sudo -s on AWS)
- Ports 22, 80, and 443 open
- A clean server — do not install Cipi on a server with an existing web stack
Cipi is tested and works on: DigitalOcean, AWS EC2, Vultr, Linode / Akamai, Hetzner, Google Cloud, OVH, Scaleway, and any KVM or bare-metal host running Ubuntu.
Quick Start
1. Create your first app
The interactive wizard asks for a username, primary domain, Git repository URL (SSH format), branch, and PHP version.
$ cipi app create
Or pass all flags directly to skip interactive mode:
$ cipi app create \
--user=myapp \
--domain=myapp.com \
--repository=git@github.com:you/myapp.git \
--branch=main \
--php=8.4
At the end, Cipi prints a credentials summary — save it immediately, as it is shown only once — including the SSH deploy key, database credentials, webhook URL, and webhook token.
2. Add the deploy key to your Git provider
If you have configured a GitHub or GitLab token via cipi git, this step is
automatic — Cipi adds the deploy key and creates the webhook for you. See
Git auto-setup for details.
Otherwise, copy the ssh-ed25519 ... key shown after app creation and add it as a
Deploy Key in your repository:
- GitHub: Repository → Settings → Deploy keys → Add deploy key
- GitLab: Repository → Settings → Repository → Deploy keys
3. Prepare your Laravel project
Cipi uses the database driver for cache, sessions, and queues. Run these once inside your Laravel project, commit, and push the generated migrations:
$ php artisan cache:table
$ php artisan session:table
$ php artisan queue:table
$ php artisan migrate
Cipi runs artisan migrate --force on every deploy. The cache,
session, and queue tables will be created on the first deploy if you commit the migrations.
4. Deploy
$ cipi deploy myapp
Deployer clones your repo into a new releases/N/ directory, runs
composer install --no-dev, links .env and storage/, runs
migrations, runs artisan optimize, creates storage:link, swaps the
current symlink atomically, and restarts queue workers. Zero downtime.
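The atomic symlink swap at the heart of zero-downtime deploys can be illustrated with plain shell (a simplified sketch; Deployer handles this internally):

```shell
#!/usr/bin/env bash
# Simplified sketch of the atomic "current" symlink swap: the new release
# is fully prepared first, then "current" is replaced in a single rename(),
# so no request ever sees a half-deployed tree.
set -e
cd "$(mktemp -d)"
mkdir -p releases/1 releases/2
ln -s releases/1 current           # current → releases/1 (the live release)

# ... releases/2 is prepared completely here (composer, migrations, ...) ...

ln -sfn releases/2 current.tmp     # build the new link under a temp name
mv -T current.tmp current          # atomic swap: rename() on one filesystem

readlink current                   # → releases/2
```

`mv -T` (GNU coreutils) treats the destination as a plain file, so the old symlink is replaced in one rename rather than moved inside it.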
5. Install SSL
$ cipi ssl install myapp
Certbot provisions a Let's Encrypt certificate, configures Nginx for HTTPS, and updates
APP_URL in .env. Your Laravel app is now live on
https://myapp.com.
Tech Stack
Cipi brings a complete, production-ready stack to your Ubuntu server. Here is everything that gets installed and configured:
| Component | Role |
|---|---|
| Ubuntu | Base OS (24.04 LTS or higher) |
| Nginx | Web server, reverse proxy, SSL termination |
| PHP-FPM | PHP runtime (multiple versions via ondrej/php PPA) |
| MariaDB | Relational database (drop-in MySQL replacement) |
| Redis | In-memory store for cache, sessions, queues, broadcast |
| Supervisor | Process manager for Laravel queue workers |
| Deployer | Zero-downtime deployment tool |
| Certbot | Let's Encrypt SSL certificates |
| UFW | Firewall (ports 22, 80, 443) |
| Fail2ban | SSH brute-force protection |
| unattended-upgrades | Automatic security patches |
| Composer | PHP dependency manager |
| cipi CLI | Server management and orchestration |
App Structure
When you run cipi app create, Cipi creates a fully isolated environment for the app.
Here is everything that gets set up:
In addition to the home directory, Cipi creates these system files:
The .env is auto-compiled with all credentials — database name, password, webhook token,
and APP_KEY. You never have to touch it manually, but you can always edit it with
cipi app env myapp.
Cipi Agent
Installation
cipi-agent is a lightweight Laravel package that connects your application to Cipi.
It provides a webhook endpoint for automatic deploys on git push, a health check
endpoint, an MCP server for AI assistants, a database anonymizer for GDPR-safe data exports,
and a set of useful Artisan commands.
$ composer require andreapollastri/cipi-agent
The service provider auto-discovers — no configuration or config/app.php change is
needed. Cipi already injected the required .env variables during
cipi app create, so the package works out of the box.
Artisan commands
| Command | Description |
|---|---|
| php artisan cipi:status | Show Cipi config values and connectivity status |
| php artisan cipi:deploy-key | Print the SSH deploy key for this app |
| php artisan cipi:mcp | Show the MCP endpoint URL and setup snippets for Cursor, VS Code, and Claude Desktop |
| php artisan cipi:generate-token {type} | Generate a secure token. Type can be mcp, health, or anonymize |
| php artisan cipi:service {type} --enable\|--disable | Toggle a service on or off. Updates .env in place. Type can be mcp or anonymize |
| php artisan cipi:init-anonymize | Scaffold the anonymization config at /home/{app_user}/.db/anonymization.json |
Webhook — Automatic Deploys
The agent exposes a POST endpoint at /cipi/webhook. When your Git provider sends a push
event, the agent verifies the signature and writes a .deploy-trigger flag file. A cron
job running every minute as the app user detects this file, removes it, and runs Deployer in the
background.
This design means the webhook response is instant (no HTTP timeout waiting for deployment to
complete) and Deployer runs with the correct user permissions — no sudo required.
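The trigger-file handshake can be sketched as a small shell helper (hypothetical; Cipi's actual cron script may differ):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the cron-side trigger check: cron runs this every
# minute as the app user; the webhook handler only has to touch the flag.
check_deploy_trigger() {
  local flag="$1" deploy_cmd="${2:-dep deploy}"
  [ -f "$flag" ] || return 1   # no push since last run, nothing to do
  rm -f "$flag"                # consume the flag before deploying, so a
                               # slow deploy is not re-triggered next minute
  $deploy_cmd                  # run Deployer as this user, no sudo needed
}
```

Removing the flag before starting the deploy makes the check idempotent: a second cron tick during a long deploy finds no flag and exits immediately.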
Configure your Git provider
| Provider | Webhook URL | Secret field |
|---|---|---|
| GitHub | https://yourdomain.com/cipi/webhook | Secret |
| GitLab | https://yourdomain.com/cipi/webhook | Secret token |
The token to use is stored in .env as CIPI_WEBHOOK_TOKEN. You can also
retrieve it any time with:
$ cipi deploy myapp --webhook
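Signature verification of this kind generally boils down to recomputing an HMAC over the raw payload. A generic sketch of the mechanism (not cipi-agent's actual code; GitHub sends the HMAC in the X-Hub-Signature-256 header):

```shell
#!/usr/bin/env bash
# Generic sketch of HMAC webhook verification. The receiver recomputes
# the HMAC of the raw body with the shared secret and compares it to the
# signature header sent by the Git provider.
verify_webhook() {  # verify_webhook <payload> <secret> <signature-header>
  local expected
  expected="sha256=$(printf '%s' "$1" | openssl dgst -sha256 -hmac "$2" -r | cut -d' ' -f1)"
  [ "$expected" = "$3" ]
}

payload='{"ref":"refs/heads/main"}'
sig="sha256=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "token123" -r | cut -d' ' -f1)"
verify_webhook "$payload" "token123" "$sig" && echo "signature ok"
```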
Branch filtering
By default every push triggers a deploy. To restrict deploys to a specific branch, add this to your
.env:
CIPI_DEPLOY_BRANCH=main
Pushes to any other branch will receive a skipped response and no deploy will be
triggered.
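The filter logic amounts to comparing the pushed ref against the configured branch; a hypothetical shell equivalent of the agent's check:

```shell
#!/usr/bin/env bash
# Sketch of the branch filter: a push event carries a ref such as
# "refs/heads/main"; deploy only when it matches CIPI_DEPLOY_BRANCH.
should_deploy() {  # should_deploy <ref> <configured-branch>
  local branch="${1#refs/heads/}"   # refs/heads/main → main
  [ -z "$2" ] && return 0           # empty config = every branch deploys
  [ "$branch" = "$2" ]
}

should_deploy "refs/heads/main" "main"      && echo "deploy"
should_deploy "refs/heads/feature-x" "main" || echo "skipped"
```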
Health Check
The agent also exposes a GET endpoint at /cipi/health that returns a JSON payload with
the status of the app, database, cache, queue, and the currently deployed Git commit hash. Useful
for external monitoring services such as UptimeRobot. Protected by the
CIPI_HEALTH_TOKEN Bearer token — generate one with
php artisan cipi:generate-token health.
$ curl -H "Authorization: Bearer YOUR_CIPI_HEALTH_TOKEN" \
https://yourdomain.com/cipi/health
{
"status": "healthy",
"app_user": "myapp",
"php": "8.4",
"laravel": "12.0.0",
"commit": "a1b2c3d",
"checks": {
"database": { "ok": true },
"cache": { "ok": true },
"queue": { "ok": true, "pending_jobs": 0 }
}
}
The commit field shows the short Git hash of the currently deployed release, useful
for verifying which version is running in production.
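A monitoring script could alert on anything other than a healthy status. A minimal, dependency-free sketch (hypothetical; a real check should parse the JSON properly):

```shell
#!/usr/bin/env bash
# Hypothetical monitoring check built on the /cipi/health payload above:
# succeed only when the top-level status is "healthy".
check_health() {
  printf '%s' "$1" | grep -q '"status": *"healthy"'
}

body='{"status":"healthy","commit":"a1b2c3d"}'
check_health "$body" && echo "ok"
```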
MCP Server
cipi-agent includes a built-in MCP server (Model Context Protocol)
that exposes your application to AI assistants such as Cursor,
VS Code (with GitHub Copilot), and Claude Desktop. The endpoint
implements MCP 2024-11-05 over HTTP using JSON-RPC 2.0
and is protected by the CIPI_MCP_TOKEN Bearer token.
The MCP endpoint is available at POST /cipi/mcp and is disabled by default. To enable
it:
$ php artisan cipi:service mcp --enable
$ php artisan cipi:generate-token mcp
Available tools
The MCP server exposes six tools that an AI assistant can invoke by name:
| Tool | Description |
|---|---|
| health | App, database, cache, and queue status — same data as the /cipi/health endpoint |
| app_info | Full application configuration: app user, PHP version, Laravel version, environment, queue/cache/session drivers, deploy branch, and all Cipi URLs |
| deploy | Trigger a new zero-downtime deployment — writes the .deploy-trigger file; Deployer picks it up within 1 minute |
| logs | Read the last N lines (default 50, max 500) from application logs. Supports type (laravel, nginx, php, worker, deploy), level for Laravel severity filtering (e.g. error), and search for keyword filtering. Laravel daily rotation (laravel-YYYY-MM-DD.log) is auto-detected |
| db_query | Execute SQL queries against the application database — equivalent to cipi app tinker. Supports SELECT, SHOW, DESCRIBE, EXPLAIN (read) and INSERT, UPDATE, DELETE (write). Results formatted as ASCII table, capped at 100 rows. Destructive DDL (DROP TABLE/DATABASE, TRUNCATE, GRANT/REVOKE, file I/O) is blocked |
| artisan | Run any Artisan command (e.g. migrate:status, queue:size, cache:clear). Long-running and interactive commands like serve, queue:work, and tinker are blocked |
Setup instructions
Run the cipi:mcp Artisan command to get the endpoint URL and ready-to-paste
configuration snippets for your AI client:
$ php artisan cipi:mcp
The command prints the available tools and the JSON configuration for Cursor, VS Code, and Claude Desktop.
Cursor
Add the following to ~/.cursor/mcp.json (or go to Cursor → Settings → MCP):
{
"mcpServers": {
"cipi-myapp": {
"type": "http",
"url": "https://yourdomain.com/cipi/mcp",
"headers": {
"Authorization": "Bearer YOUR_CIPI_MCP_TOKEN"
}
}
}
}
Replace cipi-myapp with your app user name, yourdomain.com with your
actual domain, and YOUR_CIPI_MCP_TOKEN with the token from your
.env. Cursor connects natively over HTTP — no bridge needed.
VS Code
VS Code (with GitHub Copilot) supports MCP natively since version 1.102. Add the following to
.vscode/mcp.json in your project (or run MCP: Open User Configuration
for a global setup):
{
"servers": {
"cipi-myapp": {
"type": "http",
"url": "https://yourdomain.com/cipi/mcp",
"headers": {
"Authorization": "Bearer YOUR_CIPI_MCP_TOKEN"
}
}
}
}
Replace cipi-myapp, yourdomain.com, and
YOUR_CIPI_MCP_TOKEN
as above. Use MCP: Add Server from the Command Palette for a guided setup. VS Code
connects over HTTP — no bridge needed.
Claude Desktop
Claude Desktop requires the mcp-remote bridge to convert stdio to HTTP. Add the
following to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"cipi-myapp": {
"command": "npx",
"args": [
"-y",
"mcp-remote",
"https://yourdomain.com/cipi/mcp",
"--header",
"Authorization: Bearer YOUR_CIPI_MCP_TOKEN"
]
}
}
}
Since the config uses npx -y, mcp-remote is downloaded on demand; optionally install it globally once to speed up startup:
$ npm install -g mcp-remote
What you can do with it
Once connected, your AI assistant can interact with the Cipi-managed application in natural language. For example:
User: Is the app healthy? Any pending jobs in the queue?
Claude: Calling health tool...
Status: healthy — database, cache, and queue all OK. Queue: 0 pending jobs.

User: Show me the last errors from the log.
Claude: Calling logs tool (lines: 100)...
Found 2 errors in the last 100 lines:
[2026-03-04 14:22:01] production.ERROR: Connection refused [...]
[2026-03-04 14:23:15] production.ERROR: Redis timeout [...]

User: Clear the cache and deploy the latest version.
Claude: Calling artisan tool (cache:clear)...
Cache cleared successfully.
Calling deploy tool...
Deploy queued — Deployer will run within 1 minute.

User: What's the current migration status?
Claude: Calling artisan tool (migrate:status)...
All 47 migrations have been run.

User: How many users signed up in the last 7 days?
Claude: Calling db_query tool (SELECT COUNT(*) FROM users WHERE created_at >= ...)...
| count |
|-------|
| 23 |
Keep the CIPI_MCP_TOKEN secret. Anyone with the token can trigger deploys,
read logs, run database queries, and run Artisan commands through the MCP endpoint. If you
suspect a leak, regenerate
the token with php artisan cipi:generate-token mcp and restart the application.
Database Anonymizer
cipi-agent includes a built-in database anonymizer that creates sanitized copies of your production database — ideal for sharing with developers, QA teams, or staging environments without exposing real user data. It supports both MySQL and PostgreSQL and uses Faker-based transformations configured via a JSON file.
How it works
- A POST /cipi/db request queues an AnonymizeDatabaseJob in the background
- The job runs php artisan cipi:anonymize, which dumps the database, applies Faker-based transformations according to anonymization.json, and produces a sanitized SQL file
- When complete, an email notification is sent with a signed download URL
- GET /cipi/db/{token} serves the anonymized dump via a signed URL that expires after 15 minutes
A utility endpoint POST /cipi/db/user is also available for user ID lookup, useful
when building anonymization rules.
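Signed expiring URLs generally combine an expiry timestamp with an HMAC over the path and expiry; verification recomputes the HMAC and checks the clock. A generic sketch of the mechanism (Laravel's actual signed-URL scheme differs in detail):

```shell
#!/usr/bin/env bash
# Generic sketch of a signed, expiring download URL: embed an expiry
# timestamp and an HMAC over path+expiry; verification recomputes the
# HMAC and rejects expired links.
SECRET="app-secret"
sign() {  # sign <path> <expires-epoch>
  printf '%s|%s' "$1" "$2" | openssl dgst -sha256 -hmac "$SECRET" -r | cut -d' ' -f1
}
verify() {  # verify <path> <expires-epoch> <signature>
  [ "$(date +%s)" -le "$2" ] || return 1   # link expired?
  [ "$(sign "$1" "$2")" = "$3" ]           # signature intact?
}

exp=$(( $(date +%s) + 900 ))               # valid for 15 minutes
sig=$(sign "/cipi/db/abc123" "$exp")
verify "/cipi/db/abc123" "$exp" "$sig" && echo "valid"
```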
Setup
Scaffold the anonymization configuration file:
$ php artisan cipi:init-anonymize
This creates /home/{app_user}/.db/anonymization.json with a starter template. The
file is stored outside the project repository for security — it never gets committed.
Configuration
Edit /home/{app_user}/.db/anonymization.json to define which tables and columns to
anonymize:
{
"tables": {
"users": {
"name": "name",
"email": "safeEmail",
"phone": "phoneNumber",
"address": "address"
},
"orders": {
"shipping_address": "address",
"notes": "sentence"
}
}
}
Each key under tables is a database table name. Each nested key is a column name,
and the value is a Faker
formatter (e.g. name, safeEmail,
phoneNumber, address, sentence).
Endpoints
| Method | Endpoint | Description |
|---|---|---|
| POST | /cipi/db | Queue an anonymization job. Returns immediately; sends email when done. |
| POST | /cipi/db/user | User ID lookup utility |
| GET | /cipi/db/{token} | Download the anonymized dump (signed URL, expires in 15 minutes) |
All endpoints are protected by the CIPI_ANONYMIZER_TOKEN Bearer token.
Enable the anonymizer
The anonymizer is disabled by default. Enable it and generate a token:
$ php artisan cipi:service anonymize --enable
$ php artisan cipi:generate-token anonymize
The anonymization config stays on the server and out of your Git history (/home/{app_user}/.db/ is already excluded). Never commit it to version control.
ENV Variables
These variables are automatically injected by Cipi into the app's .env during
cipi app create. You normally do not need to set them manually.
| Variable | Description | Default |
|---|---|---|
| CIPI_WEBHOOK_TOKEN | HMAC secret / token for webhook validation | auto-generated |
| CIPI_DEPLOY_BRANCH | Branch that triggers a deploy (empty = all branches) | empty |
| CIPI_APP_USER | Linux user owning this app | auto-set |
| CIPI_MCP | Enable or disable the MCP server endpoint at /cipi/mcp | false |
| CIPI_MCP_TOKEN | Bearer token for MCP endpoint authentication | none |
| CIPI_HEALTH_TOKEN | Bearer token for the /cipi/health endpoint | none |
| CIPI_ANONYMIZER | Enable or disable the database anonymizer endpoints at /cipi/db | false |
| CIPI_ANONYMIZER_TOKEN | Bearer token for the anonymizer endpoints | none |
Use php artisan cipi:generate-token {type} to generate tokens for
mcp, health, or anonymize. Use
php artisan cipi:service {type} --enable|--disable to toggle services — the
command updates your .env in place.
Apps
cipi app create
Creates a fully isolated Laravel application: Linux user, PHP-FPM pool, Nginx vhost, MariaDB
database, Supervisor worker, crontab entry, Deployer config, SSH deploy key, and auto-compiled
.env.
Interactive mode
$ cipi app create
Non-interactive (flags)
$ cipi app create \
--user=myapp \
--domain=myapp.com \
--repository=git@github.com:you/myapp.git \
--branch=main \
--php=8.4
app list / app show / app edit / app delete
| Command | Description |
|---|---|
| cipi app list | List all apps with domain, PHP version, and status |
| cipi app show <app> | Full details: domain, PHP, deploy key, workers, webhook |
| cipi app edit <app> --php=8.5 | Hot-swap PHP version. Updates FPM pool, Nginx socket, Supervisor, crontab, Deployer config, and .env — zero downtime |
| cipi app edit <app> --branch=develop | Change the deploy branch |
| cipi app env <app> | Open the app's .env file in nano as the app user |
| cipi app reset-password <app> | Regenerate the app's Linux user SSH password. The new password is displayed on screen — save it immediately |
| cipi app reset-db-password <app> | Regenerate the app's MariaDB password and automatically update the DB_PASSWORD value in the app's .env file |
| cipi app delete <app> | Permanently remove the app, user, database, Nginx vhost, FPM pool, and Supervisor workers. Asks for confirmation. |
Managing ENV variables
Every app has a single .env file living at /home/<app>/shared/.env.
It is created and pre-populated by Cipi during app create with the database
credentials, APP_KEY, APP_URL, cache/session/queue settings, and the
webhook token. The shared/ directory is symlinked into every release, so the same
.env is always active regardless of which release is current.
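The shared-file mechanism can be sketched in a few lines of shell (illustrative paths, not Cipi's exact layout):

```shell
#!/usr/bin/env bash
# Minimal illustration of the shared/.env mechanism: one real file in
# shared/, symlinked into each release, so every release reads the same
# configuration no matter which one "current" points at.
set -e
cd "$(mktemp -d)"
mkdir -p shared releases/1 releases/2
echo "APP_KEY=base64:demo" > shared/.env

ln -s ../../shared/.env releases/1/.env   # each release links to the
ln -s ../../shared/.env releases/2/.env   # same shared file

cat releases/2/.env                       # → APP_KEY=base64:demo
```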
Edit interactively via CLI
The safest way to change ENV values is through Cipi itself — it opens the file in nano as the app user, with the correct permissions:
$ cipi app env myapp
Save with Ctrl+O then exit with Ctrl+X. Changes take effect immediately for new requests — no restart needed for most values. If you change queue connection or cache driver, restart the workers:
$ cipi worker restart myapp
Edit directly via SSH
You can also edit the file directly over SSH as root or as the app user:
# as root
$ nano /home/myapp/shared/.env

# or switch to the app user first
$ su - myapp
$ nano ~/shared/.env
Key ENV variables set by Cipi
| Variable | Description | Set by |
|---|---|---|
| APP_KEY | Laravel encryption key — generated once at app creation | Cipi |
| APP_URL | Updated automatically by cipi ssl install | Cipi |
| DB_CONNECTION | Always mysql (MariaDB is drop-in compatible) | Cipi |
| DB_DATABASE / DB_USERNAME / DB_PASSWORD | Auto-generated credentials for the app's isolated database | Cipi |
| CACHE_STORE | database — uses the app's MariaDB | Cipi |
| SESSION_DRIVER | database | Cipi |
| QUEUE_CONNECTION | database | Cipi |
| CIPI_WEBHOOK_TOKEN | HMAC secret for cipi-agent webhook validation | Cipi |
| CIPI_APP_USER | Linux username owning this app | Cipi |
| CIPI_MCP | Enable or disable the built-in MCP server at /cipi/mcp | User (false by default) |
To rotate the database credentials, always use cipi db password myapp — it updates both MariaDB and the
.env atomically. Editing them by hand risks leaving the two out of sync.
Adding your own variables
Add any custom variable at the bottom of the file as you normally would in a Laravel project. They
are preserved across deploys because the .env lives in shared/ and is
never overwritten by Deployer.
# your custom variables
STRIPE_KEY=sk_live_...
STRIPE_SECRET=sk_live_...
MAIL_MAILER=smtp
MAIL_HOST=smtp.mailgun.org
cipi app logs
Tail application logs in real-time. Logs are rotated daily and kept for 14 days. By
default, all logs are shown including Laravel daily logs
(laravel-YYYY-MM-DD.log) from shared/storage/logs/.
$ cipi app logs myapp                  # all logs (incl. Laravel daily logs)
$ cipi app logs myapp --type=nginx     # Nginx access + error
$ cipi app logs myapp --type=php       # PHP-FPM errors
$ cipi app logs myapp --type=worker    # queue worker output
$ cipi app logs myapp --type=deploy    # deploy history
$ cipi app logs myapp --type=laravel   # Laravel application logs
app artisan & app tinker
Run Artisan commands and Tinker as the app user with the correct PHP version and
open_basedir context — exactly as they would run during a deploy.
$ cipi app artisan myapp migrate:status
$ cipi app artisan myapp queue:retry all
$ cipi app artisan myapp db:seed --class=ProductionSeeder
$ cipi app artisan myapp cache:clear
$ cipi app tinker myapp
SSH as the app user
Each app runs under its own isolated Linux user. Sometimes you need to work directly inside that user's environment — inspect files, run one-off scripts, or debug something that only reproduces as the correct user.
Direct SSH as app user (recommended)
App users can SSH directly to the server with the password generated at app creation:
# connect as the app user (password auth)
$ ssh myapp@your-server-ip

# you are directly inside the app user's shell
myapp@server:~$ pwd
/home/myapp
myapp@server:~$ cd ~/current
myapp@server:~$ ls
The password is shown when the app is created (or use cipi app reset-password myapp to
regenerate it). This works for SFTP clients, IDE remote sessions, and terminal access.
Via cipi (admin path)
If you are already connected as cipi, you can switch directly to any app user:
$ ssh cipi@your-server-ip
cipi@server:~$ sudo su - myapp
myapp@server:~$ pwd
/home/myapp
Reset the app user password
If you need to regenerate an app user's password (e.g. for direct SSH or SFTP), use:
$ cipi app reset-password myapp
The new password is displayed on screen — save it immediately.
Useful commands once logged in as the app user
# navigate to the active release
myapp@server:~$ cd ~/current

# run artisan directly with the correct PHP version
myapp@server:~$ /usr/bin/php8.4 ~/current/artisan tinker

# inspect the shared .env
myapp@server:~$ cat ~/shared/.env

# tail all logs
myapp@server:~$ tail -f ~/logs/*.log

# check active releases
myapp@server:~$ ls -lt ~/releases/
An open_basedir restriction limits PHP to /home/myapp.
This is enforced at the PHP-FPM level, not at the shell level — you can access any file your
shell user can read when working in the terminal.
Deploy & CI/CD
cipi deploy
Cipi uses Deployer for all deployments. Every deploy is atomic: a new release
directory is prepared fully before the current symlink is swapped, so traffic is never
interrupted.
Deploy pipeline
- Stop queue workers (cipi worker stop)
- Clone repo into releases/N/
- Run composer install --no-dev
- Link shared/.env and shared/storage/
- Run artisan migrate --force
- Run artisan optimize
- Run artisan storage:link
- Swap current symlink atomically
- Restart queue workers
- Prune old releases (keep last 5)
$ cipi deploy myapp                  # deploy latest commit
$ cipi deploy myapp --rollback       # instant rollback to previous release
$ cipi deploy myapp --releases       # list all releases with timestamps
$ cipi deploy myapp --key            # show the SSH deploy key
$ cipi deploy myapp --webhook        # show webhook URL and token
$ cipi deploy myapp --unlock         # remove a stuck deploy lock
$ cipi deploy myapp --trust-host=git.mycompany.com        # trust a custom Git server fingerprint
$ cipi deploy myapp --trust-host=git.mycompany.com:2222   # trust on non-standard port (also writes ~/.ssh/config)
If a deploy is interrupted, its lock file may remain. Run cipi deploy myapp --unlock to remove it before re-deploying.
auth.json
Manage the auth.json file for an app. This file lives at
/home/<app>/shared/auth.json and is automatically symlinked into every release by
Deployer — exactly like .env. Use it to store structured credential data (e.g. API
keys, feature flags, or any JSON payload) that your Laravel app can read at runtime.
$ cipi auth create myapp   # create auth.json with initial { "users": [] } structure
$ cipi auth edit myapp     # open in $EDITOR (fallback: nano), validate JSON on close
$ cipi auth show myapp     # print contents formatted with jq
$ cipi auth delete myapp   # delete file (asks for confirmation)
Command details
| Command | Description |
|---|---|
| cipi auth create <app> | Creates shared/auth.json with the initial structure {"users":[]}, sets permissions to 640 (owner app:app), and adds auth.json to shared_files in the app's Deployer config so it is symlinked on every deploy |
| cipi auth edit <app> | Opens shared/auth.json in $EDITOR (falls back to nano). After the editor closes, validates the JSON with jq and warns if the file is malformed |
| cipi auth show <app> | Prints the contents of shared/auth.json formatted with jq |
| cipi auth delete <app> | Asks for confirmation, then deletes shared/auth.json and removes the auth.json entry from shared_files in the app's Deployer config |
Deployer integration
cipi auth create automatically appends auth.json to the
shared_files list in /home/<app>/.deployer/deploy.php, and
cipi auth delete removes it. This means the file is treated exactly like
.env: it persists across releases and is never overwritten by a deploy.
Every cipi auth operation is logged via log_action for
auditability. The AUTH section is also listed in the output of
cipi help.
Git providers
Cipi is built for GitHub and GitLab, but any other Git provider that supports SSH deploy keys works too — no vendor lock-in.
For self-hosted or custom Git servers, you need to trust the server's host fingerprint before
Deployer can clone over SSH. Use the --trust-host flag to add the fingerprint to the
app user's ~/.ssh/known_hosts automatically:
# show the deploy key and add it to your Git provider
$ cipi deploy myapp --key

# trust a custom Git server fingerprint (standard port)
$ cipi deploy myapp --trust-host=git.mycompany.com

# trust a custom Git server on a non-standard port
# (also writes ~/.ssh/config automatically)
$ cipi deploy myapp --trust-host=git.mycompany.com:2222
When a port is specified, Cipi also adds a Host / Port entry to the app user's ~/.ssh/config so
that Deployer can reach the server without any extra configuration.
Git auto-setup
If you save a GitHub or GitLab Personal Access Token, Cipi
automatically adds the SSH deploy key and creates the webhook on the repository every time you run
cipi app create. No manual steps required.
Save a token
# GitHub (fine-grained or classic PAT)
$ cipi git github-token ghp_xxxxxxxxxxxxxxxxxxxx

# GitLab (gitlab.com)
$ cipi git gitlab-token glpat-xxxxxxxxxxxxxxxxxxxx

# GitLab (self-hosted — set the URL before or after the token)
$ cipi git gitlab-url https://gitlab.example.com
$ cipi git gitlab-token glpat-xxxxxxxxxxxxxxxxxxxx
GitHub token permissions
Fine-grained tokens (recommended) need Administration and
Webhooks set to Read and write on the target repositories. Classic tokens
need the repo scope.
GitLab token permissions
The api scope is the minimum required — GitLab does not offer a more granular scope
that covers both deploy keys and webhooks.
Automatic lifecycle
| Event | What Cipi does automatically |
|---|---|
| app create | Adds deploy key + creates webhook on the repository via API. The summary shows "auto-configured ✓" instead of manual instructions |
| app edit --repository=... | Removes deploy key + webhook from the old repository, then adds them to the new one |
| app delete | Removes deploy key + webhook from the repository before deleting the app |
cipi git commands
| Command | Description |
|---|---|
| cipi git status | Show provider connection status and per-app integration details (deploy key ID, webhook ID) |
| cipi git github-token <token> | Save a GitHub Personal Access Token |
| cipi git gitlab-token <token> | Save a GitLab Personal Access Token |
| cipi git gitlab-url <url> | Set the base URL for a self-hosted GitLab instance |
| cipi git remove-github | Remove the stored GitHub token |
| cipi git remove-gitlab | Remove the stored GitLab token and URL |
Manual setup (fallback)
Auto-setup is skipped when no token is configured, when the API call fails (wrong permissions, repository not found, rate limit), or when the repository is hosted on a provider other than GitHub or GitLab (e.g. Gitea, Forgejo, Bitbucket). In all these cases Cipi falls back to the manual workflow and the app creation proceeds normally.
To configure deploy key and webhook manually:
# print the SSH deploy key to add to your Git provider
$ cipi deploy myapp --key

# print the webhook URL and token
$ cipi deploy myapp --webhook

# if using a custom Git server, trust the host fingerprint first
$ cipi deploy myapp --trust-host=git.mycompany.com
Then add them in your provider's repository settings:
- Deploy key — GitHub: Settings → Deploy keys → Add deploy key; GitLab: Settings → Repository → Deploy keys
- Webhook — GitHub: Settings → Webhooks → Add webhook;
GitLab: Settings → Webhooks → Add new webhook. Set the payload URL and
secret to the values shown by
cipi deploy myapp --webhook
Customising the deploy script
The deploy configuration for each app is stored at /home/<app>/.deployer/deploy.php.
This file is auto-generated by Cipi during app create and updated automatically when you
change the PHP version or deploy branch via cipi app edit. You can edit it to customise
the deploy pipeline, but you should understand the implications before doing so.
Default deploy pipeline
The auto-generated deploy.php runs these tasks in order:
deploy:prepare          // create releases/N/ directory
deploy:vendors          // composer install --no-dev
deploy:shared           // link shared/.env and shared/storage/
artisan:migrate         // php artisan migrate --force
artisan:optimize        // php artisan optimize
artisan:storage:link    // php artisan storage:link
deploy:symlink          // swap current → releases/N/ atomically
cipi:restart-workers    // supervisorctl restart myapp-*
deploy:cleanup          // keep last 5 releases, delete older
Adding custom tasks
You can add tasks before or after any step. Common examples:
// Run artisan db:seed after migrations
after('artisan:migrate', 'artisan:db:seed');

// Clear view cache after symlink swap
after('deploy:symlink', 'artisan:view:clear');

// Custom task — send a Slack notification
task('notify:slack', function () {
    run('curl -X POST https://hooks.slack.com/... -d \'{"text":"Deployed!"}\'');
});
after('deploy:symlink', 'notify:slack');
Running additional artisan commands
// Seed only in specific environments
task('artisan:db:seed', function () {
run('{{bin/php}} {{release_path}}/artisan db:seed --force');
});
Cipi regenerates deploy.php when you run
cipi app edit myapp --php=X or cipi app edit myapp --branch=X. Back up
your customisations or keep them in a section clearly separated from the Cipi-managed blocks. A
safe pattern is to put all custom tasks at the bottom of the file after the default task
definition.
Disabling a default step
To skip a task — for example if you handle migrations manually — comment it out or remove it from the
deploy task definition:
// Remove the migrate step from the pipeline
task('deploy', [
    'deploy:prepare',
    'deploy:vendors',
    'deploy:shared',
    // 'artisan:migrate', ← disabled
    'artisan:optimize',
    'artisan:storage:link',
    'deploy:symlink',
    'cipi:restart-workers',
    'deploy:cleanup',
]);
Testing your changes
After editing deploy.php, always do a test deploy before pushing to production:
$ cipi deploy myapp

# If something goes wrong, instant rollback:
$ cipi deploy myapp --rollback

# If the deploy is stuck (e.g. interrupted mid-run):
$ cipi deploy myapp --unlock
The full deploy output is written to ~/logs/deploy.log and is also available via
cipi app logs myapp --type=deploy. Check it first when troubleshooting a failed
deploy.
CI/CD Pipelines — GitHub & GitLab
Cipi supports two integration patterns with CI/CD pipelines. Choose the one that fits your workflow.
Option A — Webhook (recommended)
The cleanest approach: install cipi-agent in your Laravel project, configure one webhook
in your Git provider, and every push to the target branch triggers a deploy automatically. The
pipeline does not need SSH access to your server.
# 1. Install the agent in your Laravel project
$ composer require andreapollastri/cipi-agent

# 2. Get your webhook URL and token
$ cipi deploy myapp --webhook
Then add the webhook in your Git provider (URL + token) and you are done. See the Cipi Agent — Webhook section for the full setup.
Option B — SSH deploy from the pipeline
If you want explicit control inside your pipeline (e.g. deploy only after tests pass, or only from a
specific environment), you can SSH into the server and run cipi deploy directly from
the CI job.
Add the public key to /root/.ssh/authorized_keys on the server and store the
private key as a CI secret. Never reuse deploy keys or personal keys.
Generate a dedicated CI key
# on your local machine
$ ssh-keygen -t ed25519 -C "ci-deploy" -f ~/.ssh/ci_deploy -N ""

# copy the public key to the server
$ ssh-copy-id -i ~/.ssh/ci_deploy.pub cipi@your-server-ip

# copy the private key content → add it as a CI secret
$ cat ~/.ssh/ci_deploy
GitHub Actions
Add the private key as a repository secret named SERVER_SSH_KEY and the server IP as
SERVER_HOST.
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: php artisan test
  deploy:
    runs-on: ubuntu-latest
    needs: test # only deploy if tests pass
    steps:
      - name: Deploy via Cipi
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: cipi
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: sudo cipi deploy myapp
For rollback on failure, extend the script step:
script: |
sudo cipi deploy myapp || (sudo cipi deploy myapp --rollback && exit 1)
GitLab CI/CD
Add the private key as a CI/CD variable named SERVER_SSH_KEY (a regular Variable, not
File type, since the snippet below pipes its value straight into ssh-add) and the server
IP as SERVER_HOST.
# .gitlab-ci.yml
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - php artisan test

deploy:
  stage: deploy
  environment: production
  only:
    - main
  before_script:
    - apt-get install -y openssh-client
    - eval $(ssh-agent -s)
    - echo "$SERVER_SSH_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan -H $SERVER_HOST >> ~/.ssh/known_hosts
  script:
    - ssh root@$SERVER_HOST "cipi deploy myapp"
With rollback on failure:
script:
- ssh root@$SERVER_HOST "cipi deploy myapp || (cipi deploy myapp --rollback && exit 1)"
Multi-app deploy
If the same pipeline manages multiple apps on the same server:
# GitHub Actions — deploy multiple apps in parallel
- name: Deploy
  uses: appleboy/ssh-action@v1
  with:
    host: ${{ secrets.SERVER_HOST }}
    username: root
    key: ${{ secrets.SERVER_SSH_KEY }}
    script: |
      cipi deploy frontend &
      cipi deploy api &
      wait
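One caveat with the `&`/`wait` pattern above is plain shell behaviour, not cipi-specific: a bare `wait` exits 0 even when a background job failed, so the CI step stays green after a failed deploy. Waiting on each PID propagates failures. A minimal sketch, with `true` and `false` standing in for the deploy commands:

```shell
# Stand-ins: `true` simulates a successful deploy, `false` a failed one.
true  & pid1=$!
false & pid2=$!

status=0
wait "$pid1" || status=1   # `wait PID` returns that job's exit status
wait "$pid2" || status=1

echo "pipeline exit status: $status"   # → pipeline exit status: 1
```

Replacing the bare `wait` with per-PID waits in the script above makes the CI step fail whenever any of the parallel deploys fails.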
Run cipi deploy myapp --unlock if a stuck lock is left behind.
Deploy notifications
Add notification steps to any pipeline to keep the team informed on deploy success, failure, and automatic rollbacks. Both examples below work with GitHub Actions and GitLab CI using only standard HTTP calls — no extra platform dependencies.
Slack
Add a final step that posts to a Slack webhook regardless of deploy outcome. Use
if: always() in GitHub Actions so the notification fires on both success and failure.
Create an Incoming Webhook in your Slack workspace and store the URL as
SLACK_WEBHOOK_URL in your CI secrets.
# GitHub Actions — deploy + Slack notification
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        id: deploy
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: cipi deploy myapp
      - name: Notify Slack — success
        if: success()
        uses: slackapi/slack-github-action@v2
        with:
          webhook: ${{ secrets.SLACK_WEBHOOK_URL }}
          webhook-type: incoming-webhook
          payload: |
            {
              "text": ":white_check_mark: *myapp* deployed successfully",
              "attachments": [{
                "color": "good",
                "fields": [
                  { "title": "Branch", "value": "${{ github.ref_name }}", "short": true },
                  { "title": "By", "value": "${{ github.actor }}", "short": true },
                  { "title": "Commit", "value": "${{ github.sha }}", "short": false }
                ]
              }]
            }
      - name: Notify Slack — failure
        if: failure()
        uses: slackapi/slack-github-action@v2
        with:
          webhook: ${{ secrets.SLACK_WEBHOOK_URL }}
          webhook-type: incoming-webhook
          payload: |
            {
              "text": ":x: *myapp* deploy FAILED — rolling back",
              "attachments": [{
                "color": "danger",
                "fields": [
                  { "title": "Branch", "value": "${{ github.ref_name }}", "short": true },
                  { "title": "By", "value": "${{ github.actor }}", "short": true },
                  { "title": "Run", "value": "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}", "short": false }
                ]
              }]
            }
      - name: Rollback on failure
        if: failure()
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: cipi deploy myapp --rollback
For GitLab CI, use curl directly — no plugin needed:
# .gitlab-ci.yml — deploy stage with Slack notification
deploy:
  stage: deploy
  script:
    - ssh root@$SERVER_HOST "cipi deploy myapp" && export DEPLOY_STATUS="success" || export DEPLOY_STATUS="failed"
    - |
      if [ "$DEPLOY_STATUS" = "success" ]; then
        curl -s -X POST "$SLACK_WEBHOOK_URL" \
          -H "Content-Type: application/json" \
          -d "{\"text\":\":white_check_mark: *myapp* deployed by $GITLAB_USER_LOGIN on \`$CI_COMMIT_REF_NAME\`\"}"
      else
        curl -s -X POST "$SLACK_WEBHOOK_URL" \
          -H "Content-Type: application/json" \
          -d "{\"text\":\":x: *myapp* deploy FAILED — <$CI_PIPELINE_URL|view pipeline>\"}"
        ssh root@$SERVER_HOST "cipi deploy myapp --rollback"
        exit 1
      fi
Telegram
Create a Telegram bot via @BotFather, get the bot token, and find your chat/group ID.
Store them as TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID in CI secrets.
# GitHub Actions — deploy + Telegram notification
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        id: deploy
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: cipi deploy myapp
      - name: Notify Telegram — success
        if: success()
        run: |
          curl -s -X POST "https://api.telegram.org/bot${{ secrets.TELEGRAM_BOT_TOKEN }}/sendMessage" \
            -d chat_id="${{ secrets.TELEGRAM_CHAT_ID }}" \
            -d parse_mode="Markdown" \
            -d text="✅ *myapp* deployed successfully%0ABranch: \`${{ github.ref_name }}\`%0ABy: ${{ github.actor }}"
      - name: Notify Telegram — failure + rollback
        if: failure()
        run: |
          curl -s -X POST "https://api.telegram.org/bot${{ secrets.TELEGRAM_BOT_TOKEN }}/sendMessage" \
            -d chat_id="${{ secrets.TELEGRAM_CHAT_ID }}" \
            -d parse_mode="Markdown" \
            -d text="❌ *myapp* deploy FAILED — rolling back%0ABranch: \`${{ github.ref_name }}\`%0A[View run](${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }})"
          ssh -o StrictHostKeyChecking=no -i <(echo "${{ secrets.SERVER_SSH_KEY }}") \
            root@${{ secrets.SERVER_HOST }} "cipi deploy myapp --rollback"
GitLab CI equivalent (pure curl, no extra dependencies):
# .gitlab-ci.yml — deploy stage with Telegram notification
deploy:
  stage: deploy
  script:
    - ssh root@$SERVER_HOST "cipi deploy myapp" && RESULT="✅ deployed" || RESULT="❌ FAILED"
    - |
      curl -s -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
        -d chat_id="$TELEGRAM_CHAT_ID" \
        -d parse_mode="Markdown" \
        -d text="*myapp* ${RESULT}%0ABranch: \`$CI_COMMIT_REF_NAME\`%0ABy: $GITLAB_USER_LOGIN"
    - |
      if echo "$RESULT" | grep -q "FAILED"; then
        ssh root@$SERVER_HOST "cipi deploy myapp --rollback"
        exit 1
      fi
To find your chat ID, open https://api.telegram.org/bot<TOKEN>/getUpdates and look for the
chat.id field in the response. For private chats, just message the bot first.
Safe deploy — backup before release
A production-grade deploy pipeline should always create a restore point before the new code goes live. Cipi provides two complementary backup commands that map to two different safety levels:
# local DB snapshot — fast, on-disk, instant rollback
$ cipi db backup myapp
# → /var/log/cipi/backups/myapp_20260303_143012.sql.gz

# S3 backup — DB dump + shared/ folder uploaded to your bucket
$ cipi backup run myapp
# → s3://your-bucket/cipi/myapp/2026-03-03_143015/db.sql.gz
# → s3://your-bucket/cipi/myapp/2026-03-03_143015/shared.tar.gz
Used together in a pipeline, they give you both a fast local restore point and an off-server copy of the database and all uploaded files. The deploy only starts if both backups succeed.
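The "deploy only if both backups succeed" gate is ordinary shell short-circuiting. A minimal simulation, with `true` and `false` standing in for the backup and deploy commands:

```shell
# run_gate <backup1> <backup2> — "deploy" is printed only when both
# stand-in backup commands succeed; otherwise "aborted".
run_gate() {
  "$1" && "$2" && echo "deploy" || echo "aborted"
}

run_gate true true    # → deploy
run_gate true false   # → aborted
```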
Run cipi backup configure once on the server to
link your S3 credentials before cipi backup run can be used.
cipi db backup works without any configuration — it is always available.
What each command does internally
cipi db backup <app> calls
mysqldump --single-transaction --routines --triggers and gzips the output to
/var/log/cipi/backups/<app>_<timestamp>.sql.gz. The file stays on the
server and is never deleted automatically — add a cleanup step or a cron if disk space matters.
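Since local snapshots accumulate, a "keep the N newest" retention rule can be simulated end-to-end. The sketch below runs against a temp directory standing in for /var/log/cipi/backups/:

```shell
# Create 8 dummy snapshots, then keep only the 5 newest. `ls -t` lists
# newest-first; `tail -n +6` selects everything after the first five entries.
dir=$(mktemp -d)
for i in 1 2 3 4 5 6 7 8; do
  touch "$dir/myapp_2026030${i}_000000.sql.gz"
  sleep 0.01   # distinct mtimes so `ls -t` orders deterministically
done

ls -t "$dir"/myapp_*.sql.gz | tail -n +6 | xargs -r rm -f
ls "$dir" | wc -l   # → 5
```

Pointed at the real backup directory, the same one-liner works as a cron-driven cleanup step.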
cipi backup run <app> does two things: dumps the database with
mariadb-dump --single-transaction into a temp dir, and archives the entire
/home/<app>/shared/ folder (which contains .env,
storage/, and any user-uploaded files). Both archives are then uploaded to S3 under the
path cipi/<app>/<timestamp>/. The temp files are deleted after a successful
upload.
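The shared/ archive step can be sketched locally. Everything below runs against a temp directory standing in for /home/<app>/; the DB dump and S3 upload are cipi internals and omitted here:

```shell
home=$(mktemp -d)                # stands in for /home/myapp/
mkdir -p "$home/shared/storage"
echo "APP_KEY=..." > "$home/shared/.env"

ts=$(date +%Y-%m-%d_%H%M%S)      # the timestamp format used in the S3 path
tar -czf "$home/shared.tar.gz" -C "$home" shared

# the archive contains the whole shared/ tree, .env included
tar -tzf "$home/shared.tar.gz"
```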
GitHub Actions — safe deploy workflow
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: php artisan test
  backup:
    runs-on: ubuntu-latest
    needs: test
    steps:
      - name: Local DB backup
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: cipi db backup myapp
      - name: S3 backup (DB + shared)
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: cipi backup run myapp
  deploy:
    runs-on: ubuntu-latest
    needs: backup # only runs if backup job succeeds
    steps:
      - name: Deploy
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: cipi deploy myapp
      - name: Rollback on failure
        if: failure()
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: |
            cipi deploy myapp --rollback
            echo "Deploy failed — rolled back to previous release"
The job graph enforces the order: test → backup → deploy. If
any job fails, the subsequent ones are skipped. If the deploy step itself fails, the
rollback step fires automatically and restores the previous Deployer release.
GitLab CI/CD — safe deploy pipeline
# .gitlab-ci.yml
stages:
  - test
  - backup
  - deploy

variables:
  APP: myapp

.ssh: &ssh
  before_script:
    - apt-get install -y openssh-client
    - eval $(ssh-agent -s)
    - echo "$SERVER_SSH_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan -H "$SERVER_HOST" >> ~/.ssh/known_hosts

test:
  stage: test
  script: php artisan test
  only: [main]

backup-local:
  stage: backup
  <<: *ssh
  only: [main]
  script:
    - ssh root@$SERVER_HOST "cipi db backup $APP"

backup-s3:
  stage: backup
  <<: *ssh
  only: [main]
  script:
    - ssh root@$SERVER_HOST "cipi backup run $APP"

deploy:
  stage: deploy
  <<: *ssh
  only: [main]
  script:
    - |
      ssh root@$SERVER_HOST "
        cipi deploy $APP || {
          cipi deploy $APP --rollback
          echo 'Deploy failed — rolled back'
          exit 1
        }
      "
  after_script:
    - echo "Released → https://myapp.com"
backup-local and backup-s3 are in the same stage so they run in parallel if
you have multiple runners, cutting overall pipeline time. Both must succeed before the
deploy stage starts.
Restore from local backup
If you need to roll back the database to the snapshot taken just before the deploy:
# list available local snapshots
$ ls -lh /var/log/cipi/backups/myapp_*.sql.gz

# restore the most recent one
$ cipi db restore myapp /var/log/cipi/backups/myapp_20260303_143012.sql.gz

# also roll back the code release
$ cipi deploy myapp --rollback
Restore from S3 backup
# list available S3 snapshots for this app
$ cipi backup list myapp

# download the DB snapshot from S3
$ aws s3 cp s3://your-bucket/cipi/myapp/2026-03-03_143015/db.sql.gz /tmp/db.sql.gz

# restore the database
$ cipi db restore myapp /tmp/db.sql.gz

# (optional) restore shared/ files
$ aws s3 cp s3://your-bucket/cipi/myapp/2026-03-03_143015/shared.tar.gz /tmp/shared.tar.gz
$ tar -xzf /tmp/shared.tar.gz -C /home/myapp/
Every run of cipi db backup adds a new .sql.gz file to
/var/log/cipi/backups/. On a busy deployment schedule, add a cleanup cron or keep only
the last N files:

$ ls -t /var/log/cipi/backups/myapp_*.sql.gz | tail -n +6 | xargs rm -f

This example keeps the 5 most recent snapshots and deletes older ones.
Preview environments (per-branch deploy)
Every branch in your repository can automatically get its own live URL — a fully deployed Laravel app with its own database, workers, and HTTPS. This pattern is sometimes called "review apps" or "ephemeral environments".
The URL format uses three slugs separated by hyphens, so each environment is human-readable and globally unique:
https://develop-acmeco-3a1f9c2e.preview.domain.ltd
https://release-1-2-3-acmeco-3a1f9c2e.preview.domain.ltd
https://main-acmeco-3a1f9c2e.preview.domain.ltd
How the identifiers are generated
Three values are derived at pipeline runtime:
# branch name → lowercase, non-alphanum → hyphens, trim edges
BRANCH_SLUG=$(echo "$BRANCH" | tr '[:upper:]' '[:lower:]' \
  | sed 's/[^a-z0-9]/-/g; s/--*/-/g; s/^-//; s/-$//')

# repo/project name → same treatment
PROJECT_SLUG=$(echo "$PROJECT" | tr '[:upper:]' '[:lower:]' \
  | sed 's/[^a-z0-9]/-/g')

# deterministic MD5 hash — same branch always gets the same environment
HASH=$(echo -n "${BRANCH_SLUG}${PROJECT_SLUG}" | md5sum | cut -c1-8)

# Cipi app username: must be lowercase alphanumeric, 3–32 chars, no hyphens
# hex chars (0–9, a–f) are valid; prefix "pr" ensures it starts with a letter
APP_NAME="pr${HASH}"   # e.g. pr3a1f9c2e

# human-readable domain with wildcard base
DOMAIN="${BRANCH_SLUG}-${PROJECT_SLUG}-${HASH}.${DEPLOY_WILDCARD_DOMAIN}"
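A worked run of the rules above, using an illustrative branch release/1.2.3 in a repository named AcmeCo:

```shell
BRANCH_SLUG=$(echo "release/1.2.3" | tr '[:upper:]' '[:lower:]' \
  | sed 's/[^a-z0-9]/-/g; s/--*/-/g; s/^-//; s/-$//')
PROJECT_SLUG=$(echo "AcmeCo" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g')
HASH=$(echo -n "${BRANCH_SLUG}${PROJECT_SLUG}" | md5sum | cut -c1-8)

echo "$BRANCH_SLUG"    # → release-1-2-3
echo "$PROJECT_SLUG"   # → acmeco
echo "pr${HASH}"       # always the same 10-char app name for this branch+repo
```

Because the hash is derived only from the two slugs, re-running the pipeline for the same branch always maps to the same app and domain.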
Pre-requisites (one-time server setup)
1. Wildcard DNS — create an A record
*.preview.domain.ltd → <server-ip> in your DNS provider. All subdomains
resolve automatically; no per-branch DNS changes needed.
2. Wildcard SSL certificate — obtain a wildcard cert via DNS-01 challenge once and install it on the server. See the Wildcard domains section for instructions. The cert path used by the pipeline examples below is
/etc/letsencrypt/live/preview.domain.ltd/.
3. Repository access — the pipeline examples use an HTTPS URL with a personal access token (PAT) embedded, so no per-app SSH deploy key setup is needed. The token only needs read access to the repository.
GitHub Actions
Add these secrets to the repository: SERVER_HOST, SERVER_SSH_KEY,
DEPLOY_WILDCARD_DOMAIN (e.g. preview.domain.ltd), GH_PAT (a
fine-grained PAT with read access to the repo).
# .github/workflows/preview.yml
name: Preview
on:
  push:
    branches-ignore: [main, master] # main branch uses your production pipeline
  delete: # clean up when a branch is deleted
jobs:
  deploy:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - name: Compute identifiers
        id: ids
        run: |
          BRANCH_SLUG=$(echo "${{ github.ref_name }}" \
            | tr '[:upper:]' '[:lower:]' \
            | sed 's/[^a-z0-9]/-/g; s/--*/-/g; s/^-//; s/-$//')
          PROJECT_SLUG=$(echo "${{ github.event.repository.name }}" \
            | tr '[:upper:]' '[:lower:]' \
            | sed 's/[^a-z0-9]/-/g')
          HASH=$(echo -n "${BRANCH_SLUG}${PROJECT_SLUG}" | md5sum | cut -c1-8)
          APP_NAME="pr${HASH}"
          DOMAIN="${BRANCH_SLUG}-${PROJECT_SLUG}-${HASH}.${{ secrets.DEPLOY_WILDCARD_DOMAIN }}"
          REPO="https://oauth2:${{ secrets.GH_PAT }}@github.com/${{ github.repository }}.git"
          echo "app_name=${APP_NAME}" >> "$GITHUB_OUTPUT"
          echo "domain=${DOMAIN}" >> "$GITHUB_OUTPUT"
          echo "repo_url=${REPO}" >> "$GITHUB_OUTPUT"
      - name: Create or update preview
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: |
            APP="${{ steps.ids.outputs.app_name }}"
            DOMAIN="${{ steps.ids.outputs.domain }}"
            REPO="${{ steps.ids.outputs.repo_url }}"
            BRANCH="${{ github.ref_name }}"
            WILDCARD="/etc/letsencrypt/live/${{ secrets.DEPLOY_WILDCARD_DOMAIN }}"
            if cipi app show "$APP" &>/dev/null; then
              echo "→ Updating: $APP"
              cipi deploy "$APP"
            else
              echo "→ Creating: $APP → $DOMAIN"
              cipi app create \
                --user="$APP" \
                --domain="$DOMAIN" \
                --repository="$REPO" \
                --branch="$BRANCH" \
                --php=8.4
              # Patch nginx to listen on 443 using the pre-installed wildcard cert
              awk -v cert="$WILDCARD" '
                /^ listen 80;/ {
                  print
                  print " listen 443 ssl http2;"
                  print " ssl_certificate " cert "/fullchain.pem;"
                  print " ssl_certificate_key " cert "/privkey.pem;"
                  next
                }
                { print }
              ' "/etc/nginx/sites-available/$APP" > /tmp/_cipi_vhost \
                && mv /tmp/_cipi_vhost "/etc/nginx/sites-available/$APP"
              nginx -t && systemctl reload nginx
              cipi deploy "$APP"
            fi
      - name: Print preview URL
        run: |
          echo ""
          echo " Preview → https://${{ steps.ids.outputs.domain }}"
          echo ""
  cleanup:
    if: github.event_name == 'delete'
    runs-on: ubuntu-latest
    steps:
      - name: Compute identifiers
        id: ids
        run: |
          BRANCH_SLUG=$(echo "${{ github.event.ref }}" \
            | tr '[:upper:]' '[:lower:]' \
            | sed 's/[^a-z0-9]/-/g; s/--*/-/g; s/^-//; s/-$//')
          PROJECT_SLUG=$(echo "${{ github.event.repository.name }}" \
            | tr '[:upper:]' '[:lower:]' \
            | sed 's/[^a-z0-9]/-/g')
          HASH=$(echo -n "${BRANCH_SLUG}${PROJECT_SLUG}" | md5sum | cut -c1-8)
          echo "app_name=pr${HASH}" >> "$GITHUB_OUTPUT"
      - name: Delete preview
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: root
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: |
            APP="${{ steps.ids.outputs.app_name }}"
            if cipi app show "$APP" &>/dev/null; then
              echo "y" | cipi app delete "$APP"
              echo "→ Deleted: $APP"
            else
              echo "→ Not found, nothing to delete"
            fi
GitLab CI/CD
Add these CI/CD variables: SERVER_HOST, SERVER_SSH_KEY (a regular
variable, not File type, since the ssh_setup snippet pipes its value into ssh-add),
DEPLOY_WILDCARD_DOMAIN, GL_TOKEN (a project/group access token with
read_repository scope).
# .gitlab-ci.yml
stages:
  - preview
  - cleanup

.ssh_setup: &ssh_setup
  before_script:
    - apt-get install -y openssh-client
    - eval $(ssh-agent -s)
    - echo "$SERVER_SSH_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan -H "$SERVER_HOST" >> ~/.ssh/known_hosts

.compute_ids: &compute_ids |
  BRANCH_SLUG=$(echo "$CI_COMMIT_REF_NAME" \
    | tr '[:upper:]' '[:lower:]' \
    | sed 's/[^a-z0-9]/-/g; s/--*/-/g; s/^-//; s/-$//')
  PROJECT_SLUG=$(echo "$CI_PROJECT_NAME" \
    | tr '[:upper:]' '[:lower:]' \
    | sed 's/[^a-z0-9]/-/g')
  HASH=$(echo -n "${BRANCH_SLUG}${PROJECT_SLUG}" | md5sum | cut -c1-8)
  APP="pr${HASH}"
  DOMAIN="${BRANCH_SLUG}-${PROJECT_SLUG}-${HASH}.${DEPLOY_WILDCARD_DOMAIN}"
  REPO="https://oauth2:${GL_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git"
  WILDCARD="/etc/letsencrypt/live/${DEPLOY_WILDCARD_DOMAIN}"

deploy-preview:
  stage: preview
  <<: *ssh_setup
  except:
    - main
    - master
  script:
    - *compute_ids
    - |
      ssh root@$SERVER_HOST bash -s << ENDSSH
      APP="$APP"
      DOMAIN="$DOMAIN"
      REPO="$REPO"
      BRANCH="$CI_COMMIT_REF_NAME"
      WILDCARD="$WILDCARD"
      if cipi app show "\$APP" &>/dev/null; then
        echo "Updating: \$APP"
        cipi deploy "\$APP"
      else
        echo "Creating: \$APP → \$DOMAIN"
        cipi app create \
          --user="\$APP" \
          --domain="\$DOMAIN" \
          --repository="\$REPO" \
          --branch="\$BRANCH" \
          --php=8.4
        awk -v cert="\$WILDCARD" '
          /^ listen 80;/ {
            print
            print " listen 443 ssl http2;"
            print " ssl_certificate " cert "/fullchain.pem;"
            print " ssl_certificate_key " cert "/privkey.pem;"
            next
          }
          { print }
        ' "/etc/nginx/sites-available/\$APP" > /tmp/_cipi_vhost \
          && mv /tmp/_cipi_vhost "/etc/nginx/sites-available/\$APP"
        nginx -t && systemctl reload nginx
        cipi deploy "\$APP"
      fi
      ENDSSH
    - echo "Preview → https://$DOMAIN"

cleanup-preview:
  stage: cleanup
  <<: *ssh_setup
  only:
    - branches
  when: manual # or trigger on MR merge via rules:
  script:
    - *compute_ids
    - |
      ssh root@$SERVER_HOST "
        APP='$APP'
        if cipi app show \"\$APP\" &>/dev/null; then
          echo 'y' | cipi app delete \"\$APP\"
        fi
      "
You can trigger cleanup-preview automatically when a merge request is
merged by adding a rules: block that checks
$CI_MERGE_REQUEST_EVENT_TYPE == "merge_train" or using a dedicated
workflow: with if: $CI_PIPELINE_SOURCE == "merge_request_event".
Notes and limits
- Previews are not cleaned up automatically: run cipi app list periodically and delete stale previews.
- The nginx SSL patch is not idempotent — if the pipeline runs cipi app create twice (e.g. due to a retry), the awk patch will be applied again. The hash ensures APP_NAME is deterministic, so the if cipi app show guard prevents double-creation under normal conditions.
- Avoid running cipi ssl install on a preview app — it will overwrite the wildcard cert config with a per-domain Let's Encrypt cert that will fail (the domain has no dedicated DNS record, only the wildcard).
Infrastructure
cipi php
Multiple PHP versions are installed during setup. New versions can be added at any time from the ondrej/php PPA (the de-facto standard for PHP on Ubuntu — new releases typically land there within days of their official release).
$ cipi php list        # list installed PHP versions and their status
$ cipi php install 8.5 # install a new PHP version
$ cipi php remove 8.4  # remove a version (only if no apps use it)
To switch an existing app to a different version:
$ cipi app edit myapp --php=8.5
This hot-swaps the PHP version with zero downtime: updates the FPM pool, Nginx socket, Supervisor
workers, crontab, Deployer config, and .env in one atomic operation.
cipi db
Cipi creates a dedicated MariaDB database for each app automatically during app create.
You can also manage additional standalone databases.
$ cipi db create                  # interactive
$ cipi db create --name=analytics # non-interactive
$ cipi db list                    # list all databases with sizes
$ cipi db backup myapp            # dump to /var/log/cipi/backups/
$ cipi db restore myapp backup.sql.gz
$ cipi db password myapp          # regenerate database password
$ cipi db delete analytics
cipi alias
Add multiple domains or subdomains to any app. After adding aliases, run
cipi ssl install to provision or renew the certificate with SAN coverage for all
domains.
$ cipi alias add myapp www.myapp.com
$ cipi alias add myapp myapp.it
$ cipi alias list myapp
$ cipi alias remove myapp myapp.it
cipi ssl
Certbot manages Let's Encrypt certificates. Certificates auto-renew via a weekly cron.
cipi ssl status shows expiry dates with color-coded warnings: green (>30 days), yellow
(14–30 days), red (<14 days).
$ cipi ssl install myapp # provision / renew — includes all aliases (SAN)
$ cipi ssl renew         # force renewal of all certificates
$ cipi ssl status        # show all certs with expiry dates
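The expiry math behind the cipi ssl status colour thresholds is easy to reproduce with openssl. This is a sketch of the standard way to read a cert's notAfter date, not cipi's actual code; it runs against a throwaway self-signed cert and assumes GNU date:

```shell
# generate a throwaway cert valid for 90 days
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 90 -subj "/CN=demo" 2>/dev/null

# read notAfter and compute days remaining
end=$(openssl x509 -enddate -noout -in /tmp/demo.crt | cut -d= -f2)
days=$(( ( $(date -d "$end" +%s) - $(date +%s) ) / 86400 ))
echo "$days days remaining"   # ~90 here; <30 would be yellow, <14 red
```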
After cipi alias add, always run
cipi ssl install again to provision a new SAN certificate covering all domains.
cipi backup
Back up databases and storage to Amazon S3 or any S3-compatible provider (Hetzner Object Storage, DigitalOcean Spaces, Backblaze B2, MinIO, etc.).
Setup
$ cipi backup configure
# → AWS Access Key ID
# → AWS Secret Access Key
# → Bucket name
# → Region
# → Endpoint URL (leave empty for AWS; required for other providers)
S3-compatible endpoints
| Provider | Endpoint URL |
|---|---|
| AWS S3 | leave empty |
| Hetzner | https://<datacenter>.your-objectstorage.com |
| DigitalOcean Spaces | https://<region>.digitaloceanspaces.com |
| Backblaze B2 | https://s3.<region>.backblazeb2.com |
| MinIO | https://your-minio-host |
Running backups
$ cipi backup configure             # configure S3 credentials
$ cipi backup run                   # backup all apps
$ cipi backup run myapp             # backup a single app
$ cipi backup list                  # list all backups
$ cipi backup list myapp            # list backups for one app
$ cipi backup prune myapp --weeks=4 # delete backups older than 4 weeks
Each backup uploads to s3://your-bucket/cipi/appname/YYYY-MM-DD_HHMMSS/ and contains:
- db.sql.gz — compressed database dump
- shared.tar.gz — the entire shared/ directory (.env + storage/)
Scheduling automatic backups
# Add to root crontab (crontab -e)
0 2 * * * /usr/local/bin/cipi backup run >> /var/log/cipi/backup.log 2>&1
Pruning old backups from S3
Backups accumulate over time. Use cipi backup prune to delete backup folders older than
N weeks from S3. Run it as a cron job alongside the backup itself.
$ cipi backup prune myapp --weeks=4 # delete backups older than 4 weeks
$ cipi backup prune myapp --weeks=2 # keep only the last 2 weeks
Add both commands to the root crontab to run automatically:
# root crontab — backup at 02:00, prune at 03:00 (keep 4 weeks)
0 2 * * * /usr/local/bin/cipi backup run myapp >> /var/log/cipi/backup.log 2>&1
0 3 * * * /usr/local/bin/cipi backup prune myapp --weeks=4 >> /var/log/cipi/backup-prune.log 2>&1
cipi backup prune reads S3 credentials from /etc/cipi/backup.conf,
written by cipi backup configure. It works with any S3-compatible provider.
Adjust --weeks to match your retention policy (e.g. --weeks=2 for two
weeks, --weeks=8 for two months).
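What "older than N weeks" resolves to is a cutoff date that the timestamped backup folder names can be compared against. A sketch of that comparison (not cipi's internals), with a fixed "now" so the result is reproducible, assuming GNU date:

```shell
now="2026-03-03"
cutoff=$(date -d "$now -4 weeks" +%Y-%m-%d)
echo "$cutoff"   # → 2026-02-03

# folder names embed the timestamp, so a lexicographic compare of the
# zero-padded ISO date prefix is enough
for name in 2026-01-01_020000 2026-03-01_020000; do
  day=${name%%_*}
  if expr "$day" \< "$cutoff" >/dev/null; then
    echo "prune $name"
  else
    echo "keep  $name"
  fi
done
```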
User crontab
Cipi automatically adds a crontab entry for the Laravel scheduler when an app is created:
# installed automatically by cipi app create
* * * * * /usr/bin/php8.4 /home/myapp/current/artisan schedule:run >> /dev/null 2>&1
This entry runs as the myapp Linux user every minute, using the PHP version selected for
the app. It is updated automatically when you change PHP version via
cipi app edit myapp --php=X.
Viewing the current crontab
# as root — view the app user's crontab
$ crontab -u myapp -l

# or after switching to the app user
$ su - myapp
myapp@server:~$ crontab -l
Adding custom cron jobs
You can add extra cron jobs to the app user's crontab. Switch to the app user first to ensure jobs run with the correct user context and file permissions:
$ su - myapp
myapp@server:~$ crontab -e
Example entries you might add:
# existing Laravel scheduler (do not remove)
* * * * * /usr/bin/php8.4 /home/myapp/current/artisan schedule:run >> /dev/null 2>&1

# nightly database backup at 2 AM
0 2 * * * /usr/local/bin/cipi db backup myapp >> /home/myapp/logs/backup.log 2>&1

# custom script every 15 minutes
*/15 * * * * /home/myapp/current/scripts/sync.sh >> /home/myapp/logs/sync.log 2>&1
If you accidentally remove the scheduler entry, run
cipi app edit myapp --php=<current-version> to restore it. Always keep the
schedule:run line as the first entry so it is easy to identify.
Jobs added this way run as the app user and can only access files under
/home/myapp/. Jobs that require root access should be added to the root crontab
instead, with crontab -e as root.
Checking if cron is working
# check system cron log
$ grep CRON /var/log/syslog | grep myapp | tail -20

# check Laravel scheduler execution
$ cipi app artisan myapp schedule:list
cipi worker
Every app gets a default Supervisor worker for the default queue. You can add additional
queues with custom process counts and timeouts.
$ cipi worker add myapp --queue=emails --processes=3
$ cipi worker add myapp --queue=exports --processes=1 --timeout=7200
$ cipi worker list myapp
$ cipi worker edit myapp --queue=default --processes=3
$ cipi worker remove myapp emails
$ cipi worker restart myapp # restart all workers for the app
$ cipi worker stop myapp    # stop all workers for the app (used during deploys)
| Flag | Description |
|---|---|
| --queue=<name> | Queue name to consume (e.g. default, emails, exports) |
| --processes=<n> | Number of parallel worker processes |
| --timeout=<seconds> | Job timeout in seconds. Default is 60. |
Workers are stopped before the symlink swap and restarted after every deploy, preventing
Supervisor from picking up stale artisan paths. Supervisor is configured with
autorestart=unexpected so workers only restart on unexpected exits, not on graceful
stops.
cipi firewall
Cipi installs UFW with ports 22, 80, and 443 open by default. Use the firewall commands to manage additional rules without touching UFW directly.
$ cipi firewall allow 3306                    # open a port
$ cipi firewall allow 3306 --from=10.0.0.5    # allow from specific IP
$ cipi firewall allow 3306 --from=10.0.0.0/24 # allow from subnet
$ cipi firewall deny 8080                     # block a port
$ cipi firewall list                          # show all rules
cipi service
Check and control the system services that power Cipi directly from the CLI. Nginx uses a graceful reload (zero downtime) instead of a full restart.
$ cipi service list            # status of all services
$ cipi service list nginx      # status of a specific service
$ cipi service restart         # restart all services
$ cipi service restart nginx   # graceful reload (zero downtime)
$ cipi service restart php     # restart all PHP-FPM versions
$ cipi service start fail2ban
$ cipi service stop supervisor # asks for confirmation
Supported service names: nginx, mariadb, redis-server,
supervisor, fail2ban, php<ver>-fpm (e.g.
php8.4-fpm). The keyword
php targets all installed PHP-FPM versions at once.
Redis is included in the default stack (since Cipi 4.0.4). It is installed with a password, bound to
localhost only, and its credentials (user, password) are saved in
/etc/cipi/server.json and shown at the end of installation. redis-server is
added to the unattended-upgrades blacklist — Cipi manages it, so it is not upgraded
automatically.
cipi ssh — SSH Key Management
Manage the authorized SSH keys for the cipi user — the admin SSH entry point. The
cipi user (group cipi-ssh) uses public-key only; root login is disabled.
App users (group cipi-apps) connect with password — see SSH as
the app user.
Commands
$ cipi ssh list              # list all authorized keys with fingerprint, comment, and current-session marker
$ cipi ssh add [key]         # add a new SSH public key (validates format, prevents duplicates)
$ cipi ssh remove [n]        # remove a key by number
$ cipi ssh rename [n] [name] # change the display name / comment of a key
Safety mechanisms
cipi ssh remove includes two safeguards to prevent lockout:
- Current-session protection — you cannot remove the key used by your active SSH session.
- Last-key protection — you cannot remove the last remaining authorized key.
Key comments
SSH keys are stored with their original comments intact, making it easy to identify who each key
belongs to. Use cipi ssh rename to change the display name of any key:
# list keys to find the number
$ cipi ssh list

# rename key #2
$ cipi ssh rename 2 "john-macbook"
Email notifications
When SMTP is configured, Cipi sends an email alert every time a key is added, removed, or renamed. The notification includes the server hostname, IP address, key fingerprint, key comment, timestamp, and remaining key count. Rename notifications also include the old and new key name.
cipi — Server & Self-Update
Top-level commands for server status and Cipi self-management.
$ cipi status              # CPU, RAM, disk, services, PHP versions, apps
$ cipi version             # show installed Cipi version
$ cipi self-update         # update Cipi to the latest version
$ cipi self-update --check # check for updates without installing
Password & credential reset
Cipi provides commands to regenerate server-level passwords. New passwords are stored in
/etc/cipi/server.json (encrypted via Vault) and displayed on screen.
Save them immediately — they are shown only once.
$ cipi reset root-password  # regenerate the root Linux user SSH password
$ cipi reset db-password    # regenerate the MariaDB root password
$ cipi reset redis-password # regenerate the Redis password and restart the service
cipi reset redis-password restarts the Redis service. Connected clients will be
temporarily disconnected. If your apps use Redis for cache or sessions, expect a brief
interruption.
Advanced
cipi api
Cipi can optionally enable a REST API layer on the server via cipi api
<domain>. The API exposes operations on apps, aliases, and SSL with Bearer token
authentication and granular permissions.
Commands
$ cipi api <domain>          # configure API at root (e.g. api.myhosting.com)
$ cipi api ssl               # install Let's Encrypt certificate for API domain
$ cipi api token list        # list tokens
$ cipi api token create      # create a new token (choose abilities)
$ cipi api token revoke <id> # revoke a token
$ cipi api status            # Laravel version, queue worker status, pending jobs
$ cipi api update            # soft update: composer update on Laravel and API packages
$ cipi api upgrade           # full rebuild with rollback at /opt/cipi/api.old
Token creation and granular permissions
Authentication uses Sanctum. Each token can have one or more abilities that limit allowed operations:
- apps-view — read apps
- apps-create — create apps
- apps-edit — edit apps
- apps-delete — delete apps
- deploy-manage — deploy, rollback, unlock
- ssl-manage — install and manage SSL certificates
- aliases-view — read aliases
- aliases-create — add aliases
- aliases-delete — remove aliases
- mcp-access — access the MCP server
REST endpoints
All endpoints require the Authorization: Bearer <token> header. Write operations
(create, edit, delete, deploy, rollback, unlock, SSL, alias) are asynchronous: they return
202 Accepted with a job_id to poll via GET /api/jobs/{id}.
| Method | Endpoint | Required ability |
|---|---|---|
| GET | /api/apps | apps-view |
| GET | /api/apps/{name} | apps-view |
| POST | /api/apps | apps-create |
| PUT | /api/apps/{name} | apps-edit |
| DELETE | /api/apps/{name} | apps-delete |
| GET | /api/apps/{name}/aliases | aliases-view |
| POST | /api/apps/{name}/aliases | aliases-create |
| DELETE | /api/apps/{name}/aliases | aliases-delete |
| POST | /api/apps/{name}/deploy | deploy-manage |
| POST | /api/apps/{name}/deploy/rollback | deploy-manage |
| POST | /api/apps/{name}/deploy/unlock | deploy-manage |
| POST | /api/apps/{name}/ssl | ssl-manage |
| GET | /api/jobs/{id} | — |
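As an illustration of the async contract, a deploy can be dispatched and its job polled with curl and jq. The domain and token are placeholders, and the job status field and values ("status", "pending") are assumptions — check the Swagger docs at /docs for the actual response schema:

```shell
# Dispatch a deploy (returns 202 Accepted with a job_id)
TOKEN="<your-token>"
API="https://api.myhosting.com"   # placeholder domain
job_id=$(curl -s -X POST "$API/api/apps/shop/deploy" \
  -H "Authorization: Bearer $TOKEN" | jq -r '.job_id')

# Poll the job until it leaves the queue (field names are illustrative)
while sleep 2; do
  status=$(curl -s "$API/api/jobs/$job_id" \
    -H "Authorization: Bearer $TOKEN" | jq -r '.status')
  echo "job $job_id: $status"
  [ "$status" = "pending" ] || break
done
```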
Swagger / OpenAPI
Interactive documentation is available at /docs (Swagger UI, OpenAPI spec 2.0.0).
MCP server
An MCP (Model Context Protocol) server is exposed at /mcp via
Streamable
HTTP. It requires a token with the mcp-access ability. It exposes tools for app, alias,
deploy, and SSL management — AppList, AppShow, AppCreate,
AppEdit, AppDelete, AppDeploy,
AppDeployRollback,
AppDeployUnlock, AliasList, AliasAdd,
AliasRemove,
SslInstall — with async dispatch and job_id return for polling.
Installing the MCP server
The MCP endpoint is optional and loads only when the required MCP package is installed. To use it from VS Code, Cursor, or Claude Desktop:
1. Configure the API with cipi api <domain> and cipi api ssl
2. Create a token with cipi api token create and select at least mcp-access
3. Add the MCP server to your client config (see below)
Cursor
Add to ~/.cursor/mcp.json (or Cursor → Settings → MCP):
{
"mcpServers": {
"cipi-api": {
"type": "http",
"url": "https://<your-api-domain>/mcp",
"headers": {
"Authorization": "Bearer <your-token>"
}
}
}
}
Cursor connects natively over HTTP — no bridge needed.
VS Code
VS Code (with GitHub Copilot) supports MCP natively since 1.102. Add to .vscode/mcp.json
or
run MCP: Open User Configuration for a global setup:
{
"servers": {
"cipi-api": {
"type": "http",
"url": "https://<your-api-domain>/mcp",
"headers": {
"Authorization": "Bearer <your-token>"
}
}
}
}
Use MCP: Add Server from the Command Palette for a guided setup. VS Code connects over HTTP — no bridge needed.
Claude Desktop
Claude Desktop requires the mcp-remote bridge to convert stdio to HTTP. Add to
~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or the
equivalent
config path on your OS:
{
"mcpServers": {
"cipi-api": {
"command": "npx",
"args": [
"-y",
"mcp-remote",
"https://<your-api-domain>/mcp",
"--header",
"Authorization: Bearer <your-token>"
]
}
}
}
Install mcp-remote once with npm install -g mcp-remote.
Replace <your-api-domain> with your API domain (e.g.
api.myhosting.com)
and <your-token> with the token created in step 2.
cipi sync
Transfer, replicate, and back up entire Laravel applications between Cipi servers — including configuration, database dumps, storage files, SSH keys, workers, and crontabs. Every archive is encrypted with AES-256-CBC and protected by a user-defined passphrase, so credentials and sensitive data are safe at rest and during transfer.
Commands overview
$ cipi sync export [app ...] [--with-db] [--with-storage] [--output=<path>] [--passphrase=<secret>]
$ cipi sync import <archive.tar.gz.enc> [app ...] [--update] [--deploy] [--yes] [--passphrase=<secret>]
$ cipi sync push [app ...] [--host=IP] [--port=22] [--with-db] [--with-storage] [--import] [--passphrase=<secret>]
$ cipi sync list <archive.tar.gz.enc> [--passphrase=<secret>]
$ cipi sync pubkey   # display the server's sync public key for inter-server trust
$ cipi sync trust    # add a remote server's public key to cipi's authorized_keys
Archive encryption
All sync archives are encrypted by default with AES-256-CBC. During export you are
prompted for a passphrase (minimum 8 characters) that protects the archive. The same passphrase is
required to import or inspect it. This protects SSH keys, .env files, database dumps,
and credentials at rest and during transfer.
# Interactive mode (default) — prompted for passphrase
$ cipi sync export --with-db
# Enter passphrase to encrypt the archive: ********
# Confirm passphrase: ********

# Non-interactive mode — for cron jobs and scripts
$ cipi sync export --with-db --passphrase="MyStr0ngP@ss"
For unattended use, store the passphrase in a root-only file:
echo "MyStr0ngP@ss" > /etc/cipi/.sync_passphrase && chmod 400
/etc/cipi/.sync_passphrase. Then use
--passphrase="$(cat /etc/cipi/.sync_passphrase)" in cron jobs.
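If you ever need to inspect an archive on a machine without Cipi installed, the encryption is plain openssl. This sketch assumes the sync layer uses the same openssl parameters documented for the Vault layer (-aes-256-cbc -salt -pbkdf2) — verify against your Cipi version before relying on it:

```shell
# ASSUMPTION: sync archives use the same openssl parameters as the Vault
# layer (-aes-256-cbc -salt -pbkdf2). Decrypts an archive and lists its
# contents without importing anything.
sync_list() {
  local archive="$1" passphrase="$2"
  openssl enc -d -aes-256-cbc -salt -pbkdf2 \
    -pass pass:"$passphrase" -in "$archive" | tar tz
}

# usage: sync_list cipi-sync-aws01-20260306.tar.gz.enc "MyStr0ngP@ss"
```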
Export
Packs app configs into an encrypted .tar.gz.enc archive. Optionally includes database
dumps and storage files.
# Export all apps (config only)
$ cipi sync export

# Export three specific apps with database + storage
$ cipi sync export shop blog api --with-db --with-storage

# Export to a custom path (non-interactive)
$ cipi sync export --with-db --output=/root/backups/cipi-march.tar.gz --passphrase="MyStr0ngP@ss"
What goes into the archive
| File | Description | Included |
|---|---|---|
| env | The app's .env from /home/<app>/shared/.env | Always |
| auth.json | Composer auth credentials (if exists) | Always |
| deploy.php | Deployer config | Always |
| ssh/* | Deploy key, known_hosts, authorized_keys, SSH config | Always |
| supervisor.conf | Queue workers config | Always |
| crontab | App's crontab (scheduler + deploy trigger) | Always |
| db.sql.gz | Gzipped MariaDB dump (schema + data + routines) | --with-db |
| storage.tar.gz | Archive of /home/<app>/shared/storage/ | --with-storage |
Plus global configs: apps.json (filtered to selected apps), databases.json,
backup.json, api.json.
Import
Restores apps from an archive onto the current server.
# Import all apps from archive
$ cipi sync import /tmp/cipi-sync-aws01-20260306.tar.gz.enc

# Import only two apps from an archive that contains ten
$ cipi sync import /tmp/cipi-sync-aws01-20260306.tar.gz.enc shop blog --passphrase="MyStr0ngP@ss"

# Import and deploy code from Git immediately
$ cipi sync import /tmp/cipi-sync-aws01-20260306.tar.gz.enc --deploy

# Non-interactive (skip all prompts)
$ cipi sync import /tmp/cipi-sync-aws01-20260306.tar.gz.enc --yes --passphrase="MyStr0ngP@ss"
What import does for a NEW app
When an app does not exist on the target server, import creates it from scratch — equivalent to
cipi app create with all configs pre-filled from the archive:
- Linux user — Creates a new user with a random password
- Directories — Creates /home/<app>/shared/, logs/, .ssh/, .deployer/
- SSH deploy key — Restores from archive (same key works with GitHub/GitLab without reconfiguration)
- MariaDB database — Creates database + user with a new random password
- Database data — Imports the dump if --with-db was used during export
- .env — Copies from archive, then overwrites DB_PASSWORD, DB_USERNAME, DB_DATABASE, DB_HOST with the new server's values. Everything else (APP_KEY, MAIL_*, REDIS_*, custom vars) stays as-is
- PHP-FPM pool, Nginx vhost, Supervisor workers, Crontab, Deployer — Fully configured from archive data
Safety checks before import
The import runs pre-flight checks before touching anything:
- App already exists — blocked unless --update is passed
- Domain conflict — blocked if another app already uses the same domain
- Missing PHP version — warning (the app is skipped; install the version first with cipi php install)
Update mode (--update)
The key feature for repeated sync (e.g. failover replication). Without
--update, import refuses to touch apps that already exist. With --update,
it
updates existing apps and creates new ones.
$ cipi sync import /tmp/archive.tar.gz.enc --update --passphrase="MyStr0ngP@ss"
What update does for an existing app
- .env sync — The archive .env replaces the local one, but DB_PASSWORD, DB_USERNAME, DB_DATABASE, and DB_HOST are preserved from the local server. Everything else (APP_KEY, MAIL_*, REDIS_*, custom vars) comes from the source.
- Database data — If the archive has a dump, drops all tables (with SET FOREIGN_KEY_CHECKS=0) and reimports. Uses local root credentials.
- Storage — If the archive has storage, extracts over the existing directory (new files added, existing overwritten).
- PHP version migration — If the source uses a different PHP version, the update migrates FPM pool, supervisor, crontab, deployer, and .env automatically.
- Nginx vhost, Supervisor workers, Deployer config — Regenerated from archive data.
- Deploy — If --deploy is passed, runs dep deploy to pull latest code.
What update does NOT change
- Linux user password
- SSH deploy keys (kept from first import)
- MariaDB user credentials (target keeps its own)
- SSL certificates (run cipi ssl install separately)
List (inspect archive)
View what is inside an archive without importing anything.
$ cipi sync list /tmp/cipi-sync-aws01-20260306.tar.gz.enc --passphrase="MyStr0ngP@ss"

Cipi Sync Archive
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Cipi       v4.2.0
Exported   2026-03-06T15:00:00Z
Source     aws01 (3.120.xx.xx)
Database   true
Storage    true

Apps
APP    DOMAIN             PHP   DB    STORAGE
shop   shop.example.com   8.4   yes   yes
blog   blog.example.com   8.4   yes   yes
api    api.example.com    8.5   yes   yes
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Push (export + transfer + import)
Combines export, rsync transfer, and remote import in one command. Runs entirely from the source server.
# Interactive push — prompted for target IP and passphrase
$ cipi sync push --with-db --with-storage --import

# Non-interactive push (for cron and scripts)
$ cipi sync push --host=51.195.xx.xx --port=22 --with-db --with-storage --import --passphrase="MyStr0ngP@ss"

# Push specific apps only
$ cipi sync push shop blog --host=51.195.xx.xx --with-db --import --passphrase="MyStr0ngP@ss"

# Push without auto-import (transfer only — import manually on remote)
$ cipi sync push --host=51.195.xx.xx --with-db --passphrase="MyStr0ngP@ss"
How push works
- Step 1: Runs cipi sync export locally (encrypts with passphrase)
- Step 2: Transfers the encrypted archive to the target via rsync
- Step 3: If --import is passed, runs cipi sync import --update --yes on the target via SSH
Push always adds --update and --yes when calling import on the remote. This
means: first run creates all apps, subsequent runs incrementally update them. This is what makes
push
safe to run repeatedly via cron.
SSH setup for push
The source server needs SSH access to the target as the cipi user. Use the built-in
trust mechanism for passwordless, key-based authentication between Cipi servers:
# On the SOURCE server — display its sync public key
$ cipi sync pubkey

# On the TARGET server — add the source's public key to cipi's authorized_keys
$ cipi sync trust
Once trusted, cipi sync push connects as the cipi user automatically —
no root access required.
Practical scenarios
Scenario 1: Migrate all apps from AWS to OVH
You have 20 apps on AWS. You bought an OVH VPS and installed Cipi on it.
# On AWS (source server)
$ cipi sync push --host=51.195.xx.xx --with-db --with-storage --import
On the OVH target: 20 Linux users, 20 databases, 20 nginx vhosts, PHP-FPM pools, supervisor configs,
crontabs — all created automatically. DB data imported, storage extracted, .env files
copied with OVH's DB passwords, SSH deploy keys preserved (same keys work with GitHub). After
import,
install SSL and update DNS:
# On OVH (target server)
$ cipi ssl install shop
$ cipi ssl install blog
# ... then update DNS A records to OVH IP
Scenario 2: Scheduled failover replication (cron)
Every 6 hours, Server 1 syncs all apps to Server 2. If Server 1 dies, change DNS and go live on Server 2.
# One-time setup on Server 1 — trust Server 2 using cipi sync trust
$ cipi sync pubkey   # copy this key, then run "cipi sync trust" on Server 2
$ echo "YourStr0ngPassphrase!" > /etc/cipi/.sync_passphrase
$ chmod 400 /etc/cipi/.sync_passphrase

# First push (manual, to verify)
$ cipi sync push --host=server2-ip --with-db --with-storage --import --passphrase="$(cat /etc/cipi/.sync_passphrase)"

# Add to crontab for automatic replication
$ crontab -e
0 */6 * * * /usr/local/bin/cipi sync push --host=51.195.xx.xx --port=22 --with-db --with-storage --import --passphrase="$(cat /etc/cipi/.sync_passphrase)" >> /var/log/cipi/sync-replica.log 2>&1
Data loss window equals the cron interval (6 hours in this example). When Server 1 goes down: change
DNS to Server 2, run cipi ssl install for each app, and you are live.
Scenario 3: Replicate to multiple servers
# Stagger by 30 minutes so exports don't run simultaneously
0 */6 * * * /usr/local/bin/cipi sync push --host=51.195.xx.xx --with-db --with-storage --import --passphrase="$(cat /etc/cipi/.sync_passphrase)" >> /var/log/cipi/sync-ovh.log 2>&1
30 */6 * * * /usr/local/bin/cipi sync push --host=164.90.xx.xx --with-db --with-storage --import --passphrase="$(cat /etc/cipi/.sync_passphrase)" >> /var/log/cipi/sync-do.log 2>&1
Scenario 4: Daily encrypted backup (no transfer)
0 3 * * * /usr/local/bin/cipi sync export --with-db --with-storage --output=/root/backups/cipi-$(date +\%Y\%m\%d).tar.gz --passphrase="$(cat /etc/cipi/.sync_passphrase)" >> /var/log/cipi/export.log 2>&1
Creates an encrypted portable archive every night. Restore on any Cipi server at any time with
cipi sync import.
Limitations
- SSL certificates are not included in the archive. Run cipi ssl install after importing on a new server.
- DB sync is full replace, not incremental. Each update drops all tables and reimports.
- Storage sync is full extract, not incremental rsync. Deleted files on the source remain on the target.
- Deploy keys are the same on source and target — no GitHub/GitLab reconfiguration needed.
Vault & Encryption
Cipi encrypts all configuration files at rest using AES-256-CBC. The Vault system
provides transparent encryption and decryption so that sensitive data — database passwords, API
tokens,
SSH keys, .env contents — is never stored in plaintext on disk.
Architecture
The system is built on two layers:
- Vault — transparent encryption of JSON configuration files on disk (server.json, apps.json, databases.json, backup.json, smtp.json, api.json)
- Sync encryption — passphrase-based encryption of export archives for secure transfer between servers
How Vault works
A master key is generated during installation with openssl rand -base64 32 and stored at
/etc/cipi/.vault_key (chmod 400, root-only). Every JSON config file is
encrypted on disk with openssl enc -aes-256-cbc -salt -pbkdf2. Files keep the
.json extension — the content is simply an encrypted blob instead of readable JSON.
The vault_read function auto-detects whether a file is plaintext or encrypted (backward
compatibility), so existing servers migrate seamlessly during update.
Vault functions
# Core functions in lib/vault.sh
vault_init                   # Generate .vault_key if not present
vault_read <file>            # Decrypt and output JSON to stdout (auto-detect plain/encrypted)
vault_write <file>           # Read JSON from stdin, encrypt and write to disk
vault_seal <file>            # Encrypt an existing plaintext file in-place
vault_get <file> <jq_query>  # Shortcut: vault_read | jq
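A minimal sketch of how the plaintext/encrypted auto-detection in vault_read could work — illustrative only, the real lib/vault.sh may differ (here VAULT_KEY is made overridable purely for demonstration):

```shell
VAULT_KEY="${VAULT_KEY:-/etc/cipi/.vault_key}"

vault_read() {
  local file="$1"
  if jq -e . "$file" >/dev/null 2>&1; then
    # still-plaintext JSON (pre-Vault server): pass through unchanged
    cat "$file"
  else
    # encrypted blob: decrypt with the master key
    openssl enc -d -aes-256-cbc -salt -pbkdf2 \
      -pass "file:$VAULT_KEY" -in "$file"
  fi
}
```

Because encrypted files start with openssl's binary "Salted__" header, they never parse as JSON, so the two cases are unambiguous.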
Public projection
Cipi generates an apps-public.json file containing only non-sensitive fields (domain,
aliases, PHP version, branch, repository, user, creation timestamp). The cipi-api group
reads this plaintext projection instead of the encrypted file, keeping the vault key restricted to
root.
Sync archive encryption
When you run cipi sync export, configs are decrypted from the vault into a staging area,
then the entire archive is encrypted with your passphrase. On import, the archive is decrypted with
the passphrase, and configs are re-encrypted with the destination server's vault
key.
The master key is kept chmod 400 and included in server backups. Consider exporting it
manually for additional safety.
Email Notifications
Cipi can send email alerts when backup errors, deploy failures, system cron job failures, or
security-relevant authentication events occur.
SMTP configuration is stored encrypted in /etc/cipi/smtp.json and included in sync
exports.
Commands
$ cipi smtp configure # interactive setup (Gmail, SendGrid, Mailgun, custom)
$ cipi smtp status    # display current notification settings
$ cipi smtp test      # send a verification email
$ cipi smtp enable    # enable notifications
$ cipi smtp disable   # disable without losing settings
$ cipi smtp delete    # remove SMTP configuration entirely
Automatic alerts
Once configured, Cipi sends email notifications on:
- Backup errors (S3 upload failures, dump errors)
- Deploy failures (Deployer errors, rollback triggers)
- System cron job failures (via the cipi-cron-notify wrapper)
- App lifecycle events — notifies when an app is created, edited, or deleted, including server hostname, app name, domain, and PHP version
- Sudo and su elevation — notifies when any user successfully elevates via sudo or su, including who ran it, target user (for su), SSH key, client IP, and TTY
- Privileged SSH login — notifies when root or any sudoer logs in via SSH, including source IP, SSH key fingerprint, and key comment
- SSH key changes — notifies when an SSH key is added to, removed from, or renamed on the cipi user, including hostname, IP, fingerprint, key comment, timestamp, and remaining key count. Rename alerts also include the old and new key name.
Every email notification includes a footer with the client IP (SSH_CLIENT) and the SSH
key name used to authenticate, when applicable. The key name is resolved via
SSH_USER_AUTH with an auth.log fallback when needed.
Security auth notifications
Cipi integrates PAM-based authentication notifications via
pam_exec.so with ExposeAuthInfo enabled. When SMTP is configured, the
system automatically sends email alerts on these security-relevant events:
- Sudo and su elevation — triggered when any user successfully runs sudo or su. The notification includes the username, target user (for su), the TTY, SSH key, client IP, and timestamp.
- Privileged SSH login — triggered when root or any user in the sudo group logs in via SSH. The notification includes the username, source IP address, SSH key fingerprint, and key comment (resolved from /var/log/auth.log fingerprint matching against authorized_keys).
- SSH key changes — triggered when an SSH key is added to, removed from, or renamed on the cipi user via cipi ssh add, cipi ssh remove, or cipi ssh rename. The notification includes the hostname, server IP, key fingerprint, key comment, timestamp, and remaining key count. Rename alerts also include the old and new key name.
- App lifecycle events — triggered when an app is created, edited, or deleted. The notification includes the server hostname, app name, domain, and PHP version.
Notifications run asynchronously in the background so they never delay login or command execution. If SMTP is not configured, the hooks fail silently with no impact on the system.
Security event log
Regardless of SMTP configuration, all notification events (SSH key changes, app lifecycle,
password resets, sudo/su/SSH login, cron failures) are always logged to
/var/log/cipi/events.log in a compact one-line format. The log is rotated daily with
1-year retention via logrotate.
Cron wrapper
The cipi-cron-notify utility wraps system cron jobs and sends a notification if the job
exits with a non-zero code. This is useful for monitoring critical scheduled tasks.
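The wrapper's core logic amounts to running the command and alerting on failure. A minimal sketch of the pattern (illustrative — this is not Cipi's actual source, and the real utility emails the alert via SMTP instead of printing it):

```shell
# Illustrative cron-notify style wrapper: run the wrapped command,
# emit an alert line only when it exits non-zero, and preserve the
# original exit code for cron.
cron_notify() {
  "$@"
  local status=$?
  if [ "$status" -ne 0 ]; then
    echo "cron job failed: $* (exit $status)"  # Cipi would email this
  fi
  return $status
}

# usage in a crontab entry:
#   0 3 * * * cipi-cron-notify /usr/local/bin/nightly-task.sh
```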
Log Retention (GDPR)
Cipi enforces automatic log rotation policies designed to meet GDPR and general data protection requirements. Logs are rotated and deleted automatically — no manual cleanup needed.
| Category | Logs | Retention |
|---|---|---|
| Application | Laravel, PHP-FPM, workers, deploy, system | 12 months |
| Security | Fail2ban, UFW firewall, authentication, Cipi events (events.log) |
12 months |
| HTTP / Navigation | Nginx access and error logs | 90 days |
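The 90-day HTTP retention corresponds to a logrotate policy along these lines — an illustrative reconstruction, not the exact file Cipi ships:

```
# Illustrative logrotate policy for the 90-day HTTP retention
/var/log/nginx/*.log {
    daily
    rotate 90
    compress
    delaycompress
    missingok
    notifempty
}
```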
Redis
Redis is an in-memory data store that Cipi installs as part of the default stack (from version 4.0.4). It excels at caching, session storage, message queues, real-time broadcasting, and rate limiting.
System configuration
Redis binds to localhost only, uses a password, and runs as a system service.
Credentials (user, password) are in /etc/cipi/server.json. Host: 127.0.0.1, Port: 6379.
Laravel integration
Add these variables to your .env via cipi app env myapp:
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=your-password-from-server-json
REDIS_PORT=6379
Then set the drivers for each use case:
- Cache — CACHE_STORE=redis
- Session — SESSION_DRIVER=redis
- Queue — QUEUE_CONNECTION=redis (then cipi worker restart myapp)
- Broadcasting — BROADCAST_CONNECTION=redis
Install the phpredis PHP extension for best performance, or use
predis/predis as a pure-PHP fallback.
Self-Update
Cipi can update itself from GitHub without affecting any app, database, or configuration.
$ cipi self-update --check # check for a new version
$ cipi self-update         # update to latest
Update process
- Downloads the latest version from GitHub
- Backs up the current installation to /opt/cipi.bak.YYYYMMDDHHMMSS/
- Runs any pending migration scripts in order (e.g. new Nginx directives, new packages)
- Updates the version file
Migration scripts live in lib/migrations/ and are named by version (e.g.
4.1.0.sh). When updating from v4.0.0 to v4.2.0, Cipi automatically runs
4.1.0.sh and 4.2.0.sh in order. Your apps, databases, and configurations
are never touched.
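The version-ordered selection described above can be sketched as a small runner. This is illustrative, not Cipi's actual implementation — it takes the migrations directory and the from/to versions as parameters and runs every script with from < version <= to:

```shell
# Illustrative version-ordered migration runner (not Cipi's source).
# Runs every <dir>/<version>.sh with from < version <= to, in order.
run_migrations() {
  local dir="$1" from="$2" to="$3"
  local script v
  for script in $(ls "$dir"/*.sh 2>/dev/null | sort -V); do
    v=$(basename "$script" .sh)
    # from < v: "from" sorts first and they differ; v <= to: "v" sorts first
    if [ "$(printf '%s\n%s\n' "$from" "$v" | sort -V | head -n1)" = "$from" ] \
       && [ "$v" != "$from" ] \
       && [ "$(printf '%s\n%s\n' "$v" "$to" | sort -V | head -n1)" = "$v" ]; then
      bash "$script"
    fi
  done
}
```

Updating from 4.0.0 to 4.2.0 with this logic would skip 4.0.0.sh and run 4.1.0.sh then 4.2.0.sh, matching the behavior described above.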
Wildcard domains
Cipi does not support wildcard domains (*.myapp.com) natively. The
limitation is architectural and twofold — not a configuration detail.
Why wildcards are not supported
1 — Domain validation rejects *
Every domain passed to cipi alias add (and cipi app create) is validated
against a strict regex that requires the string to start with [a-zA-Z0-9]. The asterisk
fails immediately, before nginx or certbot are ever touched.
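The validation step can be sketched as follows — the exact regex in Cipi's source may differ, but the key property is the required alphanumeric first character, which `*` can never satisfy:

```shell
# Illustrative domain validation (Cipi's actual regex may differ).
# Domains must start with an alphanumeric character, so "*" is
# rejected before nginx or certbot are ever touched.
is_valid_domain() {
  [[ "$1" =~ ^[a-zA-Z0-9]([a-zA-Z0-9.-]*[a-zA-Z0-9])?$ ]]
}
```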
2 — Certbot uses HTTP-01 challenge, which cannot issue wildcard certs
cipi ssl install calls certbot --nginx, which relies on the HTTP-01 (or
TLS-ALPN-01) challenge — placing a verification file on disk and serving it over port 80. Let's
Encrypt only issues wildcard certificates via the DNS-01 challenge, which requires
programmatic access to your DNS provider's API. Cipi does not integrate with any DNS provider, so
even if the validation were bypassed, certbot would refuse to issue the wildcard cert.
Recommended alternative — Multi-SAN certificate
If your subdomains are fixed and enumerable (e.g. api, admin,
www, staging), the correct approach is to add each one as an explicit
alias and let Cipi issue a single SAN certificate covering all of them:
$ cipi alias add myapp api.myapp.com
$ cipi alias add myapp admin.myapp.com
$ cipi alias add myapp www.myapp.com
$ cipi ssl install myapp # single cert, SAN covers all domains
Certbot's --expand flag (used internally by Cipi) reissues the existing certificate with the
new SANs added, so a single certificate keeps covering every alias. Let's Encrypt allows up to
100 names per certificate — no meaningful limit for typical use.
Manual wildcard certificate (outside Cipi)
If you need dynamic subdomains (e.g. <tenant>.saas.com), you can obtain a wildcard
certificate manually using a DNS plugin for certbot and place it on the server. Cipi will not
manage, renew, or track it — you own the lifecycle entirely.
# example with the Cloudflare DNS plugin
$ pip install certbot-dns-cloudflare
$ certbot certonly --dns-cloudflare \
    --dns-cloudflare-credentials /root/.cloudflare.ini \
    -d "*.myapp.com" -d "myapp.com"
After obtaining the certificate, edit the nginx vhost for the app directly
(/etc/nginx/sites-available/myapp) to reference the wildcard cert paths and add
server_name *.myapp.com myapp.com;. Then reload nginx:
$ nginx -t && systemctl reload nginx
Running cipi ssl install myapp after manual wildcard setup will overwrite your
custom nginx SSL directives with a Let's Encrypt HTTP-01 certificate. If you manage a wildcard
cert manually, avoid running cipi ssl install on that app.
About Cipi
History
Cipi was created by Andrea Pollastri, a software developer since 2005 specialising in Laravel, system administration, and cybersecurity — with experience at companies like Docebo and Musement. He built Cipi out of a practical need: a fast, scriptable way to provision and deploy Laravel applications on his own VPS, without handing control to a SaaS panel. What started as a private tool became an open-source project that grew to over 1,000 GitHub stars and hundreds of active deployments worldwide. Here is a brief account of how it evolved across six years and four major versions.
The idea
A collection of shell scripts to automate the tedious parts of setting up a Laravel server on a fresh Ubuntu VPS — Nginx, PHP, MariaDB, Supervisor. No web UI, no package. Just bash.
First release — Laravel panel
The shell scripts were wrapped in a Laravel web application acting as a server control panel. Users could create apps, manage deployments, and configure Nginx through a browser UI hosted on the same server. The project was published on GitHub and quickly attracted interest from the Laravel community.
Feature growth
A year of rapid iteration added SMTP configuration, local database backups, PHP-FPM permission fixes, server service management, and root password reset. The v2 series reached 2.4.9 across dozens of patch releases, establishing Cipi as a stable option for small-team Laravel hosting.
The big leap — API, PHP 8, real-time UI
Built on Laravel 8, v3 introduced a fully documented REST API (Swagger / OA), PHP 8 support, real-time CPU/RAM charts, a Cronjob editor, Supervisor management, a GitHub repository manager, Node 15, Composer 2, and JWT authentication. Cipi could now manage the same server it ran on. The project reached 1k GitHub stars.
The last web UI — and a question
PHP 8.1 became the default version. Node was upgraded to v16, Certbot was refreshed, and domain alias handling was fixed. The v3.1.x series was the most polished release of the web-UI era and many teams kept it running in production for years.
It was also the version that prompted a harder question: was a browser-based control panel still the right interface? Modern development workflows had moved toward SSH, CI/CD pipelines, GitOps, and — increasingly — AI agents that could orchestrate infrastructure through shell commands. A web UI required authentication, a running Laravel process, and a database just to issue a deploy. A CLI needed none of that. The answer shaped everything that came next.
The rewrite — CLI-first, Laravel-exclusive
After years of maintaining a full Laravel web application as the control plane, v4 made
the boldest decision yet: drop the web UI entirely. Cipi became a pure
CLI tool operated over SSH. The scope also narrowed from generic PHP to Laravel
exclusively, allowing every part of the stack to be optimised for one
framework. MySQL was replaced by MariaDB 11.4, git pull
was replaced by Deployer zero-downtime releases, shared deploy keys
became per-app ed25519 keys, and S3 automated backups and
native webhook support for GitHub and GitLab were added from day one. v4 also introduced
a complete REST API for programmatic management of hosts and
applications, native integration with GitHub and GitLab git providers,
a groundbreaking dual MCP server architecture — one per-app and one
global — enabling full AI-driven infrastructure management directly from any
MCP-compatible IDE or AI agent, encrypted server sync for
migrating apps between VPS instances, and a built-in database
anonymizer for GDPR-safe data exports. The result is the version you are
using today.
Security Model
SSH hardening
During installation, Cipi creates groups cipi-ssh and cipi-apps and applies
the following SSH architecture:
| User | Access | Method |
|---|---|---|
| root | blocked | PermitRootLogin no |
| cipi | key-only | group cipi-ssh, PasswordAuthentication no globally |
| app users | user + password | group cipi-apps, Match Group cipi-apps → PasswordAuthentication yes |
sshd_config uses AllowGroups cipi-ssh cipi-apps instead of
AllowUsers. App users can connect with ssh myapp@server-ip and the
password generated at app creation. Admin access is via public-key as cipi. SSH keys
are managed with cipi ssh list / add / remove — see SSH Key
Management.
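Put together, the resulting sshd_config looks roughly like this — an illustrative excerpt reconstructed from the policy above, not copied from Cipi's source:

```
# /etc/ssh/sshd_config (excerpt, illustrative)
PermitRootLogin no
PasswordAuthentication no
AllowGroups cipi-ssh cipi-apps
MaxAuthTries 3
X11Forwarding no

Match Group cipi-apps
    PasswordAuthentication yes
```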
SSH key management & notifications
Authorized keys for the cipi user can be managed via the CLI with built-in safety:
format validation, duplicate prevention, current-session protection, and last-key protection to
avoid lockout. When SMTP is configured, every key addition, removal, or rename triggers an email
alert with hostname, IP, fingerprint, timestamp, and remaining key count. Rename alerts also
include the old and new key name.
Privilege escalation prevention
Application users are restricted from using su to escalate privileges to
root or the cipi account. This is enforced via
pam_wheel.so group=sudo, ensuring that only members of the sudo group
can switch users.
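The enforcement amounts to a single PAM line — the placement shown here is illustrative:

```
# /etc/pam.d/su (illustrative)
auth required pam_wheel.so group=sudo
```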
Sudoers hardening
The www-data user (used by Nginx and PHP-FPM) has its sudoers access
restricted to an explicit command whitelist instead of a wildcard pattern. Only the specific
/usr/local/bin/cipi subcommands required by the API are allowed. This prevents
www-data from running arbitrary commands even if the web application is
compromised.
API command whitelist
The Cipi API validates every CLI command against an internal whitelist before executing it via
sudo. Commands not on the whitelist are rejected, preventing command injection
through the API layer.
User isolation
Each app runs under its own Linux user with chmod 750 on the home directory. No app can
read another app's files. PHP-FPM runs each app's pool as the app user with its own Unix socket.
PHP open_basedir
open_basedir is configured per FPM pool to restrict PHP to the app's home directory.
Even if an app is compromised, PHP cannot access the filesystem outside its own home.
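The per-pool restriction corresponds to an FPM directive along these lines — paths and the inclusion of /tmp are illustrative assumptions, not Cipi's exact configuration:

```
; /etc/php/8.4/fpm/pool.d/myapp.conf (excerpt, illustrative)
[myapp]
user = myapp
group = myapp
listen = /run/php/myapp.sock
php_admin_value[open_basedir] = /home/myapp:/tmp
```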
Database isolation
Each app has its own MariaDB database and user with GRANT ALL restricted to that
database only. A compromised app cannot read or write to another app's database.
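The isolation follows the standard per-database grant pattern — the statements below are illustrative, not the exact SQL Cipi generates:

```sql
-- Illustrative per-app isolation pattern
CREATE DATABASE myapp;
CREATE USER 'myapp'@'localhost' IDENTIFIED BY 'random-password';
GRANT ALL PRIVILEGES ON myapp.* TO 'myapp'@'localhost';
FLUSH PRIVILEGES;
```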
Per-app SSH deploy keys
Each app gets its own ed25519 SSH key pair. A compromised deploy key only affects one repository — not all apps on the server.
Webhook security
GitHub webhooks are verified using HMAC-SHA256 signatures. GitLab webhooks use token comparison. The webhook handler in cipi-agent returns 200 immediately and writes a flag file — Deployer runs separately as the app user with no elevated privileges.
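The GitHub check follows the documented X-Hub-Signature-256 scheme: recompute the HMAC-SHA256 of the raw payload with the shared secret and compare. A sketch of the verification (function name and flow are illustrative; production code should use a constant-time comparison):

```shell
# Verify a GitHub-style X-Hub-Signature-256 header against the payload.
# NOTE: real implementations should compare in constant time.
verify_github_sig() {
  local secret="$1" payload="$2" header_sig="$3"
  local expected="sha256=$(printf '%s' "$payload" \
    | openssl dgst -sha256 -hmac "$secret" | awk '{print $2}')"
  [ "$expected" = "$header_sig" ]
}
```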
Configuration encryption (Vault)
All Cipi configuration files — server.json, apps.json,
databases.json, backup.json, smtp.json,
api.json — are encrypted at rest using AES-256-CBC via the built-in
Vault system. A per-server master key (/etc/cipi/.vault_key, chmod 400)
ensures that even if an attacker gains read access to the filesystem, configuration files containing
database passwords, API tokens, and credentials are unreadable without root privileges. See
Vault & Encryption for the full architecture.
Encrypted sync archives
When transferring applications between servers via cipi sync, the entire archive —
including .env files, SSH keys, database dumps, and configuration — is encrypted with
AES-256-CBC using a user-provided passphrase. Archives use the .tar.gz.enc extension
and cannot be read without the passphrase. This protects sensitive data both at rest (on disk) and
in transit (during rsync/scp transfer). See Sync for details.
GDPR-compliant log retention
Cipi enforces automatic log rotation: application and security logs are retained for 12 months, while HTTP/navigation logs (which contain IP addresses — personal data under GDPR) are retained for 90 days. This satisfies the GDPR data minimization principle while preserving enough history for debugging and audit trails. See Log retention for the full policy.
Auth & key notifications
Cipi monitors privileged authentication events via PAM (pam_exec.so) and sends
real-time email alerts when a user elevates via sudo or su, or when
root/sudoers log in via SSH. SSH login notifications include the key fingerprint and
comment (resolved by matching the /var/log/auth.log fingerprint against
authorized_keys), so you can immediately identify which key was used to access the
server. Email alerts are also sent when SSH keys are added to, removed from, or renamed on the
cipi user, and when apps are created, edited, or deleted. Notifications include
contextual details (username, TTY, source IP, fingerprint, key comment, key count) and run
asynchronously to avoid login delays. All events are also logged to
/var/log/cipi/events.log regardless of SMTP configuration. Email alerts require SMTP to be
configured; without it, they fail silently. See Email Notifications for setup.
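A `pam_exec` hook of the kind described above is typically a single line in the relevant PAM service file. The script path here is hypothetical; Cipi installs its own notifier:

```
# e.g. appended to /etc/pam.d/sudo and /etc/pam.d/su (script path is hypothetical)
session optional pam_exec.so seteuid /usr/local/bin/cipi-auth-notify
```

`optional` ensures a notifier failure can never block authentication, and running the script asynchronously keeps login latency unaffected.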
No web panel
Cipi has no web interface. There is no attack surface from an admin panel. All management happens
over SSH as the cipi user using the cipi CLI.
Nginx default virtual host
The Nginx default virtual host uses a rewrite rule to serve a minimal "Server Up" page
for all requests to unconfigured domains or direct IP access. This reliable catch-all prevents Nginx
from exposing its version number in default error pages.
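A default vhost of this shape achieves the behavior described. This is an illustrative sketch, not Cipi's exact configuration:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /var/www/default;        # hypothetical path holding the "Server Up" page
    rewrite ^ /index.html break;  # catch-all: every URI serves the same page
    server_tokens off;            # never reveal the Nginx version
}
```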
Network
UFW blocks all ports by default except 22, 80, and 443. Fail2ban monitors SSH for brute-force attempts and bans offending IPs automatically.
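The equivalent manual commands, shown for reference (Cipi applies this policy during installation, so you normally never run them yourself):

```
$ ufw default deny incoming
$ ufw allow 22/tcp
$ ufw allow 80/tcp
$ ufw allow 443/tcp
$ ufw enable
```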
Automatic OS security updates
Cipi enables unattended-upgrades during installation, so the underlying Ubuntu system applies security patches automatically — without any manual intervention. Only security-classified updates are installed unattended; major version upgrades require explicit confirmation.
You can verify and manage the configuration at any time:
# check unattended-upgrades status
$ systemctl status unattended-upgrades
# view the upgrade log
$ cat /var/log/unattended-upgrades/unattended-upgrades.log
# force an immediate unattended upgrade run
$ unattended-upgrade --debug --dry-run
Kernel updates cannot be applied to a running system; when /var/run/reboot-required appears after an upgrade run, restart the server manually with sudo reboot.
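The security-only restriction corresponds to the Allowed-Origins stanza in /etc/apt/apt.conf.d/50unattended-upgrades. An illustrative excerpt:

```
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
```

With only the security origin listed, regular feature updates and distribution upgrades are never applied unattended.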
Why MariaDB?
Cipi uses MariaDB instead of MySQL for several reasons:
- Drop-in replacement — Laravel doesn't notice the difference. Same PDO driver, same SQL syntax, same migration files.
- Better performance — more advanced query optimizer and native thread pool in the community edition.
- Clean licensing — pure GPL, no Oracle involvement.
- Native on Ubuntu — installs from the default Ubuntu repositories without external PPAs.
- Auto-tuned — Cipi configures innodb_buffer_pool_size based on your server's RAM (256M for 1 GB RAM, up to 4 GB for 16 GB+ servers).
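The RAM-based tiering can be sketched as a small shell function. Only the 1 GB and 16 GB+ endpoints are documented above, so the intermediate tiers here are assumptions:

```shell
# Sketch of RAM-based InnoDB buffer pool tiering; the 256M (1 GB) and 4G (16 GB+)
# endpoints come from the docs, the intermediate tiers are assumed for illustration.
buffer_pool_for() {   # $1 = total RAM in MB
  if   [ "$1" -le 1024 ]; then echo 256M   # 1 GB server -> 256M
  elif [ "$1" -le 4096 ]; then echo 1G     # assumed tier
  elif [ "$1" -le 8192 ]; then echo 2G     # assumed tier
  else                         echo 4G     # 16 GB+ -> 4G
  fi
}
buffer_pool_for 1024    # prints 256M
```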
Why Laravel?
When v4 was designed, one of the clearest decisions was to drop generic PHP and WordPress support entirely and go all-in on a single framework. That framework was Laravel — and the reasoning goes well beyond personal preference.
Speed of development
Laravel gives developers an extraordinary head start. Authentication, queues, scheduled tasks, file
storage, email, notifications, API resources, database migrations — all of it is built in,
consistent, and documented. A team can go from an empty project to a production-ready application in
a fraction of the time it would take with a lower-level stack. For Cipi, this means that every
deployment assumption — directory layout, .env handling, queue workers, the scheduler —
maps cleanly to a known structure. There is no guesswork.
A thriving open-source community
Laravel has one of the most active and welcoming communities in the PHP world. Packages are published, maintained, and discussed daily on GitHub, Laracasts, Discord, and X. When you hire a Laravel developer, you get someone who already knows the conventions, the tooling, and the ecosystem. When you publish a Laravel package, it reaches tens of thousands of developers overnight. That network effect is genuinely valuable — and it is one of the reasons Cipi's companion package, cipi-agent, works so well: drop it into a Laravel app and it just fits.
Security by default
Laravel takes security seriously out of the box. CSRF protection, SQL injection prevention through the query builder, XSS mitigation via Blade's auto-escaping, rate limiting, signed URLs, encrypted cookies — these are not add-ons you wire together, they are the defaults. The framework's release cycle includes prompt security patches, and the team publishes clear upgrade guides. For production applications that handle real users and real data, that reliability matters enormously.
Longevity and support
Every major Laravel version receives bug fixes for 18 months and security fixes for two years after release. LTS versions extend that further. Taylor Otwell and the core team have maintained a consistent, principled release cadence since 2011 — a track record that is rare and genuinely reassuring for long-term projects. When you deploy on Cipi, you are not betting on a framework that might be abandoned next year.
The Laravel ecosystem
Laravel does not stand alone. Around it has grown an ecosystem of first-party and community tools that are among the best in the industry:
- Livewire — full-stack reactive components without leaving PHP
- Filament — a stunning admin panel and form builder built on Livewire
- Inertia.js — the bridge between Laravel and modern front-end frameworks (Vue, React, Svelte)
- Laravel Horizon — a beautiful dashboard for monitoring Redis queues
- Laravel Telescope — an elegant debug assistant for local and staging environments
- Laravel Octane — supercharges application performance with FrankenPHP, Swoole, or RoadRunner
- Pest — a delightful PHP testing framework with a Laravel-native plugin
- Laracasts — the best screencasts in PHP, period
These tools do not just exist — they are actively developed, widely adopted, and genuinely fun to use. The Laravel world has a rare quality: it manages to be both opinionated enough to feel cohesive and flexible enough to stay out of your way when you need it to.
Ship or die
Taylor Otwell, the founder of Laravel, has a mantra that resonates far beyond the framework: "we must ship." It is a simple statement, but it encapsulates an entire philosophy — that code only has value when it reaches users, that perfectionism is the enemy of progress, and that the best developers are the ones who find ways to move forward even when the conditions are not ideal.
The broader tech world has its own, blunter version: ship or die. Build something real, put it in front of people, learn, and iterate — or watch it slowly become irrelevant in a drawer. Laravel's entire ecosystem is built around that spirit. The framework reduces friction, the community celebrates releases, and the tooling rewards velocity. Cipi exists for the same reason: to remove the server management friction that stands between a developer and a live, deployed application.
If you are reading these docs, you are probably a developer who wants to build things, not babysit infrastructure. That is exactly the kind of developer Cipi was made for.
Why "ci-pi"?
Cipi is the Italian reading of the letters C and P — which stand for Control Panel. It is a quiet nod to the project's Italian roots and to the fact that, underneath all the CLI commands, Cipi is doing exactly what a control panel does: managing Nginx, PHP-FPM, MariaDB, Supervisor, Certbot, and UFW on your behalf — just without the browser tab.
The name also carries a deliberate lightness. Server management tools tend to take themselves very seriously. Cipi does not. It is a tool built by a developer for developers, with the goal of getting out of the way as quickly as possible.
The mascot
The Cipi mascot is a penguin — an intentional reference to Tux, the official Linux mascot created by Larry Ewing in 1996. Where Tux is polished and iconic, the Cipi penguin is drawn in a minimal, hand-sketched line style: same species, different attitude. Think of it as Tux's quieter sibling who prefers the terminal over the spotlight.
The illustration is rendered as a pure SVG using the project's accent color, so it adapts naturally to light and dark mode and scales to any size without losing quality.
Cipi vs alternatives
There are many tools for provisioning and managing web servers. This page explains where Cipi sits in that landscape and why you might — or might not — choose it over each category. The comparison is focused on seven axes: AI integration (dual MCP server — per-app and global), native Git deploy (GitHub & GitLab), REST API, CLI-first automation, encryption at rest (Vault), GDPR compliance, and total cost of ownership.
| Tool | AI ready (MCP) | CLI / automation | Free / OSS | No SaaS dep. |
|---|---|---|---|---|
| Cipi | ✓ | ✓ | ✓ | ✓ |
| Laravel Forge | ✗ | partial | $12–19/mo | ✗ |
| Ploi | ✗ | partial | €8–30/mo | ✗ |
| moss.sh | ✗ | ✗ | limited free / $9–49/mo | ✗ |
| RunCloud | ✗ | ✗ | $12–18/mo | ✗ |
| CloudPanel | ✗ | partial | ✓ | ✓ |
| ServerPilot | ✗ | ✗ | $12–49/mo | ✗ |
| HestiaCP / VestaCP | ✗ | ✗ | ✓ | ✓ |
| cPanel / Plesk | ✗ | ✗ | $20+/mo | ✗ |
| aaPanel | ✗ | ✗ | ✓ | partial |
| GridPane / xCloud / ServerAvatar | ✗ | ✗ | ✗ | ✗ |
Cipi vs Laravel Forge
Forge is the most direct competitor and the benchmark in the space — it is built by the Laravel team, mature, polished, and battle-tested at scale (970k+ servers, 56M+ deployments). It features a clean GUI, zero-downtime deployments, server monitoring, heartbeats, health checks, and integration with all major cloud providers (AWS, DigitalOcean, Hetzner, Vultr, and others).
Where Cipi wins:
- Cost — Forge costs $12–19/month per server. Running 5 production servers means $60–95/month just for the panel, year after year. Cipi is free, forever, with no per-server fee.
- No SaaS dependency — Forge requires an active subscription and a reachable forge.laravel.com to manage your servers. If the service has downtime or you cancel, you lose the management interface. Cipi lives entirely on your VPS — it works independently of any external service.
- CLI and pipeline automation — Cipi is entirely operable over SSH with composable shell commands. Every operation can be scripted, chained in a pipeline, triggered by a webhook, or run from a GitHub Action. Forge's automation relies primarily on its GUI and deployment scripts.
- Open source — Cipi is MIT-licensed. You can read, audit, fork, and modify every line. Forge is proprietary.
- Simplicity — Forge has grown into a feature-rich platform. Cipi has a deliberately small surface area — one binary, a handful of commands, zero GUI.
- Native Git deploy (GitHub & GitLab) — Cipi provides automatic deployment integration with both GitHub and GitLab out of the box, with webhook-driven zero-downtime releases configured in a single command.
- REST API — Cipi exposes a full REST API for managing hosts and applications programmatically, enabling integration with external tools, dashboards, and custom workflows.
- Dual MCP server — Cipi is the only server panel with a dual MCP architecture: a per-app MCP server for managing a single application and a global MCP server for creating, deploying, modifying, deleting, and managing SSL certificates across all applications on the server. This enables full AI-driven infrastructure management from any MCP-compatible IDE or agent.
- Encryption at rest — All Cipi configuration files (passwords, tokens, SSH keys) are encrypted with AES-256-CBC via the Vault system. Forge stores server credentials on its own SaaS infrastructure; Cipi keeps everything encrypted on your VPS.
- GDPR / data sovereignty — Cipi runs entirely on your server. No infrastructure data is sent to any third party. Forge is operated by a US company and processes your server metadata under US law — requiring a DPA for GDPR compliance. With Cipi there is no third-party data processor to manage.
- Server sync & replication — Built-in encrypted server-to-server sync with cipi sync push enables automated failover replication via cron. Forge has no equivalent feature.
Where Forge wins: GUI, multi-cloud provisioning, teams, monitoring dashboards, heartbeats, health checks, and years of production hardening. If your team is not comfortable with SSH and prefers a visual interface, or if you need to manage servers across multiple cloud accounts, Forge is the better choice.
Cipi vs Ploi & moss.sh
Ploi is a SaaS server management panel with strong Laravel support, a clean UI, and a good set of features (zero-downtime deployments, automatic database backups, S3 file backups, Supervisor queue management, DNS management). It starts at €8/month per server (Basic) up to €30/month (Unlimited). Moss is in the same category — a SaaS "virtual sysadmin" targeting freelancers and agencies, supporting PHP, Laravel, Symfony, WordPress, and Node.js on Ubuntu servers, with plans from $9 to $49/month and a limited free tier (capped at 25 git deploys/month).
Both Ploi and Moss trade a monthly fee and SaaS dependency for a polished GUI and built-in monitoring. Cipi trades the GUI for zero cost, open source code, full CLI control, a complete REST API, native GitHub & GitLab deployment integration, a dual MCP server (per-app and global) for AI-driven management, AES-256-CBC encryption at rest for all configs, and built-in encrypted server-to-server sync for automated failover replication. Notable differences: Moss is not Laravel-exclusive (no built-in Deployer, no artisan shortcuts) and its free tier imposes a deploy quota that runs out quickly on active workflows — Cipi has no deploy limits. On data sovereignty: Ploi is EU-based (Netherlands) and natively GDPR-compliant, which is a genuine advantage over US-based alternatives. Cipi goes further — no infrastructure data leaves your VPS at all, and all configs are encrypted on disk. Neither Ploi nor Moss offer config encryption at rest or built-in server sync. Ploi is the better choice if you want a Forge-like experience at a lower price point. Both remain the right choice over Cipi if your team prefers a GUI and does not need CLI-first pipeline automation or AI integration.
Cipi vs RunCloud
RunCloud is a PHP-oriented SaaS control panel that supports Nginx and Apache, multiple PHP versions, and basic deployment scripts. It is popular for WordPress and generic PHP apps and costs $12–18/month per server.
RunCloud is not Laravel-native: it has no built-in Deployer integration, no artisan shortcut commands, no automatic Supervisor configuration for Laravel queues, and no understanding of Laravel's shared directory structure. Deploying a Laravel app on RunCloud requires manual configuration of many pieces that Cipi handles automatically. If you are deploying Laravel exclusively, Cipi gives you far more out of the box at zero cost.
Cipi vs CloudPanel
CloudPanel is a free, open-source panel supporting PHP, Node.js, Python, and static sites. It is well-maintained, has a polished UI, and includes a CLI. It is a genuinely good option for teams that need to host a mix of application types on the same server.
The key difference is focus: CloudPanel is a generic multi-stack panel; Cipi is a Laravel-only tool. CloudPanel has no Deployer integration, no artisan commands, no automatic queue worker setup, and no webhook deploy system designed for Laravel. The CloudPanel CLI covers server administration but not Laravel application lifecycle management. If you only deploy Laravel, Cipi's opinionated approach gives you a faster, cleaner experience. If you also need to host Node.js, Python, or WordPress alongside Laravel, CloudPanel is the more appropriate choice.
Cipi vs ServerPilot
ServerPilot is a PHP/WordPress-focused SaaS panel ($12–49/month) that automates PHP-FPM and Nginx setup. It has no Laravel-specific tooling, no CLI for application management, and no zero-downtime deploy system. It is primarily aimed at WordPress developers who want automated PHP updates without thinking about server internals. For Laravel developers, it offers little value over Cipi at significant ongoing cost.
Cipi vs GridPane, xCloud, ServerAvatar
These three panels are focused on WordPress and WooCommerce, not PHP in general and certainly not Laravel specifically. They optimize for WordPress-centric workflows (staging clones, multisite, Redis object cache for WP, Cloudflare integration for WordPress). None of them have Laravel-specific tooling. If your workload is WordPress, evaluate them on their own merits. If your workload is Laravel, they are not the right category of tool.
Cipi vs cPanel & Plesk
cPanel and Plesk are traditional hosting control panels designed for shared hosting resellers: they manage email accounts, FTP, DNS zones, databases, and multiple customer websites on one machine. They are expensive (cPanel starts at $20+/month per server, Plesk similarly), GUI-only, built around Apache by default, and carry enormous complexity — most of which is irrelevant when deploying a single-stack Laravel application.
They are the right tool for hosting providers that sell shared hosting accounts to non-technical customers. They are the wrong tool for a development team that owns its own VPS and deploys Laravel apps. Cipi is purpose-built for the latter scenario at zero cost.
Cipi vs Webmin
Webmin is a free, open-source, browser-based interface for low-level Unix system administration — managing users, cron jobs, packages, firewall rules, services, and disk usage through a GUI rather than the shell. It is not a hosting panel or a deployment tool; it is a remote administration console for sysadmins. Virtualmin is the hosting-focused module built on top of Webmin.
Webmin and Cipi solve entirely different problems. Webmin lets you administer a server; Cipi lets you deploy and manage Laravel applications on a server. They are not mutually exclusive — you could technically run both on the same machine — but Cipi's CLI replaces any need for a GUI-based admin console for the tasks it covers (app creation, deploys, PHP management, firewall, SSL, backups). Webmin has no Laravel awareness: no Deployer, no artisan, no queue workers, no zero-downtime release management.
If your team needs a point-and-click interface for general server maintenance (editing
/etc/hosts, managing system users, reviewing logs visually), Webmin is a reasonable
complementary tool. For Laravel deployments specifically, Cipi is more capable and requires no
browser or active session.
Cipi vs Virtualmin, ISPConfig, Froxlor, Ajenti
These are free, open-source hosting panels designed for ISPs, hosting resellers, and system administrators managing many tenants on a single machine. They all share similar characteristics: email servers (Postfix/Dovecot), DNS (BIND), FTP (ProFTPD/vsftpd), multiple virtual host management, and complex multi-user permissions. They are powerful but have steep learning curves and are designed for environments far more complex than a dedicated Laravel application server.
None of them have Laravel-specific tooling. None of them integrate with Deployer. None of them are CLI-first in the sense that every operation can be scripted. For a team deploying Laravel exclusively, they are overbuilt and underspecialized compared to Cipi.
Cipi vs HestiaCP & VestaCP
VestaCP has been effectively abandoned since 2019. HestiaCP is its active community fork — free, open source, and maintained. It is a lighter alternative to cPanel for generic hosting (email, FTP, DNS, multiple PHP versions). Like the other traditional panels, it has no Laravel-specific tooling: no Deployer, no queue workers managed automatically, no artisan integration. Its primary audience is small hosting businesses or individuals who want a GUI for managing multiple websites on one server across multiple stacks. If you only deploy Laravel, Cipi is the more focused and simpler option.
Cipi vs aaPanel
aaPanel (also known as BaoTa Panel) is a free panel of Chinese origin, widely used in the Asia-Pacific region. It supports LNMP/LAMP stacks, Node.js, Docker, and various database engines through a plugin system. It is genuinely capable and has a large user base, but it is a generic multi-stack tool with a GUI-centric philosophy and no Laravel-native automation. Server communication is managed through the panel's own agent, creating a dependency on the panel's infrastructure similar to SaaS tools. For Laravel-specific, CLI-driven deployments, Cipi is a cleaner fit.
Cipi vs CentOS Web Panel (CWP / AlmaLinux Web Panel)
CentOS Web Panel — now repositioned around AlmaLinux — is a traditional hosting panel historically tied to the RHEL ecosystem. Cipi targets Ubuntu exclusively, which is the dominant OS for modern Laravel deployments and where the ondrej/php PPA gives access to all PHP versions from 7.4 to 8.5 within hours of release. Beyond the OS mismatch, CWP is a generic panel with no Laravel tooling and significant complexity overhead for a single-application Laravel server.
Cipi vs ZPanel
ZPanel is effectively unmaintained and should not be used for new deployments. It is included here only because it appears in comparisons on the web. Choose any of the other options listed on this page instead.
Privacy, GDPR, and data sovereignty
This dimension is rarely discussed in tool comparisons but is legally significant for teams that process personal data — EU-based companies, healthcare, fintech, or any product subject to GDPR, HIPAA, or local data protection regulations.
SaaS panels and data residency
When you connect a server to a SaaS panel (Forge, Ploi, RunCloud, ServerPilot, xCloud, GridPane, ServerAvatar), you are providing that service with data about your infrastructure: server IP addresses, hostnames, deployment credentials, environment variable names, and in some cases SSH private keys or deploy tokens. The SaaS provider stores and processes this data on their own infrastructure, which may be located in jurisdictions outside the EU.
- Laravel Forge is operated by Laravel LLC, a US company. Data is processed under US law. Under GDPR, Forge acts as a data processor for your infrastructure data, which requires a signed Data Processing Agreement (DPA). As of 2025, Forge provides a DPA on request, but the data remains on US-based servers.
- Ploi is operated by WebBuilds B.V., a Dutch company — EU-based and natively subject to GDPR. This makes it the most compliant SaaS option in this list for EU teams. Data is hosted in Europe.
- RunCloud, ServerPilot, GridPane, ServerAvatar, xCloud are primarily US or non-EU companies. Their data residency and DPA availability varies; check each provider's privacy policy and DPA status before using them with data subject to GDPR.
- aaPanel is of Chinese origin. Its privacy policy and data handling are subject to Chinese law, including the Personal Information Protection Law (PIPL) and the Data Security Law (DSL), which in certain circumstances allow Chinese government access to data stored on Chinese-operated systems. For EU teams or teams processing sensitive data, this is a significant compliance consideration.
Cipi: self-contained, zero data exfiltration
Cipi is installed directly on your VPS and operates entirely within your own infrastructure. It does not phone home, it does not send telemetry, and it does not contact any external service during normal operation (the only external calls are to GitHub for self-updates and to Let's Encrypt for SSL certificates — both are standard and optional). No server metadata, no credentials, no application data ever leaves your machine.
From a GDPR perspective, this means:
- No third-party data processor for infrastructure management — you are the sole data controller and processor for your server management operations.
- No DPA required with Cipi itself (there is no Cipi company processing your data).
- Full data sovereignty — your server data stays in the jurisdiction where your VPS is hosted, which you choose freely (Hetzner DE, OVH FR, AWS eu-central-1, etc.).
- Easier audit trails — every Cipi action is logged locally in
/var/log/cipi/cipi.log, accessible only to you, not to a third-party dashboard.
When to choose Cipi
- You deploy Laravel exclusively — Cipi is designed for nothing else.
- You want zero monthly panel cost — one server or fifty, the price is the same.
- You need full CLI and pipeline automation — every Cipi operation is scriptable over SSH.
- You need native Git deploy — automatic deployment integration with GitHub and GitLab, with webhook-driven zero-downtime releases.
- You need a REST API — full programmatic control over hosts and applications for integration with external tools, dashboards, and custom workflows.
- You need AI-driven management — Cipi's dual MCP server (per-app and global) enables creating, deploying, modifying, deleting, and managing SSL certificates across all applications from any MCP-compatible AI IDE or agent.
- You want no external dependency — the panel lives on your VPS and works independently of any SaaS.
- You value open source and auditability — every line of Cipi is readable, forkable, and MIT-licensed.
- You need GDPR / data sovereignty compliance — no infrastructure data leaves your VPS, no third-party data processor, no DPA required.
- You need encryption at rest — all configuration files, credentials, and SSH keys are encrypted with AES-256-CBC via the Vault system. Sync archives are also encrypted with a user-defined passphrase.
- You need server-to-server sync and replication — cipi sync push enables automated failover replication via cron, with encrypted transfer and incremental updates.
- You are comfortable with SSH — Cipi has no GUI and does not try to have one.
When to choose something else
- You need a GUI — use Forge or Ploi. Both have excellent Laravel support and polished interfaces worth the monthly fee.
- You host multiple stacks (WordPress + Laravel + Node.js on the same server) — use CloudPanel or HestiaCP.
- You are a hosting reseller managing many customer accounts — use cPanel, Plesk, or ISPConfig.
- You deploy WordPress — use GridPane, RunCloud, or ServerPilot.