I used to trigger Forge deployments by hand. Push to GitHub, switch tabs, click "Deploy Now" in Forge, wait. It works, but it's manual toil that adds up — and it's easy to forget, especially late on a Friday. The fix is straightforward: Forge gives you a deploy webhook you can hit with a single curl command, which means you can drive it from GitHub Actions without any third-party integration.
Here's the full setup I use for zero-downtime Laravel deployments triggered on every push to main.
## The Forge Quick Deploy webhook
Every Forge site has a deployment hook URL. You'll find it under Sites → Your Site → Deployments — look for the "Deploy hook" section and copy the URL. It follows this pattern:
```
https://forge.laravel.com/servers/{SERVER_ID}/sites/{SITE_ID}/deploy/http?token={TOKEN}
```
You trigger it with a simple HTTP POST. That's it. No API key, no OAuth — the token in the URL is the auth. You can pass additional query parameters to annotate the deployment in Forge's history:
- `forge_deploy_branch` — the branch being deployed
- `forge_deploy_commit` — the commit SHA
- `forge_deploy_message` — the commit message
- `forge_deploy_author` — the commit author
These show up in Forge's deployment log, which makes it much easier to see exactly what shipped and when.
## Storing secrets in GitHub Actions
You never want the deploy hook URL in your workflow YAML — it's a live credential. Store it as a repository secret instead.
Go to Repository → Settings → Secrets and variables → Actions → New repository secret and add:
- `FORGE_DEPLOY_URL` — the full webhook URL from Forge
That's the only secret you need for the webhook approach. If you later switch to Forge's REST API (which gives you more control), you'd need `FORGE_API_KEY`, `FORGE_SERVER_ID`, and `FORGE_SITE_ID` separately — but for most projects, the webhook is all you need.
Reference it in your workflow with `${{ secrets.FORGE_DEPLOY_URL }}`.
## The deployment workflow file
Here's the workflow I use. It runs tests first and only deploys if they pass:
```yaml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    services:
      mysql:
        image: mysql:8.0
        env:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: testing
        ports:
          - 3306:3306
        options: >-
          --health-cmd="mysqladmin ping"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=3

    steps:
      - uses: actions/checkout@v4

      - name: Set up PHP 8.4
        uses: shivammathur/setup-php@v2
        with:
          php-version: '8.4'
          extensions: mbstring, pdo_mysql, bcmath
          coverage: none

      - name: Install dependencies
        run: composer install --no-interaction --prefer-dist --optimize-autoloader

      - name: Copy environment file
        run: cp .env.ci .env

      - name: Generate application key
        run: php artisan key:generate

      - name: Run tests
        run: php artisan test --parallel

  deploy:
    needs: test
    runs-on: ubuntu-latest

    steps:
      - name: Trigger Forge deployment
        env:
          FORGE_DEPLOY_URL: ${{ secrets.FORGE_DEPLOY_URL }}
          # Pass the message via env rather than inlining ${{ }} into the
          # script, so quotes in a commit message can't break the command.
          COMMIT_MESSAGE: ${{ github.event.head_commit.message }}
        run: |
          ENCODED_MESSAGE=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))' "$COMMIT_MESSAGE")
          curl -s -o /dev/null -w "%{http_code}" -X POST \
            "${FORGE_DEPLOY_URL}&forge_deploy_branch=${{ github.ref_name }}&forge_deploy_commit=${{ github.sha }}&forge_deploy_author=${{ github.actor }}&forge_deploy_message=${ENCODED_MESSAGE}"
```
The `needs: test` line is the important bit — it makes `deploy` wait for `test` to succeed. If tests fail, nothing ships. To strengthen your CI gates further, running Laravel Pint automatically with pre-commit hooks catches formatting issues before they even reach GitHub, and enforcing architecture rules with Pest's `arch()` helper lets you codify structural rules that fail the build if violated.
The `-w "%{http_code}"` flag on the curl command prints the HTTP response code to stdout, which makes it easy to see in the Actions log whether Forge accepted the request (200) or rejected it.
One thing worth noting: the commit message needs URL-encoding before it can go into the query string, and the Python one-liner is the most portable way to do that across GitHub's runner images. If you don't need the message in Forge's log, you can simplify by omitting `forge_deploy_message` entirely.
## Running migrations safely in production
Forge runs your deployment script on the server when the webhook fires. This is where you control what actually happens during a deploy. Enable zero-downtime mode in Forge (under your site's deployment settings), and your script will have access to three deployment macros:
```bash
$CREATE_RELEASE()

cd $FORGE_RELEASE_DIRECTORY

# Install dependencies in the new release directory
composer install --no-dev --optimize-autoloader --no-interaction

# Run migrations against the live database before traffic is switched over
php artisan migrate --force

# Rebuild the config, route, and view caches
php artisan config:cache
php artisan route:cache
php artisan view:cache

$ACTIVATE_RELEASE()

$RESTART_QUEUES()
```
The key insight here is the order: migrations run before `$ACTIVATE_RELEASE()`. At that point the new code sits in its release directory, but the `current` symlink still points at the previous release — so the migration runs against the live database while the old code is still serving requests.
This means your migrations need to be backwards compatible with the previous release — additive changes only. Adding a nullable column or a new table is safe. Renaming a column that the old code still reads is not. If you need to do a breaking schema change, split it across two deploys: add the new column, deploy, backfill, then remove the old one.
After `$ACTIVATE_RELEASE()` switches the symlink, the new code goes live. The `$RESTART_QUEUES()` macro handles restarting Laravel Horizon or any queue workers so they pick up the new code too. If you're running Horizon across multiple servers, scaling Laravel queues in production covers the `horizon:terminate` deployment pattern and multi-server coordination in detail.
This whole flow means deployments are effectively instantaneous from a user's perspective — Forge just flips a symlink once everything is ready.
## The full picture
Once this is wired up, the deploy process becomes: push to main → GitHub Actions runs tests → if they pass, it hits the Forge webhook → Forge clones the new release, runs your deploy script, and switches traffic over. End to end, that's usually under two minutes for a typical Laravel app.
The main thing I'd add on top of this is a Slack notification at the end of the deploy job — a curl to a Slack incoming webhook with the commit SHA and author. Knowing exactly what shipped and when is useful when you're debugging production issues.
If you're containerising your application with Docker, optimising Laravel Docker images with multi-stage builds pairs naturally with this deployment flow — you build a lean image in CI, then Forge deploys from it. And once you're in production, setting up Sentry self-hosted for error tracking means you'll know about exceptions the moment they happen rather than waiting for a user report.
## FAQ
### Why use a webhook instead of Forge's REST API?
The webhook is simpler — it needs only one secret (the URL) and it's already built into Forge. The REST API would require you to store `FORGE_API_KEY`, `FORGE_SERVER_ID`, and `FORGE_SITE_ID` separately, and you'd have to manage the API call yourself. For most projects, the webhook is all you need. Only switch to the REST API if you need fine-grained control like fetching deployment history or managing multiple sites in one workflow.
### What happens if migrations fail during deployment?
The migration runs against the live database before $ACTIVATE_RELEASE() switches traffic. If the migration fails, the deploy script stops and the new code is never activated — traffic stays on the old release. The failed migration is logged in Forge's deployment history, so you can debug it without affecting users. Once you fix the migration, the next deploy attempt will run it again.
### Do I need backwards-compatible migrations if I deploy multiple times a day?
Yes, always. If you deploy release A (which adds a new column) and then release B (which uses that column), but you have to roll back from B to A, the old code in A must still work against the newer schema. Even with daily deploys, keep migrations additive and deploy schema changes separately from the code that uses them. If the risk is high, deploy the schema change, monitor it, then ship the code that relies on it in a second deploy.
### How do I notify my team when a deployment succeeds or fails?
Add a Slack webhook notification step at the end of your deploy job: use curl to hit your Slack incoming webhook with the commit SHA, author, and result. The official GitHub app for Slack can also subscribe a channel to workflow runs if you'd rather not manage the webhook yourself. Include the status (success/failure) and a link to the commit so your team knows exactly what shipped and can investigate issues quickly.