Deploying Apps When Your Infrastructure Isn't Perfect
Most apps don't launch on pristine infrastructure. If you're like me, you've probably shipped code to a setup that's one power outage or bad config away from chaos. I remember my first real deployment in Lagos back in 2018. We were building a payment gateway for small businesses, bootstrapped with a team of three. Our infra? A single DigitalOcean droplet costing $20 a month, shared with our staging environment. No Kubernetes, no auto-scaling groups - just SSH, rsync, and a prayer that NEPA wouldn't cut the lights mid-deploy. That app processed millions in transactions before we ever touched AWS properly. The point is, imperfect infra is the norm, especially when you're starting out or running lean. Perfect setups are for companies with deep pockets, not the scrappy teams driving Nigeria's tech scene.
You can't wait for perfection. Deploying means getting your app in front of users, gathering feedback, and iterating. But how do you do it without everything crumbling? It starts with mindset: embrace the mess, but build safeguards. I've learned this the hard way, shipping dozens of apps from hackathons to production services. Here's what works when your infra is far from ideal.
Begin with the Bare Minimum
Strip everything down. Ask yourself: what's the smallest stack that runs your app? For a Node.js API, that might be Nginx reverse proxying to PM2 on an Ubuntu VPS. No Docker if it complicates things - just git pull, npm install, and restart. In that Lagos project, we skipped containers entirely. PM2 handled clustering and restarts on crashes, which was crucial during those frequent outages.
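A minimal sketch of that loop, assuming the repo is already cloned at /var/app/api and PM2 manages a process named api (both placeholder names):

```bash
#!/usr/bin/env bash
# Minimal deploy: pull, install, reload. Assumes the repo lives at
# /var/app/api and PM2 already manages a process named "api" (placeholders).
set -euo pipefail

cd /var/app/api
git pull origin main
npm ci --omit=dev   # clean, reproducible install of production deps
pm2 reload api      # cycles cluster workers one at a time instead of killing everything
```

pm2 reload restarts workers one by one rather than all at once, which is exactly why PM2's clustering earns its keep on a single box.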
Why minimal? Because complexity kills in imperfect environments. Fancy orchestration tools assume reliable networks and hardware. In Nigeria, where MTN data flakes out or generators fail, simplicity wins. I once watched a colleague spend days debugging a Kubernetes pod eviction issue caused by spotty internet. We rolled back to Docker Compose on a single machine, and it was stable for months. Start lean, add layers only when pain hits.
Master Manual Deployments
Automation is great, but manual deploys teach you the system intimately. Use a checklist: backup the database, rsync code, run migrations, smoke test endpoints, tail logs for errors. Script it lightly if you want - a bash file with echo statements for steps - but execute it yourself first.
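Here's roughly what that lightly scripted checklist looks like; the database name, paths, server, and health-check URL below are placeholders for your own setup:

```bash
#!/usr/bin/env bash
# Manual deploy checklist, run and watched step by step.
# Database name, paths, server, and URLs are placeholders.
set -euo pipefail

echo "1. Backing up the database..."
pg_dump myapp > /var/backups/myapp-$(date +%Y%m%d%H%M).sql

echo "2. Syncing code..."
rsync -az --exclude node_modules ./ deploy@server:/var/app/api/

echo "3. Running migrations..."
ssh deploy@server 'cd /var/app/api && npm run migrate'   # assumes a "migrate" npm script

echo "4. Restarting..."
ssh deploy@server 'pm2 reload api'

echo "5. Smoke testing..."
curl -fsS https://example.com/health && echo "health check OK"

echo "6. Now tail the logs for a few minutes:"
echo "   ssh deploy@server 'pm2 logs api --lines 100'"
```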
This hands-on approach reveals weak spots. During one deploy, our droplet ran out of disk space mid-migration. Manual access let me prune logs on the fly and finish. With CI/CD pipelines, that might have failed silently. Once you're comfortable, layer on tools like GitHub Actions for builds, but keep the deploy hook simple: SCP artifacts and restart services.
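When you do bolt CI on, the deploy step itself can stay that simple. A sketch, with the server, artifact name, and paths as placeholders:

```bash
# The entire deploy hook once CI has built the artifact.
# Server, artifact name, paths, and PM2 process name are placeholders.
scp build.tar.gz deploy@server:/tmp/
ssh deploy@server 'tar xzf /tmp/build.tar.gz -C /var/app/api && pm2 reload api'
```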
In resource-strapped setups, this also saves cash. No need for Jenkins servers or paid runners when your laptop can push changes.
Rollbacks Are Your Superpower
The best deployments have instant escapes. Design for rollback from day one. Use blue-green if you can - two identical servers, swap traffic with Nginx config. Too fancy? Timestamp your releases in folders: /var/app/releases/20231015. Symlink current to the latest, revert by symlinking back.
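In shell, the whole release-and-rollback dance is a handful of lines; /var/app is a placeholder root:

```bash
# Release: copy the build into a timestamped folder and point "current" at it.
RELEASE=/var/app/releases/$(date +%Y%m%d%H%M%S)
mkdir -p "$RELEASE"
rsync -az ./ "$RELEASE/"
ln -sfn "$RELEASE" /var/app/current   # -n replaces the old symlink rather than descending into it
pm2 reload api

# Rollback: repoint the symlink at the previous release and reload.
PREVIOUS=$(ls -1d /var/app/releases/* | tail -n 2 | head -n 1)
ln -sfn "$PREVIOUS" /var/app/current
pm2 reload api
```

Timestamped folder names sort chronologically, so "previous release" is just the second-to-last entry in the directory listing.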
Database rollbacks are trickier. Schema changes? Use reversible migrations with tools like Alembic or Knex. Data? Feature flags via LaunchDarkly (free tier) or even environment variables. In our payment app, we flagged new fraud checks behind a toggle. When it spiked latency on slow connections, we flipped it off in seconds.
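With plain environment variables, flipping a risky feature off can be a one-liner. The flag name below is made up for illustration, and the app is assumed to load .env (for example via dotenv) at startup:

```bash
# Flip the hypothetical FRAUD_CHECKS_ENABLED flag off, then reload.
# Assumes the app reads .env (e.g. via dotenv) when it starts.
sed -i 's/^FRAUD_CHECKS_ENABLED=.*/FRAUD_CHECKS_ENABLED=false/' /var/app/current/.env
pm2 reload api
```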
Practice rollbacks weekly. Simulate failures: kill processes, corrupt configs. I've seen teams panic without this muscle memory, turning 5-minute issues into outages.
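A drill can be as blunt as this sketch; the health URL is a placeholder and rollback.sh stands in for whatever rollback path you actually use:

```bash
# Failure drill: kill the app outright, confirm PM2 brings it back,
# then time the rollback you'd reach for under pressure.
pm2 pid api | xargs kill -9
sleep 5
curl -fsS https://example.com/health || echo "ALERT: app did not come back"

time ./rollback.sh   # hypothetical script wrapping your symlink-revert steps
```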
Monitor Without Overkill
You don't need Datadog at $15 per host. Free tools suffice: UptimeRobot for HTTP checks every 5 minutes, alerting via WhatsApp integrations. Server-side, Prometheus with Node Exporter for basics - CPU, memory, disk. Grafana dashboards on the same box.
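Node Exporter in particular takes a few commands to stand up; the version number here is just an example, so check the releases page for what's current:

```bash
# Download and run Prometheus Node Exporter for CPU, memory, and disk metrics.
# Version is an example; pin whatever is current.
VERSION=1.8.1
curl -LO https://github.com/prometheus/node_exporter/releases/download/v${VERSION}/node_exporter-${VERSION}.linux-amd64.tar.gz
tar xzf node_exporter-${VERSION}.linux-amd64.tar.gz
./node_exporter-${VERSION}.linux-amd64/node_exporter &

# Confirm it's exporting before pointing Prometheus at port 9100.
curl -s http://localhost:9100/metrics | head
```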
Focus on signals that matter: response times over 500ms, error rates above 1%, sudden traffic drops. Logs? Centralize them with something lightweight like Fluentd shipping to a file, or just lean on journalctl. I set up a simple Telegram bot that pings me on 5xx errors - a game-changer for solo ops.
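That Telegram bot is nothing more than a log tail and a curl to the Bot API. The token, chat ID, and log path below are placeholders:

```bash
#!/usr/bin/env bash
# Watch the Nginx access log and ping Telegram on any 5xx response.
# TOKEN, CHAT_ID, and the log path are placeholders.
TOKEN="123456:ABC-your-bot-token"
CHAT_ID="your-chat-id"

tail -Fn0 /var/log/nginx/access.log | while read -r line; do
  status=$(echo "$line" | awk '{print $9}')   # status code field in the default combined log format
  if [[ "$status" =~ ^5 ]]; then
    curl -s "https://api.telegram.org/bot${TOKEN}/sendMessage" \
      -d chat_id="${CHAT_ID}" \
      -d text="5xx on $(hostname): ${line:0:200}"
  fi
done
```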
In imperfect infra, monitoring spots cascading failures early. Our droplet overheated twice from unoptimized queries; alerts let us scale vertically (bigger instance) before users noticed.
Handle the Inevitable Outages
Downtime happens. Plan communications: a status page on GitHub Pages or Vercel, honest updates like 'Deploying new features, back in 10 mins.' Users forgive if you're transparent.
Redundancy on a budget: Cloudflare for caching and DDoS, multiple cheap VPS across providers (DigitalOcean, Vultr, Linode). Geographic spread helps with local ISP issues. For databases, read replicas on managed services like PlanetScale's hobby tier.
When scaling hits, migrate gradually. We added load balancers only after 10k daily users. Until then, imperfect was perfect enough.
Evolving Beyond Imperfect
Imperfect infra forces smart choices. It taught me to prioritize user value over engineering elegance. That payment app ran two years on basics, funding our growth.
Now, practical steps to deploy confidently today:
- Test rigorously locally first, mimicking prod as closely as you can.
- Build rollback into every change.
- Set three key alerts: uptime, errors, latency.
- Document your deploy process in a README - it'll save your future self.
- Review the last deploy every week: what broke, and how do you prevent it next time?
Your infra might not be perfect, but your app can still thrive. Ship, observe, improve. That's how real products get built.