[{"data":1,"prerenderedAt":790},["ShallowReactive",2],{"blog-en-\u002Fen\u002Fblog\u002Faws-ecs-vs-kubernetes-vs-self-hosted":3,"blog-en-surround-\u002Fen\u002Fblog\u002Faws-ecs-vs-kubernetes-vs-self-hosted":783},{"id":4,"title":5,"author":6,"body":7,"category":763,"cover":764,"date":765,"description":766,"draft":767,"extension":768,"lastReviewed":764,"meta":769,"navigation":770,"path":771,"readingTime":772,"seo":773,"sitemap":774,"stem":775,"tags":776,"__hash__":782},"blog_en\u002Fen\u002Fblog\u002Faws-ecs-vs-kubernetes-vs-self-hosted.md","AWS ECS vs Kubernetes vs self-hosted: three paths to run containers in 2026","HeroCtl team",{"type":8,"value":9,"toc":747},"minimark",[10,14,17,20,25,28,31,60,63,66,70,73,76,82,88,94,100,106,109,113,116,130,134,137,140,154,158,167,193,197,200,217,221,224,227,244,247,251,254,280,284,287,480,483,487,490,496,502,508,514,520,524,527,599,602,625,628,631,635,641,647,653,659,665,671,677,681,684,687,690,693,726,729,740,743],[11,12,13],"p",{},"AWS today sells, at minimum, four different products to run a container in production: ECS (with EC2 or Fargate), EKS, App Runner and Lightsail Containers. It is not catalog redundancy nor internal confusion — it is a direct response to the market. Each covers a distinct slice of those arriving at AWS with the same basic question: how to bring up a container, keep it alive, expose it to the internet, update it without falling, and sleep peacefully.",[11,15,16],{},"ECS is AWS's bet for those who don't want Kubernetes. It is not \"simpler Kubernetes\", it is a proprietary alternative to Kubernetes, written by Amazon engineers before K8s became consensus. EKS is managed Kubernetes, the same off-the-shelf as GKE and AKS. Self-hosted is the exit from AWS entirely — you run on any Linux server, pay only the server, and take your containers with you if the provider changes mood.",[11,18,19],{},"The three paths solve the same problem with very different trade-offs. 
This post puts side by side what each charges, what each binds, and in what context each makes sense — without pretending there is a uniform winner.",[21,22,24],"h2",{"id":23},"aws-ecs-what-it-is-exactly","AWS ECS: what it is exactly",[11,26,27],{},"ECS is Amazon's proprietary orchestrator. It is not open source, doesn't run outside AWS, has no alternative implementation. It was announced in 2014, before Kubernetes gained traction, and AWS has invested in it since then as the \"AWS-native, no K8s\" entry door to the container world.",[11,29,30],{},"The conceptual model is its own:",[32,33,34,42,48,54],"ul",{},[35,36,37,41],"li",{},[38,39,40],"strong",{},"Task definition"," instead of Pod. It is a JSON file describing the container, resources, ports, environment variables, IAM role.",[35,43,44,47],{},[38,45,46],{},"Service"," instead of Deployment. Keeps N tasks running, does health check, integrates with Application Load Balancer.",[35,49,50,53],{},[38,51,52],{},"Cluster"," is just a logical grouping — no paid control plane. The control plane is free (AWS manages it internally, you neither see nor maintain it).",[35,55,56,59],{},[38,57,58],{},"Capacity provider"," defines where tasks run: EC2 (you manage instances) or Fargate (serverless per vCPU\u002FRAM\u002Fsecond).",[11,61,62],{},"Integration with the rest of AWS is the real strong point. Task gets IAM role directly, no auth sidecar. Logs go to CloudWatch without an agent. Images come from ECR without configuring a pull secret. ALB routes traffic to tasks with automatic service discovery. All of that with a decent graphical console, stable CLI, and SDK in every language.",[11,64,65],{},"Compared to K8s, ECS is deliberately simple. There are no CRDs, no operators, no Helm charts, no formalized sidecar pattern, no admission control. You have task, service, cluster — and that's it. 
For a team already deep in AWS, that simplicity is the argument.",[21,67,69],{"id":68},"aws-ecs-where-it-hurts","AWS ECS: where it hurts",[11,71,72],{},"Lock-in is absolute, and worth naming first. Task definition doesn't run outside AWS. ECR is not a portable registry (you can pull, but IAM ties back). ALB is AWS-only. Service discovery via Cloud Map is AWS-only. CloudWatch is AWS-only. You are not adopting \"a way to run containers\" — you are adopting an entire stack that only exists in there. Migrating out requires rewriting each piece.",[11,74,75],{},"Cost appears in layers nobody adds in the first evaluation:",[11,77,78,81],{},[38,79,80],{},"Fargate",": US$0.04 per vCPU-hour + US$0.0044 per GB-hour. A modest app with 0.5 vCPU + 1 GB, running 24×7, costs US$25\u002Fmonth — R$125 at R$5\u002FUSD. Sounds like little until you remember that each microservice is a task, and that typical production has 8 to 15 microservices + queue tasks + cron jobs. Five small applications easily become R$600 of Fargate alone.",[11,83,84,87],{},[38,85,86],{},"CloudWatch Logs",": US$0.50 per GB ingested + US$0.03 per GB stored per month. An app logging 5 GB\u002Fmonth leaves US$2.65 — R$13. Multiplied by ten services, R$130\u002Fmonth in logs alone. And it is the \"cheap\" option — turn on Insights for serious queries, doubles.",[11,89,90,93],{},[38,91,92],{},"Egress",": US$0.09 per GB after the first 100 GB free — R$0.45\u002FGB. An app serving 500 GB of egress per month pays R$180. Video streaming, image downloads, heavy public API: egress becomes the largest bill item, frequently exceeding compute.",[11,95,96,99],{},[38,97,98],{},"Network",": VPC is free, but NAT Gateway costs US$0.045\u002Fhour — fixed US$32\u002Fmonth, R$160 — just to exist, plus US$0.045 per GB processed. You need NAT for any task in a private subnet that calls the internet (update package, talk to external API, send email via SES). 
In production with high availability, the recommendation is NAT in two zones — two NAT Gateways, R$320\u002Fmonth baseline before any traffic.",[11,101,102,105],{},[38,103,104],{},"Application Load Balancer",": US$0.0225\u002Fhour (US$16\u002Fmonth fixed) + US$0.008 per LCU-hour. For an app with moderate traffic, US$25\u002Fmonth is realistic — R$125.",[11,107,108],{},"Realistic sum for a small operation with five apps in Fargate, shared ALB, NAT in one zone only, moderate logs: R$1,000 to R$1,500\u002Fmonth. Grows linear with the number of tasks. Not expensive by enterprise standards, but multiple times the equivalent cost on dedicated VPS.",[21,110,112],{"id":111},"aws-ecs-who-uses-and-rightly-loves-it","AWS ECS: who uses and rightly loves it",[11,114,115],{},"There is a clear profile for whom ECS is the right answer, and we recommend it without reservations for these cases:",[32,117,118,121,124,127],{},[35,119,120],{},"A company that is already 100% AWS, with a team trained on the console and IAM policies. Adding ECS is incremental — doesn't require learning a new tool outside the bubble.",[35,122,123],{},"Burst workloads, scheduled jobs, nightly ETLs. Fargate shines when you want 50 tasks running for 12 minutes a day and zero the rest of the time. Paying per second is honest.",[35,125,126],{},"Compliance that requires specific AWS (FedRAMP High, American federal contracts, certain HIPAA configurations with AWS BAA). When audit asks for AWS, ECS gives you the shortest path without installing K8s on top.",[35,128,129],{},"A team that prioritizes zero-ops over cost and portability. If you don't have anyone to maintain an EC2 instance, Fargate is genuinely less work — you never see a machine, never patch a kernel, never show up to talk about disk saturation.",[21,131,133],{"id":132},"kubernetes-what-it-is-exactly","Kubernetes: what it is exactly",[11,135,136],{},"The audience knows K8s, so the summary here is short. 
De facto standard for orchestration since around 2018, with a giant CNCF ecosystem (cert-manager, ingress controllers, operators for practically any database). Consistent API across clouds, which makes multi-cloud genuinely viable (expensive, but viable). Steep but well-documented learning curve — 300+ manifest lines to get a hello world with TLS live.",[11,138,139],{},"Operating models:",[32,141,142,148],{},[35,143,144,147],{},[38,145,146],{},"Managed"," (EKS, GKE, AKS): provider maintains the control plane. Charges around US$73\u002Fmonth per cluster on the big three — R$365. Plus NAT, ALB, observability, registry. Typical minimum team: 2 SREs.",[35,149,150,153],{},[38,151,152],{},"Self-managed"," with k3s, kubeadm, kops, Rancher: you install on VMs or bare metal. No control plane cost, but you become the platform team. Minimum team: 1 very good SRE or 2 average ones.",[21,155,157],{"id":156},"kubernetes-where-it-hurts","Kubernetes: where it hurts",[11,159,160,161,166],{},"Already covered in depth in ",[162,163,165],"a",{"href":164},"\u002Fen\u002Fblog\u002Fkubernetes-overkill-when-you-dont-need-it","Kubernetes is overkill: when you don't need it",". Direct summary:",[32,168,169,175,181,187],{},[35,170,171,174],{},[38,172,173],{},"Operational cost",": 1 to 2 dedicated SREs, R$30-40k\u002Fmonth each as CLT hires in Brazil. 
That's the largest item on the bill — multiply by twelve months and the cluster has passed the R$500k\u002Fyear mark in people alone.",[35,176,177,180],{},[38,178,179],{},"Curve",": 6+ months until a team is genuinely productive (not \"delivers a manifest\", but \"debugs a problem at three in the morning without destroying production\").",[35,182,183,186],{},[38,184,185],{},"Surrounding stack",": cert-manager, ingress controller, metrics operator, log agent, service mesh if any — each with its own version, its own update policy, its own failure model.",[35,188,189,192],{},[38,190,191],{},"Long manifests",": \"hello world\" with namespace + deployment + service + ingress + cert + RBAC sits at 300 lines. Helm reduces duplication but adds a conceptual layer.",[21,194,196],{"id":195},"kubernetes-who-uses-and-justifies","Kubernetes: who uses and justifies",[11,198,199],{},"The profiles where K8s is the obvious choice, no irony:",[32,201,202,205,208,211,214],{},[35,203,204],{},"Series B+ company with platform team of 3 or more dedicated people. The human scale sustains the complexity.",[35,206,207],{},"Multi-cloud or vendor neutrality as a real requirement (not as a slide aspiration). You will effectively run on two clouds, and K8s is the only mature abstraction covering both.",[35,209,210],{},"Workloads that depend on specific mature operators: Postgres operator with replication, Kafka operator with balancing, Cassandra operator with bootstrap. Rewriting that \"by hand\" costs more than the cluster.",[35,212,213],{},"Nominal compliance — some frameworks list Kubernetes by name in controls. If your auditor needs to point to a SOC2 certificate that says \"Kubernetes 1.28\", the tool has to be called Kubernetes.",[35,215,216],{},"Operation above 50 servers in sustained production. 
At that size, the CNCF ecosystem gives you tools you would have to build from scratch in smaller alternatives.",[21,218,220],{"id":219},"modern-self-hosted","Modern self-hosted",[11,222,223],{},"The third option is what changed in the last two years. Self-hosted stopped being \"Docker Compose on a server with luck\" and became a category with serious products — HeroCtl is one of them, but Coolify, Dokploy, Caprover and others also occupy the space.",[11,225,226],{},"The common proposition:",[32,228,229,232,235,238,241],{},[35,230,231],{},"A binary (or simple Docker image) installed on N Linux servers with Docker.",[35,233,234],{},"Replicated control plane, with automatic coordinator election. You lose one server, the cluster keeps going.",[35,236,237],{},"Embedded router, automatic Let's Encrypt certificates.",[35,239,240],{},"No cloud provider dependency — runs on any VPS, any bare metal, any mixture.",[35,242,243],{},"Honest commercial model: permanent free Community without artificial feature gate, paid Business published for those needing SSO\u002Faudit\u002FSLA, Enterprise for contracts with escrow and 24×7 support.",[11,245,246],{},"Cost reduces to two lines: the VPS and the time of the part-time dev looking after it. Three US$24\u002Fmonth droplets each on DigitalOcean — R$360\u002Fmonth — sustain an operation that on ECS would sit between R$1,500 and R$3,000.",[21,248,250],{"id":249},"self-hosted-where-it-hurts","Self-hosted: where it hurts",[11,252,253],{},"Honesty is worth more than a brochure. Where self-hosted is not the answer:",[32,255,256,262,268,274],{},[35,257,258,261],{},[38,259,260],{},"You are responsible for everything",". No AWS support to call when things go wrong. Active community helps, and paid Business support exists — but the first line of defense is you reading the log.",[35,263,264,267],{},[38,265,266],{},"Healthy scale range: 1 to 500 servers",". Above that, specific Kubernetes tooling still wins. 
It is not a product defect — it is where CNCF spent ten years polishing things nobody else has.",[35,269,270,273],{},[38,271,272],{},"Specific enterprise compliance",". If your auditor needs the orchestrator to appear on a pre-approved list of suppliers, and that list lists AWS\u002FAzure\u002FGCP\u002FK8s, new self-hosted excludes you by default.",[35,275,276,279],{},[38,277,278],{},"Native AWS integrations",": Cognito as auth, S3 with IAM directly on the task, RDS with IAM auth — all of that can be adapted, but the adaptation is extra work. On ECS it works without thinking.",[21,281,283],{"id":282},"side-by-side-twelve-criteria-without-caveat","Side by side: twelve criteria without caveat",[11,285,286],{},"The table below is the honest version of the decision. R$5\u002FUSD FX used in all real estimates.",[288,289,290,309],"table",{},[291,292,293],"thead",{},[294,295,296,300,303,306],"tr",{},[297,298,299],"th",{},"Criterion",[297,301,302],{},"AWS ECS (Fargate)",[297,304,305],{},"Kubernetes (EKS)",[297,307,308],{},"Self-hosted",[310,311,312,327,341,355,369,383,397,411,424,438,452,466],"tbody",{},[294,313,314,318,321,324],{},[315,316,317],"td",{},"Minimum BRL\u002Fmonth cost — 5 small apps",[315,319,320],{},"R$1,000-1,500",[315,322,323],{},"R$1,500-2,500 + SRE team",[315,325,326],{},"R$300-500 + part-time dev",[294,328,329,332,335,338],{},[315,330,331],{},"Predictable cost month over month",[315,333,334],{},"No — egress + log vary",[315,336,337],{},"No — sum of many lines",[315,339,340],{},"Yes — VPS is fixed",[294,342,343,346,349,352],{},[315,344,345],{},"Lock-in (0-10)",[315,347,348],{},"10 — task def is AWS-only",[315,350,351],{},"4 — portable manifests with caveats",[315,353,354],{},"1 — any Linux VPS",[294,356,357,360,363,366],{},[315,358,359],{},"Time to first deploy",[315,361,362],{},"2-4 hours (with IAM well done)",[315,364,365],{},"1-3 days",[315,367,368],{},"15-30 minutes",[294,370,371,374,377,380],{},[315,372,373],{},"Learning 
curve",[315,375,376],{},"Medium (own concepts + AWS)",[315,378,379],{},"High (6+ months for real productivity)",[315,381,382],{},"Low (Heroku model)",[294,384,385,388,391,394],{},[315,386,387],{},"Operator ecosystem",[315,389,390],{},"Restricted to AWS catalog",[315,392,393],{},"Hundreds, mature",[315,395,396],{},"Limited, growing",[294,398,399,402,405,408],{},[315,400,401],{},"Multi-region",[315,403,404],{},"Native (AWS regions)",[315,406,407],{},"Native via federation",[315,409,410],{},"Manual, with care",[294,412,413,416,419,421],{},[315,414,415],{},"24\u002F7 support",[315,417,418],{},"Paid separately (Business+)",[315,420,418],{},[315,422,423],{},"Paid Enterprise",[294,425,426,429,432,435],{},[315,427,428],{},"Nominal enterprise compliance",[315,430,431],{},"Strong (FedRAMP, HIPAA)",[315,433,434],{},"Strong (listed by name)",[315,436,437],{},"Under construction",[294,439,440,443,446,449],{},[315,441,442],{},"Ideal scale range",[315,444,445],{},"1-200 tasks",[315,447,448],{},"50-100,000 servers",[315,450,451],{},"1-500 servers",[294,453,454,457,460,463],{},[315,455,456],{},"Minimum team",[315,458,459],{},"1 dev + an AWS docs reader",[315,461,462],{},"2 SREs",[315,464,465],{},"1 part-time dev",[294,467,468,471,474,477],{},[315,469,470],{},"Migration pain (leaving)",[315,472,473],{},"High — rewrite stack",[315,475,476],{},"Low — manifests are portable",[315,478,479],{},"Minimal — Docker is Docker",[11,481,482],{},"The column that matters varies by context, and that is exactly the point: there is no uniform winner.",[21,484,486],{"id":485},"decision-by-context","Decision by context",[11,488,489],{},"Practical translation of the tables into direct recommendation. If your scenario fits one of these five, the answer is the indicated one — without flourish.",[11,491,492,495],{},[38,493,494],{},"\"We're already on AWS, small team, contracts require AWS.\"","\nECS Fargate. Lock-in is already a done deal, and Fargate eliminates the work of managing instances. 
You trade predictable cost for zero-ops, which is the right trade-off when the team has few hands and can't stop to patch a kernel.",[11,497,498,501],{},[38,499,500],{},"\"Multi-cloud strategy or compliance requires neutrality between vendors.\"","\nKubernetes. If the team is strong, k3s self-managed on VMs drastically reduces control plane cost. If the team is average, managed EKS in a primary cloud and equivalent in the other. Don't pay K8s cost without the real requirement — the real requirement exists and the tool fits.",[11,503,504,507],{},[38,505,506],{},"\"Early-stage startup, costs hurt, team is not AWS specialist.\"","\nSelf-hosted. HeroCtl, Coolify, Dokploy — the segment matured. Three droplets, a part-time dev, R$400\u002Fmonth of infra, and you have the entire operation under control. When the product gains traction and the company gets large, you can reassess — but getting there spending R$400\u002Fmonth is the difference between closing runway and not closing.",[11,509,510,513],{},[38,511,512],{},"\"Big enterprise, compliance lists K8s by name, platform team with 5+ people.\"","\nManaged EKS. Control plane cost disappears in the budget, the team absorbs the complexity, and audit is satisfied. This is the canonical K8s case — don't try to economize here.",[11,515,516,519],{},[38,517,518],{},"\"Solo dev experimenting, side project, MVP.\"","\nRender or Railway on hosted cloud (pay only what you use, zero ops), or Coolify on a US$5 VPS. Don't build a cluster for a project that may die in three months. When it passes US$1k MRR and it becomes clear it'll survive, migrate to HeroCtl or Dokploy on a three-VPS cluster.",[21,521,523],{"id":522},"migrating-from-ecs-to-self-hosted-practical-path","Migrating from ECS to self-hosted: practical path",[11,525,526],{},"For those already on ECS who want to reduce the bill, the conceptual mapping is simpler than it seems. 
The primitives match almost one-to-one:",[288,528,529,539],{},[291,530,531],{},[294,532,533,536],{},[297,534,535],{},"ECS",[297,537,538],{},"HeroCtl",[310,540,541,548,555,561,568,576,583,591],{},[294,542,543,545],{},[315,544,40],{},[315,546,547],{},"Job spec",[294,549,550,552],{},[315,551,46],{},[315,553,554],{},"Group inside the job",[294,556,557,559],{},[315,558,52],{},[315,560,52],{},[294,562,563,565],{},[315,564,80],{},[315,566,567],{},"Dedicated server (VPS or bare metal)",[294,569,570,573],{},[315,571,572],{},"ALB",[315,574,575],{},"Integrated router, with automatic TLS",[294,577,578,580],{},[315,579,86],{},[315,581,582],{},"Single embedded writer",[294,584,585,588],{},[315,586,587],{},"Task IAM role",[315,589,590],{},"Orchestrator secrets",[294,592,593,596],{},[315,594,595],{},"ECR",[315,597,598],{},"ECR keeps working, or internal registry",[11,600,601],{},"Practical path for a simple app:",[603,604,605,613,616,619,622],"ol",{},[35,606,607,608,612],{},"Bring up three Linux VPS with Docker. Install the orchestrator on one of the three (",[609,610,611],"code",{},"curl -sSL get.heroctl.com\u002Finstall.sh | sh","). The other two join as agents — single command for each.",[35,614,615],{},"Take the task definition in JSON. Map: container, resources, ports, environment variables, secrets. Becomes a configuration file of ~50 lines.",[35,617,618],{},"Submit via CLI. The cluster decides where to run, opens the port, registers in the router, issues a Let's Encrypt certificate, starts serving.",[35,620,621],{},"Point the domain DNS to the IP of the coordinator server (or to a DNS-based Load Balancer if you have more than one region).",[35,623,624],{},"Test in staging for a week. If OK, repeat for production and decommission ECS.",[11,626,627],{},"Average time per simple app: 1 to 3 hours, including the test. Apps with strong dependence on IAM\u002FCognito\u002FSQS take longer — you need to adapt the AWS call to go via SDK + key (instead of implicit IAM role). 
Stateless HTTP apps are almost mechanical.",[11,629,630],{},"Typical annual savings on modest scale-up (15 microservices, 1 ALB, NAT in one zone, moderate logs): R$50,000 to R$150,000 leaving the AWS bill and turning into salary or marketing. The human component also changes — you stop needing a dedicated AWS specialist.",[21,632,634],{"id":633},"questions-that-come-up","Questions that come up",[11,636,637,640],{},[38,638,639],{},"Can I use ECS Fargate without ALB to save?","\nTechnically yes, exposing the task with public IP — but you lose automatic TLS, load balancing, layer 7 health check, and service discovery. For real production this is not savings, it is debt. Worth it only for internal workloads without ingress (queue jobs, ETLs).",[11,642,643,646],{},[38,644,645],{},"Is EKS Anywhere viable outside AWS?","\nIt exists and works, but the license cost is high and integration with the AWS ecosystem is partial — you get the \"EKS\" name without getting native integration. To run K8s outside AWS, k3s or kubeadm have better cost-benefit in practice.",[11,648,649,652],{},[38,650,651],{},"Migrating from ECS to self-hosted, how long realistically?","\nSimple apps: 1-3 hours each. Entire operation with 10-15 services: 2 to 4 weeks if you go carefully, with staging tests. The bottleneck is usually dependencies on AWS-specific services (SQS, SNS, Cognito), not the orchestrator itself.",[11,654,655,658],{},[38,656,657],{},"Does keeping both (ECS + self-hosted in parallel) make sense?","\nIt does, during migration and sometimes permanently. Workloads that depend heavily on native IAM (direct S3 access without keys, for example) can stay in ECS. The rest goes to the self-hosted cluster. DNS routing solves the traffic split.",[11,660,661,664],{},[38,662,663],{},"Compliance requires AWS, can I use HeroCtl on EC2?","\nYou can. HeroCtl runs on any Linux VPS with Docker — including EC2 instances. 
You lose the advantage of total portability, but keep the simple operational model and predictable cost. It is a good option for teams that need to stay inside AWS by contract but want to escape native complexity.",[11,666,667,670],{},[38,668,669],{},"Is ECS App Runner a good alternative?","\nApp Runner is AWS's offer for \"Heroku on top of ECS\". Works for very simple apps (one image, one port, automatic build from Git). Charges more than equivalent Fargate and has less control. For weekend MVP, it is reasonable. For serious production, ECS direct with Fargate gives more flexibility for the same money.",[11,672,673,676],{},[38,674,675],{},"GKE Autopilot vs Fargate vs HeroCtl?","\nGKE Autopilot and Fargate occupy the same conceptual niche: serverless per pod\u002Ftask, you don't see the node. GKE Autopilot is generally cheaper for stable workloads, and more expensive for burst. Both have strong lock-in. HeroCtl attacks the problem from the other side — you see the server on purpose, pay for it whole, and the orchestrator distributes the loads. For long-running stable workload, comes out cheaper. For extreme burst workload, serverless wins.",[21,678,680],{"id":679},"closing","Closing",[11,682,683],{},"There is no path that wins on all twelve criteria of the table. ECS wins on AWS integration, loses on portability. Kubernetes wins on ecosystem, loses on simplicity. Self-hosted wins on cost and clarity, loses on specific extreme-scale tooling.",[11,685,686],{},"The right choice depends on your team, your compliance, your contracts, and the company stage. 
The wrong choice is not doing the math — adopting ECS because \"it's the AWS standard\" without summing Fargate + ALB + NAT + CloudWatch + egress; adopting Kubernetes because \"it's what's trending\" without having the 2 SREs; or staying on fragile homemade self-hosted when the operation has already passed 50 servers and the CNCF ecosystem has come to justify itself.",[11,688,689],{},"If you are reviewing the container orchestration stack in 2026, the practical recommendation is simple: measure the real cost of the previous twelve months, identify which of the five profiles your team fits, and decide. If the profile is \"early startup, small team, cost hurts\", the lowest-risk path is to try self-hosted in parallel for one month, before migrating.",[11,691,692],{},"To start:",[694,695,700],"pre",{"className":696,"code":697,"language":698,"meta":699,"style":699},"language-bash shiki shiki-themes github-dark-default","curl -sSL get.heroctl.com\u002Finstall.sh | sh\n","bash","",[609,701,702],{"__ignoreMap":699},[703,704,707,711,715,719,723],"span",{"class":705,"line":706},"line",1,[703,708,710],{"class":709},"sQhOw","curl",[703,712,714],{"class":713},"sFSAA"," -sSL",[703,716,718],{"class":717},"s9uIt"," get.heroctl.com\u002Finstall.sh",[703,720,722],{"class":721},"suJrU"," |",[703,724,725],{"class":709}," sh\n",[11,727,728],{},"Three Linux VPS, ten minutes per server, and you have a cluster with replicated control plane, integrated router, automatic certificates. From there, it is a matter of moving service by service from the AWS bill to your own cluster.",[11,730,731,732,734,735,739],{},"Related reading: ",[162,733,165],{"href":164}," explains in more detail when the colossus doesn't fit; ",[162,736,738],{"href":737},"\u002Fen\u002Fblog\u002Fk3s-vs-heroctl-when-each-fits","k3s vs HeroCtl: when each one makes sense"," compares the two lightweight alternatives within the self-hosted category.",[11,741,742],{},"Container orchestration is a long-term decision. 
Make it by the right math, not by inertia.",[744,745,746],"style",{},"html pre.shiki code .sQhOw, html code.shiki .sQhOw{--shiki-default:#FFA657}html pre.shiki code .sFSAA, html code.shiki .sFSAA{--shiki-default:#79C0FF}html pre.shiki code .s9uIt, html code.shiki .s9uIt{--shiki-default:#A5D6FF}html pre.shiki code .suJrU, html code.shiki .suJrU{--shiki-default:#FF7B72}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}",{"title":699,"searchDepth":748,"depth":748,"links":749},2,[750,751,752,753,754,755,756,757,758,759,760,761,762],{"id":23,"depth":748,"text":24},{"id":68,"depth":748,"text":69},{"id":111,"depth":748,"text":112},{"id":132,"depth":748,"text":133},{"id":156,"depth":748,"text":157},{"id":195,"depth":748,"text":196},{"id":219,"depth":748,"text":220},{"id":249,"depth":748,"text":250},{"id":282,"depth":748,"text":283},{"id":485,"depth":748,"text":486},{"id":522,"depth":748,"text":523},{"id":633,"depth":748,"text":634},{"id":679,"depth":748,"text":680},"comparison",null,"2026-03-04","ECS is AWS's offer for those escaping Kubernetes. Kubernetes is Kubernetes. Self-hosted is the path out of AWS. 
Each makes sense in specific contexts — no uniform winner.",false,"md",{},true,"\u002Fen\u002Fblog\u002Faws-ecs-vs-kubernetes-vs-self-hosted","15 min",{"title":5,"description":766},{"loc":771},"en\u002Fblog\u002Faws-ecs-vs-kubernetes-vs-self-hosted",[777,778,779,780,763,781],"aws","ecs","kubernetes","fargate","self-hosted","CmcR4T0P7aqHTkKD7laYYeTXUCOtwQwpAa8lPIlZeaM",[764,784],{"title":785,"path":786,"stem":787,"description":788,"date":789,"category":763,"children":-1},"CapRover vs Coolify vs Dokploy: the simple segment compared in 2026","\u002Fen\u002Fblog\u002Fcaprover-vs-coolify-vs-dokploy","en\u002Fblog\u002Fcaprover-vs-coolify-vs-dokploy","The three dominant panels for running 'Heroku on 1 VPS'. Each bets on a different philosophy — maturity, feature richness, or low weight. An honest comparison to choose without regret.","2026-01-19",1777362214511]