[{"data":1,"prerenderedAt":838},["ShallowReactive",2],{"doc-es-\u002Fes\u002Fdocs\u002Foperaciones\u002Fprimer-cluster":3,"docs-es-all":769},{"id":4,"title":5,"body":6,"category":751,"description":752,"draft":753,"extension":754,"icon":755,"lastReviewed":756,"meta":757,"navigation":385,"order":227,"path":758,"prerequisites":759,"readingTime":761,"seo":762,"stem":763,"tags":764,"__hash__":768},"docs_es\u002Fes\u002Fdocs\u002Foperaciones\u002Fprimer-cluster.md","Levantar cluster de 3 nodos",{"type":7,"value":8,"toc":736},"minimark",[9,13,18,21,87,90,100,104,107,187,190,199,203,206,249,252,260,263,267,270,302,310,314,325,432,435,439,442,456,459,465,468,490,501,505,508,564,567,602,613,616,621,628,635,639,644,647,685,689,695,710,714,725,732],[10,11,12],"p",{},"Un nodo aislado funciona, pero cualquier reboot tira la aplicación. Para tolerar fallas y hacer deploy sin ventana, necesitas 3 nodos formando un plano de control distribuido.",[14,15,17],"h2",{"id":16},"por-que-3-nodos-y-no-2-o-4","¿Por qué 3 nodos, y no 2 o 4?",[10,19,20],{},"La regla es simple: el consenso entre servidores necesita mayoría. Con 3 nodos, perder 1 aún mantiene 2 vivos, es decir, mayoría. Con 2, perder 1 deja 1 vivo, sin mayoría, y el cluster se traba por seguridad.",[22,23,24,40],"table",{},[25,26,27],"thead",{},[28,29,30,34,37],"tr",{},[31,32,33],"th",{},"Nodos",[31,35,36],{},"Tolera falla de",[31,38,39],{},"Indicado para",[41,42,43,55,65,76],"tbody",{},[28,44,45,49,52],{},[46,47,48],"td",{},"1",[46,50,51],{},"0",[46,53,54],{},"desarrollo, lab",[28,56,57,60,62],{},[46,58,59],{},"3",[46,61,48],{},[46,63,64],{},"producción pequeña\u002Fmedia",[28,66,67,70,73],{},[46,68,69],{},"5",[46,71,72],{},"2",[46,74,75],{},"producción crítica",[28,77,78,81,84],{},[46,79,80],{},"4 o 6",[46,82,83],{},"igual que 3 o 5",[46,85,86],{},"nunca, es desperdicio",[10,88,89],{},"Números pares suman costo sin sumar resiliencia. 
Siempre impar.",[91,92,93],"blockquote",{},[10,94,95,99],{},[96,97,98],"strong",{},"Nota:"," los workers (modo agente puro) no cuentan para esa matemática. Puedes tener 3 servidores + 50 workers y la regla sigue siendo \"1 servidor puede caer\".",[14,101,103],{"id":102},"provisionar-los-3-servidores","Provisionar los 3 servidores",[10,105,106],{},"HeroCtl no exige máquinas iguales, pero los servidores deben tener el mismo perfil de CPU y disco para evitar que un nodo rezagado frene al resto. Un ejemplo barato y funcional:",[22,108,109,131],{},[25,110,111],{},[28,112,113,116,119,122,125,128],{},[31,114,115],{},"Proveedor",[31,117,118],{},"Plan",[31,120,121],{},"Costo\u002Fmes",[31,123,124],{},"CPU",[31,126,127],{},"RAM",[31,129,130],{},"Disco",[41,132,133,153,171],{},[28,134,135,138,141,144,147,150],{},[46,136,137],{},"Hetzner",[46,139,140],{},"CPX21",[46,142,143],{},"€ 7,99",[46,145,146],{},"3 vCPU",[46,148,149],{},"4 GB",[46,151,152],{},"80 GB",[28,154,155,158,161,164,167,169],{},[46,156,157],{},"DigitalOcean",[46,159,160],{},"s-2vcpu-4gb",[46,162,163],{},"US$ 24",[46,165,166],{},"2 vCPU",[46,168,149],{},[46,170,152],{},[28,172,173,176,179,181,183,185],{},[46,174,175],{},"Vultr",[46,177,178],{},"vc2-2c-4gb",[46,180,163],{},[46,182,166],{},[46,184,149],{},[46,186,152],{},[10,188,189],{},"Provisiona 3 máquinas en el mismo datacenter, en red privada si el proveedor lo ofrece. La latencia entre nodos debe quedar bajo 10 ms.",[10,191,192,193,198],{},"Después sigue el paso de ",[194,195,197],"a",{"href":196},"\u002Fes\u002Fdocs\u002Foperaciones\u002Finstalacion","instalación"," en cada una. Cuando termines, tendrás 3 servidores con el binario listo y nada más.",[14,200,202],{"id":201},"inicializar-el-primer-nodo","Inicializar el primer nodo",[10,204,205],{},"El primer nodo \"abre\" el cluster. Los otros se unirán a él. 
Ese comando se ejecuta una sola vez y en una sola máquina.",[207,208,213],"pre",{"className":209,"code":210,"language":211,"meta":212,"style":212},"language-bash shiki shiki-themes github-dark-default","# En el nodo 1, IP privada 10.0.0.1\nsudo heroctl cluster init --advertise 10.0.0.1\n","bash","",[214,215,216,225],"code",{"__ignoreMap":212},[217,218,221],"span",{"class":219,"line":220},"line",1,[217,222,224],{"class":223},"sH3jZ","# En el nodo 1, IP privada 10.0.0.1\n",[217,226,228,232,236,239,242,246],{"class":219,"line":227},2,[217,229,231],{"class":230},"sQhOw","sudo",[217,233,235],{"class":234},"s9uIt"," heroctl",[217,237,238],{"class":234}," cluster",[217,240,241],{"class":234}," init",[217,243,245],{"class":244},"sFSAA"," --advertise",[217,247,248],{"class":244}," 10.0.0.1\n",[10,250,251],{},"Salida esperada:",[207,253,258],{"className":254,"code":256,"language":257},[255],"language-text","cluster initialized\nnode-id:  node-1\nstate:    healthy\nnodes:    1\u002F1\n","text",[214,259,256],{"__ignoreMap":212},[10,261,262],{},"A partir de aquí, el nodo 1 ya acepta jobs. Pero sin los otros dos, no hay tolerancia a fallos.",[14,264,266],{"id":265},"generar-token-de-join","Generar token de join",[10,268,269],{},"Para que otros nodos se unan, necesitas un token. 
Está firmado por el cluster y tiene una validez configurable.",[207,271,273],{"className":209,"code":272,"language":211,"meta":212,"style":212},"# En el nodo 1\nheroctl cluster join-token --ttl 1h\n# eyJhbGciOi...truncado...8X7Z\n",[214,274,275,280,296],{"__ignoreMap":212},[217,276,277],{"class":219,"line":220},[217,278,279],{"class":223},"# En el nodo 1\n",[217,281,282,285,287,290,293],{"class":219,"line":227},[217,283,284],{"class":230},"heroctl",[217,286,238],{"class":234},[217,288,289],{"class":234}," join-token",[217,291,292],{"class":244}," --ttl",[217,294,295],{"class":234}," 1h\n",[217,297,299],{"class":219,"line":298},3,[217,300,301],{"class":223},"# eyJhbGciOi...truncado...8X7Z\n",[91,303,304],{},[10,305,306,309],{},[96,307,308],{},"Atención:"," el token concede entrada al plano de control. Trátalo como contraseña. Usa un TTL corto (1h alcanza para unir 3 nodos) y nunca lo pegues en logs ni en Slack público.",[14,311,313],{"id":312},"conectar-nodos-2-y-3","Conectar nodos 2 y 3",[10,315,316,317,320,321,324],{},"En cada uno de los otros dos nodos, ejecuta el comando de join. 
Sustituye ",[214,318,319],{},"\u003CTOKEN>"," por el token generado y ",[214,322,323],{},"\u003CIP>"," por la IP privada de esa máquina.",[207,326,328],{"className":209,"code":327,"language":211,"meta":212,"style":212},"# En el nodo 2, IP privada 10.0.0.2\nsudo heroctl cluster join \\\n  --token eyJhbGciOi...8X7Z \\\n  --advertise 10.0.0.2 \\\n  --servers 10.0.0.1:8080\n\n# En el nodo 3, IP privada 10.0.0.3\nsudo heroctl cluster join \\\n  --token eyJhbGciOi...8X7Z \\\n  --advertise 10.0.0.3 \\\n  --servers 10.0.0.1:8080\n",[214,329,330,335,350,360,371,380,387,393,406,415,425],{"__ignoreMap":212},[217,331,332],{"class":219,"line":220},[217,333,334],{"class":223},"# En el nodo 2, IP privada 10.0.0.2\n",[217,336,337,339,341,343,346],{"class":219,"line":227},[217,338,231],{"class":230},[217,340,235],{"class":234},[217,342,238],{"class":234},[217,344,345],{"class":234}," join",[217,347,349],{"class":348},"suJrU"," \\\n",[217,351,352,355,358],{"class":219,"line":298},[217,353,354],{"class":244},"  --token",[217,356,357],{"class":234}," eyJhbGciOi...8X7Z",[217,359,349],{"class":348},[217,361,363,366,369],{"class":219,"line":362},4,[217,364,365],{"class":244},"  --advertise",[217,367,368],{"class":244}," 10.0.0.2",[217,370,349],{"class":348},[217,372,374,377],{"class":219,"line":373},5,[217,375,376],{"class":244},"  --servers",[217,378,379],{"class":234}," 10.0.0.1:8080\n",[217,381,383],{"class":219,"line":382},6,[217,384,386],{"emptyLinePlaceholder":385},true,"\n",[217,388,390],{"class":219,"line":389},7,[217,391,392],{"class":223},"# En el nodo 3, IP privada 10.0.0.3\n",[217,394,396,398,400,402,404],{"class":219,"line":395},8,[217,397,231],{"class":230},[217,399,235],{"class":234},[217,401,238],{"class":234},[217,403,345],{"class":234},[217,405,349],{"class":348},[217,407,409,411,413],{"class":219,"line":408},9,[217,410,354],{"class":244},[217,412,357],{"class":234},[217,414,349],{"class":348},[217,416,418,420,423],{"class":219,"line":417},10,[217,419,365],{"class":244},[217,421,422],{"class":244}," 10.0.0.3",[217,424,349],{"class":348},[217,426,428,430],{"class":219,"line":427},11,[217,429,376],{"class":244},[217,431,379],{"class":234},[10,433,434],{},"Cada comando demora de 5 a 15 segundos. El nodo descarga el estado actual, sincroniza y pasa a recibir actualizaciones en tiempo real.",[14,436,438],{"id":437},"verificar-salud","Verificar salud",[10,440,441],{},"Con los 3 nodos conectados, cualquiera de ellos responde con la visión del cluster:",[207,443,445],{"className":209,"code":444,"language":211,"meta":212,"style":212},"heroctl cluster status\n",[214,446,447],{"__ignoreMap":212},[217,448,449,451,453],{"class":219,"line":220},[217,450,284],{"class":230},[217,452,238],{"class":234},[217,454,455],{"class":234}," status\n",[10,457,458],{},"Salida saludable:",[207,460,463],{"className":461,"code":462,"language":257},[255],"cluster:  3 nodes\nquorum:   ok (2\u002F3 required)\nleader:   node-1 (10.0.0.1)\npeers:\n  - node-1  10.0.0.1  server  ready  applied=1247\n  - node-2  10.0.0.2  server  ready  applied=1247\n  - node-3  10.0.0.3  server  ready  applied=1247\nlast_update: 0.4s ago\n",[214,464,462],{"__ignoreMap":212},[10,466,467],{},"Qué mirar:",[469,470,471,478,484],"ul",{},[472,473,474,477],"li",{},[96,475,476],{},"quorum: ok"," — mayoría viva, el cluster acepta escrituras.",[472,479,480,483],{},[96,481,482],{},"applied"," igual en los 3 nodos — todos vieron los mismos cambios. 
Una diferencia pequeña (1–2) es normal entre un pulso de replicación y el siguiente.",[472,485,486,489],{},[96,487,488],{},"last_update"," bajo 5s — replicación fluyendo.",[10,491,492,493,495,496,500],{},"Si ",[214,494,482],{}," diverge mucho (cientos), hay problema de red o disco en un nodo. Mira ",[194,497,499],{"href":498},"#problemas-comunes","Problemas comunes"," abajo.",[14,502,504],{"id":503},"agregar-workers","Agregar workers",[10,506,507],{},"Los workers ejecutan solo contenedores. No votan, no deciden, no guardan estado. Agrega cuantos quieras sin afectar el consenso.",[207,509,511],{"className":209,"code":510,"language":211,"meta":212,"style":212},"# En cada worker\nsudo heroctl agent \\\n  --token \u003CTOKEN> \\\n  --advertise 10.0.0.10 \\\n  --servers 10.0.0.1:8080,10.0.0.2:8080,10.0.0.3:8080\n",[214,512,513,518,529,548,557],{"__ignoreMap":212},[217,514,515],{"class":219,"line":220},[217,516,517],{"class":223},"# En cada worker\n",[217,519,520,522,524,527],{"class":219,"line":227},[217,521,231],{"class":230},[217,523,235],{"class":234},[217,525,526],{"class":234}," agent",[217,528,349],{"class":348},[217,530,531,533,536,539,543,546],{"class":219,"line":298},[217,532,354],{"class":244},[217,534,535],{"class":348}," \u003C",[217,537,538],{"class":234},"TOKE",[217,540,542],{"class":541},"sZEs4","N",[217,544,545],{"class":348},">",[217,547,349],{"class":348},[217,549,550,552,555],{"class":219,"line":362},[217,551,365],{"class":244},[217,553,554],{"class":244}," 10.0.0.10",[217,556,349],{"class":348},[217,558,559,561],{"class":219,"line":373},[217,560,376],{"class":244},[217,562,563],{"class":234}," 10.0.0.1:8080,10.0.0.2:8080,10.0.0.3:8080\n",[10,565,566],{},"Confirma con:",[207,568,570],{"className":209,"code":569,"language":211,"meta":212,"style":212},"heroctl node list\n# node-1   server  ready   3.2 GB free   2 jobs\n# node-2   server  ready   3.4 GB free   1 job\n# node-3   server  ready   3.0 GB free   2 jobs\n# node-10  worker  ready   7.8 GB free   0 jobs\n",[214,571,572,582,587,592,597],{"__ignoreMap":212},[217,573,574,576,579],{"class":219,"line":220},[217,575,284],{"class":230},[217,577,578],{"class":234}," node",[217,580,581],{"class":234}," list\n",[217,583,584],{"class":219,"line":227},[217,585,586],{"class":223},"# node-1   server  ready   3.2 GB free   2 jobs\n",[217,588,589],{"class":219,"line":298},[217,590,591],{"class":223},"# node-2   server  ready   3.4 GB free   1 job\n",[217,593,594],{"class":219,"line":362},[217,595,596],{"class":223},"# node-3   server  ready   3.0 GB free   2 jobs\n",[217,598,599],{"class":219,"line":373},[217,600,601],{"class":223},"# node-10  worker  ready   7.8 GB free   0 jobs\n",[91,603,604],{},[10,605,606,608,609,612],{},[96,607,98],{}," siempre pasa las 3 IPs en ",[214,610,611],{},"--servers",". Si apuntas solo al nodo 1 y se cae durante el join, el agente queda en retry.",[14,614,499],{"id":615},"problemas-comunes",[617,618,620],"h3",{"id":619},"nodo-no-conecta","Nodo no conecta",[10,622,623,624,627],{},"Síntoma: ",[214,625,626],{},"cluster join"," queda en \"connecting...\" por más de 30s.",[10,629,630,631,634],{},"Causa típica: firewall bloqueando el puerto 8080 entre los nodos. Confirma con ",[214,632,633],{},"nc -zv \u003Cip-del-nodo-1> 8080",". Si falla, libera el puerto en el firewall externo del proveedor.",[617,636,638],{"id":637},"estado-divergente-entre-nodos","Estado divergente entre nodos",[10,640,623,641,643],{},[214,642,482],{}," muy diferente entre nodos (>100).",[10,645,646],{},"Causa posible: un nodo quedó offline y está reaplicando historial. Espera unos minutos. 
Si no converge, fuerza resync en el nodo atrasado:",[207,648,650],{"className":209,"code":649,"language":211,"meta":212,"style":212},"sudo systemctl restart heroctl\nheroctl node info \u003Cnode-id>\n",[214,651,652,665],{"__ignoreMap":212},[217,653,654,656,659,662],{"class":219,"line":220},[217,655,231],{"class":230},[217,657,658],{"class":234}," systemctl",[217,660,661],{"class":234}," restart",[217,663,664],{"class":234}," heroctl\n",[217,666,667,669,671,674,676,679,682],{"class":219,"line":227},[217,668,284],{"class":230},[217,670,578],{"class":234},[217,672,673],{"class":234}," info",[217,675,535],{"class":348},[217,677,678],{"class":234},"node-i",[217,680,681],{"class":541},"d",[217,683,684],{"class":348},">\n",[617,686,688],{"id":687},"cluster-sin-mayoria","Cluster sin mayoría",[10,690,623,691,694],{},[214,692,693],{},"quorum: degraded"," o los comandos de escritura se traban.",[10,696,697,698,702,703,706,707,709],{},"Significa que 2 de 3 nodos están fuera. El cluster rechaza escrituras para evitar inconsistencia. Recupera los nodos antes de intentar cambiar nada. Si uno de los servidores se perdió definitivamente, ",[194,699,701],{"href":700},"\u002Fes\u002Fdocs\u002Foperaciones\u002Freferencia-cli","reemplázalo"," con ",[214,704,705],{},"cluster leave"," + nuevo ",[214,708,626],{},".",[617,711,713],{"id":712},"aparecen-dos-lideres","Aparecen dos \"líderes\"",[10,715,716,717,720,721,724],{},"En condiciones normales no ocurre. Si el status de un nodo dice que es líder y otro también, suele haber un problema de reloj. 
Sincroniza con ",[214,718,719],{},"chrony"," o ",[214,722,723],{},"ntpd"," en todos los nodos y reinicia.",[10,726,727,728,709],{},"Próximo paso: ",[194,729,731],{"href":730},"\u002Fes\u002Fdocs\u002Fdeploy\u002Fprimer-deploy","hacer deploy de la primera app",[733,734,735],"style",{},"html pre.shiki code .sH3jZ, html code.shiki .sH3jZ{--shiki-default:#8B949E}html pre.shiki code .sQhOw, html code.shiki .sQhOw{--shiki-default:#FFA657}html pre.shiki code .s9uIt, html code.shiki .s9uIt{--shiki-default:#A5D6FF}html pre.shiki code .sFSAA, html code.shiki .sFSAA{--shiki-default:#79C0FF}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html pre.shiki code .suJrU, html code.shiki .suJrU{--shiki-default:#FF7B72}html pre.shiki code .sZEs4, html code.shiki .sZEs4{--shiki-default:#E6EDF3}",{"title":212,"searchDepth":227,"depth":227,"links":737},[738,739,740,741,742,743,744,745],{"id":16,"depth":227,"text":17},{"id":102,"depth":227,"text":103},{"id":201,"depth":227,"text":202},{"id":265,"depth":227,"text":266},{"id":312,"depth":227,"text":313},{"id":437,"depth":227,"text":438},{"id":503,"depth":227,"text":504},{"id":615,"depth":227,"text":499,"children":746},[747,748,749,750],{"id":619,"depth":298,"text":620},{"id":637,"depth":298,"text":638},{"id":687,"depth":298,"text":688},{"id":712,"depth":298,"text":713},"operacoes","Forma un cluster con 3 servidores en menos de 10 minutos. 
Tolera falla de 1 nodo sin indisponibilidad.",false,"md","i-lucide-network","2026-04-26",{},"\u002Fes\u002Fdocs\u002Foperaciones\u002Fprimer-cluster",[760],"instalacao","8 min",{"title":5,"description":752},"es\u002Fdocs\u002Foperaciones\u002Fprimer-cluster",[765,766,767],"cluster","alta-disponibilidad","primeros-pasos","sWPLxag5yS79EakDMNeiTr12BQmLkXhgakwlxIyd-CA",[770,776,781,786,792,797,801,806,807,811,817,821,827,832],{"path":771,"title":772,"description":773,"category":774,"order":220,"icon":775},"\u002Fes\u002Fdocs\u002Fapi\u002Freferencia-api","Referencia de la API REST","Endpoints, autenticación JWT, ejemplos con curl y patrones de error de la API de HeroCtl.","api","i-lucide-code",{"path":730,"title":777,"description":778,"category":779,"order":220,"icon":780},"Deploy de la primera app","Levanta una aplicación Node.js con base Postgres en 50 líneas de YAML. Incluye health check, rolling update y rollback.","deploy","i-lucide-rocket",{"path":782,"title":783,"description":784,"category":779,"order":227,"icon":785},"\u002Fes\u002Fdocs\u002Fdeploy\u002Frolling-canary-blue-green","Rolling, canary, blue-green y rainbow","Cuatro estrategias de deploy. Cuándo usar cada una, con ejemplos completos y trade-offs honestos.","i-lucide-git-branch",{"path":787,"title":788,"description":789,"category":790,"order":227,"icon":791},"\u002Fes\u002Fdocs\u002Fobservabilidad\u002Fbackup-restauracion","Backup y restauración del estado del cluster","Cómo guardar, programar y restaurar snapshots del plano de control de HeroCtl. Estrategia de disaster recovery.","observabilidade","i-lucide-archive",{"path":793,"title":794,"description":795,"category":790,"order":220,"icon":796},"\u002Fes\u002Fdocs\u002Fobservabilidad\u002Fmetricas-logs","Métricas y logs","Recolección de métricas, logs y traces sin montar una pila de observabilidad externa. 
Cuándo vale, y cuándo integrar con herramienta de fuera.","i-lucide-activity",{"path":196,"title":798,"description":799,"category":751,"order":220,"icon":800},"Instalación","Instala HeroCtl en cualquier servidor Linux con Docker en un solo comando. Cubre prerrequisitos, bootstrap y verificación.","i-lucide-download",{"path":802,"title":803,"description":804,"category":751,"order":362,"icon":805},"\u002Fes\u002Fdocs\u002Foperaciones\u002Fmulti-region","Multi-region (en planificación Q4 2026)","Qué esperar de multi-region en HeroCtl, cómo correr en varias regiones hoy y la hoja de ruta hasta 2027.","i-lucide-globe",{"path":758,"title":5,"description":752,"category":751,"order":227,"icon":755},{"path":700,"title":808,"description":809,"category":751,"order":298,"icon":810},"Referencia completa del CLI","Todos los comandos heroctl con sinopsis, flags y ejemplo. Úsalo como chuleta de mesa.","i-lucide-terminal",{"path":812,"title":813,"description":814,"category":815,"order":227,"icon":816},"\u002Fes\u002Fdocs\u002Fred\u002Ffirewall","Configuración de firewall","Qué puertos usa HeroCtl, cuáles necesitan estar abiertos y cuáles nunca deberían exponerse a internet.","rede","i-lucide-shield",{"path":818,"title":819,"description":820,"category":815,"order":220,"icon":805},"\u002Fes\u002Fdocs\u002Fred\u002Fingress-tls","Ingress y TLS automático","Cómo exponer aplicaciones por el puerto 443 con certificados emitidos y renovados automáticamente, sin operar un router externo.",{"path":822,"title":823,"description":824,"category":825,"order":227,"icon":826},"\u002Fes\u002Fdocs\u002Fseguridad\u002Frbac","RBAC y control de acceso (Business+)","Modelo de roles, políticas y tokens para limitar quién puede enviar, leer y operar el cluster.","seguranca","i-lucide-users",{"path":828,"title":829,"description":830,"category":825,"order":220,"icon":831},"\u002Fes\u002Fdocs\u002Fseguridad\u002Fsecretos","Gestión de secretos","Cómo guardar contraseñas, tokens y claves fuera del spec del job, 
con cifrado en reposo y rotación versionada.","i-lucide-key",{"path":833,"title":834,"description":835,"category":836,"order":220,"icon":837},"\u002Fes\u002Fdocs\u002Ftroubleshooting\u002Fproblemas-comunes","Troubleshooting de problemas comunes","Los 12 problemas más frecuentes en clusters HeroCtl, con síntoma, diagnóstico y corrección paso a paso.","troubleshooting","i-lucide-alert-triangle",1777362182670]