Autoscaling & Ingress Dynamic Load Balancer

Autoscaling flow:

  • Scaling pods based only on CPU/memory is easy but often not effective
  • An app may need to scale based on the number of requests received by the load balancer, the number of messages in a message queue, or on a schedule
  • The goal is to scale containers based on the number of connections
  • Pods are scaled using custom metrics
  • Autoscaling rules are configuration driven
  • The load balancer is updated dynamically with newly provisioned containers
  • Pods push metrics using Metricbeat (Prometheus, Stackdriver, or Datadog can also be used)
  • A metric collector such as Metricbeat ships the metrics and logs to a central location (e.g. Elasticsearch)
  • Alerts are raised based on the configured policy
  • The autoscaling plugin calls the custom metrics adapter registered on the cluster
  • It can be used for HPA (Horizontal Pod Autoscaling)
  • The solution below can monitor various parameters, such as the number of connections to HAProxy and the load on HAProxy
  • It can be extended to any application
  • Dynamic load balancer configuration: any app can use it
  • The solution is configuration based, with no hard coding
  • A container is attached to the load balancer at run time based on an ENV variable or Docker label
  • An audit table tracks the containers, timestamps, and the policy applied, to help understand the pattern
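The decision step in the flow above (metrics arrive, an alert fires, the plugin computes a target count within the policy's bounds) can be sketched as a small pure function. This is a minimal illustration, not the original plugin's code; the function name, the connections-per-replica threshold, and the parameter names are assumptions.

```python
import math

def desired_replicas(connections: int, conns_per_replica: int,
                     min_inst: int, max_inst: int) -> int:
    """Scale so each replica serves at most conns_per_replica connections,
    clamped to the policy's min/max instance counts."""
    needed = math.ceil(connections / conns_per_replica) if connections else min_inst
    return max(min_inst, min(max_inst, needed))

# Example: 850 connections at 200 per replica needs 5 replicas,
# within a policy of min 2 / max 10.
print(desired_replicas(connections=850, conns_per_replica=200,
                       min_inst=2, max_inst=10))  # -> 5
```

The same shape works for any metric the plugin receives from ElastAlert (queue depth, request rate); only the threshold changes.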


Tools & components:

  • HAProxy
  • Traefik
  • ELK stack
  • Metricbeat (modules for the Kube API, Docker API, HAProxy, etc.)
  • Autoscale plugin (written as a Python script)
  • ElastAlert
  • Mattermost
  • MongoDB
  • Kibana
  • MariaDB/MySQL
  • ActiveMQ
  • DNS (BIND9)
  • AWS/Google Cloud/Azure
  • server-template node 1-100 localhost:3128 source <haproxy> check disabled inter 5s maxconn 8000
  • This holds the dynamic configuration and helps update the HAProxy configuration as backend nodes scale up and down
  • If a container goes down, it registers back with the load balancer (HAProxy/Traefik) on startup
  • If HAProxy goes down, it is populated with the backend nodes again
  • If the autoscale plugin is down, it does not impact HAProxy or the containers: containers can still be started, stopped, and created, and they attach themselves to HAProxy (ENV variables are used; a script running inside the container at the entrypoint registers it with HAProxy)
  • Dynamic HAProxy and the backend containers are decoupled from the autoscaling solution
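The entrypoint-time registration described above fills one of the disabled `server-template` slots over HAProxy's Runtime API. The `set server` and `enable server` commands below are standard Runtime API commands; the helper function name and the backend/slot/IP values are illustrative assumptions.

```python
# Sketch of registering a container into a "server-template node 1-100 ..."
# slot via the HAProxy Runtime API (stats socket).

def build_register_cmds(backend: str, slot: str, ip: str, port: int) -> list:
    srv = f"{backend}/{slot}"
    return [
        f"set server {srv} addr {ip} port {port}",  # point the slot at this container
        f"set server {srv} state ready",            # allow traffic once checks pass
        f"enable server {srv}",                     # take the slot out of 'disabled'
    ]

# Each line would be written to HAProxy's stats socket, e.g.:
#   echo "<cmd>" | socat stdio /var/run/haproxy.sock
for cmd in build_register_cmds("webapp_be", "node1", "10.0.0.7", 8080):
    print(cmd)
```

Because the slots already exist in the template, no HAProxy reload is needed; scale-down is the reverse (`set server ... state maint`), which is what keeps the load balancer decoupled from the autoscale plugin's availability.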

Autoscaling Labels

  • policy_name
  • autoscale_max_instance
  • autoscale_min_instance
  • service_name: for load balancer service scaling, specifies which service to scale up and down
  • alert_service_name: the container to monitor and scale
  • naming_convention (e.g. webapp_): required for service scaling, as new containers are created with the same naming convention plus an incremental number
  • autoscale_action: scale_up_node, scale_up_service, service_availability
  • service_name=<>
  • service_availability=true
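Reading these labels off a container (via the Docker API or ENV variables) reduces to parsing a flat key/value map into a policy. A minimal sketch, assuming a `parse_policy` helper and default values that are not from the original solution:

```python
# Parse the autoscaling labels above from a container's Docker labels.
# Key names match the label list; defaults and validation are assumptions.

def parse_policy(labels: dict) -> dict:
    policy = {
        "policy_name": labels["policy_name"],
        "min": int(labels.get("autoscale_min_instance", 1)),
        "max": int(labels.get("autoscale_max_instance", 1)),
        "action": labels.get("autoscale_action", "scale_up_service"),
        "naming_convention": labels.get("naming_convention", ""),
    }
    if policy["min"] > policy["max"]:
        raise ValueError("autoscale_min_instance exceeds autoscale_max_instance")
    return policy

labels = {"policy_name": "web", "autoscale_min_instance": "2",
          "autoscale_max_instance": "10", "naming_convention": "webapp_"}
p = parse_policy(labels)
# A new container follows the naming convention plus an incremental number:
print(f'{p["naming_convention"]}{p["min"] + 1}')  # e.g. webapp_3
```

Validating the policy at parse time (rather than at scale time) is what keeps the solution configuration based with no hard coding.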

Cloud & Enterprise Architect | DevOps | App Modernisation | Automation | Presales
