Autoscaling & Ingress Dynamic Load Balancer

Aviratna
3 min read · May 14, 2021

There is a need for an auto-scaling solution that is configurable, data-driven, policy-based, and multi-cloud, and that can be driven by parameters other than CPU and memory. It can be used as an extension to open-source Kubernetes for autoscaling and Ingress (e.g., for the Kubernetes extension, replace the docker inspect command with kubectl describe pod).

Autoscaling flow diagram:

Need:

  • Scaling Pods based on CPU/memory is easy but not always effective
  • The app might need to scale based on the number of requests received by the load balancer, the number of messages in a message queue, or on a schedule
  • Scale containers based on the number of active connections
  • Scale Pods using custom metrics
  • Autoscaling rules
  • Dynamic update of the load balancer with newly provisioned containers

Flow:

  • Pods push their metrics using Metricbeat (Prometheus, Stackdriver, or Datadog can also be used)
  • The metric collector (e.g., Metricbeat) ships the metrics and logs to a central store (e.g., Elasticsearch)
  • The alerting layer (ElastAlert) fires alerts based on the configured policies
  • The autoscaling plugin calls the custom metrics adaptor registered on the cluster
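The alerting step above can be sketched as an ElastAlert rule that fires when the HAProxy connection count crosses a threshold and notifies the autoscale plugin over HTTP. This is only an illustration: the metric field, threshold, rule name, and plugin endpoint are all assumptions.

```
# Hypothetical ElastAlert rule (values are illustrative)
name: webapp_scale_up_policy          # policy_name
type: metric_aggregation
index: metricbeat-*
buffer_time:
  minutes: 5
metric_agg_key: haproxy.stat.scur     # current HAProxy connections
metric_agg_type: avg
max_threshold: 5000
alert:
  - post
http_post_url: "http://autoscale-plugin:8080/scale"   # assumed plugin endpoint
http_post_static_payload:
  autoscale_action: scale_up_service
  service_name: webapp
```

The static payload carries the same parameters (autoscale_action, service_name) that the plugin reads from the autoscaling labels described later.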

Features:

  • It can be used for HPA (Horizontal Pod Autoscaling)
  • The solution below can monitor various parameters, such as the number of connections to HAProxy and the load on HAProxy
  • Can be extended to any application
  • Dynamic load balancer configuration that any app can use
  • The solution is configuration-based, with no hard coding
  • The container attaches to the load balancer based on an ENV variable or Docker label passed at run time
  • Audit table for tracking containers, timestamps, and the policies applied, to understand scaling patterns

Tools

  • HAProxy
  • Traefik
  • ELK stack
  • Metricbeat (Kube API, Docker API, HAProxy modules, etc.)
  • Autoscale plugin: written in Python
  • ElastAlert
  • Mattermost
  • MongoDB
  • Kibana
  • MariaDB/MySQL
  • ActiveMQ
  • DNS (Bind9)
  • AWS/Google Cloud/Azure

HAProxy Configuration: Enable the Runtime API on HAProxy, which allows remote configuration updates for backend nodes

stats socket <HAProxy>:9999 level admin

For remote API access

server-state-file /var/lib/haproxy/server-state

To store the HAProxy state on the server; in case of an HAProxy restart, the saved state is reloaded

load-server-state-from-file global

server-template

  • server-template node 1-100 localhost:3128 source <haproxy> check disabled inter 5s maxconn 8000
  • It holds the dynamic configuration and helps update HAProxy when backend nodes scale up and down
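With the server-template slots pre-declared in a disabled state, the autoscale plugin can bring a slot online for a new container through the Runtime API exposed by the stats socket. The sketch below shows one way this could be done in Python; the backend name, slot name, and addresses are illustrative, not taken from a real deployment.

```python
import socket

def scale_up_cmds(backend, slot, addr, port):
    """Runtime API commands that point a disabled server-template
    slot at a newly provisioned container and mark it ready."""
    name = f"{backend}/{slot}"
    return [
        f"set server {name} addr {addr} port {port}",
        f"set server {name} state ready",
    ]

def send_cmd(sock_addr, cmd):
    """Send one command to the HAProxy stats socket (the TCP socket
    opened by 'stats socket <HAProxy>:9999 level admin') and return
    the reply."""
    with socket.create_connection(sock_addr) as s:
        s.sendall((cmd + "\n").encode())
        return s.recv(4096).decode()

# Example: enable slot 'node3' of backend 'node' for a new container.
for cmd in scale_up_cmds("node", "node3", "10.0.0.7", 3128):
    print(cmd)
    # send_cmd(("haproxy.example", 9999), cmd)  # against a live HAProxy
```

Scale-down is the mirror image: `set server node/node3 state maint` drains the slot without touching the on-disk configuration.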

Failure scenario handled

  • If a container goes down, it registers back to the load balancer (HAProxy/Traefik) on startup
  • If HAProxy goes down, it gets updated with the backend nodes again
  • If the autoscale plugin is down, HAProxy and the containers are not impacted. Containers can still start, stop, and be created, and they attach themselves to HAProxy (ENV variables are used; a script running inside the container at the entrypoint registers it with HAProxy)
  • The dynamic HAProxy configuration and backend containers are decoupled from the autoscaling solution

Autoscaling Labels

The autoscaling solution can be used by any application container; it just needs to pass the Docker/Kubernetes labels below during docker run/kubectl run, without any change to the application.

Parameters:

Alert parameters:

  • policy_name
  • autoscale_max_instance
  • autoscale_min_instance
  • service_name: for the load balancer service, specifies which service to scale up and down
  • alert_service_name: the container to monitor and scale
  • naming_convention (e.g. webapp_): required for service scaling, as each new container is created with the same naming convention plus an incremental number
  • autoscale_action: scale_up_node, scale_up_service, service_availability

Label & Env to pass during run time:

  • service_name=<>
  • service_availability=true
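Putting the alert parameters and run-time labels together, a container launch might look like the following. All values here are illustrative.

```
docker run -d \
  --label policy_name=webapp_policy \
  --label autoscale_max_instance=10 \
  --label autoscale_min_instance=2 \
  --label service_name=webapp \
  --label alert_service_name=webapp \
  --label naming_convention=webapp_ \
  --label autoscale_action=scale_up_service \
  -e service_name=webapp \
  -e service_availability=true \
  webapp:latest
```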

HAProxy / Traefik as Ingress Load Balancer:

Create the container with environment variables so that on startup it auto-connects to the load balancer and is tracked across start, stop, etc.

  • HAPROXY_SERVER
  • HAPROXY_PORT
  • NODE_COUNT
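A minimal sketch of the entrypoint script that reads these ENV variables and registers the container with HAProxy on startup. The backend name, slot-naming scheme, and service port are assumptions for illustration.

```python
import os
import socket

def registration_cmd(backend, slot, addr, port):
    """Runtime API command a container issues at startup to claim
    a server-template slot and point it at itself."""
    return f"set server {backend}/{slot} addr {addr} port {port}"

def register_self(backend="node", service_port=3128):
    """Entrypoint helper: read HAPROXY_SERVER / HAPROXY_PORT /
    NODE_COUNT and register this container with HAProxy.
    (Slot selection via NODE_COUNT is simplified here.)"""
    haproxy = (os.environ["HAPROXY_SERVER"], int(os.environ["HAPROXY_PORT"]))
    slot = f"node{os.environ.get('NODE_COUNT', '1')}"   # assumed slot naming
    cmd = registration_cmd(backend, slot, socket.gethostname(), service_port)
    with socket.create_connection(haproxy) as s:
        s.sendall(f"{cmd}\nset server {backend}/{slot} state ready\n".encode())

# register_self()  # invoked from the container entrypoint
```

Because this script runs inside every container, registration keeps working even when the autoscale plugin itself is down, which is the decoupling described in the failure scenarios above.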

Solution:

It is a containerized, packaged solution, like a sidecar running as part of any project.

Containerized solution: Docker images are created for HAProxy, Traefik, Elasticsearch, Kibana, ElastAlert, the autoscale plugin, MongoDB, and a MongoDB REST API.
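The packaged stack could be wired together with a compose file along these lines. Image names, versions, and the build path are illustrative only.

```
# Sketch of the packaged stack (all names/versions are illustrative)
version: "3"
services:
  haproxy:
    image: haproxy:2.4            # ingress load balancer
    ports: ["80:80", "9999:9999"] # 9999 = stats socket / Runtime API
  elasticsearch:
    image: elasticsearch:7.10.1   # central metric/log store
  kibana:
    image: kibana:7.10.1          # dashboards
  elastalert:
    image: jertel/elastalert2     # policy-based alerting
  autoscale-plugin:
    build: ./autoscale-plugin     # Python plugin (assumed local build)
  mongodb:
    image: mongo:4.4              # audit/config store
```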

Architecture


Aviratna

Cloud & Enterprise Architect | DevOps | App Modernisation | Automation | Presales