Tags: azure, azure-aks, azure-load-balancer, terraform

How to assign only one static public IP to AKS-multiAZ Loadbalancer

Published on 2020-03-31 22:59:56

I'm setting up a multi-AZ AKS cluster, and I would like to assign a static public IP that I created to its load balancer. Here is what I have:

#### Creating a Public static IP ####
resource "azurerm_public_ip" "lb-public-ip1" {
  name                = "${var.public_ip_name}"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  allocation_method   = "Static"
  ip_version          = "IPv4"
  sku                 = "standard"
  #domain_name_label   =
  tags = {
    Environment = "${var.environment}"
    owner       = "${var.resource_owner}"
    created-by  = "${var.policy_created_by}"
  }
  depends_on    = ["null_resource.module_depends_on"]
}
data "azurerm_public_ip" "lb-public-ip1" {
  name                = "${azurerm_public_ip.lb-public-ip1.name}"
  resource_group_name = "${azurerm_public_ip.lb-public-ip1.resource_group_name}"
  depends_on          = ["null_resource.module_depends_on"]
}
resource "null_resource" "module_depends_on" {
  triggers = {
    value = "${length(var.module_depends_on)}"
  }
}

#### Creating AKS Cluster ####
resource "azurerm_kubernetes_cluster" "k8s" {
    name                = "${var.cluster_name}"
    location            = "${var.location}"
    resource_group_name = "${var.resource_group_name}"
    dns_prefix          = "${var.dns_prefix}"
    kubernetes_version  = "1.14.8"
    linux_profile {
        admin_username = "ubuntu"

        ssh_key {
            key_data = "${var.key_data}"
        }
    }

    default_node_pool {
        availability_zones    = ["1","2"]
        enable_auto_scaling   = true 
        enable_node_public_ip = false 
        max_count             = "8" 
        min_count             = "2" 
        name                  = "default" 
        node_count            = "${var.node_count}"  
        os_disk_size_gb       = "${var.os_disk_size}" 
        type                  = "VirtualMachineScaleSets" 
        vm_size               = "Standard_DS2_v2"
    }

    role_based_access_control {
          enabled = true
    }
    service_principal {
        client_id      = "${var.client_id}"
        client_secret  = "${var.client_secret}"
    }
    addon_profile {
        kube_dashboard {
              enabled = true
        }
        oms_agent {
            enabled                    = "${var.oms_agent_activation}"
            log_analytics_workspace_id = "${var.log_analytics_workspace_id}"
        }
    }
    network_profile {
        network_plugin    = "kubenet"
        load_balancer_sku = "Standard"
        load_balancer_profile {
            outbound_ip_address_ids = [ "${azurerm_public_ip.lb-public-ip1.id}" ]

        }
    }
    tags = {
        Environment = "${var.environment}"
        Name        = "${var.cluster_name}"
        owner       = "${var.resource_owner}"
        created-by  = "${var.policy_created_by}"
    }
    depends_on       = [azurerm_public_ip.lb-public-ip1]
}

With this setup, it created an AKS cluster and a load balancer called kubernetes, and it assigned the static public IP I created to that load balancer, but without any LB rules attached to it. Under "Frontend IP configuration" I can see that it also created another IP, and all of the load balancer rules and health probes are assigned to that automatically created IP. Besides that, it also created two backend pools: kubernetes (2 VMs) and aksOutboundBackendPool (2 VMs).

The Azure docs say: "By default, one public IP will automatically be created in the same resource group as the AKS cluster, if NO public IP, public IP prefix, or number of IPs is specified." But in my case I have specified the public IP!
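
For context, that quoted passage appears to come from the outbound-IP section of the Standard load balancer documentation, and the load_balancer_profile block only controls egress (SNAT) addresses. A minimal sketch of the two main options, assuming the argument names of the azurerm provider version used here:

#### Sketch: outbound IP options in load_balancer_profile ####
network_profile {
  network_plugin    = "kubenet"
  load_balancer_sku = "Standard"

  load_balancer_profile {
    # Option A: let AKS create and manage the outbound IP(s) itself
    # managed_outbound_ip_count = 1

    # Option B: bring your own outbound IP(s), as in the question
    outbound_ip_address_ids = ["${azurerm_public_ip.lb-public-ip1.id}"]
  }
}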

I'm wondering why it created another IP by itself. How can I skip that automatically created IP and only use the IP that I created and assigned in the load_balancer_profile, and how can AKS assign the load balancer rules and health probes to the IP I assigned?

What is the need for multiple public IPs?

In the end, I'm going to use that assigned public IP for the istio ingress gateway. That's why I need only one specific public IP.
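
For what it's worth, pinning a pre-created static IP to the ingress side is usually done on the Kubernetes Service of type LoadBalancer rather than in the AKS resource. Below is a minimal, hypothetical sketch using the Terraform kubernetes provider; the service name, namespace, selector, and ports are assumptions, and with istio you would normally set the equivalent loadBalancerIP and annotation through its own installation values instead. Because the IP lives outside the node resource group (MC_*), the resource-group annotation is needed so the Azure cloud provider can find it.

#### Sketch: pinning the static IP to a LoadBalancer Service (hypothetical) ####
resource "kubernetes_service" "istio_ingress" {
  metadata {
    name      = "istio-ingressgateway"   # assumed name
    namespace = "istio-system"           # assumed namespace
    annotations = {
      # lets the Azure cloud provider find the IP outside the MC_* resource group
      "service.beta.kubernetes.io/azure-load-balancer-resource-group" = "${var.resource_group_name}"
    }
  }
  spec {
    type             = "LoadBalancer"
    load_balancer_ip = "${azurerm_public_ip.lb-public-ip1.ip_address}"
    selector = {
      istio = "ingressgateway"           # assumed pod label of the istio gateway
    }
    port {
      name        = "http2"
      port        = 80
      target_port = 8080                 # assumed container port
    }
    port {
      name        = "https"
      port        = 443
      target_port = 8443                 # assumed container port
    }
  }
}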

Also, which backend pool should I use?

I just need an AKS cluster with high availability for the prod environment, so that if the cluster in one zone goes down, it keeps running in the second zone.

Any help would be appreciated.

Asked by Matrix
Answered by Charles Xu on 2020-02-03 19:54

As far as I know, when you create an AKS cluster through Terraform and assign a static public IP to its outbound, you just need to create the public IP and the AKS cluster; you do not need the data source or the null_resource. So your code could be changed to this:

#### Creating a Public static IP ####
resource "azurerm_public_ip" "lb-public-ip1" {
  name                = "${var.public_ip_name}"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  allocation_method   = "Static"
  ip_version          = "IPv4"
  sku                 = "standard"
  #domain_name_label   =
  tags = {
    Environment = "${var.environment}"
    owner       = "${var.resource_owner}"
    created-by  = "${var.policy_created_by}"
  }
}


#### Creating AKS Cluster ####
resource "azurerm_kubernetes_cluster" "k8s" {
    name                = "${var.cluster_name}"
    location            = "${var.location}"
    resource_group_name = "${var.resource_group_name}"
    dns_prefix          = "${var.dns_prefix}"
    kubernetes_version  = "1.14.8"
    linux_profile {
        admin_username = "ubuntu"

        ssh_key {
            key_data = "${var.key_data}"
        }
    }

    default_node_pool {
        availability_zones    = ["1","2"]
        enable_auto_scaling   = true 
        enable_node_public_ip = false 
        max_count             = "8" 
        min_count             = "2" 
        name                  = "default" 
        node_count            = "${var.node_count}"  
        os_disk_size_gb       = "${var.os_disk_size}" 
        type                  = "VirtualMachineScaleSets" 
        vm_size               = "Standard_DS2_v2"
    }

    role_based_access_control {
          enabled = true
    }
    service_principal {
        client_id      = "${var.client_id}"
        client_secret  = "${var.client_secret}"
    }
    addon_profile {
        kube_dashboard {
              enabled = true
        }
        oms_agent {
            enabled                    = "${var.oms_agent_activation}"
            log_analytics_workspace_id = "${var.log_analytics_workspace_id}"
        }
    }
    network_profile {
        network_plugin    = "kubenet"
        load_balancer_sku = "Standard"
        load_balancer_profile {
            outbound_ip_address_ids = [ "${azurerm_public_ip.lb-public-ip1.id}" ]

        }
    }
    tags = {
        Environment = "${var.environment}"
        Name        = "${var.cluster_name}"
        owner       = "${var.resource_owner}"
        created-by  = "${var.policy_created_by}"
    }
    depends_on       = [azurerm_public_ip.lb-public-ip1]
}

There would then be two backend pools, aksOutboundBackendPool and kubernetes, and one outbound rule, aksOutboundRule, but no LB rules or health probes. Those must be caused by something else.
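
As a side note on the LB rules and probes: on a Standard-SKU AKS load balancer those are typically only created once a Kubernetes Service of type LoadBalancer exists in the cluster (here, the istio ingress gateway); the IP passed in load_balancer_profile is only wired into aksOutboundRule for egress. A small optional sketch for exporting the static address so it can be handed to the istio installation later:

#### Sketch: expose the static IP for later use (optional) ####
output "ingress_public_ip" {
  value = "${azurerm_public_ip.lb-public-ip1.ip_address}"
}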