
EKS Cluster with private access endpoint and a bastion host

You can connect to the private Kubernetes access endpoint via an AWS Systems Manager session.

The Bastion Host must be in the same VPC as the EKS cluster and must have access to the Kubernetes API server endpoint.

INBOUND access to port 22 of the Bastion Host must be open (for the initial SSH connection used to install the SSM Agent).

  1. Create a Systems Manager Host Configuration for the bastion host. You can follow the AWS Systems Manager Quick Setup guide to create one.

    For example, you can open this link in the browser: https://<eks-region>.console.aws.amazon.com/systems-manager/quick-setup?region=<eks-region>

    Choose Create to create a new Host Configuration.

    Choose the Host Management option and choose Create.

In the creation wizard, select the instances based on tags or select the instance IDs manually, and choose Create.

Once the Host Configuration is complete, proceed with the next steps.
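
If you prefer the CLI to the Quick Setup console flow, the essential outcome for a single bastion host is that the instance's IAM role is allowed to talk to Systems Manager. A minimal sketch, assuming a role named BastionHostRole (a hypothetical name) is already attached to the bastion instance's profile:

aws iam attach-role-policy \
    --role-name BastionHostRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

Quick Setup additionally configures scheduled agent updates and inventory collection; the policy attachment above only covers the permissions Session Manager needs.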

  2. Install the SSM Agent on the Bastion Host. For example, you can run these commands on an Ubuntu 18.04-based Bastion Host.

ssh -i <pemfile> ubuntu@<bastionhost-public-ip>
sudo apt update
sudo snap install amazon-ssm-agent --classic
sudo snap switch --channel=candidate amazon-ssm-agent
sudo snap start amazon-ssm-agent
sudo snap services amazon-ssm-agent

You should see the SSM Agent listed as below.

Service                            Startup  Current  Notes
amazon-ssm-agent.amazon-ssm-agent  enabled  active   -
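
Once the agent is running and the Host Configuration from step 1 has taken effect, you can verify from your local machine that the bastion host is registered as a managed instance. A minimal check, assuming the AWS CLI is configured with credentials for the EKS account and region:

aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=<instance ID of the bastion host>" \
    --query "InstanceInformationList[].PingStatus"

A PingStatus of Online means Session Manager can reach the instance.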

Note: If the SSH connection to the bastion host fails with this error, make sure port 22 is open in the security group for INBOUND access from the global CIDR 0.0.0.0/0.

ssh: connect to host <bastionhost-public-ip> port 22: Operation timed out
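
If the rule is missing, you can add it with the AWS CLI. A minimal sketch, with sg-0123456789abcdef0 as a placeholder for the bastion host's security group ID:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0

Restricting the CIDR to your own IP address instead of 0.0.0.0/0 is the safer option if your IP is known.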

  3. Install the SSM Session Manager plugin on your local machine, for example as shown below.
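
For example, on an Ubuntu-based local machine the plugin can be installed from the package AWS publishes (see the AWS documentation for other operating systems):

curl "https://s3.amazonaws.com/session-manager-downloads/release/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
sudo dpkg -i session-manager-plugin.deb
session-manager-plugin

The last command should report that the Session Manager plugin was installed successfully.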

  4. Create an SSM IAM User with the SSM privileges (AmazonSSMFullAccess) and provide programmatic access to the AWS APIs.

  5. Grab the SSM IAM User credentials (Access Key ID and Secret Access Key).
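
Steps 4 and 5 can also be done with the AWS CLI. A rough sketch, using ssm-session-user as a placeholder name for the IAM user:

aws iam create-user --user-name ssm-session-user
aws iam attach-user-policy \
    --user-name ssm-session-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess
aws iam create-access-key --user-name ssm-session-user

The create-access-key output contains the AccessKeyId and SecretAccessKey needed in the next step; the secret is shown only once, so store it safely.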

  6. In your local machine, configure the SSM IAM User credentials and the EKS region.

    aws configure

Make sure the Access Key, Secret Key, and Region are added to the ~/.aws/config file.

cat ~/.aws/config 
[default]
aws_access_key_id=<SSMUser_ACCESS_KEY>
aws_secret_access_key=<SSMUser_SECRET_KEY>
region = <EKS_REGION>
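
To confirm the credentials are picked up, run a quick identity check; it should return the ARN of the SSM IAM User:

aws sts get-caller-identity
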
  7. In your local machine, start the SSM port forwarding session.

aws ssm start-session --target <instance ID of the bastion host> --document-name AWS-StartPortForwardingSessionToRemoteHost  --parameters '{"host":["EKS API Server endpoint DNS Name"],"portNumber":["443"], "localPortNumber":["9444"]}'

If you get the below error, make sure the EKS endpoint is just a DNS name and NOT a URL.

SessionId: gp-ssm-user-0057180c6d1d1ec44 : lookup https://CC725FE9FA351D0C306F597C47491BE7.gr7.us-east-1.eks.amazonaws.com: no such host
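
You can look up the endpoint and strip the scheme in one step. A small sketch, assuming the configured credentials are allowed to describe the cluster:

aws eks describe-cluster --name <EKS_CLUSTER_NAME> --region <EKS_REGION> \
    --query "cluster.endpoint" --output text | sed 's|https://||'

Pass the resulting DNS name (without https://) as the host parameter of the port forwarding session.
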
  8. Create the Kubernetes configuration file - kube.conf. Edit the template below and replace the placeholders based on your EKS cluster.

---
apiVersion: v1
clusters:
- cluster:
    server: https://localhost:9444
    insecure-skip-tls-verify: true
  name: <EKS_CLUSTER_ARN>
contexts:
- context:
    cluster: <EKS_CLUSTER_ARN>
    user: cluster/<EKS_CLUSTER_NAME>
  name: <EKS_CLUSTER_ARN>
current-context: <EKS_CLUSTER_ARN>
kind: Config
preferences: {}
users:
- name: cluster/<EKS_CLUSTER_NAME>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - <EKS_REGION>
      - eks
      - get-token
      - --cluster-name
      - <EKS_CLUSTER_NAME>
      command: /usr/local/bin/aws
      env:
      - name: AWS_ACCESS_KEY_ID
        value:  <EKS_OWNER_ACCESS>
      - name: AWS_SECRET_ACCESS_KEY
        value: <EKS_OWNER_SECRET>

The EKS API server certificate does not include localhost as a Subject Alternative Name (SAN). Because of this, the certificate authority data in the Kubernetes config file cannot be used to verify the connection when querying the Kubernetes API server via the tunneling proxy, so we need to skip TLS verification in this scenario.

  9. Set the Kubernetes configuration file.

export KUBECONFIG=kube.conf

Once configured, Check Cluster Connectivity to proceed further.
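
As a quick sanity check, with the port forwarding session from step 7 still running, kubectl should now reach the API server through the tunnel:

kubectl get svc
kubectl get nodes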
