
VMware Tanzu Kubernetes Grid on AWS EC2

This blog post details the steps to set up and deploy VMware Tanzu Kubernetes Grid on a running AWS EC2 instance, following best practices and recommendations.


What is VMware Tanzu Kubernetes Grid?

VMware Tanzu Kubernetes Grid, also known as TKG, provides a consistent, upstream-compatible implementation of Kubernetes that can run on-premises on vSphere (VMware's cloud computing virtualization platform) or in public clouds such as AWS and Azure. TKG provides the registry, networking, monitoring, authorization, ingress control, and logging services that a production Kubernetes environment requires.


Below is a high-level representation of TKG availability across on-premises and public cloud platforms.


Tanzu Kubernetes Grid Architecture

Tanzu Kubernetes Grid allows you to run Kubernetes consistently across environments and consume it as a utility. TKG also provides services such as networking, authentication, ingress control, and logging. TKG has a multi-cluster architecture, as explained below.


Management Cluster

  1. Acts as the base and foundation of TKG; it manages and distributes the workloads

  2. This cluster is the first element that you deploy when you create a Tanzu Kubernetes Grid instance


Workload Clusters

  1. This is where your actual application workloads run

  2. All tasks in this cluster are managed by the management cluster


Standalone TKG Architecture

Installation of TKG in AWS EC2


TKG is installed using the TKG CLI on a bootstrap machine. The bootstrap machine is any machine, either your local workstation or a server, where the initial bootstrapping of the management cluster is performed. It acts as a temporary gateway while the management cluster is being created. After the deployment is complete, you can access the TKG clusters from the bootstrap machine or from any other host, provided the necessary configuration is available on that host.


Prerequisites for the Bootstrap Instance
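The exact prerequisite list depends on your TKG release; as a rough sketch, assuming a typical TKG 1.x setup, the bootstrap machine needs Docker (with the daemon running), kubectl, the tkg CLI binary, and the AWS CLI installed and on the PATH. A quick sanity check might look like this:

docker version          # Docker installed and the daemon running
kubectl version --client
tkg version             # the Tanzu Kubernetes Grid CLI binary
aws --version           # AWS CLI, used for credential setup and verification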



Initialize the Tanzu Kubernetes Grid CLI


Once the prerequisites are in place, we can initialize the TKG CLI, which is the first step in deploying TKG. Follow the steps below to initialize the TKG CLI.


Execute the command below to create the configuration files for the management cluster and workload clusters. This creates a ~/.tkg folder in your home directory that contains the TKG configuration files.

tkg get management-cluster 
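On TKG 1.x, running any tkg command for the first time populates this folder; a quick listing such as the one below (the exact contents vary by release) confirms that config.yaml has been created.

ls -la ~/.tkg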

Deploy Management Cluster to Amazon EC2 with the Installer Interface UI


We will use the TKG web UI to configure TKG on the EC2 instance. Once all the prerequisites are deployed and the TKG CLI is initialized, we can continue with the deployment and configuration of TKG.


You will execute the commands below on the bootstrap machine.


Initialize installer interface

tkg init --ui 

This command launches the installer interface with the default config file at $HOME/.tkg/config.yaml


Verify Initialization of UI


In this case the TKG installer is available on the local host and can be accessed at http://127.0.0.1:8080. If the bootstrap machine is a remote server, access the installer UI using that server's IP (or an SSH tunnel to it, as shown below). Click on AWS EC2, as shown in the screenshot below, to set up the management cluster on an AWS EC2 instance.
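If the installer is bound only to 127.0.0.1 on a remote bootstrap machine, an SSH local port forward is a simple way to reach it from your workstation; the user name and host below are placeholders for your environment.

ssh -L 8080:127.0.0.1:8080 <user>@<bootstrap-machine-ip>
# then browse to http://127.0.0.1:8080 on your workstation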



TKG Configuration Settings


Enter all the required inputs and click on “Deploy Management Cluster”.


The deployment takes a few minutes to complete, and its progress can be viewed on the screen, as shown below.



On completion of the process, you can verify the installation by checking the EC2 instances running in the AWS EC2 console. You will see EC2 instances running according to the configuration you provided earlier in the installer UI.
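If you prefer the command line over the console, an AWS CLI call can confirm the same thing. The Name-tag filter below assumes the default behaviour of prefixing instance names with the cluster name, so adjust it to match your deployment.

aws ec2 describe-instances --filters "Name=tag:Name,Values=<management-cluster-name>*" "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].[InstanceId,InstanceType,State.Name]" --output table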



Deploy Management Cluster to Amazon EC2 with the CLI


TKG can be installed on an EC2 instance using the CLI instead of the UI. Once all prerequisites are installed, follow the steps below to install TKG using the CLI.


Configure AWS Credentials


To install the TKG management cluster using the CLI, we need to configure authentication to AWS from the bootstrap machine. This can be done in multiple ways.


Configure AWS CLI | User Guide
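For example, you can run aws configure to store the access key, secret key, and default region under ~/.aws, or export them as environment variables for the current shell. TKG 1.x also reads AWS_* variables such as the ones below when creating the management cluster; treat the exact variable set as version dependent.

aws configure
# ...or export the credentials for the current shell:
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_REGION=<aws-region>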


Initialize installer

tkg get management-cluster 

If you have not already done so, running any tkg CLI command, such as the one above, creates the default config file at $HOME/.tkg/config.yaml


TKG Configuration Settings


Unlike the web UI flow, when you deploy the management cluster using the CLI, the configuration is taken from the config.yaml created in the previous steps. You can either update the values in $HOME/.tkg/config.yaml or set them as environment variables.
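As an illustration, hedged against your specific TKG 1.x release, variables such as the following can be exported before running tkg init; the same keys can instead be written into $HOME/.tkg/config.yaml.

export AWS_REGION=us-east-1
export AWS_NODE_AZ=us-east-1a
export AWS_SSH_KEY_NAME=<your-ec2-key-pair-name>
export CONTROL_PLANE_MACHINE_TYPE=t3.large
export NODE_MACHINE_TYPE=m5.large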


Deploy the Management Cluster


The management cluster can be deployed with the CLI command below. You can provide a custom name for the management cluster as an argument during the deployment.

tkg init --infrastructure aws --name <management-cluster-name> --plan dev 

The deployment takes a few minutes to complete, and its progress is shown as verbose output in the CLI.


On completion of the process, you can verify the installation by checking the EC2 instances running in the AWS EC2 console. You will see EC2 instances running according to the configuration provided in config.yaml


Deploy Tanzu Kubernetes (Workload) Clusters


After the management cluster is deployed to Amazon EC2, you can use the TKG CLI to deploy Tanzu Kubernetes clusters. In Tanzu Kubernetes Grid, Tanzu Kubernetes clusters are the Kubernetes clusters in which your application workloads run.


To deploy a Tanzu Kubernetes cluster to Amazon EC2, use tkg create cluster and specify the cluster name.


The following command deploys a Tanzu Kubernetes cluster that runs the default version of Kubernetes for this Tanzu Kubernetes Grid release.

tkg create cluster my-cluster --plan dev 

If you need to deploy the cluster with a custom number of worker nodes, you can specify --worker-machine-count when running the tkg create cluster command.

tkg create cluster my-dev-cluster --plan dev --worker-machine-count 3 

Deploy a Cluster with Cluster Autoscaler Enabled


The TKG Cluster Autoscaler provides the option to scale the number of worker nodes.


You have to use the --enable-cluster-options autoscaler option when creating the cluster. Tanzu Kubernetes Grid then creates a Cluster Autoscaler deployment in the management cluster. The Autoscaler configuration can be provided in the TKG configuration file, .tkg/config.yaml, or as environment variables, as sketched below. You cannot modify these values after you deploy the cluster.
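As a sketch based on the TKG 1.x documentation, the minimum and maximum worker counts are controlled through AUTOSCALER_MIN_SIZE_* and AUTOSCALER_MAX_SIZE_* variables; verify the exact names against your release.

export AUTOSCALER_MIN_SIZE_0=1
export AUTOSCALER_MAX_SIZE_0=5
# the prod plan spreads workers across three machine deployments, so the _1 and _2 variants may also apply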


The create cluster command with Autoscaler enabled would be as below. For example:

tkg create cluster example-cluster --plan prod --enable-cluster-options autoscaler 

If you need to create a TKG cluster with different instance sizes for the control plane and worker nodes, you will have to add additional arguments to the create cluster command. For example:

tkg create cluster my-prod-cluster --plan prod --controlplane-machine-count 5 --worker-machine-count 10 --controlplane-size m5.large --worker-size t3.xlarge 

Access Cluster


On successful deployment of the management cluster, you will be able to access it with the TKG CLI commands below. You can then connect to the clusters using kubectl commands.

View the list of management clusters

tkg get management-cluster 

Add credentials of a cluster to kubeconfig file

tkg get credentials my-cluster 

Connect to the cluster by using kubectl

kubectl config use-context my-cluster-admin@my-cluster 

View the status of the nodes in the cluster

kubectl get nodes 

View the status of the pods running in the cluster

kubectl get pods -A 
