Introduction#
This GitHub repository shows the essential components of an Amazon EKS cluster:
- Essential Networking
- Essential Security
- Launch an EKS Cluster
- Deploy the First App
Architecture#
Essential Networking
- public and private endpoint access for the cluster API server
- the control plane is hosted in an AWS-managed account and VPC
- the control plane scales automatically and runs at least 2 API server instances and 3 etcd instances
Essential Security
- Cluster role so that the control plane can call other AWS services on your behalf
- Node role for the worker nodes and, by default, for all applications running on them
- Use both the node role and service accounts (EC2 launch type) as a security best practice
- Use both the node role and a pod execution role (Fargate launch type) as a security best practice
- Three managed policies must be attached to the node role: AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, and AmazonEKS_CNI_Policy
- The AmazonEKSClusterPolicy managed policy must be attached to the cluster role; a quick way to verify the attached policies is shown after this list
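After deployment you can double-check the attached managed policies with the AWS CLI. The role names below assume the naming used in the stacks later in this note (RoleForEksCluster-&lt;clusterName&gt; and RoleForEksNode-&lt;clusterName&gt;).

```bash
# list managed policies attached to the cluster role (expect AmazonEKSClusterPolicy)
aws iam list-attached-role-policies --role-name RoleForEksCluster-<clusterName>

# list managed policies attached to the node role (expect AmazonEKSWorkerNodePolicy,
# AmazonEC2ContainerRegistryReadOnly and AmazonEKS_CNI_Policy)
aws iam list-attached-role-policies --role-name RoleForEksNode-<clusterName>
```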
Network Stack#
create a VPC
```ts
const vpc = new aws_ec2.Vpc(this, `${props.name}-Vpc`, {
  vpcName: props.name,
  maxAzs: 3,
  enableDnsHostnames: true,
  enableDnsSupport: true,
  ipAddresses: aws_ec2.IpAddresses.cidr(props.cidr),
  // use the AWS NAT gateway service, not a NAT instance
  natGatewayProvider: aws_ec2.NatProvider.gateway(),
  // can be fewer than the number of AZs, default is 1 NAT gateway per zone
  natGateways: 1,
  // which public subnets host the NAT gateway
  // natGatewaySubnets: {
  //   subnetType: aws_ec2.SubnetType.PRIVATE_WITH_EGRESS,
  // },
  subnetConfiguration: [
    {
      // cdk adds an internet gateway and route tables
      name: 'PublicSubnet',
      cidrMask: 24,
      subnetType: aws_ec2.SubnetType.PUBLIC
    },
    {
      // cdk adds the nat gateway and route tables
      name: 'PrivateSubnetNat',
      cidrMask: 24,
      subnetType: aws_ec2.SubnetType.PRIVATE_WITH_EGRESS
    }
  ]
})
```
create a security group for the worker nodes of the EKS cluster
```ts
const eksSecurityGroup = new aws_ec2.SecurityGroup(this, 'EksSecurityGroup', {
  securityGroupName: 'EksSecurityGroup',
  vpc: vpc
})

eksSecurityGroup.addIngressRule(
  eksSecurityGroup,
  aws_ec2.Port.allIcmp(),
  'self reference security group'
)
```
add an STS VPC interface endpoint so that workloads in the private subnets can call AWS STS without leaving the VPC
```ts
vpc.addInterfaceEndpoint('STSVpcEndpoint', {
  service: aws_ec2.InterfaceVpcEndpointAwsService.STS,
  open: true,
  subnets: {
    subnetType: aws_ec2.SubnetType.PRIVATE_WITH_EGRESS
  },
  securityGroups: [eksSecurityGroup]
})
```
Cluster Stack#
create an EKS cluster using CDK level 1 constructs (equivalent to a CloudFormation template)
select the subnets where the worker nodes will be placed
```ts
const subnets: string[] = props.vpc.publicSubnets.map(subnet =>
  subnet.subnetId.toString()
)
```
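Public subnets keep this demo simple. For a production setup you would more likely place the worker nodes in the private subnets created by the network stack; a minimal variant, assuming the same props.vpc, could look like this:

```ts
// place worker nodes in the private subnets (egress via the NAT gateway)
const privateSubnets: string[] = props.vpc.privateSubnets.map(subnet =>
  subnet.subnetId.toString()
)
```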
create a role for the EKS cluster
```ts
const role = new aws_iam.Role(this, `RoleForEksCluster-${props.clusterName}`, {
  roleName: `RoleForEksCluster-${props.clusterName}`,
  assumedBy: new aws_iam.ServicePrincipal('eks.amazonaws.com')
})

role.addManagedPolicy(
  aws_iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSClusterPolicy')
)
```
create an EKS cluster
```ts
const cluster = new aws_eks.CfnCluster(
  this,
  `EksCluster-${props.clusterName}`,
  {
    name: props.clusterName,
    version: '1.25',
    resourcesVpcConfig: {
      // at least two subnets in different availability zones
      // at least 6 ip addresses, 16 recommended
      subnetIds: subnets,
      // endpointPrivateAccess: false,
      // endpointPublicAccess: true,
      // cidr blocks allowed to access the cluster
      // default 0.0.0.0/0
      publicAccessCidrs: ['0.0.0.0/0'],
      // eks will create a security group to allow
      // communication between the control and data plane
      // nodegroup double check
      securityGroupIds: [props.eksSecurityGroup.securityGroupId]
    },
    kubernetesNetworkConfig: {
      // do not overlap with the VPC cidr
      // serviceIpv4Cidr: "",
    },
    // role for eks to call aws services on your behalf
    roleArn: role.roleArn,
    logging: {
      // by default control plane logs are not exported to CloudWatch
      clusterLogging: {
        enabledTypes: [
          {
            // api | audit | authenticator | controllerManager | scheduler
            type: 'api'
          },
          {
            type: 'controllerManager'
          },
          {
            type: 'scheduler'
          },
          {
            type: 'authenticator'
          },
          {
            type: 'audit'
          }
        ]
      }
    }
  }
)
```
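Because all five log types are enabled above, the control plane logs should show up in CloudWatch under the log group /aws/eks/&lt;cluster-name&gt;/cluster. One way to confirm after deployment, assuming the AWS CLI is configured for the same account and region:

```bash
# check that the control plane log group exists
aws logs describe-log-groups --log-group-name-prefix /aws/eks/<cluster-name>/cluster
```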
create a role for the worker nodes
```ts
const nodeRole = new aws_iam.Role(this, `RoleForEksNode-${props.clusterName}`, {
  roleName: `RoleForEksNode-${props.clusterName}`,
  assumedBy: new aws_iam.ServicePrincipal('ec2.amazonaws.com')
})

nodeRole.addManagedPolicy(
  aws_iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSWorkerNodePolicy')
)

nodeRole.addManagedPolicy(
  aws_iam.ManagedPolicy.fromAwsManagedPolicyName(
    'AmazonEC2ContainerRegistryReadOnly'
  )
)

nodeRole.addManagedPolicy(
  aws_iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKS_CNI_Policy')
)
```
add an AWS managed node group to the cluster
```ts
const nodegroup = new aws_eks.CfnNodegroup(this, 'AWSManagedNodeGroupDemo', {
  nodegroupName: 'AWSManagedNodeGroupDemo',
  // kubernetes version, default taken from the cluster
  // version: "",
  nodeRole: nodeRole.roleArn,
  clusterName: cluster.name!,
  subnets: subnets,
  // eks ami release version, default latest
  // releaseVersion: ,
  capacityType: 'ON_DEMAND',
  // default t3.medium
  instanceTypes: ['t2.medium'],
  diskSize: 50,
  // ssh remote access
  remoteAccess: {
    ec2SshKey: 'eks-node-ssh'
  },
  // scaling configuration
  scalingConfig: {
    desiredSize: 2,
    maxSize: 5,
    minSize: 1
  },
  // update configuration
  updateConfig: {
    maxUnavailable: 1
    // maxUnavailablePercentage: 30,
  },
  // label configuration
  labels: {
    environment: 'dev'
  }
})
```
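Because the node group only receives the cluster name as a plain string, CloudFormation does not automatically know that the cluster must exist before the node group is created. It can help to declare the dependency explicitly; in recent CDK v2 versions the method is addDependency (older versions use the deprecated addDependsOn):

```ts
// make sure the cluster is created before the managed node group
nodegroup.addDependency(cluster)
```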
Fargate Profile#
create a pod execution role
```ts
const podRole = new aws_iam.Role(
  this,
  `RoleForFargatePod-${props.clusterName}`,
  {
    roleName: `RoleForFargatePod-${props.clusterName}`,
    assumedBy: new aws_iam.ServicePrincipal("eks-fargate-pods.amazonaws.com"),
  }
);

podRole.addManagedPolicy(
  aws_iam.ManagedPolicy.fromAwsManagedPolicyName(
    "AmazonEKSFargatePodExecutionRolePolicy"
  )
);
```
create a Fargate profile
```ts
const appFargateProfile = new aws_eks.CfnFargateProfile(
  this,
  "FirstFargateProfileDemo1",
  {
    clusterName: cluster.name!,
    podExecutionRoleArn: podRole.roleArn,
    selectors: [
      {
        namespace: "demo",
        labels: [
          {
            key: "environment",
            value: "dev",
          },
        ],
      },
    ],
    fargateProfileName: "demo",
    // default: all private subnets in the vpc
    subnets: subnets,
    tags: [
      {
        key: "name",
        value: "test",
      },
    ],
  }
);
```
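To see the profile in action, a workload has to match both the namespace and the labels in the selector. A minimal, hypothetical deployment that would be scheduled onto Fargate could look like the following; the demo namespace must exist first and the image is only a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
        # must match the Fargate profile selector
        environment: dev
    spec:
      containers:
        - name: demo-app
          image: nginx
          ports:
            - containerPort: 80
```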
Node Selector#
When an EKS cluster consists of an EC2 node group and a Fargate profile, in some cases we want to select specific nodes to run some pods. To do that, we can use node labels, a node selector, or affinity. For example, since a Fargate profile does not support DaemonSets, we can select only EC2 nodes to launch a DaemonSet as follows
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values:
                - fargate
```
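Conversely, a simple nodeSelector can pin a pod to the EC2 managed node group by matching a node label; the environment: dev label below comes from the labels block of the node group above (a minimal sketch):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: on-ec2-nodes
spec:
  nodeSelector:
    # label applied by the managed node group above
    environment: dev
  containers:
    - name: app
      image: nginx
```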
show node labels
```bash
kubectl get nodes --show-labels
```
Troubleshooting#
Since the cluster is created by CloudFormation, we need to update the kubeconfig before we can run kubectl from our terminal. Find the CloudFormation execution role in the AWS console, then replace the role ARN below with that execution role.
```bash
aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
```
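If you prefer the CLI to the console, the execution role of a stack (when one was set at deploy time) is also returned by describe-stacks; the stack name below is a placeholder for the cluster stack:

```bash
# print the CloudFormation execution role of the stack, if one is set
aws cloudformation describe-stacks \
  --stack-name <cluster-stack-name> \
  --query "Stacks[0].RoleARN" \
  --output text
```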
Make sure that the role your terminal is assuming has a trust relationship with the CF execution role, for example:
{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Principal": {"Service": "cloudformation.amazonaws.com"},"Action": "sts:AssumeRole"},{"Effect": "Allow","Principal": {"AWS": "arn:aws:iam::$ACCOUNT:role/TeamRole"},"Action": "sts:AssumeRole"}]}
Since the EKS cluster is created by a CloudFormation execution role, we need to take note of the following:
- Update kube config with the role before running kubectl
- Ensure that your terminal can assume the CF execution role (trust policy)
- Assume the CF execution role and configure the terminal with its credentials before running eksctl, as sketched below
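A minimal way to assume the CF execution role from the terminal and export the temporary credentials before running eksctl; the role ARN and session name are placeholders and jq is assumed to be installed:

```bash
# assume the CloudFormation execution role
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::112233445566:role/yyyyy \
  --role-session-name eks-admin \
  --query "Credentials" --output json)

# export the temporary credentials for the current shell
export AWS_ACCESS_KEY_ID=$(echo $CREDS | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo $CREDS | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo $CREDS | jq -r .SessionToken)
```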