A Service is an abstraction for the app running inside the pods and is the entry point to reach it. If the app runs multiple replicas, the Service acts as a load balancer and distributes the traffic between them. I'm going to show you how it works under the hood.
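Under the hood a Service is just an object that selects pods by label and keeps track of their IPs. As a minimal sketch (the names here are hypothetical, not the chart we deploy below), it looks roughly like this:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical name, just for illustration
spec:
  selector:
    app: my-app         # traffic goes to every pod carrying this label
  ports:
  - port: 8080          # port the Service exposes
    targetPort: 8080    # port the containers listen on
EOF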

Let's clone my repo, which has the `helm charts` and `terraform scripts`.

$ git clone https://github.com/related2blog/gil.git
I use `eks` to spin up a Kubernetes cluster quickly.

$ cd gil/terraform/aws_eks/simple_cluster
$ terraform init

Initializing the backend...

Initializing provider plugins...
Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ terraform apply
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes
		.
		.
Apply complete! Resources: 18 added, 0 changed, 0 destroyed.
Note: `apply` will prompt you to type `yes` and hit enter after you verify all the resources Terraform is going to create in your AWS infrastructure.
Disclaimer: Don't run this in a production environment. The cluster takes around 20 minutes to come online completely. Once the EKS cluster is online, we have to update the kubeconfig with the newly created cluster so that it can be managed with `kubectl`.
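If you want to poll whether the control plane is up instead of watching the Terraform output, something like this should do it (same region and cluster name as in the Terraform config above):

$ aws eks describe-cluster --region ap-northeast-1 --name eks-c1 \
    --query "cluster.status"   # prints "ACTIVE" once the control plane is ready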

$ aws eks --region ap-northeast-1 update-kubeconfig --name eks-c1
Updated context arn:aws:eks:ap-northeast-1:548748595146:cluster/eks-c1 in /Users/manny/.kube/config
$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   15m
Cool, `kubectl` can now access the Kubernetes cluster. Let's run the `helm chart` to deploy the `busy_box` app.

$ cd gil/kubernates
$ helm install apple lb_ex_1/
NAME: apple
LAST DEPLOYED: Fri Dec 10 21:37:11 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
After a successful install you can see the Helm chart details, as shown above. Now make sure the pods are not crashing and the `load balancer` DNS name is showing.
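If you're curious what the chart actually created (a Deployment and a `LoadBalancer` Service, as the output below confirms), Helm can show you the rendered manifests:

$ helm get manifest apple   # prints the Deployment and Service YAML Helm applied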

$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/apple-db75469fb-k8mc9   1/1     Running   0          4m42s
pod/apple-db75469fb-tth2f   1/1     Running   0          4m42s

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                                    PORT(S)          AGE
service/apple        LoadBalancer   172.20.237.216   ad44317af518843aa92b909f6a277a9b-1681034099.ap-northeast-1.elb.amazonaws.com   8080:30000/TCP   4m45s
service/kubernetes   ClusterIP      172.20.0.1       <none>                                                                         443/TCP          26m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apple   2/2     2            2           4m45s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/apple-db75469fb   2         2         2       4m46s
Everything on the Kubernetes side is looking fine. I'm going to quickly access our app at the load balancer's endpoint to make sure the app is running.
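You can do the same check from the terminal; the hostname is the `EXTERNAL-IP` from the Service output above (it can take a minute or two for the ELB DNS name to start resolving):

$ curl -i http://ad44317af518843aa92b909f6a277a9b-1681034099.ap-northeast-1.elb.amazonaws.com:8080/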


Well, the container doesn't have an HTML file yet, so we get a `404 File Not Found`, which is a good sign and expected. What we're going to do is log in to each pod, create an `index.html` with a single word in it, e.g. App1 in one pod and App2 in the other, and let the `httpd` server serve it.

$ kubectl exec -it pod/apple-db75469fb-k8mc9 -- /bin/sh
/ # echo "<html><h1>App1</h1></html>" > index.html
/ # exit
Do the same with the second pod; the only difference is to change the content of `index.html` to `App2`.

Note: Since `/` is the `httpd` server's root directory on the `busy_box` image, you don't have to run another instance of the `httpd` command.
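To see what the Service is actually balancing across, peek at the Endpoints object it maintains; it should list the two pod IPs behind `service/apple`:

$ kubectl get endpoints apple      # the pod IP:port pairs the Service routes to
$ kubectl describe service apple   # shows the same list under "Endpoints"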

Finally, we completed the setup and it's time to test. What I'm going to do is refresh the browser; what we're trying to see is how the Kubernetes Service routes traffic. Each time I refresh the page the HTML content changes: it switches between App1 and App2.
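If you'd rather test from the terminal than the browser, a quick loop against the same endpoint should show the responses alternating between the two pods (hostname as above):

$ for i in $(seq 6); do curl -s http://ad44317af518843aa92b909f6a277a9b-1681034099.ap-northeast-1.elb.amazonaws.com:8080/; done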



I even applied a `helm upgrade` with 3 replicas.

$ helm upgrade apple lb_ex_1 --set replicas=3
Release "apple" has been upgraded. Happy Helming!
NAME: apple
LAST DEPLOYED: Fri Dec 10 22:20:13 2021
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
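Before refreshing again, it's worth confirming the third replica is up and registered with the Service. Also remember the new pod starts with an empty document root, so repeat the `index.html` step in it (e.g. App3) if you want it to return content instead of a 404:

$ kubectl get deployment apple   # should now report 3/3 ready
$ kubectl get endpoints apple    # the Service should now list three pod IPs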

Conclusion: It worked as expected. What I noticed is that I couldn't predict which pod I'd see on a refresh; every pod gets a chance, but not in any specific order. That matches how the Service works under the hood: by default kube-proxy programs iptables rules that pick a backend pod essentially at random rather than in strict round-robin.

Cleaning up the resources:

$ cd gil/kubernates
$ helm uninstall apple
release "apple" uninstalled

$ cd gil/terraform/aws_eks/simple_cluster
$ terraform destroy

Changes to Outputs:

You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
  			.
            .
            .
Destroy complete! Resources: 18 destroyed.