Pod Network in Kubernetes
===
###### tags: `Namespaces` `Pod Network`
# Introduction
Containers were originally designed to solve a single, narrowly defined problem, e.g. one microservice.
But once a service is split across multiple containers, how are those containers supposed to work together?
# What is a Kubernetes Pod?
1. Q: So what exactly is a K8s Pod?
   A: It is the smallest unit that K8s deploys and manages. One or more containers (Docker by default in K8s) are packed into a Pod and deployed and managed together.
2. Q: How tightly coupled?
   A: The containers in a pod represent processes that would have run on the same server in a pre-container world.
3. Q: What role does a Pod play?
   A: A Pod behaves like a small single server: each container can access the other containers in the pod as different ports on localhost.
# Why does Kubernetes use a Pod as the smallest deployable unit, and not a single container?
* Q: Deploying a single Docker container looks simple enough, so why add another layer called a Pod?
  A: To manage a container, K8s needs additional information (such as a restart policy or a liveness probe). Rather than bolting these properties onto the container abstraction, K8s introduced a new entity, THE POD, which wraps one or more containers together with these extra properties into a single manageable unit.
# Why does Kubernetes allow more than one container in a Pod?
1. Containers in a Pod run on a "logical host": they use the same network namespace (same IP address and port space) and the same IPC namespace, and they can also share volumes.

2. Because of these shared properties, the containers can communicate with each other efficiently and data locality is ensured, and the Pod lets us manage several tightly coupled application containers as a single unit.
3. Q: If an app needs several containers running on the same host, why not just build one big container that bundles everything it needs?
   A: That would violate the "one process per container" principle. This matters because with multiple processes in one container, logs from different processes get mixed together and are hard to untangle, and managing the lifecycle of each process becomes difficult.
# Use Cases for Multi-Container Pods
The primary purpose of a multi-container Pod is to support co-located, co-managed helper processes for a primary application.
General patterns for using helper processes in Pods:

- Sidecar containers:
    - A helper container that "helps" the main container.
    - Examples include log or data change watchers, monitoring adapters, and so on.
    - It can also act as a file/data loader for the main container.
    - A minimal sketch follows this list.
- Proxies, bridges, and adapters:
    - Connect the main container to the outside world. For example, an Apache HTTP server or nginx can act as a reverse proxy in front of the main container.
    - Forward requests from the main container to the outside world, so the main container only ever has to connect to localhost.
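As an illustration of the sidecar pattern, here is a minimal sketch (the Pod name, image choices, volume name, and log path are assumptions for this example, not taken from the articles above): an nginx container writes its access log into a shared emptyDir volume, and a busybox sidecar tails that log.
```yaml=
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo            # hypothetical name for this sketch
spec:
  volumes:
    - name: logs                # shared scratch volume for the log files
      emptyDir: {}
  containers:
    - name: web                 # main container: serves traffic and writes its logs
      image: nginx
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-tailer          # sidecar: streams the access log to its own stdout
      image: busybox
      command: ["/bin/sh", "-c"]
      args:
        - touch /logs/access.log && tail -F /logs/access.log
      volumeMounts:
        - name: logs
          mountPath: /logs
```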
# Communication between containers in a Pod
Running multiple containers in a single Pod makes it relatively straightforward for them to communicate with each other. This can be done in a few ways:
## Shared volumes in a Kubernetes Pod
* In K8s, a shared Kubernetes Volume is a simple and efficient way for containers in the same Pod to share data.
* For most cases, it is sufficient to use a directory on the host that is shared with all containers within a Pod.
* A Kubernetes Volume lets data survive container restarts, but its lifetime is the same as the Pod's: if the Pod is deleted for any reason, even if an identical replacement is created, the shared Volume is also destroyed and created anew.
* Suppose we deploy a Pod with two containers, 1st and 2nd:
```yaml=
apiVersion: v1
kind: Pod
metadata:
  name: mc1
spec:
  volumes:
    - name: html
      emptyDir: {}
  containers:
    - name: 1st
      image: nginx
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
    - name: 2nd
      image: debian
      volumeMounts:
        - name: html
          mountPath: /html
      command: ["/bin/sh", "-c"]
      args:
        - while true; do
            date >> /html/index.html;
            sleep 1;
          done
```

* In the YAML above we define a Volume named html. Its type is emptyDir: the volume is created empty when the Pod is first assigned to a node, and its lifetime is as long as the Pod's.
* The 1st container runs an nginx server and mounts the shared volume at /usr/share/nginx/html.
* The 2nd container runs debian and mounts the shared volume at /html.
* Every second, the 2nd container appends the current date to /html/index.html (date >> /html/index.html). When a user makes an HTTP request to the Pod, nginx serves that file:
```zsh=
$ kubectl exec mc1 -c 1st -- /bin/cat /usr/share/nginx/html/index.html
...
Fri Aug 25 18:36:06 UTC 2017
$ kubectl exec mc1 -c 2nd -- /bin/cat /html/index.html
...
Fri Aug 25 18:36:06 UTC 2017
Fri Aug 25 18:36:07 UTC 2017
```
## Inter-process communications (IPC)
Containers in a Pod also share the same IPC namespace, which means they can communicate with each other via IPC mechanisms such as System V semaphores or POSIX shared memory.
```yaml=
apiVersion: v1
kind: Pod
metadata:
  name: mc2
spec:
  containers:
    - name: producer
      image: allingeek/ch6_ipc
      command: ["./ipc", "-producer"]
    - name: consumer
      image: allingeek/ch6_ipc
      command: ["./ipc", "-consumer"]
  restartPolicy: Never
```
* The producer creates a standard Linux message queue, writes a number of random messages, and then writes an exit message. The consumer opens the same message queue and reads messages until it receives the exit message.

```zsh=
$ kubectl logs mc2 -c producer
...
Produced: f4
Produced: 1d
Produced: 9e
Produced: 27
$ kubectl logs mc2 -c consumer
...
Consumed: f4
Consumed: 1d
Consumed: 9e
Consumed: 27
Consumed: done
```
```zsh=
$ kubectl get pods --show-all -w
NAME      READY     STATUS              RESTARTS   AGE
mc2       0/2       Pending             0          0s
mc2       0/2       ContainerCreating   0          0s
mc2       0/2       Completed           0          29s
```
* The drawback shows up here: the Pod has to decide how its containers get restarted. With restartPolicy: Never (as above), the Pod simply ends up Completed once both containers exit; with restartPolicy: Always, the exited containers would be restarted.
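For illustration only (this Pod and its command are assumptions, not part of the example above), the sketch below shows where this restart behaviour is configured; restartPolicy is set per Pod, not per container:
```yaml=
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo            # hypothetical name for this sketch
spec:
  containers:
    - name: short-lived
      image: busybox
      command: ["/bin/sh", "-c", "echo done; exit 0"]
  # restartPolicy is Pod-wide and applies to every container in the Pod:
  #   Never     - exited containers stay down; the Pod ends up Completed/Error (as above)
  #   OnFailure - only containers that exit with a non-zero code are restarted
  #   Always    - every exited container is restarted (the default)
  restartPolicy: OnFailure
```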
## Container dependencies and startup order
Currently, when a Pod starts up, all of its containers are started in parallel; there is no way to declare that one container must be up before another.
[Kubernetes Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) were later added as a mechanism for containers that start first (and sequentially) before the app containers. The trade-off is that the app has to wait: in the example above, the app containers could not start until a producer-style init container had finished creating the message queue. A minimal sketch is shown below.
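A minimal init-container sketch (the Pod name and the service name my-db are hypothetical assumptions for this example): the init container must run to completion before the app container starts, which is how a startup order can be enforced.
```yaml=
apiVersion: v1
kind: Pod
metadata:
  name: init-demo               # hypothetical name for this sketch
spec:
  initContainers:
    - name: wait-for-db         # runs to completion before any app container starts
      image: busybox
      command: ["/bin/sh", "-c"]
      args:
        - until nslookup my-db; do
            echo waiting for my-db;
            sleep 2;
          done
  containers:
    - name: app                 # only started once the init container has exited successfully
      image: nginx
```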
## Inter-container network communication
* Containers in a Pod communicate with each other via "localhost", because they share the same network namespace.
* To the containers, the reachable hostname is the Pod's name. Because the containers share the same IP address and port space, each container must listen on a different port to accept connections, and the containers in a Pod have to coordinate which ports they use.
* Suppose we run nginx as a reverse proxy that forwards requests to a web app running in a second container.
1. Create a ConfigMap with the nginx configuration file. HTTP requests will be forwarded to port 5000 on localhost:
```yaml=
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf
data:
  nginx.conf: |-
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        sendfile on;
        keepalive_timeout 65;
        upstream webapp {
            server 127.0.0.1:5000;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://webapp;
                proxy_redirect off;
            }
        }
    }
```
2. Create a Pod with the nginx proxy and the sample web app as two containers. Note that only port 80 is declared for the nginx container; port 5000 will not be accessible from outside the Pod:
```yaml=
apiVersion: v1
kind: Pod
metadata:
  name: mc3
  labels:
    app: mc3
spec:
  containers:
    - name: webapp
      image: training/webapp
    - name: nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
      volumeMounts:
        - name: nginx-proxy-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
  volumes:
    - name: nginx-proxy-config
      configMap:
        name: mc3-nginx-conf
```
3. Expose the Pod externally with a NodePort service:
```zsh=
$ kubectl expose pod mc3 --type=NodePort --port=80
service "mc3" exposed
```
4. Check which node port the service uses to forward traffic to the Pod:
```zsh=
$ kubectl describe service mc3
...
NodePort: <unset> 31418/TCP
...
```

* As you can see, a request arriving at the nginx container on port 80 is proxied on to the web app on port 5000.
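To check the whole chain end to end, something like the following should work (assuming the node's IP is stored in $NODE_IP; 31418 is the node port reported by kubectl describe above):
```zsh=
# NodePort 31418 -> nginx on :80 inside the Pod -> proxied to the webapp on localhost:5000
$ curl http://$NODE_IP:31418/
```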
### Exposing multiple containers in a Pod
The example above shows how one container in a Pod can reach the other containers in the same Pod over the network. A common approach is to have multiple containers listening on different ports of the Pod, with each of those ports exposed; to do that, you need a single Service that exposes multiple ports, as in the sketch below.
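A sketch of such a multi-port Service, assuming (hypothetically) that a second container in the mc3 Pod listens on port 8080; the Service name, the port names, and the 8080 backend are assumptions for illustration, not part of the example above:
```yaml=
apiVersion: v1
kind: Service
metadata:
  name: mc3-multi               # hypothetical service name
spec:
  type: NodePort
  selector:
    app: mc3                    # matches the Pod label used earlier
  ports:
    - name: http                # reaches the nginx container
      port: 80
      targetPort: 80
    - name: admin               # hypothetical second container listening on 8080
      port: 8080
      targetPort: 8080
```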
:::warning
* In practice, a single container per Pod is by far the most common setup; most of the time there is no need to create extra trouble for ourselves with multi-container Pods.
* Q: When should multiple containers go into a single Pod?
  A: 1. When the containers have the exact same lifecycle, or when the containers must run on the same node.
     2. The most common scenario is a helper process that needs to be located and managed on the same node as the primary container.
:::
# Reference:
[Multi-container pods and container communication in Kubernetes](https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/)
[Multi-Container Pod Design Patterns in Kubernetes](https://matthewpalmer.net/kubernetes-app-developer/articles/multi-container-pod-design-patterns.html)