In the past few months, I have replaced Docker with Podman on all my machines, and the transition has mostly been quite smooth. There are still some rough edges here and there, but the overall experience of using Podman has been great!
However, when trying to start a very simple container, one often runs into the following issue:
$ podman run -p80:80 nginx:latest
Error: error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]
The error message looks very cryptic, but the issue is quite simple: As a regular user, one is typically not allowed to bind ports < 1024. So by trying to bind port 80, you will get the error above.
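You can check the lowest port an unprivileged process may bind on your system; the kernel default is 1024:
$ sysctl net.ipv4.ip_unprivileged_port_start
net.ipv4.ip_unprivileged_port_start = 1024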
The fix is trivial: just use an unprivileged port (1024 or higher):
$ podman run -p8080:80 -d nginx:latest
22d2be2966e9cb77246a8b698f9024de89f4e6d1a0edfe44209bbe4fd27aa8b5
$ curl localhost:8080
[..]
Welcome to nginx!
[..]
If you really need to use a port number lower than 1024, there are multiple ways to configure that:
- Set net.ipv4.ip_unprivileged_port_start=80 or similar in your sysctl configuration (as sketched below)
- Add the CAP_NET_BIND_SERVICE capability to your process or user
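For example, a minimal sketch of the sysctl approach (the starting port 80 is just an example; pick the lowest port you actually need):
$ sudo sysctl net.ipv4.ip_unprivileged_port_start=80
$ echo "net.ipv4.ip_unprivileged_port_start=80" | sudo tee /etc/sysctl.d/99-unprivileged-ports.conf
The first command changes the setting at runtime; the second one (the file name is just a suggestion) makes it persistent across reboots.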
At their core, containers are just Linux processes that are namespaced. In practice, this means that many containers still run as regular processes on the same host machine. While namespaces and cgroups create very good boundaries between processes, the isolation is still not perfect.
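You can see this for yourself on the container host (a quick example, assuming the nginx container from above is still running): the container's processes show up directly in the host's process table.
$ ps -eo pid,user,args | grep '[n]ginx'
[..]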
So when you work with a lot of different namespaces in Kubernetes and only know the “oc project” command from OpenShift, you start to miss an easy way to switch namespaces in Kubernetes.
The official documentation to switch namespaces proposes something like this:
$ kubectl config set-context $(kubectl config current-context) --namespace=<insert-namespace-name-here>
Not something that I want to type regularly. First I tried to create a BASH alias, which did not work, so I looked around for BASH functions. I found that Jon Whitcraft proposed a nice BASH function in a GitHub issue. I lightly modified it and placed it in my own .bashrc file:
function kubectlns() {
  ctx=$(kubectl config current-context)
  # verify that the namespace exists, otherwise fall back to "default"
  ns=$(kubectl get namespace "$1" --no-headers --output=go-template='{{.metadata.name}}' 2>/dev/null)
  if [ -z "${ns}" ]; then
    echo "Namespace (${1}) not found, using default"
    ns="default"
  fi
  kubectl config set-context "${ctx}" --namespace="${ns}"
}
So to change your namespace, use something like this:
$ kubectlns simon
Context "kubernetes-admin@kubernetes" modified.
Nice and short.
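As an aside, newer versions of kubectl also accept --current, so a one-liner alternative (without the namespace existence check of the function above) would be:
$ kubectl config set-context --current --namespace=simon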
So when using NodeSelectors in OpenShift, you’ll also have to set labels on your nodes. You can find more information on labeling nodes in the OpenShift documentation. Here is how you can add or remove a label from a node or pod:
To add a label to a node or pod:
# oc label node node001.krenger.ch mylabel=myvalue
# oc label pod mypod-34-g0f7k mylabel=myvalue
To remove a label (in the example “mylabel”) from a node or pod:
# oc label node node001.krenger.ch mylabel-
# oc label pod mypod-34-g0f7k mylabel-
You can also use oc label -h to see more options for the oc label command.
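To verify that a label was applied (node and label names follow the examples above), you can filter by label or show all labels:
# oc get nodes -l mylabel=myvalue
# oc get nodes --show-labels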
So in any larger container orchestrator installation, be it Kubernetes or OpenShift, you will encounter pods that crash regularly and enter the “CrashLoopBackOff” status.
$ oc get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
[..]
my-project-1 helloworld-11-9w3ud 1/1 Running 0 7h
my-project-2 myapp-simon-43-7macd 0/1 CrashLoopBackOff 3774 9h
Note the container that has status “CrashLoopBackOff” and 3774 restarts.
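A common first step when you hit this (a quick example using the pod above): check the logs of the previously crashed container and the events in the pod description:
$ oc logs myapp-simon-43-7macd --previous -n my-project-2
$ oc describe pod myapp-simon-43-7macd -n my-project-2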
I recently started working with OpenShift and needed to get a list of all pods on the cluster. I quickly glanced at the documentation but could not find what I wanted. My colleagues quickly pointed me in the right direction:
oc get pod --all-namespaces -o wide
Here is the command with some example output of what to expect:
# oc get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
my-project my-pod-43-d9mo6 1/1 Running 0 1d 192.168.0.183 node3.krenger.local
yet-another-project another-pod-43-7g3r0 1/1 Running 0 2d 192.168.0.184 node4.krenger.local
[..]
If you just want to know which pods are on a certain node, use oc adm manage-node:
oc adm manage-node node3.krenger.local --list-pods
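On newer versions of oc and kubectl, a field selector achieves something similar (an alternative sketch, not from the original post):
oc get pods --all-namespaces -o wide --field-selector spec.nodeName=node3.krenger.local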