Inspecting container checkpoints with checkpointctl

One of the newer features in Kubernetes (1.30 and later) is the Kubelet Checkpoint API. This API allows users to create a stateful copy of a running container, which is often used for forensic analysis or for debugging.

In Kubernetes installations where this feature is enabled, a checkpoint can be created by calling the respective Kubelet API endpoint with curl or a similar tool. In the following example I am also using the Kubernetes API /proxy endpoint (the same can be done locally on the node via localhost:10250/checkpoint/...):

$ curl -k -X POST --header "Authorization: Bearer $TOKEN" "$KUBERNETES_API_URL/api/v1/nodes/$NODE_NAME/proxy/checkpoint/$NAMESPACE_NAME/$POD_NAME/$CONTAINER_NAME"
{"items":["/var/lib/kubelet/checkpoints/checkpoint-fedora-74d79dd7f4-csrmg_skrenger-container-2024-12-12T12:56:19Z.tar"]}

Excluding / ignoring sensors in node_exporter

I like to use the Prometheus node_exporter to get metrics about my hardware. However, some hardware (such as my X300M-STX mainboard) exposes sensors with rather nonsensical values:

[..]
node_hwmon_temp_celsius{chip="platform_nct6775_656",sensor="temp13"} 49.75
node_hwmon_temp_celsius{chip="platform_nct6775_656",sensor="temp15"} 3.892313987e+06
node_hwmon_temp_celsius{chip="platform_nct6775_656",sensor="temp16"} 3.892313987e+06
[..]

Until now, node_exporter only allowed ignoring such values by excluding complete chips / devices using --collector.hwmon.chip-exclude. In newer versions of node_exporter, however, you can exclude (or explicitly include) individual sensors using the following command line option:

--collector.hwmon.sensor-exclude="platform_nct6775_656;temp1[5,6]"

The argument is a regex that is matched against the chip name and the sensor name. Separate the chip name from the sensor name with a “;”.
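For example, a full invocation could look like this (a sketch, assuming a node_exporter release that already ships the sensor-exclude flag):

$ node_exporter --collector.hwmon.sensor-exclude="platform_nct6775_656;temp1[5,6]"

After restarting node_exporter, the excluded sensors should no longer show up when scraping the metrics endpoint (port 9100 by default):

$ curl -s localhost:9100/metrics | grep 'sensor="temp15"'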

10GbE in the DeskMini X300

As my little home server I have an ASRock DeskMini X300 with an AMD Ryzen 7 5700G (8 cores / 16 threads) and 64GB of memory. A nice, low-powered home server to play around with. Out of the box, the DeskMini comes with a single 1 Gbit network interface (a Realtek chipset). Since most of my devices are connected via WiFi anyway, this was more than enough until now. But then, modernity arrived in my part of the world and we now have 10 Gbit fiber internet, great!

10 Gbit internet sounds awesome; however, devices connected via WiFi 6 will only ever see a real-world maximum of around 700 Mbit/s. But maybe my little DeskMini could use all that 10 Gbit? Unfortunately, the DeskMini motherboard does not have any of the usual PCIe expansion slots, only SATA and M.2 slots. So I decided to try the “IOCREST M.2 to Single 10G Ethernet Network Adapter (IO-M2F107-GLAN)” (AliExpress link here) to see if that would work.


ERROR: release image arch amd64 does not match host arch arm64

Well, so I tried installing a new ARM-based OpenShift Container Platform cluster on AWS. To prepare, I created an install-config.yaml file, changed the controlPlane.architecture and compute.architecture fields to “arm64” and then launched the installer. That did not work; the installer still complained about the architecture:

$ ./openshift-install create cluster --dir=.
INFO Credentials loaded from the "default" profile in file "/home/simon/.aws/credentials" 
INFO Consuming Install Config from target directory 
INFO Creating infrastructure resources...         
INFO Waiting up to 20m0s (until 11:07AM) for the Kubernetes API at https://api.skrenger-arm.lab.example.com:6443... 
INFO Pulling VM console logs                      
INFO Pulling debug logs from the bootstrap machine 
ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.skrenger-arm.lab.example.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.64.25.143:6443: connect: connection refused 
ERROR Bootstrap failed to complete: Get "https://api.skrenger-arm.lab.example.com:6443/version": dial tcp 3.68.144.150:6443: connect: connection refused 
ERROR Failed waiting for Kubernetes API. This error usually happens when there is a problem on the bootstrap host that prevents creating a temporary control plane. 
ERROR The bootstrap machine failed to download the release image 
INFO Pulling quay.io/openshift-release-dev/ocp-release@sha256:9ffb17b909a4fdef5324ba45ec6dd282985dd49d25b933ea401873183ef20bf8... 
INFO cfce1ab124f59e93a0f67d7e85283d524ddfd73a27d0535319d69d1dce746488 
INFO ERROR: release image arch amd64 does not match host arch arm64 
INFO Bootstrap gather logs captured here "/home/simon/Downloads/arm/log-bundle-20221124110737.tar.gz" 
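One way to check the architecture of a release image up front is to inspect its image config, for example with skopeo and jq (a diagnostic sketch, not part of the original post; both tools need to be installed):

$ skopeo inspect --config docker://quay.io/openshift-release-dev/ocp-release@sha256:9ffb17b909a4fdef5324ba45ec6dd282985dd49d25b933ea401873183ef20bf8 | jq -r .architecture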

“import torch” fails on the NVIDIA Jetson Nano

NVIDIA provides the Linux4Tegra (L4T) distribution as an image for use with the NVIDIA Jetson Nano. However, once you upgrade the whole system, strange problems can pop up, one of which I have described here: NVIDIA Docker “permission denied: unknown.” on Jetson Nano.

Applying a popular solution described here, which adds a new repository to your L4T installation, results in interesting error messages such as the following when trying to run L4T-ML containers:

$ docker run --rm --runtime nvidia -it nvcr.io/nvidia/l4t-ml:r32.7.1-py3 python3 -c "import torch"

[..]
libcurand.so.10: cannot open shared object file: No such file or directory
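A quick way to check which CUDA libraries are actually visible inside the container is to query the dynamic linker cache (a diagnostic sketch using the same image as above):

$ docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-ml:r32.7.1-py3 sh -c 'ldconfig -p | grep libcurand'

If the command prints nothing, the container cannot see libcurand at all, which matches the import error above.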

NVIDIA Docker “permission denied: unknown.” on Jetson Nano

I recently bought an NVIDIA Jetson Nano Developer Kit to fiddle around with things like MicroShift or TensorFlow. The board is typically used with L4T (Linux for Tegra) based on Ubuntu 18.04. Fedora can also be installed, although not all drivers (for example for the GPU) are available yet. So after properly updating the system with the latest packages, I got the following error when starting a container using the nvidia runtime:

$ docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.6.1-py3
[..]
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: error adding seccomp filter rule for syscall clone3: permission denied: unknown.
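To confirm that the seccomp profile is what blocks the clone3 syscall, the container can be started once with seccomp disabled (purely as a diagnostic, not as a permanent fix):

$ docker run -it --rm --runtime nvidia --network host --security-opt seccomp=unconfined nvcr.io/nvidia/l4t-ml:r32.6.1-py3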

jq: Delete an element from an array

When working with JSON data, I typically use jq to mangle the data. I keep this post as a reference for myself on how to remove an element from a JSON list or array using jq.

Given we have the following array:

$ echo '{"hello": "world", "myarray": ["a", "b", "c"]}' | jq
{
  "hello": "world",
  "myarray": [
    "a",
    "b",
    "c"
  ]
}

To remove an element from the array, use the del function with the select function to select a single element:

jq 'del(.myarray[] | select(. == "b"))'

So when applying this to the above array, we can remove “b” from the array like so:

$ echo '{"hello": "world", "myarray": ["a", "b", "c"]}' | jq 'del(.myarray[] | select(. == "b"))'
{
  "hello": "world",
  "myarray": [
    "a",
    "c"
  ]
}
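If you know the position of the element rather than its value, del also works with an index (an additional example; for the array above it produces the same result):

$ echo '{"hello": "world", "myarray": ["a", "b", "c"]}' | jq 'del(.myarray[1])'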

Docker Desktop for Mac: SSH into the Docker VM

As you may know, Docker Desktop on macOS runs a Linux VM in the background to run containers (since containers are a Linux concept). However, that VM is well hidden from view and you typically only interact with it when you start Docker Desktop or when you need to clean up images in the VM itself.

Sometimes you’ll want a shell in that VM, but that turns out to be more complicated than I initially expected. There is, however, an easily accessible debug shell available.

  • First, open a terminal and use socat to open the debug shell socket to the VM using the following command:
$ socat -d -d ~/Library/Containers/com.docker.docker/Data/debug-shell.sock pty,rawer
  • socat will print a line like “PTY is /dev/ttys010”, to which you can then connect using screen in another terminal window:
$ screen /dev/ttys0xx

So that will look something like this:

$ socat -d -d ~/Library/Containers/com.docker.docker/Data/debug-shell.sock pty,rawer
2021/01/02 21:28:43 socat[23508] N opening connection to LEN=73 AF=1 "/Users/simon/Library/Containers/com.docker.docker/Data/debug-shell.sock"
2021/01/02 21:28:43 socat[23508] N successfully connected from local address LEN=16 AF=1 ""
2021/01/02 21:28:43 socat[23508] N successfully connected via
2021/01/02 21:28:43 socat[23508] N PTY is /dev/ttys010
2021/01/02 21:28:43 socat[23508] N starting data transfer loop with FDs [5,5] and [6,6]

$ screen /dev/ttys010
/ #
/ # uname -a
Linux docker-desktop 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 Linux

The VM is a very stripped down Alpine image with no package manager available, so you’ll have to make do with what is available.

Quit with CTRL-D, which will also close the socat socket. Thanks to Tatsushi for figuring it out in this GitHub Gist.
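Alternatively (a different approach, not from the original post), a shell in the VM can also be obtained by starting a privileged container and entering the namespaces of PID 1; this assumes an image that ships nsenter, such as debian:

$ docker run -it --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh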

OpenShift 4 – List installed Operators

In OpenShift Container Platform (OCP) 4, most of the functionality is controlled by Operators. To see the currently installed Operators and their status, use the following command:

$ oc get clusteroperators
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.4     True        False         False      12m
cloud-credential                           4.6.4     True        False         False      38m
cluster-autoscaler                         4.6.4     True        False         False      32m
config-operator                            4.6.4     True        False         False      33m
console                                    4.6.4     True        False         False      21m
csi-snapshot-controller                    4.6.4     True        False         False      27m
dns                                        4.6.4     True        False         False      31m
etcd                                       4.6.4     True        False         False      32m
image-registry                             4.6.4     True        False         False      25m
ingress                                    4.6.4     True        False         False      24m
insights                                   4.6.4     True        False         False      33m
kube-apiserver                             4.6.4     True        False         False      30m
kube-controller-manager                    4.6.4     True        False         False      31m
kube-scheduler                             4.6.4     True        False         False      31m
kube-storage-version-migrator              4.6.4     True        False         False      24m
machine-api                                4.6.4     True        False         False      27m
machine-approver                           4.6.4     True        False         False      32m
machine-config                             4.6.4     True        False         False      32m
marketplace                                4.6.4     True        False         False      32m
monitoring                                 4.6.4     True        False         False      23m
network                                    4.6.4     True        False         False      33m
node-tuning                                4.6.4     True        False         False      33m
openshift-apiserver                        4.6.4     True        False         False      27m
openshift-controller-manager               4.6.4     True        False         False      24m
openshift-samples                          4.6.4     True        False         False      26m
operator-lifecycle-manager                 4.6.4     True        False         False      32m
operator-lifecycle-manager-catalog         4.6.4     True        False         False      32m
operator-lifecycle-manager-packageserver   4.6.4     True        False         False      27m
service-ca                                 4.6.4     True        False         False      33m
storage                                    4.6.4     True        False         False      32m

You can find the description of the default Operators in the documentation.

This will only list the Red Hat Operators that are installed as part of the cluster. These are all managed by the ClusterVersionOperator, the top-level Operator of the cluster that controls all the others.

If you want to list all Operators that were installed via the Operator Lifecycle Manager (OLM), you can use the following command:

$ oc get subscriptions --all-namespaces
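To also see the version and current state of each OLM-managed Operator, the installed ClusterServiceVersions can be listed as well (an additional command, not part of the original post):

$ oc get clusterserviceversions --all-namespaces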

Red Hat Certified Architect

Getting training and exams done in 2020 has been challenging. After earning my RHCE in mid-February, I am proud to say that I achieved my Red Hat Certified Architect in Infrastructure certification less than nine months later.

To reach my RHCA, I took the following Red Hat exams. As you can see, it is OpenShift and Ansible all the way down:

  • EX180 Red Hat Certified Specialist in Containers and Kubernetes
  • EX280 Red Hat Certified Specialist in OpenShift Administration
  • EX288 Red Hat Certified Specialist in OpenShift Application Development
  • EX407 Red Hat Certified Specialist in Ansible Automation
  • EX447 Red Hat Certified Specialist in Ansible Best Practices

Of course, the journey does not end here as there are quite a few interesting topics still to learn!

Hello world

My name is Simon Krenger, and I am a Technical Account Manager (TAM) at Red Hat. I advise our customers on Kubernetes, Containers, Linux and Open Source.
