fedora-minimal: Broken tzdata

For my own container images, I like to use the Fedora Container Images as the base, which means I usually build on the “fedora:32” or “fedora-minimal:32” image.

Yesterday, while playing around with an image based on “fedora-minimal” that runs nginx and php-fpm, I came across this curious error:

Invalid date.timezone value 'UTC', we selected the timezone 'UTC' for now

Dell U3818DW and Fedora 32

Due to COVID-19, like many others, I am currently working from home, and as a result I took the chance to update my home office. Working with a small laptop screen for months is not optimal, so I went the ultra-wide route and got myself a Dell U3818DW monitor.

Since I did not find a lot of information about running this monitor with Linux, here is a quick overview. To summarize, everything works out of the box.

Creating a sosreport on CoreOS

With OpenShift 4, Red Hat introduced Red Hat Enterprise Linux CoreOS. It is a very minimalist operating system, focused on running container workloads.

This new minimalism comes with some challenges. There are no more RPM packages and most of the tools we know and love are missing! Luckily, there is the Red Hat-supplied toolbox container, which contains all the necessary tools and is nicely integrated.

So to start the toolbox, use oc debug node/<nodename>. This will start a privileged container on the node you specify, mount the host file system on /host and drop you into a shell:

$ oc debug node/worker-0.lab.openshift.krenger.ch
Starting pod/worker-0labopenshiftkrengerch-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# toolbox
Container started successfully. To exit, type 'exit'.
sh-4.2#

Now we are running in the toolbox container on our CoreOS host with all the tools we know at our disposal, for example sosreport:

sh-4.2# sosreport

Running sosreport will generate an archive in /host/var/tmp/ inside the toolbox, which means it will be accessible in /var/tmp/ on the CoreOS host itself.
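
To copy the archive off the node, one option is to stream it through oc debug (a sketch; the archive name below is a placeholder, use the exact file name printed at the end of the sosreport run):

$ oc debug node/worker-0.lab.openshift.krenger.ch -- chroot /host ls /var/tmp/
$ oc debug node/worker-0.lab.openshift.krenger.ch -- \
    chroot /host cat /var/tmp/sosreport-example.tar.xz > sosreport-example.tar.xz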

OpenShift 4 Upgrade Paths

For OpenShift 4, the upgrade paths are kept in the cincinnati-graph-data repository as YAML files and then exposed via an API.

There is a Red Hat Solution describing how this data can be queried via api.openshift.com and how you can use this data in your automation:

$ curl -sH 'Accept:application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=fast-4.2&arch=amd64' | jq .

While this data is quite helpful for automation (the Solution also describes some useful queries), the raw output is not very pleasant to read. If you are looking for a graphical representation of that data, check out this wonderful website maintained by a Red Hat colleague, with data regenerated hourly: www.ocp-upgrade.net
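
As an example of such a query (a sketch; the exact expressions in the Solution may differ), the nodes of the returned graph carry the available versions, while the edges reference them by index. Listing all versions in a channel looks like this:

$ curl -sH 'Accept:application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=fast-4.2&arch=amd64' | jq -r '.nodes[].version'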

Missing X-Forwarded-For header in Spring Boot application

So here is another one from the trenches.

More than once, one of our OpenShift Container Platform customers has approached us and said something along the lines of: “Help, I cannot see the X-Forwarded-For header in my application, our OpenShift Router is probably configured incorrectly!”

In such cases, it is often a good idea to check what is really being forwarded to the Pods in the cluster. For this, I typically use my simonkrenger/echoenv container to print the headers received by the application. In many cases, it turns out that the affected application is a Spring Boot application and that the header is passed correctly to the Pod itself. But the Spring Boot application still does not show the header.
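
A quick way to perform this check is to deploy echoenv behind the same Router and call it from outside (a sketch; the hostname is an example, adjust it to your environment):

$ oc new-app simonkrenger/echoenv
$ oc expose service echoenv
$ curl http://echoenv-myproject.apps.example.com/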

We have observed that Spring Boot itself can consume the X-Forwarded-For header, so it never reaches the application code. In the application.properties of a Spring Boot application, the following setting controls this behaviour:

server.use-forward-headers: true

With this setting enabled, Spring Boot consumes the header and it is no longer available to the application. See also the relevant sections in the Spring documentation. Good to know.
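
Conversely, if the application itself needs to see the raw header, leaving the setting disabled keeps the header untouched (a sketch; note that newer Spring Boot versions replaced this property with server.forward-headers-strategy):

server.use-forward-headers: false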

Exploring the OpenShift etcd with etcdctl

Kubernetes uses etcd as the persistent store for API data. As etcd is a distributed key-value store, we can also use command line tools to query this store. The examples in this post are for OpenShift 3.x.

Apart from just using get, you can also perform the following actions on certain keys:

  • put to write to a key – unless you know what you are doing, don’t touch the Kubernetes data in etcd, as this will manifest in very strange Kubernetes behaviour.
  • del to delete a key – also, this may break your Kubernetes cluster by introducing inconsistencies.
  • watch to keep a watch on an object. This is very helpful to track changes on a certain object.

The get action is probably the most helpful functionality for in-depth API debugging directly within etcd.
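
On an OpenShift 3.x master, such a get query could look like this (a sketch; the endpoint, certificate paths and key are examples, adjust them to your cluster, and note that the values are stored in a binary Protobuf format):

$ ETCDCTL_API=3 etcdctl --cert /etc/etcd/peer.crt --key /etc/etcd/peer.key \
    --cacert /etc/etcd/ca.crt --endpoints https://master-0.example.com:2379 \
    get /kubernetes.io/namespaces/default

To get an overview of the stored keys without printing the values, the same command can be combined with --prefix and --keys-only:

$ ETCDCTL_API=3 etcdctl --cert /etc/etcd/peer.crt --key /etc/etcd/peer.key \
    --cacert /etc/etcd/ca.crt --endpoints https://master-0.example.com:2379 \
    get /kubernetes.io/ --prefix --keys-only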

Java Service Wrapper 3.5.43 for Windows x64

Update November 2020: As per the announcement by Tanuki Software, Windows and Linux for Itanium versions were last provided for version 3.5.43. There will not be any further updates.

Tanuki has released a new version of their wrapper today (this has to be some kind of speed record for getting the new release out here!). In this post, I provide the compiled version 3.5.43 of the Java Service Wrapper for Windows x64.

Red Hat Certified Engineer

Working for Red Hat certainly has its perks. One of them is that I have access to all the content from Red Hat University and can take Red Hat exams for free. With these perks come, of course, some expectations. Customers expect a Red Hat TAM to be knowledgeable about a wide range of Red Hat products, even if they are not directly related to the function of the TAM.

The most common certifications for System Administrators, and also for new TAMs, are the Red Hat Certified System Administrator (RHCSA) and the Red Hat Certified Engineer (RHCE). So after passing my RHCSA exam in December 2019, I passed the EX294V8 exam in mid-February to become a Red Hat Certified Engineer (RHCE). The next step is obviously to become a Red Hat Certified Architect (RHCA), in my case focussed on Cloud technologies such as OpenShift and Containerisation.

To prepare for the RHCE, I used the Red Hat University online course (RH294) as well as Tomas Nevar's Ansible Sample Exam. As others have already noted, the RHCE for RHEL 8 is a pure Ansible exam, so knowing your Ansible playbooks inside and out will help you with the exam. The above course and sample exam are great preparation for the exam itself.

vim settings for YAML files

For editing YAML, be it for OpenShift / Kubernetes or Ansible, having your editor set up right helps to avoid common mistakes. So here is the minimal configuration in my ~/.vimrc that makes working with YAML files easier. I am sure there are even more plugins and settings available, but this small set of commands works fine for me:

" Two-space indentation (tabstop, softtabstop, shiftwidth), with spaces instead of tabs
set ts=2
set sts=2
set sw=2
set expandtab

" Syntax highlighting and filetype-specific indentation rules
syntax on
filetype indent plugin on

" Show the cursor position (line and column) in the status line
set ruler
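
If you prefer to scope these settings to YAML files only (a sketch, not part of the configuration above), an autocmd can do that:

autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab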

Hello world

My name is Simon Krenger, I am a Technical Account Manager (TAM) at Red Hat. I advise our customers on using Kubernetes, Containers, Linux and Open Source.
