How to make a Docker container read-only

There are many ways to harden a Docker container; one of them is to make the container layer read-only.

This might only be a marginal improvement to security: first of all, your application should not run as root or with special privileges (e.g. CAP_DAC_OVERRIDE), so there is limited risk that an attacker exploiting a vulnerability in your application can modify sensitive files. However, if you install your application within a Dockerfile as the application user (e.g. using bundle install), making the base layer read-only might protect it from unwanted modification.

I also like the idea of an immutable base layer and of clearly identifying the data that is written and whether it should be persisted or not. I relate that to security as well, because the better you know the behaviour of an application, the better you can adapt a confinement for it.

Making the base layer read-only is somewhat challenging. Setting a container image to read-only is simple: there is a --read-only flag to the docker run command. But identifying which data is written by the containerised application can be a challenge. One task is thus to identify all written data and decide whether it should be persisted in a volume or not. In the latter case, one could then use a tmpfs volume or a local volume (in a Swarm cluster).
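As a teaser, here is a minimal sketch of what this looks like in practice; the image and the tmpfs paths are assumptions for the official nginx image, adapt them to whatever your application actually writes:

$ docker run --rm --read-only --tmpfs /var/run --tmpfs /var/cache/nginx -p 8080:80 nginx:alpine

Any write outside those tmpfs paths will then fail with a “read-only file system” error.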

We are going to use Docker's layering approach to identify the written data. How to check the difference varies depending on the storage backend, and there are too many of them for me to cover each case; I might complete the article in the future, but today I will show how to do it with the BTRFS and Overlay2 backends.

What I am going to explain is based on the current implementation of the Docker storage backends as described in their respective guides. Each guide explains how the backend works, and by extracting that information I could find a way to compare the layers.
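Before diving into the backend-specific details, two quick commands already give a good first look; this is a sketch, assuming an Overlay2 host and a container named ntpd (any container name works), while docker diff works regardless of the backend:

$ docker diff ntpd
$ docker inspect --format '{{ .GraphDriver.Data.UpperDir }}' ntpd

The first command lists the files added (A), changed (C) or deleted (D) in the container layer; the second prints the directory where Overlay2 stores that writable layer on the host.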


Revisiting getting docker-compose on Raspberry Pi (ARM) the easy way


Two years ago I published a post on building docker-compose on an ARM machine. Nowadays, you can find docker-compose on PyPI. However, if you intend to run docker-compose on a platform without the required Python dependencies, you might still be interested in my guide, which generates an ELF binary executable.

My previous guide worked well until release 1.22.0, after which the Dockerfile.armhf (which was merged) was upgraded to match the changes for the x86-64 platform but broke my build instructions. The build seems to work and generates an executable, but it fails to run due to missing dependencies:

+ dist/docker-compose-Linux-armv7l version
[446] Failed to execute script docker-compose
Traceback (most recent call last):
  File "bin/docker-compose", line 5, in <module>
    from compose.cli.main import main
  File "/code/.tox/py36/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
    exec(bytecode, module.__dict__)
  File "compose/cli/main.py", line 13, in <module>
    from distutils.spawn import find_executable
ModuleNotFoundError: No module named 'distutils'

I have not found the root cause of the problem as I am not familiar with tox, but it looks like a configuration problem of that tool. So I decided to simply use Python 3's built-in venv module instead.

As in my previous guide, you need to clone the repository and choose a branch. You can take the release branch or a specific version branch (e.g. bump-1.23.2).

$ git clone https://github.com/docker/compose.git
$ cd compose
$ git checkout bump-1.23.2

The next two shell commands modify the original build script to use venv and to add the missing dependencies (which are correctly installed in the tox environment but would be missing in ours).

$ sed -i -e 's:^VENV=/code/.tox/py36:VENV=/code/.venv; python3 -m venv $VENV:' script/build/linux-entrypoint
$ sed -i -e '/requirements-build.txt/ i $VENV/bin/pip install -q -r requirements.txt' script/build/linux-entrypoint

Now you can follow the exact same steps as in the previous guide. In summary:

$ docker build -t docker-compose:armhf -f Dockerfile.armhf .
$ docker run --rm --entrypoint="script/build/linux-entrypoint" -v $(pwd)/dist:/code/dist -v $(pwd)/.git:/code/.git "docker-compose:armhf"
$ sudo cp dist/docker-compose-Linux-armv7l /usr/local/bin/docker-compose
$ sudo chown root:root /usr/local/bin/docker-compose
$ sudo chmod 0755 /usr/local/bin/docker-compose
$ docker-compose version
docker-compose version 1.23.2, build 1110ad01
docker-py version: 3.6.0
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018

Containers, volumes and file permissions

Permission Denied for Container’s Volume

There is a subject which seems to be completely abstruse to many users of containers on Linux: sharing data between a host and a container, or between containers.

I do think that solving this problem is not much different from solving it without containers on Linux and Unix. From my perspective, there is not much difference in managing file permissions with or without containers; the big change for me is the introduction of namespaces, especially user namespaces.

So what is exactly the problem? And where does it come from?

The problem is that when running a process within a container, that process runs with a certain user and group ID (respectively UID and GID), and those IDs might differ from the ones of the caller (the user creating and running the container); this might not be obvious. It is especially true with container technologies like Docker, which by default runs the process within the container as root (unless overridden in the Dockerfile or on the command line), while any user with write access to the Docker socket can create such a container. So by default you have a discrepancy between the UID and GID of the caller – probably a standard user – and those of a random Docker container.
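To make the discrepancy concrete, here is a minimal illustration (the alpine image and the ./data path are only examples): a file created from inside a default container ends up owned by root on the host, even though you never typed sudo.

$ mkdir data
$ docker run --rm -v "$(pwd)/data:/data" alpine touch /data/hello
$ ls -l data/hello

With the default Docker configuration (no user namespaces), ls will show the file owned by root:root, and your own user cannot modify it without sudo.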

In traditional Unix / Linux, this is “normal” or “expected” behaviour. You usually cannot run a process as root from your normal user unless you use sudo or a setuid program, so normally you do not have the problem that a program you launch has a different UID/GID than your own user. And when you use a program with sudo you understand that this might become a problem: if you use sudo to run `tcpdump -w net-trace.pcap`, you know the file net-trace.pcap will be owned by root and that you might not be able to access or delete it. The same reflex needs to apply to running a container.

When you have done Unix/Linux development for most of your career – and have adopted the principle of least privilege … I still know a few people who only use the root account ;-) – you are used to creating applications that run in the background (as a service) under a dedicated user, and for which you need to handle the permissions of the data the application needs to use. So introducing containers (without user namespaces) should not bring any surprise here; it is part of the expectations. But you will see later that you can still be bitten by some edge cases of the container implementation.

So, let us see how to fix this problem of user/group IDs and file permissions. Note that the solution is similar whether you use containers or not, and it applies to all container implementations (e.g. LXC, Docker, etc.). Then we will see how to handle file permissions when using user namespaces (hint: the principles are the same, but it requires a few extra steps to understand what the effective UID/GID will be). Finally, in the case of Docker, we will see a few edge cases where you can still be caught off guard with respect to file permissions and volume declarations inside a Dockerfile.
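As a teaser for what follows, the most common fix is simply to make the container process run with your own UID/GID when you share data with the host; a minimal sketch (alpine and the paths are again only examples):

$ docker run --rm -u "$(id -u):$(id -g)" -v "$(pwd)/data:/data" alpine touch /data/mine
$ ls -l data/mine

This time the file belongs to your own user, because the process inside the container ran with your numeric UID/GID.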


Ubuntu Core – Atomicity rough on the edges

View of a Harbour Terminal before Containers existed
Before the invention of containers, docker was a much more manual job. And that is what I am looking for with my Raspberry Pi.

I have recently been trying to play with Ubuntu Core on my second Raspberry Pi 2. I like the concept of a minimalist host with atomic updates and the possibility to run my services inside containers. In addition, snap looks like an interesting package system (and more than that). But this setup does not fit my use cases: it is perfect for repeatable testing and a safe deployment environment, whereas I need a system that is stable and safe, but which I can tinker with (modify a specific configuration or the kernel in order to optimise its use or detect new devices) and which can grow organically (depending on my free time).

Introduction to Ubuntu Core

So Ubuntu Core (or Project Atomic) does appeal to me, but they are too restrictive for my use cases. I need more freedom. Anyway, for those of you who might be interested in these projects, here is a quick review of these systems with respect to day-to-day use, as I am not going to explain the philosophy of Core/Atomic nor the snap/atomic technologies.

Both Ubuntu Core (armhf variant) and CentOS Atomic Host (x86_64 variant) felt a bit rough, and despite carrying the names Ubuntu and CentOS, I had to reconsider how I am used to administering such boxes. A basic concept is that you install a core (or minimalist) OS which you pretty much cannot change (most parts are read-only), but which should have everything to run containers. For Atomic Host, there is no way to install additional packages; you need to add every other bit of software as a container. The big difference with Ubuntu Core is that you have snaps, which allow you to extend the core OS without having to install and configure containers manually. A snap package – once installed – feels more or less as if you had just installed a deb package. But there is a big difference: snaps are like little containers or sandboxed process(es), neatly packaged so they feel like a normal command while a lot is going on behind the scenes. Here is an example: I installed `htop` on my Raspberry Pi 2, and I now get two distinct results depending on whether I run it as a standard user or with super user rights, a behaviour uncommon on a standard Linux installation:

Snap htop – Standard User
Snap htop – Super User (via sudo): complete list of processes

As is visible in the case of htop, by default it shows only the processes owned by my own user. This is not bad, but it is different from a traditional Linux distribution, so you need to get into the habit of running `sudo htop` to see all processes – bearing in mind that you are then running htop as the super user, and htop allows you to kill processes.
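For completeness, installing a snap such as htop is a one-liner, and snap list shows what is installed (standard snap commands, nothing specific to my setup):

$ sudo snap install htop
$ snap list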

However, on the Raspberry Pi (and probably other ARMv7, aka armhf, platforms) there is still a very limited number of snap packages available. This is probably changing fast, but if you run Ubuntu Core 16 today you cannot install many snaps and need to rely on Docker containers to add extra applications to the base system (more on that later, including its current limitations).

Basic system configuration absent

Let's get back to the beginning, and I mean by that right after the installation of Ubuntu Core, upon first boot. I still had my keyboard and screen attached to it (and you need them in order to log in with your Launchpad SSO login to create the first user). After the first boot, I was not offered the possibility to change the keyboard layout (I do not have a US layout), but at least on Ubuntu Core one can change it afterwards by editing the file `/etc/default/keyboard`; this is not possible on Atomic Host. Anyway, not such a big issue as I am using my Raspberry Pi mostly (if not always) via SSH, so it does not matter much. Staying on the localisation topic, neither system allows changing the locale (language, regional preferences, etc.). On Atomic Host it is set to US with the “peculiar” time and date format ;-) no offense! Similar fate on Ubuntu Core, but it uses the C locale (which sadly for me also uses the US date/time format). Although I would stick to the English language anyway, I do not like the regional choice and I do not like the lack of choice here.
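For illustration, switching to a non-US layout means editing /etc/default/keyboard to something like the following (the values below are an example for a German layout) and rebooting:

XKBMODEL="pc105"
XKBLAYOUT="de"
XKBVARIANT=""
XKBOPTIONS=""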

So let's move back to an SSH connection; at least the keyboard layout is no longer an issue. But here comes the next one ;-) whenever I use SSH, the first thing I do is launch a new tmux session or attach to an existing one. I am open to other solutions, so if the host only has screen or byobu, I am also fine. However, Ubuntu Core for ARMv7 does not ship with any of them by default, nor is there any existing snap package for them. So there is no simple way (short of compiling from source and creating a snap package) to have a safe, persistent SSH session.

Docker

Normally, thanks to container technology, it is possible to install more or less everything I would need, and with some clever aliases it would feel much like a snap installation. So I just wanted to create a Dockerfile which references Alpine Linux and installs tmux, and build the container using Docker. Not possible with Ubuntu Core and the Docker snap. According to the Docker snap information, I should be creating a folder under `$HOME/apps/docker/1.6.xxx` (even though Docker 1.11.2 was installed, weird), but it does NOT work. The AppArmor profile installed with the Docker snap denies it. I have tried many different folders to no avail. This is not a misconfiguration of Ubuntu Core but rather one of the Docker snap packager, but the end result is that, as a user, Ubuntu Core on armhf is barely usable (too many essential tools are missing, there is no snap for them and you cannot easily use Docker).
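For reference, the Dockerfile I had in mind was as trivial as the sketch below (untested on Ubuntu Core for the reasons above; the armhf Alpine image name is an assumption):

FROM armhf/alpine:latest
RUN apk add --no-cache tmux
ENTRYPOINT ["tmux"]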

Other rough edges with Docker are that your default user does not belong to the docker group, so you need to sudo every single Docker command. There is also no way to add yourself to the docker group (as the group is defined in the standard file `/etc/group`, which is on a read-only filesystem). Your default user is also not a real traditional user, it is an “extra user” (I did not know this subtlety before), so a bunch of things are not compatible with it: for example, you cannot launch a container to run as yourself (docker run -u $(whoami) ...) because you are not a standard user (no entry in `/etc/passwd`) but an extra user (see `/var/lib/extrausers/passwd`); at least it is possible to do it by specifying your user ID (e.g. 1000)! This is all logical, but “rough”. A corollary to the previous statements is that you cannot run Docker in user namespaces mode (unprivileged Docker containers), as you cannot add yourself to the docker group.
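As a side note, the numeric-ID workaround looks like this (the alpine image is just an example; 1000 being the user ID mentioned above), whereas passing the user name fails for the reasons just described:

$ sudo docker run --rm -u 1000 alpine id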

At least the Docker snap includes Docker Compose, which is cool. On Atomic Host, Docker is installed but Docker Compose is not. As the filesystem is also read-only, there is no good way to install Docker Compose in a central place; it has to be run within a container. At least on Atomic Host it is very easy (as on a standard Linux OS) to create Dockerfiles and build them. So these limitations (not having Docker Compose and some other tools) can be overcome with some effort.
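One workaround on Atomic Host is to run Docker Compose itself from a container by mounting the Docker socket and the project directory; a sketch along these lines (the docker/compose image and tag are assumptions, pick whatever matches your Compose version):

$ docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD:$PWD" -w "$PWD" \
    docker/compose:1.23.2 up -d

Wrapped in a small shell alias or script, this behaves almost like a locally installed docker-compose.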

Final Try

My next step was to download some scripts and tools. However, both curl and wget are absent from the base installation. Even git is not available, either as a snap or in the core installation. There is no way to build a Dockerfile, so in the end, to have tmux, curl, git, etc., I had to create an LXD container running Ubuntu just to get a normal OS (or, from my laptop, I could download the scripts and push them to Ubuntu Core via scp). But then I simply prefer to run Ubuntu Server or Raspbian on my Raspberry Pi.

I have lost too much time on this already. Ubuntu Core seems nice and I really like the principles and approach, but it is definitely missing too many basic tools, and it is too rough for my taste and available free time.

Conclusion

I will install Ubuntu Server or CentOS 7 for armhf on my second Raspberry Pi 2. Ubuntu Core is perhaps still a bit too young (maybe this is specific to the armhf platform) and requires too much work for my needs. At least this experience made me understand that I do not want a locked-down, safe box; I need flexibility.

Picture credits: Public Domain. The picture is part of the State Library of South Australia collection (see original B 4433 photo).

Getting docker-compose on Raspberry Pi (ARM) the easy way (updated)

I really like docker-compose: it has a simple language (YAML) to describe how to build and run a container, so you do not have to remember (or count on your shell history for) the long `docker build ...` and `docker run ...` commands (and many others).

However, docker-compose is not (yet) available for Raspberry Pi or any other ARM architecture.

(Update 2017-03-02: but we are getting there. A first series of patches adding support has been merged into the master branch but is not yet released. However, it does not look like official releases of compose for ARM will be provided in the near future, but at least building them will become even easier.)

(Update 2019-03-24: it is now easily available using pip. Doing pip install docker-compose works on ARM.)

So I forked the official Docker Compose repository and made a few minimalistic changes in order to get a build of docker-compose for the Raspberry Pi. I have created a Pull Request in the hope that it gets accepted and that ARMv7 builds become official. But while waiting for the review process to be triggered, here is how to do it yourself.

Download the Project

As a pre-requisite you need to have `git` (sudo apt-get install git) and `docker` (see my previous article) already installed on your platform.

Then get a copy of the project on your local Raspberry Pi.

$ git clone https://github.com/docker/compose.git
$ cd compose
$ git checkout release

Now apply the following patch (Update 2017-03-02: soon, once the master branch is merged into the release one, these extra steps won’t be necessary):

$ cp -i Dockerfile Dockerfile.armhf
$ sed -i -e 's/^FROM debian\:/FROM armhf\/debian:/' Dockerfile.armhf
$ sed -i -e 's/x86_64/armel/g' Dockerfile.armhf

Build and install docker-compose

Building the docker-compose binary is a rather simple procedure. First you need to build the Docker image which will be used to set up the build environment. Second and last, you need to run the container which will build docker-compose. At the end, the binary will be available under the `dist` subfolder.

$ docker build -t docker-compose:armhf -f Dockerfile.armhf .
$ docker run --rm --entrypoint="script/build/linux-entrypoint" -v $(pwd)/dist:/code/dist -v $(pwd)/.git:/code/.git "docker-compose:armhf"

After several minutes you will get a binary file which you can then install on your system:

$ ls -l dist/
total 6816
-rwxr-xr-x 1 pi pi 6976500 Feb  8 11:41 docker-compose-Linux-armv7l
$ sudo cp dist/docker-compose-Linux-armv7l /usr/local/bin/docker-compose
$ sudo chown root:root /usr/local/bin/docker-compose
$ sudo chmod 0755 /usr/local/bin/docker-compose
$ docker-compose version
docker-compose version 1.11.0-rc1, build daed6db
docker-py version: 2.0.2
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t  3 May 2016

Goodies: Install docker-compose bash autocompletion

Docker Compose provides autocompletion for bash. Installing it is as simple as doing:

$ sudo curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose

Conclusion

If you wonder why I stated “the easy way” in my title: well, if you want to master Docker, you ought to consider the above easy :-)

Of course, in our field of work nothing is as simple as a mouse click, especially when you need to create something that is not (until today) provided out of the box. If you want it really easy, simply installing it with pip (pip install docker-compose) should work.

A Time Server in a Container – Part 1

Picture credit (GPL): https://commons.wikimedia.org/wiki/File:Sablier-temps-icone-5376-128.png

To learn Docker in details I decided to use it to run a local time server using ntpd from the ntp.org project.

I have used an incremental approach where I started with an easy setup and then increased the challenges either to improve the time server or to better understand Docker.

Getting Started

So why ntpd and not <put your favourite time server here>?

I know ntpd from having configured it many times over the past 10 years. So I wanted to start with it to quickly get time synchronisation working.

Which platform?

I have a Raspberry Pi (abbreviated RPi from now on) which serves as DHCP server and as a local forwarding and caching DNS resolver for my LAN. It had early support for Docker back in October when I started my experiment, which added a bit of spice to it.

Getting Docker on a Raspberry Pi

There are many ways to get Docker running on your RPi. You could get the Hypriot OS Linux distribution, which has everything set up nicely for running Docker containers. You can compile Docker on your platform of choice (which I had to do to squash a few early adopters' bugs). You can install a tarball containing the binaries for your platform. But if you are running Raspbian Jessie – like I was – you can today just include Docker's own repository and install a binary version using apt-get. Make sure your kernel is recent (Docker requires at least 3.10, but a properly updated Raspbian should be running 4.4 at the time of writing).

You can follow Docker's installation guide for Debian, but by default it will install the x86_64 Docker repository. As hinted in the documentation, for other architectures you need to use the [arch=...] clause. In addition, Docker provides a specific variant of the package for Raspbian. So for Raspbian Jessie, use the following entry in your docker.list file:

deb [arch=armhf] https://apt.dockerproject.org/repo raspbian-jessie main

Continue to follow the Docker guide, including how to set up non-root access to a specific user.

Creating a Docker image for ntpd

Create a specific folder somewhere on your Raspberry Pi storage (e.g. mkdir -p ~/projects/docker/ntpd) and create a file Dockerfile.armhf (I use the extension .armhf so I can have distinct Dockerfiles for each platform I use) with the following content:

FROM armhf/ubuntu:16.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends ntp \
    && apt-get clean -q \
    && rm -Rf /var/lib/apt/lists/*
ENTRYPOINT ["/usr/sbin/ntpd"]

Note: this file, as well as newer versions of it and instructions to build and run the container, is available in my GitHub ntp container project. In the rest of this blog post, I am only going to detail how I approached running the container and solved problems.

The first line states that the base image for the container will be Ubuntu 16.04 (the specific variant for the RPi architecture). The second to fifth lines are commands executed on top of the base image: they update the package list, install the latest version of ntpd with the fewest dependencies, and remove any cached or temporary files, so we minimise the size of the image on disk. Finally, the last line is the command that will be executed by Docker when instructed to run the container. I have used ENTRYPOINT because it allows me – while experimenting – to change the list of parameters I send to ntpd when I create and run the container. This gives me flexibility when testing different parameters.
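Two things are worth noting about ENTRYPOINT here (both standard Docker behaviour): anything placed after the image name on docker run is appended to the ENTRYPOINT as arguments, and the ENTRYPOINT can be overridden entirely, which is handy for poking around inside the image:

$ docker run --rm article/armhf/ntpd:20170106.1 -g -n
$ docker run --rm -it --entrypoint /bin/bash article/armhf/ntpd:20170106.1

The first command runs /usr/sbin/ntpd -g -n; the second drops you into a shell in the container instead of starting ntpd.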

I picked Ubuntu as the base image because it has sane defaults for the ntpd configuration file: it uses the NTP Pool project and the configuration is secured by default. Note that other base images could also have worked and also have sane defaults. I could have used the Alpine Linux base image (it is really compact and lightweight, which would have been perfect for a small platform like a Raspberry Pi), but it does not provide the ntpd package from the NTP project which I wanted to start with. It only supports OpenNTPD (which does not support leap seconds, so it was a no-go for me) and Chrony (which could be a good alternative, but as I mentioned before, I wanted to first experiment with Docker, not learn yet another NTP application).

Let's build the container image (I named the image “article/armhf/ntpd” and tagged it with the current date, but name it as you want):

$ docker build -f Dockerfile.armhf -t article/armhf/ntpd:20170106.1 .

Running the NTP container

We are now going to spawn an instance of the container image in the foreground to see what is going on and to notice any errors:

$ docker run --rm -it article/armhf/ntpd:20170106.1 -n
 6 Jan 14:03:30 ntpd[1]: ntpd 4.2.8p4@1.3265-o Wed Oct  5 12:38:30 UTC 2016 (1): Starting
 6 Jan 14:03:30 ntpd[1]: Command line: /usr/sbin/ntpd -n
 6 Jan 14:03:30 ntpd[1]: Cannot set RLIMIT_MEMLOCK: Operation not permitted
 6 Jan 14:03:30 ntpd[1]: proto: precision = 1.198 usec (-20)
 6 Jan 14:03:30 ntpd[1]: Listen and drop on 1 v4wildcard 0.0.0.0:123
 6 Jan 14:03:30 ntpd[1]: Listen normally on 2 lo 127.0.0.1:123
 6 Jan 14:03:30 ntpd[1]: Listen normally on 3 eth0 172.17.0.2:123
 6 Jan 14:03:30 ntpd[1]: Listen normally on 4 lo [::1]:123
 6 Jan 14:03:30 ntpd[1]: Listening on routing socket on fd #21 for interface updates
 6 Jan 14:03:30 ntpd[1]: start_kern_loop: ntp_loopfilter.c line 1126: ntp_adjtime: Operation not permitted
 6 Jan 14:03:30 ntpd[1]: set_freq: ntp_loopfilter.c line 1089: ntp_adjtime: Operation not permitted
 6 Jan 14:03:31 ntpd[1]: Soliciting pool server 193.200.241.66
 6 Jan 14:03:32 ntpd[1]: Soliciting pool server 90.187.7.5
 6 Jan 14:03:32 ntpd[1]: adj_systime: Operation not permitted
 6 Jan 14:03:32 ntpd[1]: Soliciting pool server 129.70.132.37
 6 Jan 14:03:33 ntpd[1]: Soliciting pool server 85.25.210.112
 6 Jan 14:03:33 ntpd[1]: Soliciting pool server 31.25.153.77
 6 Jan 14:03:34 ntpd[1]: Soliciting pool server 178.63.9.212
 6 Jan 14:03:34 ntpd[1]: Soliciting pool server 193.22.253.13
^C 6 Jan 14:03:40 ntpd[1]: ntpd exiting on signal 2 (Interrupt)

We have a few errors (Operation not permitted): one is about RLIMIT_MEMLOCK (resetting the limit on the maximum locked-in-memory address space; ntpd uses it to prevent its main process from being swapped out, in order to limit jitter), and the other ones are about ntp_adjtime and adj_systime (both used by ntpd to interface with the kernel and adjust the system time).

By default ntpd runs as the root user, so it should have enough privileges for these operations. In addition, even though Docker supports running unprivileged containers (i.e. the root user inside the container is mapped to a normal user on the host, based on user namespaces, see namespaces(7)), this is not the default Docker configuration, so my root user inside the container is the root user outside the container. (And even if Docker were configured to use user namespaces, they are not compiled into the Raspberry Pi Foundation kernel, so it is at the moment not possible to use that feature on a Raspberry Pi without some extra effort; I will detail this in a future article.)

In order to implement basic privilege limitations for containers, Docker can use various security features of the Linux kernel to prevent the container from accessing certain sensitive kernel calls. The most notable ones are Linux capabilities (since Docker 1.2), Linux SECCOMP filtering (since Docker 1.10, but better use Docker 1.12+ as previous default SECCOMP profiles conflicted with Docker's capabilities management; in addition, the Raspbian kernel, version 4.4 as of writing, has no built-in support for SECCOMP filtering, so this functionality is not usable on the Raspberry Pi unless you compile your own kernel) and Linux MAC (like SELinux or AppArmor, but none of them is available on the Raspberry Pi without recompiling your own kernel and installing the user-space tools). So Docker on the Raspberry Pi can only use Linux capabilities as a security feature.
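If you want to check for yourself what a given kernel supports, you can grep its build configuration; on Raspbian this requires loading the configs module first (a quick check, assuming the module is available):

$ sudo modprobe configs
$ zcat /proc/config.gz | grep -E 'CONFIG_SECCOMP|CONFIG_USER_NS|CONFIG_SECURITY_APPARMOR'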

By default Docker provides each container with a reasonable set of capabilities (see the Docker documentation on capabilities). If you check both documents (the Linux capabilities manual and the Docker runtime privileges doc), you will find out that our container is basically missing the CAP_SYS_RESOURCE and CAP_SYS_TIME capabilities. Now there are two ways to add them. Most online guides would tell you that when you run into “operation not permitted” errors, just add the --privileged flag to the docker run command line and it will be fixed; that is the first way and it is the wrong approach (sure it works, but it is like deactivating SELinux because you are not allowed to perform an operation). The other way is to add the missing capabilities to the container, which can be done using the --cap-add flag. That is what I am going to show now:

$ docker run --rm -it --cap-add SYS_RESOURCE --cap-add SYS_TIME article/armhf/ntpd:20170106.1 -n
 7 Jan 11:19:24 ntpd[1]: ntpd 4.2.8p4@1.3265-o Wed Oct  5 12:38:30 UTC 2016 (1): Starting
 7 Jan 11:19:24 ntpd[1]: Command line: /usr/sbin/ntpd -n
 7 Jan 11:19:24 ntpd[1]: proto: precision = 1.823 usec (-19)
 7 Jan 11:19:24 ntpd[1]: Listen and drop on 0 v6wildcard [::]:123
 7 Jan 11:19:24 ntpd[1]: Listen and drop on 1 v4wildcard 0.0.0.0:123
 7 Jan 11:19:24 ntpd[1]: Listen normally on 2 lo 127.0.0.1:123
 7 Jan 11:19:24 ntpd[1]: Listen normally on 3 eth0 172.17.0.2:123
 7 Jan 11:19:24 ntpd[1]: Listen normally on 4 lo [::1]:123
 7 Jan 11:19:24 ntpd[1]: Listening on routing socket on fd #21 for interface updates
 7 Jan 11:19:25 ntpd[1]: Soliciting pool server 213.95.21.43
 7 Jan 11:19:26 ntpd[1]: Soliciting pool server 134.119.8.130
 7 Jan 11:19:26 ntpd[1]: Soliciting pool server 46.4.32.135
 7 Jan 11:19:27 ntpd[1]: Soliciting pool server 213.136.86.203
 7 Jan 11:19:27 ntpd[1]: Soliciting pool server 178.63.9.212
 7 Jan 11:19:27 ntpd[1]: Listen normally on 7 eth0 [fe80::42:acff:fe11:2%6]:123
 7 Jan 11:19:27 ntpd[1]: new interface(s) found: waking up resolver
 7 Jan 11:19:27 ntpd[1]: Soliciting pool server 46.165.212.205
 7 Jan 11:19:28 ntpd[1]: Soliciting pool server 109.239.58.247
 7 Jan 11:19:28 ntpd[1]: Soliciting pool server 131.188.3.221
 7 Jan 11:19:28 ntpd[1]: Soliciting pool server 78.46.189.152
 7 Jan 11:19:28 ntpd[1]: Soliciting pool server 195.50.171.101
^C  7 Jan 11:22:40 ntpd[1]: ntpd exiting on signal 2 (Interrupt)

To make sure this is working, first ensure that no other time synchronisation service is running: $ sudo systemctl stop systemd-timesyncd ntp.

Then change the system time by shifting it by 5 seconds: $ sudo date -s "5 seconds".

Check that your system clock is now off by 5 seconds:

$ ntpdate -q time1.google.com
server 216.239.35.0, stratum 2, offset -5.002284, delay 0.14117
18 Jan 11:27:55 ntpdate[5217]: step time server 216.239.35.0 offset -5.002284 sec

Start the container in the background this time: $ docker run --name ntpd --detach --restart always --cap-add SYS_RESOURCE --cap-add SYS_TIME article/armhf/ntpd:20170106.1 -g -n

Wait a few seconds and query the network time again using the above ntpdate command. The offset should now be below 5 seconds and probably close to 0.
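You can also ask the containerised ntpd directly which servers it selected; ntpq ships with the ntp package we installed in the image:

$ docker exec ntpd ntpq -p

The peer marked with an asterisk is the one currently used for synchronisation.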

You now have an NTP service running inside a container and synchronising your system clock using Internet time servers from the NTP Pool project. If you want to stop the experiment here and restore your system, you need to stop the container ($ docker stop ntpd), block it from restarting at next boot ($ docker update --restart=no ntpd) and perhaps reboot so that the default time synchronisation service is reactivated.

But if you want to keep experimenting, or let the container do its job of time synchronisation, you should make sure to deactivate any other time synchronisation mechanism to avoid conflicts:

$ sudo timedatectl set-ntp false 
$ sudo systemctl disable ntp chronyd
$ sudo systemctl mask systemd-timesyncd
$ sudo systemctl stop systemd-timesyncd ntp chronyd

Foreword about Time and NTP on a Raspberry Pi

The Raspberry Pi (at the time of writing this applies to all models) has no real-time clock (RTC) module on its board. An RTC is a small oscillator (e.g. a quartz crystal, like in your electronic wristwatch) plus some electronics to keep track of time, and a battery (or equivalent). RTCs help a system keep track of time while it is off and in the early phases of boot. On a standard desktop or laptop computer the motherboard has an RTC. Many oscillators are not particularly accurate (low quality), with unstable frequencies which can depend on external factors such as room temperature. It is possible to add an RTC module to the Raspberry Pi (I will have a detailed article on that soon), but without an RTC you need a network connection for the RPi to know the current time.

On Linux, the kernel manages two clocks: the hardware clock (which is based on the RTC) and the system clock (the clock used by the system to query/set the time, which ticks using a clocksource such as a CPU/SoC timer, kernel jiffies, etc.). On boot, the current time is read from the hardware clock and used to initialise the system clock. The system clock is then driven by the ticks from the selected clocksource, starting from the time read at boot from the hardware clock. Usually, on shutdown, many Linux distributions are configured to store the system clock back into the hardware clock.
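You can look at both clocks yourself; on a stock Raspberry Pi the hardware clock query simply fails since there is no RTC (standard util-linux/systemd tools):

$ date                 # system clock
$ sudo hwclock -r      # hardware clock (RTC); fails on a stock Raspberry Pi
$ timedatectl status   # summary of both, plus the NTP synchronisation state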

The Raspberry Pi may have no hardware clock, but it has a clocksource (the current clocksource on a Raspberry Pi 2; other models may differ):

[    0.000000] arm_arch_timer: Architected cp15 timer(s) running at 19.20MHz (phys).
[    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x46d987e47, max_idle_ns: 440795202767 ns
[    0.000010] sched_clock: 56 bits at 19MHz, resolution 52ns, wraps every 4398046511078ns
[    0.000032] Switching to timer-based delay loop, resolution 52ns

So if the time is set at boot, the OS can keep track of the time, even if disconnected, as long as it is up and running. Systemd 213 introduced a new service, systemd-timesyncd, which is an SNTP client implementation: it is able to query a network time server and set the OS system time based on the response. This service has an extra feature for systems without an RTC: it saves the system time to disk on shutdown. So when your Raspberry Pi reboots, it can use the stored time to initialise its system time while waiting for more accurate time once the network is ready. Sure, during the early boot process the system time might be off by a couple of seconds, but it is better than nothing.

As for NTP, it adjusts the system time based on responses from network time servers or, when offline, based on the clock drift NTP has been calculating for the current clocksource. This means that if you run NTP, it is good to let it run for at least 24 hours so it can accurately measure the clocksource drift and then compensate for it during network disconnection periods. In addition, NTP will regularly sync the system time back to the hardware clock to correct the RTC. In upcoming articles, we will see how we can add an RTC to our Raspberry Pi, how to overcome the challenges of giving NTP access to the RTC inside the container, and how to increase the clock accuracy. In addition, we will see how to become an NTP time server for the local LAN.

What did we learn about Docker

First, we practiced the basics of building a container (the Dockerfile syntax and the docker build ... command), running a container in foreground or background mode (docker run ...) and controlling the running container (docker stop ... and docker update ...). I did not yet elaborate much on the capabilities these commands offer, but it is my intention that we will discover them further as we progress with the experiment.

Second, we learned about some of Docker's security measures (like Linux capabilities) and the limitations of the current Raspberry Pi platform (no SECCOMP filtering, AppArmor or user namespaces), and we also learned how to extend a container's permissions by adding new capabilities.

Next to learn will be how to provide access to specific devices (such as an RTC), how to do simple monitoring (checking that the container is running, its resource usage and logs) and how to increase its security (dropping unnecessary capabilities, using the other security measures). With this quest we will learn a lot about the Raspberry Pi as well: we will add an RTC module, and we will compile our own kernels in order to add new security functions and reduce OS jitter, etc.

LXC unprivileged containers on Ubuntu 14.04 LTS

LXC Containers

I have been toying around with containers using LXC, and I decided to use this technology to do some performance testing for a PHP web application. So I set up a VM with Ubuntu 14.04 LTS and decided to use containers to test various stacks, e.g. MySQL 5.5 + PHP 5.6 + Nginx 1.4.6 vs MariaDB 10 + PHP 7 RC3 + Nginx 1.9 (and all possible combinations), HTTP/2, latencies, etc. On top of this experiment I wanted to learn more about Ansible.

Therefore my needs were the following:

  • One account to rule them all: one user account on the VM with sudo privileges. Used by Ansible to administer the VM and its containers.
  • Run each container as an unprivileged one.
  • Each container is bound to one unique user.

As of Ansible 1.8, LXC containers are supported, but as second-class citizens: the extra module needs some dependencies to work (which are not in the default repositories) and it would hide from me how the LXC containers are configured, so I discarded this solution.

Unprivileged containers using the LXC command line UI

Standard containers usually run as root and the root user within that container maps to the root user outside of the container. This is not exactly how I think of a container.

Real Container

In my view, a container is meant to hold something which I do not want to leak out of the container, a chroot on steroids! So although I was really looking forward to containers on Linux, the first implementations did not match my expectations or use cases.

Being able to run unprivileged containers is one of the great things which finally convinced me to check out containers on Linux: they finally cut the mapping between the privileged users on the host and those in the container.

Unprivileged containers are not as easy to set up as normal, fully privileged containers, and you have to accept that you need to download an image from some website you need to trust. Not ideal, but resources on how to build an image for an unprivileged container are really scarce online, so I decided to first try it with downloaded images and then, once I master LXC, to try building my own images.

Setting up LXC unprivileged containers requires a few more packages, especially for the user and group ID mapping, some preliminary account setup, and giving LXC proper access to where you store the LXC container data in your home folder. I did all of this, including setting up ACLs for the access. But when I hit lxc-start (...) it all failed!!!

Container in Distress

What I did after all preliminary configurations was:

$ sudo -H -u dbusr lxc-create -t download -n mysql55 -- --dist ubuntu --release trusty --arch amd64
$ sudo -H -u dbusr lxc-start -n mysql55 -d
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options

And the lxc-start command failed miserably.

Why? A bit of background first: I will try to describe at a high level how containers “contain” on Linux. LXC should not really be compared to Solaris Zones or FreeBSD Jails. LXC uses Linux control groups (cgroups) to contain (allow/restrict/limit) access by some processes to some resources (e.g. CPU, memory, etc.). When one creates a “local” session on Ubuntu 14.04 LTS (and this is still valid on the upcoming Ubuntu 15.10 as of writing), such as logging in through the console or via SSH, Ubuntu allocates for the user different control group controllers, which can be viewed by doing:

$ cat /proc/self/cgroup
12:name=systemd:/user/1000.user/4.session
11:perf_event:/user/1000.user/4.session
10:net_prio:/user/1000.user/4.session
9:net_cls:/user/1000.user/4.session
8:memory:/user/1000.user/4.session
7:hugetlb:/user/1000.user/4.session
6:freezer:/user/masteen/0
5:devices:/user/1000.user/4.session
4:cpuset:/user/1000.user/4.session
3:cpuacct:/user/1000.user/4.session
2:cpu:/user/1000.user/4.session
1:blkio:/user/1000.user/4.session

Each line is a type of control group controller assigned to the user. For example, the line about memory concerns the Memory Resource Controller, which can be used for things such as limiting the amount of memory a group of processes can use.
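As an aside, this is what driving the memory controller by hand looks like on a cgroup-v1 system such as Ubuntu 14.04 (a sketch; the group name “demo” and the 256 MiB limit are arbitrary):

$ sudo mkdir /sys/fs/cgroup/memory/demo
$ echo $((256*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
$ echo $$ | sudo tee /sys/fs/cgroup/memory/demo/tasks

From that point on, the current shell and everything it spawns cannot use more than 256 MiB of memory.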

So when I run my script via Ansible, Ansible first establishes an SSH session using my “rule-them-all” user; an SSH session is considered “local”, and PAM triggers systemd-logind to automatically create cgroups for my user's shell process. Yes, even though Ubuntu 14.04 LTS still uses upstart as init system, it already has a few dependencies on systemd! Now, when my “rule-them-all” user uses sudo to execute commands as another user (be it root or one of my container users), the executed command is not considered a “local” session for the sudo-ed user. So no cgroups are created for the new process, and it actually inherits the cgroups of the caller. This is easily visible with the command below: the cgroup paths still reference my own user and UID, even though the command is run as another user:

$ sudo -u userdb cat /proc/self/cgroup
12:name=systemd:/user/1000.user/4.session
11:perf_event:/user/1000.user/4.session
10:net_prio:/user/1000.user/4.session
9:net_cls:/user/1000.user/4.session
8:memory:/user/1000.user/4.session
7:hugetlb:/user/1000.user/4.session
6:freezer:/user/masteen/0
5:devices:/user/1000.user/4.session
4:cpuset:/user/1000.user/4.session
3:cpuacct:/user/1000.user/4.session
2:cpu:/user/1000.user/4.session
1:blkio:/user/1000.user/4.session

This usually does not really matter, unless you are LXC and you use cgroups heavily!! So what happened is that lxc-start wanted to write to the various cgroups to create the container. But lxc-start was called by the user dbusr (via the sudo command); however, the cgroups it inherited were from my “rule-them-all” user, and obviously (and thankfully) dbusr does not have the right to change the cgroups of my “rule-them-all” user. So lxc-start failed with permission denied errors:

lxc_container: cgmanager.c: lxc_cgmanager_create: 299 call to cgmanager_create_sync failed: invalid request
lxc_container: cgmanager.c: lxc_cgmanager_create: 301 Failed to create hugetlb:mysql55
lxc_container: cgmanager.c: cgm_create: 646 Error creating cgroup hugetlb:mysql55
lxc_container: start.c: lxc_spawn: 861 failed creating cgroups
lxc_container: start.c: __lxc_start: 1080 failed to spawn 'mysql55'
lxc_container: lxc_start.c: main: 342 The container failed to start.

Getting Ubuntu 14.04 LTS ready to run our Unprivileged Containers

I did not manage to solve the problem on Ubuntu 14.04 LTS in a first attempt. But after some extra steps (which I will detail below), I made it work on the upcoming Ubuntu 15.10 (still alpha) release! So, revisiting my Ubuntu 14.04 LTS setup, I identified the packages needing an upgrade: the kernel, lxc and cgmanager (and a few dependencies). For the kernel, I used the latest Ubuntu LTS Enablement Stack and upgraded to kernel 3.19. For the other packages, despite my aversion to 3rd-party repositories, I decided to trust the Ubuntu LXC team (they are the ones who do the work to get LXC/LXD into Ubuntu in the first place) and their LXC PPA. So the commands were:

$ sudo apt-get install --install-recommends linux-generic-lts-vivid
$ sudo apt-add-repository ppa:ubuntu-lxc/lxc-stable
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt-get install --no-install-recommends lxc lxc-templates uidmap libpam-cgm

The lxc-templates package is necessary for me as I am using the “download” template. The uidmap package is mandatory for unprivileged containers. And libpam-cgm is necessary to resolve my problem: it is a PAM module for cgmanager, the Control Group Manager daemon (installed as a dependency of LXC on Ubuntu).

Now, armed with this updated kernel and container stack, it is time to present the extra setup steps I was talking about. We first need to update the PAM configuration for sudo or su (or both, depending on which method you use). I will show the extra lines for the former; they apply to the latter as well if you wish to use it. You need to edit, as root, the file /etc/pam.d/sudo and add the following lines after the line ‘@include common-session-noninteractive’:

session required pam_loginuid.so
session required pam_systemd.so class=user

The above two lines will register the new session created by sudo with the systemd login manager (systemd-logind). That is the guy we wanted notified so that the creation of the cgroups for our user can now work. I am still scouring the internet for the exact explanation of how this works; if I find it, I will probably write another post with the information.
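To see the effect, you can simply repeat the earlier check after the PAM change and compare the cgroup paths with the listing above (same command as before, userdb being my container user):

$ sudo -u userdb cat /proc/self/cgroup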

If I simply used su -l userdb or sudo -u userdb -H -s, I would just have to execute the following:

$ sudo cgm create all $USER
$ sudo cgm chown all $USER $(id -u) $(id -g)
$ cgm movepid all $USER $$

This creates all cgroups under the user $USER, which is userdb. It then sets the owner of these new cgroups to the UID and GID of userdb. And the last command moves the shell process ($$) into these new cgroups. Then, the rest is trivial:

$ lxc-start -n mysql55 -d

And it is working. And if you want to run those commands using sudo, this is how you do it:

$ sudo -H -i -u userdb bash -c 'sudo cgm create all $USER; sudo cgm chown all $USER $(id -u) $(id -g)'
$ sudo -H -i -u userdb bash -c 'cgm movepid all $USER $$; lxc-start -n mysql55 -d'

If you later close the SSH session and reconnect, you have to run the two commands again, even though they might display some warnings that the paths or what-not already exist. I still have those pesky warnings and extra commands, and I am not sure why.

Conclusion

While trying to run unprivileged containers on Ubuntu 14.04 LTS I met several problems which I could only solve by using the latest LXC and CGManager packages and some special PAM configuration for which I am not 100% sure of the impact. So unprivileged LXC containers on Ubuntu 14.04 LTS are still quite rough.

But during my journey with LXC, I’ve found an even easier way to create unprivileged LXC containers, without touching PAM, but still requiring the latest LXC and CGManager packages. This solution is based on LXD, the Linux Containers Daemon. There is a really good getting started guide by Stéphane Graber, the man behind LXD. I’m exploring this avenue at the moment and will report soon on this very blog.

All of this makes me really look forward to the next Ubuntu LTS due next spring: the newer LXC and the under-heavy-development LXD will be part of Ubuntu 16.04 LTS, and this could give a really great out-of-the-box experience with Linux containerisation powers.

Picture credits: LXC Containers is based on a Public Domain photo of an unknown author. Real Container by Petr Brož, licensed under CC BY-SA 3.0 via Wikimedia Commons. Container in distress is licensed under a CC-BY 2.0 license by the New Zealand Defence Force.

The State of Retina Displays

For almost a year now, I have owned a retina display device. This type of screen has been so highly praised by reviewers that I fell for it. So I got a 15″ Retina MacBook Pro. The screen is indeed beautiful, but next time I will save on the retina display premium and buy a good external screen instead.

On a retina screen, optimised applications look gorgeous, but I am not a pixel junkie and applications can look beautiful on a non-retina screen too. However, most non-optimised apps look dreadful and pixelated, and there are still loads of them. In addition, few applications make real use of the extra pixels, so fonts look slicker but the extra space is not really used.

The money saved by buying a non-retina notebook can be used to buy an external display, and with a small addition even a 27″ WQHD screen, which is not retina but offers incredible space to work. I got one at work and I love it; it is better than a retina display.

So if you need to choose, don't get the retina display and get an extra external screen for that price! You will be more productive.

Corollary: don't follow the advice of these guys at CNET. An ultrabook/Air device is meant to have a lot of battery life, so a manufacturer that delivers a good screen with incredible battery life (10 hours and more) is making the right choice. And if that means no retina and no touch screen, so be it. We can wait another couple of years to enjoy this little amusement while keeping sustained battery use.

Mobile Security, Not Really Usable, Not Really Secure

Yesterday I stumbled upon the opinion of a journalist who thinks that the current Yahoo! CEO, Marissa Mayer, is just a twerp when it comes to mobile security. I do not think we should call someone, even the CEO of an internet company like Yahoo!, a twerp just because they do not use a pass-lock on their mobile phone.

Why is mobile phone security important?

It is certainly about money: someone could use your phone/data connection, but you can easily and quickly close this gap (e.g. using Apple's Find my iPhone, or some Android equivalents). The other thing that has value is the personal data within the phone. But this data has value either in numbers (smart phone data stolen from thousands of users could certainly interest spammers and hoaxers), which is unlikely to apply to a single phone theft, or because you are a high-profile person, or because someone wants to use the data against its rightful owner (blackmail, defamation, etc.).

So yes, security is important for a mobile device.

Security vs. usability

But the usability and efficiency of the security features on mobile are awful! They get in the way of quickly grabbing the phone and performing an action (an urgent call, a photo, etc.).

Which is why I understand that people do not want to use a pass-code or a similar mechanism to access their phone. Most are cumbersome; they get in the way of easily and readily using the phone (that is why I still carry a camera when I want to take pictures!). And their relative security is dubious. The PIN1 or gesture authentication mechanisms can sometimes – with the right tool – be read from the finger traces on the phone and/or could be brute-forced (I am convinced there are ways to automate that).

For a few weeks now I have had a smart phone. It is not too old, nor brand new, but it is a good smart phone: a Samsung Galaxy S (1st generation). I have tried several pass-locks: there is the ubiquitous 4-digit PIN, the swipe gesture, a password, a simple lock and nothing. Clearly, the last two options are the usable ones! Of the pseudo-secure ones, only the PIN code seemed not too difficult to use. This is a great disappointment. I do not trust my phone to protect my data to the same extent I trust my computer. And worst of all, it seems that improving this is not on the agenda of most manufacturers/mobile OS providers, unless you count Apple's attempt to improve at least the usability side with fingerprinting.

Fingerprinting is not the panacea in terms of security. But when properly implemented it is as secure as a PIN-lock and it does not get in the way of using the phone2. Furthermore, I am sure this technology will evolve, and we can hope in the near future to have good-enough fingerprinting-lock technology which surpasses PIN-locks in terms of security.

 

  1. I still do not get why PIN pass-code systems do not randomise the position of the numbers on the screen!?!
  2. This is my humble opinion, no hard facts. Others have better expressed their views on fingerprinting on mobile phone than I did.

Finally, someone got it right regarding the 64-bit performance increase:

It seems that the bulk of the A7’s performance gains do not come from any advantages inherent to a 64-bit architecture, but rather from the switch from the outdated ARMv7 instruction set to the newly-designed ARMv8. – iFixit iPhone 5s Teardown