How to make a Docker container read-only

There are many ways to harden a Docker container; one of them is to make the container layer read-only.

This might seem like a marginal improvement to security: first of all, your application should not run as root or with special privileges (e.g. CAP_DAC_OVERRIDE), so there is limited risk that an attacker exploiting a vulnerability in your application could modify sensitive files. However, if you install your application within a Dockerfile as the application user (e.g. using bundle install), making the base layer read-only might protect it from unwanted modification.

I also like the idea of an immutable base layer and of clearly identifying the written data and whether it should be persisted or not. I relate that to security too, because the better you know the behaviour of an application, the better you can adapt its confinement.

Setting the base layer read-only is somewhat challenging. Making a container read-only is simple: there is a --read-only flag to the docker run command. But identifying which data is written by the containerised application can be a challenge. One task is thus to identify all written data and decide whether it should be persisted in a volume or not. In the latter case, one could use a tmpfs volume or a local volume (in a Swarm cluster).
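
As a first taste, the flag combination looks like this (a sketch: the image name and the writable mount points are assumptions that depend on your application):

$ docker run --rm --read-only --tmpfs /tmp --tmpfs /run my-app:latest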

We are going to use Docker’s layering approach to identify the written data. How to check the difference varies depending on the storage backend, and the backends are too numerous for me to cover every case; I might complete this article in the future, but today I will show how to do it with the BTRFS and Overlay2 backends.
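
For a quick, backend-agnostic first look, the docker diff command also lists the paths a container has added, changed, or deleted relative to its image (the container name below is just an example):

$ docker diff my-app-container   # prefixes: A (added), C (changed), D (deleted)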

What I am going to explain is based on the current implementation of the Docker storage backends as described in their respective guides. Each guide explains how the backend works, and by extracting that information I could find a way to compare the layers.

Continue reading “How to make a Docker container read-only”

Revisiting getting docker-compose on Raspberry Pi (ARM) the easy way

Whale

Two years ago I published a post on building docker-compose on an ARM machine. Nowadays, you can find docker-compose on PyPI. However, if you intend to run docker-compose on a platform without Python dependencies, you might still be interested in my guide, which generates an ELF binary executable.

My previous guide worked well until release 1.22.0, after which the Dockerfile.armhf (which was merged upstream) was updated to match the changes for the x86-64 platform, breaking my build instructions. The build seems to work and generates an executable, but it fails to run due to missing dependencies:

+ dist/docker-compose-Linux-armv7l version
[446] Failed to execute script docker-compose
Traceback (most recent call last):
  File "bin/docker-compose", line 5, in <module>
    from compose.cli.main import main
  File "/code/.tox/py36/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
    exec(bytecode, module.__dict__)
  File "compose/cli/main.py", line 13, in <module>
    from distutils.spawn import find_executable
ModuleNotFoundError: No module named 'distutils'

I have not found the root cause of the problem, as I am not familiar with tox, but it looks like a configuration problem in that tool. So I decided to simply use Python 3’s built-in venv.

As in my previous guide, you need to clone the repository and choose a branch. You can take the release branch or a specific version branch (e.g. bump-1.23.2).

$ git clone https://github.com/docker/compose.git
$ cd compose
$ git checkout bump-1.23.2

The next two shell commands modify the original build script to use venv and to add the missing dependencies (which are correctly installed in the tox environment but would be missing in ours):

$ sed -i -e 's:^VENV=/code/.tox/py36:VENV=/code/.venv; python3 -m venv $VENV:' script/build/linux-entrypoint
$ sed -i -e '/requirements-build.txt/ i $VENV/bin/pip install -q -r requirements.txt' script/build/linux-entrypoint
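
After these edits, the relevant lines of script/build/linux-entrypoint should look roughly like this (a sketch; the exact wording of the surrounding lines depends on the branch you checked out):

VENV=/code/.venv; python3 -m venv $VENV
...
$VENV/bin/pip install -q -r requirements.txt
$VENV/bin/pip install -q -r requirements-build.txt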

Now you can follow the exact same steps as in the previous guide. In summary:

$ docker build -t docker-compose:armhf -f Dockerfile.armhf .
$ docker run --rm --entrypoint="script/build/linux-entrypoint" -v $(pwd)/dist:/code/dist -v $(pwd)/.git:/code/.git "docker-compose:armhf"
$ sudo cp dist/docker-compose-Linux-armv7l /usr/local/bin/docker-compose
$ sudo chown root:root /usr/local/bin/docker-compose
$ sudo chmod 0755 /usr/local/bin/docker-compose
$ docker-compose version
docker-compose version 1.23.2, build 1110ad01
docker-py version: 3.6.0
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018

Linux kernel 5.0+ switching to Multi-Queue Block as default

Hard Disk Drive
For one of my Raspberry Pis, I maintain my own kernel. By that I mean that I use the kernel repository from the Raspberry Pi Foundation, but I define the kernel configuration myself. My goal is to make the kernel low-latency, hardened, and with specific drivers compiled in rather than built as modules.

Recently I upgraded it to kernel version 5.0.0-rc8 and now to 5.0.0. At first I thought there was an error in RC8 because I did not see the CFQ (Completely Fair Queueing) or Deadline I/O schedulers (block layer I/O schedulers). But when the stable version came out, there was no longer any doubt: they had either been moved to a new section or removed. The new default scheduler is mq-deadline, a multi-queue block scheduler, and there are two other alternatives available as modules: BFQ (Budget Fair Queueing) and Kyber.

Linux kernel 5.0+ is defaulting to blk-mq

I then discovered that Linux kernel 5.0.0 has dropped support for the legacy block schedulers and now only supports the multi-queue block (blk-mq) schedulers. That is a very interesting move: the multi-queue schedulers should provide better scalability, and therefore better performance, by exploiting hardware parallelism. On a desktop or a Raspberry Pi, I do not expect to see any improvement, but for servers there could be a win.
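
If you want to check which scheduler a disk is using, or switch it at runtime, the sysfs interface below does the job (sda is just an example device, and bfq is only selectable if its module is loaded):

$ cat /sys/block/sda/queue/scheduler
[mq-deadline] kyber bfq none
$ echo bfq | sudo tee /sys/block/sda/queue/scheduler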


Home network improvements – Setting up a Firewall

Closed Door at Gateway in Forbidden City

This is the fourth blog post in my home network improvements series. I am sorry it is taking me so long to write all these posts, but each one takes many hours to write and I am balancing my life more towards family at the moment. I hope you can bear with me until the end.

Great Wall winding over the mountains
Walls need to adapt to their environment

In the previous post, we installed the OS and set up networking and routing.

We will now see how to add another very important feature: the firewall.

  1. Router features list (published)
  2. Creating a basic router, defining the network and routing (published)
  3. Adding a firewall to our router (this post)
  4. Providing basic network services, DHCP and DNS (to be published)
  5. Testing the firewall (to be published)
  6. Extra services (to be published, could be split into more than one post)

So today’s post will present a simple but secure firewall installation.

As I said in a previous article, I want to try out nftables instead of iptables. But we will keep iterating on the previous post and use iptables one more time. I want to have a working router first; then I can think about switching to nftables and solving the integration with other tools.

A Basic Firewall

Firewall - Forbidden City Gateway
Firewall

We will use the iptables command line to populate the firewall rules. As changes made from the command line are not persistent, a simple reboot will restore the OS to its previous configuration; so if things do not work out, or if we get locked out by a wrong rule, just reboot and start setting up the firewall again. Once we are happy with the firewall, we will save the rule set and make it permanent.

As for the rules, we obviously do not want any traffic coming from the WAN to establish new connections inside our LAN or on our router. Only established connections should be allowed through, e.g. an HTTP response is allowed through the firewall so that we can browse the internet. We still want some network services to function, so ICMP and DNS messages should pass through the firewall. We do not want to filter outgoing traffic for the moment, so everything from the LAN is allowed to reach the WAN.

I like to set default policies for the different iptables chains instead of relying on the last rule to act as the policy for me. However, in order to avoid getting locked out, we will set those policies at the very end and always start by defining what is allowed. To define our firewall, we will first work with the main chains of the filter table (the default one), mostly caring about incoming packets and IP forwarding rules.
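
To make this concrete, here is a minimal sketch of such a rule set (assuming eth0 is the WAN interface and eth1 the LAN; this is not the complete rule set we will build):

# Always allow loopback traffic
$ sudo iptables -A INPUT -i lo -j ACCEPT
# Allow return traffic for connections we initiated
$ sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Let the LAN reach the WAN unfiltered (for now)
$ sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# Keep ICMP working (ping, path MTU discovery)
$ sudo iptables -A INPUT -p icmp -j ACCEPT
# Restrictive default policies, set last so we do not lock ourselves out
$ sudo iptables -P INPUT DROP
$ sudo iptables -P FORWARD DROP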

Continue reading “Home network improvements – Setting up a Firewall”

Catch of the Day: Is it Good or is it Bad phishing?

Fisherman on bambooboat China

I had a good laugh :D today at yet another phishing attempt.

The phishers behind this campaign must be philosophers or fans of Shakespeare. The phishing domain name used is – no kidding – goodorbad.email!

The link points to goodorbad.email domain name
Phishing – Good or Bad?

Bad luck for our phisher too: for once, I was using Apple Mail on my wife’s laptop to check my daily email, and with a Retina screen the fake link was all blurry.

This is interesting because it is the first time I have seen an attack trying to obfuscate the link using an image. Frankly, I do not see the advantage: it risks being blurry on HiDPI or Retina displays, and it risks not being displayed at all if the image is remote (in this case, the image was provided as an attachment, so it was loaded automatically).

Anyway, the domain should probably have been goodorbad.phishing or simply bad.phishing!

Home network improvements – Building a Basic Router

Loop Junction in Chicago

This is the third blog post in my home network improvements series.

Gateway Appliance Picture - License CC BY-SA by Cuda-mwolfe
Gateway Appliance – License CC BY-SA by Cuda-mwolfe

In the previous post, we presented which features we should implement in our router.

We will now see how to implement the basic features, which are routing, firewall and NAT, DHCP and DNS.

  1. Router features list (published)
  2. Creating a basic router, defining the network and routing (this post)
  3. Adding a firewall to our router (to be published)
  4. Providing basic network services, DHCP and DNS (to be published)
  5. Extra services (to be published, could be split into more than one post)

So today’s post will present in order:

  1. OS installation
  2. Network interfaces configuration
  3. A discussion of what routing is, with the activation of packet forwarding, Network Address Translation (NAT) and IP masquerading

For some of the items we will see today, we will start with basic functionality that we will improve on or iterate over in subsequent posts. As I said in a previous article, I want to try out nftables instead of iptables. But many tools I would like to use to quickly create a router still only support iptables as a backend, and you cannot mix iptables and nftables. Such tools include systemd-networkd, Docker, and the version of firewalld which Ubuntu currently supports (note that firewalld version 0.6+ does support nftables as a backend). So in this first iteration, and in order to create a basic router relatively quickly, we will mostly use iptables, either through systemd-networkd support or via other tools.
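
As a preview, the routing core of today’s post boils down to these two operations (a sketch, assuming eth0 is the WAN interface; the actual configuration will partly go through systemd-networkd):

# Enable IPv4 packet forwarding in the kernel
$ sudo sysctl -w net.ipv4.ip_forward=1
# Masquerade LAN traffic leaving through the WAN interface
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE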

Continue reading “Home network improvements – Building a Basic Router”

Containers, volumes and file permissions

Permission Denied for Container’s Volume

There is a subject which seems completely abstruse to many users of containers on Linux: sharing data between a host and a container, or between containers.

I do think that solving this problem is not much different from solving it without containers on Linux and Unix. From my perspective, there is not much difference between managing file permissions with or without containers; the big change for me is the introduction of namespaces, especially user namespaces.

So what is exactly the problem? And where does it come from?

The problem is that when running a process within a container, that process runs with a certain user and group ID (UID and GID respectively), and those IDs might differ from the ones of the caller (the user creating and running the container); this might not be obvious. It is especially true with container technologies like Docker, which by default runs the process within the container as root (unless overridden in the Dockerfile or on the command line), while any user with write access to the Docker socket can create such a container. So by default you have a discrepancy in UID and GID between the caller – probably a standard user – and a random Docker container.
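
You can observe the discrepancy with a one-liner (a sketch; the alpine image and file name are just examples):

$ docker run --rm -v "$(pwd)":/data alpine touch /data/owned-by-root
$ ls -l owned-by-root   # owned by root:root on the host, not by your user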

In traditional Unix / Linux, this is “normal” or “expected” behaviour. You usually cannot run a process as root from your normal user unless you use sudo or a setuid program, so you do not usually face the problem that a program you launch might have a different UID/GID from your own user. And when you use a program with sudo you understand that this might become a problem: if you use sudo to run `tcpdump -w net-trace.pcap`, you know the file net-trace.pcap will be owned by root and that you might not be able to access or delete it. The same reflex needs to apply to running a container.

When you have done Unix/Linux development for most of your career – and you have adopted the principle of least privilege … I still know a few people who only use the root account ;-) – you are used to creating applications that run in the background (as a service) under a dedicated user, and for which you need to handle the permissions of the data the application uses. So introducing containers (without user namespaces) should not bring any surprise here; it is part of the expectations. But as you will see later, you can still be bitten by some edge cases of the container implementation.

So, let us see how to fix this problem of user/group IDs and file permissions. Note that the solution is similar whether you use containers or not, and applies to all container implementations (e.g. LXC, Docker, etc.). Then we will see how to handle file permissions when using user namespaces (hint: the principles are the same, but it takes a few extra steps to understand what the effective UID/GID will be). Finally, in the case of Docker, we will see a few edge cases where you can still be caught off guard with respect to file permissions and volume declarations inside a Dockerfile.
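
One common mitigation, shown here only as a sketch and not necessarily the solution the full post settles on, is to run the container process with your own UID/GID:

$ docker run --rm --user "$(id -u):$(id -g)" -v "$(pwd)":/data alpine touch /data/owned-by-me
$ ls -l owned-by-me   # now owned by your own user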

Continue reading “Containers, volumes and file permissions”

Containers for Firefox

Wooden Boxes – CC0 Public Domain

There is a new feature coming to Firefox which was discreetly introduced in Firefox 50 Nightly and is being improved in follow-up releases. It is called Containers and is part of the Contextual Identity Project.

In short, each container – or context – is a “colour-coded” tab with a dedicated environment that helps you separate your online activities. So you can have some tabs in one context and others in another.

This increases privacy, as sites cannot spy on you outside of the context in which you use them. It allows separation of concerns, so you can use a website (e.g. GitHub) for both work and personal use inside the same browser, each with a different account. It also increases security: if you access your bank in a dedicated context, it is harder for some cross-site attacks (e.g. cross-site request forgery) to reach your bank data.

To activate it, you can go to the about:config page and set the entry privacy.userContext.enabled to true; this gives you the vanilla experience, still a bit rough in Firefox 60 and 61 and already quite improved in Firefox 62 Developer Edition. A recommended alternative is to use Mozilla’s add-on Firefox Multi-Account Containers, which provides a nice icon and a walk-through. It works at least on Firefox for Linux, macOS and Windows.

This is how it looks on Ubuntu (I’m using the default Dark theme). You can see that my Gmail is open in a blue-coloured container, GitHub in a purple one, a shopping site in a pink “Shopping” container, and finally a news site in no specific container. I could open another tab to my Grafana site in the same purple-coloured container as GitHub, and I would then be able to use GitHub OAuth to log in to Grafana. If I opened Grafana in another container, or in none, I would not be able to use GitHub OAuth without re-authenticating to GitHub in that new context.

Firefox Containers illustration

So I’m really looking forward to further improvements to Firefox Containers.

Home network improvements – What can a Router do?

This is the second blog post in my home network improvements series.

Gateway Appliance Picture - License CC BY-SA by Cuda-mwolfe
Gateway Appliance – License CC BY-SA by Cuda-mwolfe

In the previous post, we evaluated our options for a new router, and the conclusion was to build the hardware from PC parts and install OPNsense. However, our selected PC parts are a bit too recent: the embedded NIC (an i219V) attached to the Intel B360 chipset is not yet recognised by the underlying FreeBSD core.

Therefore, we will now see how to build a router from scratch based on Ubuntu 18.04 LTS. I will only configure it for IPv4, as my ISP currently provides only IPv4 connectivity. I am planning a series of several posts, including this one, and I will update the list below as newer articles are published:

  1. Router features list (this post)
  2. Creating a basic router (to be published, could be split into more than one post)
  3. Extra services (to be published, could also be split into more than one post)

Disclaimer: I am not a security engineer, although I am very familiar with many aspects of security and security analysis. I am also not a network engineer, although I am very knowledgeable about network protocols, network programming and network security. This article is an exercise for me, to see how far I can get building a router for SOHO purposes. I make no warranty that it works as intended, nor that I will keep this article up to date with respect to network technology and threats. Use at your own risk.

Note: I am mostly going to avoid Ubuntu-specific tools, but of course some will be unavoidable (e.g. for network IP address configuration), so this guide should apply to other Linux distributions. There will be some adaptations to make, especially with respect to configuring the network interfaces, as there are so many different tools for that.

Continue reading “Home network improvements – What can a Router do?”

Home network improvements

Currently my home network is pretty simple … at least for a computer scientist! ;-)

Gateway Appliance Picture - License CC BY-SA by Cuda-mwolfe
Gateway Appliance – License CC BY-SA by Cuda-mwolfe

My ISP provided an all-in-one box with TV, a landline and a network router, the latter being very limited and with a poor WiFi access point (AP). So I have been using my old Asus RT-AC68U router as a gateway, together with a 24-port switch and a Ubiquiti UniFi AP providing WiFi throughout the house (and garden). The router and switch went into the basement, whereas I placed the AP roughly in the centre of the house. The ISP box could not be configured as a bridge but supported setting a DMZ host, so I configured the Asus router to be the DMZ.

Here is the basic setup:

+--------+             +--------+
|        |    DMZ      |        |          +------------------------+
|ISP Box +-------------+ Router +----------+ Switch                 |
|        |             |        |          +--+------+---+---+---+--+
+--------+             +--------+             |      |   |   |   |
                                              |      |   |   |   |
                                           +--+--+   +   +   +   +
                                           | AP  | Home Network / Lab
                                           +-----+

So I’m using only two ports on my router (or, more exactly, network gateway): the WAN port and one LAN port. This router is the piece of my current network I want to change, and I will explain why and how.

Post updated on 2018-06-13.

Continue reading “Home network improvements”