How to make a Docker container read-only

There are many ways to harden a Docker container; one of them is to make the container layer read-only.

This might only be a marginal improvement to security: your application should not run as root or with special privileges (e.g. CAP_DAC_OVERRIDE) in the first place, so there is limited risk that an attacker exploiting a vulnerability in your application can modify sensitive files. However, if you install your application within a Dockerfile as the application user (e.g. using bundle install), making the base layer read-only might protect it from unwanted modification.

I also like the idea of an immutable base layer and of clearly identifying the data that is written, and whether it should be persisted or not. I also relate that to security, because the better you know the behaviour of an application, the better you can adapt a confinement for it.

Setting the base layer read-only is somewhat challenging. Making the container itself read-only is simple: there is a --read-only flag to the docker run command. But identifying which data is written by the containerised application can be a challenge. One task is thus to identify all written data and to decide whether it should be persisted in a volume or not. In the latter case, one could use a tmpfs mount or a local volume (in a Swarm cluster).
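
To give a concrete idea, here is what such a run could look like, with tmpfs mounts for the paths that only hold throwaway data (the image name and the mount points are just examples, adapt them to your application):

# Root filesystem mounted read-only; only /tmp and /run are writable, and only in memory
$ docker run --read-only --tmpfs /tmp --tmpfs /run my-app:latest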

We are going to use the Docker layering approach to identify the written data. How to check the differences varies depending on the storage backend, and the backends are too numerous for me to cover every case. I might complete the article in the future, but today I will show how to do it with the BTRFS and Overlay2 backends.

What I am going to explain is based on the current implementation of the Docker storage backends as described in their respective guides. Each guide explains how the backend works, and by extracting that information I could find a way to compare the layers.
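
As a first taste, with the Overlay2 backend the container's writable layer is exposed on the host as an "upper" directory that docker inspect can reveal, so a minimal sketch of the comparison could look like this (the container name webapp is only a placeholder):

# Print the path of the container's writable layer (a directory under /var/lib/docker/overlay2)
$ docker inspect --format '{{ .GraphDriver.Data.UpperDir }}' webapp
# Every file below that path has been created or modified since the container started
$ sudo find "$(docker inspect --format '{{ .GraphDriver.Data.UpperDir }}' webapp)" -type f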

Continue reading “How to make a Docker container read-only”

Home network improvements – Setting up a Firewall

Closed Door at Gateway in Forbidden City

This is the fourth blog post about my home network improvements series. I am sorry it is taking me so long to write all those posts, but each takes a lot of hours to write and I am balancing my life more towards family at the moment. I hope you can bear with me until the end.

Great Wall winding over the mountains
Walls need to adapt to their environment

In the previous post, we installed the OS and set up networking and routing.

We will now see how to add another very important feature: the firewall.

  1. Router features list (published)
  2. Creating a basic router, defining the network and routing (published)
  3. Adding a firewall to our router (this post)
  4. Providing basic network services, DHCP and DNS (to be published)
  5. Testing the firewall (to be published)
  6. Extra services (to be published, could be split into more than one post)

So today’s post will present a simple but secure firewall installation.

As I said in a previous article, I want to try out nftables instead of iptables. But we will continue iterating on the previous post and use iptables one more time. I want to have a working router first; then I can think about switching to nftables and solving the integration with other tools.

A Basic Firewall

Firewall - Forbidden City Gateway
Firewall

We will use the iptables command line to populate the firewall rules. Changes made this way are not persistent: a simple reboot will restore the OS to its previous configuration, so if things do not work out or if we get locked out by a wrong rule, we can just reboot and start setting up the firewall again. Once we are happy with the firewall, we will save the rule set and make it permanent.
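
For reference, a common way to make the rules permanent on Ubuntu is the iptables-persistent package, which reloads a saved rule set at boot (just a sketch of one option; the actual persistence mechanism will be decided once the rules are final):

$ sudo apt install iptables-persistent
$ sudo sh -c 'iptables-save > /etc/iptables/rules.v4'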

For rules, we obviously do not want any traffic coming from the WAN to establish new connections inside our LAN or on our router. Only established connections should be allowed through, e.g. an HTTP response is allowed through the firewall so that we can browse the internet. We want some network services to still function, like ICMP or DNS messages to pass through the firewall. We do not want to filter the outgoing traffic for the moment, so everything from the LAN is allowed to reach the WAN.

I like to set default policies for the different iptables chains instead of relying on a final catch-all rule to do the policing for me. However, in order to avoid getting locked out, we will set those policies at the very end and always start by defining what is allowed. To define our firewall, we will first work with the main chains of the filter table (the default one), mostly caring about incoming packets and IP forwarding rules.
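
To give an idea of where we are heading, here is a minimal sketch of such a rule set (the interface names eth0 for the WAN and eth1 for the LAN are assumptions, adapt them to your router):

# Allow return traffic for connections initiated from the router or the LAN
$ sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow loopback traffic and ICMP for diagnostics
$ sudo iptables -A INPUT -i lo -j ACCEPT
$ sudo iptables -A INPUT -p icmp -j ACCEPT
# Allow the LAN to talk to the router and to be forwarded towards the WAN
$ sudo iptables -A INPUT -i eth1 -j ACCEPT
$ sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# Only once the allow rules are in place, set the restrictive default policies
$ sudo iptables -P INPUT DROP
$ sudo iptables -P FORWARD DROP
$ sudo iptables -P OUTPUT ACCEPT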

Continue reading “Home network improvements – Setting up a Firewall”

Catch of the Day: Is it Good or is it Bad phishing?

Fisherman on bambooboat China

I had a good laugh :D today at yet another phishing attempt.

The phishers behind this campaign must be philosophers or fans of Shakespeare. The phishing domain name used is – no kidding – goodorbad.email!

The link points to goodorbad.email domain name
Phishing – Good or Bad?

Bad luck also for our phisher, for once I was using Apple Mail on my wife’s laptop to check my daily email, and with a Retina screen the fake link was all blurry.

This is interesting because it is the first time I have seen an attack trying to obfuscate the link using an image. Frankly, I do not see the advantage: it risks looking blurry on HiDPI or Retina displays, and it risks not being displayed at all if the image is remote (in this case the image was provided as an attachment, so it was loaded automatically).

Anyway, the domain should probably have been goodorbad.phishing or simply bad.phishing!

Home network improvements – What can a Router do?

This is the second blog post about my home network improvements series.

Gateway Appliance Picture - License CC BY-SA by Cuda-mwolfe
Gateway Appliance – License CC BY-SA by Cuda-mwolfe

In the previous post, we evaluated our options for a new router and the conclusion was to build the hardware from PC parts and to install OPNsense. However, given that our selected PC parts are a bit too recent, the embedded NIC (an i219V) of the Intel B360 chipset is not yet recognised by the underlying FreeBSD core.

Therefore, we will now see how to build a router from scratch based on Ubuntu 18.04 LTS. I will only configure it for IPv4, as my ISP currently provides only IPv4 connectivity. I am planning a series of several posts including this one, and I will update the list below as newer articles are published:

  1. Router features list (this post)
  2. Creating a basic router (to be published, could be split into more than one post)
  3. Extra services (to be published, could also be split into more than one post)

Disclaimer: I am not a security engineer, although I am very familiar with many aspects of security and security analysis. I am also not a network engineer, although I am very knowledgeable in network protocols, network programming and network security. This article is an exercise for me to see how far I can build a router for SOHO purpose. I make no warranty that it works as intended, nor that I will maintain this article to keep it up to date with respect to network technology and threats. Use at your own risk.

Note: I am mostly going to avoid using Ubuntu-specific tools, but of course some will be unavoidable (e.g. network IP address configuration). So this guide should apply to other Linux distributions, with some adaptations, especially with respect to configuring the network interfaces as there are so many different tools to do that.

Continue reading “Home network improvements – What can a Router do?”

Home network improvements

Currently my home network is pretty simple … at least for a computer scientist! ;-)

Gateway Appliance Picture - License CC BY-SA by Cuda-mwolfe
Gateway Appliance – License CC BY-SA by Cuda-mwolfe

My ISP provided an all-in-one box with TV, landline and network router, the latter being very limited and having a crap WiFi access point (AP). So I’ve been using my old Asus RT-AC68U router as a gateway, a 24-port switch and a Ubiquiti UniFi AP to provide WiFi in the complete house (and garden). The router and switch went into the basement, whereas I’ve placed the AP roughly in the centre of the house. The ISP box could not be configured as a bridge but supported setting a DMZ host, so I’ve configured the Asus router to be the DMZ.

Here is the basic setup:

+--------+             +--------+
|        |    DMZ      |        |          +------------------------+
|ISP Box +-------------+ Router +----------+ Switch                 |
|        |             |        |          +--+------+---+---+---+--+
+--------+             +--------+             |      |   |   |   |
                                              |      |   |   |   |
                                           +--+--+   +   +   +   +
                                           | AP  | Home Network / Lab
                                           +-----+

So I’m using only 2 ports on my router (or more exactly network gateway): the WAN one and one on the LAN. This router is the piece in my current network I want to change, and I will explain why and how.

Post updated on 2018-06-13.

Continue reading “Home network improvements”

A Time Server in a Container – Part 1

GPL https://commons.wikimedia.org/wiki/File:Sablier-temps-icone-5376-128.png

To learn Docker in details I decided to use it to run a local time server using ntpd from the ntp.org project.

I have used an incremental approach where I started with an easy setup and then increased the challenges either to improve the time server or to better understand Docker.

Getting Started

So why ntpd and not <put your favourite time server here>?

I know ntpd from having configured it many times in the past 10 years. So I wanted to start with it to quickly get time synchronisation working.

Which platform?

I have a Raspberry Pi (abbreviated RPi from now on) which serves as DHCP server and as a local forwarding and caching DNS resolver for my LAN. It had early support for Docker back in October when I started my experiment, which added a bit of spice to it.

Getting Docker on a Raspberry Pi

There are many ways to get Docker running on your RPi. You could get the Hypriot OS Linux distribution which has everything set up nicely for running Docker containers. You can compile Docker on your platform of choice (which I had to do to squash a few early adopters’ bugs). You can install a tarball containing the binaries for your platform. But if running Raspbian Jessie – like I was – you can today just include Docker’s own repository and install a binary version using apt-get. Make sure your Kernel is recent (Docker requires 3.10 at least, but if you have a properly updated Raspbian it should be running 4.4 at the time of writing).

You can follow Docker’s installation guide for Debian, but by default it will configure the x86_64 Docker repository. As hinted in the documentation, for other architectures you need to use the [arch=...] clause. In addition, Docker provides a specific variant of the package for Raspbian. So for Raspbian Jessie, use the following entry in your docker.list file:

deb [arch=armhf] https://apt.dockerproject.org/repo raspbian-jessie main

Continue to follow the Docker guide, including how to set up non-root access to a specific user.
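
For completeness, the remaining steps looked roughly like this at the time (the package in that repository was named docker-engine, and pi is simply the user I granted non-root access to):

$ sudo apt-get update
$ sudo apt-get install docker-engine
# Allow the user "pi" to use Docker without sudo (log out and back in for it to take effect)
$ sudo usermod -aG docker pi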

Creating a Docker image for ntpd

Create a specific folder somewhere on your Raspberry Pi storage (e.g. mkdir -p ~/projects/docker/ntpd) and create a file Dockerfile.armhf (I use the extension .armhf so I can have distinct Dockerfiles for each platform I use) with the following content:

FROM armhf/ubuntu:16.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends ntp \
    && apt-get clean -q \
    && rm -Rf /var/lib/apt/lists/*
ENTRYPOINT ["/usr/sbin/ntpd"]

Note: This file, as well as newer versions of it and instructions to build and run the container, is available in my GitHub ntp container project. In the rest of this blog post, I’m only going to detail how I approached running the container and solving problems.

The first line states that the base image for the container will be Ubuntu 16.04 (the specific variant for the RPi architecture). The second to fifth lines are commands executed on top of the base image: they update the package list, install the latest version of ntpd with the fewest dependencies, and remove any cached or temporary files, so we minimise the size of the image on disk. Finally, the last line is the command that will be executed by Docker when instructed to run the container. I have used ENTRYPOINT because it allows me – while experimenting – to change the list of parameters I pass to ntpd when I create and run the container. This gives me flexibility when testing different parameters.

I picked Ubuntu as the base image because it has sane defaults for the ntpd configuration file: it uses the NTP Pool project and the configuration is secured by default. Other base images could also have worked and also have sane defaults. I could have used the Alpine Linux base image, which is really compact and lightweight and would have been perfect for a small platform like a Raspberry Pi, but it does not provide the ntpd package from the NTP project which I wanted to start with. It only offers OpenNTPD (which does not support leap seconds, so it was a no-go for me) and Chrony (which could be a good alternative, but as I mentioned before I wanted to first experiment with Docker, not learn yet another NTP application).

Let’s build the container image (I named the image “article/armhf/ntpd” and tagged it with the current date, but just name it like you want):

$ docker build -f Dockerfile.armhf -t article/armhf/ntpd:20170106.1 .

Running the NTP container

We are now going to spawn an instance of the container image in foreground to see what is going on and to notice any error:

$ docker run --rm -it article/armhf/ntpd:20170106.1 -n
 6 Jan 14:03:30 ntpd[1]: ntpd 4.2.8p4@1.3265-o Wed Oct  5 12:38:30 UTC 2016 (1): Starting
 6 Jan 14:03:30 ntpd[1]: Command line: /usr/sbin/ntpd -n
 6 Jan 14:03:30 ntpd[1]: Cannot set RLIMIT_MEMLOCK: Operation not permitted
 6 Jan 14:03:30 ntpd[1]: proto: precision = 1.198 usec (-20)
 6 Jan 14:03:30 ntpd[1]: Listen and drop on 1 v4wildcard 0.0.0.0:123
 6 Jan 14:03:30 ntpd[1]: Listen normally on 2 lo 127.0.0.1:123
 6 Jan 14:03:30 ntpd[1]: Listen normally on 3 eth0 172.17.0.2:123
 6 Jan 14:03:30 ntpd[1]: Listen normally on 4 lo [::1]:123
 6 Jan 14:03:30 ntpd[1]: Listening on routing socket on fd #21 for interface updates
 6 Jan 14:03:30 ntpd[1]: start_kern_loop: ntp_loopfilter.c line 1126: ntp_adjtime: Operation not permitted
 6 Jan 14:03:30 ntpd[1]: set_freq: ntp_loopfilter.c line 1089: ntp_adjtime: Operation not permitted
 6 Jan 14:03:31 ntpd[1]: Soliciting pool server 193.200.241.66
 6 Jan 14:03:32 ntpd[1]: Soliciting pool server 90.187.7.5
 6 Jan 14:03:32 ntpd[1]: adj_systime: Operation not permitted
 6 Jan 14:03:32 ntpd[1]: Soliciting pool server 129.70.132.37
 6 Jan 14:03:33 ntpd[1]: Soliciting pool server 85.25.210.112
 6 Jan 14:03:33 ntpd[1]: Soliciting pool server 31.25.153.77
 6 Jan 14:03:34 ntpd[1]: Soliciting pool server 178.63.9.212
 6 Jan 14:03:34 ntpd[1]: Soliciting pool server 193.22.253.13
^C 6 Jan 14:03:40 ntpd[1]: ntpd exiting on signal 2 (Interrupt)

We have a few errors (Operation not permitted), which I have highlighted above: one is about RLIMIT_MEMLOCK (resetting the limit of the maximum locked-in-memory address space, which ntpd uses to prevent its main process from swapping and thus limit jitter), and the other ones are about ntp_adjtime and adj_systime (both used by ntpd to interface with the Kernel and adjust the system time).

By default ntpd runs as the root user, so it should have enough privileges for these operations. Docker does support running unprivileged containers (i.e. the root user inside the container is mapped to a normal user on the host, based on user namespaces, see namespaces(7)), but this is not the default Docker configuration, so my root user inside the container is the root user outside the container. Besides, even if Docker were configured to use user namespaces, they are not compiled into the Raspberry Pi Foundation Kernel, so it is at the moment not possible to use that feature on a Raspberry Pi without some extra effort (I will detail this in a future article).

In order to implement basic privilege limitations for containers, Docker can use various security features of the Linux Kernel to prevent the container from accessing certain sensitive kernel calls. The most notable ones are Linux capabilities (since Docker 1.2), Linux SECCOMP filtering (since Docker 1.10, though it is better to use Docker 1.12+ as previous default SECCOMP profiles conflicted with Docker’s capabilities management; in addition, the Raspbian Kernel (version 4.4 as of writing) has no built-in support for SECCOMP filtering, so this functionality is not usable on a Raspberry Pi unless you compile your own Kernel) and Linux MAC (like SELinux or AppArmor, neither of which is available on a Raspberry Pi without recompiling your own Kernel and installing the user space tools). So Docker on a Raspberry Pi can only use Linux capabilities as a security feature.

By default Docker provides each container with a reasonable set of capabilities (see the Docker documentation on capabilities). If you check both documents (the Linux capabilities manual and the Docker runtime privileges doc), you will find out that our container is basically missing the CAP_SYS_RESOURCE and CAP_SYS_TIME capabilities. Now there are two ways to add them. Most online guides would tell you that when you run into “operation not permitted” errors, just add the --privileged flag to the docker run command line and it will be fixed. That is the first way, and it is the wrong approach (sure it works, but it is like deactivating SELinux because you are not allowed to perform an operation). The other way is to add the missing capabilities to the container using the --cap-add flag. That is what I am going to show now:

$ docker run --rm -it --cap-add SYS_RESOURCE --cap-add SYS_TIME article/armhf/ntpd:20170106.1 -n
 7 Jan 11:19:24 ntpd[1]: ntpd 4.2.8p4@1.3265-o Wed Oct  5 12:38:30 UTC 2016 (1): Starting
 7 Jan 11:19:24 ntpd[1]: Command line: /usr/sbin/ntpd -n
 7 Jan 11:19:24 ntpd[1]: proto: precision = 1.823 usec (-19)
 7 Jan 11:19:24 ntpd[1]: Listen and drop on 0 v6wildcard [::]:123
 7 Jan 11:19:24 ntpd[1]: Listen and drop on 1 v4wildcard 0.0.0.0:123
 7 Jan 11:19:24 ntpd[1]: Listen normally on 2 lo 127.0.0.1:123
 7 Jan 11:19:24 ntpd[1]: Listen normally on 3 eth0 172.17.0.2:123
 7 Jan 11:19:24 ntpd[1]: Listen normally on 4 lo [::1]:123
 7 Jan 11:19:24 ntpd[1]: Listening on routing socket on fd #21 for interface updates
 7 Jan 11:19:25 ntpd[1]: Soliciting pool server 213.95.21.43
 7 Jan 11:19:26 ntpd[1]: Soliciting pool server 134.119.8.130
 7 Jan 11:19:26 ntpd[1]: Soliciting pool server 46.4.32.135
 7 Jan 11:19:27 ntpd[1]: Soliciting pool server 213.136.86.203
 7 Jan 11:19:27 ntpd[1]: Soliciting pool server 178.63.9.212
 7 Jan 11:19:27 ntpd[1]: Listen normally on 7 eth0 [fe80::42:acff:fe11:2%6]:123
 7 Jan 11:19:27 ntpd[1]: new interface(s) found: waking up resolver
 7 Jan 11:19:27 ntpd[1]: Soliciting pool server 46.165.212.205
 7 Jan 11:19:28 ntpd[1]: Soliciting pool server 109.239.58.247
 7 Jan 11:19:28 ntpd[1]: Soliciting pool server 131.188.3.221
 7 Jan 11:19:28 ntpd[1]: Soliciting pool server 78.46.189.152
 7 Jan 11:19:28 ntpd[1]: Soliciting pool server 195.50.171.101
^C  7 Jan 11:22:40 ntpd[1]: ntpd exiting on signal 2 (Interrupt)

To make sure this is working, first verify that you do not have any time synchronisation service running: $ sudo systemctl stop systemd-timesyncd ntp.

Then change the system time by shifting it by 5 seconds: $ sudo date -s "5 seconds".

Check that your system clock is now off by 5 seconds:

$ ntpdate -q time1.google.com
server 216.239.35.0, stratum 2, offset -5.002284, delay 0.14117
18 Jan 11:27:55 ntpdate[5217]: step time server 216.239.35.0 offset -5.002284 sec

Start the container in the background this time: $ docker run --name ntpd --detach --restart always --cap-add SYS_RESOURCE --cap-add SYS_TIME article/armhf/ntpd:20170106.1 -g -n

Wait a few seconds and query the network time again using the above ntpdate command. The offset should now be below 5 seconds and probably close to 0.
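
If you want to peek at what the detached container is doing, its logs are available through Docker, and since the Ubuntu ntp package also ships the ntpq utility you should be able to query the daemon’s peer list from inside the container (ntpd is the container name we chose above):

# Follow the container's log output (ntpd logs to the console because of -n)
$ docker logs -f ntpd
# List the pool servers ntpd is currently synchronising with
$ docker exec ntpd ntpq -p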

You now have an NTP service running inside a container and synchronising your system clock using Internet time servers from the NTP Pool project. If you want to stop the experiment here and restore your system, you need to stop the container ($ docker stop ntpd), block it from restarting at the next boot ($ docker update --restart=no ntpd) and perhaps reboot so that the default time synchronisation service is reactivated.

But if you want to keep experimenting, or simply let the container do its job of time synchronisation, you should make sure to deactivate any other time synchronisation mechanism to avoid conflicts:

$ sudo timedatectl set-ntp false 
$ sudo systemctl disable ntp chronyd
$ sudo systemctl mask systemd-timesyncd
$ sudo systemctl stop systemd-timesyncd ntp chronyd

A Few Words about Time and NTP on a Raspberry Pi

The Raspberry Pi (at the time of writing this applies to all models) has no real time clock (RTC) module on its board. An RTC is a small oscillator (e.g. a quartz, like in your electronic wristwatch) plus some electronics to keep track of time, and a battery (or equivalent). RTCs help a system keep track of time while it is off and during the early phases of boot. On a standard desktop or laptop computer, the motherboard has an RTC. Many oscillators are not particularly accurate (low quality), with non-stable frequencies which can depend on external factors such as room temperature. It is possible to add an RTC module to the Raspberry Pi (I will have a detailed article on that soon), but without an RTC you need a network connection for the RPi to know the current time.

On Linux, the kernel manages two clocks: the hardware clock (which is based on the RTC) and the system clock (the clock used by the system to query or set the time; it ticks using a clocksource such as a CPU/SoC timer, kernel jiffies, etc.). On boot, the current time is read from the hardware clock and used to initialise the system clock. The system clock is then driven by the ticks of the selected clocksource, starting from the time read at boot from the hardware clock. Usually, on shutdown, many Linux distributions are configured to store the system clock back into the hardware clock.
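
On a machine that has both clocks you can compare them easily (a small sketch; timedatectl needs systemd, and reading the hardware clock needs root):

# Show the system clock and, when present, the RTC time
$ timedatectl
# Read the hardware clock directly; on a Raspberry Pi without an RTC module this fails
$ sudo hwclock --show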

The Raspberry Pi may have no hardware clock, but it has a clock source (the current clocksource on a Raspberry Pi 2, other models may differ):

[    0.000000] arm_arch_timer: Architected cp15 timer(s) running at 19.20MHz (phys).
[    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x46d987e47, max_idle_ns: 440795202767 ns
[    0.000010] sched_clock: 56 bits at 19MHz, resolution 52ns, wraps every 4398046511078ns
[    0.000032] Switching to timer-based delay loop, resolution 52ns

So if the time is set at boot, the OS can keep track of time even if disconnected, as long as it is up and running. Systemd 213 introduced a new service, systemd-timesyncd, which is an SNTP client implementation, so it is able to query a network time server and set the OS system time based on the response. This service has an extra feature for systems without an RTC: it saves the system time to disk on shutdown. So when your Raspberry Pi reboots, it can use the stored time to initialise its system time while waiting for more accurate time once the network is ready. Sure, during the early boot process the system time might be off by a couple of seconds, but it is better than nothing.

As for NTP, it adjusts the system time based on responses from network time servers or, when offline, based on the clock drift NTP has been calculating for the current clock source. This means that if you run NTP, it is good to let it run for at least 24 hours so it can accurately measure the clock source drift and then compensate for it during network disconnection periods. In addition, NTP will regularly sync the system time back to the hardware clock to correct the RTC. In upcoming articles, we will see how we can add an RTC to our Raspberry Pi, how to overcome the challenges of giving NTP access to the RTC from inside the container, and how to increase the clock accuracy. We will also see how we can become an NTP network time server for the local LAN.

What did we learn about Docker

First, we practiced the basics of building a container (the Dockerfile syntax and the docker build ... command), running a container in foreground or background mode (docker run ...) and controlling the running container (docker stop ... and docker update ...). I did not yet elaborate much on the capabilities these commands offer, but it is my intention that we discover them further as we progress with the experiment.

Second, we learned about some of Docker’s security measures (like Linux capabilities) and limitations of the current Raspberry Pi platform (like no SECCOMP filtering, AppArmor or user namespaces), and we also learned how to extend a container’s permissions by adding new capabilities.

Next we will learn how to provide access to specific devices (such as an RTC), how to do simple monitoring (checking that the container is running, its resource usage and logs), and how to increase its security (dropping unnecessary capabilities, using the other security measures). Along this quest we will also learn a lot about the Raspberry Pi: we will add an RTC module, compile our own Kernels in order to add new security functions and improve the OS jitter, etc.

An Unpredictable Raspberry Pi

Critical Miss! by Scott Ogle, CC BY-SA 2.0
Random Number Generator – Photo by Scott Ogle, CC BY-SA 2.0

Our computers are not really good at providing random numbers because they are quite deterministic (unless you count these pesky random bugs that make working on a computer so “enjoyable”). So we created different ways to generate pseudo-random numbers of various qualities depending on the use. For cryptography, it is paramount to have excellent random numbers, or an attacker could predict our next move!

Being unpredictable is a difficult task. Linux tries to achieve it by collecting environmental noise (e.g. disk seek times, mouse movements, etc.) in a first entropy pool which feeds a Cryptographically Secure Pseudo-Random Number Generator (CSPRNG), which in turn outputs “sanitised” random numbers to different pools, one for each of the kernel output random devices: /dev/random and /dev/urandom.

Raspberry Pi Logo (a Raspberry)

Our goal is to help our Raspberry Pi to have more entropy, so we will provide it with a new entropy collector based on its on-board hardware random number generator (HW RNG).

I have already quickly presented why you need entropy (and good entropy at that), as well as a quick way of providing more sources to the Linux kernel entropy pool on a Raspberry Pi running Raspbian “Wheezy” or on any computer with a TPM chip on board.

This article is an update for all of you who upgraded their Raspbian to Jessie (Debian 8). The new system uses systemd for the init process rather than Upstart as in the previous release.

The Raspberry Pi has an integrated hardware random number generator (HW RNG) which Linux can use to feed its entropy pool. The implications of using such a HW RNG are debatable and I will discuss them in a coming article. But here is how to activate it.

It is still possible to load the kernel module using $ sudo modprobe bcm2708-rng. But I now recommend using the Raspberry Pi boot configuration, as it is more future proof: even if there is a newer module for the BCM2709 in the Raspberry Pi 2 (or any newer model), using Raspberry Pi Device Tree (DT) overlays should still work. DT overlays are a means to set up your Raspberry Pi for certain tasks by automatically selecting the right modules (or drivers) to load, and the HW RNG can be activated this way.
Actually, we do not need to load any DT overlay, only to set the random parameter to ‘on‘. You can achieve this by editing the file /boot/config.txt: find the line starting with ‘dtparam=(...)‘ or add a new one. The value of dtparam is a comma-separated list of parameters and values (e.g. random=on,audio=on), see part 3 of the Raspberry Pi documentation for further info. At a minimum, you should have:

dtparam=random=on

With this method, you have to reboot so that the bootloader can automatically pick up the right module for you.

Now install the rng-tools (the service should be automatically activated and started; the default configuration is fine, but you can tweak/amend it in /etc/default/rng-tools), and enable it for the next boot:

$ sudo apt-get install rng-tools
$ sudo systemctl enable rng-tools

After a while you can check the level of entropy in your pool and some stats on the rng-tools service:

$ echo $(cat /proc/sys/kernel/random/entropy_avail)/$(cat /proc/sys/kernel/random/poolsize)                                 
1925/4096
$ sudo pkill -USR1 rngd; sudo systemctl -n 15 status rng-tools
rngd[7231]: stats: bits received from HRNG source: 100064
rngd[7231]: stats: bits sent to kernel pool: 40512
rngd[7231]: stats: entropy added to kernel pool: 40512
rngd[7231]: stats: FIPS 140-2 successes: 5
rngd[7231]: stats: FIPS 140-2 failures: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Monobit: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Poker: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Runs: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Long run: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Continuous run: 0
rngd[7231]: stats: HRNG source speed: (min=824.382; avg=1022.108; max=1126.435)Kibits/s
rngd[7231]: stats: FIPS tests speed: (min=6.459; avg=8.161; max=9.872)Mibits/s
rngd[7231]: stats: Lowest ready-buffers level: 2
rngd[7231]: stats: Entropy starvations: 0
rngd[7231]: stats: Time spent starving for entropy: (min=0; avg=0.000; max=0)us


Raspberry Pi is a trademark of the Raspberry Pi Foundation.

Using TPM as a source of randomness entropy

A headless server by definition has no input devices such as a keyboard or a mouse, which provide a great deal of external randomness to the system. Thus, on such a server, even one using rotational hard disks, it can be difficult to avoid depleting the Linux kernel’s random entropy pool. A simplified view of this situation: if the entropy pool is deemed too low, one of the “dice” which generate random numbers is getting biased. This can be even more exacerbated on servers hosted in virtual machines, but this article won’t help you in that case.

Update 2015-08-24: the article was updated to provide some more information on TPM and commands adapted to the new systemd-based Ubuntu 15.04 and newer. When marked, use either the classic commands from Ubuntu 14.04 LTS and older or the newer ones.

The level of the kernel entropy pool can be checked with the following command:

$ echo $(cat /proc/sys/kernel/random/entropy_avail)/$(cat /proc/sys/kernel/random/poolsize)
620/4096

(Note: depending on the workload of your server, the above result could be considered adequate or not.)

What does a depleted entropy pool mean? It means that any call to /dev/random will block until enough entropy is available. A blocking call is usually not desired: it means the application could appear frozen until the call returns. On the other hand, calls to /dev/urandom will not block in such a situation, but there is a higher risk that the randomness quality of the output decreases. Such a higher risk could mean giving an attacker a better chance of predicting your next dice roll. Whether this is exploitable or not is hard to tell, at least for me. Therefore, I tend to avoid having a depleted entropy pool, especially for certain workloads.
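
You can observe the difference yourself with a harmless test that only reads a few hundred bytes (on a machine with a depleted pool the first command may stall until the timeout, while the second always returns immediately):

# Reading from the blocking device may stall while the kernel gathers entropy
$ timeout 10 head -c 512 /dev/random > /dev/null && echo "got 512 bytes" || echo "timed out"
# The non-blocking device answers straight away
$ head -c 512 /dev/urandom > /dev/null && echo "got 512 bytes"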

There are several mechanisms to provide randomness sources to the entropy pool. The haveged daemon uses CPU clock timer variations to achieve that, but it is highly dependent on the CPU being used. Other approaches use sound cards, etc. And finally there are hardware random number generators (RNGs). In my previous article, I talked briefly about the hardware RNG of the Raspberry Pi. In this article, I will present another hardware RNG which is available in many computers and servers: the Trusted Platform Module (TPM).


I find the name somewhat of a marketing ploy, and I’m not even sure we should trust it that much. But we will activate it only to provide a new source of entropy and nothing more. I would advise using other sources of entropy as well. Anyway, if you are interested, the following paragraphs describe how to achieve this; if you decide to implement them, be reminded that I give no warranty that it will work for you, nor that it won’t break things. I can only guarantee that it worked on my machine running Ubuntu 14.04.2 LTS.

Let’s install the necessary tools and deactivate the main service (the tools launch a daemon which we don’t want to use, as we will use rngd to get and verify the randomness of the TPM RNG before feeding the entropy pool).

Ubuntu 14.04 LTS

$ sudo apt install tpm-tools
$ sudo update-rc.d trousers disable
$ sudo service trousers stop

Ubuntu 15.04 and newer

$ sudo apt install tpm-tools
$ sudo systemctl stop trousers.service
$ sudo systemctl disable trousers.service

Then, you will need to go into the BIOS/EFI settings of your computer/server and activate TPM, and possibly clear the ownership of the TPM if it happened to be owned by someone else. Of course, don’t do that if it is not your computer.

I found out that my particular BIOS only allows clearing the ownership from the BIOS settings. Trying to do so from the OS results in the following:

$ sudo tpm_clear --force
Tspi_TPM_ClearOwner failed: 0x0000002d - layer=tpm, code=002d (45), Bad physical presence value

I also found out that my particular BIOS, when clearing the ownership, also deletes the Endorsement Key (EK). When I then tried to take ownership of the TPM device (see further), I was getting the following error:

$ sudo tpm_takeownership
Tspi_TPM_TakeOwnership failed: 0x00000023 - layer=tpm, code=0023 (35), No EK

So I had to do an extra step, to generate a new EK:

$ sudo tpm_createek

After having created a new EK, I was able to successfully take ownership of the TPM device.

$ sudo tpm_takeownership
Enter owner password:
Confirm password:
Enter SRK password:
Confirm password:
tpm_takeownership succeeded

Once all this is done, we are ready to load the TPM RNG kernel module and launch the user space tool that will use this source to feed the Linux kernel entropy pool. The user space tools are the rng-tools suite (with the rngd daemon).

Load the kernel module (to load it permanently add tpm_rng as a new line to /etc/modules):

$ sudo modprobe tpm_rng

And then install the user space tool.

$ sudo apt install rng-tools

The default configuration should be good enough, but you can check it by editing the file /etc/default/rng-tools. The default setting for rngd is to not fill more than half of the pool. I don’t advise setting it higher, unless you really blindly trust those TPM chips. If you installed the tool, it should already be running, but if you modified the default settings then a restart is necessary.
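
For reference, the kind of knobs exposed by /etc/default/rng-tools looks roughly like the excerpt below; treat the option names as an illustration only and check rngd(8) and the file shipped with your version before changing anything:

# /etc/default/rng-tools (excerpt, defaults commented out)
#HRNGDEVICE=/dev/hwrng
#RNGDOPTIONS="--fill-watermark=50% --feed-interval=60"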

Ubuntu 14.04 LTS

$ sudo service rng-tools restart

Ubuntu 15.04 and newer

$ sudo systemctl restart rng-tools.service

Now you can check again the available entropy with the command that I gave at the beginning of this post.

Installing Linux on Raspberry Pi – The Easy Way

As I announced, I got a Raspberry Pi 2 Model B and although I did not get much time to play with it yet, it was just an excuse to get back to programming and a little computer science fun.

Anyway, I’ve installed Raspbian on it and did a few configurations which I think are worth sharing with others. I’m not going to describe a step-by-step installation of a Linux distribution on your Pi; I recommend trying NOOBS and checking the good documentation from the Raspberry Pi Foundation. With this setup you can choose during the installation process which Linux distribution to install.

That’s what I did as a warm-up and a quick way to get an up-and-running Pi.

Note that the Raspberry Pi 2 has a new CPU (ARMv7) which is not yet fully exploited by most distributions targeted at the Raspberry Pi ecosystem (with the exception of the Linux kernel). There is one exception: Snappy Ubuntu Core, but it is still in alpha. It should also be possible to install most standard Linux distributions that support the ARMv7 instruction set, but this is an interesting exercise which I did not perform (yet).

Anyway, using the NOOBS installation or any of the images provided for SD cards has several implications in terms of security. Here is what I recommend doing after the first boot.

Changing password and locking root

If you use the Raspbian installation, then you have one user account `pi’. If you haven’t changed the password for this user during installation, then better do it now. Once connected as `pi’ do:

$ passwd

And enter and then confirm the password you want to use.

With Raspbian, the `pi’ user has administrator’s rights. You can call sudo to perform system changes. If you are comfortable with that, then simply lock the root account:

$ sudo passwd -dl root

Or if you simply want to give a password of your own and use root:

$ sudo passwd root

Getting some Entropy

I am preparing a more detailed article on this topic, so for now I am only going to give some background and the commands to increase the sources of entropy. Entropy is important: you need sufficient entropy (which your Linux kernel can gather for you) to be able to generate good pseudo-random numbers. Those numbers are used by many cryptographic services, such as generating SSH keys or SSL/TLS certificates.

The problem with embedded devices such as the Raspberry Pi (especially if you run it headless like I do) is that there aren’t many sources of entropy available to the kernel, especially at boot time.

The Raspberry Pi has an integrated hardware random number generator (HW RNG) which Linux can use to feed its entropy pool. The implications of using such a HW RNG are debatable and I will discuss them in the coming article. But here is how to activate it.

First of all, load the kernel module (aka driver) for the HW RNG (the Raspberry Pi 2 can use the same kernel module as the Raspberry Pi 1 models). The Linux way of doing it is via modprobe:

$ sudo modprobe bcm2708-rng

If it worked, a new device should now be available, /dev/hwrng, and the following should be visible in the system messages:

$ dmesg | tail -1
[ 133.787336] bcm2708_rng_init=bbafe000

To make the change permanent, on any Linux system you can simply load the module via the /etc/modules file by adding a new line with bcm2708_rng (see next command). Or you can use the Raspberry Pi firmware Device Tree (DT) overlays (see below).

$ sudo bash -c 'echo bcm2708_rng >> /etc/modules'

Now install the rng-tools (the service should be automatically activated and started; the default configuration is fine, but you can tweak/amend it in /etc/default/rng-tools):

$ sudo apt-get install rng-tools

After a while you can check the level of entropy in your pool and some stats on the rng-tools service:

$ echo $(cat /proc/sys/kernel/random/entropy_avail)/$(cat /proc/sys/kernel/random/poolsize)                                 
1925/4096
$ sudo pkill -USR1 rngd; tail -15 /var/log/syslog
rngd[7231]: stats: bits received from HRNG source: 100064
rngd[7231]: stats: bits sent to kernel pool: 40512
rngd[7231]: stats: entropy added to kernel pool: 40512
rngd[7231]: stats: FIPS 140-2 successes: 5
rngd[7231]: stats: FIPS 140-2 failures: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Monobit: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Poker: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Runs: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Long run: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Continuous run: 0
rngd[7231]: stats: HRNG source speed: (min=824.382; avg=1022.108; max=1126.435)Kibits/s
rngd[7231]: stats: FIPS tests speed: (min=6.459; avg=8.161; max=9.872)Mibits/s
rngd[7231]: stats: Lowest ready-buffers level: 2
rngd[7231]: stats: Entropy starvations: 0
rngd[7231]: stats: Time spent starving for entropy: (min=0; avg=0.000; max=0)us

Securing SSH – the basic

I run my Pi headless for the moment. So after the first boot, I went into the advanced configuration of raspi-config and activated the SSH daemon. I’m not going to describe here how to properly secure the SSH daemon, which can be a topic for a future article. I’m only going to focus on one topic: SSH host keys.

SSH host keys are the means for a user connecting to a remote server to verify the authenticity of that server. That’s the sort of message you see the first time you connect to an SSH server:

$ ssh pi@raspberrypi
The authenticity of host 'raspberrypi (192.168.1.52)' can't be established.
ECDSA key fingerprint is 42:3d:4f:b7:0b:4b:62:11:47:28:cc:86:76:0c:ac:24.
Are you sure you want to continue connecting (yes/no)?

If an attacker tries to spoof your SSH server, on connection you will get this message:

$ ssh pi@raspberrypi
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
42:3d:4f:b7:0b:4b:62:11:47:28:cc:86:76:0c:ac:24.
Please contact your system administrator.
Add correct host key in /home/pithagore/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/pithagore/.ssh/known_hosts:3
  remove with: ssh-keygen -f "/home/pithagore/.ssh/known_hosts" -R raspberrypi
ECDSA host key for raspberrypi has changed and you have requested strict checking.
Host key verification failed.

So this is all good, but the problem with these host keys is that you cannot really trust the ones installed by default. They could be the same as the ones in the image you downloaded and flashed on your SD card, and thus identical for every other person installing the same image. Another problem is that they could have been generated at a point in the installation where the kernel had not yet gathered enough entropy to provide non-predictable random numbers. Someone could therefore use this to generate host keys that match the ones on your Raspberry Pi.

Now that, in the earlier chapter, we added a good source to feed the entropy pool, we can regenerate better host keys. First, get rid of the previous ones:

$ sudo rm /etc/ssh/ssh_host_*key{,.pub}

Then generate new ones with one of the 3 possibilities:

  • Automatic configuration using dpkg
$ sudo dpkg-reconfigure openssh-server 
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Restarting OpenBSD Secure Shell server: sshd.
  • Automatic configuration using ssh-keygen
$ sudo ssh-keygen -A
$ sudo service ssh restart
  • Manual configuration
$ sudo ssh-keygen -t rsa -b 4096 -N "" -f /etc/ssh/ssh_host_rsa_key
$ sudo ssh-keygen -t ecdsa -b 521 -N "" -f /etc/ssh/ssh_host_ecdsa_key
$ sudo service ssh restart

Now if you want to print the fingerprint of any of your host keys, to make sure of the authenticity of the server you are connecting to (this should only be needed the first time your client connects to your server), pass the public key file to ssh-keygen, for example the ECDSA one generated above:

$ ssh-keygen -l -f /etc/ssh/ssh_host_ecdsa_key.pub


App security is really low

Web browsers have long had a small lock icon to tell their users when they are securely connected to a web server.

Apps, especially mobile apps, do not expose this information. It is very difficult to know whether they use proper encryption, or whether they even check the validity of the encryption certificates (when using methods such as SSL or TLS). Without such encryption, user credentials (login and password) and the personal data exchanged can easily be snooped.

App makers should be required to use encryption for communication. They should use certificates to make sure they connect to the expected server. And they should expose that in the user interface!