Home network improvements – Building a Basic Router

Loop Junction in Chicago

This is the third blog post in my home network improvements series.

Gateway Appliance – License CC BY-SA by Cuda-mwolfe

In the previous post, we discussed which features we should implement in our router.

We will now see how to implement the basic features: routing, firewall and NAT, DHCP and DNS.

  1. Router features list (published)
  2. Creating a basic router, defining the network and routing (this post)
  3. Adding a firewall to our router (to be published)
  4. Providing basic network services, DHCP and DNS (to be published)
  5. Extra services (to be published, could be split into more than one post)

So today’s post will present in order:

  1. OS installation
  2. Network interfaces configuration
  3. Discussion on what is routing, with activation of packet forwarding, Network Address Translation (NAT) and IP Masquerading

For some items we will see today, we will start with basic functionality that we will improve or iterate on in subsequent posts. As I said in a previous article, I want to try out nftables instead of iptables. But many tools I would like to use to quickly create a router still only support iptables as a backend, and you cannot mix iptables and nftables. Such tools include systemd-networkd, Docker, or the version of firewalld which Ubuntu currently supports (note that firewalld 0.6+ does support nftables as a backend). So in this first iteration, and in order to create a basic router relatively quickly, we will mostly use iptables, either through systemd-networkd support or via other tools.
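
To make the packet-forwarding and masquerading part concrete, here is a minimal sketch of what it boils down to with plain sysctl and iptables commands; the interface names enp1s0 (WAN) and enp2s0 (LAN) are assumptions to adapt to your hardware, and the following posts will do this more cleanly through configuration files:

$ # Enable IPv4 packet forwarding (runtime only; persist it via /etc/sysctl.d/ to survive reboots)
$ sudo sysctl -w net.ipv4.ip_forward=1
$ # Masquerade (source NAT) everything leaving through the WAN interface
$ sudo iptables -t nat -A POSTROUTING -o enp1s0 -j MASQUERADE
$ # Let established/related traffic come back in, and forward LAN traffic out
$ sudo iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
$ sudo iptables -A FORWARD -i enp2s0 -o enp1s0 -j ACCEPT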

Continue reading “Home network improvements – Building a Basic Router”

Containers, volumes and file permissions

Permission Denied for Container’s Volume

There is a subject which seems to be completely abstruse to many users of containers on Linux: sharing data between a host and a container, or between containers.

I do think that solving this problem is not much different from solving it without containers on Linux and Unix. From my perspective, there is not much difference between managing file permissions with or without containers; the big change for me is the introduction of namespaces, especially user namespaces.

So what is exactly the problem? And where does it come from?

The problem is that when running a process within a container, that process runs with a certain user and group ID (UID and GID respectively), and those IDs might differ from the ones of the caller (the user creating and running the container), which might not be obvious. This is especially true with container technologies like Docker, which by default runs the process within the container as root (unless overridden in the Dockerfile or on the command line), while any user with write access to the Docker socket can create such a container. So by default you have a discrepancy in UID and GID between the caller – probably a standard user – and a random Docker container.

In traditional Unix/Linux, this is “normal” or “expected” behaviour. You usually cannot run a process as root from your normal user unless you use sudo or a setuid program, so usually you do not have the problem that a program you launch might have a different UID/GID than your own user. And when you use a program with sudo you understand that this might become a problem: if you use sudo to run `tcpdump -w net-trace.pcap`, you know the file net-trace.pcap will be owned by root and that you might not be able to access or delete it. The same reflex needs to apply when running a container.

When you have done Unix/Linux development for most of your career – and have adopted the principle of least privilege … I still know a few people who only use the root account ;-) – you are used to creating applications that run in the background (as a service) under a dedicated user, and for which you need to handle the permissions of the data the application uses. So introducing containers (without user namespaces) should not bring any surprise here; it is part of the expectations. But you will see later that you can still be bitten by some edge cases of the container implementation.

So, let us see how to fix this problem of user/group IDs and file permissions. Note that the solution is similar whether you use containers or not, and applies to all container implementations (e.g. LXC, Docker, etc.). Then we will see how to handle file permissions when using user namespaces (hint: the principles are the same, but it requires a few extra steps to understand what the effective UID/GID will be). Finally, in the case of Docker, we will see a few edge cases where you can still be caught off guard with respect to file permissions and volume declarations inside a Dockerfile.
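
To see the mismatch for yourself, here is a minimal sketch using Docker (the ubuntu image and the data directory are just placeholders), together with the simplest workaround of passing your own UID/GID:

$ mkdir data
$ # By default the process inside the container runs as root, so the created file belongs to root
$ docker run --rm -v "$PWD/data:/data" ubuntu touch /data/owned-by-root
$ ls -ln data    # owned-by-root shows owner 0:0 (root), not your own UID/GID
$ # Running the container with the caller's UID/GID keeps the permissions consistent
$ docker run --rm -u "$(id -u):$(id -g)" -v "$PWD/data:/data" ubuntu touch /data/owned-by-me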

Continue reading “Containers, volumes and file permissions”

Containers for Firefox

Wooden Boxes – CC0 Public Domain

There is a new feature coming to Firefox which was discreetly introduced in Firefox 50 Nightly and is getting improved with follow-up releases. It is called Containers and is part of the Contextual Identity Project.

In short, each container – or context – is a “colour-coded” tab with a dedicated environment to help you separate your online activities. So you can have some tabs in one context and others in another context.

This increases privacy: sites cannot spy on you outside of the context in which you use them. It allows separation of concerns: you can use a website (e.g. GitHub) for work and personal use inside the same browser, each with a different account. It increases security: if you access your bank in a dedicated context, it is harder for some attacks (e.g. cross-site scripting) to reach your bank data.

To activate it you can go to the about:config page and set the entry privacy.userContext.enabled to true; you then get the vanilla experience, still a bit rough in Firefox 60 and 61 and already quite improved in Firefox 62 Developer Edition. A recommended alternative is to use Mozilla’s add-on called Firefox Multi-Account Containers, which provides a nice icon and a walk-through. It works at least on Firefox for Linux, macOS and Windows.

This is how it looks on Ubuntu (I’m using the default Dark theme). You can see that my Gmail is opened in a blue-coloured container, GitHub in a purple one, a shopping site in a pink “Shopping” one, and finally a news site in no specific container. I could open another tab to my Grafana site in the same purple-coloured container as GitHub, and I would then be able to use GitHub OAuth to log in to Grafana. If I opened Grafana in another container (or none), I would not be able to use GitHub OAuth without re-authenticating myself to GitHub in this new context.

Firefox Containers illustration

So I’m really looking forward to improvements to Firefox Containers.

Home network improvements – What can a Router do?

This is the second blog post in my home network improvements series.

Gateway Appliance – License CC BY-SA by Cuda-mwolfe

In the previous post, we evaluated our options for a new router and the conclusion was to build the hardware from PC parts and to install OPNsense. However, given that our selected PC parts are a bit too recent, the embedded NIC (an Intel i219V) inside the Intel B360 chipset is not yet recognised by the underlying FreeBSD core.

Therefore, we will now see how to build a router from scratch based on Ubuntu 18.04 LTS. I will only configure it for IPv4, as my ISP currently provides only IPv4 connectivity. I am planning a series of several posts, including this one, and I will update the list as newer articles are published:

  1. Router features list (this post)
  2. Creating a basic router (to be published, could be split into more than one post)
  3. Extra services (to be published, could also be split into more than one post)

Disclaimer: I am not a security engineer, although I am very familiar with many aspects of security and security analysis. I am also not a network engineer, although I am very knowledgeable in network protocols, network programming and network security. This article is an exercise for me to see how far I can go building a router for SOHO purposes. I make no warranty that it works as intended, nor that I will keep this article up to date with respect to network technology and threats. Use at your own risk.

Note: I am mostly going to avoid using Ubuntu-specific tools, but of course some will be unavoidable (e.g. network IP address configuration). So this guide should apply to other Linux distributions, although there will be some adaptations to make, especially with respect to configuring the network interfaces, as there are so many different tools to do that.

Continue reading “Home network improvements – What can a Router do?”

Home network improvements

Currently my home network is pretty simple … at least for a computer scientist! ;-)

Gateway Appliance – License CC BY-SA by Cuda-mwolfe

My ISP provided an all-in-one box with TV, landline and network router, the latter being very limited and with a crap WiFi access point (AP). So I’ve been using my old Asus RT-AC68U router as a gateway, a 24-port switch and a Ubiquiti UniFi AP to provide WiFi in the whole house (and garden). The router and switch went into the basement, whereas I placed the AP roughly in the centre of the house. The ISP box could not be configured as a bridge but supported setting a DMZ host, so I’ve configured the Asus router to be the DMZ host.

Here is the basic setup:

+--------+             +--------+
|        |    DMZ      |        |          +------------------------+
|ISP Box +-------------+ Router +----------+ Switch                 |
|        |             |        |          +--+------+---+---+---+--+
+--------+             +--------+             |      |   |   |   |
                                              |      |   |   |   |
                                           +--+--+   +   +   +   +
                                           | AP  | Home Network / Lab
                                           +-----+

So I’m using only 2 ports on my router (or more exactly, network gateway): the WAN port and one LAN port. This router is the piece of my current network that I want to change, and I will explain why and how.

Post updated on 2018-06-13.

Continue reading “Home network improvements”

Ubuntu 18.04 (Bionic Beaver) – Some notable changes for sysadmin and Java dev

If you administer an Ubuntu server or if you are a power user, you should have a look at these particular changes before and after upgrading. They can impact your installation and the way you use it.

  • NTP is no longer supported (it moved to Universe); you should now use Chrony. In my opinion Chrony is not a bad choice either: it is perhaps smoother in handling leap seconds (via smearing), though obviously less accurate than NTP in that case.
  • The local DNS resolver is no longer dnsmasq but systemd-resolved. For most users this should be transparent. Note that if systemd-resolved does not receive a DNS configuration, it will fall back to using Google Public DNS.
  • The network is now managed by systemd-networkd (or still by NetworkManager on the desktop) for new installations. If you upgrade, you will still have the old `/etc/network/interfaces` file (and related files) and the ifup and ifdown scripts, but these are no longer installed on new installations. Instead you have systemd-networkd and netplan (see the minimal netplan sketch after this list). For people upgrading, there is not yet a clear path to switch to the new tools if wished.
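
As announced in the list above, here is a minimal netplan sketch for a single DHCP-configured interface; the interface name enp3s0 and the file name are assumptions to adapt to your machine:

$ cat /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: true
$ sudo netplan apply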

Ubuntu 18.04 offers many more changes and I’m looking forward to upgrading my desktop and server. There are other changes not mentioned above which should be evaluated before upgrading, but I consider the ones above to be core elements which everybody needs, whatever the purpose of the server.

For developers, I would take care with Java and the OpenJDK. Ubuntu 16.04 LTS came with OpenJDK 8, which is the current LTS version of Java. The next LTS version of Java is 11, which is not yet published. Ubuntu 18.04 comes with OpenJDK 10 (a short-term support release) by default and will switch the default to OpenJDK 11 when it is released (hopefully only for new installations). Ubuntu will still provide OpenJDK 8 in universe for 18.04, with security support until the EOL of Ubuntu 16.04 LTS (so until April 2021), to offer developers a transition period (while waiting for Java 11 to be published and to mature, and for applications to be migrated/validated on this new platform).

Converting RAID1 to RAID10 online

Schema of a RAID10 array (CC BY JaviMZN)

I have a (now old) HP MicroServer with 4 HDDs. I installed Ubuntu 14.04 (then in beta) on it on a quiet Sunday in February 2014. It is now running Ubuntu 16.04 and still working perfectly. However, I’m not sure what I was thinking on that Sunday more than 3 years ago. I had partitioned the 4 HDDs in a similar fashion, each with a partition for /boot, one for swap and the last one for a BTRFS volume (with subvolumes to separate / from other spaces like /var or /home). My idea was to have the 4 partitions for /boot in RAID10 and the 4 for swap in RAID0. I realised today that I had only used 2 partitions for /boot, configured in RAID1, and only 3 partitions for swap in RAID0.

This caused a recurrent problem: because each partition for /boot was 256MB, instead of having 512MB (RAID10 with 4 devices) I ended up with only 256MB (RAID1), and that’s not much, especially if you install the Ubuntu HWE (Hardware Enablement) kernels; then you quickly have problems with unattended-upgrade failing to install security updates because there is no space left on /boot, and so on. It was becoming high maintenance, and with 4 kids to attend to I had to remediate that quickly.

But here is the magic with Linux: I did an online reshape from RAID1 to RAID10 (via RAID0) and an online resize of /boot (ext4). In 15 minutes I went from a 256MB problematic /boot to a 512MB low-maintenance one, without rebooting!

Here is how I did it. It will only work if you have mdadm 3.3+ (it could work with 3.2.1+ but this is not tested) and a recent kernel (I had 4.10, but it should have worked with the 4.4 shipped with Ubuntu 16.04 and probably older kernels). Note that you should back up, test your backup and know how to recover your /boot (or whatever partition you are trying to change).

Increasing the size of a RAID0 array (for swap)

First, this is how I fixed the RAID0 for the swap (no backup necessary, but you should make sure that you have enough free memory to turn off the swap). The current RAID0 is called md0 and is composed of sda3, sdb3 and sdc3. The partition sdd3 is missing.

$ sudo mdadm --grow /dev/md0 --raid-devices=4 --add /dev/sdd3
mdadm: level of /dev/md0 changed to raid4
mdadm: added /dev/sdd3
mdadm: Need to backup 6144K of critical section..
$ cat /proc/mdstat
md0 : active raid4 sdd3[4] sdc3[2] sda3[0] sdb3[1]
      17576448 blocks super 1.2 level 4, 512k chunk, algorithm 5 [5/4] [UUU__]
      [>....................]  reshape =  1.8% (105660/5858816) finish=4.6min speed=20722K/sec
$ sudo swapoff /dev/md0
$ grep swap /etc/fstab
UUID=2863a135-946b-4876-8458-454cec3f620e none            swap    sw              0       0
$ sudo mkswap -L swap -U 2863a135-946b-4876-8458-454cec3f620e /dev/md0
$ sudo swapon -a

What I just did is tell MD that I need to grow the array from 3 to 4 devices and add the new device. After that, one can see that the reshape is taking place (it was rather fast because the partitions are small). After this first operation, the array is bigger but the swap size is still the same. So I “unmounted” (turned off) the swap, recreated it using the full device and “remounted” it. I grepped for the swap in my `/etc/fstab` file to see how it was mounted; here it is mounted via its UUID. So when formatting I reused the same UUID and did not need to change my `/etc/fstab`.

Converting a RAID1 to RAID10 array online (without copying the data)

Now a bit more complex: I want to migrate the array from RAID1 to RAID10 online. There is no direct path for that, so we need to go via RAID0. Note that RAID0 is very dangerous (no redundancy at all), so you should really back up as advised earlier.

Converting from RAID1 to RAID0 online

The current RAID1 array is called md1 and is composed of sdb2 and sdc2. I’m going to convert it to RAID0. After the conversion, only one disk will belong to the array.

$ sudo mdadm --grow /dev/md1 --level=0 --backup-file=/home/backup-md0
$ cat /proc/mdstat
md1 : active raid0 sdc2[1]
      249728 blocks super 1.2 64k chunks
$ sudo mdadm --misc --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sun Feb  9 15:13:33 2014
     Raid Level : raid0
     Array Size : 249664 (243.85 MiB 255.66 MB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jul 25 19:27:56 2017
          State : clean 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           Name : jupiter:1  (local to host jupiter)
           UUID : b95b33c4:26ad8f39:950e870c:03a3e87c
         Events : 68

    Number   Major   Minor   RaidDevice State
       1       8       34        0      active sync   /dev/sdc2

I printed some extra information on the array to illustrate that it is still the same array but in RAID0 and with only 1 disk.

Converting from RAID0 to RAID10 online

$ sudo mdadm --grow /dev/md1 --level=10 --backup-file=/home/backup-md0 --raid-devices=4 --add /dev/sda2 /dev/sdb2 /dev/sdd2
mdadm: level of /dev/md1 changed to raid10
mdadm: added /dev/sda2
mdadm: added /dev/sdb2
mdadm: added /dev/sdd2
raid_disks for /dev/md1 set to 5
$ cat /proc/mdstat
md1 : active raid10 sdd2[4] sdb2[3](S) sda2[2](S) sdc2[1]
      249728 blocks super 1.2 2 near-copies [2/2] [UU]
$ sudo mdadm --misc --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sun Feb  9 15:13:33 2014
     Raid Level : raid10
     Array Size : 249664 (243.85 MiB 255.66 MB)
  Used Dev Size : 249728 (243.92 MiB 255.72 MB)
   Raid Devices : 2
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jul 25 19:29:10 2017
          State : clean 
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2

         Layout : near=2
     Chunk Size : 64K

           Name : jupiter:1  (local to host jupiter)
           UUID : b95b33c4:26ad8f39:950e870c:03a3e87c
         Events : 91

    Number   Major   Minor   RaidDevice State
       1       8       34        0      active sync set-A   /dev/sdc2
       4       8       50        1      active sync set-B   /dev/sdd2

       2       8        2        -      spare   /dev/sda2
       3       8       18        -      spare   /dev/sdb2

As a result of the conversion, we are in RAID10 but with only 2 active devices and 2 spares. We need to tell MD to use the 2 spares as well; otherwise we just have a RAID1 named differently.

$ sudo mdadm --grow /dev/md1 --raid-devices=4
$ cat /proc/mdstat
md1 : active raid10 sdd2[4] sdb2[3] sda2[2] sdc2[1]
      249728 blocks super 1.2 64K chunks 2 near-copies [4/4] [UUUU]
      [=============>.......]  reshape = 68.0% (170048/249728) finish=0.0min speed=28341K/sec
$ sudo mdadm --misc --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sun Feb  9 15:13:33 2014
     Raid Level : raid10
     Array Size : 499456 (487.83 MiB 511.44 MB)
  Used Dev Size : 249728 (243.92 MiB 255.72 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jul 25 19:29:59 2017
          State : clean, resyncing 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 64K

  Resync Status : 99% complete

           Name : jupiter:1  (local to host jupiter)
           UUID : b95b33c4:26ad8f39:950e870c:03a3e87c
         Events : 111

    Number   Major   Minor   RaidDevice State
       1       8       34        0      active sync set-A   /dev/sdc2
       4       8       50        1      active sync set-B   /dev/sdd2
       3       8       18        2      active sync set-A   /dev/sdb2
       2       8        2        3      active sync set-B   /dev/sda2

Once again, the reshape is very fast, but this is due to the small size of the array. What we can see here is that the array is now 512MB but only 256MB are used. The next step is to increase the file system size.

Increasing file system to use full RAID10 array size online

This cannot be done online with all file systems, but I’ve tested it with XFS and ext4 and it works perfectly. I suspect other file systems support it too, but I have never tried them online. In any case, as already advised, make a backup before continuing.

$ sudo resize2fs /dev/md1
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/md1 is mounted on /boot; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/md1 is now 499456 (1k) blocks long.

$ df -Th /boot/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md1       ext4  469M  155M  303M  34% /boot

When changing the /boot array, do not forget GRUB

I already had a RAID array before, so the GRUB configuration is correct and does not need to be changed. But if you reshaped your array from something other than RAID1 (e.g. RAID5), then you should update GRUB, because you might need different modules for the initial boot steps. On Ubuntu run `sudo update-grub`; on other platforms see `man grub-mkconfig` for how to do it (e.g. `sudo grub-mkconfig -o /boot/grub/grub.cfg`).

It is not enough to have the right GRUB configuration; you also need to make sure that the GRUB bootloader is installed on all HDDs.

$ sudo grub-install /dev/sdX  # Example: sudo grub-install /dev/sda

GitLab – Disable Container Registry feature for all existing Projects

Today I activated a new feature at work which provides the Container Registry on our GitLab instance.

However, not all projects require this feature; at the moment only a few do. Therefore we wanted this option to be disabled by default, leaving it to the project leaders to activate it when needed.

GitLab only offers to disable the Container Registry feature for new projects. What I wanted was to do it for existing projects as well. This needs to be done directly on the database, so back it up before doing this, and try it on a non-production environment first. Note that this was tested on GitLab 8.17.3 using PostgreSQL 9.6; with other releases this could be different. In addition, the following is provided as is, without any warranty: it worked for me, it might not for you, and I’m in no way responsible if you mess up your database.

Now that you have done your backup, to perform the changes on the database you need to log in as the `gitlab-psql` user:

When using Docker:

$ docker exec -it --user=gitlab-psql gitlab bash

When using omnibus package:

$ su - gitlab-psql

The following applies for both Docker and Omnibus installation once logged in:

$ export PGHOST="/var/opt/gitlab/postgresql"
$ psql gitlabhq_production
gitlabhq_production=# UPDATE projects SET container_registry_enabled = FALSE;
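
If you want to double-check the result, you can count how many projects still have the flag enabled on the same column used above; it should return zero:

gitlabhq_production=# SELECT count(*) FROM projects WHERE container_registry_enabled = TRUE;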

That’s it!

Installing Ubuntu Server on Raspberry Pi – Headless

Raspberry Pi 2 Model B+ v1.1

This article describes the steps to install Ubuntu Server 16.04 on a Raspberry Pi 2. It provides extra steps so that no screen or keyboard is required on the Raspberry Pi: it will be headless. Of course, you still need a screen and keyboard on the computer on which you download the image and write it to the MicroSD card. It is similar to a previous article about installing Debian on a Raspberry Pi 2, also in headless mode.

Disclaimer: you need to know a minimum about computers, operating systems, Linux and the Raspberry Pi. If you just want to install an operating system on your Raspberry Pi, get NOOBS, the Raspberry Pi Foundation installer. This guide is for more advanced users. If you follow this guide but make mistakes, you might wipe out disk content or even brick your MicroSD card.

Known limitations: This guide will not work for the Raspberry Pi 3 (unless you follow these extra steps to boot the Ubuntu Server raspi2 image on a Raspberry Pi 3) and currently cannot easily work on the Raspberry Pi 3 B+: there is no specific DTB (Linux kernel device tree blob) for it on Ubuntu (although some people on Fedora 28 beta were successful by simply renaming the DTB from the Raspberry Pi 3 model), and one needs a newer u-boot for this model (the one in the Ubuntu Server images is an “older” version not currently supporting the new 3 B+ model; even the Raspberry Pi 2 image for Bionic Beaver, the current development version which will become Ubuntu 18.04, does not support it yet).

Install the Ubuntu Server image

Ubuntu Circle of Friends Logo

Grab the official Ubuntu Server for Raspberry Pi 2 image (the latest version at the time of writing is ubuntu-16.04.4-preinstalled-server-armhf+raspi2.img.xz; using the latest point release will save you some time when upgrading, and save some write cycles on your MicroSD card). Once downloaded, insert the MicroSD card into your computer (you probably need a USB card reader for that) and figure out which device it corresponds to; see the Ubuntu documentation for further guidance. I assume you know what you are doing, but be wary that the next command, if run on the wrong device, could wipe out the data on that device. I do not take any responsibility if things go wrong.

$ xzcat ubuntu-16.04.4-preinstalled-server-armhf+raspi2.img.xz | dd of=<device> bs=4M oflag=dsync status=progress

Create a user account and allow SSH access

Then make sure to sync your media and mount the newly created partition (normally 2 partitions are created; we are interested in the second one, which should be named <device>p2 or <device>2):

$ sync
$ sudo mkdir -p /mnt/rpi
$ sudo mount <device>2 /mnt/rpi

User account creation

As the Raspberry Pi uses an ARM processor and the computer on which I created the MicroSD card has an x86_64 processor, I cannot simply chroot and execute adduser in the newly mounted partition: the programs are compiled for a different architecture. So to add a new user we will need to do it manually by editing system files. We will create a new user and group, then add the corresponding entries in the files where the passwords are kept.

Add a new user (replace $(whoami) with your username if you want a different username than your current one).

$ echo "$(whoami):x:1000:1000:<Full Name>:/home/$(whoami):/bin/bash" | sudo tee -a /mnt/rpi/etc/passwd

Now create your group by editing /mnt/rpi/etc/group:

$ echo "$(whoami):x:1000:"" | sudo tee -a /mnt/rpi/etc/group

Now edit the group password database:

$ echo "$(whoami):*::$(whoami)" | sudo tee -a /mnt/rpi/etc/gshadow

And the user password database (it will have no default password but will allow SSH key-based authentication over the network, and it will request to set a password upon first login. Note that with this configuration remote SSH login cannot happen without the SSH key, so it is a secure configuration):

$ echo "$(whoami)::0:0:99999:7:::" | sudo tee -a /mnt/rpi/etc/shadow

Grant your user access to administrative tasks (via sudo), while still requiring the user to enter their own password:

$ echo "$(whoami) ALL=(ALL) ALL" | sudo tee /mnt/rpi/etc/sudoers.d/20_$(whoami)_superuser

User home folder and SSH access

Now we shall create the user’s home directory and add the SSH public key so we can log in (it is assumed that you have a public RSA key under your home directory named ~/.ssh/id_rsa.pub; change the name if it is different):

$ sudo cp -R /mnt/rpi/etc/skel /mnt/rpi/home/$(whoami)
$ sudo chmod 0750 /mnt/rpi/home/$(whoami)
$ sudo mkdir -m 0700 /mnt/rpi/home/$(whoami)/.ssh
$ cat ~/.ssh/id_rsa.pub | sudo tee -a /mnt/rpi/home/$(whoami)/.ssh/authorized_keys
$ sudo chmod 0600 /mnt/rpi/home/$(whoami)/.ssh/authorized_keys
$ sudo chown -R 1000:1000 /mnt/rpi/home/$(whoami)

Setup Systemd for enabling SSH access and headless mode

Normally everything else should be correctly set up. However, you might want to have a look at the systemd configuration; of most interest are which default target is in use (for headless you want multi-user.target) and whether the SSH service is part of the default target. What I did was the following (it also avoids creating the ubuntu user):

$ cd /mnt/rpi/lib/systemd/system
$ sudo rm -f default.target
$ sudo ln -s multi-user.target default.target
$ cd /mnt/rpi/etc/systemd/system/multi-user.target.wants
$ sudo ln -s /lib/systemd/system/ssh.service ssh.service

(if the last command fails because the file already exists, then everything is OK)

Start Ubuntu Server on Raspberry Pi 2

Now unmount the card and eject it: sudo umount /mnt/rpi. You can now safely insert the card into your Raspberry Pi 2 and boot it. It boots more slowly than Raspbian, so be patient. Note that with all the above configuration, you do not need to boot with a keyboard or screen attached to your Raspberry Pi; only an Ethernet cable and the power plug are necessary.

Now you need to find your newly installed Ubuntu Server on your network. The default hostname is ubuntu, so you can always start with that (ssh $(whoami)@ubuntu) if it is not in conflict with another device of yours and if your router is clever enough to have updated its DNS resolver. Otherwise you need to scan your network for it. To scan your network you need to know your subnet (e.g. 192.168.1.0 with a netmask of 255.255.255.0) and have nmap installed on your computer (sudo dnf install nmap will work for Fedora; it is just as easy for Debian/Ubuntu-based distros, use sudo apt-get install nmap instead).

$ sudo nmap -sP 192.168.1.0/24

Of course you need to adapt the above command to your subnet. The “/24” part is the CIDR equivalent of the 255.255.255.0 netmask. I recommend running the above command with sudo because it will display the MAC address of all the discovered devices, which will help you spot your Raspberry Pi as nmap displays the vendor attached to each MAC address. See for yourself in the example output:

Starting Nmap 6.47 ( http://nmap.org ) at 2015-07-19 20:12 CEST
(...)
Nmap scan report for ubuntu.lan (192.168.1.9)
Host is up (0.0060s latency).
MAC Address: B8:27:EB:1E:42:18 (Raspberry Pi Foundation)
(...)
Nmap done: 256 IP addresses (8 hosts up) scanned in 2.05 seconds

Now you can simply connect to your RPi using SSH:

ssh $(whoami)@192.168.1.9
Enter passphrase for key '~/.ssh/id_rsa':
You are required to change your password immediately (root enforced)
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-1017-raspi2 armv7l)

(...)

142 packages can be updated.
69 updates are security updates.

(...)

WARNING: Your password has expired.
You must change your password now and login again!
(current) UNIX password:

Now that you are authenticated and have access to your newly installed Ubuntu Server, it is time to upgrade it.

Upgrade Ubuntu Server to latest packages

The tool tmux should already be installed on your system (if not, do sudo apt install tmux). Use it to create a new session, so that even if you get a network problem your session is not killed (simply run tmux attach to get back to it).

$ tmux
$ sudo apt update
$ sudo apt dist-upgrade
$ sudo systemctl reboot

Note: it is possible that unattended-upgrade kicks in before you can do the upgrade manually. In that case wait an hour or more (depending mainly on the speed of your internet connection and MicroSD card) before doing the above steps. It is still worthwhile, as the dist-upgrade command performs a more thorough upgrade (potentially removing deprecated packages or even downgrading some if necessary) and you will be in sync with the latest and greatest Ubuntu Server.

Picture credits: Photo of a Raspberry Pi board by me, see the website licensing policy. Ubuntu Circle of Friends logo is copyright by Canonical.

Ubuntu Core – Atomicity rough on the edges

View of a Harbour Terminal before Containers existed
Before the invention of containers, being a docker was a much more manual job. And that’s what I’m looking for with my Raspberry Pi.

I’ve recently been trying to play with Ubuntu Core on my 2nd Raspberry Pi 2. I like the concept of a minimalist host with atomic updates and the possibility to run my services inside containers. In addition, snap looks like an interesting package system (and more than that). But this setup does not fit my use cases: it is perfect for repeatable testing and a safe environment for deployment, whereas I need a system that is stable and safe, yet one I can tinker with (modify a specific configuration or kernel in order to optimise its use or detect new devices) and which can grow organically (depending on my free time).

Introduction to Ubuntu Core

So Ubuntu Core (and Project Atomic) do appeal to me, but they are too restrictive for my use cases; I need more freedom. Anyway, for those of you who could be interested in these projects, here is a quick review of these systems with respect to day-to-day use, as I’m not going to explain the philosophy of Core/Atomic nor of the snap/atomic technologies.

Both Ubuntu Core (armhf variant) and CentOS Atomic Host (x86_64 variant) felt a bit rough, and despite carrying the respective names Ubuntu and CentOS, I had to reconsider how I am used to administering such boxes. A basic concept is that you install a core (or minimalist) OS and you pretty much cannot change it (most parts are read-only), but it should have everything needed to run containers. For Atomic Host, there is no way to install additional packages; you need to add every other bit of software as a container. The big difference with Ubuntu Core is that you have snaps, which allow extending the core OS without having to install and configure containers manually. A snap package – once installed – feels more or less as if you had just installed a deb package. But there is a big difference: snaps are like little containers or sandboxed process(es), already neatly packaged so they feel like a normal command, but a lot is going on behind the scenes. Here is an example: I’ve installed `htop` on my Raspberry Pi 2, and I now get two distinct results depending on whether I run it as a standard user or with super user rights, a behaviour uncommon on a standard Linux installation:

Snap htop – Standard User

Snap htop – Super User (via sudo) – showing the complete list of processes

As is visible in the case of htop, by default when I run it, htop shows only the processes I own. This is not bad, but it is different from a traditional Linux distribution, so you need to get into the habit of running `sudo htop` to see all processes. Take care though: you are then running htop as the super user, and htop allows you to kill processes, so be careful.

However, on the Raspberry Pi (and probably other ARMv7, aka armhf, platforms) there is a very limited number of snap packages available yet. That is probably changing fast, but if you run Ubuntu Core 16 today you can’t install many snaps and need to rely on Docker containers for adding extra applications to the base system (more on that later, including its current limitations).

Basic system configuration absent

Let’s get back to the beginning, and by that I mean right after the installation of Ubuntu Core, upon first boot. I still had my keyboard and screen attached to it (and you need them in order to log in with your Launchpad SSO account to create the first user). After the first boot, I was not offered the possibility to change the keyboard layout (I do not have a US layout), but at least on Ubuntu Core one can change it afterwards by editing the file `/etc/default/keyboard`; this is not possible on Atomic Host. Anyway, not such a big issue, as I’m using my Raspberry Pi mostly (if not always) via SSH, so it does not matter any more. Staying on the localisation topic, both systems do not allow changing the locale (language, regional preferences, etc.). On Atomic Host it is set to US with the “peculiar” time and date format ;-) no offense! Similar fate on Ubuntu Core, but there the C locale is used (which sadly for me also uses the US date/time format). Although I would stick to the English language anyway, I don’t like the regional choice and I don’t like the lack of choice here.
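
For reference, here is a minimal sketch of what that keyboard change looks like; the Swiss layout "ch" is just an example value, and a reboot or a new console login may be needed for it to take effect:

$ sudo vi /etc/default/keyboard    # or whatever editor is present on the system
$ cat /etc/default/keyboard
XKBMODEL="pc105"
XKBLAYOUT="ch"
XKBVARIANT=""
XKBOPTIONS=""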

So let’s move on to an SSH connection; at least the keyboard layout is no longer an issue there. But here comes the next one ;-) whenever I use SSH, the first thing I do is launch a new tmux session or attach to an existing one. I’m open to other solutions, so if the host only has screen or byobu, I’m also fine. However, Ubuntu Core for ARMv7 does not ship by default with any of them, nor is there any existing snap package for them. So there is no simple way (short of compiling from source and creating a snap package) to have a safe SSH session.

Docker

Normally, thanks to container technology, it is possible to install more or less everything I would need, and with some clever aliases it would feel much like a snap installation. So I just wanted to create a Dockerfile which references Alpine Linux and installs tmux, and build the container using Docker. Not possible with Ubuntu Core and the Docker snap. According to the Docker snap information, I should be creating a folder under `$HOME/apps/docker/1.6.xxx` (even though Docker 1.11.2 was installed, weird), but it does NOT work: the AppArmor profile installed with the Docker snap denies it. I’ve tried many different folders to no avail. This is not a misconfiguration of Ubuntu Core but rather one of the Docker snap packager, but the end result is that, as a user, Ubuntu Core on armhf is barely usable (too many essential tools are missing, there are few snaps, and Docker cannot easily be used).
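
For reference, the attempt was nothing exotic; on a regular Docker installation something along these lines builds and runs without trouble (a sketch with placeholder names, not the exact files I used):

$ cat Dockerfile
FROM alpine:3.6
RUN apk add --no-cache tmux
ENTRYPOINT ["tmux"]
$ docker build -t my-tmux .
$ docker run --rm -it my-tmux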

Other rough edges with Docker are that your default user does not belong to the docker group, so you need to sudo every single Docker command. There is also no way to add yourself to the docker group (as the group is defined in the standard file `/etc/group`, which is on a read-only filesystem). Your default user is also not a real traditional user, it is an “extra user” (I did not know this subtlety before), so there are a bunch of things that are not compatible with it: for instance, you cannot launch a container to run as yourself (docker run -u $(whoami) ...) because you are not a standard user (no entry in `/etc/passwd`) but an extra user (see `/var/lib/extrausers/passwd`); at least it would be possible to do it by specifying your user ID (e.g. 1000)! This is all logical but “rough”. A corollary to the previous statements is that you cannot run Docker in user namespaces mode (unprivileged Docker containers), as you cannot add yourself to the docker group.

At least the Docker snap includes Docker Compose, which is cool. On Atomic Host, Docker is installed but not Docker Compose. As the file system is also read-only, there is no good way to install Docker Compose in a central place; it should be run within a container. At least on Atomic Host it is very easy (as on a standard Linux OS) to create a Dockerfile and build it, so these limitations (not having Docker Compose and some other tools) can be overcome with some effort.

Final Try

My next step was to download some scripts and tools. However, both curl and wget are absent from the base installation. Even git is not available as a snap or in the core installation. There is no way to build a Dockerfile, so in the end, to have tmux, curl, git, etc., I had to create an lxd container running Ubuntu just to get a normal OS (or, from my laptop, I could download the scripts and push them via scp to Ubuntu Core). But then I simply prefer to run Ubuntu Server or Raspbian on my Raspberry Pi.
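
For reference, the lxd workaround is roughly the following; the container name toolbox is arbitrary, and with the lxd snap the client command may be exposed as lxd.lxc instead of lxc:

$ sudo lxd init                         # first-time setup, the defaults are fine here
$ sudo lxc launch ubuntu:16.04 toolbox
$ sudo lxc exec toolbox -- bash
root@toolbox:~# apt update && apt install -y tmux curl git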

For me, I’ve lost too much time on this already. Ubuntu Core seems nice and I really like the principles and approach, but it is definitely missing too many basic tools; it is too rough for my taste and my available free time.

Conclusion

I will install Ubuntu Server or CentOS 7 for armhf on my second Raspberry Pi 2. Ubuntu Core is perhaps still a bit too young (maybe it is specific to the armhf platform) and requires too much work for my needs. At least this experience made me understand that I don’t want a locked-down, safe box; I need flexibility.

Picture credits: Public Domain. The picture is part of the State Library of South Australia collection (see original B 4433 photo).