Containers, volumes and file permissions

Permission Denied for Container’s Volume

There is a subject that seems completely abstruse to many users of containers on Linux: sharing data between a host and a container, or between containers.

I do think that solving this problem is not much different with containers than without them on Linux and Unix. From my perspective, there is not much difference between managing file permissions with or without containers; the big change for me is the introduction of namespaces, especially user namespaces.

So what exactly is the problem? And where does it come from?

The problem is that a process running inside a container runs with a certain user and group ID (UID and GID respectively), and those IDs might differ from those of the caller (the user creating and running the container). This might not be obvious. It is especially true with container technologies like Docker, which by default runs the process inside the container as root (unless overridden in the Dockerfile or on the command line), while any user with write access to the Docker socket can create such a container. So by default you have a discrepancy in UID and GID between the caller – probably a standard user – and a random Docker container.
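
To make this concrete, here is a minimal sketch – assuming a default Docker setup without user namespace remapping, and a hypothetical ./data directory created just for the demo – showing how a file created from inside a container ends up owned by root on the host:

$ mkdir data                      # hypothetical demo directory
$ docker run --rm -v "$PWD/data:/data" alpine touch /data/test.txt
$ ls -l data/test.txt             # owned by root:root on the host, not by your user

From here on, your unprivileged user may no longer be able to modify or delete that file.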

In traditional Unix/Linux, this is “normal” or “expected” behaviour. You usually cannot run a process as root from your normal user unless you use sudo or a setuid program, so a program you launch does not normally run with a different UID/GID from your own user. And when you do use a program with sudo, you understand that this might become a problem: if you use sudo to run `tcpdump -w net-trace.pcap`, you know the file net-trace.pcap will be owned by root and that you might not be able to access or delete it. The same reflex needs to apply to running a container.

When you have done Unix/Linux development for most of your career – and have adopted the principle of least privilege … I still know a few people who only use the root account ;-) – you are used to creating applications that run in the background (as a service) under a dedicated user, and for which you need to handle the permissions of the data the application uses. So introducing containers (without user namespaces) should not bring any surprises here; it is part of the expectations. But as you will see later, you can still be bitten by some edge cases in the container implementation.

So, let us see how to fix this problem of user/group IDs and file permissions. Note that the solution is similar whether or not you use containers, and it applies to all container implementations (e.g. LXC, Docker, etc.). Then, for everyone, we will see how to handle file permissions when using user namespaces (hint: the principles are the same, but it takes a few extra steps to understand what the effective UID/GID will be). Finally, in the case of Docker, we will see a few edge cases where you can still be caught off guard with respect to file permissions and volume declarations inside a Dockerfile.
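
As a teaser, one common fix, sketched below under the same assumptions as before (default Docker, the hypothetical ./data directory from above), is to run the container process with your own UID/GID so that the files it creates match your user:

$ docker run --rm -v "$PWD/data:/data" --user "$(id -u):$(id -g)" alpine touch /data/test2.txt
$ ls -l data/test2.txt            # now owned by your own UID/GID

Other approaches – pre-creating the directory with the right ownership, or using a dedicated user with a known UID in the Dockerfile – follow the same principle of matching IDs on both sides.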

Continue reading “Containers, volumes and file permissions”

Containers for Firefox


There is a new feature coming to Firefox which was discreetly introduced in Firefox 50 Nightly and is being improved in follow-up releases. It is called Containers and is part of the Contextual Identity Project.

In short, each container – or context – is a “colour-coded” tab with a dedicated environment that helps you separate your online activities. So you can have some tabs in one context and others in another.

This increases privacy: sites cannot spy on you outside of the context in which you use them. It allows separation of concerns: you can use a website (e.g. GitHub) for work and for personal use inside the same browser, each with a different account. And it increases security: if you access your bank in a dedicated context, it is harder for some attacks (e.g. cross-site scripting) to reach your bank data.

To activate it, go to the about:config page and set the entry privacy.userContext.enabled to true. This gives you the vanilla experience, still a bit rough in Firefox 60 and 61 and already quite improved in Firefox 62 Developer Edition. A recommended alternative is Mozilla’s add-on Firefox Multi-Account Containers, which provides a nice icon and a walk-through. It works at least on Firefox for Linux, macOS and Windows.

This is how it looks on Ubuntu (I’m using the default Dark theme). You can see that my Gmail is opened in a blue-coloured container, GitHub in a purple one, a shopping site in a pink “Shopping” container, and finally a news site in no specific container. I could open another tab to my Grafana site in the same purple-coloured container as GitHub, and I would then be able to use GitHub OAuth to log in to Grafana. If I opened Grafana in another container, or in none, I would not be able to use GitHub OAuth without re-authenticating myself to GitHub in that new context.

Firefox Containers illustration

So I’m really looking forward to further improvements to Firefox Containers.

GitLab – Disable Container Registry feature for all existing Projects

Today I activated a new feature at work which provides the Container Registry on our GitLab instance.

However, not all projects require this feature – at the moment just a few do. Therefore, we wanted this option to be disabled by default, leaving it to the project leaders to activate it when needed.

GitLab only offers to disable the Container Registry feature for new projects. What I wanted was to do that for existing projects too. This needs to be done directly on the database, so back it up before doing this, and try it on a non-production environment first. Note that this was tested on GitLab 8.17.3 using PostgreSQL 9.6; with other releases this could be different. In addition, the following is provided as-is without any warranty: it worked for me, it might not work for you, and I’m in no way responsible if you mess up your database.
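
For reference, the “new projects only” default is controlled from the omnibus configuration. A sketch of the relevant gitlab.rb entry follows – the setting name is from memory for GitLab 8.x, so verify it against the gitlab.rb template of your release:

# /etc/gitlab/gitlab.rb – disable the Container Registry for newly created projects
# NOTE: setting name from memory for GitLab 8.x – verify in your gitlab.rb template
gitlab_rails['gitlab_default_projects_features_container_registry'] = false

After editing the file, apply the change with gitlab-ctl reconfigure.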

Now that you have made your backup, to perform the changes on the database you need to log in as the `gitlab-psql` user:

When using Docker:

$ docker exec -it --user=gitlab-psql gitlab bash

When using omnibus package:

$ su - gitlab-psql

The following applies to both Docker and Omnibus installations once logged in:

$ export PGHOST="/var/opt/gitlab/postgresql"
$ psql gitlabhq_production
gitlabhq_production=# update projects SET container_registry_enabled = FALSE;
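
To double-check the result before logging out, you can count the projects that still have the registry enabled; this reads only the same column that was just updated and should return 0 if the update went through:

gitlabhq_production=# SELECT count(*) FROM projects WHERE container_registry_enabled = TRUE;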

That’s it!

Upcoming articles

I’ve been busy in the past months and have not updated this site much. However, here are a few hints about some upcoming articles:

  • Install and run Docker on Raspberry Pi: published
  • Run ntpd to provide time to your local network (Docker); I will talk about the pros and cons of running it on a Raspberry Pi: published
  • Run dnsmasq to provide a DHCP and DNS resolver on your local network (Docker): draft-only
  • This website now uses TLS, so the URL has changed to HTTPS. The certificate authority is Let’s Encrypt, which my hosting provider offers easily. I will show how easy it can be to set up HTTPS on WordPress: draft-only
  • I have a few more unfinished articles which might still take some time to complete; topics range from LXC advanced usage and tips & tricks, LXD first usage, Linux SSH key management and SSD caching on Linux to FreeBSD/Arch Linux on Raspberry Pi 2.
  • Last but not least, I ought to announce why I’ve been so busy in the past months (and years). I will give some hints soon.

Internet of Things? Not as it is marketed

Prototype of an IoT project based on Arduino

As I’m trying to prototype some sensors which I will then use around my home to monitor events and perhaps also react to them, I’ve been looking a bit more at the Internet of Things (aka IoT) trend. So here is my opinion on IoT with regard to personal home automation.

And to start frankly: I think the use of IoT for home automation, as currently marketed, is idiotic. It is my view that the current companies in this field understood IoT as being online, in the cloud, whereas I thought it was about being based on network standards (such as those used on the internet) for improved interoperability. Why does it need to be made “internet” connected (collected)? It really does not need to be on the internet; IoT just needs local network access and a standard communication stack!

I think the monitoring elements, the storage of their data, the analysis and control systems, and the actuators should all be local, in the house. If an actuator needs data from the internet, or ends up calling an internet service, that is still possible even though everything stays local. There is absolutely no advantage to having all of this in a “cloud”; that only benefits ad agencies and other agencies which can use your data to better “monitor” you!

And a cloud-based IoT brings many challenges: data transmission, congestion, storage, latency, security, privacy, etc. Some of these are mentioned in this article I found on Twitter today, although that article is also oriented towards uses of IoT other than in the house. (Note that if you are used to building M&C – Monitoring and Control – systems, its content will not surprise you; these are classical challenges in the M&C domain.)

From my perspective, and when used within a house, my decentralised approach to IoT (without cloud or external internet services) is not subject – at least not to the same extent – to most of the challenges presented in that article.

I also think that the data retention needs for a house are really limited (e.g. only the latest status for a window/door open-state sensor; maybe up to a week or a month of data from a temperature sensor; etc.), so data storage is not challenging.

Latency is not such a problem either. Only a few actuators in a home require an immediate response (e.g. a so-called “smart lock”). For other sensors, the reporting of new data can be cyclic (with long update cycles, such as once every 15 minutes) or on change (with big thresholds), because latency is of much lower priority.

Raspberry Pi 2 box with logo (a raspberry)

I therefore think a device as simple as a Raspberry Pi 2 is perfectly suited to be the core element of a home automation system. It has enough processing power, storage capacity and interface capabilities to host the gathering of monitored data, its processing and analysis, and the control of all actuators. And it can easily use internet services (if need be) thanks to its network interface.

If one day I find I need remote access to my home automation system, it will simply be done from my mobile via a VPN or router configuration.

As a conclusion, here is my consolidated opinion regarding IoT and home automation:

  • On premise: the data should not leave the house, processing and controlling should be done at home;
  • No Cloud: this is the corollary to my previous point. The data belong to us and shall be kept private. Pushing them to the cloud adds complexity and risks with no benefits;
  • Open standards: communication and interoperability are paramount for the success of IoT. Adding a new IoT device should be easy; and
  • Short data lifespan: there is no need to keep track of IoT data for long periods. Most of it is interesting only at the moment it changes and can then be forgotten.

.Space I could not resist you

I don’t have much time lately to explore my Raspberry Pi or other technical topics, because I have lots of interesting (and also demanding) things to do in my non-cyber life (the real one)!

But I always keep an eye on what’s happening there, and when I saw that the .space domain had become available, I hesitated for a few months before I could not resist it any longer.

You can now visit my site using berthon.space! For the moment you will land on the same page. I now have time to think about how I will personalise this domain.

Closing comments

Comments are the least used feature on this blog. In addition, they can pose a security risk by allowing anonymous user entries (see WordPress 4.2 Stored XSS). I am therefore closing them, and I am not sure I will reopen them in the future.

I plan to move this blog to a static blog engine (such as Octopress, although I haven’t selected one yet). Such engines – as far as I know – do not support comments, and I don’t want to rely on third-party products such as Disqus, for obvious privacy reasons which I’m not going to detail now.

There are several ways to continue the discussion with me. Social “media” are one, and even though I’m not really active there, I do monitor them. I will think of other ways to host discussions and update this post.

Comments blocked on Magical World

Today I took the decision to block all comments on Magical World.

Spammers are currently flooding the blog with spam that is filling up our backend database (we are on a small and cheap hosting service). Although all spam is captured by our spam filter, we end up with our hosting provider blocking SQL INSERT and UPDATE statements, which basically freezes our website.

In the near future I will evaluate a better solution and restore comments. But for the time being, feedback can be provided to us via the contact us page.

OS X instability: Mavericks is just unfinished!

I am really, really disappointed with Apple, and for more than just one reason. Below is my growing list of concerns with the company, and especially with how they mistreat OS X.

First off, when I have a stable, running system, the last thing I want to hear is that an upgraded OS version is available which, on top of providing new features, corrects several security flaws that won’t be fixed in the current stable OS release!!!

Imagine a world where Microsoft had stopped providing security updates for Windows XP the day Vista came out. Even if the upgrade had been free, would you have upgraded right away? No! You would want to wait until the new OS was stable enough (in Vista’s case that meant waiting for Windows 7) before upgrading.

Well, bad luck: last October Apple decided to provide OS X 10.9 (a.k.a. Mavericks) to everyone without providing further support (at least in terms of security updates) for previous OS X versions. Giving this upgrade away for free was just the sweet juice to cover the bitter taste of the poison.

I analysed the vulnerabilities in our current OS X version and decided that I could wait for OS X 10.9.1 before upgrading, hoping that Apple would also see that stopping security updates for previous OS X versions was plain stupid. This did not happen, and shortly after OS X 10.9.1 was published I took the plunge and upgraded.

The upgrade itself was bug-free, but using OS X since then has not been. For the past month we have been struggling with the following problems:

  • Impossibly slow user switching (my wife and I share a single MacBook, so we use this feature quite often)…
  • …when it does not simply hang in the middle of switching!
  • The screen, mouse and keyboard freeze randomly; often this is related to the insertion of the Thunderbolt network adapter, other times it is out of the blue.
    Note: sometimes pulling out the Thunderbolt plug unfreezes the Mac, sometimes not.
  • The Mail app crashes often, or the geometry of its window keeps changing to the weirdest shapes. I simply stopped using it.
  • Launchpad is supposed to support keyboard input to quickly filter the list of applications, so that typing “Con” proposes “Contacts”, “Console”, etc. But since Mavericks this handy feature does not work every day.
  • WiFi needs a router reboot to get an IP address after the computer has been asleep. It is a bug from iOS that has been carefully ported to OS X. Now I too can enjoy rebooting my router just because I want to use the WiFi! Yeah!
  • App Store updates that get lost: I was notified that I had one available update, so I fired up the App Store app and clicked on Update. I saw that Evernote had an update, but then the computer froze. I waited 5 minutes, after which I shut down and restarted. The App Store now shows no updates to perform (even after searching for them), but looking up Evernote shows that it can be updated! That’s a reliable update mechanism!

Giving the upgrade away for free is no excuse for the instability and the little testing that OS X Mavericks has received. Take the example of Ubuntu and their Long Term Support releases: that is serious work, and they also provide their upgrades for free!

So I really despise Apple for bragging so much about the 200 new features coming along with OS X Mavericks. Well, I have seen a myriad of bugs; if those count as new features then yes, they are probably not far from the 200. But apart from an application that provides maps (where is the web version of it? I don’t want an app for that!) and another one for reading books (as if a MacBook were the handiest reading machine, sure!!), there are close to no features visible to end users. Oh yeah, I forgot tags – as if I am going to tag the 1 TiB of data I own just because I can!!!

Really, Apple: stop wasting so much of your developers’ time on just providing what looks like a new skin for iOS 7+, and invest some valuable effort into bringing OS X back to the stable and professional OS it once was!

Addendum – 2014-02-05: Today I tried to rate the OS X 10.9.1 application in the Mac App Store; my rating was one star. I clicked on it but got the following message: “To rate this app, you must have purchased it from the Mac App Store”?!? Well, it is true that I did not “buy” OS X 10.8; it was bundled with the laptop. So I only upgraded it to the “free” version using the Mac App Store. I did not technically buy it, true, but I own it! You can see a screenshot on the right-hand side: you can’t give negative feedback ;-)