Closing comments

Comments are the feature least used on this blog. In addition, they could pose a security risk by allowing anonymous user entries (see WordPress 4.2 Stored XSS). I am therefore closing them and I am not sure I will reopen them in the future.

I plan to move this blog to a static blog engine (such as Octopress, but I haven’t selected one yet). Such engines – as far as I know – do not support comments, and I don’t want to rely on 3rd party products such as Disqus for obvious privacy reasons which I’m not going to detail now.

There are several ways to continue the discussion with me. Social “media” are one, and even though I’m not really active there, I do monitor them. I will think of other ways to enable discussions and update this post.

Using TPM as a source of randomness entropy

On a headless server, even one using rotational hard disks, it can be difficult to avoid depleting the Linux kernel’s entropy pool. The level can be checked with the following command:

$ echo $(cat /proc/sys/kernel/random/entropy_avail)/$(cat /proc/sys/kernel/random/poolsize)
620/4096

(Note: depending on the workload of your server, the above result may or may not be considered adequate.)

What does a depleted entropy pool mean? It means that any call to `/dev/random’ will block until enough entropy is available. A blocking call is usually not desired: the application can appear frozen until the call returns. On the other hand, calls to `/dev/urandom’ will not block in such a situation, but there is a higher risk that the randomness quality of the output decreases. Such a risk could mean giving an attacker a better chance to predict your next dice roll. Whether this is exploitable is hard to tell, at least for me. Therefore, I try to avoid having a depleted entropy pool, especially for certain workloads.
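
A quick and harmless way to observe the difference is to read a few bytes from both devices (this is only an illustration; the first command may hang on a machine with a depleted pool):

$ head -c 16 /dev/random | od -A n -t x1
$ head -c 16 /dev/urandom | od -A n -t x1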

There are several mechanisms for providing randomness sources to the entropy pool. The haveged daemon uses CPU clock timer variations to achieve that, but it is highly dependent on the CPU being used. Other approaches use sound cards, etc. And finally there are hardware random number generators (RNG). In my previous article, I talked briefly about the hardware RNG of the Raspberry Pi. In this article, I will present another hardware RNG which is available in many computers and servers: the Trusted Platform Module (TPM).
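
(For reference only, and not required for the TPM setup described here, haveged can typically be installed and started on Debian/Ubuntu with the two commands below.)

$ sudo apt install haveged
$ sudo service haveged start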

The name is somewhat marketing, and I’m not even sure we should trust it that much. But we will activate it only to provide a new source of entropy and nothing more. I would advise using other sources of entropy as well. Anyway, if you are interested, the following paragraphs describe how to achieve this. If you decide to implement them, be reminded that I give no warranty that it will work for you nor that it won’t break things. I can only guarantee that it worked on my machine running Ubuntu 14.04.2 LTS.

Let’s install the necessary tools and deactivate the main service (the tools launch a daemon which we don’t want to use, as we will use rngd to get and verify the randomness of the TPM RNG before feeding the entropy pool):

$ sudo apt install tpm-tools
$ sudo update-rc.d trousers disable
$ sudo service trousers stop

Then, you will need to go into the BIOS/EFI settings of your computer/server and activate TPM, and possibly clear the ownership of the TPM if it happened to be owned by someone else. Of course, don’t do that if it is not your computer.

I found out that my particular BIOS only allows clearing from the BIOS settings. Trying to do so from the OS results in the following error:

$ sudo tpm_clear --force
Tspi_TPM_ClearOwner failed: 0x0000002d - layer=tpm, code=002d (45), Bad physical presence value

I also found out that my particular BIOS, when clearing the ownership, also deletes the Endorsement Key (EK). When I was trying to take ownership of the TPM device (see further below), I was getting the following error:

$ sudo tpm_takeownership
Tspi_TPM_TakeOwnership failed: 0x00000023 - layer=tpm, code=0023 (35), No EK

So I had to do an extra step, to generate a new EK:

$ sudo tpm_createek

After creating a new EK, I was able to successfully take ownership of the TPM device:

$ sudo tpm_takeownership
Enter owner password:
Confirm password:
Enter SRK password:
Confirm password:
tpm_takeownership succeeded

Once all this is done, we are ready to load the TPM RNG kernel module and launch the user space tool that will use this source to feed the Linux kernel entropy pool. The user space tool is the rng-tools suite (with its rngd daemon).

Load the kernel module (to load it permanently add tpm_rng as a new line to /etc/modules):

$ sudo modprobe tpm_rng
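
To make the module load persist across reboots, something like the following should work; it simply appends the module name to /etc/modules as mentioned above:

$ echo tpm_rng | sudo tee -a /etc/modules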

And then install the user space tools:

$ sudo apt install rng-tools

The default configuration should be good enough, but you can check it by editing the file /etc/default/rng-tools. The default setting for rngd is to not fill more than half of the pool. I don’t advise setting it higher, unless you really trust those TPM chips blindly. Once the tool is installed it should already be running, but if you modified the default settings then a restart is necessary:

$ sudo service rng-tools restart
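
For illustration only, a /etc/default/rng-tools pointing rngd at the hardware RNG device could look roughly like this (the values below are examples, not recommendations):

HRNGDEVICE=/dev/hwrng
RNGDOPTIONS="--fill-watermark=50% --feed-interval=60"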

Now you can check again the available entropy with the command that I gave at the beginning of this post.

Installing Linux on Raspberry Pi – The Easy Way

As I announced, I got a Raspberry Pi 2 Model B, and although I have not had much time to play with it yet, it was just an excuse to get back to programming and a little computer science fun.

Anyway, I’ve installed Raspbian on it and did a few configurations which I think are worth sharing with others. I’m not going to describe a step-by-step installation of a Linux distribution on your Pi; I recommend trying NOOBS and checking the good documentation from the Raspberry Pi Foundation. With this setup you can choose, during the installation process, which Linux distribution to install.

That’s what I did as a warm-up and a quick way to get an up-and-running Pi.

Note that the Raspberry Pi 2 has a new CPU (ARMv7) which is not yet fully exploited by most distributions targeted at the Raspberry Pi ecosystem (with the exception of the Linux kernel). There is one exception, Snappy Ubuntu Core, but it is still alpha. It should, however, be possible to install most standard Linux distributions that support the ARMv7 instruction set, but this is an interesting exercise which I have not performed (yet).

Anyway, using the NOOBS installation or any of the images provided for SD cards has several implications in terms of security. Here is what I recommend doing after the first boot.

Changing password and locking root

If you use the Raspbian installation, then you have one user account, `pi’. If you haven’t changed the password for this user during installation, then better do it now. Once connected as `pi’, do:

$ passwd

Then enter and confirm the password you want to use.

With Raspbian, the `pi’ user has administrator rights: you can call sudo to perform system changes. If you are comfortable with that, then simply lock the root account:

$ sudo passwd -dl root

Or if you simply want to give a password of your own and use root:

$ sudo passwd root

Getting some Entropy

I am preparing a more detailed article on this topic, so for now I am only going to give some background and the commands to increase the sources of entropy. Entropy is important: you need sufficient entropy (which your Linux kernel can gather for you) to be able to generate good pseudo-random numbers. Those numbers are used by many cryptographic services, such as generating SSH keys or SSL/TLS certificates.

The problem with embedded devices such as the Raspberry Pi (especially if you run it headless like I do) is that there aren’t many sources of entropy available to the kernel, especially at boot time.

The Raspberry Pi has an integrated hardware random number generator (HW RNG) which Linux can use to feed its entropy pool. The implications of using such a HW RNG are debatable and I will discuss them in the coming article. But here is how to activate it.

First of all, load the kernel module (aka driver) for the HW RNG (the Raspberry Pi 2 can use the same kernel module as the Raspberry Pi 1):

$ sudo modprobe bcm2708-rng

If it worked, a new device should now be available (/dev/hwrng) and the following should be visible in the system messages:

$ dmesg | tail -1
[ 133.787336] bcm2708_rng_init=bbafe000

To make the change permanent, add the module to the /etc/modules file by adding a new line with bcm2708_rng:

$ sudo bash -c 'echo bcm2708_rng >> /etc/modules'

Now install the rng-tools (the service should be automatically activated and started):

$ sudo apt-get install rng-tools

After a while, you can check the level of entropy in your pool and some stats on the rng-tools service:

$ echo $(cat /proc/sys/kernel/random/entropy_avail)/$(cat /proc/sys/kernel/random/poolsize)                                 
1925/4096
$ sudo pkill -USR1 rngd; tail -15 /var/log/syslog
rngd[7231]: stats: bits received from HRNG source: 100064
rngd[7231]: stats: bits sent to kernel pool: 40512
rngd[7231]: stats: entropy added to kernel pool: 40512
rngd[7231]: stats: FIPS 140-2 successes: 5
rngd[7231]: stats: FIPS 140-2 failures: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Monobit: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Poker: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Runs: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Long run: 0
rngd[7231]: stats: FIPS 140-2(2001-10-10) Continuous run: 0
rngd[7231]: stats: HRNG source speed: (min=824.382; avg=1022.108; max=1126.435)Kibits/s
rngd[7231]: stats: FIPS tests speed: (min=6.459; avg=8.161; max=9.872)Mibits/s
rngd[7231]: stats: Lowest ready-buffers level: 2
rngd[7231]: stats: Entropy starvations: 0
rngd[7231]: stats: Time spent starving for entropy: (min=0; avg=0.000; max=0)us

Securing SSH – the basic

I run my Pi headless for the moment. So after the first boot, I went into the advanced configuration of raspi-config and activated the SSH daemon. I’m not going to describe here how to properly secure the SSH daemon, which can be a topic for a future article. I’m only going to focus on one topic: SSH host keys.
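
If you prefer not to use raspi-config, enabling and starting the SSH service from a shell should be roughly equivalent on a sysvinit-based Raspbian (newer images may differ):

$ sudo update-rc.d ssh enable
$ sudo service ssh start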

SSH host keys are the means for a user connecting to a remote server to verify the authenticity of that server. This is the sort of message you see the first time you connect to an SSH server:

$ ssh pi@raspberrypi
The authenticity of host 'raspberrypi (192.168.1.52)' can't be established.
ECDSA key fingerprint is 42:3d:4f:b7:0b:4b:62:11:47:28:cc:86:76:0c:ac:24.
Are you sure you want to continue connecting (yes/no)?

If an attacker tries to spoof your SSH server, on connection you will get this message:

$ ssh pi@raspberrypi
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
42:3d:4f:b7:0b:4b:62:11:47:28:cc:86:76:0c:ac:24.
Please contact your system administrator.
Add correct host key in /home/pithagore/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/pithagore/.ssh/known_hosts:3
  remove with: ssh-keygen -f "/home/pithagore/.ssh/known_hosts" -R raspberrypi
ECDSA host key for raspberrypi has changed and you have requested strict checking.
Host key verification failed.

So this is all good, but the problem with these host keys is that you cannot really trust the ones installed by default. They could be the same as the ones from the image you downloaded and flashed on your SD card, and so would be identical for everyone installing the same image. Another problem is that they could have been generated at a point in the installation where the kernel had not yet gathered enough entropy to provide non-predictable random numbers. Someone could therefore use this to generate new host keys that would match the ones on your Raspberry Pi.

Now that we have added a good source to feed the entropy pool in the earlier chapter, we can regenerate better host keys. First, get rid of the previous ones:

$ sudo rm /etc/ssh/ssh_host_*key{,.pub}

Then generate new ones using one of these three possibilities:

  • Automatic configuration using dpkg
$ sudo dpkg-reconfigure openssh-server 
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Restarting OpenBSD Secure Shell server: sshd.
  • Automatic configuration using ssh-keygen
$ sudo ssh-keygen -A
$ sudo service ssh restart
  • Manual configuration
$ sudo ssh-keygen -t rsa -b 4096 -N "" -f /etc/ssh/ssh_host_rsa_key
$ sudo ssh-keygen -t ecdsa -b 521 -N "" -f /etc/ssh/ssh_host_ecdsa_key
$ sudo service ssh restart

Now if you want to print the fingerprint of any of your host keys (for example the ECDSA one) to make sure of the authenticity of the server you are connecting to (this should only be needed the first time your client connects to your server):

$ ssh-keygen -l -f /etc/ssh/ssh_host_ecdsa_key.pub

Gnome 3 on Ubuntu LTS

This mini how-to explains how to install Gnome on Ubuntu 14.04 LTS. Of course, the best approach would have been to install Ubuntu Gnome in the first place. But if you didn’t (like me), have Unity as your desktop and would like Gnome (or just want to give it a try), check the rest of this article.

Note and disclaimer: Gnome3 will be available alongside Unity, so you can always switch back and forth between the two without trouble. However, a word of caution: as usual with system modifications, make sure you have working backups available before proceeding. I make no guarantee that the following will work for you; I only state that it worked for me.

Installing Gnome Shell (aka Gnome3)

Installing the “core” Gnome 3 is rather easy. You need to have the Universe repository activated and then:

$ sudo apt update; sudo apt upgrade
$ sudo apt install gnome-shell

During the installation of gnome-shell, you will be prompted to choose between GDM (the default Gnome Display Manager) and LightDM (the alternative that Ubuntu uses by default). The main function of these DMs, from an end user perspective, is to provide a graphical login where you can choose the user (and type a password) and select the desktop environment to use once logged in. In addition, these DMs can offer things such as autologin. There is no wrong answer here. You can stick with LightDM (it is the one installed by default on Ubuntu, and that’s the option I chose) or switch to GDM. In both cases, you will need to choose for the session whether you wish to use the Gnome or Unity desktop environment.
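
If you change your mind later, re-running the display manager selection dialog should be possible by reconfiguring whichever display manager package is installed, for example:

$ sudo dpkg-reconfigure lightdm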

Now you can log out and log in again (choosing Gnome for the session).

Installing some missing default Gnome Apps

This is entirely optional. If you have seen the Gnome3 release notes, you may be looking for a few extra applications that are not installed by default by Ubuntu but are available in the Universe repository. Example applications: Gnome Maps, Gnome Weather, etc.

To install them, you can either install the gnome-core package, which will install a minimal Gnome applications environment to start with; install a complete Gnome applications environment (which includes all of gnome-core) by installing the package gnome; or simply install a few goodies (a subset of gnome) tailored to your needs. Here is the list of goodies that I installed (from the Universe repository):

$ sudo apt install eog-plugins gnome-{backgrounds,boxes,clocks,color-manager} \
  gnome-{dictionary,documents,maps,packagekit,shell-extension-weather} \
  gnome-{system-tools,weather,music,photos} gnote

And a few others from the main repos:

$ sudo apt install gnome-user-share indicator-printers

And with this, you get a nice Gnome environment on an Ubuntu-based OS.

For Ubuntu 14.10 Users

The above instructions also work for Ubuntu 14.10 users.

Too long without programming

I love programming! And because I do it less and less at work, I felt the need to do something about it.

I was looking for a project to help with when I read the news about the new Raspberry Pi. It took me only a couple of days to realise that this was it, so I ordered it online and just received it: a Raspberry Pi 2 Model B. A beautiful little device with enough connectivity and power to have some fun!

I have not had much time to play with it yet; I was able to install Raspbian on it and that’s it. But I’m quite happy about my decision and I’m looking forward to the projects I plan with it (I’m hesitating between creating a weather station together with a small Arduino board as the “sensor”, or perhaps starting with something less ambitious, maybe related to home automation).

Click more to see the system information (OS name, memory and CPU information) of that little wonder.

Comments blocked on Magical World

Today I took the decision to block all comments on Magical World.

Spammers are currently flooding the blog with spam, which is filling up our backend database (it is a small and cheap hosting service). Although all spam is captured by our spam filter, we end up with our hosting service provider blocking SQL insert and update statements. This basically freezes our website.

In the near future I will evaluate a better solution and restore comments. But for the time being, feedback can be provided to us via the contact us page.

How to verify your Synology NAS hard disks

I upgraded the 2 HDDs in my Synology NAS to bigger ones. The change and rebuild of the RAID mirror was seamless. But I wanted to verify the health of the filesystems before growing the volumes. Here is how to do it.

Note: I am making the following assumptions: you know what you are doing, you have activated SSH on your box and know how to connect as root, you know and understand how your NAS HDDs have been configured, you have a working backup of your HDD data, and you are not afraid of losing your data.

First, find out the device name of your volume(s) and also the filesystem type:

# mount
/dev/root on / type ext4 (rw,relatime,barrier=0,journal_checksum,data=ordered)
(...)
/dev/mapper/vol1-origin on /volume1 type ext4 (usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl)
/dev/mapper/vol2-origin on /volume2 type ext4 (usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl)

In my case, I have created 2 volumes on top of my mirror. The devices on which these volumes are stored are /dev/mapper/vol1-origin and vol2-origin, and both are ext4 filesystems. But you probably do not have such a setup; you may only have one volume on top of your RAID array, and your device might simply be /dev/md[x].
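
To check which md devices exist and the state of the RAID array, the usual (harmless) command should also work on the Synology:

# cat /proc/mdstat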

The fact that my devices were in /dev/mapper hinted that there might be an LVM layer somewhere. So I executed the following (harmless) command:

# lvs
LV                    VG   Attr   LSize    Origin Snap%  Move Log Copy%  Convert
syno_vg_reserved_area vg1  -wi-a-   12.00M
volume_1              vg1  -wi-ao    1.00T
volume_2              vg1  -wi-ao    1.00T

So my 2 volumes are LVM logical volumes. Now that I have this information, I can do the following to verify the filesystems’ health. First of all, shut down most services and unmount the filesystems:

# syno_poweroff_task

Now, if you do not have LVM but rather a /dev/md[x] device, you can simply do the following (only if you have an ext2/ext3/ext4 filesystem; replace the ‘x’ with the correct number):

# e2fsck -pvf /dev/mdx

But if, like me, you have LVM, then you will need a few extra steps. The `syno_poweroff_task’ command has probably deactivated the LVM logical volumes; to be sure, check the “LV Status” given by the next command (harmless; only partial output is shown here):

# lvdisplay
--- Logical volume ---
LV Name                /dev/vg1/volume_1
VG Name                vg1
LV UUID                <UUID>
LV Write Access        read/write
LV Status              NOT available
LV Size                1.00 TB

As you can see, the logical volume is not available. We need to make it available so that the link to the logical volume is accessible:

# lvm lvchange -ay vg1/volume_1
# lvdisplay
--- Logical volume ---
LV Name                /dev/vg1/volume_1
VG Name                vg1
LV UUID                <UUID>
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                1.00 TB

The status has changed now to “available”, so we can proceed with the filesystem verification:

# e2fsck -pvf /dev/vg1/volume_1
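
Once the check is finished, the logical volume can optionally be deactivated again with the opposite of the lvchange command used earlier:

# lvm lvchange -an vg1/volume_1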

To finish this, you need to remount and restart all the stopped services. I do not know a specific Synology command to do that, so I simply rebooted the machine:

# shutdown -r now