Adding New Fonts in Bulk to Ubuntu 16.04

This process should also work for 12.04 and 14.04.

Create a new directory under /usr/share/fonts

sudo mkdir /usr/share/fonts/opentype/newfonts

Place all OTF or TTF files in that directory.

Fix the permissions on the new fonts, then run the font-caching utility to make them available to applications immediately.

sudo chmod -R 655 /usr/share/fonts
sudo fc-cache -fv

Other Methods

There are other methods available on modern Ubuntu as well. For individual fonts you can just double-click on them and click the “install” option in the upper right. Or use a purpose-built program like font-manager.

sudo apt-get install font-manager
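
To confirm the new fonts were picked up, you can query the font cache; the font name below is just a placeholder for one of the families you installed.

fc-list | grep -i "FontName"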

Using Active/Backup Bonding (mode 1) with Ifupdown2

Ifupdown2 is a very useful interface configuration utility with tons of enhancements over the stock ifupdown utility. It was built with a specific initial use case in mind: network operating systems (NOS) like Cumulus Linux. Cumulus requires LACP as its primary bonding method, so other modes like active-backup (mode 1) were not initially fully implemented in ifupdown2. This is changing, however; CM-14985 brings support for the bond-primary keyword and will be present in the next release of Cumulus Linux and the next version of Ifupdown2.

To hold you over until then, here’s a workaround I’ve been using on my home server running Ifupdown2 to get active/backup bonding: writing to the bonding sysfs file directly provides the same behavior.

auto lo
iface lo inet loopback

auto enp4s0
iface enp4s0
 alias Motherboard Ethernet 
 mtu 9194

auto enxf01e341f95
iface enxf01e341f95
 alias USB3 Ethernet
 mtu 9194

auto bond0
iface bond0
 alias ActiveBackup Uplink
 bond-mode active-backup
 bond-slaves enxf01e341f95 enp4s0
 address 192.168.1.10/24
 gateway 192.168.1.1
 mtu 9194
 pre-up echo enp4s0 > /sys/class/net/bond0/bonding/primary
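
Once the bond is up, you can verify which slave is currently active (and that the pre-up primary setting took effect) by reading the bonding driver’s status files:

cat /proc/net/bonding/bond0
cat /sys/class/net/bond0/bonding/active_slave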

Building FRRouting for PowerPC on Debian Wheezy

I tried this to modernize the routing software running on an older whitebox switch built on the PowerPC architecture.

One of the challenges on these platforms, aside from the PPC architecture itself, is the limited storage. I found my switch did not have enough disk space to complete the build, so my answer was to use a USB stick for the additional space. At the end of the build my build directory consumed ~214 MB, so plan accordingly if your switch does not have sufficient on-board storage.

Assume ROOT for all commands unless otherwise stated.

I mounted my USB stick at /mnt/USB:

mkdir /mnt/USB
# Use Fdisk to confirm USB device.
fdisk -l 
mount /dev/sda1 /mnt/USB
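
With the stick mounted, the rest of the build can be done from a working directory on it so the switch’s internal storage isn’t exhausted; the path below is just an example.

mkdir /mnt/USB/build
cd /mnt/USB/build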

Add the sources

cat << EOT >> /etc/apt/sources.list
deb http://httpredir.debian.org/debian/ wheezy main contrib non-free
deb-src http://httpredir.debian.org/debian/ wheezy main contrib non-free

deb http://security.debian.org/ wheezy/updates main contrib non-free
deb-src http://security.debian.org/ wheezy/updates main contrib non-free

deb http://httpredir.debian.org/debian/ wheezy-updates main contrib non-free
deb-src http://httpredir.debian.org/debian/ wheezy-updates main contrib non-free

deb http://ftp.debian.org/debian/ wheezy-backports main non-free contrib
EOT

Add the Prereq packages

apt-get install git autoconf automake libtool make gawk libreadline-dev texinfo dejagnu pkg-config libpam0g-dev bison flex python-pytest libc-ares-dev python3-dev libjson-c-dev build-essential fakeroot devscripts


Install some out-of-repo prerequisites from source, as shown in the Ubuntu 12.04 LTS build guide.

Install newer bison from Ubuntu 14.04 package source:

mkdir builddir
cd builddir
wget http://archive.ubuntu.com/ubuntu/pool/main/b/bison/bison_3.0.2.dfsg-2.dsc
wget http://archive.ubuntu.com/ubuntu/pool/main/b/bison/bison_3.0.2.dfsg.orig.tar.bz2
wget http://archive.ubuntu.com/ubuntu/pool/main/b/bison/bison_3.0.2.dfsg-2.debian.tar.gz
tar -jxvf bison_3.0.2.dfsg.orig.tar.bz2 
cd bison-3.0.2.dfsg/
tar xzf ../bison_3.0.2.dfsg-2.debian.tar.gz 
sudo apt-get build-dep bison
debuild -b -uc -us
cd ..
# Note: building on PowerPC produces packages with a powerpc suffix rather than amd64.
sudo dpkg -i ./libbison-dev_3.0.2.dfsg-2_powerpc.deb ./bison_3.0.2.dfsg-2_powerpc.deb
cd ..
rm -rf builddir

Install newer versions of autoconf and automake:

wget http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz
tar xvf autoconf-2.69.tar.gz
cd autoconf-2.69
./configure --prefix=/usr
make
sudo make install
cd ..

wget http://ftp.gnu.org/gnu/automake/automake-1.15.tar.gz
tar xvf automake-1.15.tar.gz
cd automake-1.15
./configure --prefix=/usr
make
sudo make install
cd ..

Add frr groups and user

sudo groupadd -g 92 frr
sudo groupadd -r -g 85 frrvty
sudo adduser --system --ingroup frr --home /var/run/frr/ \
   --gecos "FRR suite" --shell /sbin/nologin frr
sudo usermod -a -G frrvty frr
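
Before building, you can confirm the account was created with the expected group memberships; id should list both frr and frrvty.

id frr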

Download Source, configure and compile it

git clone https://github.com/frrouting/frr.git frr
cd frr
./bootstrap.sh
./configure \
    --prefix=/usr \
    --enable-exampledir=/usr/share/doc/frr/examples/ \
    --localstatedir=/var/run/frr \
    --sbindir=/usr/lib/frr \
    --sysconfdir=/etc/frr \
    --enable-pimd \
    --enable-watchfrr \
    --enable-ospfclient=yes \
    --enable-ospfapi=yes \
    --enable-multipath=64 \
    --enable-user=frr \
    --enable-group=frr \
    --enable-vty-group=frrvty \
    --enable-configfile-mask=0640 \
    --enable-logfile-mask=0640 \
    --enable-rtadv \
    --enable-fpm \
    --with-pkg-git-version \
    --with-pkg-extra-version=-MyOwnFRRVersion   
make
make install

Most guides would end here, but there’s a bit more required to get FRR functioning.

Create empty FRR configuration files

sudo install -m 755 -o frr -g frr -d /var/log/frr
sudo install -m 775 -o frr -g frrvty -d /etc/frr
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/zebra.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/bgpd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ospfd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ospf6d.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/isisd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ripd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ripngd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/pimd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ldpd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/nhrpd.conf
sudo install -m 640 -o frr -g frrvty /dev/null /etc/frr/vtysh.conf

Install the init.d service

sudo install -m 755 tools/frr /etc/init.d/frr
sudo install -m 644 tools/etc/frr/daemons /etc/frr/daemons
sudo install -m 644 tools/etc/frr/daemons.conf /etc/frr/daemons.conf
sudo install -m 644 -o frr -g frr tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf

Enable your Routing Daemons

cat << EOT > /etc/frr/daemons
zebra=yes
bgpd=yes
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
EOT

Start FRR

service frr start
service frr status
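
If the daemons came up cleanly, vtysh should be able to reach them; a couple of quick sanity checks:

vtysh -c 'show version'
vtysh -c 'show ip route'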

Hopefully that should do it for you. Now the next step is figuring out how to build a proper deb from the source. I’ll leave that process for next time 🙂

Troubleshooting Vagrant Libvirt Simulations

There are a lot of moving parts in a vagrant-libvirt simulation. Vagrant calls the vagrant-libvirt plugin, which controls libvirt, and libvirt in turn is used to control QEMU.

Vagrant  → libvirt → qemu

Our troubleshooting is going to focus on the libvirt component. The fix for Vagrant-side issues is to correct permissions and remove the files Vagrant uses to keep state; for libvirt-side issues we’ll use a series of virsh commands to manually clean things up.

Suggestions to keep yourself out of hot water when working with libvirt simulations:

  • Vagrant activity is tracked per user, so perform all actions from a single non-root user account; do not mix and match accounts

Common Troubleshooting Path:

1). Make sure your user is a member of the libvirtd group; verify with the “id” command.

$ id
uid=1000(eric) gid=1000(eric) groups=1000(eric),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),113(lpadmin),128(sambashare),131(vboxusers),132(libvirtd)

NOTE: to add your user to this group, use the following command

sudo usermod -a -G libvirtd userName

Log out and log back in for the group change to take effect.

2). Change ownership of everything in that user’s .vagrant.d directory back to the user in question.

$ sudo chown [user] -Rv ~/.vagrant.d/

3). List all Domains (VMs) and their storage volumes (Hard Drives)

$ virsh list --all
$ virsh vol-list default

Images are stored in /var/lib/libvirt/images/

4). Stop each VM, undefine it, and remove the virtual hard drive

$ virsh destroy vagrant_asdfasdf
$ virsh undefine vagrant_asdfasdf
$ virsh vol-delete --pool default vagrant_asdfasdf.img
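
If the simulation has many VMs, you can loop over every libvirt domain with the vagrant_ prefix instead of typing each name; this sketch assumes all of your simulation’s domains and volumes follow that naming pattern.

for dom in $(virsh list --all --name | grep '^vagrant_'); do
    virsh destroy "$dom"
    virsh undefine "$dom"
    virsh vol-delete --pool default "${dom}.img"
done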


The VM list should now be empty, and the volume list should no longer contain any volumes that correspond to VM names in your simulation.

$ virsh list --all
$ virsh vol-list default


5). Remove the hidden .vagrant directory in the simulation folder

$ rm -rfv ./.vagrant/


6). Try your Vagrant up now.

$ vagrant status
$ vagrant up --provider=libvirt


Have VMs that you’ve already removed stuck in the output of “vagrant global-status”?

Remove the machine-index file as follows:

rm ~/.vagrant.d/data/machine-index/index
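
Alternatively, recent Vagrant versions can prune stale entries from the index themselves:

vagrant global-status --prune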

Use Less Power on Your NAS

Inspired by reading this article http://sandeen.net/wordpress/computers/how-to-build-a-10w-6-5-terabyte-commodity-x86-linux-server/ I wanted to do some experimentation on my own NAS server.

The biggest power users (aside from the CPU) are the hard drives. In my NAS I have five 3TB WD Red drives and one 2TB WD Red drive, as well as a pair of Samsung 850-based SSDs. Of course the spinning disks consume the most power. I can tell this from the UPS the NAS is connected to, which has a nice little power meter built in; I’m sure it isn’t extremely accurate, but it’s good enough for my needs.

After much experimentation with the options provided in the article above, I found the only thing that made a noticeable difference on my NAS was spinning down the mechanical hard drives.

Using the following script I was able to cut my idle power draw from ~72 watts to ~54 watts, an 18 watt savings, or 25%, which isn’t too bad!

#!/bin/bash

# hdparm -S 12 sets the standby (spin-down) timeout; the value is in
# multiples of 5 seconds, so 12 * 5 = 60 seconds of idle time.
echo "Setting power-down timeout to 60 seconds."
hdparm -S 12 /dev/sda
hdparm -S 12 /dev/sdb
hdparm -S 12 /dev/sdc
hdparm -S 12 /dev/sde
hdparm -S 12 /dev/sdf

hdparm -S 12 /dev/sdd

# hdparm -y puts the drive into standby (spins it down) immediately.
echo "Powering down hard drives immediately."
hdparm -y /dev/sda
hdparm -y /dev/sdb
hdparm -y /dev/sdc
hdparm -y /dev/sde
hdparm -y /dev/sdf

hdparm -y /dev/sdd
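
To confirm a drive has actually spun down, hdparm can report its current power state; a spun-down drive shows “standby”.

hdparm -C /dev/sda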


Controlling Docker from Within A Docker Container

I’ve been tinkering with a project that interacts with the Docker Engine API using docker-py. The catch is that the program runs inside a Docker container.

Modify /lib/systemd/system/docker.service to bind the Docker daemon to a TCP port.

First create a loopback IP address

sudo ip addr add 10.254.254.254/32 dev lo
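
Note that an address added with ip addr does not persist across reboots. One way to persist it, assuming your system still uses ifupdown, is a post-up line on the loopback interface in /etc/network/interfaces; treat this as a sketch and adapt it to your own network configuration.

auto lo
iface lo inet loopback
    post-up ip addr add 10.254.254.254/32 dev lo || true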

By default the Unix socket used by Docker is inaccessible from inside the container; for this reason we need to use TCP instead.

Modify the ExecStart line to remove the Unix socket and instead listen on the address of your new loopback and a port of your choosing. I used 2376 because that is what Docker uses on Windows, where Unix sockets are not available.

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket firewalld.service
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H 10.254.254.254:2376
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

Reload systemd daemons to pull in the new changes to the unit file.

sudo systemctl daemon-reload

Start Docker with the new settings.

sudo systemctl stop docker.service
sudo systemctl start docker.service

Modify your environment variables so you don’t have to use the -H argument every time you call a docker CLI command.

export DOCKER_HOST=10.254.254.254:2376
echo "export DOCKER_HOST=10.254.254.254:2376" >> ~/.bashrc

Run a new docker image in the new environment.

docker run -itd --name=test ubuntu /bin/bash
docker exec -it test apt-get update
docker exec -it test apt-get install -y python python-pip
docker exec -it test pip install docker

Create a Test Script to Try In the Container

#!/usr/bin/python
import docker
import pprint

# PrettyPrint Setup
pp = pprint.PrettyPrinter(indent=4)

# DOCKER API Setup
#  NORMAL SETUP
#client = docker.from_env()
#low_level_client=docker.APIClient(base_url='unix://var/run/docker.sock')
#  Custom SETUP
client = docker.DockerClient(base_url='tcp://10.254.254.254:2376')
low_level_client=docker.APIClient(base_url='tcp://10.254.254.254:2376')

container_list=client.containers.list()
pp.pprint(container_list)
for container in container_list:
    docker_container = low_level_client.inspect_container(container.id)
    pp.pprint(docker_container)

Run The Test Script

chmod +x ./test.py
docker cp ./test.py test:/root/test.py
docker exec -it test /root/test.py

Security: Don’t forget to secure your newly exposed port with iptables rules! The rule below drops connections to TCP port 2376 arriving on eth0; traffic to the loopback address from the local host and its containers is unaffected.

sudo iptables -t filter -A INPUT -i eth0 -p tcp -m tcp --dport 2376 -j DROP

Migrating From Machine to Machine with Smokeping (and Armbian)

After purchasing an Orange Pi I have been looking for services it can host on my network. One of the first services that came to mind was Smokeping. I love Smokeping: it does one thing and does it well, a true embodiment of the Unix philosophy.

Enough glorification of Smokeping; how do we get it running?

In this case I wasn’t starting from scratch; I already have an installation running on a Raspberry Pi elsewhere in my network. My first attempt had me copying all the files in the /etc/smokeping directory over to the Orange Pi directly. However, this did not work. What I came to find is that the RRD files are architecture-specific: while both the Raspberry Pi and Orange Pi are ARM-based, they are not the same version of ARM, and hence the RRD files are not directly compatible.

Starting fresh is no good here because I have years of Smokeping data in my existing install that I don’t want to lose, so how do we migrate that data?

Scouring some obscure Smokeping mailing lists I was able to put together this procedure.

NOTE: I’m assuming SSH keys have already been set up between the root accounts of the old and new machines.

#######################
# On the New Machine
####################### 
sudo su
# Install Smokeping
apt-get install smokeping rrdtool sendmail -y
systemctl stop smokeping
#######################
# On the Old Machine
#######################
sudo su

cat << 'EOT' > /tmp/smokeping_backup.sh
#!/bin/bash

NEWMACHINE="192.168.1.100"

cd /var/lib/smokeping
echo "Stoping SMOKEPING"
service smokeping stop

echo "GENERATING XML..."
rm -v ./*/*.xml
for f in ./*/*.rrd; do echo ${f}; rrdtool dump ${f} > ${f}.xml; done

echo "SENDING FILES TO NEW MACHINE..."
scp -rv ./* root@$NEWMACHINE:/var/lib/smokeping/
scp -v /etc/smokeping/config.d/General root@$NEWMACHINE:/etc/smokeping/config.d/General
scp -v /etc/smokeping/config.d/Targets root@$NEWMACHINE:/etc/smokeping/config.d/Targets
scp -v /etc/smokeping/config.d/Probes root@$NEWMACHINE:/etc/smokeping/config.d/Probes

echo "CLEANING-UP AND RESTARTING SMOKEPING..."
rm -v ./*/*.xml
service smokeping start

echo "DONE!"
EOT

chmod +x /tmp/smokeping_backup.sh
/tmp/smokeping_backup.sh
#######################
# On the New Machine
#######################
sudo su
cat << 'EOT' > /tmp/smokeping_restore.sh
#!/bin/bash

# convert XML to RRD
cd /var/lib/smokeping

systemctl stop smokeping

echo "REMOVING ANY EXISTING RRD FILES..."
rm -v ./*/*.rrd

echo "CONVERTING XML FILES BACK INTO RRD..."
for f in ./*/*.xml; do echo ${f}; rrdtool restore $f `echo $f |sed s/.xml//g`; done

echo "REMOVING Interim XML FILES..."
rm -v ./*/*.xml
sleep 1

echo "CHANGING OWNERSHIP ON SMOKEPING FILES..."
chown -v smokeping:www-data /var/lib/smokeping/*/*.rrd
chmod 755 -Rv /var/lib/smokeping

echo "STARTING SMOKEPING."
systemctl restart apache2
systemctl start smokeping

echo "DONE!"
EOT

chmod +x /tmp/smokeping_restore.sh
/tmp/smokeping_restore.sh
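
To sanity-check the migrated data, rrdtool can print the timestamp of the most recent update in a restored file. The path below is only an example; point it at one of your own target directories.

# example path; substitute one of your own RRD files
rrdtool lastupdate /var/lib/smokeping/World/example.rrd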

These scripts convert the RRD files to an intermediate XML format on the old machine and then convert them back to RRD on the migration target. One gotcha: on Armbian, even with Smokeping installed, rrdtool itself was not installed, so it has to be added first (hence the apt-get line above). With rrdtool in place I operated on the subdirectories full of RRD files under /var/lib/smokeping/; once converted to XML, the files were copied into place on the new machine and converted back to RRD.

These scripts were used on my machines, hopefully they can help you too!