Building FRRouting for PowerPC on Debian Wheezy

I set out to modernize the routing software running on an older whitebox switch built on the PowerPC architecture.

Aside from the PPC architecture itself, one of the challenges on these platforms is limited storage. My switch did not have enough on-board disk space to complete the build, so I used a USB stick to provide the extra room. At the end of the build my build directory consumed ~214 MB, so plan accordingly if your switch does not have sufficient on-board space.

Assume root for all commands unless otherwise stated.

I mounted my USB stick at /mnt/USB:

mkdir /mnt/USB
# Use fdisk to confirm the USB device name.
fdisk -l
mount /dev/sda1 /mnt/USB
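
If space is tight, it's worth confirming the mount took and then doing all of the build work from the stick so the directories created below land there:

df -h /mnt/USB
cd /mnt/USB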

Add the sources

cat << EOT >> /etc/apt/sources.list
deb http://httpredir.debian.org/debian/ wheezy main contrib non-free
deb-src http://httpredir.debian.org/debian/ wheezy main contrib non-free

deb http://security.debian.org/ wheezy/updates main contrib non-free
deb-src http://security.debian.org/ wheezy/updates main contrib non-free

deb http://httpredir.debian.org/debian/ wheezy-updates main contrib non-free
deb-src http://httpredir.debian.org/debian/ wheezy-updates main contrib non-free

deb http://ftp.debian.org/debian/ wheezy-backports main non-free contrib
EOT
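
The new entries aren't visible to apt until the package lists are refreshed, so update before installing anything:

apt-get update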

Install the prerequisite packages

apt-get install git autoconf automake libtool make gawk libreadline-dev texinfo dejagnu pkg-config libpam0g-dev bison flex python-pytest libc-ares-dev python3-dev libjson-c-dev build-essential fakeroot devscripts

Install some out-of-repo prerequisites from source, as shown in the Ubuntu 12.04 LTS build guide.

Install newer bison from Ubuntu 14.04 package source:

mkdir builddir
cd builddir
wget http://archive.ubuntu.com/ubuntu/pool/main/b/bison/bison_3.0.2.dfsg-2.dsc
wget http://archive.ubuntu.com/ubuntu/pool/main/b/bison/bison_3.0.2.dfsg.orig.tar.bz2
wget http://archive.ubuntu.com/ubuntu/pool/main/b/bison/bison_3.0.2.dfsg-2.debian.tar.gz
tar -jxvf bison_3.0.2.dfsg.orig.tar.bz2 
cd bison-3.0.2.dfsg/
tar xzf ../bison_3.0.2.dfsg-2.debian.tar.gz 
sudo apt-get build-dep bison
debuild -b -uc -us
cd ..
sudo dpkg -i ./libbison-dev_3.0.2.dfsg-2_*.deb ./bison_3.0.2.dfsg-2_*.deb  # built as _powerpc.deb on this platform, hence the wildcard
cd ..
rm -rf builddir

Install newer versions of autoconf and automake:

wget http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz
tar xvf autoconf-2.69.tar.gz
cd autoconf-2.69
./configure --prefix=/usr
make
sudo make install
cd ..

wget http://ftp.gnu.org/gnu/automake/automake-1.15.tar.gz
tar xvf automake-1.15.tar.gz
cd automake-1.15
./configure --prefix=/usr
make
sudo make install
cd ..

Add frr groups and user

sudo groupadd -g 92 frr
sudo groupadd -r -g 85 frrvty
sudo adduser --system --ingroup frr --home /var/run/frr/ \
   --gecos "FRR suite" --shell /sbin/nologin frr
sudo usermod -a -G frrvty frr
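
It's worth a quick sanity check that the account and both groups exist before building:

id frr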

Download the source, then configure and compile it

git clone https://github.com/frrouting/frr.git frr
cd frr
./bootstrap.sh
./configure \
    --prefix=/usr \
    --enable-exampledir=/usr/share/doc/frr/examples/ \
    --localstatedir=/var/run/frr \
    --sbindir=/usr/lib/frr \
    --sysconfdir=/etc/frr \
    --enable-pimd \
    --enable-watchfrr \
    --enable-ospfclient=yes \
    --enable-ospfapi=yes \
    --enable-multipath=64 \
    --enable-user=frr \
    --enable-group=frr \
    --enable-vty-group=frrvty \
    --enable-configfile-mask=0640 \
    --enable-logfile-mask=0640 \
    --enable-rtadv \
    --enable-fpm \
    --with-pkg-git-version \
    --with-pkg-extra-version=-MyOwnFRRVersion   
make
make install
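
Since this installs shared libraries under /usr/lib, refreshing the dynamic linker cache afterward is a standard precaution:

ldconfig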

Most guides would end here but there’s a bit more required to get FRR functioning.

Create empty FRR configuration files

sudo install -m 755 -o frr -g frr -d /var/log/frr
sudo install -m 775 -o frr -g frrvty -d /etc/frr
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/zebra.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/bgpd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ospfd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ospf6d.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/isisd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ripd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ripngd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/pimd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/ldpd.conf
sudo install -m 640 -o frr -g frr /dev/null /etc/frr/nhrpd.conf
sudo install -m 640 -o frr -g frrvty /dev/null /etc/frr/vtysh.conf

Install the init.d service

sudo install -m 755 tools/frr /etc/init.d/frr
sudo install -m 644 tools/etc/frr/daemons /etc/frr/daemons
sudo install -m 644 tools/etc/frr/daemons.conf /etc/frr/daemons.conf
sudo install -m 644 -o frr -g frr tools/etc/frr/vtysh.conf /etc/frr/vtysh.conf

Enable your Routing Daemons

cat << EOT > /etc/frr/daemons
zebra=yes
bgpd=yes
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
EOT

Start FRR

service frr start
service frr status
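
If the service came up, the daemons should also answer through the integrated vtysh shell (built as part of this configure run); a quick check:

vtysh -c 'show version'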

Enable FRR at boot time for subsequent reboots

sudo update-rc.d frr defaults

Fix Exit Scripts

The init script's stop path calls "ip route flush proto <name>" for each daemon, but Wheezy's iproute2 does not recognize FRR's protocol names, so those flush commands fail. Substituting the numeric protocol values fixes it:

sed -i 's/ip route flush proto ripng/ip route flush proto 190 # ripng/' /usr/lib/frr/frr
sed -i 's/ip route flush proto bgp/ip route flush proto 186 # bgp/' /usr/lib/frr/frr
sed -i 's/ip route flush proto isis/ip route flush proto 187 # isis/' /usr/lib/frr/frr
sed -i 's/ip route flush proto ospf/ip route flush proto 188 # ospf/' /usr/lib/frr/frr
sed -i 's/ip route flush proto rip/ip route flush proto 189 # rip/' /usr/lib/frr/frr
sed -i 's/ip route flush proto static/ip route flush proto 191 # static/' /usr/lib/frr/frr
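
A quick grep confirms the substitutions landed:

grep 'flush proto' /usr/lib/frr/frr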

Hopefully that should do it for you. Now the next step is figuring out how to build a proper deb from the source. I’ll leave that process for next time 🙂

Troubleshooting Vagrant Libvirt Simulations

There are a lot of moving parts in a vagrant-libvirt simulation. Vagrant calls the vagrant-libvirt plugin, which controls libvirt, and libvirt in turn controls QEMU.

Vagrant → libvirt → QEMU

Our troubleshooting will focus on the libvirt component. The fix for Vagrant-side issues is to correct permissions and remove the files Vagrant uses to keep state; for libvirt itself, we'll use a series of virsh commands to manually correct issues.

Suggestions to keep yourself out of hot water when working with libvirt simulations:

  • Vagrant activities are unique per user, so perform all actions from a single non-root user account; do not mix and match

Common Troubleshooting Path:

1). Make sure your user is a member of the libvirtd group; check with the "id" command.

$ id
uid=1000(eric) gid=1000(eric) groups=1000(eric),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),113(lpadmin),128(sambashare),131(vboxusers),132(libvirtd)

NOTE: To add your user to the group, use this command, then log out and back in for the change to take effect.

sudo usermod -a -G libvirtd userName

2). Change ownership of everything in that user’s .vagrant.d directory back to the user in question.

$ sudo chown [user] -Rv ~/.vagrant.d/

3). List all Domains (VMs) and their storage volumes (Hard Drives)

$ virsh list --all
$ virsh vol-list default

Images are stored in /var/lib/libvirt/images/

4). Stop each VM, undefine it, and remove the virtual hard drive

$ virsh destroy vagrant_asdfasdf
$ virsh undefine vagrant_asdfasdf
$ virsh vol-delete --pool default vagrant_asdfasdf.img
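
If the simulation spawned many machines, repeating those three commands gets tedious. This sketch cleans them all up in one pass, assuming your domains share the vagrant_ name prefix and each volume is named <domain>.img in the default pool:

for dom in $(virsh list --all --name | grep '^vagrant_'); do
    virsh destroy "$dom" 2>/dev/null    # ignore the error if it's already stopped
    virsh undefine "$dom"
    virsh vol-delete --pool default "${dom}.img"
done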

The VM list should be empty now, and the volume list should not contain any volumes that correspond to VM names in your simulation.

$ virsh list --all
$ virsh vol-list default

5). Remove the hidden .vagrant directory in the simulation folder

$ rm -rfv ./.vagrant/

6). Try your Vagrant up now.

$ vagrant status
$ vagrant up --provider=libvirt

Have VMs that you’ve already removed stuck in the output of “vagrant global-status”?

Remove the machine-index file as follows:

rm ~/.vagrant.d/data/machine-index/index

Use Less Power on Your NAS

Inspired by reading this article http://sandeen.net/wordpress/computers/how-to-build-a-10w-6-5-terabyte-commodity-x86-linux-server/ I wanted to do some experimentation on my own NAS server.

The biggest power users (aside from the CPU) are the hard drives. In my NAS I have five 3TB WD Red drives and one 2TB WD Red drive, as well as a pair of Samsung 850-based SSDs. The spinning disks, of course, consume the most power. I determined this from the UPS the NAS is connected to, which has a little power meter built in; I'm sure it isn't extremely accurate, but it's good enough for my needs.

After much experimentation with the options provided in the article above, I found the only thing that made a noticeable difference on my NAS was spinning down the mechanical hard drives.

Using the following script I was able to cut my idle power draw from ~72 watts to ~54 watts, an 18 watt (25%) savings, which isn't too bad!

#!/bin/bash

# hdparm -S sets the standby (spindown) timeout; values 1-240 are in
# units of 5 seconds, so 12 = 60 seconds of idle before spindown.
echo "Setting power down to 60 seconds."
hdparm -S 12 /dev/sda
hdparm -S 12 /dev/sdb
hdparm -S 12 /dev/sdc
hdparm -S 12 /dev/sde
hdparm -S 12 /dev/sdf

hdparm -S 12 /dev/sdd

# hdparm -y forces the drive into standby immediately.
echo "Powering down hard drives immediately."
hdparm -y /dev/sda
hdparm -y /dev/sdb
hdparm -y /dev/sdc
hdparm -y /dev/sde
hdparm -y /dev/sdf

hdparm -y /dev/sdd
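
To confirm a drive has actually spun down, hdparm can report its power state:

hdparm -C /dev/sda    # reports "drive state is: standby" once spun down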

Controlling Docker from Within A Docker Container

I’ve been tinkering with a project to interact with the Docker-Engine api using docker-py. The catch is that the program is running inside a docker container.

We'll modify /lib/systemd/system/docker.service to bind the Docker daemon to a TCP port.

First create a loopback IP address

sudo ip addr add 10.254.254.254/32 dev lo

By default, the Unix socket used by Docker is inaccessible from inside the container; for this reason we need to use TCP.

Modify the ExecStart line to remove the Unix socket and instead point it at the address of your loopback and a port of your choosing. I used 2376 because that is what Docker uses on Windows, where Unix sockets are not available.

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket firewalld.service
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H 10.254.254.254:2376
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

Reload systemd daemons to pull in the new changes to the unit file.

sudo systemctl daemon-reload

Start Docker with the new settings.

sudo systemctl stop docker.service
sudo systemctl start docker.service

Modify your environment variables so you don't have to use the -H argument every time you call a docker CLI command.

export DOCKER_HOST=tcp://10.254.254.254:2376
echo "export DOCKER_HOST=tcp://10.254.254.254:2376" >> ~/.bashrc
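
Confirm the client can actually reach the daemon over TCP before going further:

docker version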

Run a new container in the new environment and install the prerequisites inside it. Note that the exec commands target the container name (test), not the image name, and run in the foreground so they complete in order.

docker run -itd --name=test ubuntu /bin/bash
docker exec -it test apt-get update
docker exec -it test apt-get install -y python python-pip
docker exec -it test pip install docker

Create a Test Script to Try In the Container

#!/usr/bin/python
import docker
import pprint

# PrettyPrint setup
pp = pprint.PrettyPrinter(indent=4)

# Docker API setup
#  Normal setup:
# client = docker.from_env()
# low_level_client = docker.APIClient(base_url='unix://var/run/docker.sock')
#  Custom setup over TCP:
client = docker.DockerClient(base_url='tcp://10.254.254.254:2376')
low_level_client = docker.APIClient(base_url='tcp://10.254.254.254:2376')

container_list = client.containers.list()
pp.pprint(container_list)
for container in container_list:
    # Inspect each container by its ID
    docker_container = low_level_client.inspect_container(container.id)
    pp.pprint(docker_container)

Run The Test Script

chmod +x ./test.py
docker cp ./test.py test:/root/test.py
docker exec -it test /root/test.py

Security: Don’t forget to secure your newly exposed port with iptables rules!

sudo iptables -t filter -A INPUT -i eth0 -p tcp -m tcp --dport 2376 -j DROP

Migrating From Machine to Machine with Smokeping (and Armbian)

After purchasing an Orange Pi, I have been looking for services it can host on my network. One of the first that comes to mind for me is Smokeping. I love Smokeping: it does one thing, and it does it well, a true embodiment of the Unix philosophy.

Enough glorification of Smokeping; how do we get it running?

In this case I wasn't starting from scratch. I already have an installation running on a Raspberry Pi elsewhere in my network. My first attempt had me copying all the files in the /etc/smokeping directory over to the Orange Pi directly. However, this did not work. What I came to find is that the RRD files are architecture-specific: while both the Raspberry Pi and Orange Pi are ARM-based, they are not the same version of ARM, and hence the RRD files are not directly compatible.

Starting fresh is no good here because I have years of Smokeping data in my existing install that I don't want to lose, so how do we migrate that data?

Scouring some obscure Smokeping mailing lists, I was able to put together this procedure.

NOTE: I'm assuming SSH keys have already been set up between the root accounts of the old and new machines.

#######################
# On the New Machine
####################### 
sudo su
# Install Smokeping
apt-get install smokeping rrdtool sendmail -y
systemctl stop smokeping
#######################
# On the Old Machine
#######################
sudo su

cat <<'EOT' >/tmp/smokeping_backup.sh  # quoted EOT so variables are written literally
#!/bin/bash

NEWMACHINE="192.168.1.100"

cd /var/lib/smokeping
echo "Stopping SMOKEPING"
service smokeping stop

echo "GENERATING XML..."
rm -v ./*/*.xml
for f in ./*/*.rrd; do echo ${f}; rrdtool dump ${f} > ${f}.xml; done

echo "SENDING FILES TO NEW MACHINE..."
scp -rv ./* root@$NEWMACHINE:/var/lib/smokeping/
scp -v /etc/smokeping/config.d/General root@$NEWMACHINE:/etc/smokeping/config.d/General
scp -v /etc/smokeping/config.d/Targets root@$NEWMACHINE:/etc/smokeping/config.d/Targets
scp -v /etc/smokeping/config.d/Probes root@$NEWMACHINE:/etc/smokeping/config.d/Probes

echo "CLEANING-UP AND RESTARTING SMOKEPING..."
rm -v ./*/*.xml
service smokeping start

echo "DONE!"
EOT

chmod +x /tmp/smokeping_backup.sh
/tmp/smokeping_backup.sh
#######################
# On the New Machine
#######################
sudo su
cat <<'EOT' > /tmp/smokeping_restore.sh  # quoted EOT so variables and backticks are written literally
#!/bin/bash

# convert XML to RRD
cd /var/lib/smokeping

systemctl stop smokeping

echo "REMOVING ANY EXISTING RRD FILES..."
rm -v ./*/*.rrd

echo "CONVERTING XML FILES BACK INTO RRD..."
for f in ./*/*.xml; do echo ${f}; rrdtool restore $f `echo $f |sed s/.xml//g`; done

echo "REMOVING Interim XML FILES..."
rm -v ./*/*.xml
sleep 1

echo "CHANGING OWNERSHIP ON SMOKEPING FILES..."
chown -v smokeping:www-data /var/lib/smokeping/*/*.rrd
chmod 755 -Rv /var/lib/smokeping

echo "STARTING SMOKEPING."
systemctl restart apache2
systemctl start smokeping

echo "DONE!"
EOT

chmod +x /tmp/smokeping_restore.sh
/tmp/smokeping_restore.sh
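
Before trusting the migration, spot-check one of the restored files; the target path below is Debian's default Local/LocalMachine example, so substitute one of your own:

rrdtool info /var/lib/smokeping/Local/LocalMachine.rrd | head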

This procedure converts the RRD files to an intermediary XML format that can then be converted back to RRD on the migration target. One catch: on Armbian, even with Smokeping installed, rrdtool itself was not installed. After installing rrdtool I operated on the subdirectories full of RRD files under /var/lib/smokeping/. Once converted to XML, I moved the XML files into place on the new machine and converted them back to RRD.

These scripts were used on my machines, hopefully they can help you too!

Fixing LLDPd Default PortID Configuration

LLDPd is a great program written by Vincent Bernat that implements 802.1AB (LLDP) for Unix-based OSes.

https://vincentbernat.github.io/lldpd/

Debian and Ubuntu offer this lovely utility right in their default software repositories, meaning it is only an apt-get install away. One caveat: the default configuration advertises the MAC address of the interface as the port ID instead of the interface name, which is unusual compared to the LLDP implementations of most network OS vendors like Cumulus, Cisco, Juniper, Arista, etc.

Here is what is sent by default as seen by the far side of the link:

-------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------
Interface: swp4, via: LLDP, RID: 2, Time: 0 day, 00:00:12
 Chassis: 
 ChassisID: mac 52:54:00:b9:e3:98
 SysName: server1
 SysDescr: Ubuntu 16.04 LTS Linux 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:46 UTC 2016 x86_64
 TTL: 120
 MgmtIP: 192.168.121.53
 MgmtIP: fe80::5054:ff:feb9:e398
 Capability: Bridge, off
 Capability: Router, off
 Capability: Wlan, off
 Capability: Station, on
 Port: 
 PortID: mac 44:38:39:00:00:06
 PortDescr: eth1
 PMD autoneg: supported: yes, enabled: yes
 Adv: 10Base-T, HD: yes, FD: yes
 Adv: 100Base-TX, HD: yes, FD: yes
 Adv: 1000Base-T, HD: no, FD: yes
 MAU oper type: 1000BaseTFD - Four-pair Category 5 UTP, full duplex mode
-------------------------------------------------------------------------

Luckily there are knobs to change that behavior. The configuration below sets LLDPd to advertise the interface name, which is what a peer network device would typically expect.

sudo apt-get install lldpd
sudo bash -c "echo 'configure lldp portidsubtype ifname' > /etc/lldpd.d/port_info.conf"
sudo systemctl restart lldpd.service
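
You can verify the running configuration locally with lldpcli before checking the far side of the link:

sudo lldpcli show configuration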

Here is the net result of the changed configuration:

-------------------------------------------------------------------------
Interface: swp4, via: LLDP, RID: 2, Time: 0 day, 00:00:21
 Chassis: 
 ChassisID: mac 52:54:00:b9:e3:98
 SysName: server1
 SysDescr: Ubuntu 16.04 LTS Linux 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:46 UTC 2016 x86_64
 TTL: 120
 MgmtIP: 192.168.121.53
 MgmtIP: fe80::5054:ff:feb9:e398
 Capability: Bridge, off
 Capability: Router, off
 Capability: Wlan, off
 Capability: Station, on
 Port: 
 PortID: ifname eth1
 PortDescr: eth1
 PMD autoneg: supported: yes, enabled: yes
 Adv: 10Base-T, HD: yes, FD: yes
 Adv: 100Base-TX, HD: yes, FD: yes
 Adv: 1000Base-T, HD: no, FD: yes
 MAU oper type: 1000BaseTFD - Four-pair Category 5 UTP, full duplex mode
-------------------------------------------------------------------------

Securely Wiping a Hard Drive

When getting rid of a hard drive, I, like everyone else, like to be secure about it. After collecting my data I usually overwrite the drive with garbage. In the past I just used the basic dd approach to zero the drive out.

dd if=/dev/zero of=/dev/sd<DRIVE>

This works fine, but I started hearing rumors that data can be recovered from a zeroed-out drive. This is indeed partially true, though zeroing out is probably sufficient in most cases.

Ideally, I would write data out from /dev/random or /dev/urandom (whatever your system has), but the amount of entropy harnessed there is not enough to saturate the write speed of the drive, meaning it will take a very long time. Nevertheless, I was curious to find out about another option to wipe a drive.

This approach uses OpenSSL with seed data from /dev/urandom. Supposedly it is possible to generate about 1.5 Gbps of garbage data with this technique… I'll never know, though, because the write speed of my drive is nowhere near that.

Command:

Use the following command to randomize the drive/partition using a randomly-seeded AES cipher from OpenSSL (displaying the optional progress meter with pv):

# openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt </dev/zero \
    | pv -bartpes <DISK_SIZE> | dd bs=64K of=/dev/sd"X"

where the (optional) total disk size in bytes (DISK_SIZE) may be obtained via:

# blockdev --getsize64 /dev/sd"X"
250059350016
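
Putting the two together, so the size doesn't have to be pasted by hand (same command as above, with the blockdev output substituted inline; adjust /dev/sdX for your target):

# openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt </dev/zero \
    | pv -bartpes "$(blockdev --getsize64 /dev/sdX)" | dd bs=64K of=/dev/sdX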

Sidenote: I love the use of pv here; it's an underloved utility that is truly awesome. I've only ever used it in one other place: tarring a remote file over SSH for delivery on my local machine (shown below).

ssh -c blowfish user@host "tar cjpf - /home/user/file" | pv | cat > ./file.tar.bz2