Should I Leave The Window Open?

As the days start to get cooler again in North Carolina, I’m always left with the same conundrum: should I leave the window open or not?

On one hand, I could enjoy a pleasant draft that cools my wife and me all evening without having to pay for air conditioning. On the other hand, I could wake up to a soaked wall because it rained in the middle of the night, or the air conditioning could kick on and pump lovely cool air that I’ve just paid for directly out the open window.

#firstworldproblems… I know, but it is nevertheless something I think about. So what can be done here? I’ve got a Nest thermostat and am a reasonably competent scripter.

Here is what I put together: a quick webpage that you can hit from any device on the home network and that displays the relevant info (screenshot: selection_064).

Turns out there are a few libraries that help us get off to a useful start.

  • weather-cli — pulls real-time hourly weather info for the next 24 hours based on a precise location.
  • flask — quickly serves up a webpage in Python.
  • python-nest — gets/sets data on Nest thermostats.

Interestingly, python-nest and the Nest API provide some hourly weather info in a pretty useful form. However, it is missing one pivotal piece of data: the actual weather conditions for those individual hours. Is it raining? Is it sunny? That very important datapoint is not present, and it is pretty much the primary reason I’m using weather-cli here.

There’s not a ton of documentation on weather-cli, but for the few things I wanted to know that weren’t documented there’s always the source code, and the code was helpful here.

Installing Software

sudo apt-get install python-pip
sudo pip install pip --upgrade
sudo pip install setuptools --upgrade
sudo pip install weather-cli flask python-nest

Setting Up Weather-CLI

To use weather-cli, you’re going to need a Forecast.io API key; register here to obtain one. The basic free plan gives you 999 API calls per day, enough to check the weather a couple hundred times per day, which is more than enough for me.

To get weather-cli set up we need to use the setup argument.

pi@underhousepi ~/scripts/weather_predictor $ sudo weather-cli --setup

Enter your forecast.io api key (required):xxxxxxxxxxxxxxxxx
Warning:
The script will try to geolocate your IP.
If you fill the latitude and longitude, you avoid the IP geolocation.
Use it if you want a more acurated result, use a different location,
or just avoid the IP geolocation.
Check your latitude and longitude from http://www.travelmath.com/

Enter the latitude (optional): 36.982917
Enter the longitude (optional): -75.402301
generating config file...
setup complete

Once setup is complete we can simply call weather-cli and it will return data for us based on the location we used for initial setup.

The rest is writing a quick flask app; the code is in the gist below. You’ll notice the flask app calls a template. I’ve placed that index.html template in a templates directory as shown below.

https://gist.github.com/ericpulvino/b7753c1f598642f3bbd64895553232db

Note that I’m also calling the Nest API to query the target temperature of the Nest device and using that as the basis for whether or not the window should be open. Alternatively, you could hard-code that value if you don’t have a Nest, or for testing purposes. I also have more than one Nest in my home, so on line 29 I’m querying devices[1] to reach the second Nest in my environment. You may want to change this to devices[0] or another value based on your output from the python-nest API’s documentation page.
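At its core, the decision boils down to a simple comparison. Here is a minimal sketch of that logic; the function and variable names (and the 30% rain threshold) are my own for illustration, not taken from the gist:

```python
# Hypothetical sketch of the window decision; names and the rain
# threshold are my own, not from the gist above.

def window_should_be_open(outside_temp_f, target_temp_f, rain_chance):
    """Open the window only when the outside air is cooler than the
    thermostat's target temperature and rain looks unlikely."""
    if rain_chance >= 0.3:   # skip if there's a meaningful chance of rain
        return False
    return outside_temp_f < target_temp_f

# Example: 62F outside, Nest target of 70F, 10% chance of rain
print(window_should_be_open(62, 70, 0.10))  # True
```

The real page repeats this check for each of the next several hours so you can see when to close the window again.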

user@raspi4 ~/scripts/window_open $ tree
.
├── templates
│   └── index.html
└── window_open.py

Here is the code for the template too.

https://gist.github.com/ericpulvino/8196a057493dec94d0f48c495ad5f706

Hope this might save you a few minutes in your search for cool air at an affordable price.


Handling SSH Protocol Links in Chrome (on Linux)

As a network engineer, I have been annoyed by not being able to click on SSH links in webpages for years while running Linux.

After digging in a bit, I was able to find a solution that works for SSH links in Chrome.

I’m running Ubuntu 16.04 in my setup.

My SSH URL links look like this:

ssh:///user@host:port

Here is the process I used to get these links working.

#Check which handler is setup for SSH
xdg-mime query default x-scheme-handler/ssh

#Write code for new handler
cat << EOF > ~/.local/share/applications/ssh.desktop
[Desktop Entry]
Version=1.0
Name=SSH Launcher
Exec=bash -c '(URL="%U" HOST=\$(echo \${URL:6} | cut -d ":" -f1) PORT=\$(echo \${URL:6} | cut -d ":" -f2); ssh \$HOST -p \$PORT); bash'
Terminal=true
Type=Application
Icon=utilities-terminal
EOF

#Apply New handler for SSH
xdg-mime default ssh.desktop x-scheme-handler/ssh

#Confirm new handler has been applied
xdg-mime query default x-scheme-handler/ssh
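The quoting in that Exec one-liner is dense. Purely for illustration, here is the same host/port extraction sketched in Python (the .desktop file still needs the bash version; parse_ssh_url is a hypothetical helper of mine, and it also tolerates the extra slash in ssh:/// style links):

```python
# Illustrative Python version of the host/port parsing done by the
# bash one-liner in the Exec entry above.

def parse_ssh_url(url):
    rest = url[len("ssh://"):].lstrip("/")  # strip scheme and any extra slash
    if ":" in rest:
        host, port = rest.rsplit(":", 1)
    else:
        host, port = rest, "22"             # default SSH port
    return host, int(port)

print(parse_ssh_url("ssh://user@myhost:2222"))  # ('user@myhost', 2222)
```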

Manually Installing Plugins in Vagrant

It looks like there have been some changes in Vagrant between v1.7.4 and v1.8.4 that no longer allow me to install plugins locally.

I kept getting failures that looked like this:

vagrant plugin install ./vagrant-libvirt-0.0.33.gem 
Installing the './vagrant-libvirt-0.0.33.gem' plugin. This can take a few minutes...
Bundler, the underlying system Vagrant uses to install plugins,
reported an error. The error is shown below. These errors are usually
caused by misconfigured plugin installations or transient network
issues. The error from Bundler is:

Could not find gem 'vagrant-libvirt (= 0.0.33)' in any of the gem sources listed in your Gemfile or available on this machine.

Warning: this Gemfile contains multiple primary sources. Using `source` more than once without a block is a security risk, and may result in installing unexpected gems. To resolve this warning, use a block to indicate which gems should come from the secondary source. To upgrade this warning to an error, run `bundle config disable_multisource true`.

So, based on my read of https://github.com/mitchellh/vagrant/issues/5643, there is a workaround: host a local gem server and let the install proceed that way, and sure enough it works. Here are the steps for that workaround as they pertain to the vagrant-libvirt plugin for Vagrant.

sudo apt-get install ruby-dev zlib1g-dev

#Download and build the Vagrant-libvirt plugin
git clone https://github.com/vagrant-libvirt/vagrant-libvirt.git
cd vagrant-libvirt/
gem build vagrant-libvirt.gemspec

#workaround for Local Gem Install Failure
#https://github.com/mitchellh/vagrant/issues/5643

#Install it locally
sudo gem install ./vagrant-libvirt*.gem

#Serve the locally installed gems on localhost:8808
sudo gem server & 

#Install the vagrant plugin while pointing at a local gemserver
vagrant plugin install vagrant-libvirt --plugin-source http://localhost:8808

# Turn off the gem server
kill %1

Autoplaying Files in Kodi Isengard

Almost all the effects of the name change from XBMC to Kodi have been realized at this point, except in a few small areas. One of those areas is using the on-box API to send commands into Kodi or to automate certain actions.

In my case I was trying to automatically play a file as soon as my Kodi player boots. To do that you simply place a file with the right content in the right place.

My System Details:

  • Hardware: Raspberry Pi 2 /w Edimax EW-7811UTC AC600 Wifi Adapter
  • Software: OpenELEC v6.0.3 (Kodi 15.2 Isengard)

Specifically, create a file called autoexec.py in the /storage/.kodi/userdata/ directory.

import xbmc
xbmc.executebuiltin( "PlayMedia(smb://192.168.1.20/usenet/Baby_Stream.strm)" )
xbmc.executebuiltin( "PlayerControl(repeat)" )

If you’re trying to test the script on the CLI via SSH using the Python interpreter, you might notice that importing the xbmc module fails.

OpenELEC:~ # python
Python 2.7.3 (default, Feb 29 2016, 21:17:05) 
[GCC 4.9.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import xbmc
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
ImportError: No module named xbmc
>>>

This is expected because the xbmc module is not exposed to the default python path.
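If you want the script to at least run cleanly outside Kodi, one option is to guard the import. A sketch (the fallback print is my own addition):

```python
# Guarded import so the script can be exercised from a normal shell,
# where the xbmc module does not exist.
try:
    import xbmc
except ImportError:
    xbmc = None

if xbmc is not None:
    # Inside Kodi: play the stream and set it to repeat.
    xbmc.executebuiltin('PlayMedia(smb://192.168.1.20/usenet/Baby_Stream.strm)')
    xbmc.executebuiltin('PlayerControl(repeat)')
else:
    print("xbmc not available; run this via kodi-send instead")
```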

Testing your Script:

To test your script, have Kodi run it through its Python namespace using the kodi-send command:

OpenELEC:~ # kodi-send -a "RunScript(/storage/.kodi/userdata/autoexec.py)"

Quick and easy automation in Kodi. To see other functions that can be called via kodi-send or automated in xbmc.executebuiltin statements, check Kodi’s official docs on the subject.

Udev Network Interface Renaming with no Reboot

While working on the topology_converter for work I came upon several lessons with udev. The topology_converter project essentially takes input (from a Graphviz file) and builds a network topology with proper interface names. To make the interface names work, there is a script which spits out udev rules.

Writing Udev Rules

With udev you can rename interfaces using a number of parameters which are defined in rules. By convention, rules should go in the “/etc/udev/rules.d/70-persistent-net.rules” file, but you could technically put them anywhere in the rules.d directory.

To see all of the possible criteria that can be matched upon for a given network interface, use the command below replacing “eth0” with your interface of choice.

udevadm info -a -p /sys/class/net/eth0

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

looking at device '/devices/pci0000:00/0000:00:19.0/net/eth0':
 KERNEL=="eth0"
 SUBSYSTEM=="net"
 DRIVER==""
 ATTR{addr_assign_type}=="0"
 ATTR{addr_len}=="6"
 ATTR{address}=="54:ee:75:22:3d:70"
 ATTR{broadcast}=="ff:ff:ff:ff:ff:ff"
 ATTR{carrier}=="0"
 ATTR{carrier_changes}=="1"
 ATTR{dev_id}=="0x0"
 ATTR{dev_port}=="0"
 ATTR{dormant}=="0"
 ATTR{duplex}=="unknown"
 ATTR{flags}=="0x1003"
 ATTR{gro_flush_timeout}=="0"
 ATTR{ifalias}==""
 ATTR{ifindex}=="2"
 ATTR{iflink}=="2"
 ATTR{link_mode}=="0"
 ATTR{mtu}=="1500"
 ATTR{netdev_group}=="0"
 ATTR{operstate}=="down"
 ATTR{proto_down}=="0"
 ATTR{speed}=="-1"
 ATTR{tx_queue_len}=="1000"
 ATTR{type}=="1"

looking at parent device '/devices/pci0000:00/0000:00:19.0':
 KERNELS=="0000:00:19.0"
 SUBSYSTEMS=="pci"
 DRIVERS=="e1000e"
 ATTRS{broken_parity_status}=="0"
 ATTRS{class}=="0x020000"
 ATTRS{consistent_dma_mask_bits}=="64"
 ATTRS{d3cold_allowed}=="1"
 ATTRS{device}=="0x15a2"
 ATTRS{dma_mask_bits}=="64"
 ATTRS{driver_override}=="(null)"
 ATTRS{enable}=="1"
 ATTRS{irq}=="56"
 ATTRS{local_cpulist}=="0-3"
 ATTRS{local_cpus}=="0f"
 ATTRS{msi_bus}=="1"
 ATTRS{numa_node}=="-1"
 ATTRS{subsystem_device}=="0x2227"
 ATTRS{subsystem_vendor}=="0x17aa"
 ATTRS{vendor}=="0x8086"

looking at parent device '/devices/pci0000:00':
 KERNELS=="pci0000:00"
 SUBSYSTEMS==""
 DRIVERS==""


You can see there are quite a few options to match on. When remapping physical interfaces on Linux, I strongly recommend including the PCI match (SUBSYSTEMS=="pci", as in the samples below) to make sure the interface is actually tied to the PCI bus in some way. The concern when not using the PCI match is that physical interfaces may take part in bridges or bonds with VLANs or subinterfaces; a bridge or bond can inherit the MAC address of a physical interface, which creates a collision in the renaming process and can leave your interfaces named “renameXX” or something like that.

Here are some sample Udev rules for a given series of interface renaming operations.

#### UDEV Rules (/etc/udev/rules.d/70-persistent-net.rules) ####
ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:1a", NAME="swp2", SUBSYSTEMS=="pci" 
ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:12", NAME="swp1", SUBSYSTEMS=="pci" 
ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:49", NAME="swp48", SUBSYSTEMS=="pci" 
ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:42", NAME="eth0", SUBSYSTEMS=="pci" 
ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="08:00:27:8a:39:05", NAME="vagrant", SUBSYSTEMS=="pci"
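For context, topology_converter generates rules like these from a MAC-to-name mapping. Here is a minimal sketch of that generation step (my own simplified version, not the project's actual code):

```python
# Minimal sketch of generating udev rename rules from a MAC -> name map.
# This is my own simplified version, not topology_converter's code.

RULE = ('ACTION=="add", SUBSYSTEM=="net", ATTR{{address}}=="{mac}", '
        'NAME="{name}", SUBSYSTEMS=="pci"')

def render_rules(mac_to_name):
    """Return one udev rule line per interface, sorted by MAC."""
    return "\n".join(RULE.format(mac=mac, name=name)
                     for mac, name in sorted(mac_to_name.items()))

print(render_rules({"44:38:39:00:00:12": "swp1"}))
```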

Applying the New Rules

Now that you’ve written the new rules it’d be nice to apply them without having to reboot.

EDIT: In Ubuntu 16.04 you have another, simpler option:

systemctl restart systemd-udev-trigger.service

 

Otherwise, the procedure can be a little complicated and is totally disruptive to network traffic, likely on all interfaces, but it looks like this:

  1. Detect the driver used by each interface that requires a remap. The easiest way is to use ethtool:
    $ ethtool -i eth0
    driver: e1000e
    version: 3.2.6-k
    firmware-version: 0.2-4
    expansion-rom-version: 
    bus-info: 0000:00:19.0
    supports-statistics: yes
    supports-test: yes
    supports-eeprom-access: yes
    supports-register-dump: yes
    supports-priv-flags: no

    You could potentially use this technique:

    $ basename $(readlink /sys/class/net/eth0/device/driver/module)

    or this one:

    $ basename $(readlink /sys/class/net/+interface+/device/driver)

    YMMV depending on the driver in use.

  2. Remove the driver that is shared/used by each interface that is to be remapped (other interfaces that are using that driver may get caught in the crossfire here).
    $ modprobe -r e1000e
  3. Run the following command to detect the newly installed rules
    $ udevadm control --reload-rules
  4. Apply the new rules with the last command
    $ udevadm trigger

Applying the new rules with the trigger operation will also reinitialize the driver that you’ve previously removed.
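The driver detection in step 1 can also be scripted by following the sysfs symlink directly, the same logic as the basename/readlink one-liners above. A sketch (driver_for is a hypothetical helper name of mine):

```python
import os

def driver_for(iface, sysfs="/sys/class/net"):
    """Return the kernel driver bound to a network interface by
    resolving the /sys/class/net/<iface>/device/driver symlink."""
    link = os.readlink(os.path.join(sysfs, iface, "device", "driver"))
    return os.path.basename(link)

# e.g. driver_for("eth0") -> "e1000e" on the machine shown above
```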

Presto you’re done.

Using Nautilus in Ubuntu 14.04 and 16.04 as a Box Client

It is not well known that the Nautilus file manager in GNOME can be used as a client to access Box shares. This article documents how to set up that connectivity, mostly as a reminder for when I need to do it later.

1). Open Nautilus and select “Connect to Server”.

Select Connect To Server in Nautilus File Browser

2). Fill in the “Server Address” as follows:

davs://username%40yourdomain.com@dav.box.com/dav

Note: the ‘%40’ is the URL encoding of ‘@’; you must leave it there exactly as shown.
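Rather than hand-encoding, you can let Python compute the escaped form, which confirms that ‘@’ becomes ‘%40’ (a quick illustration using the standard library; the username is the same placeholder as above):

```python
from urllib.parse import quote

user = "username@yourdomain.com"       # your Box login (placeholder)
encoded = quote(user, safe="")         # percent-encode everything, incl. '@'
url = "davs://{}@dav.box.com/dav".format(encoded)
print(url)  # davs://username%40yourdomain.com@dav.box.com/dav
```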


3). Click Connect and, when prompted, enter your external password for Enterprise Box. This should bring up a File Browser window showing you the files you have in your Box space.

 

If your password doesn’t work, you may need to create an “External Password” for use with apps (like Nautilus) that do not have access to your single sign-on (SSO) system.

1). Log into your Box account from the website.

2). At the upper right, click your name. From the drop-down menu, click Account Settings.

3). Near the bottom, under “Create External Password”, click Edit password, then set a password and save it when finished.

4). Try this password in the above procedure.

 

Heavy Duty (and Cheap) Workbench

After months of searching on Craigslist I was not able to find a 7+ foot workbench that was made of solid wood. I was getting increasingly frustrated so I began looking online for different plans to make my own.

Workbench Criteria:

  • 7 to 8 ft long, ~2 ft deep
  • THICK wooden top (I wanted this more for aesthetics than anything else)
  • Heavy-duty construction (I didn’t want to think twice about putting 500 pounds on it)

After a while of searching I found an excellent starting point in an old Family Handyman article. I referenced that article for all the steps of constructing the base and tabletop, with several modifications:

  • I wanted a 2×6 base mainly for looks but also because I intend to keep this workbench for my lifetime and want it to last at least that long.
  • I also added some shelving to the crossbars underneath because it should have been there from the start and I had some leftover lumber from my earlier garage shelving project.
  • I added a 45 degree chamfer on the table top since this is just pine and could otherwise be pretty easily marred on the corners.
  • Lastly I inset a T-square in the corner of the table because I had an extra one lying around and I thought it could be useful.

Cost Breakdown

Materials:

  • $61 — Lumber
  • $50 — Used Vise off of Craigslist (looked new to me)
  • $20 — Lag Bolts and Hardware
  • $5 — Consumables (Wood Glue)
  • FREE — 3″ Deck Screws — I had these leftover
  • FREE — L Square — I had an extra

Tools:

  • $50 — New Table Saw Blade
  • $56 — (3) 36″ Clamps

~$250 total, much of it in new tools that I would have bought for something else anyway. All in all, I could not be happier with the result; it was just what I wanted and meets every one of my needs.

Bringing the lumber home

This little car has never carried so much lumber in its entire life. So many 2x4s but I made it all in one trip!

Constructing the workbench

Adding the Finishing Touches

Finished.

IMG_20140731_225532
Just what I always wanted for this space.