HOWTO: Add OAUTH to your Alexa Smart Home skill in 10 minutes

Alexa smart home skills require you to provide OAUTH2 so that users can authorise a skill to access the assumed cloud service powering their lightbulbs or any number of other pointlessly connected devices.  This makes sense, since OAUTH2 is a standard and secure way to grant users of one system access to the resources of another.  However, with this come a few caveats which are potential blockers for casual skill developers like me.  If you’re writing a skill for your own personal use, with no intention of adding it to the store, you still have to have a valid and recognised SSL certificate and a whole OAUTH2 server set up somewhere.

The SSL certificate is easy enough to implement, but it’s a bit of a faff (renewing Let’s Encrypt certs, or paying for a cert, which means dealing with the certificate authorities, sending in scans of your passport and other tedious red tape).  In my opinion, though, setting up an OAUTH server is even more of a faff.  If only there was some way to avoid having to do either of these things…

Using “Login With Amazon” as your OAUTH provider

Since you already have an Amazon account you can use “Login With Amazon” as your skill’s OAUTH server and your normal everyday Amazon account as your credentials.  You’re only sharing your Amazon account data with yourself, and even then it can be restricted to just your login ID.  You don’t actually need to do anything with the OAUTH token once it’s returned since you’re the only user.  I mean, you could if you wanted to, but this HOWTO assumes that you’re the only user and that you don’t care about that sort of thing.  I’m also going to assume that you have already created the Lambda function and the smart home skill, or are familiar with how to do that.  This is a bit tricky because you can’t test your smart home skill on a real device until you’ve implemented OAUTH, and you can’t complete the OAUTH set-up until you’ve got the IDs from your Lambda function and skill.  If you haven’t written your skill yet, just create a placeholder Lambda function and smart home skill to be going on with.

Much of this information is available from the official Amazon instructions available here: https://developer.amazon.com/public/community/post/Tx3CX1ETRZZ2NPC/Alexa-Account-Linking-5-Steps-to-Seamlessly-Link-Your-Alexa-Skill-with-Login-wit. What follows is a rehash and slight reorganisation of that doc which is hopefully a bit easier to follow.

1. Create a new Login With Amazon Security Profile

From the Amazon Developer Console, go to Apps & Services -> Login With Amazon, or go straight to https://developer.amazon.com/lwa/sp/overview.html

Click “Create a New Security Profile”.  Fill out the form along these lines:

[Screenshot: the “Create a New Security Profile” form]

and hit Save.

You should see a message along the lines of “Login with Amazon successfully enabled for Security Profile.”

Hover the mouse over the cog icon to the right of your new security profile and choose “Security Profile”.

Copy your “Client ID” and “Client Secret” and paste them into a notepad.  You’ll need them again shortly.

[Screenshot: the Client ID and Client Secret]

2. Configure your skill to use Login With Amazon

Back in the Developer Console, navigate to the Configuration page for your skill.  (Click on your skill, then click on Configuration).  You need to enable “Account Linking” and this will then show the extra boxes discussed below.

Into the “Authorization URL” box you should put:

https://www.amazon.com/ap/oa/?redirect_url=

and then copy the Redirect URL from further down the page and append it to the end of the Authorization URL.  For example:

https://www.amazon.com/ap/oa/?redirect_url=https://layla.amazon.com/api/skill/link/1234ABCD1234AB

[Screenshot: the completed Authorization URL field]

As far as I can tell Layla is for UK/Europe and Pitangui is for the US.  Use the appropriate one for you.  Also, keep a note of the Redirect URL in your notepad; you will need it again later.

Into the “Client Id” box paste your Client ID from step 1.

You can leave “Domain List” blank for now.

For “Scope” I suggest you use:

profile:user_id

This will give your Alexa Skill access to a minimal amount of information about you from Amazon, in this case just your user_id.  Since you don’t really have any customers for your skill, only you, there is no reason to provide access to any other information.
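Should you ever want to do something with the token, here is a minimal sketch of looking up the user_id it belongs to, using the Requests module.  (This is illustrative, not part of the skill set-up: the token is whatever Alexa passes to your Lambda function once the account is linked.)

import requests

def get_user_id(access_token):
    # Ask Login With Amazon whose token this is.  With scope
    # profile:user_id the response contains just the user_id.
    r = requests.get("https://api.amazon.com/user/profile",
                     headers={"Authorization": "Bearer " + access_token})
    r.raise_for_status()
    return r.json()["user_id"]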

Further down the page you need to configure the Grant Type:

[Screenshot: the Grant Type settings]

Select “Auth Code Grant”.

Set the “Access Token URI” to:

https://api.amazon.com/auth/o2/token

and into “Client Secret” paste your secret from step 1.
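(For the curious: during account linking the Alexa service swaps the authorisation code for a token by POSTing to that URI, roughly like the sketch below.  You never have to do this yourself, and all the values shown here are made up.)

import requests

resp = requests.post("https://api.amazon.com/auth/o2/token", data={
    "grant_type": "authorization_code",
    "code": "ANxyz",                  # the code returned by Login With Amazon
    "client_id": "amzn1.application-oa2-client.1234",  # from step 1
    "client_secret": "your-client-secret",             # from step 1
    "redirect_uri": "https://layla.amazon.com/api/skill/link/1234ABCD1234AB",
})
print(resp.json()["access_token"])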

You must include a link to your “Privacy Policy URL“.  Since you are the only person who cares you could host a blank file somewhere, or maybe link to a Rick Astley video on YouTube?

Finally hit Save.

3. Link Login With Amazon back to your Skill

Head back to the Login With Amazon page: https://developer.amazon.com/lwa/sp/overview.html

Hover over the cog of your Security Profile and choose Web Settings:

[Screenshot: Web Settings, showing Allowed Return URLs]

Into the “Allowed Return URLs” box paste your Redirect URL from step 2 and hit Save.

4. Log in to Amazon from your skill and do the OAUTH dance

From the Alexa app on your phone navigate to your new Smart Home Skill and you’ll see that it says “Account Linking Required“.

[Screenshot: the skill showing “Account Linking Required”]

Click “Enable Skill” and you’ll be asked to login with your Amazon credentials:

[Screenshot: the Amazon login page]

Once you log in you should see a success message:

[Screenshot: the account linking success message]

And you’re done.


bravialib – a Python library to abstract the Bravia web API

I posted this to GitHub, but thought I would mirror it here too. You can download the code from here:

https://github.com/8none1/bravialib


bravialib

A Python library for talking to some Sony Bravia TVs, and an accompanying Alexa Skill to let you control the TV by voice. If you like that sort of thing.

These scripts make use of the excellent Requests module. You’ll need to install that first.
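If you haven’t already got it, something like this should do the trick:

pip install requests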

bravialib itself

This is a fairly simple library which allows you to “pair” the script with the TV and will then send cookie-authenticated requests to the TV’s own web API to control pretty much everything. You can:

  • Set up the initial pairing between the script and the TV
  • Retrieve system information from the TV (serial, model, mac addr etc)
  • Enumerate the available TV inputs (HDMI 1, 2, 3, 4, etc.)
  • Switch to a given input
  • Enumerate the remote control buttons
  • Virtually press those buttons
  • Find out what Smart TV apps are available
  • Start those apps
  • Enumerate the available DVB-T channels
  • Switch to those channels
  • Send Wake On Lan packets to switch the TV on from cold (assuming you’ve enabled that)
  • Use a couple of convenience functions

The library tries to hide the complexity and pre-requisites and give you an easy to use API.

I built this for a couple of reasons: first, because the TV had an undocumented API, and that tickles me; second, I quite fancied hooking it up to Alexa for lols.

Most of the information about how to talk to the TV’s API came from looking at packet captures from the iPhone app “TV Sideview”.

There is a script called testit.py that will give you a few clues about how to use it, but it’s a bit of a mess. I’ve left a lot of comments in the code for the library which should help you.

Really, I think that this library should be imported into a long-running process rather than be called every time you want to press a remote control button. On a Raspberry Pi, Requests can take a while (a couple of seconds) to import, and then bravialib pre-populates a few data sources, and all of that takes time – about 20 seconds – so you really don’t want to use this library if you just want to fire off a few remote control commands. Also, be aware that the TV takes a long time to boot and accept commands. From cold you’re talking about a minute, maybe two – it’s really annoying.

The aforementioned long running process – bravia_rest.py

As the main library takes a while to start and needs a certain amount of data from the TV to work properly, it really makes sense to start it up once and then leave it running as long as you can. The bravia_rest.py script does exactly that, and also exposes some of the functionality as a very crude REST interface so that you can easily hook it in to various home automation systems.

First you need to add the TV’s IP address and MAC address to the script (the MAC address is needed to turn on the TV the first time the script is run; it can be discovered automatically if you just power the TV on for a few minutes before you run the script).

Then run bravia_rest.py.

If this is the first time you have run it you will need to pair with the TV. You will be told to point your browser at the IP address of the machine where the script is running, on port 8090 (by default). Doing this will make the script attempt to pair with the TV. If it works you will see a PIN on the TV screen; enter this into the box in your browser. After a few seconds, and with a bit of luck, pairing will complete. This shouldn’t take too long.

If you are now paired, in your browser go to /dumpinfo for a view into what the script knows about the TV.

Once everything is running you can POST to these URLs for things to happen (no body is required):

  • /set/power/[on|off] – turns the telly on and off
  • /set/send/<button> – e.g. mute, play, pause, up, down. See dumpinfo for all the key names.
  • /set/volumeup/3 – turn the volume up 3 notches. You MUST pass a number, even if it’s just 1.
  • /set/volumedown/1 – as above.
  • /set/loadapp/<app name> – e.g. Netflix, iplayer. Again /dumpinfo will show you what apps are available.
  • /set/channel/<channel> – e.g. BBC ONE, BBC TWO
  • /set/input/<input label> – You need to have given your inputs labels on the TV, then pass the label here.
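For example, from Python (or anything else that can make an HTTP POST).  The host and port here are assumptions – point it at wherever bravia_rest.py is running:

import requests

BASE = "http://192.168.0.10:8090"        # hypothetical address of bravia_rest.py

requests.post(BASE + "/set/power/on")    # wake the TV
requests.post(BASE + "/set/send/mute")   # press the mute button
requests.post(BASE + "/set/volumeup/3")  # turn it up three notches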

Hooking it up to Alexa

Now that we can poke the TV through a simplified REST interface, it’s much easier to hook it in to other things, like Alexa for example. Setting up a custom skill in AWS/Lambda is beyond the scope of what I can write up at lunchtime, and I’m sure there are lots of other people who have done it better than I could. You’ll need to create a custom app and upload a Python Deployment Package to Lambda including my lambda_function.py script (see inside the Alexa directory), a secrets.py file with your info in it, a copy of the Requests library (you need to create a Python Virtual Environment – it’s quite easy) and possibly a copy of your PEM for a self-signed HTTPS certificate. I’ve also included the skills data such as the utterances that I’m using. These will need to be adjusted for your locale.

You can read more about Python deployment packages and AWS here:
http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html
http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python

Here’s how it works:

[Block diagram: Alexa → AWS/Lambda → home web server (proxy) → bravia_rest.py → TV]

  1. You issue the command to Alexa: “Alexa, tell The TV to change to channel BBC ONE.”
  2. Your voice is sent to AWS (the green lines), decoded, and the utterances and intents etc. are sent to the Lambda script (there’s a sketch of this after the list).
  3. The Lambda script works out what the requested actions are and sends them back out (the orange lines) to a web server running in your home (in my case a Raspberry Pi running Apache and the bravia_proxy.py script). You need to make that Apache server accessible to the outside world so that AWS can POST data to it. I would recommend that you configure Apache to use SSL and you put at least BASIC Auth in front of the proxy script.
  4. The bravia_proxy.py script receives the POSTed form from AWS and in turn POSTs to the bravia_rest.py script having done a quick bit of sanity checking and normalisation. The proxy and the rest scripts could live on different hosts (and probably should, there are no security considerations in either script – so ya know, don’t use them.)
  5. bravia_rest.py uses bravialib to poke the TV in the right way and returns a yes or a no, which then flows back (the blue lines) to AWS and your Lambda function.
  6. If everything worked you should hear “OK” from Alexa and your TV should do what you told it.
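This is not the actual lambda_function.py from the repo, just a minimal sketch of the shape of such a handler – the intent name, slot name, proxy URL and credentials are all made up:

import requests

PROXY_URL = "https://example.dyndns.org/bravia_proxy"  # your Apache-fronted proxy
AUTH = ("alexa", "a-long-password")                    # the BASIC auth you configured

def lambda_handler(event, context):
    intent = event["request"]["intent"]
    channel = intent["slots"]["Channel"]["value"]      # e.g. "BBC ONE"
    r = requests.post(PROXY_URL,
                      data={"action": "channel", "value": channel},
                      auth=AUTH)
    text = "OK" if r.ok else "Sorry, the TV didn't respond"
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText", "text": text},
                         "shouldEndSession": True}}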

I could have put bravia_rest.py straight on the web and implemented some basic auth and SSL there – but I think this is something better handled by Apache (or whichever server you prefer), not some hacked up script that I wrote.

Caveats:

  • It doesn’t deal with the TV being off at all well at the moment.
  • I don’t know what happens when the cookies expire.
  • I haven’t done much testing.
  • I have no idea what I’m doing.

24 hours with Amazon Prime

Preamble

As an analytically minded person I had to ask: is it worth getting Amazon Prime?

I signed up yesterday and spent the evening setting up the various services.  It was on offer with £20 off, it was the day before The Grand Tour came out, and I recently bought an Amazon Dot – well played Amazon, well played.

As I’m sure you’re aware Amazon Prime comes with a few bundled goodies to sweeten the deal. But are they actually useful? I think the real value is the next day delivery (if you order lots of things from Amazon), the online photo storage and perhaps the video. Everything else falls short of being good enough to be considered a real product in its own right.  Yes, I know that’s on purpose – but the marketing material would have you believe otherwise.  How about that.

The loan of a book a month from the Kindle Lending Library

Rating: 4/10 (based on going to an actual library being 8/10)
You’re not going to find many books you actually want to read in there. Browsing the catalogue from your desktop is virtually impossible and it’s been made that way deliberately. This is frustrating and annoying. Also annoying is that if you use the Kindle app on your phone you get nagged to upgrade to Kindle Unlimited all the bloody time.  You can share this with someone else on your household account, but they won’t thank you for it.

Prime Music

Rating: 6.5/10 (based on Spotify Premium being 10/10)
Better than I expected, but still full of annoyances. And again with the constant nagging to upgrade to Music Unlimited.
The selection is ok. It feels like they’ve taken the time to make it just-good-enough that you will use it but not good enough that you won’t consider upgrading.

You can’t share this perk with anyone else in your household, and since there’s no point in two people in your household having a Prime account each, you end up setting up another Chrome profile just to access the music service for managing playlists etc. Also annoying is that you can’t play music on an Echo or Dot and play music in the browser at the same time. This raises the question of what will happen if you have two Amazon Alexa devices on the same account and you try and play music on both. If you can’t (and I don’t know yet, perhaps someone can comment if they do) then one of the good features of Alexa is somewhat spoiled.  (Edit: some searching suggests that indeed you cannot listen to Prime Music on more than one device at a time.)

This service is a very good replacement for the kitchen radio but not a replacement for even a free Spotify subscription.  If you already have a premium Spotify account then this will be of no interest to you at all.

Prime Video

Rating: 7.5/10 (based on Netflix being 10/10)

Not so many nag screens here, but guess what, you still need to spend more money for the full experience.  It’s a bit galling to discover that some of the headline shows need to be paid for (Game Of Thrones for example).  But, there is some genuinely good exclusive content here, The Grand Tour for example.  I don’t see myself watching it more than Netflix but it’s worth, say, 3 quid a month of the £6.50 a month Prime subscription.

Installing on an iPhone was easy enough, but on Android it’s outrageously bad.  You have to install Amazon’s “Underground” app store, and to do that you have to enable “Unknown Sources”.  Unlol.  The app works fine, and you can uninstall Underground once it’s on there.  The app on my Sony TV is fine.  Prime Video doesn’t support Chromecast, which isn’t a surprise but is, you guessed it, annoying.

Next Day Delivery

Rating: 10/10 (based on going to the shops being 1/10)

I don’t order that much from Amazon, which is why I’m wondering if this is all worth it, but when I do, having it turn up the next day for “free” is nice.

Unlimited Online Photo Storage

Rating: 9/10 (based on Google Drive being 7/10)

Google also offer unlimited photo storage, but critically they compress your photos.  The Amazon offering does not (based on MD5 sums for a picture selected at random).  There is also an excellent Linux cli util called acd_cli (https://github.com/yadayada/acd_cli).  You’ll want to exclude videos and other files from being uploaded because they will count towards your 5GB limit for other stuff.  Something like this:

acd_cli --verbose ul -xe mov -xe mp4 -xe ini . /Pictures

Don’t be surprised if this has a storage limit applied in the future.

Prime Early Access

Rating: 0/10 (based on normal Amazon shopping being 10/10)

Early access to the electronic jumble sale that is Black Friday.  Pointless.

Twitch Prime

Rating: 5/10 (based on not having it being 0/10)

Skip this if you don’t know what Twitch is.  With Prime you can subscribe to a channel for a month for free.  It’s a free way to support Twitch streamers you like, and you get some crappy game add-ons that you won’t ever use.

Summary

Amazon Prime is basically shareware from the late 90s.  It does do what they claim, but there are constant nags to spend money on the basis that all the good stuff is just out of reach.

The new Amazon Dot in the kitchen was not warmly welcomed by everyone at Whizzy Towers but having it play music on demand has changed that perception quite a lot in the last day.  I have saved the first episode of The Grand Tour to watch tonight, which I am looking forward to since the reviews have been good, and all my photos which had previously been backed up on to a USB HDD are now slowly making their way to the cloud.  I also ordered a £5 book yesterday which arrived today as promised.

However, the shortcomings have the feeling of being deliberate.  Which is more annoying than if the services were just a bit crap.

So all in all you’re getting what you pay for.  It’s not amazing value, but neither is it a total rip off.  Someone at Amazon has done their job well.  I would recommend you get it.

What to do with Unity 8 now

As you’re probably aware Ubuntu 16.10 was released yesterday and brings with it the Unity 8 desktop session as a preview of what’s being worked on right now and a reflection of the current state of play.

You might have already logged in and kicked the proverbial tyres.  If not, I would urge you to do so.  Please take the time to install a couple of apps as laid out here:

http://insights.ubuntu.com/2016/10/13/unity-8-preview-session-in-ubuntu-16-10-yakkety-yak/

The main driver for getting Unity 8 into 16.10 was the chance to get it in the hands of users so we can get feedback and bug reports.  If you find something doesn’t work, please log a bug.  We don’t monitor every forum or comments section on the web, so the absolute best way to provide your feedback to people who can act on it is a bug report with clear steps on how to reproduce the issue (in the case of crashes) or an explanation of why you think a particular behaviour is wrong.  This is how you get things changed or fixed.

You can contribute to Ubuntu by simply playing with it.

Read about logging bugs in Ubuntu here: https://help.ubuntu.com/community/ReportingBugs

And when you are ready to log a bug, log it against Unity 8 here: https://bugs.launchpad.net/ubuntu/+source/unity8


Apache – 20 second lag before serving pages

TL;DR:  There is no such thing as a “none” directive in Apache 2.  If you’ve got “deny from none” or “allow from none” then you’re doing DNS lookups on each host that connects regardless of whether you want to or not.


I was experiencing a very annoying problem trying to serve static HTML pages and CGI scripts from Apache 2 recently.  The problem manifested itself like this:

  • Running the scripts on the server hosting Apache shows they ran in well under a second
  • Connecting to the Apache server from the LAN, everything was fine and ran in under a second
  • Connecting to the Apache server from the Internet, but from a machine known to my network, ran fine
  • Connecting from an AWS Lambda script, suddenly there is a 20 second or more delay before getting data back
  • Connecting from Digital Ocean, there is a 20 second delay
  • Connecting from another computer on the internet, there is a 20 second delay

What the heck is going on here?

I spent time trying to debug my CGI scripts and adding lots more logging and finally convinced myself that it was a problem with the Apache config and not something like MTUs or routing problems.

But what was causing it?  It started to feel like a DNS related issue, since the machines where it ran fine were all known to me, and so had corresponding entries in my local DNS server.  But but but… I clearly had “HostnameLookups Off” in my apache2.conf file.  When I looked at the logs again, I noticed that hostnames were indeed being looked up, even though I told it not to.


Why?  Because I don’t know how to configure Apache servers properly.  At some point in time I thought this was a good idea:

Order deny,allow
Deny from none
Allow from all

But, there is no such thing as a “none” directive.  Apache interprets “none” as a host name and so has to look it up to see if it’s supposed to be blocking it or not, which causes DNS lookup delays and hostnames to appear in your Apache logs.
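For the record, here is what that block should have been.  (This is Apache 2.2 syntax, matching the original; on Apache 2.4 the equivalent is simply “Require all granted”.)

Order deny,allow
Allow from all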

Enlightenment came from here: http://kb.simplywebhosting.com/idx/6/213/article/

There is also a suggestion that inline comments can do the same thing here:  https://www.drovemebatty.com/wp/entries/11

Unity 7 Low Graphics Mode

Unity 7 has had a low graphics mode for a long time but recently we’ve been making it better.

Eleni has been making improvements to reduce the amount of visual effects that are seen while running in low graphics mode.  At a high level this includes things like:

  • Reducing the amount of animation in elements such as the window switcher, launcher and menus (in some cases down to zero)
  • Removing blur and fade in/out
  • Reducing shadows

The result of these changes will be beneficial to people running Ubuntu in a virtual machine (where hardware 3D acceleration is not available) and for remote-control of desktops with VNC, RDP etc.

Low graphics mode should enable itself when it detects certain GL features are not available (e.g. in a virtualised environment) but there are times when you might want to force it on.  Here’s how you can force low graphics mode on 16.04 LTS (Xenial):

  1. Create ~/.config/upstart/lowgfx.conf (e.g. nano ~/.config/upstart/lowgfx.conf)
  2. Paste this into it:

start on starting unity7
pre-start script
    initctl set-env -g UNITY_LOW_GFX_MODE=1
end script

  3. Log out and back in

If you want to stop using low graphics mode, comment out the initctl line by placing a ‘#’ at the start of the line.

This hack won’t work in 16.10 Yakkety because we’re moving to systemd for the user session.  I’ll write up some instructions for 16.10 once it’s available.

Here’s a quick video of some of the effects in low graphics mode:

DHCP clients not registering hostnames in DNS automatically

To remind myself as much as anything:

I run a dnsmasq server on my router (which is a Raspberry Pi 2) to handle local DNS, DNS proxying and DHCP. For some reason one of the hosts stopped registering its hostname with the DHCP server, and so I couldn’t resolve its name to an IP address from other clients on my network.

I’m pretty sure it used to work, and I’m also pretty sure I didn’t change anything – so why did it suddenly stop? My theory is that the disk on the client became corrupt and a fsck fix removed some files.

Anyway, the cause is that the DHCP client didn’t know to send its hostname along with the DHCP request.

This is fixed by creating (or editing) /etc/dhcp/dhclient.conf and adding this line:

send host-name = gethostname();
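Then renew the lease so the change takes effect (or just reboot; the interface name here is an example):

sudo dhclient -r eth0 && sudo dhclient eth0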


Online searches in the dash to be off by default.

Scopes are a leading feature of the Ubuntu Phone and of Unity 8 in general.  That concept – the story of scopes – started out in Unity 7, back in 12.10, when we added results from online searches to the dash home screen.

Well, we’re making some changes to the Unity 7 Dash searches in 16.04 LTS.  On Unity 8 the Scopes concept has evolved into something which gives the user finer control over what is searched and provides more targeted results.  This functionality cannot be added into Unity 7 and so we’ve taken the decision to gracefully retire some aspects of the Unity 7 online search features.

What is changing?

First of all online search will be off by default.  This means that out-of-the-box none of your search terms will leave your computer.  You can toggle this back on through the Security & Privacy option in System Settings.  Additionally, if you do toggle this back on then results from Amazon & Skimlinks will remain off by default.  You can toggle them back on if you wish.  Further, the following scopes will be retired from the default install and moved to the Universe repository for 16.04 LTS onwards:
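If you would rather flip the switch from the command line, the same setting is exposed through gsettings (this is the key Unity 7 uses for remote search results):

gsettings set com.canonical.Unity.Lenses remote-content-search 'none'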

    1. Audacious
    2. Clementine
    3. gmusicbrowser
    4. Gourmet
    5. Guayadeque
    6. Musique

The Music Store will be removed completely for 16.04 LTS onwards.

Why now?

By making these changes now we can better manage our development priorities, servers, network bandwidth etc. throughout the LTS period. We allow ourselves more freedom to make changes without further affecting the LTS release (e.g. SRUs); specifically, we can better manage the eventual transition to Unity 8 and not have to maintain two sets of scope infrastructure for the duration of the LTS support period of five years.

What about previous supported releases?

Search results being off by default will not affect previous releases or upgrades, only new installs (i.e. we will not touch your existing settings).  Changes to search results from Amazon & Skimlinks will also only affect 16.04 and beyond.  The removal of the Music Store will be SRU’d back to older supported releases and the option will be removed from the Dash.

When will this happen?

We’re preparing to make the changes in the archive, to Unity 7 and to the Online Search servers right now.  This will take a little while to test and roll out.  We’ll let you know once all the changes are in Xenial.

Hacking 433Mhz support into a cheap Carbon Monoxide detector

Skill level:  Easy

My home automation systems use two mechanisms for communication:  Ethernet (both wired and wireless) and 433MHz OOK radio.

433MHz transmitters are readily available and are cheap but unreliable.  Wifi enabled MCUs such as the ESP8266 are also cheap (coming in at around the same cost as an Arduino clone, a 433MHz transmitter and a bag of bits to connect them together), they are reliable enough but extremely power hungry.  If I can plug a project into the mains then I’ll use an ESP8266 and a mobile phone charger for power, if the project needs to run off batteries then a 433MHz equipped Arduino is the way I’ve gone.

Like most people playing with 433MHz radio I found the reliability and range of the radio link to be super flaky.  I’ve finally got a more-or-less reliable set-up:

  • A full wave dipole antenna at the receiver
  • A high quality receiver from RF Solutions in place of the cheap ones which are bundled with transmitters (a decent receiver is easy to find on eBay)
  • A big capacitor on the transmitter.  I saw the frequency and amplitude drifting massively during transmission.  Adding a 470µF cap helps.  Allow time for the cap to charge and the oscillator to stabilise, a few seconds delay seemed to do the trick.
  • Using the RCSwitch library on the transmitter:
    • RCSwitch mySwitch = RCSwitch();
    • mySwitch.setProtocol(2); // Much longer pulse lengths = much better range?
    • mySwitch.setRepeatTransmit(20); // Just brute-force it!

With this setup I can receive a 24bit number from an Arduino running off 2 AA batteries and a coiled 1/2 wave antenna from about 5 meters indoors through walls.  That’s still poor, but it does the job.  Increasing the voltage to the transmitter would probably help.

Once you have a reliable 433MHz receiver setup then you can also buy off the shelf 433MHz enabled home automation gizmos like this smoke alarm or these door sensors.  They have a set of jumpers inside where you can set an ID, which is essentially the same 24bit number that RCSwitch lets you transmit.  For what it’s worth I also have kite-marked smoke detectors in my house, but from the testing I’ve done with a bit of smoldering paper the cheap imports work just fine.
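On the receiving side RCSwitch will decode those IDs too.  A minimal sketch of the receiver loop (the pin is an example):

#include <RCSwitch.h>

RCSwitch mySwitch = RCSwitch();

void setup() {
  Serial.begin(9600);
  mySwitch.enableReceive(0);  // interrupt 0 is pin 2 on an Uno
}

void loop() {
  if (mySwitch.available()) {
    unsigned long id = mySwitch.getReceivedValue();  // the 24bit ID
    Serial.println(id);  // match this against your known device IDs
    mySwitch.resetAvailable();
  }
}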

I couldn’t find a cheap Carbon Monoxide detector which also has 433MHz support, so I thought I’d quickly hack one together out of this Carbon Monoxide detector and an Arduino clone and 433MHz radio:

[Photos: the CO alarm opened up, with the Arduino and 433MHz transmitter wired in]

You can barely notice it!

It’s certainly untidy, but it does the job.  If I had PCB facilities at home I’m fairly sure it could be made to fit inside the alarm, along with some more holes in the case for ventilation.

The premise is simple enough.  The Arduino is powered by the 3v3 regulator on the CO alarm PCB.  The cathode of the red alarm LED is connected to pin 2 of the Arduino as an external interrupt.  When the pin goes low the Arduino wakes up and sends its 24bit ID number over the radio, which is picked up by the receiver, which in turn sends an SMS alert, switches the boiler off, etc.  I’ve connected the radio transmitter directly to the 3 x AA batteries (4.5 volts) via a transistor which is switched by a pin on the Arduino.  In stand-by mode the additional equipment draws a fraction of a milliamp, so I’m not worried about draining the batteries faster.
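Here’s roughly what the Arduino side looks like.  This is a sketch of the idea rather than my exact code – the pins, the ID and the transistor arrangement are all assumptions:

#include <RCSwitch.h>
#include <avr/sleep.h>

const byte ALARM_PIN = 2;              // alarm LED cathode, INT0 - goes low when the alarm fires
const byte TX_POWER_PIN = 4;           // transistor switching the transmitter's battery supply
const byte TX_DATA_PIN = 5;            // 433MHz transmitter data line
const unsigned long MY_ID = 0xC0C0AA;  // made-up 24bit ID for this sensor

RCSwitch mySwitch = RCSwitch();

void wakeUp() {}  // nothing to do here - waking is the point

void setup() {
  pinMode(ALARM_PIN, INPUT_PULLUP);
  pinMode(TX_POWER_PIN, OUTPUT);
  digitalWrite(TX_POWER_PIN, LOW);     // transmitter off in standby
  mySwitch.enableTransmit(TX_DATA_PIN);
  mySwitch.setProtocol(2);             // longer pulses, better range
  mySwitch.setRepeatTransmit(20);      // brute-force it
}

void loop() {
  // Sleep until the alarm LED pulls pin 2 low
  attachInterrupt(digitalPinToInterrupt(ALARM_PIN), wakeUp, LOW);
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);
  sleep_mode();                        // wakes here on the interrupt
  detachInterrupt(digitalPinToInterrupt(ALARM_PIN));

  digitalWrite(TX_POWER_PIN, HIGH);    // power up the transmitter
  delay(3000);                         // let the oscillator stabilise
  mySwitch.send(MY_ID, 24);            // shout our ID at the receiver
  digitalWrite(TX_POWER_PIN, LOW);
}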

As with the smoke alarms, this is not my only source of Carbon Monoxide detection.  I’ve yet to test its sensitivity.  This is considered to be a “well, if it works, and it turns the boiler off automatically, then it’s certainly worth a go, but I’m not relying on it” project.

10 years with Ubuntu

Today I have had a Launchpad account for ten years!

I got started out on this road around 1992.  I remember the day Stuart got a PC and installed Minix on it.  That box was beige, naturally, was about 3 feet square and constructed from inch thick iron plate.  Minix was totally alien when compared to the Acorn MOS and RISC OS powered machines I’d used until then, and absolutely intriguing.

A few years later at university I encountered VAX/VMS and Sun SPARCstations and The Internet and Surfers and Mozilla and a Gopher connected Coke machine.

Then out into the big wide world of work and run-ins with AS/400s and RS/6000s running AIX.  During this time I started seeing more and more Red Hat in places where there once would have been the more established players, providing email and web servers.  The fascination with *nix was always there and I started using Red Hat at home for fun.

I quickly ran into frustrations with RPMs and Stuart, always a source of wisdom, suggested I try Debian.

Dpkg made my life a whole lot easier and I started using Debian as my default OS for everything. Pretty soon after that I found myself compiling kernels, modules and software packages because I needed or wanted something in a newer version.  Coupled with the availability of cheap unbranded webcams, sound cards, network cards, TV cards etc and a strong desire to make these things work with Linux meant that I had found a wonderful way to stay up until 4 in the morning getting more and more frustrated.  The phrase “I’m going home to play with the kernel” was frequently questioned by my boss Jeremy.  I wanted these things to work but was endlessly faffing about trying to make it happen.

Better call Stuart.

“You should try this new Debian based distribution called Ubuntu” he said.

So I did, and it just worked.  A box fresh kernel with all the goodies I needed already compiled in and an up-to-date GNOME desktop (I’d set my allegiances before trying Ubuntu so this was another tick in the box), not forgetting one of the brownest themes known to man.

And that was that.  Ubuntu worked for me and I was immediately a fan.

And here I am today, 10 years later, still running Ubuntu.  My servers run Ubuntu, all the desktops in my house run Ubuntu, I have an Ubuntu powered phone and soon I’ll have an Ubuntu powered Mycroft with which I’ll be able to control my Ubuntu powered things while wearing my Ubuntu T shirt and drinking tea (should that be kool-aid?) from my Ubuntu mug.

I salute my Ubuntu brothers and sisters.  Thanks for making all of this possible.