Author Archives: Ryan

https and express: brought to you by the EFF

Thanks to the helpful folks at Let’s Encrypt and the EFF, it is possible for a nobody to at least enable https on their site for free! You could use a self-signed certificate, but if you want to access, for instance, Slack’s webhooks, you’ll need a recognized CA behind your certs.

As of this writing, you can find the instructions for installing the Let’s Encrypt certificate engine for generic installs of Ubuntu, with even easier installers available if you peruse the drop-down menu at the top of the page. Following those installation instructions:

chmod a+x certbot-auto

makes the downloaded certbot command-line utility executable. certbot will install dependencies and grab certs for you, all in one tool. From there, you can follow along with the official instructions (replacing letsencrypt-auto with certbot-auto), where the next step is to, for example:

./certbot-auto certonly --standalone --email <address> -d <domain>

which sets up certbot and pulls in cert files for the given domain, using the contact email given. It will require interaction unless you also add the --agree-tos flag to the command.

Note that you will have to have done some prep before this point; you’ll need to have set up your DNS to point at \something/ that will do something with the connection. Note: a router that doesn’t have port forwarding for port 443 will not reply, causing certbot to error out. It doesn’t have to be a full webpage, but it seems like it at least needs to be a machine that will close the connection.

I found that letsencrypt made uber-conservative permissions on the key files, so let’s relax that a little:
/etc/letsencrypt/ is the root for the install. I found that archive and live both had restrictive permissions that prevented the user from even reading their contents unless they were root. However, upon relaxing the permissions for the folders, the private key files inside had read permissions for group and world! No bueno, fix that asap.

Apparently ports below 1024 need root access to open, but you can allow node to selectively have access to restricted ports with the setcap command:

setcap 'cap_net_bind_service=+ep' /path/to/nodejs

This sets the kernel “capabilities” for the node executable: permission to bind services to restricted ports, marked effective and permitted (+ep) upon running the executable (another option is inheritable, so a process with the permission can pass it to executables it launches).

To the base boilerplate generated with express myapp, I added:

var fs = require('fs');
var https = require('https');
var http = require('http');

to the requires section. fs implements filesystem access, https implements TLS, and http will let us redirect unwitting users to https. Adding the following after var app = express();

var http_redirect = express();
http_redirect.use(function(req, res, next) {
  var httpsUrl = 'https://' + req.get('host') + req.originalUrl;
  res.redirect(301, httpsUrl);
});

var server = https.createServer({
  key: fs.readFileSync('./tls/privkey.pem'),
  cert: fs.readFileSync('./tls/fullchain.pem')
}, app);

server.listen(443);
http.createServer(http_redirect).listen(80);
finishes setting up the http redirect to https, and the https server for the app.

Installing libftdi – Libraries from source

Brief log of installing libftdi:

Find the libftdi repository (Thanks intra2net!)

git clone git://
cd libftdi
git checkout v1.3 or whichever version you are aiming for.

You can cat the README for the install instructions; happily, it uses a typical CMake structure.
mkdir build; cd build
cmake -DCMAKE_INSTALL_PREFIX="/usr" ../ to set the install path if you want to install it
make; sudo make install

This sets up the binaries and headers where you need them. You may want to update ldconfig immediately and check your install:
sudo ldconfig; ldconfig -p |grep libftdi

Go and grab the example code from the documentation.
The install process as described puts the header at “libftdi1/ftdi.h”, so fix that in the code.
gcc example.c -lftdi1 -o example_ftdi
the linker automatically prepends the “lib” to ftdi1 when it searches for the files; see this in action with:
ld -lftdi --verbose

Libftdi reminds you to unload the stock ftdi driver in the kernel for proper operation:
sudo modprobe -r ftdi_sio

This is yet another scenario where direct access to the USB device typically requires permission elevation. Rather than running the app with sudo every single time, we can designate this particular device (the FTDI 2232H chip) as having read/write permissions for any user. The 2232H has a device ID of 0403:6010.

add to /etc/udev/rules.d/48-ftdi2232h.rules

# ftdi 2232h devices on the digilent CMOD A7 devboard

SUBSYSTEMS=="usb", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", MODE:="0666"

# If you share your linux system with other users, or just don't like the
# idea of write permission for everybody, you can replace MODE:="0666" with
# OWNER:="yourusername" to create the device owned by you, or with
# GROUP:="somegroupname" and manage access using standard unix groups.

Using Make to Encode the Date and Time in BCD

I tried pretty hard several times over the past day to find a pre-built solution for doing this. This being: programming the “current” (to within a few seconds) date and time into my STM32F4’s RTC for initialization, using a couple of BCD words. Unable to find it (seriously? Does everyone program it from a user interface? I doubt that…), I had to create my own solution. My build tools being generally bash and make, I figured it should simply be a matter of setting date to the right output type. People make BCD clocks all the time, right?

date is a wonderful utility, but doesn’t have a BCD output (and probably shouldn’t, since there are a million different ways to order the digits), so I just needed to process its output. Fine, this is fine. Giving date the argument +%-H,%-M,%-S tells it to output something like 20,12,43 to say the time is 20:12:43. My newfound friend awk can be given custom field delimiters through the -F flag (for silly reasons some characters are better than others; I chose commas since they are generally safe), and generate a string as output. Unfortunately, the printf command in awk doesn’t have a way to print things as binary (hex and dec are fine). Sooooo, next utility at bat is bc, essentially a command-line calculator with some nicer features than most unix builtins. Critically, it can convert numbers between arbitrary bases.

At this point, the general scheme is to have make do a shell call and grab the output. The shell call will be a call to date piped into awk which will build the command string to be piped to bc which will do the math and binary conversion necessary to get a set of decimal hours, minutes and seconds converted to a 32bit integer that matches the encoding for the STM32F446 (and potentially other chips in the STM32F4 line).

The final relevant make lines are as follows:

RTC_BCD_TIME := 0b$(shell date +%-H,%-M,%-S | awk -F"," '{print "obase=2;scale=0;hours="$$1";minutes="$$2";seconds="$$3";(((hours/10)*1048576)+((hours%10)*65536)+((minutes/10)*4096)+((minutes%10)*256)+((seconds/10)*16)+(seconds%10))"}' - | bc )
RTC_BCD_DATE := 0b$(shell date +%-y,%-m,%-d,%-w | awk -F"," '{print "obase=2;scale=0;years="$$1";months="$$2";days="$$3";dayofweek="$$4";if(dayofweek==0)dayofweek=7;(((years/10)*1048576)+((years%10)*65536)+((months/10)*4096)+((months%10)*256)+(dayofweek*8192)+((days/10)*16)+(days%10))"}' - | bc)

Note prepending the string returned by the shell call with “0b” to designate it as a binary sequence. I guess at the end of the day it didn’t need to be converted to binary, but it may help with debugging later on since it is BCD. It’s also worthwhile to be aware of the __DATE__ and __TIME__ macros automatically defined by gcc, but they are in string form and difficult to manipulate with just define statements. I felt like doing this outside the compiler was a better option. If anyone finds themselves in the same scenario, hope this helps!
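As a sanity check outside of make, the same packing is easy to reproduce in a few lines of Python (this is not part of the build; the function below just illustrates the bit layout the make rules produce):

```python
def bcd_time(hours, minutes, seconds):
    # Each decimal digit gets its own nibble, matching the STM32F4 RTC time
    # register layout. The multipliers in the make rule are just these shifts:
    # 1048576 = 1 << 20, 65536 = 1 << 16, 4096 = 1 << 12, 256 = 1 << 8, 16 = 1 << 4.
    return ((hours // 10) << 20) | ((hours % 10) << 16) | \
           ((minutes // 10) << 12) | ((minutes % 10) << 8) | \
           ((seconds // 10) << 4) | (seconds % 10)

# 20:12:43 packs so the hex digits read back as the time: 0x201243
print(hex(bcd_time(20, 12, 43)))
```

The nice property of BCD is visible right away: the hex dump of the register reads as the wall-clock time.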

Tee and Moira 2 (or, the Better Option)

So, last time I had found the moira interactive prompt utility which had a habit of just ultra-dumping my moira-list membership list along with a ton of control characters and excess prompt word schmoo. This is undesirable because any attempt to make a find-and-replace style moira list utility would require major cleanup of the logged output from the interactive prompt. However, since anything worth over-doing was once worth just plain doing, I knew there had to be a more naked utility hiding under the overall moira prompt.

Cue the MIT SIPB site, particularly its page on moira (*doh!*). Scroll to the bottom and you find “Making Moira Queries Directly,” and the acronym GLOM, get_lists_of_member. Performing a query (qy) of type glom to recursively find all memberships of a user (ruser) with the username NAME is the magic (mostly) answer!

qy glom ruser NAME

I say mostly because in this raw form it also releases a bunch of other information, but at least in an entirely repeatable (read, removable) fashion. However, more digging on the SIPB site shows that qy _help glom can give us more info on the glom-type query, its data-fields (importantly, list_name), and how to tell qy we only want the list_name field.

Appending -f list_name is the ticket, (again, mostly) to filter out the other data we don’t want. There is still “list_name:” prepended to everything, but at least a convenient colon is there to delimit.

Apparently, (typing as I’m working on this), -s is the final key! Add that and each list name is set nicely on its own line, without any fluff. Perfect!

For the next trick, a forum post tipped me off that awk is a useful utility for iterating through newline-delimited input. Typically, awk '/searchstring/ {print $0;}' inputfile is a silly way to grep-style search through inputfile and print any lines matching the regex searchstring. $0 is the entire input line that matched searchstring; $1, $2, and so on are that line’s fields, split on delimiters you can craft. Luckily, I’ve already cleaned up the input so each line is a single string that is the only thing of interest, so everything from here on out will use $0.

awk is also designed to use input files, and must be given “-” as the input file to be told to use stdin (e.g. to use piped input). So CMD | awk '/foo/ {print $0;}' - gets us well on our way.

Printing is very useful for debugging, but we want to actually do things at this point. The command sent to awk in the curly braces isn’t automatically a shell command; awk is an interpreter and has its own command set. Luckily it is easy to just pipe a print command through to shell: CMD | awk '/foo/ {print $0 | "/bin/sh"}' - will run whatever the matching lines are (which probably won’t be very useful).

Padding the print statement with some actual commands is the final step:
qy glom ruser NAME -f list_name -s | awk '{print "blanche "$0" -a LISTNAME" | "/bin/sh"}' -
This pipes the list of base moira lists of which NAME is a member, omits the optional filter in awk (processes all input), pipes the blanche command to a shell with the current line’s list as the first arg, then the -a option with LISTNAME as the moira member to add to the list.

Essentially, run this replacing NAME with your kerberos, and replacing LISTNAME with a moira list or member, to add LISTNAME to all lists you have permission to modify and NAME is a member of. Obviously, you may not have permissions for every list you are a member of, but awk will print these errors and fail over to the next line. Neat!

Tee and Moira

So, as all aging MIT students must, today I set about looking into adding my future email address to the mailing lists I’d like to remain on post-graduation. Obviously, it would be desirable to do so programmatically rather than clicking through the Web Moira interface for the 700 lists I’m on and manually adding my new email to each. First things first, I know about and have used Blanche to add myself to lists and get info about lists. However, blanche doesn’t allow you to search by kerberos and get a list of memberships. The Googles allowed me to find a list of Athena utilities (including Blanche, if you’re curious) that includes the Moira command-line interface!

It’s really a silly little interactive prompt that assumes you know way more about how things work than you probably do, but gets the job done in a somewhat unsatisfactory way. I’ll keep looking for options or a better utility. My work thus far:
1). Get yo’self an Athena dialup session or log in in a cluster *gasp!*
2). Pull up a terminal and enter moira
3). Option 3 for the Lists and Group Menu
4). Option 7 for the List Info Menu
5). Option 1 for the Show all lists to which a given member belongs
6). If you haven’t done anything crazy with how you logged in (you’ll know already if you are) the defaults should give you everything you ever wanted (just hit enter at each remaining prompt, it will autofill USER and your kerb name)
7). The output is a less-formatted spew of all the lists you are on!

This is great but I’m hardly going to copy paste this. It’s annoying that the output is embedded in an interactive prompt. I suppose a funny way to do it could be to run this inside screen, modify the apparent window height, then take advantage of the scrollback buffer and Copy Mode to scroll back to the start of output and copy all the lists to the screen buffer. But that’s gross.

So far, my target is still gross. I found out about tee, which duplicates a stream, with particular utility for logging output to files while still displaying it. Running:
moira | tee moira.txt
logs the output for the entire interactive session! Unfortunately, they used vt100 or similar terminal coding to blank the display, reset to home, and write the next prompt instead of whitespacing out the old stuff (for the record though, that’s 100% how you should do it, it’s just frustrating for the log files), so the logs are full of ESC[ gobbledeegook. Oops. Soooooo, automated cleanup, or a better, more programmatic way of doing things? Next time!

Triangulating Camera Position from Known Points


As per my last post on this effort, with the camera field-of-view parameters determined, and the lensing warp shown to be fairly low, there’s a straightforward path to taking an array of 2D points in the camera view, with known real-world coordinates, and backprojecting them to determine the camera’s position in space. Though this has probably been done a million times before by every vision system ever, it seemed like the kind of thing that should be easy! Or, as is often the case, would turn out to be interesting on its own and therefore worth the experience. 😀

Given my ultimate goal of extracting my swimming robot’s coordinates from a dual-camera setup, I need to know both cameras’ poses in the global reference frame to make a sensible coordinate extraction. So, with a static fishtank in the frame, containing the robot as well as providing perfect markers for a rectangular coordinate frame, I set about doing lots of trig. Essentially, the corners of the fishtank become a set of points in ℝ3, with known coordinates since I can measure the fishtank. It’s a perfect rectangular prism, so it makes sense to align the global coordinate frame with its axes. Given 4 points on the fishtank, I can triangulate the camera location (and likely orientation, though I haven’t thought that through yet and don’t need it).

Basic Method:
In OpenCV, click my 4 registration points on the 2D camera feed.
Given the pixel distance -> angle conversion I talked about in the previous post on the subject, convert every combination of 2 points to an angular measure.
Since the actual 3D position of all the points is known, the distance between any set of 2 is also known.
Taking each pair and considering the “point” location of the camera, it is clear there is a triangle for every point pair with the camera location as its third point.
The known distance between the two points on the fishtank is then opposite the angle determined by the pixel distance of the point pairs.
This should sound exactly like it’s heading towards the Law of Cosines, used to determine the other sides of a triangle with one angle known.
Since all of these imaginary triangles actually share sides with each other, a simple algebraic relationship exists to find all side lengths of all triangles from the known sides and the angles (these side lengths can also be interpreted as the distance from fishtank points to camera point).
To get back to what we actually want, the camera position in 3D, we can replace side lengths as values with side lengths as a function of the camera position. Instead of getting the lengths of every side, we do a little more algebra and get back from the solver the point that satisfies the distances.
If the world were perfect, we could stop there and call it a day. Just take the angles from the camera and the known line lengths from the fishtank points, run it through solver and boom, a point in ℝ3. If we were working with exact values, this could happen, but a lot of approximations have happened thus far (the actual dimensions of the fishtank, the selected points from the video feed, the assumption of a perfect pixel to angle conversion, among others).
So instead we are left with having to find the camera point that minimizes the error between the computed world and the real world. We can start from a test point in space, check how badly it fits our equations by comparing the triangles it makes with the edge lengths and angles we know, then try new points that make this error smaller. It’s essentially a “hot or cold” search where you drag a point through space, getting constant feedback of “hotter” or “colder”.
Fortunately, this works great!

Some notes on this:
I first did a demo in MATLAB, since the visualization tools there are a little easier to use. There I discovered a few flaws, the biggest being that only optimizing for the point itself is globally stable; optimizing the distances from the camera to each point has local minima that can trap the solver.
The second is that a non-linear optimizer like fmincon is really a math package’s bread and butter, and MATLAB is not letting anyone peek under the hood. fmincon is /not/ available to the MATLAB Coder c-code generation utility. Bummer.
However, Free Software came to the rescue, with Python’s SciPy package containing the desired non-linear optimization suite. minimize can take a scalar function of multiple variables and scrobble the inputs using various methods to find a minimum of the function. It’s really beautiful that tools this good are free and open source.
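To make the “hot or cold” search concrete, here is a minimal pure-Python sketch. The fishtank corner coordinates and camera position are made up for the demo, and a simple compass search stands in for SciPy’s minimize, which does the same job far more robustly:

```python
import math

# Hypothetical fishtank corners (known, measured coordinates; not coplanar)
points = [(0.0, 0.0, 0.0), (60.0, 0.0, 0.0), (60.0, 0.0, 30.0), (0.0, 30.0, 0.0)]
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]

def angle_at_camera(cam, p, q):
    # angle subtended at the camera by the rays to points p and q
    u = [a - b for a, b in zip(p, cam)]
    v = [a - b for a, b in zip(q, cam)]
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

# Pretend these angles came from the pixel->angle conversion of clicked points
true_cam = (120.0, 80.0, 50.0)
measured = {pq: angle_at_camera(true_cam, points[pq[0]], points[pq[1]]) for pq in pairs}

def error(cam):
    # how badly a candidate camera point reproduces the measured angles
    return sum((angle_at_camera(cam, points[i], points[j]) - measured[(i, j)]) ** 2
               for i, j in pairs)

# "hot or cold": nudge the point along each axis, shrink the step when stuck
cam, step = [110.0, 90.0, 60.0], 16.0
while step > 1e-7:
    improved = False
    for axis in range(3):
        for d in (step, -step):
            trial = list(cam)
            trial[axis] += d
            if error(trial) < error(cam):
                cam, improved = trial, True
    if not improved:
        step /= 2

print(cam)  # should land back near (120, 80, 50)
```

In the real setup the measured angles come from clicked pixels rather than a known true_cam, and SciPy’s minimize replaces the hand-rolled search, but the error function is the same idea.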

Characterizing the PS3 Eye

Wikipedia claims that a PS3 Eye zoomed to “blue” has a field of view of 75 degrees. This is presumably the horizontal field of view while I need both angular measures, so I decided to check both out myself.

Put PS3 Eye sensor at 4.25″ above flat surface
Reinforce 8.5″x11″ sheet of paper with tongue depressors
Holding the paper vertically in Landscape, with one edge flush against, and exactly perpendicular to, the table, adjust angle of camera (it pivots about one axis on its base) and distance from paper until the top and bottom edges are at the very top and bottom of the camera view. The camera sensor should now be entirely parallel to the sheet of paper, in all axes. The edges of the paper should disappear and reappear together if you move the paper a little closer or a little further.
Mark the distance from paper to camera
Now shift the paper side to side, at this same distance, to observe where the side edges leave the frame. Mark the edges of the frame on the paper once the paper entirely fills the frame.

The field of view is then 2*atan((paper_measurement/2)/distance_to_camera), where paper_measurement is either 8.5″ or the distance between the horizontal extremes of the sheet that you marked.

This test resulted in a horizontal field of view of 60.32 degrees (not that I actually have that many sig figs) and a vertical of 49.35 degrees.

It is important to note that points on a plane perpendicular to the camera can have their angles linearly interpolated from pixel distance against this maximum reference. The image resolution is 640×480 at normal video rates, so the horizontal angle between two points with deltaX=320 pixels is 30.16 degrees. This should jibe, as the image doesn’t appear overly warped: horizontal distance reads relatively the same anywhere on the sensor.
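Both calculations are tiny, but worth keeping as functions. A quick Python sketch, using the PS3 Eye’s nominal 640-pixel horizontal resolution and the measured 60.32-degree FOV (the 4.25″ distance below is just a convenient example, not my actual measurement):

```python
import math

def fov_degrees(extent_inches, distance_inches):
    # full angle subtended by a flat target of known size at a known distance
    return 2.0 * math.degrees(math.atan((extent_inches / 2.0) / distance_inches))

def pixel_to_angle(delta_pixels, fov_deg=60.32, width_pixels=640):
    # linear pixel -> angle interpolation; valid while lens warp stays low
    return (delta_pixels / width_pixels) * fov_deg

# A target as wide as twice its distance subtends exactly 90 degrees
print(fov_degrees(8.5, 4.25))
# Two points half the frame apart span half the measured FOV
print(pixel_to_angle(320))
```
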

If the angle of the plane is not known, it is ambiguous whether a long object appearing short on the camera feed is doing so because it is at a sharp angle to the camera or because it is far away. The distance to one point must be known. However, perhaps a 3rd point, out of plane and thus defining a cube (like one of the back corners of my rectangularly prismatic fishtank), will provide the needed scaling factor. Should be cautious of accuracy here, if those points are at highly oblique angles.

Follow along with my next steps at locating the camera.

The MEAN Stack

Because this info is spread all over the web, and because I keep coming back to it after long hiatuses and having /no clue/ what I’m doing, here’s my own quickstart guide. Very much inspired by an existing guide, but sometimes more or less verbose.

Install MongoDB:
sudo apt-get install mongodb may also work, but is at the mercy of repo managers to have the latest versions

Install Node: sudo apt-get install nodejs

Update npm globally: sudo npm install npm -g
Install Express-Generator globally: sudo npm install express-generator -g

setup app dir and install dependencies: express myapp
cd myapp
npm install

Make sure angular is setup in the app: npm install angular --save

At this point, running: npm start
gives you your app at localhost:3000
where it should be available on the LAN for your viewing pleasure!

**********npm init*************
If you don’t want to use the automagical Express-Generator, you can use
npm init

entry point could be app.js

git repo!
optional keywords
license: GPL-3.0 is good
review and confirm

Digikey API and OAuth

OAuth is harder than expected. I’m just going to jot down some notes as I go:

node interpreter is very useful for quickly testing out packages! Just run “node” from the command line. “.exit” quits the interpreter.

A nice way to keep secrets in a node system is in a file, with good system permissions. If you make the file follow the JSON file format:

{
  "key": "value",
  "key_of_set": ["value1", "value2"],
  "key_of_dict": {"keys_forever": "values too"}
}


then running:

var file_json = require("path_to_file.json")

loads your object into file_json.

Versioning this secret file is fun. You can make a dummy file with something like "secret_key": "mtwannahuckaloogie", git add that file, then put the file in a .gitignore and run:

git update-index --assume-unchanged <file>

and git will ignore it forevermore! (I think. Probably try not to shake your index too hard or you might leak it)

The usual package protocol:

npm install --save simple-oauth2

gets the app ready for action! (I hope)

The OAuth demo code on the npm page for simple-oauth2 is pretty straightforward, I basically copy pasted it, along with the relevant clientID and clientSecret given by digikey.
Unfortunately, my hope of having a localhost callback address work was a bust. Looks like tomorrow I’ll need to do some funny business to open up some ports :O

Ok, the redirect isn’t accessed by Digikey, it’s simply stuffed into the user’s browser. This is good; I was just being an idiot on the config side at Digikey. When Digikey asked for my app’s redirect callback URI, I gave it a localhost address, so the user was then sent to their own port 3000. Obviously wrong: it should go to my webserver’s port 3000. For me on the LAN, that means 192.168.x.x:3000/callback. This works better, gets me to Digikey’s login.

HOWEVER, Digikey wants an https address for the callback (it doesn’t look required by oauth, since the demo code uses http), so I guess I need to set up https for the app. Since I’m not paying the big bucks for a cert from a CA (though I could get one from MIT for the next year), I’ll just self-sign one.

I’ll lay down the gist here, since the instructions I followed were at a url that seems unstable:

openssl genrsa -out key.pem
openssl req -new -key key.pem -out csr.pem
openssl x509 -req -days 9999 -in csr.pem -signkey key.pem -out cert.pem
rm csr.pem

The package ‘https’ will automatically manage the remainder of the connection stuff. If you’ve used the express directory formatter/file autogen thing, then you’ll want to edit your bin/www.


var https = require('https');
var fs = require('fs');

var https_config = {
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
};

var https_server = https.createServer(https_config, app);


Then replace the port in listen() with the port you want. Make sure the createServer call goes after the var app = express(); line.

The https stuff was a problem, as Digikey won’t accept a redirect without ssl, but not the big problem. Even after getting that set up, I kept getting ERROR 402s all the time. I had subscribed to the APIs using the kind-of-hidden buttons on the Digikey API page, and was flummoxed at what the problem was. “PAYMENT_REQUIRED”? Sometimes this error is used for APIs you have exceeded quotas on or need to pay for. Adding some console.logs to the simple-oauth library, I extracted the full reply from the server:

The client MUST NOT use more than one authentication method in each request.

Huh? I thought I was only using one… I dumped the POST request parameters just before they were sent (also by throwing a console.log into the library files) and noticed Authorization: Basic ... in there with a huge hash of some kind. Crap! There’s a default token of some kind coming from simple-oauth2, you just have to add

useBasicAuthorizationHeader: false

to the config parameters for simple-oauth2, and then you /just/ have the authentication code coming in. FIXED!

Mouser SOAP

You can buy a lot on Mouser, but unfortunately I don’t think there is soap. You can, however, get SOAP from Mouser; that is, Simple Object Access Protocol, an API standard. Signing up for dev access lets you use more automated request formats, hopefully faster than using the site. Also, most importantly, it gives you programmatic access to the data if you use a SOAP module in your webapp, such as soap for NPM.

By gym.king (001_MG_2576_(015)) [CC BY-SA 2.0], via Wikimedia Commons

This mouser needs a bath.

setup SOAP as an app dependency:

npm install soap --save

SOAP basically grabs an XML doc from somewhere (disk or web, usually ending in WSDL), reads it to determine what commands are available and what inputs they need, then calls one and returns the reply. Mouser publishes its API’s WSDL file online; if you open that file, you can read each of the commands, which are also described on the human-readable API page.

The node SOAP module is /super easy/ once you know how to use it, the doc was a little vague for my tastes. The procedure is this:

soap.createClient("url_to_xml_WSDL", callback_fun);

This createClient isn’t a long-lived thing; it’s just for the duration of this one request and callback. There is no client object to store. The callback is called immediately upon client creation, so in reality we pass an anonymous function:

soap.createClient("url_to_xml_WSDL", function(err, client){
  client.MyFunction(args, callback);
});

client.MyFunction will look for MyFunction in the returned XML API sheet, and draw up a request with MyFunction and the appropriate args fields filled in. The response will be returned to this callback function, which for ease we will create right here with another anonymous function. For the basic Mouser API, ServiceStatus is a valid function, taking no arguments. This yields:

soap.createClient("url_to_xml_WSDL", function(err, client){
  client.ServiceStatus({}, function(err, result){
    console.log(result);
  });
});

and if url_to_xml_WSDL was the correct one for Mouser as listed above, you should get back an object with {ServiceStatusResult: true}