Category Archives: My Projects

https and express: brought to you by the EFF

Thanks to the helpful folks at Let’s Encrypt and the EFF, it is possible for a nobody to at least enable https on their site for free! You could use a self-signed certificate, but if you want to access, for instance, Slack’s webhooks, you’ll need a recognized CA behind your certs.

As of this writing, you can find the instructions for installing the Let’s Encrypt certificate engine at for generic installs of Ubuntu, with even easier installers available if you peruse the drop-down menu at the top of the page. Following these installation instructions:

chmod a+x certbot-auto

makes the certbot command-line utility executable; certbot will install dependencies and grab certs for you, all in one tool. From there, you can start to follow along at (replacing letsencrypt-auto with certbot-auto), where the next step is, for example:

./certbot-auto certonly --standalone --email -d

which sets up certbot and pulls in cert files for the domain , using the contact email given. It will require interaction unless you add the --agree-tos (and --non-interactive) flags to the command.

Note that you will have to have done some prep before this point; you’ll need to have set up your DNS to point at \something/ that will do something with the connection. Note: a router that doesn’t have port forwarding for port 443 will not reply, causing certbot to error out. It doesn’t have to be a full webpage, but it seems like it at least needs to be a machine that will close the connection.

I found that letsencrypt made uber-conservative permissions on the key files, so let’s relax that a little:
/etc/letsencrypt/ is the root of the install; I found that archive and live both had restrictive permissions that prevented a non-root user from even reading the contents. However, upon relaxing the permissions on the folders, the private key files inside had read permissions for group and world! No bueno; fix that ASAP.

Apparently ports below 1024 need root access to open, but you can selectively allow node access to restricted ports with the setcap command:

setcap 'cap_net_bind_service=+ep' /path/to/nodejs

This sets the kernel “capabilities” for the node executable: permission to bind services to restricted ports, both effective and permitted upon running the executable (another option is inheritable, where the capability carries over if a process with permission launches the executable).

To the base boilerplate generated with express myapp, I added:

var fs = require('fs');
var https = require('https');
var http = require('http');

to the requires section. fs implements filesystem access, https implements TLS, and http will let us redirect unwitting users to https. Adding the following after var app = express();

var http_redirect = express();
http_redirect.use(function(req, res, next) {
  var httpsUrl = 'https://' + req.get('host') + req.originalUrl;
  res.redirect(301, httpsUrl);
});

var server = https.createServer({
  key: fs.readFileSync('./tls/privkey.pem'),
  cert: fs.readFileSync('./tls/fullchain.pem')
}, app);

server.listen(443);
http.createServer(http_redirect).listen(80);
finishes setting up the http redirect to https, and the https server for the app.

Installing libftdi – Libraries from source

Brief log of installing libftdi:

Find the libftdi repository (Thanks intra2net!)

git clone git://
cd libftdi
git checkout v1.3 (or whichever version you are aiming for)

You can cat the README for the install instructions; happily, it uses a typical CMake structure.
mkdir build; cd build
cmake -DCMAKE_INSTALL_PREFIX="/usr" ../ (to set the install path, if you want it installed under /usr)
make; sudo make install

This sets up the binaries and headers where you need them. You may want to update ldconfig immediately and check your install:
sudo ldconfig; ldconfig -p |grep libftdi

Go grab the example code from the documentation.
The install process as described puts the header at “libftdi1/ftdi.h”, so fix the #include in the example code.
gcc example.c -lftdi1 -o example_ftdi
The linker automatically prepends “lib” to ftdi1 when it searches for library files; see this in action with:
ld -lftdi1 --verbose

Libftdi reminds you to unload the stock ftdi driver in the kernel for proper operation:
sudo modprobe -r ftdi_sio

This is yet another scenario where direct access to the USB device typically requires permission elevation. Rather than running the app with sudo every single time, we can designate this particular device (the FTDI 2232H chip) as having read/write permissions for any user. The 2232H has a device ID of 0403:6010.

Add to /etc/udev/rules.d/48-ftdi2232h.rules:

# ftdi 2232h devices on the digilent CMOD A7 devboard

SUBSYSTEMS=="usb", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", MODE:="0666"

# If you share your linux system with other users, or just don't like the
# idea of write permission for everybody, you can replace MODE:="0666" with
# OWNER:="yourusername" to create the device owned by you, or with
# GROUP:="somegroupname" and manage access using standard unix groups.

Using Make to Encode the Date and Time in BCD

I tried pretty hard several times over the past day to find a pre-built solution for doing this. This being: programming the “current” (to within a few seconds) date and time into my STM32F4’s RTC for initialization, using a couple of BCD words. Unable to find one (seriously? Does everyone program it from a user interface? I doubt that…), I had to create my own solution. My build tools being generally bash and make, I figured it should simply be a matter of setting date to the right output type. People make BCD clocks all the time, right?

date is a wonderful utility, but doesn’t have a BCD output (and probably shouldn’t, since there are a million different ways to order the digits), so I just needed to process its output. Fine, this is fine. Giving date the argument +%-H,%-M,%-S tells it to output something like 20,12,43 to say the time is 20:12:43. My newfound friend awk can be given custom field delimiters through the -F flag (for silly reasons some characters are better than others; I chose commas since they are generally safe), and generate a string as output. Unfortunately, the printf command in awk doesn’t have a way to print things as binary (hex and dec are fine). Sooooo, the next utility at bat is bc, essentially a command-line calculator with some nicer features than most unix command-line builtins. Critically, it can convert numbers between arbitrary bases.

At this point, the general scheme is to have make do a shell call and grab the output. The shell call will be a call to date piped into awk which will build the command string to be piped to bc which will do the math and binary conversion necessary to get a set of decimal hours, minutes and seconds converted to a 32bit integer that matches the encoding for the STM32F446 (and potentially other chips in the STM32F4 line).

The final relevant make lines are as follows:

RTC_BCD_TIME := 0b$(shell date +%-H,%-M,%-S | awk -F"," '{print "obase=2;scale=0;hours="$$1";minutes="$$2";seconds="$$3";(((hours/10)*1048576)+((hours%10)*65536)+((minutes/10)*4096)+((minutes%10)*256)+((seconds/10)*16)+(seconds%10))"}' - | bc )
RTC_BCD_DATE := 0b$(shell date +%-y,%-m,%-d,%-w | awk -F"," '{print "obase=2;scale=0;years="$$1";months="$$2";days="$$3";dayofweek="$$4";if(dayofweek==0)dayofweek=7;(((years/10)*1048576)+((years%10)*65536)+((months/10)*4096)+((months%10)*256)+(dayofweek*8192)+((days/10)*16)+(days%10))"}' - | bc)

Note prepending the string returned by the shell call with “0b” to designate it as a binary literal. I guess at the end of the day it didn’t need to be converted to binary, but it may help with debugging later on since it is BCD. It’s also worthwhile to be aware of the __DATE__ and __TIME__ macros automatically defined by gcc, but they are strings and difficult to manipulate with just preprocessor definitions. I felt like doing this outside the compiler was the better option. If anyone finds themselves in the same scenario, hope this helps!
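If it helps to sanity-check the arithmetic in that recipe, the same packing (matching the multipliers 1048576 = 2^20, 65536 = 2^16, 4096 = 2^12, 256 = 2^8, 16 = 2^4) can be sketched in JavaScript; the function name here is made up for illustration, and the date word packs analogously:

```javascript
// Pack decimal hours/minutes/seconds into the STM32F4 RTC_TR BCD layout:
// tens-of-hours at bit 20, units-of-hours at bit 16, tens-of-minutes at
// bit 12, units at bit 8, tens-of-seconds at bit 4, units at bit 0.
function bcdTime(h, m, s) {
  return (Math.floor(h / 10) << 20) | ((h % 10) << 16) |
         (Math.floor(m / 10) << 12) | ((m % 10) << 8) |
         (Math.floor(s / 10) << 4)  | (s % 10);
}

// 20:12:43 packs so the BCD digits read straight off in hex:
console.log(bcdTime(20, 12, 43).toString(16)); // "201243"
```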

Triangulating Camera Position from Known Points


As per my last post on this effort, with the camera field-of-view parameters determined, and the lensing warp shown to be fairly low, there’s a straightforward path to taking an array of 2D points in the camera view, with known real-world coordinates, and backprojecting them to determine the camera’s position in space. Though this has probably been done a million times before by every vision system ever, it seemed like the kind of thing that should be easy! Or, as is often the case, would turn out to be interesting on its own and therefore worth the experience. 😀

Given my ultimate goal of extracting my swimming robot’s coordinates from a dual-camera setup, I need to know both cameras’ poses in the global reference frame to make a sensible coordinate extraction. So, with a static fishtank in the frame, containing the robot as well as providing perfect markers for a rectangular coordinate frame, I set about doing lots of trig. Essentially, the corners of the fishtank become a set of points in ℝ3 with known coordinates, since I can measure the fishtank. It’s a perfect rectangular prism, so it makes sense to align the global coordinate frame with its axes. Given 4 points on the fishtank, I can triangulate the camera location (and likely orientation, though I haven’t thought that through yet and don’t need it).

Basic Method:
In OpenCV, click my 4 registration points on the 2D camera feed.
Given the pixel distance -> angle conversion I talked about in the previous post on the subject, convert every combination of 2 points to an angular measure.
Since the actual 3D position of all the points is known, the distance between any set of 2 is also known.
Taking each pair and considering the “point” location of the camera, it is clear there is a triangle for every point pair with the camera location as its third point.
The known distance between the two points on the fishtank is then opposite the angle determined by the pixel distance of the point pairs.
This should sound exactly like it’s heading towards the Law of Cosines to determine the other sides of a triangle with one angle known.
Since all of these imaginary triangles actually share sides with each other, a simple algebraic relationship exists to find all side lengths of all triangles from the known sides and the angles (these side lengths can also be interpreted as the distance from fishtank points to camera point).
To get back to what we actually want, the camera position in 3D, we can replace side lengths as values with side lengths as a function of the camera position. Instead of getting the lengths of every side, we do a little more algebra and get back from the solver the point that satisfies the distances.
If the world were perfect, we could stop there and call it a day. Just take the angles from the camera and the known line lengths from the fishtank points, run it through solver and boom, a point in ℝ3. If we were working with exact values, this could happen, but a lot of approximations have happened thus far (the actual dimensions of the fishtank, the selected points from the video feed, the assumption of a perfect pixel to angle conversion, among others).
So instead we are left with having to find the camera point that minimizes the error between the computed world and the real world. We can start from a test point in space, check how badly it fits our equations by comparing the triangles it makes with the edge lengths and angles we know, then try new points that make this error smaller. It’s essentially a “hot or cold” search where you drag a point through space, getting constant feedback of “hotter” or “colder”.
Fortunately, this works great!
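To make the scheme concrete, here’s a toy JavaScript sketch of the same idea (the coordinates, solver, and step schedule are my own invention for illustration, not the code from this project): simulate the measured angles from a ground-truth camera position, build the law-of-cosines residual, then “hot or cold” search for the point that minimizes it.

```javascript
// Known 3D points (stand-ins for fishtank corners), chosen non-coplanar
// so the distance set determines the camera position uniquely
var pts = [[0, 0, 0], [0.6, 0, 0], [0, 0.4, 0], [0, 0, 0.3]];

function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function norm(v) { return Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]); }
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// Simulate the camera measurement: the angle subtended by each point pair,
// as seen from the ground-truth camera position
var truth = [1.5, 2.1, 0.9];
var pairs = [];
for (var i = 0; i < pts.length; i++) {
  for (var j = i + 1; j < pts.length; j++) {
    var u = sub(pts[i], truth), v = sub(pts[j], truth);
    pairs.push({ i: i, j: j,
                 d2: Math.pow(norm(sub(pts[i], pts[j])), 2),
                 theta: Math.acos(dot(u, v) / (norm(u) * norm(v))) });
  }
}

// Law of Cosines residual: for each pair, the known squared distance should
// equal ri^2 + rj^2 - 2*ri*rj*cos(theta), with ri, rj the camera-to-point ranges
function residual(P) {
  var e = 0;
  pairs.forEach(function (p) {
    var ri = norm(sub(pts[p.i], P)), rj = norm(sub(pts[p.j], P));
    var pred = ri * ri + rj * rj - 2 * ri * rj * Math.cos(p.theta);
    e += Math.pow(p.d2 - pred, 2);
  });
  return e;
}

// "Hot or cold": coarse grid search to land in the right basin,
// then shrinking axis-aligned steps to refine
function solve() {
  var best = [0, 0, 0], bestE = Infinity;
  for (var x = 0; x <= 3; x += 0.5)
    for (var y = 0; y <= 3; y += 0.5)
      for (var z = 0; z <= 3; z += 0.5) {
        var e = residual([x, y, z]);
        if (e < bestE) { bestE = e; best = [x, y, z]; }
      }
  for (var step = 0.25; step > 1e-7; step /= 2) {
    var moved = true;
    while (moved) {
      moved = false;
      for (var ax = 0; ax < 3; ax++) for (var dir of [1, -1]) {
        var trial = best.slice(); trial[ax] += dir * step;
        var e2 = residual(trial);
        if (e2 < bestE) { bestE = e2; best = trial; moved = true; }
      }
    }
  }
  return best;
}
```

Run solve() and the search walks back to (very nearly) the ground-truth camera point; a real solver like SciPy’s minimize does this far more cleverly, but the “hotter/colder” intuition is the same.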

Some notes on this:
I first did a demo in MATLAB, since the visualization tools there are a little easier to use. There I discovered a few flaws, the biggest being that only optimizing for the point directly is globally stable; optimizing the distances of the camera to each point has local minima that can trap the solver.
The second is that a non-linear optimizer like fmincon is really a math package’s bread and butter, and they are not letting anyone peek under the hood: fmincon is /not/ available to the MATLAB Coder C-code generation utility. Bummer.
However, Free Software came to the rescue, with Python‘s SciPy package containing the desired non-linear optimization suite. minimize can take a scalar function of multiple variables and wiggle the inputs using various methods to find a minimum of the function. It’s really beautiful that tools this good are freely available.

Characterizing the PS3 Eye

Wikipedia claims that a PS3 Eye zoomed to “blue” has a field of view of 75 degrees. This is presumably the horizontal field of view while I need both angular measures, so I decided to check both out myself.

Put PS3 Eye sensor at 4.25″ above flat surface
Reinforce 8.5″x11″ sheet of paper with tongue depressors
Holding the paper vertically in Landscape, with one edge flush against, and exactly perpendicular to, the table, adjust angle of camera (it pivots about one axis on its base) and distance from paper until the top and bottom edges are at the very top and bottom of the camera view. The camera sensor should now be entirely parallel to the sheet of paper, in all axes. The edges of the paper should disappear and reappear together if you move the paper a little closer or a little further.
Mark the distance from paper to camera
Now shift the paper side to side, at this same distance, to observe where the side edges leave the frame. Mark the edges of the frame on the paper once the paper entirely fills the frame.

The field of view is then 2*atan((paper_measurement/2)/distance_to_camera), where paper_measurement is either 8.5″ or the distance between the horizontal extremes of the sheet that you marked.

This test resulted in a horizontal field of view of 60.32 degrees (not that I actually have that many significant digits) and a vertical of 49.35 degrees.

It is important to note that points on a plane perpendicular to the camera can have their angles linearly interpolated from pixel distance against this maximum reference. The image resolution is 640×480 at the normal video rate, so the horizontal angle between two points with deltaX = 320 pixels is 30.16 degrees. This should jibe, as the image doesn’t appear overly warped: horizontal distance is relatively the same anywhere on the sensor.
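The measurement geometry and the linear pixel-to-angle conversion are simple enough to sketch (the function names are mine, for illustration):

```javascript
// Field of view from the paper test: half the paper span and the
// camera-to-paper distance form a right triangle
function fovDegrees(paperSpan, distance) {
  return 2 * Math.atan((paperSpan / 2) / distance) * 180 / Math.PI;
}

// Linear pixel-to-angle interpolation against the full-frame reference
function pixelsToDegrees(deltaPx, frameWidthPx, fovDeg) {
  return (deltaPx / frameWidthPx) * fovDeg;
}

// Sanity check: a target twice as wide as its distance spans 90 degrees
console.log(fovDegrees(2.0, 1.0)); // ≈ 90
// Half the frame corresponds to half the horizontal field of view
console.log(pixelsToDegrees(320, 640, 60.32)); // 30.16
```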

If the angle of the plane is not known, it is ambiguous whether a long object appearing short in the camera feed is doing so because it is at a sharp angle to the camera or because it is far away. The distance to one point must be known. However, perhaps a 3rd point, out of plane and thus defining a cube (like one of the back corners of my rectangularly prismatic fishtank), will provide the needed scaling factor. One should be cautious of accuracy here if those points are at highly oblique angles.

Follow along with my next steps in locating the camera at

The MEAN Stack

Because this info is spread all over the web, and because I keep coming back to it after long hiatuses having /no clue/ what I’m doing, here’s my own quickstart guide. Very much inspired by , but sometimes more or less verbose.

Install MongoDB:
sudo apt-get install mongodb may also work, but you are at the mercy of the repo maintainers to have the latest versions

Install Node: sudo apt-get install nodejs

Update npm globally: sudo npm install npm -g
Install Express-Generator globally: sudo npm install express-generator -g

setup app dir and install dependencies: express myapp
cd myapp
npm install

Make sure angular is set up in the app: npm install angular --save

at this point running: npm start
gives you your app at localhost:3000
where it should be available on the LAN for your viewing pleasure!

**********npm init*************
If you don’t want to use the automagical Express-Generator, you can use
npm init

entry point could be app.js

git repo!
optional keywords
license: GPL-3.0 is good
review and

Digikey API and OAuth

OAuth is harder than expected. I’m just going to jot down some notes as I go:

The node interpreter is very useful for quickly testing out packages! Just run “node” from the command line; “.exit” quits the interpreter.

A nice way to keep secrets in a node system is in a file with good system permissions. If you make the file follow the JSON file format:

  "key": "value",
  "key_of_set": ["value1", "value2"],
  "key_of_dict": {"keys_forever": "values too"}


then running:

var file_json = require("path_to_file.json")

loads your object into file_json.

Versioning this secret file is fun. You can make a dummy file with something like "secret_key": "mtwannahuckaloogie", git add that file, then put the file in .gitignore and run:

git update-index --assume-unchanged 

and git will ignore it forevermore! (I think. Probably try not to shake your index too hard or you might leak it)

The usual package protocol:

npm install --save simple-oauth2

gets the app ready for action! (i hope)

The OAuth demo code on the npm page for simple-oauth2 is pretty straightforward, I basically copy pasted it, along with the relevant clientID and clientSecret given by digikey.
Unfortunately, my hope for having the callback address of work was a bust. Looks like tomorrow I’ll need to do some funny business to open up some ports :O

Ok, the redirect isn’t accessed by Digikey; it’s simply stuffed into the user’s browser. This is good, I was just being an idiot on the config side at Digikey. When Digikey asked for my app’s redirect callback URI, I gave it , so the user was then sent to their own port 3000; obviously wrong, it should go to my webserver’s port 3000. For me on the LAN, that means 192.168.x.x:3000/callback. This works better, and gets me to Digikey’s login.

HOWEVER, Digikey wants an https address for the callback (it doesn’t look required by OAuth, since the demo code uses http), so I guess I need to set up https for the app. Since I’m not paying the big bucks for a cert from a CA (though I could get one from MIT for the next year), I’ll just self-sign one. Instructions on that can be found at:

I’ll lay down the gist here though since that url seems unstable

openssl genrsa -out key.pem
openssl req -new -key key.pem -out csr.pem
openssl x509 -req -days 9999 -in csr.pem -signkey key.pem -out cert.pem
rm csr.pem

The package ‘https’ will automatically manage the remainder of the connection stuff. If you’ve used the express directory formatter/file autogen thing, then you’ll want to edit your app’s bin/www.


var https = require('https');
var fs = require('fs');

var https_config = {
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
};

var https_server = https.createServer(https_config, app);


And replace the port in listen() with the port you want. Make sure createServer goes after the var app = express(); line.

The https stuff was a problem, as Digikey won’t accept a redirect without ssl, but not the big problem. Even after getting that set up, I kept getting ERROR 402’s all the time. I had subscribed to the APIs using the kind-of-hidden buttons on the Digikey API page, and was flummoxed at what the problem was. “PAYMENT_REQUIRED”? Sometimes this error is used for APIs where you have exceeded quotas or need to pay. Adding some console.logs to the simple-oauth library, I extracted the full reply from the server:

The client MUST NOT use more than one authentication method in each request.

Huh? I thought I was only using one… I dumped the POST request parameters just before they were sent (also by throwing a console.log into the library files) and noticed Authorization: Basic ... in there with a huge hash of some kind. Crap! There’s a default token of some kind coming from simple-oauth2, you just have to add

useBasicAuthorizationHeader: false

to the config parameters for simple-oauth2, and then you /just/ have the authentication code coming in. FIXED!
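For context, the fix lands in the options object handed to simple-oauth2 at creation time. Treat the sketch below as illustrative, not gospel: the URL and path are placeholders, and the field names follow the simple-oauth2 README of that era; only useBasicAuthorizationHeader is the actual fix described above.

```javascript
var credentials = {
  clientID: secrets.clientID,          // from your git-ignored secrets file
  clientSecret: secrets.clientSecret,
  site: 'https://sso.example.com',     // placeholder; check Digikey's docs
  tokenPath: '/oauth2/token',          // placeholder path
  useBasicAuthorizationHeader: false   // the fix: drop the Basic auth header
};

var oauth2 = require('simple-oauth2')(credentials);
```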

Mouser SOAP

You can buy a lot on Mouser, but unfortunately I don’t think there is soap. You can, however, get SOAP from Mouser; that is, Simple Object Access Protocol, an API standard. Signing up for dev access at lets you use more automated request formats, hopefully faster than using the site. Also, most importantly, it gives you programmatic access to the data if you use a SOAP module in your webapp, such as soap for NPM.

This mouser needs a bath. (Image by gym.king, CC BY-SA 2.0, via Wikimedia Commons)

setup SOAP as an app dependency:

npm install soap --save

SOAP basically grabs an XML doc from somewhere (disk or web, usually ending in .wsdl), reads it to determine what commands are available and what inputs they need, then calls one and returns the reply. Mouser has its API file at . If you follow the link there, you can read each of the commands, also described in the human-readable API page:

The node SOAP module is /super easy/ once you know how to use it; the doc was a little vague for my tastes. The procedure is this:

soap.createClient("url_to_xml_WSDL", callback_fun);

This createClient isn’t a long-lived thing; it’s just for the duration of this one request and callback. There is no client object to store. The callback is called immediately upon client creation, so in practice we pass an anonymous function.

soap.createClient("url_to_xml_WSDL", function(err, client){
  client.MyFunction(args, callback);
});

client.MyFunction will look up MyFunction in the returned XML API sheet and draw up a request with MyFunction and the appropriate args fields filled in. The response will be returned into this callback function, which for ease we will create right here with another anonymous function. For the basic Mouser API, ServiceStatus is a valid function, taking no arguments. This yields:

soap.createClient("url_to_xml_WSDL", function(err, client){
  client.ServiceStatus({}, function(err, result){
    console.log(result);
  });
});

and if url_to_xml_WSDL was the correct one for Mouser as listed above, you should get back an object with {ServiceStatusResult: true}


2.009 – Product Engineering Process (Part III)

So I realize this has been a long series of posts, but bear with me, here is the good part (previous still-good parts here). Because I’m limited on time and I want to get this out there, below is the ultimate spoiler: the video of the presentation! Though I was working way too hard on final electronics and code revisions to be a main presenter, I was awake enough to answer questions at the end (second video). Check it out, our presenters did an incredible job!

Final Presentation: Pink: Origin from 2.009 @ MIT on Vimeo.

Final Presentation: Pink: Origin Q&A from 2.009 @ MIT on Vimeo.

I couldn’t be more proud of everyone on Pink Team, it was an incredible semester and a ton of fun. GO PINK!

I’ll probably post some more technical stuff soon, but I have to get back to psets.

2.009 – Product Engineering Process (Part II)

If you just got here and want to know more about the background of course 2.009 at MIT, check out part I of this series!

Otherwise, another spoiler is appropriate:
But hang on, we haven’t talked at all about the product design process! And what an exciting and challenging process it is.

The ideation process taught in 2.009 is Professor Wallace’s preferred method, and is the same as the one taught in 2.00b. Essentially, the first round of the process is to gather as many ideas as possible, without filter, to amass a huge board of thought-provoking and not-provoking ideas. Participants should be encouraged to generate derivatives and spin-offs, along with fresh material. From there, a brief grouping process can show connections between ideas, and optionally filter ones that were off topic. With some clearer categories, a more focused brainstorming session can generate new concepts, flesh out old ones, or remix a couple. At this point, attention should be paid to the core concept of an idea; what problem is it trying to solve? The first implementation that comes out on paper may not be the best.

From here, we took a fairly democratic path by voting on concepts that seemed interesting and tractable to test, and finally developed a Pugh Chart to try and select three concepts impartially for initial testing. The cooler here was one of my concepts that reached the initial stages of testing (and we continued pursuing it until the final vote). Some other popular ideas we worked on were actively regulated thermalwear, an ice depth testing device, and a portable light-based sports field line marking system. After several stages of testing, mixing in new ideas and performing more tests with old ones, Pink team reached a point where we had three strong, tested, and exciting ideas. A pivotal moment for every team in 2.009 is the meeting where the final product is decided. Luckily, Pink team was full of intelligent, impartial, critical thinkers, and the decision making process went smoothly (and relatively quietly!).

What I said about the first implementation not being the best? It’s probably true quite frequently, and it was certainly true for our final product, Origin. Originally (hah! get it?), another team member and I came up with fairly similar concepts for a wrist-mounted proximity beacon to alert sky divers of others around them, in hopes of preventing canopy collisions (the #1 cause of death in skydiving). Initially, people were concerned about the market size and utility of such a device. At some point, someone considered the broader need; location information for social/safety/datametrics reasons in areas outside the range or use cases of cell phones. That was the moment of birth of the idea that grew into Origin, our final product.