Some time ago I decided I’d like to learn a purely functional language and settled on Haskell as the language of choice. A few weeks ago I came across a wonderful free book by Miran Lipovača humorously titled “Learn You a Haskell for Great Good”. I cannot recommend this book highly enough; the author’s clear explanations are entertainingly complemented by his most appealing sense of humour. I found it to be an excellent guide. As I explain below it helped me to write this nice bit of Haskell (which also has a C++ version).

### A computational geometry exercise in Haskell

After I finished Lipovača’s book I made a list of bite-sized coding projects that I thought might help me get acquainted with my newly-forged Haskell weaponry. Amongst these was a nice little computational geometry problem from the USA Computing Olympiad website. The problem, titled “Closed Fences” and nicknamed “fence4”, is as follows:

Input: A point in the plane, known as the eye, together with a list of points in the plane which form the vertices of a polygon. All coordinates are integers.

Output: The set of sides of the polygon which are at least partially visible to the eye (looking in any direction).

For example, in the polygon below, the edges coloured red are those visible to the black dot representing the eye:

Example edge visibility problem

The exact description provided by the USACO training website is available here.

### Choice of algorithm

The obvious approaches to this problem fall into two categories:

• Raycasting
• Clipping

In both cases we independently consider the visibility of each side in turn. Thus we seek an algorithm to determine whether or not one particular side is visible.

#### Raycasting algorithms

The easier class of algorithms to implement are those based on raycasting. To get the basic idea we take up an old idea of Plato’s and imagine the eye to be an omnidirectional light source. Thinking this way, whether or not a side is visible to the eye depends on whether it is fully shadowed by the other sides. If a side is only partly shadowed, then the ray joining the edge of the shadow to the eye must just graze a vertex of the polygon. We can thus find all potential shadow boundary points for a given side by intersecting the rays from the eye through each polygon vertex with the line containing the side we’re testing for visibility. With these points in hand, the side is visible if and only if some point lying strictly between an adjacent pair of them has a direct line of sight to the eye. The algorithm needs a little tidy-up but it should be clear this will work and in its current form it requires O(N²) time per side where N is the number of vertices.
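For concreteness, here is how the raycasting test might be sketched in Python (the post’s actual code is in Haskell and C++; all names here are mine, and exact rational arithmetic via `fractions.Fraction` stands in for the integer-only arithmetic discussed later):

```python
from fractions import Fraction

def cross(o, a, b):
    """Twice the signed area of triangle o-a-b."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def line_line_param(p, q, a, b):
    """Parameter t with a + t*(b - a) on the line through p, q (None if parallel)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    denom = (b[0]-a[0])*dy - (b[1]-a[1])*dx
    if denom == 0:
        return None
    return Fraction((p[0]-a[0])*dy - (p[1]-a[1])*dx, denom)

def properly_crosses(p, q, a, b):
    """True if the open segments pq and ab cross at an interior point."""
    d1, d2 = cross(a, b, p), cross(a, b, q)
    d3, d4 = cross(p, q, a), cross(p, q, b)
    return d1*d2 < 0 and d3*d4 < 0

def sees(eye, pt, polygon):
    """No polygon side strictly blocks the line of sight from eye to pt."""
    n = len(polygon)
    return not any(properly_crosses(eye, pt, polygon[i], polygon[(i+1) % n])
                   for i in range(n))

def side_visible(eye, polygon, i):
    """Raycasting test: is side i of the polygon at least partly visible?"""
    n = len(polygon)
    a, b = polygon[i], polygon[(i+1) % n]
    # Candidate shadow boundaries: eye->vertex rays intersected with line AB.
    ts = {Fraction(0), Fraction(1)}
    for v in polygon:
        t = line_line_param(eye, v, a, b)
        if t is not None and 0 < t < 1:
            ts.add(t)
    ts = sorted(ts)
    # Visible iff a point strictly between some adjacent pair sees the eye.
    for t0, t1 in zip(ts, ts[1:]):
        m = (t0 + t1) / 2
        pt = (a[0] + m*(b[0]-a[0]), a[1] + m*(b[1]-a[1]))
        if sees(eye, pt, polygon):
            return True
    return False
```

This sketch glosses over the degenerate configurations discussed later in the post, but it exhibits the O(N²)-per-side structure: up to N candidate boundary points, each checked for line of sight against all N sides.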

#### Clipping algorithms

Slightly more difficult to implement are those algorithms based on clipping. For these algorithms the key object is what we will call (by a slight abuse of terminology) the viewport. This is the triangle whose vertices are the eye together with a pair of points, on the side whose visibility we are trying to determine, that represent our current belief about which portion of this side is visible. Initially this pair of points is the pair of vertices that form the end points of the side. We update the viewport sequentially by “clipping” it according to whether it intersects the other sides of the polygon. For example the initial and final viewports of the lower-most side in our example polygon are shown below:

Initial (include shaded region) and final (exclude shaded region) viewports for lower-most side

A side is visible if and only if its final viewport has positive area. In its current form this algorithm requires O(N) time per side.
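The positive-area test at the end is just a cross-product sign check; a Python sketch (names are mine, exact for integer coordinates):

```python
def doubled_signed_area(e, a, b):
    """Twice the signed area of the viewport triangle e-a-b; exact for integers."""
    return (a[0] - e[0]) * (b[1] - e[1]) - (a[1] - e[1]) * (b[0] - e[0])

def viewport_visible(e, a, b):
    """The clipped side is visible iff its final viewport has nonzero area."""
    return doubled_signed_area(e, a, b) != 0
```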

#### Algorithm chosen

I opted for a clipping-based algorithm. I also decided that I wanted perfect accuracy and so I only allowed myself to use integer arithmetic. Most importantly though, I wrote the algorithm in that beautiful language, Haskell! Below is an outline of the algorithm I ended up with.

The algorithm is based around a function with the following type:

lineSegMeetsLineSeg :: LineSeg -> LineSeg -> Bool


This function returns True precisely when the two line segments in the plane intersect. Given this function, the algorithm to determine the visibility of a given side starts with a viewport that can see the entire side; for each other side of the polygon it uses lineSegMeetsLineSeg to check whether that side meets either edge of the viewport and clips accordingly, taking care to handle end-point intersections correctly. This is accomplished by the following functions:

• clipEdge :: Edge -> LineSeg -> ViewPort -> ViewPort

When a polygon side does intersect an edge of the viewport, this function returns a new appropriately-clipped viewport.

• clipByOneSide :: ViewPort -> LineSeg -> Maybe ViewPort

This function determines whether a viewport needs clipping and calls clipEdge if necessary.

• clipByAllSides :: Point -> [Point] -> Int -> Maybe LineSeg

This function calls clipByOneSide for all polygon sides (other than the side whose visibility we are testing) keeping track of the viewport as it proceeds.

And that’s it! If we wanted to speed things up we could also use 2-dimensional back-face culling but there is no need.
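As a sketch of what the key primitive involves, here is an exact, integer-only analogue of lineSegMeetsLineSeg in Python (the helper names are mine; the original implementation is in Haskell):

```python
def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a); exact for integer points."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def on_segment(a, b, p):
    """Assuming p is collinear with a, b: does p lie on the closed segment ab?"""
    return (min(a[0], b[0]) <= p[0] <= max(a[0], b[0]) and
            min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def seg_meets_seg(p, q, a, b):
    """Do the closed segments pq and ab share at least one point?"""
    d1, d2 = orient(a, b, p), orient(a, b, q)
    d3, d4 = orient(p, q, a), orient(p, q, b)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True  # the segments straddle each other
    # Degenerate cases: an endpoint of one segment lies on the other.
    if d1 == 0 and on_segment(a, b, p): return True
    if d2 == 0 and on_segment(a, b, q): return True
    if d3 == 0 and on_segment(p, q, a): return True
    if d4 == 0 and on_segment(p, q, b): return True
    return False
```

The four orientation signs settle the generic case, and the zero-sign branches are exactly the end-point intersections the text warns about.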

### Source code

The USACO website is more than a list of problems: it allows you to submit your source code for automatic testing against a range of test cases. Unsurprisingly, Haskell is not amongst the set of languages that the USACO accept. Thus, as soon as I had my Haskell code working, I set about converting it to C++ so that I could submit to the USACO website for testing.

Although it was a slightly tedious exercise translating the Haskell code to C++, it was interesting to see the sort of C++ code that resulted. It seemed to me to be a good example of how it is possible to write good quality code with almost no object orientation. If nothing else, this exercise convinced me to try harder for referential transparency when writing C++ in future.

In any case for those who might be interested, here are links to both the C++ source and the (significantly shorter) Haskell source. I think the algorithm is particularly easy to understand in the Haskell implementation.

One final comment on the Haskell. The key function has the following type:

clipByOneSide :: ViewPort -> LineSeg -> Maybe ViewPort


and the visibility problem is solved by folding this function over the polygon sides. Since clipByOneSide returns the monadic type Maybe ViewPort, I had to use the monadic version of foldl, namely foldM. However, when debugging I wanted to watch the progress of the clipping as it proceeded and so I wanted a monadic equivalent of the scanl function. To my surprise, the Control.Monad module (which contains foldM) does not contain a corresponding scanM function. Here is how easy it was to introduce the missing scanM function:

c :: (Monad m) => (a -> b -> m a) -> m a -> b -> m a
c f y z = y >>= ((flip f) z)

scanM :: (Monad m) => (a -> b -> m a) -> a -> [b] -> [m a]
scanM f x l = scanl (c f) (return x) l
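For comparison, the same idea is easy to mimic in Python, where a None-returning step function plays the role of Maybe and itertools.accumulate plays the role of scanl (the `initial` argument requires Python 3.8+; the example step function is purely illustrative):

```python
from itertools import accumulate

def scan_maybe(f, x, xs):
    """Rough Python analogue of scanM over Maybe: the step function returns
    None on failure, and failure propagates to every later partial result."""
    def step(acc, b):
        return None if acc is None else f(acc, b)
    return list(accumulate(xs, step, initial=x))

# Illustrative clip-like step: fails (returns None) once the value would go negative.
def sub_nonneg(a, b):
    return a - b if a - b >= 0 else None
```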


### An aside on the projective plane

Computational geometry algorithms are notorious for being a lot more difficult to get exactly correct than to get mostly correct. This is often due to the large number of degenerate types of behaviour that tend to occur (e.g., intersection along edges or in corners etc.).

Suppose we imagine the viewport triangle and ask ourselves what are the different ways a line can intersect it. (This line represents the line containing a side which might be clipping the viewport.) For now let’s exclude the case where the line passes through a vertex of the viewport and see how many cases there are. In the pictures below, E represents the eye and A and B are points on the side to which the viewport belongs. The “clipping” line is in blue:

The nine generic cases, Cases 1–9 (3 × 3 grid of pictures)

Evidently there are 9 cases: writing P and Q for the points where the blue line meets the lines EB and EA respectively, there are 3 possible locations for P (behind E, between E and B, beyond B) and 3 for Q. In fact we can simplify things by introducing the line at infinity and so passing to the projective plane. If we do this then the cases “behind E” and “beyond B” become one and the same, since the two ends of the line BE are connected at infinity; similarly for Q. Thus there are now just 2×2 = 4 cases instead of 9 overall. In terms of the picture above, these four cases are:

• Cases 1, 3, 7, 9
• Cases 2, 8
• Cases 4, 6
• Case 5

OK great, just 4 cases to deal with instead of 9, but what about those degenerate cases we excluded, i.e., when the blue line passes through A or B or E? Even in the projective plane there are still rather a lot of these. The first set of degenerate cases occurs when the blue line passes through exactly one of A, B, E; there are 2 cases for each of A, B, E, giving 6 of these. The second (and last) set of degenerate cases occurs when the blue line passes through two of A, B, E, i.e., when it coincides with one of the three lines AB, BE, EA. So even though the generic behaviour is described by just 4 cases, we still have to handle a total of 4+6+3 = 13 cases overall. Grouped by how degenerate they are, the cases occur in families of decreasing dimension: 2, 1, 0. This is classic computational geometry behaviour and I usually regard it as a bit of a pain. However, there is a nice way of looking at it.

First note that the viewport actually defines a triangulation of the entire projective plane. All we have to do is extend its three sides to full projective lines. Here is a picture of what happens:

Projective plane triangulation

The triangulation has four faces which are numbered 1, 2, 3, 4 in the picture above. The first three look disconnected but they are in fact just as good triangles as face 4 except that they include a segment of the line at infinity. Similar remarks apply to the edges, of which there are 6, and which are coloured to indicate when they are really connected because they pass through infinity (e.g., there is just one red edge, the two segments we see are connected at infinity). Lastly there are 3 vertices which, for once, are all visible to us. (Thus this triangulation has $V = 3, E = 6, F = 4$ and so Euler characteristic $\chi = 3 - 6 + 4 = 1$ as it must.)

All very well but what’s the point? Well, now note that in fact we also get a natural induced triangulation of the dual projective plane with the same number of vertices, edges and faces as the original. This is because three points in general position (i.e., non-collinear) in the projective plane uniquely determine three points in general position in the dual projective plane (i.e., three non-concurrent lines). In other words, it is equivalent to specify a triangle by providing its vertices or its sides. To spell this out, recall that the points of the dual projective plane are lines of the original and that the lines of the dual projective plane are points of the original (and that the incidence relation between points and lines is unchanged). The 4 faces of the dual projective plane triangulation correspond to the 4 2-dimensional families of lines that have no degenerate intersections (i.e., do not pass through A, B or E). The 6 edges of the dual triangulation correspond to the first set of 1-dimensional families of degenerate intersections for lines which pass through exactly one of A, B, E and the 3 vertices of the dual triangulation are the 3 lines AB, BE, EA.

Thus the different types of behaviour of the clipping line, including all its degenerate intersection possibilities, correspond to where it sits, as a point in the dual projective plane, in terms of the triangulation of the dual projective plane that is naturally induced by the viewport triangle. I find it pleasing that something which can be annoying, namely the degenerate intersection cases, can be described so concisely.

One final word on the use of the projective plane. Since the problem of clipping concerns only incidence geometry, we are free to employ the full group of projective transformations as a tool to simplify our problem. I didn’t bother, but it is easy to find a projective transformation that takes the eye to infinity and the vertices A, B to (0, 0), (1, 0) and the lines AE, BE to vertical lines. Doing this transforms the viewport into a semi-infinite rectangle which can greatly simplify and speed up clipping calculations. I believe this is the standard approach when clipping in three dimensions.

Lastly note that if we really wanted to opt for purity in our algorithm, we could construct an algorithm which is only told which of the 13 cases it is in and then told the ordering of the 4 points on the clipping line that are P, Q, I, J where P, Q are as above and I, J are the vertices of the side which define the clipping line. I considered implementing this but decided there were more cases than I wanted to handle especially as we need to consider cases when some of the points P, Q, I, J coincide. I believe this could be done elegantly though.

### Quickly testing multiple eye positions for same polygon

Here is a question which the USACO did not ask but is worth mentioning briefly. Suppose we were required to solve this problem a large number of times for the same polygon but differing eye positions. How might we do this efficiently?

Having already indulged myself with ludicrously expansive comments on the projective plane I will keep it brief and say only that good algorithms would be based around spatial subdivision data structures (e.g., BSP trees with nodes corresponding to lines through polygon sides). The point is that if we subdivide the plane into regions in which the set of sides visible to the eye is constant then we need only decide which region the eye is in to solve the problem. Thus the challenge is to represent this planar subdivision in a way that is optimized for repeated querying. For example, it would be possible to set things up so that query time is determined by the (log of the) size of a BSP tree and not (directly) the number of sides of the polygon.
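To make the repeated-query idea concrete, here is a minimal Python sketch of BSP point location; the toy splitting lines and region labels below are illustrative only (in the real structure the splitting lines would come from the polygon’s sides and each leaf would carry the set of visible sides for its region):

```python
class BSPNode:
    """Inner node: splitting line ax + by + c = 0, with subtrees for the
    nonnegative and negative half-planes."""
    def __init__(self, line, front, back):
        self.line, self.front, self.back = line, front, back

def locate(node, p):
    """Walk down to the leaf whose region contains the point p."""
    while isinstance(node, BSPNode):
        a, b, c = node.line
        node = node.front if a * p[0] + b * p[1] + c >= 0 else node.back
    return node

# Toy tree: split on x = 0, then (for x >= 0) on y = 0; leaves label regions.
tree = BSPNode((1, 0, 0), BSPNode((0, 1, 0), "NE", "SE"), "W")
```

Each query walks one root-to-leaf path, so its cost is the tree depth rather than the number of polygon sides.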

Back in the good old days of first person shooters when all walls were vertical and you could not look up and down, i.e., when I was idkfa-ing my way through Doom, this was exactly the problem that had to be solved and I believe BSP trees were part of the solution that John Carmack used. Doom’s predecessor Wolfenstein 3D employed simpler raycasting-based algorithms and the geometry of the levels was necessarily much simpler (though on the flip side you did get to shoot it out with Hitler at the end of Wolfenstein).

I think it would be fun if the next challenge the USACO website presented to the user who has just finished “fence4” was the multiple-eye-position question with otherwise unchanged parameters but a huge number of different eye positions and a tight running time constraint!

### USACO test cases

I thought I would end by exhibiting a few of the test cases that the USACO website uses to test this problem. (You get to see the tests after you have submitted your code.) They range from embarrassingly easy cases such as this rectangular polygon:

to more sensible tests like these:

and even include the following rotter:

When I saw this last case, I was glad I had opted for an exact integer-arithmetic-based solution.

## On Klein’s icosahedral solution of the quintic

Icosahedral tiling of the sphere (click to view PDF)


[EDIT: Post updated in March 2013]

### PDFs available

Those interested in the below may wish to consult either:

### A solution of the quintic

Since the work of Ruffini (1799) and Abel (1824) it has been known that it is not possible to solve the quintic in radicals. In 1858, Hermite presented the first solution of the quintic; the non-radical functions he added to his toolbox for the purpose were certain modular functions. This solution received a lot of attention and shortly afterwards Kronecker and Brioschi also published solutions. Eventually, in about 1878, the great geometer Klein (drawing on work of Gordan and, crucially, of Galois) showed how to unify the different approaches and also demonstrated a new, extremely beautiful solution using icosahedral functions. Here is (a slightly simplified version of) that solution:

Given the quintic equation:

$y^5 + 5y + \gamma = 0$

set:

$\nabla = \sqrt{\gamma^4 + 256}$

$Z = \frac{1}{2\cdot 1728} [2\cdot 1728 + 207\gamma^4 + \gamma^8 - \gamma^2 (81 + \gamma^4)\nabla]$

$z = \frac{{}_2F_1(\frac{31}{60}, \frac{11}{60}; \frac{6}{5}; Z^{-1})} {(1728Z)^{1/5}{}_2F_1(\frac{19}{60}, -\frac{1}{60}; \frac{4}{5}; Z^{-1})}$

and

$f(z) = z(z^{10} + 11z^5 - 1)$

$H(z) = -(z^{20} + 1) + 228(z^{15} - z^5) - 494z^{10}$

$T(z) = (z^{30} + 1) + 522(z^{25} - z^5) - 10005(z^{20} + z^{10})$

$B(z) = -1 - z - 7(z^2 - z^3 + z^5 + z^6) + z^7 - z^8$

$D(z) = -1 + 2z + 5z^2 + 5z^4 - 2z^5 - z^6$

Then

$y = -\gamma\cdot\frac{f(z)}{H(z)/B(z)} - \frac{7\gamma^2 + 9\nabla}{2\gamma(\gamma^4 + 648)} \cdot\frac{D(z)T(z)}{f(z)^2H(z)/B(z)}$

is a root!

Replacing $z$ with $e^{2\pi\nu i/5}z$ for $\nu=1, 2, 3, 4$ provides all the other roots. Also, in case you’re wondering why I have organised terms using $H/B$, this is because $B$ is a factor of $H$. I have expressed things this way for reasons that will become clear in a few paragraphs’ time when we consider the locations of the roots of these polynomials.

The first thing to notice in the above is that the non-radical functions employed are certain hypergeometric functions. Although they may look somewhat arbitrary at first sight, these are extremely special hypergeometric functions which belong to Schwarz’s list of algebraic hypergeometric functions (those with finite monodromy). The quotient of the pair used in the above equation inverts the 60-fold branched covering of the complex projective line with $A_5$ monodromy. The picture at the top of this post shows that branched covering; it is also a picture of the icosahedral tiling of the sphere.

The next thing to consider in the above formula is the invariant $Z$. This is an icosahedral invariant of the quintic and it has a very neat geometric interpretation. The vector of roots of a quintic defines an $S_5$-orbit in complex projective 4-space. If the quintic has no degree 4 or 3 terms, this orbit lies on the quadric surface and in fact the $S_5$-action on the quadric is induced from a natural action on the lines in this quadric. If we adjoin the square root of the discriminant $\nabla$ then the $S_5$-action becomes an $A_5$-action and each of the two families of lines in the ruled surface carries an $A_5$-action. These families are parameterised by complex projective lines and the quotient by $A_5$ defines the icosahedral invariants. Evidently, it is this quotient that we invert using the hypergeometric functions mentioned above.

Note that even the formula for $Z$ begins to tell us about the quintic. For example, we can see that if $\gamma^4 = -81$ then the second term in the formula for $Z$ vanishes so that it does not depend on the choice of square root of the discriminant. This corresponds to the fact that this quintic is reducible. (With a sign change, we can even see this without leaving the integers, e.g., $y^5 - 5y + 3 = (y^2 + y - 1)(y^3 - y^2 + 2y - 3)$).
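The quoted factorisation is easy to check mechanically with a throwaway polynomial multiplication (the helper below is mine, with coefficients in ascending order):

```python
def poly_mul(p, q):
    """Multiply polynomials given as ascending lists of coefficients."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

# (y^2 + y - 1) * (y^3 - y^2 + 2y - 3) == y^5 - 5y + 3
assert poly_mul([-1, 1, 1], [-3, 2, -1, 1]) == [3, -5, 0, 0, 0, 1]
```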

Lastly the five auxiliary polynomials $f, H, T$ and $B, D$ also have geometric interpretations. The roots of the first three $f, H, T$ are, respectively, the locations of the projection of the vertices, face centres and edge midpoints of a regular icosahedron onto its circumsphere (once this circumsphere has been identified with the extended complex plane by stereographic projection). The roots of the last two $B, D$ are, respectively, the locations of the vertices and face centres of a regular cube inscribed in the icosahedron. Indeed the vertices of this cube are the vertices of a pair of dual tetrahedra inscribed in the icosahedron, one of which I show below:

Icosahedron with inscribed tetrahedron

The notes I mentioned above provide similar (more complicated) formulae for the case of a quintic in the more general form $y^5 + 5\alpha y^2 + 5\beta y + \gamma = 0$. (Any quintic can be put in this form by solving one auxiliary quadratic equation.)

### Background

My own interest in this problem dates from when I was at high school. Back then I spent a while delving into E.T. Bell’s “Men of Mathematics” books. I was warned that they were somewhat biased and not all that accurate historically but I still enjoyed them. At the time I could not appreciate much of the mathematics that Bell mentioned but I did understand bits here and there. In particular I always remembered that he mentioned that Klein had established a connection between the rotations of the icosahedron and the solution of the quintic equation. I couldn’t imagine how this would work but I never forgot this intriguing idea. Searching through the books now over 15 years later I see that the relevant passage is from Bell’s discussion of Cauchy (ironically Bell doesn’t discuss Klein):

To give but one instance, the set of all rotations which twirl a regular icosahedron (twenty-sided regular solid) about its axes of symmetry, so that after any rotation of the set the volume of the solid occupies the same space as before, forms a group, and this group of rotations, when expressed abstractly, is the same group as that which appears, under permutations of the roots, when we attempt to solve the general equation of the fifth degree. [...] This beautiful unification was the work of Felix Klein (1849-1925) in his book on the icosahedron (1884).

Last year (i.e., 2011) I found myself considering how this connection might work. I bought a copy of Klein’s “Lectures on the Icosahedron and the Solution of the Fifth Degree” and started reading it and various other references. It was a delight to finally understand the connection. I decided to make a few notes for myself and as they took shape I thought it might be worth making them available online. Part of my motivation for doing so was this Mathoverflow question. It took me quite a while to get round to finishing them but I have at last done so and I have posted them online here. Any feedback would be greatly appreciated.

### Sketching out the connection

Although I have written up the details of the quintic’s icosahedral geometry in the notes I mentioned, it might be worth saying a few brief words here. From an abstract point of view, one reason for the connection between the quintic and the icosahedron is that the group $A_5$ can be made to play three roles:

• As the Galois group of a general quintic (together with a distinguished square root of its discriminant)
• As the group of rotations of the icosahedron
• As the monodromy group of the hypergeometric differential equation with appropriate parameters (from Schwarz’s list)

More concretely, using a radical transformation, any quintic can be put in the form:

$y^5 + 5\alpha y^2 + 5\beta y + \gamma = 0$

The vector of roots then lies on the doubly-ruled quadric surface:

Doubly-ruled surface
(photo of a model from Gaudi’s house in Park Güell, Barcelona)

Since the quintic, together with a square root of its discriminant, only determines the vector of roots up to an even permutation of its coordinates, we in fact obtain an $A_5$-orbit in each of the two families of lines in the doubly-ruled surface. These families are naturally parameterised by the complex projective line which can be identified with the circumsphere of the icosahedron in such a way that the Galois $A_5$ action becomes the group of icosahedral rotations. The quintic thus defines a point in the quotient of the circumsphere of an icosahedron by its group of rotations. Finally, it is not hard to see that finding a local inverse for this branched cover is equivalent to solving the quintic and, moreover, this can be accomplished using hypergeometric functions with $A_5$ monodromy.

### Implementing the solution

I think there’s nothing like actually doing something for yourself so I decided to check my calculations by writing a short Python script to implement the icosahedral solution of the quintic. For minimalists, there is a short script for solving quintics in Bring-Jerrard form available for download here (also shown below) and for those interested in the more general solution there is also a corresponding script available here. I am happy to report that both work!

import mpmath, numpy as N

def nabla(cc):
    return N.exp(N.log(cc**4 + 256 + 0j)/2)

def Z(cc):
    return (2*1728 + cc**4*(207 + cc**4) - cc*cc*(81 + cc**4)*nabla(cc))/(2*1728)

def z(ZZ):
    mpmath.mp.dps = 20
    return N.exp(-N.log(1728*ZZ) / 5) * mpmath.hyp2f1(31.0/60, 11.0/60, 6.0/5, 1.0/ZZ) /\
           mpmath.hyp2f1(19.0/60, -1.0/60, 4.0/5, 1.0/ZZ)

def HB(zz):
    return (zz**4 - 3*zz**3 - zz**2 + 3*zz + 1) *\
           (zz**8 + 4*zz**7 + 7*zz**6 + 2*zz**5 + 15*zz**4 - 2*zz**3 + 7*zz**2 - 4*zz + 1)

def D(zz):
    return -1 + 2*zz + 5*zz**2 + 5*zz**4 - 2*zz**5 - zz**6

def f(zz):
    u = zz**5
    return zz*(-1 + u * (11 + u))

def T(zz):
    u = zz**5
    return 1 + u * (-522 + u * (-10005 + u * u * (-10005 + u * (522 + u))))

def y(cc):
    ys = []
    eps = N.exp(2*N.pi*1j / 5)
    zz = z(Z(cc))
    for n in range(5):
        ys.append(-cc * f(zz) / HB(zz) -\
                  (7*cc*cc + 9*nabla(cc))/(2*cc**5 + 2*648*cc)*D(zz)*T(zz)/(HB(zz)*f(zz)**2))
        zz *= eps
    return [(yy, abs(yy**5 + 5*yy + cc)) for yy in ys]

for (z, err) in y(-2.8234): # Randomly chosen value for quintic (see xkcd #221).
    print '%.6f + %.6fi   (%.10f)' % (z.real, z.imag, err)


## Magnetic Core Memory Reborn

16 bits of magnetic core memory (click to enlarge)


I’ve been waiting a long time to write this post so it is with great satisfaction that I finally sit down to do so.

I recently finished working on a fairly long project which I carried out jointly over the past year or so. My friend Ben had been reading about magnetic core memory and had discovered that it was possible to buy old, surplus magnetic cores on eBay. This gave him the idea that it might be fun to try to build a modern core memory module. He mentioned this idea to me and explained a bit about the technology. It wasn’t long before we had agreed to take on the challenge. As far as we knew, it was going to be the first time somebody had made a core memory module for decades!

In fact this post is really just an advertisement for the real report which is hosted here. Please visit if you are interested.

It was an enormously satisfying project for a number of reasons. We learned about the physics of magnetic materials (in much greater depth than we needed, as it happened); we learned exactly how this interesting technology works; we learned how to design PCBs, how to have them fabricated and how to reflow-solder them at home; and we acquired huge respect for the original inventors of the technology, whose task was so much more difficult than ours. Also, importantly, and despite some unexpected challenges, we were ultimately successful. Taming the analog world and building a reliable digital memory module was indeed a rewarding experience.

The original 8-bit prototype (click to enlarge)

There were quite a few stages to the project and we hope to write up the full details in the near future. However, we decided we would first write up a clear, clean report of how we think one should go about building a core memory module, rather than exactly how we did it.

Lastly, we couldn’t resist the temptation to publish on the 60th anniversary of the patent which cast core memory into its current form. Core memory had several inventors, as is evident from the US patent history. The first patent, 2992414, filed on May 29, 1947 by F.W. Viehe, seems to be the earliest mention of the idea. Just over two years later, on Oct 21, 1949, A. Wang (founder of Wang Laboratories) filed patent 2708722, in which the idea was apparently reintroduced. However, it was on May 11, 1951 that J.W. Forrester filed patent 2736880, in which the final key idea, coincident-current addressing, was introduced. Coincident-current addressing was the idea which made large core memory arrays a reality. Indeed, IBM paid MIT \$13m for this patent in 1964, according to the Computer History Museum.

In any case, here again is the link to our report on this the 60th anniversary of Forrester’s patent application.

The final 32-bit version (click to enlarge)

Please do get in touch with both me and with Ben if you are interested in our work.

## Dublin bikes revisited

Sadly the Dublin bikes do not look like this.


### Same idea, more data and more cities

Quite a while ago I wrote a post about the Dublin bikes scheme. J.C.Decaux, who run the scheme, make real-time data about the location of the bikes throughout the city available and I thought it would be interesting to collect this data and carry out some simple analysis of it.

At the time when I wrote my previous post, October 2009, I only had 7 weeks of data (about the length of time the scheme had then been running). I was pleased to discover that it was possible to learn a bit about Dubliners’ bike habits even with this limited quantity of data. Nevertheless, I wanted to return to the idea when I had a bigger data-set.

It has taken a while but I have finally collected over a year of data for the Dublin bikes scheme as well as almost a year of data for similar bike schemes (also run by J.C.Decaux) in Brussels, Lyon, Paris and Seville.

### Questioning the data

To my mind, there are all sorts of questions one might investigate using this bike data. Below I describe those that I looked into. I would be interested to hear about others’ work. (Incidentally, I have made the data available for download – see below.)

### Bike usage by time of day

The fundamental datum that J.C.Decaux make available is the number of bikes docked at each station at a given point in time. Let us call this a snapshot of a given station at a moment in time. By summing the snapshot data across all stations at the same time, and subtracting the total from the number of bikes in the scheme, we can deduce how many bikes are in use. Taking the average over all days in the data-set and treating weekdays and weekends separately (for obvious reasons), we can generate charts of bike usage as a function of time of day.
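A sketch of this computation in Python, assuming the scraped data has already been summed city-wide into (day, time slot, bikes docked) records; the field layout and fleet size are illustrative, not the actual data format:

```python
from collections import defaultdict

def usage_profile(snapshots, total_bikes):
    """snapshots: (day, slot, bikes_docked_citywide) records.
    Returns {slot: average bikes in use, relative to the midnight level}."""
    per_slot = defaultdict(list)
    for _day, slot, docked in snapshots:
        per_slot[slot].append(total_bikes - docked)
    avg = {s: sum(v) / len(v) for s, v in per_slot.items()}
    midnight = avg.get(0, 0)
    return {s: a - midnight for s, a in avg.items()}
```

Subtracting the midnight average reproduces the normalisation used in the charts.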

In the charts below we plot the average number of bikes in use as a function of time of day, minus the number that were in use at midnight.

Dublin bike usage by time of day (weekdays)

Dublin bike usage by time of day (weekends)

The peaks in the morning and evening presumably represent people going to and coming from work. As such they give us a way to estimate the length of people’s working days (including the time getting to/from the bike stations). I made this observation in my previous post and it occurred to me that it might be interesting to use this as a method for comparing the length of the working day across different European cities. At the time, I did not have any data for other cities. Now however, that has all changed! Below are the corresponding (weekday) charts for Brussels, Lyon, Paris and Seville.

Brussels bike usage by time of day

Lyon bike usage by time of day

Paris bike usage by time of day

Seville bike usage by time of day

The first thing that strikes me about this data is that of the five cities, four of them have reasonably unsurprising profiles but one, Brussels, does not. Based solely on this quick plotting exercise, it looks like something may be wrong (or at least very different) with the Brussels bike scheme. In fact I was not completely surprised when I saw this because I had had some correspondence with an entrepreneurial Brussels bike user, Jonathan Van Parys, who mentioned he had concerns about the Brussels bike scheme. In his words:

Brussels’s physical geography leads to a rather inefficient allocation of the bikes as the day goes on (ie. stations at the top of hills are empty while those at the bottom are full), which can be a little frustrating

Furthermore he has set up an excellent website with useful data about the Brussels bike scheme with the intention of drawing attention to this important issue.

Returning to the original idea of comparing different European cities’ apparent/implied working hours, let us gather our data in a table:

| City | Morning peak (local time) | Evening peak (local time) | Time between peaks |
|---|---|---|---|
| Brussels | 08:45 | None | N/A |
| Dublin | 08:50 | 17:45 | 8 hrs 55 mins |
| Lyon | 08:50 | 18:10 | 9 hrs 20 mins |
| Paris | 08:55 | 18:50 | 9 hrs 55 mins |
| Seville | 07:55 | 20:25 | 12 hrs 30 mins |

So what do we make of that!? Certainly the data does not suggest that our European counterparts are slacking off with a shorter working day. Indeed Dublin comes bottom of the four cities with a measurable evening peak. The peak times are much less well defined for the other cities than for Dublin (because there are far more bikes and stations), but even allowing for this, there is no way this data suggests we work any harder here in Dublin (indeed it is hard not to conclude the opposite!). Of course this is an extremely crude method and shouldn’t really be taken very seriously for a whole host of reasons, but I still thought it would be fun to see what I might find.
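The "time between peaks" figures above just come from reading off the busiest minute of the morning and of the evening. A sketch of that calculation, with made-up peak-search windows (the window bounds are my own assumptions):

```python
def peak_gap(usage, morning=(360, 720), evening=(900, 1320)):
    # usage: {minute_of_day: average bikes in use} (hypothetical layout).
    # Find the busiest minute inside each window and return the gap.
    # The window bounds (06:00-12:00 and 15:00-22:00) are assumptions.
    def peak(lo, hi):
        window = {m: u for m, u in usage.items() if lo <= m < hi}
        return max(window, key=window.get)
    am, pm = peak(*morning), peak(*evening)
    return am, pm, pm - am
```

For Dublin-like peaks at 08:50 (minute 530) and 17:45 (minute 1065) this gives a gap of 535 minutes, i.e. 8 hrs 55 mins.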

Two other brief points here are

1. The reason the graphs for the other cities, especially Paris and Brussels, are so noisy is that there are a great many more stations in these cities than there are in Dublin. As a result, and because it takes time to scrape the data from each station, I have a similar number of records per day for these cities as I do for Dublin but a much smaller number of snapshots of the entire city. (E.g., if we have 100 records between 10 stations we have 10 snapshots of the city but if we have 100 records between 50 stations, we only have 2 snapshots of the city). Fewer data-points means more noise.
2. There appear to be two peaks around lunchtime for Lyon. I do not know why this is the case and would be interested if anyone has an idea. (Perhaps it is a seasonal effect and people try to avoid the hottest sun in the summer? I could look for this in the data but I haven’t bothered.)
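The arithmetic in point 1 is simply that a full snapshot of the city needs one record per station:

```python
def full_city_snapshots(records, stations):
    # One complete snapshot of the city requires one record per station,
    # so the number of complete snapshots is records // stations.
    return records // stations
```

So 100 records over 10 stations gives 10 city snapshots, but 100 records over 50 stations gives only 2.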

### Seasonality

Another obvious question I thought I’d ask of the data was how the Summer compared to the Winter and how Mondays compared to Fridays in terms of bike usage.

Regarding Summer vs. Winter, the situation is mildly interesting. The peaks in our usage-by-time-of-day graph occur at exactly the same time in both seasons, and morning usage peaks at the same value. However usage peaks at a significantly smaller value later in the day in Winter than in Summer. My guess is that people prefer using the bikes in Summer, but not so much that they are willing to be late for work! (So morning usage is unaffected by the weather, but afternoon/evening usage is not.)

Dublin bike usage by time of day (weekdays): Winter vs. Summer

For Monday vs. Friday, there is no significant difference. I had wondered if it might be possible to observe a late arrival/early departure time effect but the data does not even hint at its presence.

Dublin bike usage by time of day (weekdays): Monday vs. Friday

### Bike usage by station

Yet another question one can ask is how the different stations compare to each other in terms of busyness. This is impossible to measure perfectly using the snapshot-type data that I have because many dockings/undockings can take place between two snapshots and it is impossible to know how many. Nevertheless we can count how many times each station’s snapshot changes throughout the day to get an approximate idea of station busyness. This is a biased measure (e.g., it is biased against stations that have intense but sparse periods of busyness and in favour of stations with a more even spread of busyness) but there should still be some value in its results. Here is what I found:
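The change-counting measure just described can be sketched in a couple of lines, assuming a chronological list of docked-bike counts per station:

```python
def station_busyness(snapshots):
    # snapshots: chronological list of docked-bike counts for one
    # station (hypothetical layout). Busyness is approximated as the
    # number of times the count changes between consecutive snapshots.
    return sum(1 for a, b in zip(snapshots, snapshots[1:]) if a != b)
```

Note this undercounts whenever a docking and an undocking cancel out between two snapshots, which is exactly the imperfection mentioned above.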

Busyness by station (count of snapshot changes versus busyness rank)

Evidently there is quite a significant spread. I would expect the true chart to have approximately the same ordering but a much greater range of values. Even based on this data, it is worth noting that Hardwicke Street and Parnell Square North stand out as extremely underused stations relative to the others. These two are quite close to each other; it might be interesting to plot station usage on a map to see this data in terms of the city geography. Below is the underlying data for the chart, including the mapping from busyness rank to station name.

| Busyness rank | Station name | Busyness |
|---|---|---|
| 40 | Hardwicke Street | 10436 |
| 39 | Parnell Square North | 13651 |
| 38 | Georges Quay | 15947 |
| 37 | Blessington Street | 16722 |
| 36 | Greek Street | 18594 |
| 35 | Fitzwilliam Square West | 19516 |
| 34 | Bolton Street | 20301 |
| 33 | Custom House | 20442 |
| 32 | Ormond Quay Upper | 20794 |
| 31 | Eccles Street | 21521 |
| 30 | Christchurch Place | 21957 |
| 29 | St. Stephen’s Green East | 22104 |
| 28 | Golden Lane | 23741 |
| 27 | Leinster Street South | 23997 |
| 26 | Merrion Square West | 24129 |
| 25 | Dame Street | 24400 |
| 24 | Cathal Brugha Street | 24824 |
| 23 | Parnell Street | 26293 |
| 22 | Mountjoy Square West | 26692 |
| 21 | Jervis Street | 26733 |
| 20 | James Street East | 26879 |
| 19 | Molesworth Street | 27536 |
| 18 | Earlsfort Terrace | 28074 |
| 17 | High Street | 28082 |
| 16 | Townsend Street | 28662 |
| 15 | Merrion Square East | 29301 |
| 14 | St. Stephen’s Green South | 29408 |
| 13 | Fownes Street Upper | 29743 |
| 12 | Talbot Street | 31604 |
| 11 | Wilton Terrace | 32323 |
| 10 | Portobello Harbour | 32326 |
| 9 | Charlemont Street | 32999 |
| 8 | Exchequer Street | 34557 |
| 7 | Princes Street / O’Connell Street | 34765 |
| 6 | Smithfield | 34829 |
| 5 | Grantham Street | 35034 |
| 4 | Chatham Street | 36681 |
| 3 | Herbert Place | 39368 |
| 2 | Pearse Street | 39857 |
| 1 | Custom House Quay | 40035 |

### Weather effects

Finally I thought it might be worthwhile looking at weather effects directly (not just through the seasons). I found that Met Eireann make historical weather station data available and scraped a little over a year of it (available for download below). I tested bike usage against weather station data from Dublin Airport (obviously I would prefer more central weather station data, like Merrion Square, but I could not get it). The data I tested against was:

• Rainfall (in mm)
• Sunshine (in hours)
• Minimum day’s temperature (in degrees Celsius)
• Maximum day’s temperature (in degrees Celsius)

I had one data-point for each of the above measurements per day. I calculated a day’s busyness in terms of bike usage by adding up the number of changes to a station’s snapshots over the whole day, across all stations. I then scatter-plotted this busyness against the corresponding day’s weather figure for each day.

At first I expected the rainfall data to have the strongest predictive power, but this turned out not to be the case. Thinking about it, I should not have been surprised. Firstly, I only have one point in the scatter plot per day and only just over a year of data, so only about 400 data-points; and while it might seem surprising to those of us who live in Dublin, it did not actually rain at all on the vast majority of days, so most of the data-set is wasted on dry days. Secondly, while I believe rain is a very strong disincentive for people to cycle, it usually only rains for relatively brief periods, so we would probably need rainfall data on a much finer timescale, like every 5 or 10 minutes, to see the effect clearly. Finally, we have rainfall data for Dublin Airport, not Dublin City Centre. In any case, below are the charts.

Dublin rainfall (mm) as a function of time
note how similar Summer and Winter are!

Busyness vs. rainfall (mm) (no apparent relationship)

After giving up on rainfall, I did however find that there appeared to be a relationship between maximum day’s temperature and bike usage. This makes sense, and it is easy to believe that such an effect would not be as susceptible to the problems that afflict the rainfall data, since temperature is much less localised than rain and persists for longer. While the scatter plot is noisy, it does look like there is a plausible positive relationship. (Btw if you’re an experimental physicist, the relationship probably looks weak; to a quant. in finance it looks strong!) Here are the charts:

Dublin maximum temperature as a function of time
note the maximum is below 0 in January 2010!

Busyness vs. max temp. (plausible positive relationship)
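One way to put a number on "plausible positive relationship" is a plain Pearson correlation between the daily busyness and maximum-temperature series. A sketch (I have not computed, and am not asserting, a value for the real data):

```python
def pearson(xs, ys):
    # Plain Pearson correlation coefficient between two equal-length
    # series, e.g. daily busyness vs. daily maximum temperature.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```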

### Further thoughts

There is a bike-sharing blog that is worth a visit if you are at all interested in this sort of thing, if only to see its map of bike schemes around the world. I had not realised they were so widespread till I saw this.

Although I like the Dublin bike scheme, I can’t help wondering if it is good value for money. It may well be, but I can’t help wondering. It seems to me that there is a potential conflict of interest for the city councillors who make the decisions (the scheme is popular so they could be biased in favour of it, even at a bad price), or at least that the best decisions may not be made because of a possible asymmetric perception of opportunity cost relative to upfront cost by either the councillors or the public. A significant part of the payment for the scheme is in advertising revenue forgone by providing J.C.Decaux with free hoardings throughout the city. I would be interested to see the pricing detail. I have been told by those in the know that the keyword when buying such advertising space is the rate card, and it seems that this website has some figures for that. I have not quite found the time to look into this in detail myself but would be interested to hear others’ thoughts. I’m sure that a few FOI requests could turn up some interesting figures.

For my own part, I feel like I have spent more than enough time analysing bike schemes for now (indeed I had to force myself to write this post). However Dublin Bus are still promising that they will be providing GPS data on the location of their buses any day (they missed their own 2010 deadline). They claim to have a pilot scheme running on the 123 route and I did manage to find one stop with a so called real time passenger information (RTPI) display on this route, but it was blank. However in the last few weeks I have noticed the posts for several such displays appearing beside stops on Nassau Street so I expect this data may be on the way. I look forward to gathering and analysing it to see if I can detect any correlation at all between timetables and the behaviour of the buses.

### A post-script on technical details

Based on some of the queries I have received in relation to my other post on this topic, I thought it might be useful if I included a little bit of technical information about how/where to scrape the data. In that post, I give the URLs to visit in order to get the data but I did not supply the simple python script to actually scrape the data. I have received quite a few requests for this so here it is (in all its hacky glory!):

```python
import urllib, time, csv, sys, datetime, gzip
from xml.dom import minidom

URL_station_list = 'https://abo-%s.cyclocity.fr/service/carto'
URL_data = 'https://abo-%s.cyclocity.fr/service/stationdetails/%d'

def prev_day_s(d_s):
    d = datetime.date(int(d_s[:4]), int(d_s[4:6]), int(d_s[6:8]))
    d -= datetime.timedelta(days=1)
    return '%04d%02d%02d' % (d.year, d.month, d.day)

def yyyymmdd_from_epoch(t):
    tm = time.localtime(t)
    return '%04d%02d%02d' % (tm.tm_year, tm.tm_mon, tm.tm_mday)

def get_stations(city, date, get_file=True):
    fname = '%s/%s.stations.xml' % (city, date)
    if get_file:
        urllib.urlretrieve(URL_station_list % city, fname)
    stations = map(lambda x: int(x.getAttribute('number')), minidom.parse(fname).getElementsByTagName('marker'))
    assert len(stations) > 0  # Horrible hack to quickfix case when web serves up incorrect stations data.
    return sorted(stations)

if len(sys.argv) != 5:
    sys.stderr.write('Usage: python %s <city> <run time> <main loop delay> <request delay>\n' % sys.argv[0])
    sys.exit(1)

city = sys.argv[1]
(run_time, main_delay, req_delay) = map(float, sys.argv[2:])

start_epoch = time.time()
date = yyyymmdd_from_epoch(start_epoch)
try:
    stations = get_stations(city, date)
except:
    stations = get_stations(city, prev_day_s(date), False)

out = csv.writer(gzip.open('%s/%s.out.csv.gz' % (city, date), 'a'))
err = csv.writer(gzip.open('%s/%s.err.csv.gz' % (city, date), 'a'))
t = time.time()
while t < start_epoch + run_time:
    for i in stations:
        t = time.time()
        try:
            out.writerow([i, t] + map(str.strip, urllib.urlopen(URL_data % (city, i))))
        except:
            err.writerow([i, t, sys.exc_info()[0], sys.exc_info()[1]])
        time.sleep(req_delay)
    time.sleep(main_delay)
```


I also thought it would be worth recording the summary statistics for my various data-sets. So here are a few that seem relevant:

| City | Start date | End date | Number of records | Link to data |
|---|---|---|---|---|
| Brussels | 7-Dec-2009 | 8-Nov-2010 | 33,741,610 | bruxelles.csv.bz2 (121 MB) |
| Dublin | 19-Sep-2009 | 8-Nov-2010 | 30,909,771 | dublin.csv.bz2 (104 MB) |
| Lyon | 7-Dec-2009 | 8-Nov-2010 | 34,236,139 | lyon.csv.bz2 (147 MB) |
| Seville | 7-Dec-2009 | 8-Nov-2010 | 33,365,277 | seville.csv.bz2 (124 MB) |
| Paris | 7-Dec-2009 | 8-Nov-2010 | 40,232,297 | paris.csv.bz2 (189 MB) |

A record consists of a snapshot of a station and as such consists of a station ID, a date/time-stamp and a number which is the number of bikes docked at that station at that date/time. (It would be extremely interesting to get hold of J.C.Decaux’s data in which they can actually track individual bikes.)

Also, in case it is useful to somebody, here is the weather data page on Met Eireann, and the weather data itself: weather.csv.

One last point which might be worth making (if only to document it for my own sake) is that I ended up working with these moderately large data-sets in a raw csv format but that I experimented with using sqlite to manage them. A simple python script like this:

```python
import sqlite3
import sys
import datetime
import csv
import gzip
from xml.dom import minidom
from xml.parsers.expat import ExpatError
from utils import yyyymmdd_from_date, date_from_yyyymmdd, ssm_from_datetime

path = '../raw_data/v1/%s/%s.out.csv.gz'

city, startdate, enddate = sys.argv[1:4]
startdate = date_from_yyyymmdd(startdate)
enddate = date_from_yyyymmdd(enddate)

db = sqlite3.connect('%s.db' % city)
curs = db.cursor()
curs.execute('create table station_snaps (station_id integer, date text, ssm integer, available integer)')

d = startdate
while d <= enddate:
    db_rows = []
    try:
        for row in csv.reader(gzip.open(path % (city, yyyymmdd_from_date(d)))):
            try:
                station_id = int(row[0])
                timestamp = float(row[1])
                dt = datetime.datetime.fromtimestamp(timestamp)  # NB: This handles DST properly.
                xml_data = '\n'.join(row[2:])
                fld = minidom.parseString(xml_data).getElementsByTagName('available')[0].firstChild
                if fld is not None:
                    db_rows.append((station_id, yyyymmdd_from_date(dt), round(ssm_from_datetime(dt)), int(fld.data)))
            except (ValueError, IndexError, ExpatError):
                pass
    except (IOError, csv.Error):
        pass
    if len(db_rows) > 1:
        curs.executemany('insert into station_snaps values (?, ?, ?, ?)', db_rows)
        db.commit()
    print d
    d += datetime.timedelta(days=1)
db.close()
```


turns the scraped xml files into a nice sqlite3 file, and then we can, in principle, calculate many of our statistics very easily using SQL. For example:

```sql
select
    ssm_300, sum(avg_available)
from
    (select
         station_id, ssm/300 ssm_300, avg(available) avg_available
     from
         station_snaps
     group by
         station_id, ssm_300)
group by
    ssm_300
order by
    ssm_300
```


Unfortunately however this was just too slow (at least on my laptop), so I had to do things by hand. This is to be expected given that the database cannot exploit the natural time-ordering of the records, which my scripts could.
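To illustrate what "by hand" can look like, here is a sketch of a single-pass aggregation that mirrors the SQL above (accumulating per-station averages in 5-minute buckets) without any sorting; the row layout matches the `station_snaps` columns, but the function name and in-memory layout are my own:

```python
from collections import defaultdict

def average_available_by_bucket(rows, bucket_seconds=300):
    # rows: iterable of (station_id, seconds_since_midnight, available),
    # assumed time-ordered per station as in the raw files. One pass
    # accumulates sum and count per (station, 5-minute bucket).
    acc = defaultdict(lambda: [0, 0])
    for station, ssm, available in rows:
        cell = acc[(station, ssm // bucket_seconds)]
        cell[0] += available
        cell[1] += 1
    return {k: s / c for k, (s, c) in acc.items()}
```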

## A 3D printing puzzle

12 piece wooden puzzle


### A wooden puzzle

Back in 1998 when I was in Taiwan I bought a nice little wooden puzzle. The point of the puzzle is to disassemble/reassemble a cube like the one pictured above. It caught my eye because it had the same design as a puzzle that a friend from back home in Dublin had. I had often admired my friend’s puzzle but never knew where to get one (he had bought his while skiing in Andorra). These were the early days of online commerce and it was not so easy to get your hands on things. As a result I didn’t hesitate before snapping up the puzzle when I happened across it in the gift shop in Taipei.

A beautiful building in Liberty Square in Taiwan

Fast forward 10 years to 2008. A different friend was round at my apartment and took an interest in my nice Taiwanese puzzle. He was having trouble solving it so I encouraged him to take it home. Little did I realise. I was not to see that wooden puzzle again for almost 3 years (and btw he still hadn’t solved it when he finally returned it!).

Towards the end of those 3 years, i.e., a few months ago, I began to get interested in 3D printing, and the idea of having a replacement for my missing wooden puzzle made occurred to me. Although I lacked the puzzle, I had typed the coordinates of all the pieces into my computer about 10 years before (for a 3D rendering project I was working on). It had been pretty painful work typing up all those coordinates, so I had always been careful not to lose them. Now, finally, they were going to be really useful!

Two views of the 12 pieces of my virtual puzzle

The idea that I could turn these simple lists of coordinates into a physical 3D object hugely appealed to me. Also, I had (indeed I still have) some other more substantial 3D printing projects in mind and this simple puzzle request would give me data on the accuracy of the process. In other words, I thought having the puzzle made would be a good test run. Plus, I’d have my puzzle back!

### 3D Printing

We live in a brave new world: 3D printing is here! I for one find the idea of being able to print 3D objects extremely exciting.

There are a number of techniques, including selective laser sintering, fused deposition modeling and DLP/stereolithography (this last term is a little loose). Each of these builds up the 3D model as a series of relatively thin (usually ~0.1mm) 2D slices, and as such is an additive process, in contrast to more traditional machining methods. There are a number of firms making the various types of 3D printing machines, including ZCorp, Eos, Stratasys and Objet. There are even two open source 3D-printing possibilities that are financially within range of many a sufficiently interested hobbyist: Makerbot and Reprap. Finally there are several web-based firms which offer to 3D print a model on an industrial machine and post the results back to you. These firms include:

| Firm | Machines used |
|---|---|
| i.materialise | many machines |
| Ponoko | ZCorp SLS machines (Ponoko also offer excellent 2D laser cutting services) |
| Sculpteo | ZPrinter 650 by ZCorp and Formiga P100 by Eos |
| Shapeways | Stratasys FDM 400mc and Objet Eden 500(V) |

Each of these websites/machines accept various 3D file formats. The most widely accepted, and also the simplest, file format seems to be the so-called Stereolithography file format (extension STL). I decided to supply my pieces in this format and wrote a simple python script to convert my old lists of coordinates into this format as well as to tidy up the 3D geometry in various ways (orient normals consistently, introduce clearance space between the pieces, fix T-junctions etc.). The source code, which looks like this:

```python
from geometry import *
from collections import defaultdict
import sys

#...

piece = Solid(sys.argv[1][:-3] if sys.argv[1].endswith('.3d') else sys.argv[1])
piece.fix_normals()
piece.fix_T_junctions()
piece.create_clearance(0.5, 25)
piece.place_on_z_plane(int(sys.argv[2]), int(sys.argv[3]))  # Sorry, horrible hack!
piece.write_to_stl_file()
```


is available here, the raw input files are available here and finally the output, ready for 3D printing, is available here.
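For anyone curious what writing an STL file actually involves, here is a minimal sketch of an ASCII STL emitter (this is not my original script, just an illustration of the format: a `solid` containing `facet normal` / `outer loop` / `vertex` records):

```python
def stl_ascii(name, triangles):
    # triangles: list of ((v0, v1, v2), normal), where each vertex and
    # the normal are 3-tuples of floats. Returns the ASCII STL text.
    lines = ['solid %s' % name]
    for (v0, v1, v2), n in triangles:
        lines.append('  facet normal %f %f %f' % n)
        lines.append('    outer loop')
        for v in (v0, v1, v2):
            lines.append('      vertex %f %f %f' % v)
        lines.append('    endloop')
        lines.append('  endfacet')
    lines.append('endsolid %s' % name)
    return '\n'.join(lines)
```

(In practice the binary STL variant is more compact, but the ASCII form is the easiest to generate and debug.)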

Finally, with my data ready and in the right format, I uploaded my STL file to i.materialise. I was curious to see what their “alumide” material (a mixture of polyamide and aluminium powder) would be like so I requested to have my puzzle printed as alumide using selective laser sintering. I was pleased with the results:

Two views of the 12 pieces my replacement alumide puzzle

### Accuracy of the 3D-printed alumide puzzle

As I mentioned above, one of the reasons for having the puzzle made was to see how accurate an instance of the 3D printing process would be. Fortunately the puzzle is an easy object to measure because every surface point lies in a plane parallel to a coordinate plane. I thus used a digital caliper to measure the pieces of the puzzle. The caliper has a precision of 0.01mm and claims accuracy at this scale too; I doubt this, but based on a few tests I am reasonably confident it is accurate to at least 0.1mm (and I expect better). Based on 41 measurements I found a sample mean and standard deviation of the error of 0.24mm and 0.11mm respectively. I also found that all errors were positive, meaning they were due to too much material being deposited; no measured dimension was too small. (Incidentally this means that if you care equally about errors from too much and too little material, you should shrink your model by ~0.24mm in each measured dimension, aiming for a zero mean signed error and a mean absolute error of about 0.12mm.)

The alumide puzzle assembled.

For the sake of completeness I might as well provide the table of measurements I took.

Measurements

| Record ID | Piece | Normal direction | Plane 1 location | Plane 2 location | Theoretical distance | Measured distance | Error |
|---|---|---|---|---|---|---|---|
| 1 | 1 | z | 0 | 19.5 | 19.5 | 19.67 | 0.17 |
| 2 | 1 | z | 0 | 19.5 | 19.5 | 19.73 | 0.23 |
| 3 | 1 | x | -25 | 25 | 50 | 50.29 | 0.29 |
| 4 | 1 | x | -25 | 25 | 50 | 50.24 | 0.24 |
| 5 | 1 | x | -25 | -5.5 | 19.5 | 19.79 | 0.29 |
| 6 | 1 | z | 8.5 | 14.5 | 6 | 6.56 | 0.56 |
| 7 | 1 | y | -25 | 25 | 50 | 50.24 | 0.24 |
| 8 | 1 | y | -25 | 25 | 50 | 50.22 | 0.22 |
| 9 | 1 | y | -5 | 5 | 10 | 10.32 | 0.32 |
| 10 | 2 | z | 0 | 19.5 | 19.5 | 19.9 | 0.40 |
| 11 | 2 | z | 0 | 19.5 | 19.5 | 19.86 | 0.36 |
| 12 | 2 | x | -25 | 25 | 50 | 50.25 | 0.25 |
| 13 | 2 | x | -25 | 25 | 50 | 50.29 | 0.29 |
| 14 | 2 | x | -25 | -5.5 | 19.5 | 19.76 | 0.26 |
| 15 | 2 | z | 8.5 | 14.5 | 6 | 6.45 | 0.45 |
| 16 | 2 | y | 80 | 30 | 50 | 50.2 | 0.20 |
| 17 | 2 | y | 80 | 30 | 50 | 50.2 | 0.20 |
| 18 | 2 | y | 50 | 60 | 10 | 10.33 | 0.33 |
| 19 | 3 | y | 80 | 30 | 50 | 50.16 | 0.16 |
| 20 | 3 | y | 60 | 50 | 10 | 10.21 | 0.21 |
| 21 | 3 | z | 0 | 9 | 9 | 9.14 | 0.14 |
| 22 | 3 | x | 45.5 | 54.5 | 9 | 9.34 | 0.34 |
| 23 | 3 | x | 45.5 | 54.5 | 9 | 9.42 | 0.42 |
| 24 | 5 | x | 30 | 80 | 50 | 50.21 | 0.21 |
| 25 | 5 | x | 30 | 37.5 | 7.5 | 7.54 | 0.04 |
| 26 | 5 | x | 72.5 | 80 | 7.5 | 7.55 | 0.05 |
| 27 | 5 | y | -0.5 | -9.5 | 9 | 9.28 | 0.28 |
| 28 | 5 | z | 0 | 4 | 4 | 4.08 | 0.08 |
| 29 | 7 | y | 80 | 30 | 50 | 50.15 | 0.15 |
| 30 | 7 | y | 49.5 | 30 | 19.5 | 19.64 | 0.14 |
| 31 | 7 | z | 0 | 9 | 9 | 9.32 | 0.32 |
| 32 | 7 | x | 44.5 | 30 | 14.5 | 14.66 | 0.16 |
| 33 | 9 | x | 30 | 80 | 50 | 50.09 | 0.09 |
| 34 | 9 | y | -10.5 | -25 | 14.5 | 14.63 | 0.13 |
| 35 | 9 | z | 0 | 9 | 9 | 9.36 | 0.36 |
| 36 | 11 | x | 85.5 | 94.5 | 9 | 9.28 | 0.28 |
| 37 | 11 | y | 25 | -25 | 50 | 50.21 | 0.21 |
| 38 | 11 | z | 0 | 9 | 9 | 9.08 | 0.08 |
| 39 | 12 | x | 85.5 | 94.5 | 9 | 9.3 | 0.30 |
| 40 | 12 | y | 80 | 30 | 50 | 50.17 | 0.17 |
| 41 | 12 | z | 0 | 9 | 9 | 9.16 | 0.16 |
| | | | | | | **Avg** | 0.24 |
| | | | | | | **Std** | 0.11 |
| | | | | | | **Min** | 0.04 |
| | | | | | | **Max** | 0.56 |
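For anyone who wants to re-derive the summary figures, the error column and the standard sample statistics are enough:

```python
# The 41 error values (in mm) from the measurement table above.
errors = [0.17, 0.23, 0.29, 0.24, 0.29, 0.56, 0.24, 0.22, 0.32, 0.40,
          0.36, 0.25, 0.29, 0.26, 0.45, 0.20, 0.20, 0.33, 0.16, 0.21,
          0.14, 0.34, 0.42, 0.21, 0.04, 0.05, 0.28, 0.08, 0.15, 0.14,
          0.32, 0.16, 0.09, 0.13, 0.36, 0.28, 0.21, 0.08, 0.30, 0.17,
          0.16]
n = len(errors)
mean = sum(errors) / n
# Sample standard deviation (n - 1 in the denominator).
std = (sum((e - mean) ** 2 for e in errors) / (n - 1)) ** 0.5
```

This reproduces the Avg 0.24 / Std 0.11 / Min 0.04 / Max 0.56 row.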

### Final words on the 12 piece puzzle

I thought I’d finish with a brief appeal for information. In the course of the above I began to wonder about the origin of my puzzle. I would be interested to know when/where it was invented. I was unable to determine this after a few hours of googling but I think it likely some facts about the origin are known. I did come across this page where it is claimed that the puzzle dates from at least the 19th century (incidentally there is also a clear solution to the puzzle given on that page).

I also discovered that there is an extremely extensive database of mechanical puzzles assembled by Jerry Slocum known as the Slocum collection but I could not find my puzzle in the online database. Apparently a classic puzzle reference book is Prof. Hoffman’s “Puzzles old and new”, 1893 so it would be interesting to see if my puzzle is mentioned there but the book is hard to come by. My guess is the puzzle dates from 19th century Europe but it seems that puzzles have been very popular in Japan for a long time where the keyword is “kumiki” apparently meaning “joining wood together”.


Robomow (click for larger image)

## A robotic lawn-mower

### Robomow

A few years ago my father bought one of Friendly Robotics‘s Robomows. The Robomow is a battery-powered robotic lawn-mower that will happily mow your lawn for you; all you have to do is charge up its battery and ask nicely. My father’s model is the RL550 and not only does it do an excellent job of cutting the grass but it emits a satisfyingly deep humming sound as it proceeds merrily about its work. Watching it work, one gets the feeling this is a mower that really enjoys its work.

### The perimeter wire

Clever though the Robomow may be, it does need a bit of help figuring out exactly what it is that you would like mowed. As far as Robomow is concerned, you are just as likely to want your prized Blue Bell Tunicates cut down to size as you are keen to have your grass cut.

Blue Bell Tunicate (not my picture)

In other words Robomow is absolutely snookered when it comes to finding the edge of your lawn unless you’re willing to lend it a hand. In fact (perhaps as a result of bad experiences in the past, I don’t know) Robomow will stubbornly refuse to cut a single blade of grass until you tell it where the boundary of what you would like cut is. As a result of this stubbornness, you have to lay a perimeter wire around your lawn at a safe distance from flowerbeds, pets and other garden features you think might not appreciate Robomow’s particular brand of attention. It takes a bit of time to lay the perimeter wire but it’s worth it and the good news is that once the perimeter wire is in place you need just one more thing: a perimeter switch.

### The perimeter switch

The people at Friendly Robotics have thought of everything. When they send you your Robomow, not only do they generously include a length of perimeter wire but they also provide you with a small battery-powered perimeter switch. The perimeter switch helps Robomow find the edge of your lawn by sending a characteristic signal round the perimeter wire.

### A broken perimeter switch

Robomow had been taking good care of my parents’ lawn for several years without any trouble at all and the lawn was in good shape:

My parents’ garden on a day of typical Irish weather (click for larger image)

However this Spring when my father was bringing Robomow out of its Winter hibernation, he was dismayed to discover that the perimeter switch appeared to be lifeless. Remembering that I liked to fiddle around with electronics, he asked me to take a look at the switch to see if there was anything I could do for it. Sadly, it was too late. We had to say our goodbyes to the old perimeter switch and order a new one. The new switch arrived in due course and Robomow was back in action. However my curiosity was now piqued: I wanted to know what signal the perimeter switch sends round the wire. I decided I’d try to analyse it a little and to build my own perimeter switch. I hope that a few people might be interested to read a little about what I learned in the course of my investigations.

## The perimeter signal

### First analog measurements

The perimeter wire is a single solid core copper wire with about 1mm diameter. The connector at the base of the perimeter switch has two terminals: one for each end of the perimeter wire. The perimeter switch creates the perimeter signal by placing a periodic potential difference across these two terminals. Robomow has a circuit which is designed to look for the magnetic field generated by the induced current round the perimeter wire. Also, the perimeter switch will not send a signal unless it detects a resistance of at most about 5 ohms between the two terminals (otherwise it reports a broken perimeter wire using one of its three status LEDs and its Piezo buzzer). I measured about 0.7 ohms of resistance around the perimeter wire in my parents’ garden. To measure the signal I thus placed a 1 ohm resistor across the two terminals of the perimeter switch and connected my oscilloscope (Link Instruments’ MSO-19) across the two terminals. As expected I found quite a simple signal. The perimeter signal (described in more detail later) is exactly 48.001ms (milliseconds) long and is divided into three chunks which are approximately:

1. 16.001ms of silence (i.e. no potential difference) with a brief “blip” in the middle
2. 16ms of 8kHz sine
3. 16ms of 8kHz sine, but almost exactly out of phase with the previous chunk
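For concreteness, the three chunks can be synthesised numerically as follows (a sketch: the sample rate and unit amplitude are my own choices, and the blip and the ramp-up described below are omitted):

```python
import math

def perimeter_signal(sample_rate=48000):
    # One ~48ms frame: ~16ms of silence, 16ms of 8kHz sine, then 16ms
    # of the same sine phase-inverted. Amplitude is normalised to 1.
    samples = int(0.016 * sample_rate)  # samples per 16ms chunk
    out = []
    for sign in (0.0, 1.0, -1.0):
        for i in range(samples):
            out.append(sign * math.sin(2 * math.pi * 8000.0 * i / sample_rate))
    return out
```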

The amplitude of the sine waves varies depending on the resistance that is placed across the two terminals but for 1 ohm I measured about a 220mV swing from the minimum to the maximum of the sine waves. Here are some screen-shots from my oscilloscope’s software showing parts of the signal at various frequencies:

Perimeter signal (click for larger image)

The apparent two levels of the sine waves in the above picture are just an aliasing effect caused by the low sample rate. Note the presence of a blip in the silence. We will see the genesis of this later when we examine the digital data.

Perimeter signal (click for larger image)

At a higher sampling rate as in the above picture, the aliasing effect disappears.

Perimeter signal (click for larger image)

At an even higher sampling rate we can see the waveform “charges up” (possibly literally) to its full amplitude over about 5 periods.

Perimeter signal (click for larger image)

Shifting our attention to the transition period between the two 16ms sine wave chunks we can see the signal makes a slightly complex transition and again, the new phase-shifted sine wave seems to charge up over about 5 periods.

As well as examining the signal myself, I searched the web to see if anyone had written anything about the Robomow perimeter signal. The only reference I could find was on this page where on 17 Jun 2005, the user mackenziema wrote:

“I put my perimeter switch on my O’scope over the weekend. It’s a complex waveform that looks like 15ms of off time, followed by 32ms of 8KHz signal. The 8KHz burst is split into two levels. (don’t know why) The waveform of the burst looks like a sin wave with pointy tops and rounded bottoms.”

Encouraged at the sensible looking signal and its similarity to what mackenziema had observed, I decided to press on with my investigations.

### Perimeter switch circuit board

Perhaps the next most natural thing to do is to open up the perimeter switch and have a look at the circuit board and indeed this is what I did. Here are some photos of what I found:

Perimeter switch circuit board front (click for larger image)

Perimeter switch circuit board back (click for larger image)

Note that I took these photos after I had broken off the connector for the perimeter wire (so that I could use it for my own perimeter signal generator). Also the damage to the board near the empty white circle in the front picture is the result of my removing a corroded electrolytic capacitor in order to see if replacing it would fix the broken switch (which it did not).

The circuit on the board is pretty simple but unfortunately it is a little tricky to work out the exact connectivity because all of the solder globules appear to have a layer of insulation covering them. I think this is most likely to be some sort of conformal coating for weather protection. As a result of this I couldn’t use my multimeter to easily determine the connectivity of the various components and since it was pretty difficult to read the traces I decided I would make do without knowing the exact circuit. However it is a big help even just to identify the most important components on the board. These are:

Not knowing the circuit topology, I can only guess at what these components are used for (e.g., as a friend suggested, perhaps the comparator and reference diode are used to decide whether the switch should report its low-battery warning; alternatively they may be used for the broken-wire warning). However it is clear that the PIC microprocessor is in charge. The obvious thing to do then is to take a logic analyser and sniff the data on its pins. (Incidentally there was a sticker labelled PRG2007E on top of the PIC, which I removed; I suspect this identifies the code which the manufacturers burned onto the chip.)

### Digital measurements

So I set about sniffing the data on the PIC pins using my USBee SX logic analyser (which, in contrast to the digital lines on my MSO-19, has a nice big buffer). I was only interested in the PIC’s behaviour when it was continuously sending the perimeter signal, so I set it up to do this across a 1 ohm resistor. I found that the only pins which carry data are RA3, RB3 and RB7 (these pin names are taken from the PIC16C54C data-sheet). Here is a screen-shot from the USBee Suite (the logic analyser software) with some data from these pins which I sniffed at 8MHz:

Data on pins RA3, RB3, RB7 (click for larger image)

The data on pin RB3 is used to flash one of the perimeter switch status LEDs (so the user knows the switch is working) and I suspect this is its only function.
Zooming in we can examine the data on the other pins in more detail:

Data on pins RA3, RB3, RB7 (click for larger image)

and even more detail:

Data on pins RA3, RB3, RB7 (click for larger image)

and elsewhere at the same zoom level:

Data on pins RA3, RB3, RB7 (click for larger image)

Roughly speaking, the digital data on RB7 is periodic with period ~48ms. Each period is constant for ~16ms (with one state change from high to low in the middle) and contains two ~16ms bursts of rectangular waves at 8kHz with opposite duty cycles. All of this is in line with what we observed in the analog perimeter signal, and it seems very likely that the other circuitry on the perimeter switch board passes this digital signal through a low-pass filter and amplifies it to produce the sine wave bursts that we saw above.

However I was keen to make sure I had understood the digital data completely (e.g., that it was truly periodic and did not contain some subtle modulated signal). I wrote a simple script to read through a longish chunk of digital data (about 50 seconds’ worth, about twice as long as it usually takes Robomow to get started) and report a summary of what it found. The data was as expected, but with one mildly surprising (and probably irrelevant) quirk. To describe the digital data let’s introduce a tiny bit of notation. Let:

• L(l) denote l microseconds of low state
• H(h) denote h microseconds of high state
• LH(l, h) denote l microseconds of low state followed by h microseconds of high state (i.e., a rectangular wave that is low then high with h/(h+l) duty-cycle)
• HL(h, l) denote h microseconds of high state followed by l microseconds of low state (i.e., a rectangular wave that is high then low with h/(h+l) duty-cycle)
• The notation xN indicates repeating N times, e.g., L(10)x5 is the same as L(50)

The digital data, then, is almost periodic with period 48.001ms, but two very similar 48.001ms signals alternate. Strictly speaking, therefore, the data has period 96.002ms and the fundamental waveform is:

• H(8004.5), L(7996.5), HL(10, 115)x127, L(125), LH(11, 114)x127, H(125)

followed by

• H(8004.5), L(7996.5), HL(11, 114)x127, L(125), LH(10, 115)x127, H(125)

The above two 48.001ms signals are very similar, except that the duty cycles of the rectangular wave bursts are 8% and 91.2% respectively in the first, and 8.8% and 92% respectively in the second. I don’t know why Friendly Robotics have done this, and I don’t believe it matters, but it is nice to have a truly accurate description of the data. Incidentally, I sampled the data at 16MHz and also captured the 4MHz clock signal so that I could count cycles and be sure all the measurements were correct. It might be helpful to include a schematic picture of the 96.002ms signal:

Summary of data on RB7 (click for larger image)
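As a quick sanity check on the description above, the segment durations really do sum to 48.001ms, and the quoted duty cycles fall straight out of the HL/LH parameters (this is just arithmetic on the numbers in the text):

```python
# Each entry is (duration in microseconds, repetitions); first 48.001ms waveform.
first = [(8004.5, 1), (7996.5, 1), (10 + 115, 127), (125, 1), (11 + 114, 127), (125, 1)]
total_us = sum(t * n for (t, n) in first)
assert total_us == 48001.0  # i.e. 48.001ms, as claimed

# Duty cycles of the two rectangular-wave bursts in the first waveform...
assert 10 / 125.0 == 0.08     # HL(10, 115): 8%
assert 114 / 125.0 == 0.912   # LH(11, 114): 91.2%
# ...and in the second.
assert 11 / 125.0 == 0.088    # HL(11, 114): 8.8%
assert 115 / 125.0 == 0.92    # LH(10, 115): 92%
```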

As I mentioned above, in view of what I measured with my oscilloscope for the analog perimeter signal, it seems very likely that the RB7 pin is being passed through a low-pass filter to generate the sine waves. I’m not terribly knowledgeable about filtering circuitry but I do know Fourier analysis quite well. Thinking of the filter as picking out one of the terms in the Fourier series of a rectangular wave, the complementary duty cycle of the second 16ms burst of rectangular waves in each 48.001ms digital signal would cause the observed half-period phase shift in the analog signal. It would also mean that the alternating 48.001ms digital signals with similar but different duty-cycles would result in an analog signal in which every second 48.001ms burst was very slightly phase shifted relative to the previous one.
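The heart of this Fourier argument is easy to check numerically: swapping a rectangular wave for its complement (i.e. the complementary duty cycle) negates its fundamental Fourier component, which is exactly a half-period phase shift. A small sketch (the sample rate here is just an assumption for the demonstration, and I use the idealised exact complement):

```python
import numpy as np

fs = 1000000            # sample rate for this sketch (an assumption, not measured)
f0 = 8000               # the 8kHz rectangular wave
t = np.arange(fs // f0 * 8) / fs                # eight full periods
wave = ((t * f0) % 1.0 < 0.08).astype(float)    # 8% duty cycle
comp = 1.0 - wave                               # complementary duty cycle (92%)

def fundamental_phase(x):
    spec = np.fft.rfft(x - x.mean())   # drop DC so only the harmonics remain
    k = f0 * len(x) // fs              # FFT bin of the 8kHz fundamental
    return np.angle(spec[k])

dphi = abs(fundamental_phase(wave) - fundamental_phase(comp))
assert abs(dphi - np.pi) < 1e-6        # fundamentals differ by half a period
```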

### Further analog measurements

The measurements of the perimeter signal which I had made using my MSO-19 oscilloscope were useful, but unfortunately the limited buffer size (1000 samples) meant that they were rather short. Ideal would have been a 10MHz oscilloscope, say, with a large buffer, so that I could capture and analyse a large portion of the perimeter signal. Unfortunately oscilloscopes aren’t cheap and I couldn’t quite justify the expense (though I was tempted to buy Bitscope’s BS100), so I had to make do with the equipment to hand. However, based on the measurements I could make with the MSO-19 and on the digital data, it seemed very likely that the highest frequency signals I cared about were at 8kHz. I thus spent a little time taking high frequency (in the range 5-200MHz) snapshots of the small portions of the perimeter signal that I could capture with my MSO-19, to increase my confidence that there were indeed no higher frequencies present. After reasonably extensive searching, the only signals I could find were very tiny circuit “recoils” (as a friend put it) when the rectangular wave in the digital signal was changing state:

Recoil of digital state change visible in analog signal (click for larger image)

Fairly confident, then, that 8kHz was the highest intended frequency, I decided that sampling with my laptop’s sound card at 48kHz (conveniently an integer multiple of 8kHz, thus mitigating aliasing effects) would be a fruitful way to capture a longer sample of the analog perimeter signal. I thus connected the perimeter switch up to the microphone input of my laptop, recorded a minute of data and saved it as a wave file. I was able to examine the data easily using the python wave module as well as the pylab suite of tools, especially matplotlib. Since my sound card is not limited by a small buffer like the MSO-19, I was able to capture a long chunk of the signal at a sampling frequency which I had good reason to believe would give a reasonably faithful representation. Saving a few plots from matplotlib:

A zoomed out view of many copies of the signal sampled at 48KHz (click for larger image)

Two copies of the periodic 96.002ms signal sampled at 48KHz (click for larger image)

One of the 48.001ms signals sampled at 48KHz (click for larger image)

The transition in the middle of one of the 48.001ms signals sampled at 48KHz (click for larger image)

Examining this data in pylab, I found everything as expected (aside from a few mild aliasing effects). It was quite satisfying that a simple laptop sound card, with just a few lines of python, could be turned into a usable low frequency oscilloscope: I could get the same data that I had previously relied on my oscilloscope for, and it was easy to manipulate thanks to the python wave module and pylab. I thought it might be worth making my recording of this signal available, so here is a link to a gzipped wave file containing it.
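For anyone wanting to reproduce this, the wave-module workflow is only a few lines. Here is a minimal sketch (the file name and the test tone are just for illustration) that writes an 8kHz tone and reads it back as a numpy array, which is essentially all my sound-card “oscilloscope” amounted to:

```python
import math, struct, wave
import numpy as np

rate, freq, n = 48000, 8000, 480   # 10ms of an 8kHz tone sampled at 48kHz
samples = [int(16384 * math.sin(2 * math.pi * freq * i / rate)) for i in range(n)]

# Write the samples as a 16-bit mono wave file.
w = wave.open('test_tone.wav', 'wb')
w.setparams((1, 2, rate, n, 'NONE', 'not compressed'))
w.writeframes(struct.pack('%dh' % n, *samples))
w.close()

# Read it back into numpy for plotting/analysis with matplotlib/pylab.
r = wave.open('test_tone.wav', 'rb')
data = np.frombuffer(r.readframes(r.getnframes()), dtype=np.int16)
r.close()
assert len(data) == n and int(data[0]) == samples[0]
```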

## Building a perimeter switch

### A discman as a perimeter switch

Based on all of the above, I felt that I understood the perimeter signal sufficiently well to be able to build my own perimeter switch and so I set about doing this.
Conveniently, the frequencies involved fall well within the audio range, which meant I could use standard audio equipment I had lying around. I decided to use an old discman to play back a signal which I would generate and burn to a CD. The only other thing I needed was a simple amplifier. After searching around a little I found a cheap amplifier chip on eBay, the LM386N-1 (data-sheet). I built a circuit straight from the data-sheet (using the closest resistor/capacitor values I had to hand):

Amplifier circuit

I powered my circuit with a 9V battery, connected the input up to my discman’s headphone jack (through an appropriate resistor) and in place of the speaker shown in the above diagram, I connected up the perimeter cable together with an additional 5 ohms of resistance in series. Using the volume control on the discman as a signal strength controller was very convenient.

My discman perimeter switch! (click for larger image)

### Generating a signal

With the above taken care of, all I needed to do was generate my own perimeter signal and burn it to a CD. I decided to write a simple python script to generate a 44.1kHz-sampled wave file mimicking those aspects of the perimeter signal which I thought mattered.
Here is the python code which I used to generate my signal:

import wave, math, struct

samp_freq = 44.1e3
#samp_freq = 48e3
#samp_freq = 96e3
sig_len = 5 * 60  # Length of signal to generate, in seconds.
blip_len = 1e-4
n_blip_samps = int(blip_len * samp_freq + 0.5)

period = 48001e-6  # Seconds; note extra microsecond (doubt it matters).
n_cycles = int(sig_len / period)

freq = 8e3
on_tm = 16e-3
vol = 16384

data = []
n_samps = int(on_tm * samp_freq)
for i in range(n_cycles):
    # The two 16ms bursts of 8kHz sine, the second phase-inverted.
    for j in range(n_samps):
        data.append(vol * math.sin(2 * math.pi * freq * j / samp_freq))
    for k in range(n_samps):
        data.append(-vol * math.sin(2 * math.pi * freq * (j + k) / samp_freq))
    # Pad with silence (containing the blip) up to the start of the next period.
    T = len(data) / samp_freq
    N = int(T / period)
    t = (N + 1) * period - T
    n = int(t * samp_freq)
    k = 0
    for j in range((n - n_blip_samps) // 2):
        data.append(0)
        k += 1
    for j in range(n_blip_samps):
        data.append(vol * math.sin(math.pi * j / (n_blip_samps - 1))) # Bit of a guess and probably not necessary.
        k += 1
    while k < n:
        data.append(0)
        k += 1

f_data = wave.open('%d_perimeter_signal.wav' % samp_freq, 'w')
f_data.setparams((1, 2, int(samp_freq), int(samp_freq), 'NONE', 'not compressed'))
f_data.writeframes(struct.pack('%dh' % len(data), *[int(x) for x in data]))
f_data.close()



### Results

#### Partial success

Let me begin by telling the story of a successful attempt to get Robomow to obey my discman.

I gathered my amplifier circuit, discman, signal CD etc. and went round to my parents’ house. There I set up my equipment, pressed play on my discman and turned up the volume till my oscilloscope (which I had also connected up to the perimeter wire) indicated appropriate signal strength. I then turned on Robomow and asked it to mow. To my absolute delight it complied! It began mowing, reached the perimeter cable at the edge of the lawn, turned correctly and continued mowing. After another pass across the lawn, it reached another edge and turned for a second time. At this point I asked my brother who was present to press pause on the discman. Instantly Robomow stopped and complained that the signal had disappeared. This was all great news. My python script’s CD track was controlling the mower. Success!

#### A calibration problem?

Although I was able to get Robomow to obey my signal, unfortunately I could not get it to do so reliably. For the vast majority of my attempts, Robomow indicated my signal was present on its signal strength indicator, but when asked to mow it would think about it for a long time before eventually refusing. Unfortunately, without taking Robomow apart (which I don’t want to do), very little information is available from each attempt. It is a little disappointing not to have Robomow obey my perimeter signal reliably, and I dare say I could overcome this final hurdle, but having already spent a lot more time than I ever wanted to on this project, I decided it was time to write up and stop. My partial success was still fairly satisfying!
The following paragraph about Robomow’s calibration process seems relevant here:

“The Robomow uses a sophisticated navigation system that utilizes an on-board compass type device, which responds to the magnetic poles of the earth. Magnetic North can vary from one point on the earth to another based on geography. In order to accommodate this variance, it is necessary to calibrate the compass device to the area of the earth where Robomow is being used. This is a one-time process and need not be repeated unless Robomow is moved several hundred miles from its’ [sic] present location.”

The bit about the earth’s magnetic field is interesting but, I think, not relevant. However, the above paragraph makes it clear that Robomow permanently (or at least until the next calibration) stores data about the magnetic field of the signal it measured during calibration. It seems plausible to me that Robomow is not so keen on my signal because it is calibrated to the Friendly Robotics perimeter switch signal, to which my father calibrated it years ago. It may thus be expecting some of the (presumably unintended) artifacts mentioned above which I have not included in my cleaner signal. I would expect that if I were willing to recalibrate Robomow with my signal turned on then it would reliably obey mine. However I don’t want to do this for fear of not being able to successfully recalibrate Robomow back to the perimeter switch afterwards. (I expect I would be able to, but Robomow is expensive and I’d rather not take the risk since it’s working well now.) I’d be interested to hear any opinions from readers. One final thing to note here is that I did not succeed in getting Robomow to obey an amplified recording of the true perimeter switch signal, sampled at 44.1kHz and played through my discman. This suggests that if Robomow is calibrated to artifacts then it is sensitive to some with reasonably high frequency, or that there are artifacts generated by my LM386N-1 amplifier which it doesn’t like (though I haven’t seen these on my oscilloscope).

#### A final thought

Shortly after posting the above I had one final thought. If we have a periodic signal of period T then we can take any interval of length T as a fundamental domain to represent the signal, but for some signals some choices are more natural than others. In particular, I may not have broken up the digital signal in the most natural place. I think it would be more natural to break the 96.002ms signal into two 48.001ms chunks whose 8kHz bursts have complementary duty-cycles. We can do this if we break down these chunks as:

• 16ms 8KHz oscillation
• 16.001ms silence (with blip)
• 16ms 8KHz oscillation

rather than putting the silence first as I have above. My schematic picture of the digital signal in this case is then:

Alternate summary of data on RB7 (click for larger image)

Looking again at the analog signals I captured with my soundcard, it looks like there is a simple alternating amplitude modulation taking place between the 48.001ms chunks. I had noticed this before but not taken it very seriously until I saw everything at once while writing this post. Perhaps this is the extra detail I should include to make my perimeter signal more reliable?

## The Telly Terminator

### The Telly Terminator

I received an interesting gift recently. It is a universal IR remote control which can be used to turn a large number of televisions/VCRs etc. on or off, and it calls itself the Telly Terminator (a name strongly suggesting that the “off” function, rather than the “on”, is the intended use of this little device).

Front and back side of the Telly Terminator (click for larger image)

The Telly Terminator is certainly not the only such product on the market. As far as I can tell the original is the TV-B-Gone, which is even available in a hacker-friendly kit from Adafruit and has the honour of having its own tag on hackaday. (Furthermore, as far as I can see, the Telly Terminator has simply ripped its data from the TV-B-Gone, and I can’t help wondering if the European Database Directive might have something to say about this.)

Like Mitch Altman, the inventor of the TV-B-Gone, I find it too easy to waste time with a television and so I don’t own one. For this reason I think the Telly Terminator was an apt gift and it works well. Also, having recently taken up a bit of a hobby in electronics, the Telly Terminator provided me with an excellent opportunity to “teardown” a very simple electronic toy. I wanted to know what it was doing and find out if its creators had simply ripped the TV-B-Gone data.

### The USBee SX logic analyser

Several months ago I bought CWAV‘s USBee SX logic analyser. I had had some trouble getting Telit‘s GM862 GSM module and Maxstream/Digi‘s XBee module to communicate (when I was working on this project) and I bought the logic analyser to help solve the problem. As it turned out, I solved the problem without the logic analyser, and so my USBee ended up being a bit of a solution waiting for a problem. This was another reason I was eager to have a poke around inside the Telly Terminator.

The USBee SX

Incidentally, although I like both the USBee SX and its accompanying USBee Suite, I can’t resist mentioning the alternative Saleae logic analyzer. I found it hard to decide which of these two logic analyzers to buy and I was tempted to buy the Saleae analyzer simply because the inventor writes an interesting blog discussing his experiences creating and selling it. In the end I opted for the USBee SX because I was more confident of software support going forward and because it can also act as a signal generator which, as far as I could tell, the Saleae Logic cannot.

### Inside the Telly Terminator

Inside the Telly Terminator there is a single simple circuit board:

Inside the Telly Terminator (click for larger image)

The board runs off a CR2032 3V cell battery (smoothed by a large 47uF electrolytic capacitor). There are 3 LEDs: an IR LED which is used to transmit the on/off signals, a small red LED which flashes while the IR LED is transmitting (as a signal to the user) and a white (phosphor based, I think) LED which enables the user of the Telly Terminator to pass it off as a simple torch in the event that he/she is foolish enough to turn off the wrong television. The brain of the operation is a Helios H5A02HP chip (in a 16 pin DIP package). A bit of poking around with a multimeter reveals the circuit details:

Telly Terminator schematic (click for larger image)

We see that pin 10 controls the IR LED (and so is the pin we are most interested in). Pin 16 controls the small red LED and pin 1 is connected to the switch K1. The part of the circuit connected to pin 15 puzzled me a little. The transistor Q2 will be on at power-on while the capacitor C5 is empty, and so pin 15 will be grounded. However, once C5 is charged enough, Q2 switches off and never switches on again (indeed C5 will continue to charge through R6 even after Q2 switches off, and so the voltage at the base of Q2 will drop well below the base-emitter junction voltage). After Q2 switches off, C4 will charge until it reaches the voltage at pin 15. This much is obvious, but the question I asked myself was: why? I showed this to a friend who has more experience with these things and he suggested the plausible explanation that pin 15 is the reset pin, that it is active low and internally pulled up, and that it is desirable to hold reset for a brief while at power-on until things have stabilised. So my best guess is that pin 15 is the reset pin. In any case it doesn’t matter much, and the rest of the circuit is straightforward.
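To put the power-on-reset guess in numbers: the time for which reset is held is on the order of the RC time constant of the network feeding Q2’s base. The component values below are purely hypothetical (I couldn’t read the actual values of R6 and C5), but they show the shape of the calculation:

```python
import math

R = 100e3    # ohms (hypothetical value for R6)
C = 1e-6     # farads (hypothetical value for C5)
tau = R * C  # time constant: 0.1s with these values

def v_cap(t, v_supply=3.0):
    # Voltage on the capacitor t seconds after power-on from a 3V cell.
    return v_supply * (1 - math.exp(-t / tau))

# After one time constant the capacitor sits at ~63% of the supply, by which
# point Q2 would have switched off and reset would be released.
assert abs(v_cap(tau) / 3.0 - 0.632) < 0.001
```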

Unfortunately Helios do not currently make the datasheet for their H5A02HP chip generally available [UPDATE: see link in comments below] and did not respond to an email I sent them asking for it. For this reason I was unable to probe the chip in as much detail as I would have liked. All that I could get my hands on was the following block diagram:

H5A02HP block diagram

Incidentally, Helios market the H5A02HP as a simple speech synthesis chip, so it’s interesting to see it chosen for the Telly Terminator (presumably because it was cost effective). It would be interesting to know exactly what the chip thinks it’s doing when sending the on/off signals.

I used my USBee SX to probe the chip in standard fashion by connecting its various digital lines to the pins of the H5A02HP, triggering off several different events (including the state of the IR LED). According to my USBee SX, only pins 1, 10 and 16 change state. Pin 10 sends out the on/off signals and carries the data I am most interested in. Pin 1 is grounded when the push switch is pressed, and pin 16 sends a signal at ~2Hz which causes the small red LED to flash (oddly, though, this signal is not completely regular). I would really like to investigate using an oscilloscope but unfortunately I do not own one (yet).

Screenshot from USBee Suite showing data on pins 1, 10, 16, shown as digital 0, 1, 2 respectively (click for larger image)

If you count the orange groups of spikes in the above screenshot from USBee Suite (digital 1, i.e. pin 10), you will find that there are 46. These are the 46 on/off signals that the Telly Terminator sends out when it is triggered. They are all equally spaced, approximately 390ms apart. To get a better look at the data, we need to zoom in. Zooming in on just the first spike in the above diagram, we can see this on/off signal in more detail (evidently, it is sent twice).

On/off signal 1 (click for larger image)

Zooming in further on the first two pulses in the above diagram we can see that they are themselves a series of high/low states, i.e. we can see the carrier signal.

Signal 1 carrier (click for larger image)

Finally we zoom in on the carrier signal and see that it is a simple repeated square wave at ~38kHz with a 50% duty cycle.

Signal 1 carrier close-up (click for larger image)

Having sniffed the IR data (i.e. the data sent out on pin 10), the next step is to try to decode it, at least partially. Before doing so, I thought it might be useful to carry out the same exercise for an on/off signal from an IR remote control whose protocol specification is available.

### An RC6 remote control

My laptop is an HP Pavilion dv8000 and it came with a remote control which uses an extension of the Philips RC6 protocol. In this case, the chip running the show is Renesas Technology‘s M34283G2 (clocked using a 4MHz (4.00L) ceramic oscillator). As far as I can tell, the protocol the remote control uses is RC6 mode 6 with 32 data bits, which seems to be referred to both as RC6-6-32 and as MCE (a variation pioneered by Microsoft, I think).

RC6-6-32 on/off data (click for larger image)

The RC6 protocol has a carrier frequency of 36kHz and uses bursts of 16 periods to transmit data. The basic timing unit is thus 16/(36kHz) = 444.44(4)us and bits are encoded using Manchester encoding. The signal contains the following fields which I have marked in corresponding colours in the above screen shot from USBee Suite:

• 1 start bit
• 3 mode bits
• 1 trailer bit
• 32 data bits

The leader pulse and the start bit should always be as above. We read the data in the mode bits by remembering the Manchester encoding and find that the data is 110 in binary, i.e. 6 in decimal. This is expressed by saying that we have an instance of RC6 in mode 6. The trailer bit is twice as long as a normal bit and contains the value 0. Finally this leaves the data bits, of which there are 32. In the above case these contain the data 1000 0000 0001 0001 0000 0100 0000 1100 in binary, i.e. 0x8011040C in hex. Presumably these 32 bits of information can be broken down further into address/control/information fields etc. but I’m happy to leave things at this point. I can at least confirm that my laptop understands what 0x8011040C means, since I accidentally switched it off with the remote the first time I tried to sniff this signal using my USBee.
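The figures above are easy to check in a couple of lines, and a toy Manchester decoder makes the mode-bit reading concrete (the ‘10’→1, ‘01’→0 half-bit convention below is how I read the encoding off the screenshot):

```python
# Basic timing unit: 16 carrier periods at 36kHz, in microseconds.
unit_us = 16 / 36e3 * 1e6
assert abs(unit_us - 444.444) < 0.001

# The 32 data bits read off the screenshot really are 0x8011040C.
bits = '1000 0000 0001 0001 0000 0100 0000 1100'.replace(' ', '')
assert int(bits, 2) == 0x8011040C

def manchester_decode(halfbits):
    # Each bit is a pair of half-bits: '10' -> 1, '01' -> 0.
    pairs = [halfbits[i:i + 2] for i in range(0, len(halfbits), 2)]
    return [{'10': 1, '01': 0}[p] for p in pairs]

# The mode bits 110 appear on the wire as the half-bit sequence 10 10 01.
assert manchester_decode('101001') == [1, 1, 0]
```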

It is pretty easy to decode the above data for my laptop remote by hand, simply because there is so little of it. However I also wanted to decode the signals sent by the Telly Terminator, so I wrote a quick python script to help analyse the data. The script reads the sampled data generated by the USBee, separates out the 46 different on/off signals and for each calculates the carrier frequency, the total signal length, the carrier duty cycle and the timing data for the pulses of carrier signal. For example, the script generates the following using the data from my laptop remote control.

Signal 0 : 36.17kHz, 36.921ms, 35%
On time (periods)	Off time (periods)
2.654ms (96.0)		0.885ms (32.0)
0.442ms (16.0)		0.443ms (16.0)
0.442ms (16.0)		0.443ms (16.0)
0.442ms (16.0)		0.889ms (32.1)
0.443ms (16.0)		0.881ms (31.9)
1.327ms (48.0)		0.889ms (32.1)
0.443ms (16.0)		0.450ms (16.3)
0.443ms (16.0)		0.450ms (16.3)
0.442ms (16.0)		0.451ms (16.3)
0.442ms (16.0)		0.451ms (16.3)
0.442ms (16.0)		0.451ms (16.3)
0.442ms (16.0)		0.451ms (16.3)
0.442ms (16.0)		0.450ms (16.3)
0.443ms (16.0)		0.450ms (16.3)
0.443ms (16.0)		0.450ms (16.3)
0.885ms (32.0)		0.893ms (32.3)
0.443ms (16.0)		0.450ms (16.3)
0.443ms (16.0)		0.450ms (16.3)
0.885ms (32.0)		0.893ms (32.3)
0.442ms (16.0)		0.451ms (16.3)
0.442ms (16.0)		0.451ms (16.3)
0.442ms (16.0)		0.451ms (16.3)
0.442ms (16.0)		0.451ms (16.3)
0.885ms (32.0)		0.892ms (32.3)
0.443ms (16.0)		0.450ms (16.3)
0.443ms (16.0)		0.450ms (16.3)
0.443ms (16.0)		0.450ms (16.3)
0.443ms (16.0)		0.450ms (16.3)
0.443ms (16.0)		0.450ms (16.3)
0.885ms (32.0)		0.446ms (16.1)
0.443ms (16.0)		0.889ms (32.1)
0.442ms (16.0)		0.451ms (16.3)
0.443ms (16.0)


The script has thus estimated the carrier frequency as 36.17kHz, the total signal length as 36.921ms and the duty cycle of the carrier as 35%. It has also calculated that the first burst of carrier marking lasts for 2.654ms, which is 96.0 periods of the carrier frequency; this is followed by 0.885ms of space, which is 32.0 periods of the carrier frequency, and so on.
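The “(periods)” column is just each burst duration multiplied by the estimated carrier frequency; for instance, for the first row of the table:

```python
freq_hz = 36.17e3   # carrier frequency estimated by the script
on_ms = 2.654       # duration of the leader burst of carrier marking
periods = on_ms * 1e-3 * freq_hz
assert abs(periods - 96.0) < 0.1  # the 96.0 in the first row
```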

### Python sourcecode

Although it’s not exactly a work of art (to say the least), I thought I might as well make the source code of the decoding script available.

import sys, csv

SPACE = 0
MARK = 1

class StreamWithPush:
    def __init__(self, data):
        self.data = data
        self.buffer = []
    def __iter__(self):
        return self
    def next(self):
        if len(self.buffer) == 0:
            return self.data.next()
        else:
            x = self.buffer[-1]
            del self.buffer[-1]
            return x
    def push(self, x):
        self.buffer.append(x)
    def push_l(self, l):
        self.buffer.extend(l)

class Signal:
    def __init__(self, frequency, duty_cycle, data):
        self.frequency = frequency
        self.duty_cycle = duty_cycle
        self.data = data
    def __str__(self):
        s = 'On time (periods)\tOff time (periods)\n'
        tm = 0
        assert len(self.data) % 2 == 1, 'Uh oh.'
        for i in range(len(self.data) / 2):
            s += '%.3fms (%.1f)\t\t%.3fms (%.1f)\n' % (self.data[2*i][1] * 1000, self.data[2*i][1] * self.frequency,
                                                       self.data[2*i+1][1] * 1000, self.data[2*i+1][1] * self.frequency)
            tm += self.data[2*i][1] + self.data[2*i+1][1]
        s += '%.3fms (%.1f)' % (self.data[-1][1] * 1000, self.data[-1][1] * self.frequency)
        return ('%.2fkHz, %.3fms, %d%%\n' % (self.frequency * 0.001, tm * 1000, int(self.duty_cycle * 100))) + s

def get_freq_dcycle(mark_data, sample_tm_ns):
    # TODO Check mark_data is consistent, i.e. all same frequency and duty cycle (or even same shape).
    tot = on = 0
    for (h, l) in mark_data:
        tot += h + l
        on += h
    return (len(mark_data) / (tot * sample_tm_ns * 1e-9), float(on) / tot)

def get_signals(pattern, sample_tm_ns):
    NEW_SIG_TM = 0.38 # I measure a gap of ~391ms between signals. TODO learn this from the data.
    signals = []
    i_pattern = iter(pattern)
    done = False
    while not done:
        signal_data = []
        mark_data = []
        l_mark = 0
        while True:
            try:
                packet = i_pattern.next()
            except StopIteration:
                signal_data.append((MARK, l_mark * sample_tm_ns * 1e-9))
                done = True
                break
            if packet[0] == SPACE:
                signal_data.append((MARK, l_mark * sample_tm_ns * 1e-9))
                l_mark = 0
                if packet[1] * sample_tm_ns * 1e-9 >= NEW_SIG_TM: # End of this signal.
                    break
                else:
                    signal_data.append((SPACE, packet[1] * sample_tm_ns * 1e-9))
            else: # packet[0] == MARK
                l_mark += packet[1] + packet[2]
                mark_data.append((packet[1], packet[2]))
        (frequency, duty_cycle) = get_freq_dcycle(mark_data, sample_tm_ns)
        signals.append(Signal(frequency, duty_cycle, signal_data))
    return signals

def markate(csv_data, channel):
    pattern = []
    marking = False
    l_not_marking = l_last_period = 0
    slippage = 15 # samples.
    sample_tm_ns = 333.333333333 # nanoseconds between samples. TODO learn this from data and verify all samples consistent.
    for row in csv_data:
        (tm, v) = (int(''.join(row[0].split('.'))), row[channel])
        if not marking:
            if v == '0':
                l_not_marking += 1
            else:
                marking = True
                pattern.append((SPACE, l_not_marking))
                l_not_marking = l_last_period = 0
                csv_data.push(row)
        else: # marking == True
            if v == '1':
                for (n_ones, row) in enumerate(csv_data):
                    if row[channel] != '1':
                        break
                n_ones += 1
                buffer = []
                for (n_zeros, row) in enumerate(csv_data):
                    if row[channel] != '0':
                        csv_data.push(row)
                        break
                    if l_last_period > 0:
                        d = n_ones + n_zeros + 2 - l_last_period
                        if d > 0:
                            buffer.append(row)
                            if d > slippage:
                                csv_data.push_l(buffer)
                                n_zeros -= slippage
                                marking = False
                                break
                n_zeros += 1
                pattern.append((MARK, n_ones, n_zeros))
                l_last_period = n_ones + n_zeros
            else: # v == '0'
                sys.stderr.write('Uh oh\n') # Shouldn't happen and causes us to lose data.
                marking = False
                csv_data.push(row)
    return (sample_tm_ns, pattern[1:]) # Exclude first element which is just (SPACE, 0).

def main(argv=None):
    if argv is None:
        argv = sys.argv
    if len(argv) != 3:
        sys.stderr.write('Usage: %s <usbee file.csv> <channel number>\n' % argv[0])
        sys.exit(1)

    # Wrap the exported CSV rows in a stream which supports push-back.
    csv_data = StreamWithPush(csv.reader(open(argv[1], 'rb')))

    (sample_tm_ns, pattern) = markate(csv_data, int(argv[2]) + 1)
    signals = get_signals(pattern, sample_tm_ns)
    for (i, sig) in enumerate(signals):
        print 'Signal %d : %s\n\n' % (i, sig)

if __name__ == '__main__':
    sys.exit(main())


### The Telly Terminator data

Using the above script, I decoded the data for the 46 separate on/off signals which the Telly Terminator sends. The results are here. Unsurprisingly, the Telly Terminator data looks like a rip of the North American TV-B-Gone data (rather than the European TV-B-Gone data). I spot checked the first five signals sent by the Telly Terminator and found that they match those sent by the North American TV-B-Gone, in the same order and with the same repetitions. Not quite all signals match (indeed the TV-B-Gone sends 56 signals, 10 more than the Telly Terminator), but this is presumably because the Telly Terminator data was ripped from an earlier version of the TV-B-Gone data.

Finally I thought I’d also mention that USBee Suite can be downloaded for free and so I might as well make the session file with my Telly Terminator data and my laptop’s remote control data available for download.