
Thursday, 2 May 2013

The Dynamics of Catching Exceptions in Python

In which I discuss dynamism in catching exceptions - something which took me by surprise and could hide bugs - or enable fun...

The Problem Code

The following code - abstracted just slightly(!) from production code - looks perfectly good. It's calling a function to get some statistics and then process them in some way. Getting the values in the first place involves a socket connection, which could fail with a socket error. Since statistics aren't vital to the running of the system, we simply log the error and move on.

(Note I'm using doctest to check this article - this is representative of a script doing real things!)

>>> def get_stats():
...     pass
...
>>> def do_something_with_stats(stats):
...     pass
...
>>> try:
...     stats = get_stats()
... except socket.error:
...     logging.warning("Can't get statistics")
... else:
...     do_something_with_stats(stats)

The Find

Our tests didn't find anything wrong, but actually paying some attention to our static analysis reports showed a problem:

$ flake8 filename.py
filename.py:351:1: F821 undefined name 'socket'
filename.py:352:1: F821 undefined name 'logging'

The problem with the code was that the socket and logging modules weren't imported in the module - and we clearly weren't testing for that case. What surprised me was that this didn't cause a NameError up front - I had assumed that exception clauses would have some eager name lookup - after all, if it needs to catch these exceptions, it needs to know what they are!

It turns out not so much - except clause lookups are done lazily, only evaluated if an exception is raised. Not only are the name lookups lazy, but the 'argument' of an except statement can be any arbitrary expression.
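
A minimal illustration: the following runs without complaint, despite the undefined name in the except clause, because the handler's 'argument' is never evaluated:

>>> try:
...     pass
... except ThisNameDoesNotExist:
...     pass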

This can be good, bad, or just downright ugly.

The Good

Exception specifications can be passed around like any other values. This allows dynamic specification of the exceptions to be caught.

>>> def do_something():
...    blob
...
>>> def attempt(action, ignore_spec):
...     try:
...         action()
...     except ignore_spec:
...         pass
...
>>> attempt(do_something, ignore_spec=(NameError, TypeError))
>>> attempt(do_something, ignore_spec=TypeError)
Traceback (most recent call last):
  ...
NameError: global name 'blob' is not defined

The Bad

The downside of this dynamism is that mistakes in exception specifications often won't be noticed until it's too late - when an exception is actually raised. Exceptions typically guard against rare events (failure to open a file for writing, for example), so unless there is a test for that specific case, a broken spec won't be noticed until some exception - any exception - triggers the handler. At that point the spec is finally evaluated, and can raise an error all of its own - typically a NameError.

>>> def do_something():
...     return 1, 2
...
>>> try:
...     a, b = do_something()
... except ValuError:  # oops - someone can't type
...     print("Oops")
... else:
...     print("OK!")   # we are 'ok' until do_something returns a triple...
OK!

The Ugly

>>> try:
...    TypeError = ZeroDivisionError  # now why would we do this...?!
...    1 / 0
... except TypeError:
...    print("Caught!")
... else:
...    print("ok")
...
Caught!

The exception specification needn't just be a name lookup - arbitrary expressions also work:

>>> try:
...     1 / 0
... except eval(''.join('Zero Division Error'.split())):
...     print("Caught!")
... else:
...     print("ok")
...
Caught!

Not only can the exception spec be decided at run-time, it can even use the active exception's information. The following is a convoluted way to always catch the exception which is being raised - but nothing else:

>>> import sys
>>> def current_exc_type():
...     return sys.exc_info()[0]
...
>>> try:
...     blob
... except current_exc_type():
...     print ("Got you!")
...
Got you!

Clearly this is what we are _really_ looking for when we write exception handlers, and this should immediately be suggested as best practice :-p

The (Byte) Code

To confirm that exception handling really works this way, I ran dis.dis() on an exception example. (Note the disassembly here is under Python 2.7 - different byte code is produced under Python 3.3, but it's basically similar):

>>> import dis
>>> def x():
...     try:
...         pass
...     except Blobbity:
...         print("bad")
...     else:
...         print("good")
...
>>> dis.dis(x)  # doctest: +NORMALIZE_WHITESPACE
  2           0 SETUP_EXCEPT             4 (to 7)
<BLANKLINE>
  3           3 POP_BLOCK
              4 JUMP_FORWARD            22 (to 29)
<BLANKLINE>
  4     >>    7 DUP_TOP
              8 LOAD_GLOBAL              0 (Blobbity)
             11 COMPARE_OP              10 (exception match)
             14 POP_JUMP_IF_FALSE       28
             17 POP_TOP
             18 POP_TOP
             19 POP_TOP
<BLANKLINE>
  5          20 LOAD_CONST               1 ('bad')
             23 PRINT_ITEM
             24 PRINT_NEWLINE
             25 JUMP_FORWARD             6 (to 34)
        >>   28 END_FINALLY
<BLANKLINE>
  7     >>   29 LOAD_CONST               2 ('good')
             32 PRINT_ITEM
             33 PRINT_NEWLINE
        >>   34 LOAD_CONST               0 (None)
             37 RETURN_VALUE

This shows the 'issue' with my original expectations. Exception handling is done exactly as it 'looks' in the Python source itself. The setup doesn't need to know anything about the subsequent 'catching' clauses, and they will be completely ignored if no exception is raised. SETUP_EXCEPT doesn't care what happens, just that if there is an exception, the first handler should be evaluated, and then the second, and so on.

Each handler has two parts: getting an exception spec, and comparing it to the just-raised exception. Everything is lazy, and everything appears exactly as you might expect from just looking at the code line-by-line, thinking about things from the point of view of a naive interpreter. Nothing clever happens, and that's what suddenly makes it seem very clever.

Summary

The dynamism of exception specs caught me by surprise slightly, but it has some interesting applications. Of course actually implementing many of those would probably be a bad idea ;-)

It isn't always intuitive how much dynamism certain Python features support - for example it isn't obvious that both expressions and statements are happily accepted directly in class scope (rather than function, method, or global scope), but not everything is so flexible. Although (I think) it would be nice, expressions are forbidden when applying decorators - the following is a syntax error in Python:

@(lambda fn: fn)
def x():
   pass

Here's a final example of playing with dynamic exception specifications to only propagate the first exception of a given type, silently swallowing repeated exceptions:

>>> class Pushover(object):
...     exc_spec = set()
...
...     def attempt(self, action):
...         try:
...             return action()
...         except tuple(self.exc_spec):
...             pass
...         except BaseException as e:
...             self.exc_spec.add(e.__class__)
...             raise
...
>>> pushover = Pushover()
>>>
>>> for _ in range(4):
...     try:
...         pushover.attempt(lambda: 1 / 0)
...     except:
...         print ("Boo")
...     else:
...         print ("Yay!")
Boo
Yay!
Yay!
Yay!

Thursday, 3 January 2013

New pylibftdi release - 0.11

'pylibftdi' is a library for talking to FTDI devices via the libftdi library. FTDI make a wide range of chipsets and modules for interfacing to a number of protocols via USB, including 8-bit parallel and RS232 serial modes. They're a great way of interfacing to other electronics from your computer.

I've just released pylibftdi 0.11. I'm at the point where I'm looking at getting to RC and then stable status, which I'll release as 1.0 - at which point the API will be considered stable. While it isn't yet, I've taken the opportunity to tidy a couple of things, as well as add some improvements.

Raspberry Pi support; better documentation

Though it worked previously, I've taken the opportunity to test it a bit on Raspberry Pi, and I've updated the docs describing udev rules which allow access to the devices without needing sudo / root access. I think this is now a good option if you want a bidirectional 8 bit port for your Raspberry Pi, and it certainly carries a lower risk of damaging your Pi than using the GPIO pins directly.

BitBangDevice changes

The new latch property

BitBangDevices provide a simple abstraction of a parallel IO device: a 'direction' property which controls whether each line is an input or an output, and a 'port' property for the actual reads and writes. This is based on systems going all the way back to the BBC Micro user port and earlier. direction maps to the 'Data Direction Register' of the Beeb, the 'TRISx' register of the Microchip PIC, and so on. port maps to the 'data' register of the Beeb, or the PORTx register of the Microchip PIC. Just as the PIC18F series introduced the 'LATx' register, so too does pylibftdi 0.11 introduce the latch property. Read the documentation for more information - in most cases you simply don't need to care about this.
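
As a rough sketch of how the three properties relate (hedged - the direction mask and pin choice here are arbitrary, and in most uses you can ignore latch entirely):

from pylibftdi import BitBangDevice

bb = BitBangDevice()
bb.direction = 0x0F    # D0-D3 outputs, D4-D7 inputs
bb.port |= 1           # read-modify-write: set D0 high, leave the rest
written = bb.latch     # the last value written to the output pins
actual = bb.port       # the actual pin state, read back from the device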

Initialisation

If a physical FTDI device is not reset between program runs, it retains its output register state; a pin set high in one run of the program would still be high when the device was opened in a subsequent run. Prior to pylibftdi v0.11 this was not taken into account, and all output pins were assumed to be in the reset state, i.e. all low. This meant that operations such as read-modify-write on port bits would not reflect the device's actual state, since such operations read not from the output pins themselves, but from the internal view of what the output values were set to.

With the change, the following will work as expected:

$ python
>>> from pylibftdi import BitBangDevice
>>> bb = BitBangDevice()
>>> bb.port |= 1
>>> ^D
$ python
>>> from pylibftdi import BitBangDevice
>>> bb = BitBangDevice()
>>> bb.port |= 2
>>> ^D

Previously, the final state of the device pins would have been '2'; the read-modify-write implied by |= 2 would have used '0' as its 'read' source, and have output '2'. The new code initialises the internal latch state to the value read from the pins (it's possible to read the actual state of output pins as well as input pins). With the latest version, the final state of the pins after the above will be '3' - both D0 and D1 set high.

API changes

I've always said in the README for pylibftdi that the API won't be stable until version 1.0, and I've changed two parameters only introduced in 0.10.x to have clearer names.

The following two parameters to the Device constructor have changed name:

interface -> interface_select
I considered interface too generic and unintuitive here. The values and behaviour for this parameter (which selects which interface to use on a multi-interface device) haven't changed.
buffer_size -> chunk_size
This is the maximum number of bytes which will be written / read at a time in read/write calls to the libftdi library, designed to ensure we are regularly executing at least some Python byte code, which we can then interrupt (timely Ctrl-C interruption is the primary use-case for this parameter). It was never about buffering, so I've changed the name to reflect this.
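
For illustration, both renamed parameters in use (the values here are arbitrary):

from pylibftdi import Device

dev = Device(interface_select=2,  # select the second interface on a multi-interface device
             chunk_size=4096)     # read/write at most 4096 bytes per libftdi call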

Other changes

The bit_server example now works properly; this can be run as:

$ python -m pylibftdi.examples.bit_server

and will start a basic CGI-based web server, open a web-browser talking to it (on port 8008 by default), and allow you to control the state of each of the 8 output lines on the connected device (which it sets to async bit-bang mode).

This will be further developed in the future - it looks somewhat rough right now :)

The led_flash example has also gained a feature in taking a command line argument of the rate at which to flash - defaulting to 1 Hz. To cause an LED (or a piezo buzzer works just as well - and more annoyingly!) to flash at 10Hz, run:

$ python -m pylibftdi.examples.led_flash 10

Coming next

I'm still trying to improve test coverage. I spent some time trying to port the tests to the Mock library, though my efforts at effectively patching at the ctypes DLL level weren't very successful.

Documentation continues, and thanks to the wonderful readthedocs.org, the documentation isn't necessarily tied to the more sedate release cycle - it always shows the latest version from Bitbucket. If more API changes happen this could be counter-productive, but I'll try really hard to note if this is the case, and it makes things much nicer when updating things like installation instructions (which I have done, adding tested udev rules instructions etc).

libftdi 1.0 is just going through release candidate stage at the moment, so I'll test against that. I expect only the installation docs will need changes.

I've never tested pylibftdi on Windows, and I'm keen to do this in the near future, though I don't have regular access to a Windows machine, so no guarantees about this. I suspect it all 'just works'...

Monday, 24 September 2012

Project Naming in a Google World

I’m a great fan of Python; not only do I think the language itself is clean and readable, the community polite and helpful, and the ecosystem diverse and fascinating, but also the Zen of Python resonates with me.

I think there is significant value in that ‘there should be one - and preferably only one - obvious way to do it’, and that ‘namespaces are one honking great idea’. To me, it is sad that this essence of Python philosophy isn’t applied more widely.

Of course there is an element of tension in the Zen - Namespaces are about nesting, but ‘Flat is better than nested’. Nevertheless, flat within namespaces isn’t the same as not having any namespaces at all.

Namespaces don’t exist in a Google world.

I bet that most project name searches on Google are a single word. ‘jquery’ would get me what I want. ‘requests’ gets me what I want. Even one of my own projects - ‘pylibftdi’ - gets me where I want to go. Getting to this point is probably part of choosing a good name. But that’s exactly the problem: how do I choose a good name for my new project? It’s one thing already knowing what project I’m interested in and simply using Google to get me there (sometimes a language name qualifier helps, e.g. ‘python flask’); it’s quite another to solve two other problems: a) searching for a project to meet a given problem, not knowing what might be available, and b) searching for a project name I can use for my shiny new thing.

Searchable Project Names

One of the technologies I use the most at work is SSH. I tend to use it mostly in a fairly crude way, via its normal client and server programs ssh and sshd with many configuration options, but I have used the paramiko library. Which works well, and has a great name - easily remembered, especially after reading about its etymology on the project page. And very easily searchable. Recently, however, its development has slowed. I read in some places that it is now ‘deprecated’, but I’m not sure about that - the github project was last updated 11 days ago as of now… Anyhow, recently it has been forked, and its ‘successor’ has the brilliant name of… wait for it… ‘ssh’. Yes, brilliant. No, actually, it isn’t that helpful. Search for ‘ssh’, and it obviously won’t be there, straightaway, on the first page. Search for ‘python ssh’, and it still won’t be there. I guess it might be in a few months or years once it (presumably) takes off as the ‘one way to do it’, but now? It’s not helpful. Maybe it’s only aimed at people who use the PyPI search engine? And even if / when it is ‘obvious’, it’s still going to be a pain to do web searches for problems relating to use of the package. If I want to know which to use, then ‘paramiko vs ssh’ is of no help. Is the new ssh module ‘preferred’ by the community going forward? Or is it just a random fork by the Fabric guys? Other than the download stats on PyPI, it’s difficult to tell, because searching for info about it is... tricky.

As another example, the pbs package has recently changed its name to sh. Now pbs might not be the bestest name, but changing it to sh causes exactly the same kind of problem as ssh. There can be a real feeling of ‘hijacking’ when something so domain specific is used for a general project name. Using such a name is a clear signal: this is the module you should want to use for this task - you’d be crazy to try anything else! That may or may not be intended or justified, but when it is a trivial thing for anyone to do, we developers have to be very careful and deliberate. Domain-specific project names, with massively overloaded meanings, only make sense in a very defined namespace: in these cases, the set of Python packages on PyPI.

Except, in a Google world, there aren’t namespaces.

Finding a project name (or rather finding the absence of one)

One of the problems with project naming in a flat unified project namespace (because of course there is one namespace) is project name squatting. For a variety of reasons - good and bad - developers decide that ‘release early, release often’ is a good policy. And one of the first things needed for that first visible release - perhaps the only thing needed - is a project name. So names are snapped up in an eager race. Project names have become the new hot-property. So we have lots of great project ideas, which need and find an awesome project name, make that first release, … and then do nothing. Stagnate. Just like the dot-com crazy days, we have project-name squatting, and permanent project-name ‘under construction’ empty shells… And, like defunct satellites cluttering low-earth orbit, the debris of project names now unused is a danger to every other project, trying to find its own space and path through the knowledge-sphere, avoiding the no-man’s land which has been staked out and left barren, taking juicy spectrum in an interference causing blackout. Soon there will be no more names left and [Sorry, I seem to have got carried away. Ahem.]

So…?

The following are some more thoughts and examples. Most of this is subjective. Hurrah for being able to dump half-finished ideas in a well name-spaced environment!

Over-general names:

  • ‘node’ - really unhelpful.
  • ‘windows’ - key element in GUI programming. WIMP.
  • ‘dropbox’ - to a certain extent.
  • ‘color’ - remember them? Good thing they didn’t take this word away…
  • ‘word’ - a tool for writing words?
  • eliminate a name not just from the project namespace, but increasingly from the word namespace.
  • makes web searching harder

Unpleasant / generally bad names:

  • git
  • gimp
  • My[anything] ;-)
  • Any number of ‘offensive’ or ‘wrong connotation’ names, often leading to name changes, which help no one, except in an ‘any publicity is good publicity’ kind of way.

Duplicate projects with the same name:

Create or recognise our own namespaces:

  • blog articles: author + title
  • PyPI / CPAN etc
  • ‘hungarian notation’ e.g. pyxyz, where the ‘py’ prefix includes some indicator of what namespace it lives in.
  • domain name country code extensions - ‘.io’ etc
  • ‘file extension’ as part of project name: ‘node.js’ etc
  • identification by company or organisation: iOS / iPod / i*, gmail, google maps, etc
  • identification by well-known patterns: xUnit, [j/py]Query etc.

Summary

If I were to produce a new vacuum cleaner and call it ‘Vacuum’, then various people might get upset. We (in software development) don’t really want to have to deal with all the legal & trademark clutter - the fact that we can have an idea, create a project and ‘market’ it all in a weekend is awesome, but requires us to act responsibly. Just because we can launch a new project into the orbital (name)space around us, doesn’t mean we must. Though it is awfully tempting… In addition we need to recognise, use, and educate ourselves and others about the namespaces all around us.

So I guess what I’m really saying, is (to quote Tim Peters)...

Namespaces are one honking great idea - let’s do more of those!

Sunday, 10 June 2012

pylibftdi v0.10 released

I’ve recently released pylibftdi v0.10. pylibftdi is a ‘minimal Pythonic interface to libftdi’, and is intended to be (possibly?) the easiest way to get up and running for simple use cases of FTDI’s range of USB to serial and parallel interface chips and modules. v0.10 adds a couple of new features and bug fixes.

For a long time I suffered under the misapprehension that version numbers should follow the rules of decimal numbers, and that by all reasonable accounts v1.0 should have followed 0.9, and since I want(ed) 1.0 to be ‘stable’ (I currently classify it as ‘beta’), I’d reached an impasse. I can’t remember the exact moment, but I had a realisation that I didn’t have to approach 1.0 via smaller and smaller increments from 0.9 (as in Zeno’s race), but that I could go from 0.9 to 0.10. Anyway, I still want to do better documentation (and a few other things) before reaching 1.0.

Changes in v0.10:

  • Running the unit tests is now easier due to some reorganisation - just run python -m unittest discover in the top level directory.

  • Support for the FT232H device - this has a different USB product ID (PID) to the previous devices I’d been testing with and using - mainly FT245BM/RL, FT232R/RL. All those devices have a PID of 0x6001, while the newer FT232H has a PID of 0x6014. I experimented for a while with having (defaulted) extra parameters for specifying the VID and PID of the target device, but this pushed too much complexity up to the user - I really want pylibftdi to be something which can be used with default options and next-to-no set up code for most basic operations. The approach taken is to have two lists (USB_VID_LIST, USB_PID_LIST) and have the device finding code iterate over the Cartesian product of these (i.e. a nested loop, but implemented through the wonderful itertools.product - see the sketch after this list). So adding new PIDs in the future is as simple as appending to USB_PID_LIST, and a device can be opened with no parameters to the Device() constructor if it’s the only FTDI device on the USB bus.

  • Resetting the device to serial mode on open. There’s been discussion about implementing this logic in the library on the libftdi mailing list, but doing it in pylibftdi as well doesn’t hurt. This fixes the unexpected condition that if a previous application had used a device in BitBang mode, reopening it just using Device() would leave it in BitBang mode, rather than the expected serial mode (for devices which have support both).

  • Added a ‘buffer_size’ parameter to the Device() constructor (defaulted to zero, which retains previous behaviour) which chunks reads and writes into accesses of that length at most. This avoids the issue that a call of (for example) dev.write('hello' * 100000) over a 9600 baud serial link would take an incredibly long time, and since it is all running in the library context (via a ctypes call), it wouldn’t be interruptible by Ctrl-C.

  • Removed the deprecated use of Driver() to be a synonym for Device().

  • Update: I've already done two maintenance releases in the hours since originally writing this - v0.10.2 is now current. One of the major changes is that the examples subpackage is now included in the sdist - so python -m pylibftdi.examples.led_flash should work if you have an LED attached to D0 on your device.
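
For reference, the device-search loop described in the FT232H item above boils down to something like the following sketch (open_matching is a made-up stand-in for the real open-by-VID/PID call):

from itertools import product

USB_VID_LIST = [0x0403]           # FTDI's vendor ID
USB_PID_LIST = [0x6001, 0x6014]   # FT245/FT232 family, plus the new FT232H

def find_device():
    # iterate over the Cartesian product of VIDs and PIDs
    for vid, pid in product(USB_VID_LIST, USB_PID_LIST):
        device = open_matching(vid, pid)   # hypothetical open call
        if device is not None:
            return device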

The plan for the next release is just more tidy-ups, examples and more documentation, but I might squeeze a few other things in there…

Monday, 7 May 2012

urlsearch - web searching from the command line

Continuing my pursuit of having a custom vocabulary on my command line, preferably via argv[0] abuse, I've now addressed the subject of web searching. I probably make somewhere between 20 and 50 searches in a typical day, mostly on Google, but Wikipedia comes high up on the list too.

urlsearch is a small script which kicks off a browser search from the command line. The plan is that the task switch associated with switching from the command line (where you usually are, right?) to the browser is eliminated. By complex, spurious and - to be frank - non-existent calculations, I estimate that this reduction in friction should make you 4.6% more productive, and thus make the world a better place.

Using Python's webbrowser module, it's straightforward to open a webbrowser to a particular page:

>>> import webbrowser
>>> webbrowser.open('http://google.com/search?q=standard+library+pep8')

What urlsearch gives is the equivalent to the above from the following at a Bash prompt:

$ google standard library pep8

It's simple, short, and in its basic form is just a couple of lines of Python:

#!/usr/bin/env python
import sys, urllib, webbrowser

webbrowser.open('http://google.com/search?q=' +
                urllib.quote_plus(' '.join(sys.argv[1:])))

Make that executable, put it in the path, and you're good-to-go with google [1] searching from the command line. However, as always, complexity is lurking, and desires to have its way...

The following things are addressed in urlsearch:

Automatic gTLD checking

A range of gTLDs are searched in turn using socket.getaddrinfo(HOSTNAME, 'http'). By default this list starts with the empty gTLD (so local search domains are tried first), then .com, .org, .net, and .co.uk are tried in that order - these being most relevant to my uses. Changing it to default to '.fr' first might be reasonable for French-speakers, for example, but avoiding having to think about this is one more thing not to have to think about. As it were.
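
A sketch of that probing logic (the function name is mine; the real script differs in detail):

import socket

def first_resolving(name, tlds=('', '.com', '.org', '.net', '.co.uk')):
    """Return the first name + tld which resolves, trying in order."""
    for tld in tlds:
        try:
            socket.getaddrinfo(name + tld, 'http')
        except socket.error:
            continue
        return name + tld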

Generic search engine support

This is where the wonder of argv[0] fits in :-) Via symlinks to urlsearch, various search engines can be supported. An argv[0] of 'google' will cause a google.com search, while 'wiki' is special-cased to wikipedia. The search query format also needs special-casing for many search engines - the default of /search?q={terms} works for Google, Bing and several other sites.

The following sites are directly supported or special cased:

argv[0]       search engine
-----------   -------------
google        Google
bing          Bing
wiki          Wikipedia
duckduckgo    DuckDuckGo
pylib         Python standard libraries (direct jump)
jquery        jQuery API search (direct jump)

These are managed in the code by a very dull if/elif chain, though something a bit less 'hackish' would probably be wanted to scale to further engines.
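
Something like this, hedged (the URL templates are approximate, and the real script special-cases more than this):

import os
import sys

engine = os.path.basename(sys.argv[0])    # 'google', 'wiki', ...
if engine == 'wiki':
    query_format = 'http://en.wikipedia.org/w/index.php?search={terms}'
elif engine == 'duckduckgo':
    query_format = 'http://duckduckgo.com/?q={terms}'
else:
    # the default /search?q= form works for Google, Bing and others
    query_format = 'http://' + engine + '.com/search?q={terms}'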

Trac support

Trac [2] follows the same search query format as Google, and has a great 'quickjump' feature, where certain search query formats take the user directly to the relevant page. For example, a search for r5678 will go directly to the changeset for revision 5678, and a search for #1234 will go directly to ticket 1234. This ticket search can't be done from a Bash prompt however, as it will be treated as a comment and ignored. This is special cased such that if the search term is an integer, it will be preceded with '#'.
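
That special case is tiny (a sketch):

import sys

terms = ' '.join(sys.argv[1:])
if terms.isdigit():       # e.g. 'trac 1234' searches for '#1234'
    terms = '#' + terms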

Other tweaks

Output from the browser writing into the command prompt (as happens with Chrome, for example) is redirected to /dev/null.

The Code.

The code is here: http://bitbucket.org/codedstructure/urlsearch

[1] Other search vendors are available.
[2] http://trac.edgewall.org

Monday, 18 July 2011

HTTP or it doesn't exist.

This is my 'or it doesn't exist' blog post[1][2][3]. I think everyone should have one ;-)

A big chunk of my life is processing electronic information. Since I would like it to be a (slightly) smaller chunk of my life, I want to automate as much as possible. Now ideally, I don't want a massive disconnect between what I have to do as a human processor of information and what I need to tell a computer to do to get that job done without my help. Because it's easier that way.

So when I hear that the information I need to process is in some spreadsheet or other on a Windows share, it makes me a little sad. When I hear that it is available via a sensible REST interface in a sensible format, my heart leaps for joy just a little.

With something like Python's standard library (and third-party package) support for HTTP (requests), XML (ElementTree) and JSON, I should be able to get my computer to do most of the manual data processing tasks which involve 'documents' of some form or other.
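
For example - with made-up URLs, since the point is the shape of the code rather than the endpoints - consuming either format is only a few lines:

import json
from xml.etree import ElementTree as ET

import requests

stats = json.loads(requests.get('http://example.com/stats.json').content)
tree = ET.fromstring(requests.get('http://example.com/stats.xml').content)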

In a previous job I worked at convincing anyone who would listen that 'XML over HTTP' was the best thing since sliced bread. With appropriate XSLT and CSS links, the same data source (i.e. URI) could be happily consumed by both man and machine. Admittedly most of the information was highly structured data - wire protocols and the like, but it still needed to be understandable by real people and real programs.

I'm not an XML expert, but I think I 'get' it. I never understood why it needed so much baggage though, and can't say I'm sad that the whole web services thing seems to be quietly drifting into the background - though maybe it was always trying to.

A lot changes in web technology in a short time, and XML is no longer 'cool', so I won't be quite as passionate about 'XML over HTTP' as I once was. For short fragments it is far more verbose than JSON, though I'd argue that for longer documents, XML's added expressiveness makes the verbosity worth it. Maybe it was ever thus, but whenever two technologies have even the slightest overlap, there seems to be a territorial defensiveness which makes the thought of using both in one project seem somewhat radical. So while I've used JSON much more than XML in the last couple of years, I've not turned against it. If done right (Apple, what were you thinking with plist files!?) - it is great. Compared to JSON-like representations, the ability to have attributes for every node in the tree is a whole new dimension in making a data source more awesome or usable (or terribly broken and rubbish). I've seen too many XML documents where either everything is an attribute or nothing is, but it's not exactly rocket science.

Things I liked about XML:

Simplicity
I like to think I could write a parser for XML 1.0 without too much effort. If it's not well formed, stop. Except for trivial whitespace normalisation etc, there is a one-to-one mapping of structure to serialisation. Compare that with the mess of HTML parsers. While HTML5 might now specify how errored documents should be parsed (i.e. what the resulting DOM should be), I suspect that a HTML5 -> DOM parser is a far more complex beast.
Names! Sensible Names!
Because HTML is limited in its domain, it has a fixed (though growing thanks to the living standard[4] which is HTML) set of tags. When another domain is imposed on top of that, the class attribute tends to get pressed into service in an ugly and overloaded way. By allowing top-level tags to be domain-specific, we can make the document abstraction more 'square'[5].
Attributes
Attributes allow metadata to be attached to document nodes. Just as a lower-level language is fully capable of creating a solution to any given problem, having 'zero mental cost' abstractions (such as the data structures provided by high-level languages) enables new ways of thinking about problems. In the same way, having attributes on data nodes doesn't give us anything we couldn't implement without them, but it provides another abstraction which I've found invaluable and missed when using or creating JSON data sources.

What does make me slightly(!) sad though is the practical demise of XHTML and any priority that browsers might give to processing XML. There is now a many-to-one mapping of markup to DOM, and pre HTML5 (and still in practice for the foreseeable future considering browser idiosyncrasies and bugs) - a many-to-many mapping. It wouldn't surprise me if XSLT transform support eventually disappeared from browsers.

Maybe there's a bit of elitism here - if you can't code well-formed markup and some decent XSLT (preferably with lots of convoluted functional programming thrown in) - then frankly 'get off my lawn!'. I love the new features in HTML(5), but part of me wishes that there was an implied background 'X' unquestionably preceding that, for all things. The success of the web is that it broke out of that mould. But in doing that it has compromised the formalisms which machines demand and require. Is the dream of the machine-readable semantic web getting further away - even as cool and accessible (and standards compliant - at last) web content finally looks like it might possibly start to achieve its goal? Is it too much (and too late) to dream of 'data' (rather than ramblings like this one) being available in the same form for both the human viewer and the computer automaton?

I'm prepared to be realistic and accept where we've come to. It's not all bad, and the speed with which technology is changing has never been faster. It's an exciting time to wield electronic information, and we've got the tools to move forward from inaccessible files stored on closed, disconnected systems. So where I used to say 'XML over HTTP', my new mantra shall now be 'HTTP or it doesn't exist'. At least for a while.

Sunday, 5 June 2011

Bugs: they hide where you're not looking

I bought a new Pogoplug over the weekend (only £50 new at PC World), and after being genuinely impressed by the Pogoplug software, decided it was far too easy and put PlugApps Linux on it. These 'plug' type devices are fairly amazing - cheap, very low power (measured mine at just under 4 watts, with only a single USB flash stick), but with a decent 1.2GHz ARM processor. I'm already thinking my next computer might be another 'plug.

After hacking for a while (why won't my printer work?), I decided to check whether my pylibftdi package worked on it. To my shock, a quick 'pacman -S libftdi; pip install pylibftdi' worked fine, and I could open a device connection to an FTDI device! But then things got worse. Trying to run examples/lcd.py failed with an exception in BitBangDevice, and I quickly realised that the API changes I'd done in 0.8 to make device access more 'file-like' had broken things in BitBangDriver. I was slightly sad that I'd released something where the examples didn't even work, and part of the whole reason the package might be useful to people (the abstraction over bit-bang device operation) was broken.

pylibftdi is fairly simple, and basically consists of Driver, Device, and BitBangDevice classes. Most of the interesting code is in the Device class - so this is where I started when I finally got round to adding some tests for the 0.8 release. Having achieved reasonable coverage (though shamefully less than the 100% Uncle Bob demands), I considered my initial testing 'done'. I knew there was more to add later, and had (and still have) full intentions to 'get around to it'.

What I failed to anticipate was an unintended side-effect of writing tests. In the same way teachers might teach how to pass an exam rather than the depth and breadth of a subject, once tests exist, the purpose can simply be to pass them. Old manual acceptance tests get ignored as the 'old way' of doing things. Ideally of course this isn't a problem, because full test cases exist for every feature and code-path in the system, but that was very far from the case here. So somehow, my standard acceptance test (do the example programs still work?) got omitted, in preference for 'there are tests now, so it must be better! And the tests pass!'

So beware - a little testing can be a dangerous thing. The bugs hide where you're not looking for them. This is great motivation for achieving full test coverage, for automating acceptance testing (as well as unit / component level testing) so far as possible, and for being humble when it comes to writing tests. My motivations for writing them in the first place were two-fold: the feeling it was 'what I should do', and the idea that at some future point when I added or refactored things later I could be confident I hadn't broken things. I had no thought that the software was already broken; that I needed tests.

Anyway, pylibftdi 0.8.1 is now out, with trivial but important fixes and lots more tests.

Saturday, 7 May 2011

pylibftdi 0.8 released; testing, coverage, and mocking

pylibftdi is a file-like wrapper to FTDI USB devices such as the UB232R (a USB<->serial converter) and the UM245R (8 bit parallel I/O).

No big changes for the 0.8 release, but a couple of new things:

  • ability to specify the device identifier in the Device([device_id]) constructor as either a serial number (as previously), or a device description. So can now specify Device('UB232R'), and the first attached UB232R device will be opened. The code initially tries to open by serial number, and if that fails will try to open by description, which I'm fairly confident will be useful rather than annoying :-)
  • more file-like API functions (flush, readline()/readlines()/writelines(), iterating over lines). These probably only make sense for text over serial lines, but that's a use-case worth supporting, considering pylibftdi already has unicode support.

As well as that, I finally got round to adding some tests, and discovered something wonderful: checking test coverage isn't just practical icing on the cake to make sure things are tested well, but is a powerful and effective motivation for writing tests. I'm using coverage, and have to say it's one of those things I wish I had got round to sooner.

Speaking of which, at some point I'll probably end up saying the same about Mock, which I've read around and know I should probably start using, but it's just so easy in Python to knock up something like this:

fn_log = []
class SimpleMock(object):
    """
    This is a simple mock plugin for fdll which logs any calls
    made through it to fn_log, which is currently rather ugly
    global state.
    """
    def __init__(self, name="<base>"):
        self.__name = name

    def __getattr__(self, key):
        # This makes me smile :)
        return self.__dict__.get(key, SimpleMock(key))

    def __call__(self, *o, **k):
        fn_log.append(self.__name)
        # most fdll calls return 0 for success
        return 0

def get_calls(fn):
    "return the called function names which the fdll mock object made"
    del fn_log[:]
    fn()
    return fn_log
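
Usage is along these lines (the libftdi-style function names are just illustrative):

def fake_session():
    fdll = SimpleMock()
    fdll.ftdi_init()
    fdll.ftdi_usb_open()

assert get_calls(fake_session) == ['ftdi_init', 'ftdi_usb_open']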

Sometimes I think Python makes 'clever' things like that too easy, and is perhaps the reason that although in the Python language there is only-one-way-to-do-it, in the Python ecosystem there is perhaps a tendency to reinvent the wheel over and over again. Because it's easy - and it's fun.

As always code is at bitbucket. For the next release (0.9) I'm planning to add more tests and docs (which are rather scarce), as well as one or two of the other things I've got planned (possible D2XX support, or at least some notes on libftdi on Windows, more examples & protocol adapters, maybe even a web interface for 8 bit IO...)

Wednesday, 6 April 2011

xmlcmd: adding an --xml option, one command at a time

In my last post, I wrote some thoughts on how using the text based meta-language[1] of regular expressions to filter and manipulate structured data from UNIX commands was not fully exploiting the jigsaw-puzzle approach of 'the unix philosophy'[2], and that XPath (and by implication XML) provided an alternative where structured data on the command line was concerned.[3]

I also mentioned how great things could be if, like subversion, every POSIX command line tool had an --xml switch which could output XML. (There are many programs with XML output, but the main POSIX ones[4] don't have this as an option)

Here's one I made earlier

I was always aware of the danger of overstating the case, but sometimes that can be helpful. Or at least fun. And I'd already started prototyping something which looked fun, dangerous, and potentially useful. This is intended to be illustrative rather than a serious suggestion, but there might be many other cases where the concepts can be used more seriously.

1. Add a path

There isn't any magic to what we're doing in adding --xml options, and we're not touching the original programs. We're just using the fact that the PATH in POSIX operating systems contains an ordered list of entries, and we're simply inserting a 'hook' early on in the path which can catch and redirect certain formats of command, while transparently forwarding others.

I tend to have a ~/bin directory on my path anyway (taking care that it is only writable by myself) - so I'm already set, but if not, you'll need a directory which appears first on the PATH.

ben$ mkdir -p ~/bin

Add that path to the start of your login path (e.g. in .bashrc or .bash_profile):

export PATH=$HOME/bin:$PATH

Once that is done, anything in that directory will be run in preference to anything else. Put an 'ls' file in there something like the following:

#!/usr/bin/env python
print("These are not the files you're looking for")

Make it executable (chmod +x ~/bin/ls) and you won't be able to run 'ls' anymore. Except you are running it, it's just a different ls, and not doing anything particularly helpful. You can always run the original ls with a fully specified path (or try using $(whereis ls)).

Two more things make this potentially useful:

  • Finding the next program on the PATH, which would have been run if something else hadn't sneaked in first
  • Selectively running either this 'original' program or some different code based on relevant criteria (e.g. existence of --xml in the command line options)
and the following makes things practical:
  • Making the two things above easily reusable for any command.

2. The magic of args[0]

Most of the time most programs ignore args[0] - the program's own name. But what if args[0] could be treated as a command line option, just like all the others? What makes this possible is having multiple symbolic links to a single program. args[0] is then the name of the original symlink by which the process was called, so although the same program is ultimately running, it can determine in what way it was called. It can therefore change its own operation. This technique is used in the busybox project to implement a generous number of commands in a single executable.

3. Introducing xmlcmd

xmlcmd is a Python package which supports this terrible corruption of POSIX as it should always be. The main xmlcmd module code is fairly straightforward, and is shown below. This finds the original program (which would have been run if we weren't first on the path), and then either execs that (if no --xml option is provided), or runs some Python code in a dynamically imported Python module (_xyz from the xmlcmd package, where xyz is the 'original' command name) if --xml is present.

#!/usr/bin/python
"""
xmlcmd.py - support for adding an --xml option to various commands

Ben Bass 2011. (Public Domain)
"""

import sys
import os
import which  # from PyPI 'which' package

def process_cmd(cmd_name, args, orig_cmd_path):
    """
    import and call the main() function from the module
    xmlcmd._{cmd}
    """
    module = __import__('xmlcmd._%s' % cmd_name,
                        fromlist=['_%s' % cmd_name])
    raise SystemExit(module.main(args, orig_cmd_path))

def main(args=None):
    """
    run system command from sys.argv[:], where sys.argv[0]
    implies the real command to run (e.g. via symlinks to us)
    """
    if args is None:
        args = sys.argv

    # args[0] will be a full path - we only want the command name
    cmd_name = os.path.basename(args[0])
    if cmd_name.startswith('xmlcmd'):
        raise SystemExit('xmlcmd should not be called directly')

    # get the command which would have run if we hadn't sneaked
    # ahead of it in the $PATH
    cmd_path_gen = which.whichgen(cmd_name)
    cmd_path_gen.next()   # skip first match (us)
    orig_cmd_path = cmd_path_gen.next()

    if '--xml' in args:
        args.remove('--xml')
        # forward to our xmlized version...
        process_cmd(cmd_name, args, orig_cmd_path)
    else:
        # execv *replaces* this process, so it has no idea it
        # wasn't called directly. Total transparency.
        os.execv(orig_cmd_path, args)

if __name__ == '__main__':
    main()

4. The implementations

The real work is all handled in the _{cmd} modules of course, so admittedly we've really only moved the problem around a bit. But the point of this exercise is about the ease with which we can add these new entry points into existing systems. Nothing slows down in any noticeable way, and it would be easy to extend an entire class of commands, one at a time, by nothing more than adding a Python module and creating a symlink.

For reference, the main() function from _ls.py looks something like this:

import os
import sys
from xml.etree import ElementTree as ET   # imports implied by the snippet

def main(args=None, orig_cmd_path=None):
    """very basic xml directory listing"""
    if len(args) > 1:
        target_dir = args[-1]
        if not os.path.isdir(target_dir):
            raise SystemExit('%s is not a directory' % (target_dir,))
    else:
        target_dir = os.getcwd()

    root = ET.Element('directory', name=target_dir)
    for fn in os.listdir(target_dir):
        stat = os.stat(os.path.join(target_dir, fn))
        f_el = ET.SubElement(root, 'file', mtime=str(stat.st_mtime))
        ET.SubElement(f_el, 'name').text = fn
        ET.SubElement(f_el, 'size').text = str(stat.st_size)
    ET.ElementTree(root).write(sys.stdout, 'utf-8')
    sys.stdout.write('\n')

5. Example

ben$ sudo pip install which xmlcmd

This (yup, it's on PyPI) will install the xmlcmd Python package (and the 'which' dependency), and an xmlcmd wrapper script which should end up on the path. With that done, you can now create the magic symlinks:

ben$ ln -sf $(which xmlcmd) ~/bin/ls
ben$ ln -sf $(which xmlcmd) ~/bin/ps

And now, assuming things are working properly (a quick hash -r/rehash can't hurt), you should be able to do wonderful things like this:
ben$ ps --xml aux | xpath '//process/command/text()[../../cpu > 2.5]'
which in this case displays the command name of all processes currently taking more than 2.5% of the CPU. Sure the XPath isn't exactly elegant. But the point is that patterns of this micro-language would be shared between tasks, and manipulating structured data on the UNIX command line would become as easy as text manipulation is now.

Here's some they made earlier...

Having said and done all that, a few searches later (for 'posix commands' in this case) brought up xmlsh.org, which seems to do some very similar things.

I also found (via [2]) xmltk, which at first glance seems to have beaten me to these ideas by about 9 years... :-)

Notes

[1]
'Regular expressions are notations for describing patterns of text and, in effect, make up a special-purpose language for pattern matching.' Brian Kernighan, Beautiful Code (ed. Andy Oram & Greg Wilson, O'Reilly Media Inc).
[2]
The Art of Unix Programming, Eric S. Raymond. Especially the 'Rule of Composition'; see Chapter 1. (Note this book also praises text of course...)
[3]
What a pointlessly long sentence.
[4]
POSIX 2 (Commands and Utilities) covers these, e.g. see reference here

Monday, 21 March 2011

XPath is to XML as regex is to text

Anyone who has been a developer for a while gets familiar with regular expressions. They eat text for breakfast, and spit out desired answers. For all their cryptic terseness, they are at least in part reusable, and are based on only a handful (or two...) of rules. 'regexes' are a domain-specific micro-language for searching and filtering text.

But once we get outside of text, what is there?

With XML, we have XPath. I had one of those light-bulb realisations recently that what regexes are to text, XPath is to XML. And it made me think:

Why would I want to use a data substrate which doesn't have such a tool?

What I mean is this; text has regex. XML has XPath. RDBMS have SQL. Markup language of the day has... oh, it doesn't. Not really, in the sense of a standardised domain-specific micro-language. Regular expressions, XPath and SQL have history and mindshare. They 'work' specifically because they are DSLs, rather than high-level code. (OK, SQL is pushing it further than I'd like here, but it's still a DSL. Just not a micro-sized one.) To me, this is a problem which many 'NoSQL' tools have. I want the features of them, but CouchDB wants me to write map-reduce functions in JavaScript. MongoDB wants me to use a JSON-based query language. There is no commonality; no reuse; no lingua franca which will let me abstract the processing concepts away from the tools. Perhaps that will come in time for more data-representations (this seems to be an attempt for JSON, for example), but there is a significant barrier before such a tool gains widespread acceptance as a common abstraction across an entire data layer.

Pipelines and Data Processing

The 'UNIX philosophy' of connecting together a number of single-purpose programs to accomplish larger tasks is one of the keys to its power. These tools can be plugged together in ways which the original creators may never have thought of. Tools such as sed and awk are often employed as regex-based filters to command pipelines. I wish more tools had XML output options, because the tools we use in our pipelines often output structured data in textual format, often in tabular form. Tables are great for human consumption (provided they are modest in size), but when we start getting empty columns, cells flowing onto multiple lines, and other inconsistencies, it becomes a pain to parse. How great things could be if every tool followed subversion's lead and had an --xml option:

svn diff -r $(svn log --stop-on-copy --xml | xpath -q -e '//log/logentry[last()]/@revision' | cut -d '"' -f 2):HEAD
(This command does a diff from a branch base to the most recent revision. It still does some basic text processing, because the end result of an XPath expression is still a text node.)

Just imagine if POSIX defined an XML schema for each relevant command, and mandated an --xml option. Life would be so much easier. In many environments, data is structured but we still represent it as text. The pipeline philosophy might be nice, but it isn't exploited to the full when we need to write convoluted awk scripts and inscrutable regular expressions (or worse, Perl ;) ) to try and untangle the structure from the text. Consider something straightforward like the output of 'mount' on a *nix box. On my Mac it looks like this:

ben$ mount
/dev/disk0s2 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
/dev/disk1s1 on /Volumes/My Book (msdos, local, nodev, nosuid, noowners)

This is structured data, but getting the information out of that text blob would not be trivial, and would probably take many minutes of trial and error with regexes to get something reasonable. And the crucial thing is that you couldn't be sure it would always work. Plug a new device in which gets mounted in some new and interesting way, and who is to say that the new output of mount won't suddenly break your hand-crafted regex? That's where XML shines. Adding new information doesn't change anything in the old information. The way to access it doesn't change. Nothing breaks in the face of extension. Compare this to something like CSV, where the insertion of an extra column means all the indices from that column onwards need to change in every producer and consumer of the data.
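
To labour the point, imagine a hypothetical mount --xml (it doesn't exist; the element names below are my invention). Extracting, say, the devices of all hfs mounts becomes one stable expression:

ben$ mount --xml | xpath '//mount[fstype="hfs"]/device/text()'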

XML and the Web

I'm somewhat saddened that XHTML didn't win outright in the last decade, and that XML on the web never really took off. I spent months at a previous job trying to convince everyone that 'XML-over-HTTP' was the best thing since sliced bread. A single source of data, which could be consumed by man (via XSLT & CSS in the browser) and machine alike. Just think how much energy the likes of Google could save if our web content didn't focus almost entirely on human consumption and discriminate against machines ;-)

One interesting thing which has happened as XML-on-the-web has declined is the increase in use of CSS selectors, first via frameworks such as Sizzle (used in jQuery), and later in the standard querySelectorAll DOM method. There is clearly a need for these DSL micro-languages, and as the 'CSS selector' DSL shows, they can quickly establish themselves if there is a clear need and sufficient backing from the community. Also apparent is that existing solutions can be usurped - users could do virtually everything CSS selectors could do (and far more besides) with XPath, but that didn't happen. Simplicity won here. But just because XPath was (arguably) wrong for Web development, doesn't mean it is wrong everywhere, and I contend that there are places where we have over-simplified, forcing regular expressions and text manipulation to (and beyond) breaking point, when XML processing would make things simpler everywhere.

Conclusion

In terms of practicalities, if I had ever spent too long in the world of Java, I would probably see XML as an unwelcome and persistent pest. But living in the happier climes of Python-ville, I have access to the wonderful ElementTree API, via both ElementTree itself (included in the standard library) and lxml.

Both of these support XPath as well as high-level abstractions of XML documents to and from lists and dictionaries. With ElementTree, XML access from Python is (almost) as easy as JSON access from JavaScript. And with technologies like XPath and XSLT available, I think it's worth it.
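
A quick taste of that (this uses ElementTree's limited XPath subset via findall; lxml supports much fuller XPath):

>>> from xml.etree import ElementTree as ET
>>> doc = ET.fromstring(
...     '<directory name="/tmp">'
...     '<file mtime="1300000000.0"><name>a.txt</name><size>120</size></file>'
...     '<file mtime="1300000050.0"><name>b.log</name><size>9000</size></file>'
...     '</directory>')
>>> for name in doc.findall('.//file/name'):
...     print(name.text)
a.txt
b.log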

As a final thought, I've just had a quick glance through Greg Wilson's excellent Data Crunching, which contains chapters on Text, Regular Expressions, Binary data (rather a short ad-hoc chapter), XML, and Relational Databases. Perhaps the 'binary data' chapter is short because there simply aren't many patterns available. There is no language to describe unabstracted data. And perhaps when we consider the data layer we should be using, we should think not only of features and performance, but also the power, expressiveness, and concision of the languages available to reason about the information. Perhaps too often we settle for a lowest common denominator solution (text) when a higher level one might be more powerful, especially if we don't have to give up on the concepts of fine-grained interoperability which micro-DSLs such as XPath give us.

To be continued...

Sunday, 20 February 2011

Concurrent Queue.get() with timeouts eats CPU

...or how adding a timeout can make your program suffer

Call me lazy, but I like threads. Or at least I like the programming model they provide. I very rarely use explicit locks, and find the combination of threads and queues a great mental abstraction of parallel processing. Often though, the abstraction is so leaky that it gets me annoyed. Here is a case in point...

I noticed this problem in a Python process which sits in a loop doing readline() on a file object and dispatching the incoming lines to different worker threads to do some various asynchronous actions. With no input on the source file, the process was still taking 5% CPU. I would have expected next-to-nothing, since everything should have been blocking.

strace -fc -p $PID showed that the process was anything but idle though, and after further investigation, I found the culprit.

Concurrent Queue.get() with timeouts eats CPU.

A test case for this is the following Python (2 & 3) code. It intentionally doesn't do anything, simply starting WORKERS threads, each of which performs a blocking Queue.get. The main thread simply waits for a newline on stdin. I wouldn't expect this to take any significant CPU time - in theory all the threads are blocked - either waiting on stdin input, or on something to be available in the various worker queues (which nothing ever gets sent to).

import threading, sys, time
try: import queue
except ImportError: import Queue as queue

WORKERS = 100

class Worker(threading.Thread):
    def __init__(self):
        self.queue = queue.Queue()
        threading.Thread.__init__(self)
        self.daemon = True

    def run(self):
        while True:
            next_item = self.queue.get()
            print(next_item)

def test():
    w_set = set()
    for i in range(WORKERS):
        new_w = Worker()
        new_w.start()
        w_set.add(new_w)

    print('Running: Press Enter to finish')
    sys.stdin.readline()

if __name__ == '__main__':
    test()

Sure enough, running and monitoring this shows 0% CPU usage, but WORKERS+1 threads in use (I'm using OS X's Activity Monitor at the moment).

But let's suppose we want to change the worker threads to wake up occasionally to do some background activity. No problem: provide a timeout on the Queue.get():

class TimeoutWorker(Worker):
    def run(self):
        while True:
            try:
                next_item = self.queue.get(timeout=1)
            except queue.Empty:
                # do whatever background check needs doing
                pass
            else:
                print(next_item)

OK, so now the threads can wake up occasionally and perform whatever activity they want.

Except...

CPU usage just went up from ~0% to 10%. Increasing WORKERS shows that the CPU load of this program - which still does nothing (the queues never get anything put in them) - is proportional to the number of threads (95% at 1000 worker threads). I'm not inclined to look much further just now, but the strace output shows constant pthread activity: these threads aren't really sleeping.

This is fairly independent of the length of the timeout. For very short timeouts, I'd expect CPU usage to go up, as the worker threads spend more time doing work rather than being blocked. But there is no noticeable difference between timeout=10 and timeout=sys.maxint. In the latter case, the get() is never plausibly going to time out, but the same high-CPU behaviour still occurs.
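
To quantify this without reaching for strace or Activity Monitor, something like the following can be dropped into test() after the workers have started. This is just a rough sketch, assuming a Unix-like platform where the resource module is available:

import resource
import time

def cpu_percent(interval=10):
    # compare process CPU time (across all threads) to wall-clock time
    before = resource.getrusage(resource.RUSAGE_SELF)
    time.sleep(interval)
    after = resource.getrusage(resource.RUSAGE_SELF)
    used = ((after.ru_utime + after.ru_stime) -
            (before.ru_utime + before.ru_stime))
    return 100.0 * used / interval

print('CPU usage: %.1f%%' % cpu_percent())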

Fixing the code

I'm not inclined to delve too deep into CPython, but what Queue.get() does under the hood is clearly very different depending on whether it has a timeout or not: without one, the waiting thread blocks on a plain lock acquisition; with one, it ends up in Condition.wait(timeout), which in CPython 2.x doesn't block but polls - repeatedly trying to acquire the lock, sleeping for intervals which double up to a cap of 50ms - so every waiting thread wakes at least 20 times a second whether or not anything has happened. For now I'm content to fix the code to eliminate the situations where these problems can occur. Hopefully the fact that I've written this will keep me aware of this potential issue and I'll manage to avoid it in future :)

The code where I found this issue was using a 1 second timeout to continually check the while condition and exit if required. This was easily fixed by sending a poison-pill of None into the queue rather than setting a flag on the thread instance, and checking for this once we've got a next_item. This is cleaner anyway, allowing immediate thread termination and the use of timeout-less get(). For more complex cases where some background activity is required in the worker threads, it might make more sense to keep all threads using timeout-less Queue.get()s and have a separate thread sending sentinel values into each queue according to some schedule, causing the background activity to be run.
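
As a sketch of that poison-pill version, reworking the Worker class from the test case above (the stop() method is my own naming, not anything from the standard library):

class PoisonPillWorker(Worker):
    def run(self):
        while True:
            next_item = self.queue.get()   # no timeout: genuinely blocked
            if next_item is None:          # poison pill received
                break                      # thread ends immediately
            print(next_item)

    def stop(self):
        self.queue.put(None)

Shutdown is then just a matter of calling stop() on each worker and join()ing it, and idle CPU usage should drop back to ~0%.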

Conclusion

It seems fairly unintuitive that simply adding a timeout to a Queue.get() can totally change the CPU characteristics of a multi-threaded program. Perhaps this could be documented and explained. But then in CPython it seems many threading issues are entirely unintuitive. The scientific part of my brain won't stop thinking threads are wonderful, but the engineering part is becoming increasingly sceptical about threads and enamoured with coroutines, especially with PEP 380 on the horizon.

Wednesday, 9 February 2011

pylibftdi 0.7 - multiple device support

pylibftdi has always been about minimalism, which means that if you wanted to do something it didn't support, things got tricky. One of its glaring deficiencies until now was that it only supported a single FTDI device - if you had multiple devices plugged in, it would pick one, seemingly at random.

With pylibftdi 0.7, that has finally changed, and devices can now be opened by name. Or at least by serial number, which is nearly as good. A new example script (which I've just remembered is hideously raw and lacks any tidying up at all) examples/list_devices.py in the source distribution will enumerate the attached devices, displaying the manufacturer (which should be FTDI in all cases), description, and serial number.

The API has changed slightly to cope with this; whereas previously there was just a single Driver class, now the primary interface is the Device class. Driver still exists, and holds the CDLL reference, as well as supporting device enumeration and providing backwards compatibility.

(As an aside, using ftdi_usb_find_all was (not) fun - it sets a pointer to pointer which is then used to traverse a linked list. Trivial in C, an hour of frustration in ctypes. Anyway, I got there in the end).
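
For the curious, the shape of that ctypes code is roughly as follows. This is a sketch rather than pylibftdi's actual implementation, with the ftdi_device_list layout assumed from the libftdi 0.x headers:

from ctypes import (CDLL, POINTER, Structure, byref,
                    c_void_p, create_string_buffer)
from ctypes.util import find_library

class ftdi_device_list(Structure):
    pass

# a self-referential struct: _fields_ must be assigned after the
# class statement so POINTER() can refer to the class itself
ftdi_device_list._fields_ = [('next', POINTER(ftdi_device_list)),
                             ('dev', c_void_p)]

fdll = CDLL(find_library('ftdi'))
ctx = create_string_buffer(1024)  # over-allocated ftdi_context
fdll.ftdi_init(byref(ctx))

# ftdi_usb_find_all takes a pointer-to-pointer, which it points at
# the head of a newly-allocated linked list of matching devices
dev_list = POINTER(ftdi_device_list)()
count = fdll.ftdi_usb_find_all(byref(ctx), byref(dev_list),
                               0x0403, 0x6001)
print('%d devices found' % count)

node = dev_list
while bool(node):  # a NULL pointer tests false
    # node.contents.dev is the underlying USB device pointer
    node = node.contents.next

fdll.ftdi_list_free(byref(dev_list))

Anyway, back to the new API - making some noise over MIDI this time: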

>>> from pylibftdi import Device
>>> import time
>>> 
>>> # make some noise
>>> with Device('FTE4FFVQ', mode='b') as midi_dev:
...     midi_dev.baudrate = 31250
...     for count in range(3):
...         midi_dev.write(b'\x90\x3f\x7f')
...         time.sleep(0.5)
...         midi_dev.write(b'\x90\x3f\x00')
...         time.sleep(0.25)
... 

Both Device() and BitBangDevice take device_id as the (optional) first parameter to select the target device. If porting from an earlier version, one of the first changes is probably to use named parameters for options when instantiating these classes. My intention is that device_id will always be the first parameter, but the order and number of subsequent parameters could change.

Another change is that Devices are now opened implicitly on instantiation unless told not to (see the docstrings). Previously the Driver class only opened automatically when used as a context manager. There is no harm in opening devices multiple times though - subsequent open()s have no effect.

I've also finally figured out that I need to set long_description in setup.py to get documentation to appear on the PyPI front page. After all, without docs, it doesn't exist.

It's only been a few days since 0.6, but I wanted to get this release out - I think it is a big improvement on 0.5, and it'll probably be a while till the next release. In the meantime, I'll try and get a vaguely useful example going - which will probably involve MIDI and an LCD...

Sunday, 6 February 2011

pylibftdi 0.6 released: now with Python 3 goodness

pylibftdi 0.6 has been out the door and onto PyPI for the last few days, but I'm only just getting round to blogging about it. It's basically some minor work for Python 3 compatibility - the same code now works on both Python 2 (2.6/2.7) and Python 3. This means support for Python 2.5 has been dropped (due to use of bytearray/bytes types). I can always add it back in if people shout.

Other than trivially fixing a print statement to be a function call, the main change required was the expected bytes/string issue. The driver also gains a couple of parameters: mode ('t' for text, 'b' for binary) and encoding (default 'latin1').

In binary mode (the default - so no user code changes are required for this release), read() and write() take and return instances of type bytes. For text mode, write() will take either bytes/bytearray, or a string which it will encode with the given driver encoding, and read() will return a string. I've set the default to latin1 rather than utf-8, since latin1 maps the first 256 code points one-to-one onto byte values, so any byte sequence round-trips unchanged.
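
A quick doctest-style check of that round-trip property (purely illustrative, not part of pylibftdi; it works on both Python 2 and 3):

>>> data = bytes(bytearray(range(256)))
>>> data.decode('latin1').encode('latin1') == data
True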

Coming soon...

I've started work on 0.7 - the main feature of which is support for multiple devices. I had a few problems getting the right ctypes incantations to follow the linked-list which ftdi_usb_find_all sets, but that's sorted now. The bigger issue is that it really needs a split between driver and device, which could cause the API to change. I'm thinking of various ways to keep existing code working, and will probably go for something like:

  • 0.7 - set pylibftdi.SUPPORT_MULTIPLE to True to use new API / support multiple devices
  • 0.8 - set pylibftdi.SUPPORT_MULTIPLE to False to use old API / only support a single device / get a deprecation warning
  • 0.9 - SUPPORT_MULTIPLE no longer used; old API disappears.

So 0.7 is all about multiple device support, 0.8 will probably be support for Windows (supporting D2XX, for example), and 0.9 (or maybe just 1.0) will be a tidy-up / bug-fix / improve docs release. In parallel with all of this I'm writing some test code which will gradually bring this side of things up to standard. I'm not allowing myself to do a 1.0 release without decent testing & docs. All that will probably take two months; I only get a couple of hours a week looking at this. But it could be sooner - or later.

pylibftdi 0.7 should be out within a week or so, and I'll elaborate more then - hence the lack of any examples here. I'm on the case!

Friday, 31 December 2010

HTTP 'streaming' from Python generators

One of the more annoying things about HTTP is that it wants to send things in complete chunks: you ask for an object with a particular URL, and at some point later you get that object. There isn't a lot you can do (at least from JavaScript) until the complete resource has loaded. For more fine-grained control, it's a bit limiting.

Of course web sockets will solve all of this (maybe) once the spec has gone through the mill a few more times. But in the meantime there is often an impedance mismatch between how we'd like to do things on the server side and how we are forced to do them because of the way HTTP works.

The following is an example of one way to manage splitting things up. It allows Python generators to be used on the server side, sending an update to the client on every yield, with the client long-polling to get the data. This shouldn't be confused with CherryPy's support for using yield to stream a single response back to the client (which is discouraged) - the yield functionality is hijacked for other purposes once the method decorator is applied. Also note that this is only of any help for clients which can use AJAX to repeatedly poll the generator.

Example

Let's suppose that we want to generate a large (or infinite) volume of data and send it to a web client. It could be a long text document served line-by-line. But let's use the sequence of prime numbers (because that's good enough for aliens). We want to send it to the client, and have it processed as it arrives. The principle is to use a generator on the server side rather than a basic request function, but wrap that in something which translates the generator into a sequence of responses, each serving one chunk of the response.

Server implementation using CherryPy - note the json_yield decorator.

import cherrypy

class PrimeGen(object):
    @cherrypy.expose
    def index(self):
        return INDEX_HTML # see below

    @cherrypy.expose
    @json_yield   # see below
    def prime(self):
        # this isn't supposed to be efficient.
        yield 2
        probe = 3
        while True:
            for i in range(2, probe):
                if probe % i == 0:
                    break
            else:
                yield probe
            probe += 2

cherrypy.quickstart(PrimeGen)

The thing which turns this generator into something usable with long-polling is the following 'json_yield' decorator.

Because we might want more than one such generator on the server (not to mention generator instances from multiple clients), we need a key - passed in from the client - which associates a particular client with a generator instance. This isn't really handled in this example; see the source file download at the end of the post for that.

The major win is that the client doesn't have to store a 'next-index' or anything else. State is stored implicitly in the Python generator on the server side, so both client and server code should be simpler. Of course this goes against REST principles, where one of the fundamental tenets is that session state should be kept on the client. But there is a place for everything.

import functools
import json

def json_yield(fn):
    # each application of this decorator has its own id
    json_yield._fn_id += 1

    # put it into the local scope so our internal function
    # can use it properly
    fn_id = json_yield._fn_id

    @functools.wraps(fn)
    def _(self, key, *o, **k):
        """
        key should be unique to a session.
        Multiple overlapping calls with the same 
        key should not happen (will result in
        ValueError: generator already executing)
        """

        # create generator if it hasn't already been
        if (fn_id, key) not in json_yield._gen_dict:
            new_gen = fn(self, *o, **k)
            json_yield._gen_dict[(fn_id, key)] = new_gen

        # get next result from generator
        try:
            # get, assuming there is more.
            gen = json_yield._gen_dict[(fn_id, key)]
            content = next(gen)
            # send it
            return json.dumps({'state': 'ready',
                               'content': content})
        except StopIteration:
            # remove the generator object
            del json_yield._gen_dict[(fn_id, key)]
            # signal we are finished.
            return json.dumps({'state': 'done',
                               'content': None})
    return _
# some function data...
json_yield._gen_dict = {}
json_yield._fn_id = 0

The HTML to go with this does basic long-polling, separating the state from the content. Here I'm using jQuery:

INDEX_HTML = """
<html>
<head>
<script src="http://code.jquery.com/jquery-1.4.4.min.js">
</script>
<script>
$(function() {

  function update() {
    $.getJSON('/prime?key=1', {}, function(data) {
      if (data.state != 'done') {
        $('#status').text(data.content);
        //// alternative for appending:
        // $('#status').append($('<div>' + data.content + '</div>'));
        setTimeout(update, 0);
      }
    });
  }

  update();
});
</script>
</head>
<body>
<div id="status"></div>
</body>
</html>
"""

Uses

The example above is contrived, but there are plenty of possibilities if the json_yield decorator is extended in a number of ways. Long running server side processes can send status information back to the client with minimal hassle. Client-side processing of large text documents can begin before they have finished downloading. One issue is that a chunk of the data should be semantically understandable. Using things like this on binary files or XML (where it is only valid once the root element is closed) won't have sensible results.

There are plenty of possibilities in extending this; the decorator could accumulate the content (rather than the client) and send the entire results up-to-now back, or (given finite memory) some portion of it using a length-limited deque. Additional meta-data (e.g. count of messages so far, or the session key) could be added to the JSON information sent to the client each poll.
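
As a sketch of that accumulating idea (not tied to the decorator above - accumulate() here is a hypothetical helper, not part of the downloadable code):

from collections import deque

def accumulate(gen, maxlen=100):
    # wrap a generator so each step yields the most recent chunks
    # seen so far (up to maxlen), not just the latest one
    history = deque(maxlen=maxlen)  # bounded, so memory use is finite
    for item in gen:
        history.append(item)
        yield list(history)

Wrapping new_gen with accumulate() inside json_yield would then make each poll return the recent history rather than a single chunk.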

Disclaimer: I've not done any research into how others do this, because coding is more fun than research. There are undoubtedly better ways of accomplishing similar goals. In particular there are issues with memory usage and timeouts which aren't handled with json_yield. Also note that the example is obviously silly - it would be much faster to compute the primes on the client.

Download Files

Thursday, 25 November 2010

pylibftdi updated to 0.5

I've done some tidying up of pylibftdi, fixing a few bugs, refactored the pylibftdi package to contain several modules instead of everything dumped in a __init__.py file, and generally made it a bit cleaner. It even has docstrings for most things now, and a test or two!

pylibftdi is a simple interface to libftdi, which in turn allows accessing FTDI's range of USB parallel and serial chips. See here for more details, but very briefly, serial access is provided by a file-like (read/write) interface with a baudrate property, and parallel access is provided by a pair of properties - direction (data direction register) and port (data IO register).
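
In code, the parallel side looks something like this - a sketch against the 0.5-era API (the BitBangDriver class was later renamed BitBangDevice):

from pylibftdi import BitBangDriver

with BitBangDriver() as bb:
    bb.direction = 0x0F   # low nibble as outputs, high nibble as inputs
    bb.port = 0x05        # drive D0 and D2 high
    print(bb.port)        # reads give the current pin state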

I haven't yet added any more examples, hopefully I'll get round to that in the next week or so. I have been using it as a MIDI interface though, which is fun - I'll get an appropriate example of that out in the next version, together with some diagrams / photos etc.

Sunday, 19 September 2010

Announcing pylibftdi - a minimal Pythonic wrapper for libftdi

[edit 2013-11-25: note that recent versions of pylibftdi have deprecated and then removed the ability to use 'Driver' in this way; replace Driver and BitBangDriver with Device and BitBangDevice in the code below]

The playing-around I've done with FTDI devices seemed like a good opportunity to actually release something as open source, and so I present 'pylibftdi'. Undoubtedly not the greatest, but right now most likely the latest FTDI-Python bridge in the rather large open source field. There are a few features I know I want to add to it (primarily support for multiple devices), but for a flavour of things:

Toggling an LED or other device from pin D0

from pylibftdi import BitBangDriver
import time

with BitBangDriver() as bb:
    while True:
        bb.port ^= 1
        time.sleep(0.5)

Sending a string to a serial terminal

from pylibftdi import Driver

with Driver() as drv:
    drv.baudrate = 19200
    drv.write('Hello World!')

It's been tested on Linux (Ubuntu 10.10) and Mac OS X (10.6), with libftdi 0.17 and 0.18, but doesn't have any intended platform specific requirements other than having the libftdi library installed. The following goals for this project differentiate it from similar projects:
  • Fully supported on Linux and Mac OS X, using Intra2net's open source libftdi driver.
  • Simple things are very simple, as the example above shows. The basic functionality is made as simple as possible to use, with properties and context managers used where it makes sense. Most other FTDI Python wrappers are 'just' low level bindings of the underlying API.
  • There will be an increasing library of examples showing interaction with various protocols and devices - note this is a goal, not yet an accomplishment, though there is an LCD example there.
pylibftdi is easy_installable (or equivalent) from PyPI, and the code is developed on bitbucket, where you can also raise tickets for problems and feature requests.

For reference, other similar projects include PyUSB, pyftdi, python-ftdi, and ftd2xx. There are probably others...

Monday, 30 August 2010

libftdi on Mac OS X

Update 2011/02/02: In trying to port pylibftdi to Python3, I found that the libraries which get built following the instructions below are 64-bit. All well and good, but the Python3.1 installation on Mac OS X 10.6 is 32-bit only (unlike the Python2.6 install).

While it's possible to get both 32-bit and 64-bit universal libraries, I haven't tried that yet. The solution to build 32-bit libraries was to use the following:

CFLAGS="-arch i386" ./configure && make && sudo make install

(Note I omitted libusb and sudo (for make install) in previous instructions - I've updated below). I also found I needed to do:

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/

prior to building libusb-compat - I can't remember if this was a problem or not the first time round. Anyway, hopefully I'll have pylibftdi working with Python3 soon.
End of Update

I've not written much about my ftdi projects recently, so here's an update. I've played around with reading values from an encoder and outputting to an RGB LED with PWM, but both of these require a fairly high constant poll rate (or in the case of reading the encoder, ideally interrupt-on-change). The jitter is quite annoying, and decent PWM seems tricky, even when following FTDI's instructions (pdf link) for maximising the buffer size. There's always going to be a gap when one buffer has finished and the next starts.

On a different note, I'm now mostly using my new Mac mini, so I've spent a few moments getting things to work there. Prerequisites:

  • libusb
  • libusb-compat
  • libftdi

These should be installed in the order listed; for each file, download it, untar it (tar -xf filename), then run './configure && make && sudo make install' in the new directory which tar has created.

On the Python side, there were a couple of issues when I tried to run the old code. First was that the library could not be found at all. This was due to the library extension (.so) being hardcoded. For a cross-platform solution, it seems the best way is to use the following:

from ctypes import *
from ctypes.util import find_library

dll = CDLL(find_library('ftdi'))

Rather than specifying 'libftdi.so', we have stripped off the 'lib' prefix and the .so extension - the simple 'ftdi' string is what we would pass to gcc's -l flag if we were linking against the library. Try 'gcc -lftdi' and it should report only that it can't find _main, not that the library is missing (try 'gcc -lmadeupname' and it should complain about being unable to find 'madeupname').

Once this was done, the program would some times run (especially under pdb, which lead me up the garden path of thinking it was a timing issue...), but other times would cause a segfault. This was tracked down to another hard-coded value - the size of the context structure. The following will report the size of the ftdi_context structure:

echo -e '#include <ftdi.h>\nint main(){return sizeof(struct ftdi_context);}' | gcc -xc - && (./a.out; echo $?)

This is somewhat fragile, as it will fail if the structure ever gets larger than 255 bytes (the size is returned via the exit status, which is truncated to 8 bits), but this seems unlikely for the time being. On this Mac this gives 112 bytes; on my Linux boxes it gives 84 - though they are running the previous (0.17) version of libftdi. There is also the issue that the Mac library is x86_64 (i.e. 64-bit), while the Linux libraries are 32-bit.

One solution, though not exactly a purist's, is to allocate far more memory than will be needed. It won't slow anything down, as only a pointer to the start of the block is passed around, and probably won't make a difference to the memory consumption as applications will always use whole numbers of pages (4KB minimum) anyway. So for now, an allocation of 1KB seems a good solution.

The result of this is that the changes needed to the earlier code for compatibility with both Mac OS X and Linux are as follows:

...

from ctypes.util import find_library

def ftdi_start():
    global ctx, fdll  # frown... :P
    fdll = CDLL(find_library('ftdi'))
    # size of ftdi_context varies
    # seen 112 (x86_64, v0.18). 84 (i386, v0.17)
    # over-allocate to 1KB.
    ctx = create_string_buffer(1024)
    fdll.ftdi_init(byref(ctx))
    fdll.ftdi_usb_open(byref(ctx), 0x0403, 0x6001)
    fdll.ftdi_set_bitmode(byref(ctx), 0xFF, 0x01)

...

Me loves os.name == 'posix' :-)

Wednesday, 16 June 2010

A Python LCD status display

This is the third in my series of blog posts about doing IO with FTDI's Async BitBang mode on the UM245R / UM232R eval boards, and this time we're actually going to do something useful - have an LCD update with interesting system information. The LCD in question is based on the ubiquitous HD44780 controller, which is interfaced to microcontrollers throughout the world...

Anyway, with only 8 IO lines on the UM2xxR boards, we need to use the 4-bit interface mode (two extra IO lines are needed beyond the 'data' path - one to act as a 'data-ready' strobe, and the other to select between 'data' and 'commands'). The wiring up is basically DB0-DB3 on the FTDI device going to D4-D7 on the LCD, with DB6 on the FTDI going to the 'RS' (register select) line on the LCD, and DB7 to the 'E' strobe. If that makes no sense, then hopefully the photo makes things slightly clearer...

I've also got an LED backlight for my display, which makes it look a whole lot cooler :-)

I'll present the code in chunks and try to explain it as I go along. First the initialisation and shutdown code, which are fundamentally unchanged from the previous examples, though they are now in functions to make them just a little tidier. (Note there is still no error checking here...)

"""
Write a string (argv[1] if run from command line) to a HD44780
LCD module connected via a FTDI UM232R/245R module

example usage:

# while true;
>   do python lcd.py $( awk '{print $1}' /proc/loadavg);
>   sleep 5;
> done
"""

from ctypes import *
import time, sys

def ftdi_start():
    global ctx, fdll  # frown... :P
    fdll = CDLL('libftdi.so')
    ctx = create_string_buffer(84)
    fdll.ftdi_init(byref(ctx))
    fdll.ftdi_usb_open(byref(ctx), 0x0403, 0x6001)
    fdll.ftdi_set_bitmode(byref(ctx), 0xFF, 0x01)

def ftdi_end():
    fdll.ftdi_usb_close(byref(ctx))
    fdll.ftdi_deinit(byref(ctx))

The following class is an abstraction of a bus - a collection of one (probably two, technically...) or more electrical lines which should be treated as a single unit. The aim here is to be able to program in a similar style to the embedded programming on a microcontroller, where registers are typically memory-mapped and writing to a bus is simply writing into a bitfield. The parameters of this abstraction are the width of the bus (in bits), and the offset from the LSB of the entire port being accessed. It also needs a reference to a driver which allows it to do the reading and writing to the port. By using this as a descriptor, we can define buses within classes representing the various devices we are using; in this case the LCD.

class Bus(object):
    """
    This class is a descriptor for a bus of a given width starting
    at a given offset (0 = LSB).  It needs a driver which does the
    actual reading and writing - see FtdiDriver below
    """
    def __init__(self, driver, offset, width=1):
        self.offset = offset
        self.width = width
        self._mask = ((1<<width)-1)
        self.driver = driver

    def __get__(self, obj, objtype=None):
        val = self.driver.read()
        return (val >> self.offset) & self._mask

    def __set__(self, obj, value):
        value = value & self._mask
        # in a multi-threaded environment, would
        # want to ensure following was locked, eg
        # by acquiring a driver lock
        val = self.driver.latch()
        val &= ~(self._mask << self.offset)
        val |= value << self.offset
        self.driver.write(val)

The following is the driver which will be used to do the actual data access. Note the use of the latch to store the last value written to the port, which cannot generally be read back from the device after having been written. (Latch registers for the IO ports were a big advance for the PIC18F series over the earlier 16F series, which needed the application to store this value separately in order to do read-modify-write operations properly on the IO ports.)

class FtdiDriver(object):
    def __init__(self):
        self._latch = 0

    def read(self):
        z = c_byte()
        fdll.ftdi_read_data(byref(ctx), byref(z), 1)
        return z.value

    def latch(self):
        return self._latch

    def write(self, val):
        self._latch = val
        z = c_byte(val)
        fdll.ftdi_write_data(byref(ctx), byref(z), 1)
        # the following is a hack specifically to allow
        # me to ignore all the timing constraints of the
        # LCD.  For more advanced LCD usage, this wouldn't
        # be acceptable...
        time.sleep(0.005)

Now that we've got a Bus class and a driver to use with it, we can define the LCD module interface. I'm not going to cover the details of the interface, but the Wikipedia HD44780 page has some pointers.

# need to instantiate this in global context so LCD
# class can be defined. Could tidy this up...
ftdi_driver = FtdiDriver()

class LCD(object):
    """
    The UM232R/245R is wired to the LCD as follows:
       DB0..3 to LCD D4..D7 (pin 11..pin 14)
       DB6 to LCD 'RS' (pin 4)
       DB7 to LCD 'E' (pin 6)
    """
    data = Bus(ftdi_driver, 0, 4)
    rs = Bus(ftdi_driver, 6)
    e = Bus(ftdi_driver, 7)

    def init_four_bit(self):
        """
        set the LCD's 4 bit mode, since we only have
        8 data lines and need at least 2 to strobe
        data into the module and select between data
        and commands.
        """
        self.rs = 0
        self.data = 3
        self.e = 1; self.e = 0
        self.e = 1; self.e = 0
        self.e = 1; self.e = 0
        self.data = 2
        self.e = 1; self.e = 0

    def _write_raw(self, rs, x):
        # rs determines whether this is a command
        # or a data byte. Write the data as two
        # nibbles. Ahhh... nibbles. QBasic anyone?
        self.rs = rs
        self.data = x >> 4
        self.e = 1; self.e = 0
        self.data = x & 0x0F
        self.e = 1; self.e = 0

    def write_cmd(self, x):
        self._write_raw(0, x)

    def write_data(self, x):
        self._write_raw(1, x)

All that remains is to initialise the FTDI device, initialise the LCD module, and write some data to it.

def display(string):
    ftdi_start()

    lcd = LCD()
    lcd.init_four_bit()

    # 001xxxxx - function set
    lcd.write_cmd(0x20)
    # 00000001 - clear display
    lcd.write_cmd(0x01)
    # 000001xx - entry mode set
    # bit 1: inc(1)/dec(0)
    # bit 0: shift display
    lcd.write_cmd(0x06)
    # 00001xxx - display config
    # bit 2: display on
    # bit 1: display cursor
    # bit 0: blinking cursor
    lcd.write_cmd(0x0C)

    for i in string:
        lcd.write_data(ord(i))

    ftdi_end()


if __name__ == '__main__':
    # note blatant lack of error checking...
    display(sys.argv[1])

And there we have it: a slightly cumbersome but also slightly cool and slightly useful little display utility. In the spirit of UNIX programming, this only does one thing - display the command line argument on the LCD. Obviously it needs major error handling if robustness is required...

while true; do python lcd.py $( awk '{print $1}' /proc/loadavg); sleep 5; done