Monday, November 30, 2009

You Built a Metaclass for *what*?

Recently I had a bit of an interesting problem: I needed to define a way to represent a C++ API in Python. I figured the best way to represent that was one Python class for each C++ class, with a functions dictionary to track each of the methods on the class. Seems simple enough, right? Do something like this:

class String(object):
    functions = {
        "size": Function(Integer, []),
    }


We've got a String class with a functions dictionary that maps method names to Function objects. The Function constructor takes a return type and a list of arguments. Unfortunately we run into a problem when we want to do something like this:

class String(object):
    functions = {
        "size": Function(Integer, []),
        "append": Function(None, [String])
    }


If we try to run this code we're going to get a NameError: String isn't defined yet. Django models have a similar issue with recursive foreign keys; Django's solution is to use the placeholder string "self" and have a metaclass translate it into the right class. A slightly more declarative API would also be nice, so something like this:

class String(DeclarativeObject):
    size = Function(Integer, [])
    append = Function(None, ["self"])


So now that we have a nice pretty API we need our metaclass to make it happen:

RECURSIVE_TYPE_CONSTANT = "self"

class DeclarativeObjectMetaclass(type):
    def __new__(cls, name, bases, attrs):
        functions = dict([(n, attr) for n, attr in attrs.iteritems()
            if isinstance(attr, Function)])
        for attr in functions:
            attrs.pop(attr)
        new_cls = super(DeclarativeObjectMetaclass, cls).__new__(cls, name, bases, attrs)
        new_cls.functions = {}
        for name, function in functions.iteritems():
            if function.return_type == RECURSIVE_TYPE_CONSTANT:
                function.return_type = new_cls
            for i, argument in enumerate(function.arguments):
                if argument == RECURSIVE_TYPE_CONSTANT:
                    function.arguments[i] = new_cls
            new_cls.functions[name] = function
        return new_cls

class DeclarativeObject(object):
    __metaclass__ = DeclarativeObjectMetaclass



And that's all there is to it. We take each of the functions on the class out of the attributes, create a normal class without them, and then do the replacements on the function objects and stick them in a functions dictionary.
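For completeness, here is a minimal sketch of the Function and Integer placeholders used above (the post doesn't show them, so these definitions are assumptions), along with a quick check that the metaclass resolves the recursive references:

class Integer(object):
    pass

class Function(object):
    def __init__(self, return_type, arguments):
        self.return_type = return_type
        self.arguments = arguments

class String(DeclarativeObject):
    size = Function(Integer, [])
    append = Function(None, ["self"])

# The "self" placeholder has been swapped out for the String class itself,
# and the Function attributes were popped off the class by the metaclass.
assert String.functions["append"].arguments[0] is String
assert not hasattr(String, "append")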

Simple patterns like this can be used to build beautiful APIs, as is seen in Django with the models and forms API.

Sunday, November 29, 2009

Getting Started with Testing in Django

Following yesterday's post, another hotly requested topic was testing in Django. Today I want to give a simple overview of how to get started writing tests for your Django applications. Since Django 1.1, Django automatically provides a tests.py file when you create a new application, and that's where we'll start.

For me the first thing I want to test with my applications is, "Do the views work?" This makes sense: the views are what the user sees, and they need to at least be in a working state (a 200 OK response) before anything else (business logic) can happen. So the most basic thing you can do to start testing is something like this:

from django.test import TestCase

class MyTests(TestCase):
    def test_views(self):
        response = self.client.get("/my/url/")
        self.assertEqual(response.status_code, 200)


Just by making sure you run this code before you commit, you've already eliminated a bunch of errors: syntax errors in your URLs or views, typos, forgotten imports, etc. The next thing I like to test is that all the branches of my code are covered. The most common place my views branch is in views that handle forms: one branch for GET and one for POST. So I'll write a test like this:

from django.test import TestCase

class MyTests(TestCase):
    def test_forms(self):
        response = self.client.get("/my/form/")
        self.assertEqual(response.status_code, 200)

        response = self.client.post("/my/form/", {"data": "value"})
        self.assertEqual(response.status_code, 302) # Redirect on form success

        response = self.client.post("/my/form/", {})
        self.assertEqual(response.status_code, 200) # we get our page back with an error


Now I've tested both the GET and POST conditions on this view, as well as the form-is-valid and form-is-invalid cases. With this strategy you can have a good base set of tests for any application without a lot of work. The next step is setting up tests for your business logic. These are a little more complicated: you need to make sure models are created and edited in the right cases, emails are sent in the right places, etc. Django's testing documentation is a great place to read more about writing tests for your applications.
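To give a flavor of that next step, here's a hedged sketch of a business-logic test; the Contact model, the URL, and the form fields are made up for illustration, but mail.outbox and the model assertions are standard parts of Django's test framework:

from django.core import mail
from django.test import TestCase

from myapp.models import Contact  # hypothetical model

class BusinessLogicTests(TestCase):
    def test_contact_form_creates_record_and_sends_email(self):
        data = {"name": "Alex", "email": "alex@example.com"}
        response = self.client.post("/contact/", data)
        self.assertEqual(response.status_code, 302)
        # The view should have created exactly one Contact...
        self.assertEqual(Contact.objects.count(), 1)
        # ...and sent one notification email (captured in-memory during tests).
        self.assertEqual(len(mail.outbox), 1)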

Saturday, November 28, 2009

Django and Python 3

Today I'm starting off doing some of the posts people want to see, and the number one item on that list is Django and Python 3. Python 3 has been out for about a year at this point, and so far Django hasn't really started to move towards it (at least at first glance). However, Django has already begun the long process of moving to Python 3, and this post is going to recap exactly what Django's migration strategy is (most of it is a recap of a message James Bennett sent to the django-developers mailing list after the 1.0 release, available here).

One of the most important things to recognize here is that though there are many developers using Django for smaller or new projects who want to start those on Python 3, there are a great many more with legacy deployments (as if we can call recent deployments on Python 2.6 and Django 1.1 legacy) that they want to maintain and update. Further, Django's latest release, 1.1, supports Python releases as old as 2.3, and a migration to Python 3 from 2.3 is nontrivial; it is significantly easier to make that migration from Python 2.6. This is the crux of James's plan: people want to move to Python 3, and moving towards Python 2.6 makes that easier for them and for us. Therefore, since the 1.1 release Django has been dropping support for one point version of Python per Django release. So Django 1.1 will be the last release to support Python 2.3, 1.2 will be the last to support 2.4, etc. This plan isn't guaranteed; a compelling reason to maintain support for a version longer will likely override it (for example, if a particularly common deployment platform only offered Python 2.5, removing support for it might be delayed an additional release).

At the end of this process Django will only support Python 2.6. At that point (or maybe even before), a strategy will need to be devised for how to actually handle the switch. Some possibilities are: 1) an official breakpoint, where only one version is supported at a given time; 2) Python 3 support begins in a branch that tracks trunk and eventually becomes trunk once Python 3 is the more common deployment; 3) Python 2.6 and 3 are supported from a single codebase. I'm not sure which of these is easiest. Other projects such as PLY have chosen option 3, but my inclination is that option 2 will be best for Django, since issues like bytes vs. str are particularly prominent in Django (it talks to so many external data sources).

For people who are interested, Martin von Löwis actually put together a patch that, at the time, gave Django Python 3 support (at least enough to run the tutorial under SQLite). If you're very interested in Django on Python 3, the best path would probably be to bring that patch up to date (unless it's wildly out of date, I haven't checked) and start fixing the new things that have been introduced since the patch was written. This work isn't likely to get any official support, since maintaining Python 2.4 support alongside Python 3 would be far too difficult, but there's no reason you can't maintain the patch externally on something like GitHub or Bitbucket.

Friday, November 27, 2009

Why Meta.using was removed

Recently Russell Keith-Magee and I decided that the Meta.using option needed to be removed from the multiple-db work on Django, and so we did. Yesterday someone tweeted that this change caught them off guard, so I wanted to provide a bit of explanation as to why we made that change.

The first thing to note is that Meta.using was very good for one specific use case: horizontal partitioning by model. Meta.using allowed you to tie a specific model to a specific database by default. This meant that if you wanted to do things like have users in one database and votes in another, it was basically trivial. Making this use case that simple was definitely a good thing.

The downside was that this solution was very poorly designed, particularly in light of Django's reusable application philosophy. Django emphasizes the reusability of applications, and the Meta.using option tied your partitioning logic to your models; it also meant that if you wanted to partition a reusable application onto another database, the only solution was to go in and edit the source of the reusable application. Because of this we had to go in search of a better solution.

The better solution we've come up with is some sort of callback you can define that lets you decide which database each query should be executed on. This would let you do simple things like direct all queries on a given model to a specific database, as well as more complex sharding logic like sending queries to the right database depending on the primary key value being looked up. We haven't figured out the exact API for this, and as such it probably won't land in time for 1.2; however, it's better to have the right solution and have to wait than to implement a bad API that would be deprecated in the very next release.
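To illustrate the idea (and only the idea), here's a hypothetical sketch of what such a callback might look like; none of these names are a real or final Django API:

def db_for_query(model, **hints):
    # Horizontal partitioning by model: everything in the "votes" app goes to
    # one database, user data to another, and the rest to the default.
    if model._meta.app_label == "votes":
        return "votes_db"
    if model._meta.app_label == "auth":
        return "users_db"
    return "default"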

Thursday, November 26, 2009

Just a Small Update

Unfortunately, I don't have an interesting, contentful post today; just a small update about this blog instead. I now have a small widget on the right-hand side where you can enter topics you'd like to hear about. I don't always have a good idea of what readers are interested in, and far too often I reject blog post ideas because I think either "no one cares about that" or "everyone already knows that," so hopefully this will be both a good way for me to write interesting content that people want to read and a good way for me to overcome any writer's block. So please submit anything you'd like to hear about: Python, Django, the web, programming in general, compilers, or me ranting about politics. I'm willing to consider any topic.

To my American readers: Happy Thanksgiving!

Wednesday, November 25, 2009

Final Review of Python Essential Reference

Disclosure: I received a free review copy of the book.

Today I finished reading the Python Essential Reference and I wanted to share my final thoughts on the book. I'll start by saying I still agree with everything I wrote in my initial review, specifically that it's both a great resource and a good way to find out what you don't already know. Reading the second half of the book, there were a few things that really exemplified this for me.

The first instance of this is the chapter on concurrency. I've done some concurrent programming with Python, but it's mostly been small scripts, a multiprocess and multithreaded web scraper for example, so I'm familiar with the basic APIs for threading and multiprocessing. However, this chapter goes into the full details, really covering the stuff you need to know if you want to build bigger applications that leverage these techniques. Things like shared data for processes, or events and condition variables for threads, are the kind of things the book explains well, with good examples of how to use them.

The other chapter that really stood out for me is the one on network programming and sockets. This chapter describes everything from the low-level select module up through the included socket servers. The most valuable part is an example of how to build an asynchronous IO system. This example is about two pages long and it's a brilliant demonstration of how to use the modules, how to make an asynchronous API feel natural, and what the tradeoffs between asynchronous IO and concurrency are. In addition, in the wake of the "* in Unix" posts from a while ago, I found the section on the socket module interesting, as it's something I've never actually worked with directly.

The rest of the book is a handy reference, but for me these two chapters are the sort of thing that earns it a place on my bookshelf. The way Python Essential Reference balances depth with conciseness is excellent: it shows you the big picture for everything and gives you deep detail on the things that are really important. I just got my review copy of Dive into Python 3 today, so I look forward to reviewing it in the coming days.

Tuesday, November 24, 2009

Filing a Good Ticket

I read just about every single ticket that's filed in Django's trac, and at this point I've gotten a pretty good sense of what (subjectively) makes a useful ticket. Specifically, there are a few things that can make your ticket no better than spam, and a few that can instantly bump it to the top of my "TODO" list. Hopefully these will be helpful both in filing tickets for Django and for other open source projects.

  • Search for a ticket before filing a new one. Django's trac, for example, has at least 10 tickets describing "Decoupling urls in the tutorial, part 3". These have all been wontfixed (or closed as a duplicate of one of the others). Each time one of these is filed it takes time for someone to read through it, write up an appropriate closing message, and close it. Of course, the creator of the ticket also invested time in filing the ticket. Unfortunately, for both parties this is time that could be better spent doing just about anything else, as the ticket has been decisively dealt with plenty of times.
  • On a related note, please don't reopen a ticket that's been closed before. This one depends more on the policy of the project, in Django's case the policy is that once a ticket has been closed by a core developer the appropriate next step is to start a discussion on the development mailing list. Again this results in some wasted time for everyone, which sucks.
  • Read the contributing documentation. Not every project has something like this, but when a project does it's definitely the right starting point. It will hopefully contain useful general bits of knowledge (like what I'm trying to put here) as well as project specific details, what the processes are, how to dispute a decision, how to check the status of a patch, etc.
  • Provide a minimal test case. If I see a ticket whose description involves a 30-field model, it drops a few rungs on my TODO list. Large blocks of code like this take more time to wrap one's head around, and most of it will be superfluous. If I see just a few lines of code it takes far less time to understand, and it's easier to spot the origin of the problem. As an extension of this, if the test case comes in the form of a patch to Django's test suite it becomes even easier for a developer to dive into the problem.
  • Don't file a ticket advocating a major feature or sweeping change. Pretty much anything that's going to require a discussion should start on the mailing list. Trac is lousy at facilitating discussions; mailing lists are designed explicitly for that purpose. A discussion on the mailing list can more clearly outline what needs to happen, and it may turn out that several tickets are needed. For example, filing a ticket saying "Add CouchDB support to the ORM" is pretty useless: it requires a huge amount of underlying changes to make it even possible, and after that a database backend can live external to Django, so there are plenty of design decisions to go around.

These are some of the issues I've found to be most pressing while reviewing tickets for Django. I realize they are mostly in the "don't" category, but filing a good ticket can sometimes be as simple as clearly stating what the problem is and how to reproduce it.

Monday, November 23, 2009

Using PLY for Parsing Without Using it for Lexing

Over the past week or so I've been struggling with attempting to write my own parser (or parser generator) by hand. A few days ago I finally decided to give up on this notion (after all, the parser isn't my end goal), as it was draining my time from the interesting work to be done. However, I wanted to keep my existing lexer. I wrote the lexer by hand using the method I described in a previous post; it's fast, easy to read, and I rather like my handiwork, so I wanted to keep it if possible. I've used PLY before (as I described last year), so I set out to see if it would be possible to use it for parsing without using it for lexing.

As it turns out, PLY expects only a very minimal interface from its lexer. In fact it only needs one method, token(), which returns a new token (or None at the end). Tokens are expected to have just four attributes. Armed with this knowledge, I set out to write a pair of compatibility classes for my existing lexer and token classes. I wanted to do this without altering the lexer/token API, so that if and when I finally write my own parser I won't have to remove legacy compatibility stuff. My compatibility classes are very small, just this:

class PLYCompatLexer(object):
    def __init__(self, text):
        self.text = text
        self.token_stream = Lexer(text).parse()

    def token(self):
        try:
            return PLYCompatToken(self.token_stream.next())
        except StopIteration:
            return None


class PLYCompatToken(object):
    def __init__(self, token):
        self.type = token.name
        self.value = token.value
        self.lineno = None
        self.lexpos = None

    def __repr__(self):
        return "<Token: %r %r>" % (self.type, self.value)


This is the entirety of the API that PLY needs. Now I can write my parser exactly as I would normally with PLY.
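To make the usage concrete, here's a rough sketch of hooking the compatibility lexer up to a PLY grammar; the token names and grammar rules are assumptions, and the key detail is that the lexer is passed directly to parse() (the wrapper has no input() method, so the text is handed to the lexer up front):

import ply.yacc as yacc

# PLY picks token names up from this module-level tuple; they must match the
# .name values produced by the hand-written lexer.
tokens = ("STRING", "INTEGER")

def p_value_string(p):
    "value : STRING"
    p[0] = p[1]

def p_value_integer(p):
    "value : INTEGER"
    p[0] = p[1]

def p_error(p):
    raise SyntaxError("Unexpected token: %r" % (p,))

parser = yacc.yacc()
result = parser.parse(lexer=PLYCompatLexer('"hello world"'))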

Sunday, November 22, 2009

A Bit of Benchmarking

PyPy recently posted some interesting benchmarks from the computer language shootout, and in my last post about Unladen Swallow I described a patch that would hopefully be landing soon. I decided it would be interesting to benchmark something with this patch. For this I used James Tauber's Mandelbulb application, at both 100x100 and 200x200. I tested CPython, Unladen Swallow Trunk, Unladen Swallow Trunk with the patch, and a recent PyPy trunk (compiled with the JIT). My results were as follows:

100x100:
CPython 2.6.4: 17s
Unladen Swallow Trunk: 16s
Unladen Swallow Trunk + Patch: 13s
PyPy Trunk: 10s

200x200:
CPython 2.6.4: 64s
Unladen Swallow Trunk: 52s
Unladen Swallow Trunk + Patch: 49s
PyPy Trunk: 46s

Interesting results. At 100x100 PyPy smokes everything else, and the patch shows a clear benefit for Unladen. However, at 200x200 both PyPy and the patch show diminishing returns. I'm not clear on why this is, but my guess is that something about the increased size changes the parameters in a way that makes the generated code less efficient.

It's important to note that Unladen Swallow has been far less focused on numeric benchmarks than PyPy, instead focusing on more web app concerns (like template languages). I plan to benchmark some of these as time goes on, particularly after PyPy merges their "faster-raise" branch, which I'm told improves PyPy's performance on Django's template language dramatically.

Saturday, November 21, 2009

Things College Taught me that the "Real World" Didn't

A while ago Eric Holscher blogged about things he didn't learn in college. I'm going to take a different spin on it, looking both at things I learned in school that I wouldn't have learned elsewhere (elsewhere henceforth meaning my job or open source programming), and at things I learned elsewhere instead of at college.

Things I learned in college:

Big O notation and algorithm analysis. This is the biggest one. I've had little cause to consider it in my open source or professional work, where stuff is either fast or slow and that's usually enough. Rigorous algorithm analysis doesn't come up all the time, but every once in a while it pops up, and it's handy.

C++. I imagine I eventually would have learned it on my own, but my impetus to learn it was that it's what my CS2 class used, so I started learning with the class and then dove in head first. Left to my own devices I may very well have stayed in Python/Javascript land.

Finite automata and pushdown automata. I actually did lexing and parsing with PLY before I ever started looking at these in class (see my blog posts from a year ago); however, this semester I've actually been learning about how these things are implemented (although sadly for class projects we've been using Lex/Yacc).


Things I learned in the real world:

Compilers. I've learned everything I know about compilers from reading papers out of my own interest and from hanging around communities like Unladen Swallow and PyPy (and even contributing a little).

Scalability. Interestingly this is a concept related to algorithm analysis/big O; however, it's something I've really learned from talking about this stuff with guys like Mike Malone and Joe Stump.

APIs, Documentation. These are the core of software development (in my opinion), and I've definitely learned these skills in the open source world. You don't know what a good API or good documentation is until it's been used by someone you've never met, it just works for them, and they can understand it perfectly. One of the few required advanced courses at my school is titled "Software Design and Documentation", and I'm deathly afraid it's going to waste my time with stuff like UML instead of focusing on how to write APIs that people want to use and documentation that people want to read.


So these are my short lists. I've tried to highlight items that cross the boundaries between what people traditionally expect are topics for school and topics for the real world. I'd be curious to hear what other people's experiences with topics like these have been.

Thursday, November 19, 2009

Another Pair of Unladen Swallow Optimizations

Today a patch of mine was committed to Unladen Swallow. In past weeks I've described some of the optimizations that have gone into Unladen Swallow; specifically, I looked at removing the allocation of an argument tuple for C functions. One of the "on the horizon" things I mentioned was extending this to functions with variable arity (that is, the number of arguments they take can change). This has now been implemented for functions that take a finite range of argument counts (that is, they don't take *args, they just have a few arguments with defaults). This support was used to optimize a number of builtin functions (dict.get, list.pop, and getattr, for example).

However, there were still a number of functions that weren't updated for this support. I initially started porting any functions I saw, but it wasn't a totally mechanical translation, so I decided to do a little profiling to better direct my efforts. I started by using the cProfile module to see which functions were called most frequently in Unladen Swallow's Django template benchmark. Imagine my surprise when I saw that unicode.encode was called over 300,000 times! A quick look at that function showed it was a perfect contender for this optimization: it was currently designated as METH_VARARGS, but in fact its argument count was a finite range. After about a dozen lines of code to change the argument parsing, I ran the benchmark again, comparing it against a control version of Unladen Swallow, and it showed a consistent 3-6% speedup on the Django benchmark. Not bad for 30 minutes of work.

Another optimization I want to look at, which hasn't landed yet, is one that optimizes various operations. Right now Unladen Swallow tracks data about the types seen in the interpreter loop, but for various operators this data isn't actually used. What this patch does is check at JIT compilation time whether an operator site is monomorphic (that is, only one pair of types has ever been seen there), and if it is, and it is one of a few pairings we have optimizations for (int + int, list[int], or float - float, for example), then optimized code is emitted. This optimized code checks that the types of both arguments are the expected ones; if they are, the optimized path is executed, otherwise the VM bails back to the interpreter (various literature has shown that a single compiled optimized path is better than compiling both the fast and slow paths). For simple algorithmic code this optimization can show huge improvements.
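As a purely conceptual illustration (in Python, not the actual LLVM-emitting C++ inside Unladen Swallow), the guarded fast path for a site observed to be monomorphic on (int, int) works roughly like this:

def guarded_add(lhs, rhs, interpreter_fallback):
    # Guard: both operands must still be the types recorded at compile time.
    if type(lhs) is int and type(rhs) is int:
        return lhs + rhs  # optimized path
    # Guard failed: bail back to the generic interpreter loop.
    return interpreter_fallback(lhs, rhs)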

The PyPy project recently blogged about the results of some benchmarks from the Computer Language Shootout run on PyPy, Unladen Swallow, and CPython. In these benchmarks Unladen Swallow showed that for highly algorithmic (read: mathy) code it could use some work; hopefully patches like this can improve the situation markedly. Once this patch lands I'm going to rerun these benchmarks to see how Unladen Swallow improves, and I'm also going to add in some of the more macro benchmarks Unladen Swallow uses to see how it compares with PyPy on those. Either way, seeing the tremendous improvements PyPy and Unladen Swallow have over CPython gives me great hope for the future.

Announcing django-admin-histograms

This is just a quick post because it's already past midnight here. Last week I released some potentially useful code that I extracted from the DjangoDose and typewar codebases. Basically, this code lets you get simple histogram reports for models in the admin, grouped by a date field. After I released it, David Cramer did some work to make the code slightly more flexible and to provide a generic templatetag for creating these histograms anywhere. The code can be found on GitHub, and if you're curious what it looks like there's a screenshot on the wiki. Enjoy.

Tuesday, November 17, 2009

Writing a Lexer

People who have been reading this blog since last year (good lord) may recall that once upon a time I did a short series of posts on lexing and parsing using PLY. Back then I was working on a language named Al. This past week or so I've started working on another personal project (not public yet) and I've once again had the need to lex things, but this time I wrote my lexer by hand, instead of using any sort of generator. This has been an exceptional learning experience, so I'd like to pass some of that on to you.

The first thing to note is that writing a lexer is a great place to practice TDD (test-driven development); I've rewritten various parts of my lexer five or more times, and I've needed my tests to keep me sane. Got your tests written? OK, it's time to dive right into our lexer.

I've structured my lexer as a single class that takes an input string and has a parse method which returns a generator that yields tokens (tokens are just a namedtuple with a name and a value field). The lexer has two important attributes: state, a string that says what state the lexer is in (this is used for tokens that are more than one character long), and current_val, a list containing the characters that will eventually become the value of the token currently being built.

The parse method iterates through the characters in the text; for each character, if the lexer has a state (self.state is not None) it calls getattr(self, self.state)(character), otherwise it calls self.generic(character). The various "state methods" are then responsible for mutating self.current_val and self.state and returning a token. So, for example, the string state looks like this:

def string(self, ch):
    if ch == '"':
        sym = Symbol("STRING", "".join(self.current_val))
        self.current_val = []
        self.state = None
        return sym
    elif ch == "\\":
        self.state = "string_escaped"
    else:
        self.current_val.append(ch)


If the character is a quote then we're closing our string, so we return the string Symbol and reset current_val and state. If the character is a backslash we switch into the string_escaped state, which knows to handle the next character as a literal and then go back to the string state. If the character is anything else we just append it to current_val; it will get handled when the string is closed.
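For context, here's a minimal sketch of the surrounding class and parse loop described above; the Symbol namedtuple and the generic() dispatch method are reconstructions from the post, not the actual project code:

from collections import namedtuple

Symbol = namedtuple("Symbol", ["name", "value"])

class Lexer(object):
    def __init__(self, text):
        self.text = text
        self.state = None
        self.current_val = []

    def parse(self):
        for ch in self.text:
            if self.state is not None:
                token = getattr(self, self.state)(ch)
            else:
                token = self.generic(ch)
            if token is not None:
                yield token

    def generic(self, ch):
        # Dispatch on the first character of a new token; only the string
        # case is shown here, other token types are elided.
        if ch == '"':
            self.state = "string"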

I've found this to be an exceptionally powerful approach, and it makes my end result code very readable. Hopefully I'll be able to reveal my project in the coming weeks, as I'm very excited about it; even if it's not ready I'll continue to share these lessons learned as I go.

Monday, November 16, 2009

My Next Blog

I've been using this Blogspot-powered blog for over a year now, and it's starting to get a little long in the tooth. I've been planning on moving to a new, shiny blog of my own in the coming months; specifically, my goal is to get 100% migrated over my winter break. To that end I've started building a list of features I want:

  • Able to migrate all my old posts. This isn't going to be a huge deal, but it has to happen as I have quite a few posts I don't want to lose.
  • Accepts reStructuredText. I'm sick of writing my posts in reST and then converting them into HTML for Blogspot.
  • Pretty code highlighting.
  • Disqus for comments. I don't want to have to deal with spam or anything else, let users post with Disqus and they can deal with all the trouble.
  • Looks decent. Design is a huge weak point for me, so most of my time is going to be dedicated to this I think.
  • Django powered!
That's my pony list thus far. There's a good bet I'll use django-mingus, since I've heard such good things about it, but for now I'm just dreaming of being able to write these posts in reST.

Sunday, November 15, 2009

Initial Review: Python Essential Reference

Disclosure: I received a free review copy of Python Essential Reference, Fourth Edition.

I've never really used reference material: I've always loved tutorials, howtos, and guides for learning things, but I've usually shunned reference material in favor of reading the source. Therefore, I didn't think I'd have a huge use for this book. However, so far (I've read about half of it) I've found it to be an exceptional resource, and I definitely plan on keeping it on my bookshelf.

The first third or so of the book is a reference on the syntax and other basic constructs of Python; it's probably not the part you'll be consulting very frequently if you're an experienced Python programmer. However, the end of this section is a bit on "Testing, Debugging, Profiling, and Tuning", which I can see myself flipping back to, as it extensively documents the doctest, unittest, pdb, cProfile, and dis modules.

The next third of the book is all about the Python library, including both the builtins and the standard library. This section is organized by functionality and I can definitely see myself using it. For example it has sections on "Python Runtime Services" (like atexit, gc, marshal, and weakref), "Data Structures, Algorithms, and Code Simplification" (bisect, collections, and heapq, for example), "String and Text Handling" (codecs, re, struct), and "Python Database Access" (PEP 249, sqlite3, and dbm). There's more, but this is as far as I've read. Reading each of these sections through like a novel has exposed me to things I wasn't aware of or don't use as frequently as I should, and I plan on using this book as a resource for exploring them. David Beazley has painstakingly documented the details of these modules, paying particular attention to the functions and classes you are likely to need most.

All in all I've found the Python Essential Reference to be a good book, especially for people who like reference documentation. Depending on how you use Python, this book can serve as an excellent eye-opener into other parts of the language and standard library, and for me I think that's where a ton of its value will come from: as a day-to-day Python user I don't need a reference for most of the language, but for the bits it's introducing me to, having it handy will be a leg up.

Saturday, November 14, 2009

Why jQuery shouldn't be in the admin

This summer, as part of the Google Summer of Code program, Zain Memon worked on improving the UI for Django's admin; specifically, he integrated jQuery for various interface improvements. I am opposed to including jQuery in Django's admin, and as far as I know I'm the only one. I should note that on a personal level I love jQuery; however, I don't think that means it should be included in Django proper. I'm going to try to explain why I think it's a bad idea and possibly even convince you.

The primary reason I'm opposed is that it lowers the pool of people who can contribute to developing Django's admin. I can hear the shouts from the audience: "jQuery makes Javascript easy, how can it LOWER the pool?" By using jQuery we prevent people who know Javascript, but not jQuery, from contributing to Django's admin. If we use more "raw" Javascript then anyone who knows jQuery should be able to contribute, as well as anyone who knows MooTools, or Dojo, or just vanilla Javascript. I'm sure there are some people who will say, "but it's possible to use jQuery without knowing Javascript"; I submit to you that this is a bad thing and certainly shouldn't be encouraged. We need look no further than Jacob Kaplan-Moss's talks on Django, where he speaks of his concern at job postings that look for Django experience with no mention of Python.

The other reason I'm opposed is that selecting jQuery for the admin gives the impression that Django has a blessed Javascript toolkit. I'm normally one to say, "if people make incorrect inferences that's their own damned problem"; however, in this case I think they would be 100% correct: Django would have blessed a Javascript toolkit. Once again I can hear the calls: "But it's in contrib, not Django core." Again I disagree: Django's contrib isn't like other projects' contrib directories, which are just a dumping ground for random user-contributed scripts and other half-working features. Django's contrib is every bit as official as the parts of Django that live elsewhere in the source tree. Jacob Kaplan-Moss has described what django.contrib is, and no part of that description involves it being less official, quite the opposite in fact.

For these reasons I believe Django's admin should avoid selecting a Javascript toolkit and instead maintain its own hand-rolled code. Though this places an increased burden on developers, I believe holding to these philosophies is more important than taking small development wins. People who say this stymies the admin's development should note that the admin's UI has changed only minimally over the past years, and only a small fraction of that can be attributed to difficulties in Javascript development.

Friday, November 13, 2009

Syntax Matters

Yesterday I wrote about why I wasn't very interested in Go. Two of my three major complaints were about Go's syntax, and based on the comments I got here and on Hacker News a lot of people didn't seem to mind the syntax, or at least didn't think it was worth talking about. For me the opposite is true: syntax is among the most important things about a programming language.

I'd estimate that I spend about 60% of my day thinking about and reading code and 40% actually writing it. That means code needs to be easy to read: no stray punctuation or anything else that distracts me from what I want to know about my code, namely what it does when I run it. It means any code I'm looking at had better be properly indented. It also means that I find braces and semicolons to be noise, stuff that just distracts me from what I'm reading the code to find out. Therefore, code ought to use the existing, nonintrusive structure of indentation instead of obligating me to add more noise.

"Programs must be written for people to read, and only incidentally for machines to execute." This is a quote from Structure and Interpretation of Computer Programs, by Harold Abelson and Gerald Sussman. It has always struck me as odd that the people who wrote that chose to use Scheme for their text book. In my view Lisp and Scheme are the height of writing for a machine to execute. I think David Heinemeier Hansson got it right when he said, "code should be beautiful", I spent 5+ hours a day reading it, I damned well better want to look at it.

Thursday, November 12, 2009

Why I'm not very excited about Go

When I first heard Rob Pike and Ken Thompson had created a new programming language, my first instinct was, "I'm sure it'll be cool, these are brilliant guys!" Unfortunately that is not the case, mostly because I think they made some bad design choices, and because the "cool new things" aren't actually that new, or that cool.

The first major mistake was using a C derived syntax: braces and semicolons. I know some people don't like significant whitespace, but if you aren't properly indenting your code it's already beyond the point of hopelessness. The parser ought to use the existing structure instead of adding more punctuation. Readability is one of the most important attributes of code, and this syntax detracts from that.

The next mistake is having a separate declaration-and-assignment operator. I understand that the point of this operator is to reduce the repetition of typing out the type's name both when declaring the variable and when initializing its value, and that it avoids the variable-name typos that would be possible if every assignment were an implicit declaration of any variable not already declared. But I can see myself making many more typos by forgetting to use the := operator, and in cases where I do make a typo in a variable name it would inevitably be caught by the compiler when I attempted to actually use it (the fact that it is an attempted declaration means the variable won't have been declared elsewhere).

The final mistake was not providing generics. C++'s templates are one of the things that make the language head and shoulders more useful to me than C for the tasks I need to perform; generics let you provide reusable data structures. While Go seems to have something akin to generics in its map data structure, that mechanism disappointingly doesn't appear to be exposed to user code in any way. One of the things I've found makes me most productive in Python is that any time I need to perform a task I can simply pick the data structure that does what I want, and it is efficiently implemented. Without generics, I don't see a way for a statically typed language to offer this same breadth of data structures without each of them being a special case.

In terms of features I believe are overhyped, the most important one is the "goroutine". As best I can tell these are an implementation of fibers. Constructs like concurrency should not be granted their own syntax, especially when they can be cleanly implemented using the other constructs of a language; look at the C library libtask as an example.

Further, the handling of interfaces, though interesting, appears to be an implementation of C++0x's proposed concepts (which won't be in the final standard). I view this feature as something that is most useful in the context of generics, which Go doesn't have. The reason for this is to be able to make compile time assertions about the types that are being templated over. Doing anything else is more clearly implemented as abstract inheritance.

I'm not writing off Go permanently, but I'm not enthused about it; someone will probably have to let me know when I need to look at it again, as I won't be following along. Enjoy your internet travels.

Wednesday, November 11, 2009

When Django Fails? (A response)

I saw an article on reddit (or was it hacker news?) that asked the question: what happens when newbies make typos while following the Rails tutorial, and how good a job does Rails do at giving useful error messages? I decided it would be interesting to apply the same question to Django and see what the results are. I didn't have time to review the entire Django tutorial, so instead I'm going to make the same mistakes the author of that article did and see what happens; I've only done the first few, where the analogs in Django were clear.

Mistake #1: Point a URL at a non-existent view:

I pointed a URL at the view "django_fails.views.homme" when it should have been "home". Let's see what the error is:

ViewDoesNotExist at /
Tried homme in module django_fails.views. Error was: 'module' object has no attribute 'homme'


So the exception name is definitely a good start, and combined with the error text I think it's pretty clear that the view doesn't exist.

Mistake #2: misspell url in the mapping file

Instead of doing url("^$" ...) I did urll:

NameError at /
name 'urll' is not defined


The error is a normal Python exception, which for a Python programmer is probably decently helpful; the icing on the cake is that if you look at the traceback it points to the exact line, in user code, that has the typo, which is exactly what you need.

Mistake #3: Linking to non-existent pages

I created a template and tried to use the {% url %} tag on a nonexistent view.

TemplateSyntaxError at /
Caught an exception while rendering: Reverse for 'homme' with arguments '()' and keyword arguments '{}' not found.


It points me at the exact line of the template that's giving me the error and says that the reverse wasn't found. It seems pretty clear to me, but it's been a while since I was new, so perhaps a new user's perspective on an error like this would be valuable.


It seems clear to me that Django does a pretty good job of providing useful exceptions; in particular, the tracebacks on template-specific exceptions can show you where in your templates the errors are. One issue I'll note, which I've experienced in my own work, is that when an exception comes from within a templatetag it's hard to get the Python-level traceback, which is important when you're debugging your own templatetags. However, a ticket has been filed for that in Django's trac.

Tuesday, November 10, 2009

The State of MultiDB (in Django)

As you, the reader, may know, this summer I worked for the Django Software Foundation via the Google Summer of Code program. My task was to implement multiple database support for Django. Assisting me in this task were my mentors Russell Keith-Magee and Nicolas Lara (you may recognize them as the people responsible for aggregates in Django). By the standards of the Google Summer of Code program my work was considered a success; however, it's not yet merged into Django's trunk, so I'm going to outline what happened and what needs to happen before this work is considered complete.

Most of the major things happened: settings were changed from a series of DATABASE_* options to a DATABASES setting that's keyed by DB aliases and whose values are dictionaries containing the usual DATABASE_* options; QuerySets grew a using() method which takes a DB alias and says which DB the QuerySet should be evaluated against; save() and delete() grew similar using keyword arguments; a using option was added to the inner Meta class for models; and transaction support was expanded to cover multiple databases, as was the testing framework. In terms of internals, almost every DB-related function grew explicit passing of the connection or DB alias, rather than assuming the global connection object as it used to. As I blogged previously, ManyToMany relations were completely refactored. If it sounds like an awful lot got done, that's because it did; I knew going in that multi-db was a big project and it might not all happen within the confines of the summer.
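As a quick illustration of the shape of that API (the aliases, engine paths, and the Vote model below are made up, and the exact option names are whatever ultimately lands in 1.2):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "primary",
    },
    "archive": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "archive.db",
    },
}

# Evaluate a QuerySet against a specific database, or save an instance to one
# explicitly (given some hypothetical Vote model and instance):
old_votes = Vote.objects.using("archive").filter(year__lt=2008)
some_vote.save(using="default")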

So if all of that stuff got done, what's left? Right before the end of the GSOC time frame, Russ and I decided that a fairly radical rearchitecting of the Query class (the internal data structure that both tracks the state of an operation and generates its SQL) was needed. Specifically, the issue is that database backends come in two varieties. One is something like a backend for App Engine or CouchDB. These have a totally different design than SQL: they need different data structures to track the relevant information, and they need different code generation. The second type of backend is one for a SQL database. By contrast these all share the same philosophy and basic structure; in most cases their implementation just involves changing the names of database column types or the way LIMIT/OFFSET is handled. The problem is that Django treated all the backends equally. For SQL backends this meant each got its own Query class, even though it only needed to override half of the Query functionality, the SQL generation half, since the data structure half is identical because the underlying model is the same. What this means is that if you call using() on a QuerySet halfway through its construction, you would need to change the class of the Query representation if you switched to a database with a different backend. This is obviously a poor architecture, since the Query class doesn't need to change, just the bit at the end that actually constructs the SQL. To solve this problem Russ and I decided that the Query class should be split into two parts: a Query class that stores the details of the current query, and a SQLCompiler that generates the SQL at the end of the process. This refactoring is the primary thing holding up the merger of my multi-db work.

This work is largely done; however, the API needs to be finalized and the Oracle backend ported to the new system. In terms of other work, GeoDjango needs to be shown to still work (or be fixed). In my opinion everything else on the TODO list (available here, please don't deface) is optional for multi-db to be merge-ready, with the exception of more example documentation.

There are already people using the multi-db branch (some even in production), so I'm confident about its stability. For the next six weeks or so (until the 1.2 feature deadline), my biggest priority is going to be getting this branch into a merge-ready state. If this is something that interests you, please feel free to get in contact with me (although if you don't come bearing a patch I might tell you that I'll see you in six weeks ;)). If you happen to find bugs, they can be filed on the Django trac with version "soc2009/multidb". As always, contributors are welcome; you can find the absolute latest work on my GitHub and a relatively stable version in my SVN branch (which doesn't contain the latest, in-progress refactoring). Have fun.

Monday, November 9, 2009

Software that deserves a thank you

Today's post was originally supposed to be about my work on multiple database support for Django, but I'm exceptionally tired, so that's been bumped to tomorrow, sorry. Instead I'm going to use today's post to give a thank you to some software I use that doesn't get enough press and that surely deserves thanks. I'm not going to list any libraries or programming languages, just the desktop software I run day to day:

  • Chromium - I've been super pleased since switching to Chromium as my day-to-day browser; it's very fast, but I wish it had plugin support and a debugging tool like Firebug.
  • Firefox - My development browser, the cornucopia of plugins makes my life easier, everything from Firebug to DownloadThemAll.
  • XChat - It's my IRC client; I probably log about a dozen hours on IRC per day, maybe more. The biggest wart I have with it is that ctrl+l clears the screen, and that's not bad for software I use 70+ hours a week.
  • Gedit - It's my text editor. Syntax highlighting, file browser, proper indentation support, I'm not sure there's a whole lot more to ask for.
  • Skype - Between using it to record DjangoDose to catching up with friends it's an invaluable asset.
  • Ubuntu - My operating system, by extension the Linux kernel, Gnome desktop and everything else that goes into it should all take a bow.


And that's pretty much it for desktop software. I took a peek back at my list from last year and it's almost identical, so I guess all of this software is doing something right. I also want to take a minute to thank various web services I use:

  • Pandora - I blow through my monthly allowance of 40 hours in one week. I think that says something.
  • last.fm - I love the fact that it tracks all of the stats about the music I listen to. I really wish it had a way to combine the "music neighborhood" and friends features to find people in my social graph with similar taste in music.
  • Github - It really is the bee's knees of code hosting software. There's very little to say other than the number of repositories I have should stand as a testament to its quality.
  • Blogger - I use it to host this blog, and while I'm actively working towards moving away from it, for now it stays and it deserves a thank you.
  • Invoice Machine - It takes most of the tedium out of dealing with invoicing, I'm grateful for that.

And that's my list. I promise that tomorrow I'll have my post on multiple database support. See you then.

Sunday, November 8, 2009

Another Unladen Swallow Optimization

This past week I described a few optimizations the Unladen Swallow team has made in order to speed up CPython. In particular, one of the optimizations I described was emitting direct calls to C functions that take either zero or one argument. This improves the speed of Python when calling functions like len() or repr(), which take only one argument. However, there are plenty of builtin functions that take a fixed number of arguments greater than one. This is the source of the latest optimization.

As I discussed previously, there were two relevant flags, METH_O and METH_NOARGS, which describe functions that take either one or zero arguments. However, they don't cover a wide gamut of functions. Therefore the first stage of this optimization was to replace these two flags with METH_FIXED, which indicates that the function takes a fixed number of arguments, and to add an additional slot to the struct that describes C functions, storing the arity of the function (the number of arguments it takes). Therefore something like this:

{"id", builtin_id, METH_O, id_doc}


This, which is what the struct for a C function looks like, would be replaced with:

{"id", builtin_id, METH_FIXED, id_doc, 1}


This allows Unladen Swallow to emit direct calls to functions that take more than one argument, specifically up to three arguments. It results in functions like hasattr() and setattr() being better optimized, and ultimately gives a 7% speed increase on Unladen Swallow's Django benchmark. Here the speed gains largely come from avoiding allocating a tuple for the arguments, as Python used to have to do since these functions were defined as METH_VARARGS (which means a function receives its arguments as a tuple), as well as avoiding parsing that tuple.

This change isn't as powerful as it could be; specifically, it requires that the function always take the same number of arguments. That prevents optimizing calls to getattr(), for example, which can take either two or three arguments. The optimization doesn't hold there because C has no way of expressing default arguments for a function: the CPython runtime must pass all of the needed arguments, which means C functions would need a way to encode their defaults that CPython can understand. One of the proposed solutions is to have functions provide the minimum number of arguments they take, and then CPython could pad the provided arguments with NULLs to reach the correct number (interestingly, the C standard allows more arguments to be passed to a function than it takes). This type of optimization would speed up calls to things like dict.get() and getattr().

As you can see, the speed of a Python application can be fairly sensitive to how various internals are handled; in this case the speed increase comes exclusively from eliminating a tuple allocation and some extra logic on certain function calls. If you're interested in seeing the full changeset, it's available on the internet.

Saturday, November 7, 2009

My Workflow

About a year ago I blogged about how I didn't like easy_install, and I alluded to the fact that I didn't really like any language-specific package managers. I'm happy to say I've changed my tune quite drastically in the past two months. Since I started working with Eldarion I've dived head first into the pip and virtualenv system, and I'm happy to say it works brilliantly. The nature of the work is that we have lots of different projects going at once, often using wildly different versions of packages in all sorts of incompatible ways. The only way to stay sane is to have an isolated environment for each of them. Enter virtualenv, stage left.

If you work with multiple Python projects that use different versions of things virtualenv is indispensable. It allows you to have totally isolated execution environments for different projects. I'm also using Doug Hellmann's virtualenvwrapper, which wraps up a few virtualenv commands and gives you some hooks you can use. When I start a new project it looks something like this:

$ git clone some_repo
$ cd some_repo/
$ mkvirtualenv project_name


The first two steps are probably self-explanatory. What mkvirtualenv does is create a new virtual environment and activate it. I also have a hook set up with virtualenvwrapper to install the latest development version of pip, as well as ipython and ipdb, into each new environment. pip is a tremendous asset to this process: it has a requirements file format that makes it very easy to keep track of all the dependencies for a given project, and it lets you install packages straight out of a version control system, which is tremendously useful.
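For illustration, a requirements file might look something like this (the package versions and repository URL are made up), and installing everything into the active virtualenv is a single "pip install -r requirements.txt":

Django==1.1.1
South==0.6.2
# pip requirements files can also point straight at version control,
# including editable checkouts:
-e git+git://github.com/example/some-internal-app.git#egg=some-internal-app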

When I want to work on an existing project all I need to do is:

$ workon project_name


This activates the environment for that project. Now the PATH prioritizes stuff installed into that virtualenv, and my Python path only contains stuff installed into it. I can't imagine what my job would be like without these tools; if I had to manage the dependencies for each project manually I'd probably go crazy within a week. Another advantage is that it makes it easy to test things against multiple versions of a library: I can test whether something works on Django 1.0 and 1.1 just by switching which environment I'm in.

As promised tomorrow I'll be writing about an optimization that just landed in Unladen Swallow, and I'm keeping Monday's post a secret. I'm not sure what Tuesday's post will be, but I think I'll be writing something Django related, either about my new templatetag library, or the state of my multiple database work. See you around the internet.

Friday, November 6, 2009

Towards a Better Template Tag Definition Syntax

Eric Holscher has blogged a few times this month about various template tag definition syntax ideas. In particular, he's looked at a system based on Surlex (which is essentially an alternate syntax for certain parts of regular expressions) and a system based on keywords. I highly recommend giving his posts a read, as they explain the ideas he's looked at in far better detail than I could. However, I wasn't particularly satisfied with either of these solutions. I love Django's use of regular expressions for URL resolving, but for whatever reason I don't like the look of using regular expressions (or an alternate syntax like Surlex) for template tag parsing. Instead I've been thinking about an object-based parsing syntax, similar to PyParsing.

This is an idea I've been thinking about for several months now, but Eric's posts finally gave me the kick in the pants I needed to do the work. Therefore, I'm pleased to announce that I've released django-kickass-templatetags. Yes, I'm looking for a better name; it's already been pointed out to me that a name like that won't fly in the US government or most corporate environments. This library is essentially me putting to code everything I've been thinking about, but enough talking, let's take a look at the template tag definition syntax:

@tag(register, [Constant("for"), Variable(), Optional([Constant("as"), Name()])])
def example_tag(context, val, asvar=None):
    # do something with the resolved val, optionally storing it as asvar
    pass


As you can see it's a purely object-based syntax, with different classes for the different components of a template tag. For example, this would parse something like:

{% example_tag for variable %}
{% example_tag for variable as new_var %}


It's probably clear that this is significantly less code than the manual parsing, manual node construction, and manual resolving of variables that you would have needed to do with a raw templatetag definition. The function you write gets the resolved values for each of its parameters, and at that point it's basically the same as Node.render: it is expected to either return a string to be inserted into the template or alter the context. I'm looking forward to never writing manual template parsing again. However, there are still a few scenarios it doesn't handle: it won't handle something like the logic in the {% if %} tag, and it won't handle tags that need a matching {% end %}-style closing tag. I feel like these should both be solvable problems, but since it's a bolt-on addition to the existing tools it ultimately doesn't have to cover every use case, just the common ones (when's the last time you wrote your own implementation of the {% if %} or {% for %} tags?).
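For comparison, here's roughly what the same tag looks like with the traditional manual approach (a hedged sketch, not taken from any particular project): parse the token yourself, build a Node subclass, and resolve the variable by hand.

from django import template

register = template.Library()

class ExampleNode(template.Node):
    def __init__(self, val, asvar=None):
        self.val = template.Variable(val)
        self.asvar = asvar

    def render(self, context):
        val = self.val.resolve(context)
        if self.asvar:
            # {% example_tag for variable as new_var %} form: set the
            # context variable and render nothing.
            context[self.asvar] = val
            return ""
        return unicode(val)

@register.tag
def example_tag(parser, token):
    # Manually validate {% example_tag for variable [as name] %}.
    bits = token.split_contents()
    if len(bits) not in (3, 5) or bits[1] != "for":
        raise template.TemplateSyntaxError("malformed example_tag")
    if len(bits) == 5 and bits[3] != "as":
        raise template.TemplateSyntaxError("malformed example_tag")
    asvar = bits[4] if len(bits) == 5 else None
    return ExampleNode(bits[2], asvar)

All of that boilerplate is what the single @tag call above replaces.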

It's my hope that something like this becomes popular, because a) developers will be happier, and b) moving towards a community standard is the first step towards including a solution out of the box. The pain and boilerplate of defining templatetags has long been a complaint about Django's template language (especially because the language itself limits your ability to perform any sort of advanced logic), therefore making it as painless as possible ultimately helps make the case for the philosophy of the language itself (which I very much agree with).

In keeping with my promise I'm giving an overview of what my next post will be, and this time I'm giving a full 3-day forecast :). Tomorrow I'm going to blog about pip, virtualenv, and my development workflow. Sunday's post will cover a new optimization that just landed in Unladen Swallow. And finally Monday's post will contain a strange metaphor, and I'm not saying any more :P. Go check out the code and enjoy.

Edit: Since this article was published the name of the library was changed to be: django-templatetag-sugar. I've updated all the links in this post.

Thursday, November 5, 2009

The Pycon Program Committee and my PyCon Talk

Last year at PyCon I presented at a conference for the first time in my life: I moderated a panel on ORMs, and I enjoyed it a ton (and based on the feedback I've gotten at least a few people enjoyed attending it). Above and beyond that, the two PyCons I've attended have both been amazing conferences: tons of awesome talks, great people to hang out with, and an awesome environment for maximizing both. For the last two years I've hung around the PyCon organizers mailing list, since the conference was in Chicago and I lived there, but this year I really wanted to increase my contributions to such a great conference. Therefore, I joined the PyCon program committee. This committee is responsible for reviewing all talk submissions and selecting the talks that will ultimately appear at PyCon.

This year the PyCon program committee had a really gargantuan task. There were more talks submitted than ever before, more than 170 of them, for only 90 or so slots. Unfortunately this meant that we had to reject some really good talks, which always sucks. There's been a fair bit of discussion about the process this year and what can be done to improve it. As a reviewer, the one thing I wish I'd known going in was that the votes left on talks were just a first round and ultimately didn't count for a huge amount. Had I known this I would have been less tepid in giving positive reviews to talks which merely looked interesting.

Another hot topic in the aftermath is whether or not the speaker's identity should factor into a reviewer's decision. My position is that it should, when the speaker has a reputation, be it good or bad. If I know a speaker is awesome I'm way more likely to give them the +1; likewise, if I see a speaker has a history of poor talks I'm more likely to give them a -1. That being said, I don't think new or slightly inexperienced speakers should be penalized for that; I was a brand new speaker last time and I'm grateful I was given the chance to present.

To give an example of this one of the talks I'm really looking forward to is Mark Ramm's, "To relate or not to relate, that is the question". Mark and I spoke about this topic for quite a while at PyOhio, and every talk from Mark I've ever seen has been excellent. Therefore I was more than willing to +1 it. However, had I not known the speaker it would still have been a good proposal, and an interesting topic, I just would not have gotten immediately excited about going to see the talk.

As an attendee one of the things I've always found is that speakers who are very passionate about their topics almost always give talks I really enjoy. Thinking back to my first PyCon Maciej Fijalkowski managed to get me excited and interested in PyPy in just 30 minutes, because he was so passionate in speaking about the topic.

All that being said I wanted to share a short list of the talks I'm excited about this year, before I dive into what my own talk will be about:
  • Optimizations And Micro-Optimizations In CPython
  • Unladen Swallow: fewer coconuts, faster Python
  • Managing the world's oldest Django project
  • Understanding the Python GIL
  • The speed of PyPy
  • Teaching compilers with python
  • To relate or not to relate, that is the question
  • Modern version control: Mercurial internals
  • Eventlet: Asynchronous I/O with a synchronous interface
  • Hg and Git : Can't we all just get along?
  • Mastering Team Play: Four powerful examples of composing Python tools


It's a bit of a long list, but compared to the size of the list of accepted talks I'm sure there are quite a few gems I've missed.

The talk I'm going to be giving this year is about the real time web, also known as HTTP push, Comet, or reverse Ajax. All of those are basically synonyms for the server being able to push data to the browser, rather than having the browser constantly poll the server for data. Specifically I'm going to be looking at my experience building three different things, LeafyChat, DjangoDose's DjangoCon stream, and Hurricane.

Leafychat is an IRC client built for the DjangoDash by myself, Leah Culver, and Chris Wanstrath. The DjangoDose DjangoCon stream was a live stream of all the Twitter items about DjangoCon that Eric Florenzano and I built in the week leading up to DjangoCon. Finally, Hurricane is the library Eric Florenzano and I have been working on in order to abstract the lessons learned from our experience building "real time" applications in Python.

In the talk I'm going to try to zero in on what we did for each of these projects, what worked, what didn't, and what I'm taking away from the experience. Finally, Eric Florenzano and I are working to put together a new updated, better version of the DjangoCon stream for PyCon. I'm going to discuss what we do with that project, and why we do it that way in light of the lessons of previous projects.

I'm hoping my talk, and all the others, will be awesome. One thing's for sure, I'm already looking forward to PyCon 2010. Tomorrow I'm going to be writing about my thoughts on a more ideal template tag definition syntax for Django, and hopefully sharing some code if I have time to start working on it. See you then (and in Atlanta ;))!

Wednesday, November 4, 2009

Django's ManyToMany Refactoring

If you follow Django's development, or caught next week's DjangoDose Tracking Trunk episode (what? that's not how time flows you say? too bad), you've seen the recent ManyToManyField refactoring that Russell Keith-Magee committed. This refactoring was one of the results of my work as a Google Summer of Code student this summer. The aim of that work was to bring multiple database support to Django's ORM; however, along the way I ended up refactoring the way ManyToManyFields were handled, and those exact changes are the subject of tonight's post.

If you've looked at django.db.models.fields.related you may have come away asking how code that messy could possibly underlie Django's amazing API for handling related objects; indeed the mess is so bad that there's a comment which says:

# HACK


which applies to an entire class. However, one of the real travesties of this module was that it contained a large swath of raw SQL in the manager for ManyToMany relations; for example, the clear() method's implementation looked like:

cursor = connection.cursor()
cursor.execute("DELETE FROM %s WHERE %s = %%s" % \
    (self.join_table, source_col_name),
    [self._pk_val])
transaction.commit_unless_managed()


As you can see this hits the trifecta: raw SQL, manual transaction handling, and the use of a global connection object. From my perspective the last of these was the biggest issue. One of the tasks in my multiple database branch was to remove all uses of the global connection object, and since this code uses it, it was a major target for refactoring. However, I really didn't want to rewrite any of the connection logic I'd already implemented in QuerySets. This desire to avoid any new code duplication, coupled with a desire to remove the existing duplication (and flat out ugliness), led me to the simple solution: use the existing machinery.

Since Django 1.0 developers have been able to use a full-on model for the intermediary table of a ManyToMany relation, thanks to the work of Eric Florenzano and Russell Keith-Magee. However, that support was only used when the user explicitly provided a through model. This of course leads to a lot of methods that basically have two implementations: one for the case where a through model is provided, and one for the normal case -- which is yet another case of code bloat that I was now looking to eliminate. After reviewing these items my conclusion was that the best course was to use the provided intermediary model if it was there, and otherwise create a full-fledged model with the same fields (and everything else) as the table that would normally be specially created for the ManyToManyField.
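To make the idea concrete, here's a rough sketch of generating an intermediary model dynamically with type(); the helper name and field layout are hypothetical, and the real code (in django.db.models.fields.related) also handles db_table naming rules, unique constraints, self-referential relations, and so on.

from django.db import models

def make_intermediary_model(from_model, to_model, db_table):
    # Build a Meta class so the dynamic model lands in the right app and table.
    meta = type("Meta", (object,), {
        "db_table": db_table,
        "app_label": from_model._meta.app_label,
    })
    name = "%s_%s" % (from_model._meta.object_name, to_model._meta.object_name)
    # type() with three arguments creates the class, and Django's ModelBase
    # metaclass does the rest, exactly as if we'd written the model by hand.
    return type(name, (models.Model,), {
        "__module__": from_model.__module__,
        "Meta": meta,
        from_model._meta.object_name.lower(): models.ForeignKey(from_model),
        to_model._meta.object_name.lower(): models.ForeignKey(to_model),
    })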

The end result was dynamic class generation for the intermediary model, and simple QuerySet methods for the methods on the Manager, for example the clear() method I showed earlier now looks like this:

self.through._default_manager.filter(**{
    source_field_name: self._pk_val
}).delete()


Short, simple, and totally readable to anyone with familiarity with Python and Django. In addition this move allowed Russell to fix another ticket with just two lines of code. All in all this switch made for cleaner, smaller code and fewer bugs.

Tomorrow I'm going to be writing about both the talk I'm going to be giving at PyCon, as well as my experience as a member of the PyCon program committee. See you then.

Tuesday, November 3, 2009

Diving into Unladen Swallow's Optimizations

Yesterday I described the general architecture of Unladen Swallow, and I said that just by switching to a JIT compiler and removing the interpretation overhead Unladen Swallow was able to get a performance gain. However, that gain is nowhere near what the engineers at Google are hoping to accomplish, and as such they've been working on building various optimizations into their JIT. Here I'm going to describe two particularly interesting ones they implemented during the 3rd quarter (they're doing quarterly releases).

Before diving into the optimizations themselves I should note there's one piece of the Unladen Swallow architecture I didn't discuss in yesterday's post. The nature of dynamic languages is that a given piece of code can do nearly anything depending on the types of the variables present; in practice, however, usually only a few types are actually seen. It is therefore necessary to collect information about the types seen in practice in order to perform optimizations. What Unladen Swallow has done is add data collection to the interpreter while it is executing bytecode. For example, the BINARY_ADD opcode records the types of both of its operands, the CALL_FUNCTION opcode records the function it is calling, and the UNPACK_SEQUENCE opcode records the type of the sequence it's unpacking. This data is then used when the function is compiled to generate optimal code for the most likely scenarios.
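As a rough illustration (in Python rather than the actual C++ inside the interpreter, and with made-up names), the feedback recording amounts to something like:

feedback = {}  # (code object, opcode index) -> list of observations

def record_binary_add(code, index, left, right):
    # BINARY_ADD remembers the operand types it has actually seen, so the
    # compiler can later emit a fast path for, say, int + int.
    feedback.setdefault((code, index), []).append((type(left), type(right)))

def record_call_function(code, index, func):
    # CALL_FUNCTION remembers which function object was called at this site.
    feedback.setdefault((code, index), []).append(func)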

The first optimization I'm going to look at is one for the CALL_FUNCTION opcode. Python has a number of flags that functions defined in C can have; the two relevant to this optimization are METH_NOARGS and METH_O. These flags indicate that the function (or method) in question takes either zero or one argument, respectively (this is excluding the self argument on methods). Normally when Python calls a function it builds up a tuple of the arguments, and a dictionary for keyword arguments. For functions defined in Python, CPython lines up the arguments with those the function takes and then sets them as local variables for the new function. C functions are given the tuple and dictionary directly and are responsible for parsing them themselves. By contrast, functions with METH_NOARGS or METH_O receive their arguments (or nothing in the case of METH_NOARGS) directly.

Because calling METH_NOARGS and METH_O functions is so much easier than the general case (which involves several allocations and complex logic), when possible it is best to special-case them in the generated assembly. Therefore, when compiling a CALL_FUNCTION opcode, if the recorded data shows that only one function has ever been called at that site (imagine a call to len; it is going to be the same len function every time), and that function is METH_NOARGS or METH_O, then instead of generating a call to the usual function call machinery Unladen Swallow emits a check to make sure the function is actually the expected one, and if that guard passes it emits a call directly to the function with the correct arguments. If the guard fails, Unladen Swallow jumps back to the regular interpreter, leaving the optimized assembly. The reason for this is that the generated assembly can be more efficient when it only has to consider the single expected case, as opposed to dealing with a large series of if/else statements that catalogue every fast path along with the corresponding general case. Ultimately, this results in more efficient code for calls to functions like len(), which are basically never redefined.
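In Pythonesque pseudocode the compiled call site looks roughly like this (expected_func, bail_to_interpreter, and the METH_O calling convention shown here are simplifications):

def compiled_call_site(actual_func, arg):
    if actual_func is not expected_func:
        # Guard failed: leave the optimized assembly and fall back to the
        # regular interpreter and its general call machinery.
        return bail_to_interpreter()
    # Fast path: pass the single argument directly, skipping the argument
    # tuple and keyword dictionary entirely.
    return expected_func(arg)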

The next optimization we're going to look at is one for the LOAD_GLOBAL opcode. LOAD_GLOBAL is used for getting the value of a global variable, such as a builtin function, an imported class, or a global variable in the same module. In the interpreter the code for this opcode looks something like:

name = OPARG()
try:
    value = globals[name]
except KeyError:
    try:
        value = builtins[name]
    except KeyError:
        raise_exception(KeyError, name)
PUSH(value)

As you can see, in the case of a builtin object (something like len, str, or dict) there are two dictionary lookups. While the Python dictionary is an exceptionally optimized data structure, it still isn't fast compared to a lookup of a local value (which is a single lookup in a C array). Therefore the goal of this optimization is to reduce the number of dictionary lookups needed to find the value for a global or builtin.

The way this was done was for code objects (the data structures that hold the opcodes and various other internal details for functions) to register themselves with the globals and builtins dictionaries. Those dictionaries can then notify the registered code objects (similar to Django signals) whenever they are modified. The result is that the generated assembly for a LOAD_GLOBAL can perform the dictionary lookup once at compilation time, and the resulting assembly stays valid until the globals or builtins dictionary notifies the code object that it has been modified, at which point the assembly is invalidated. In practice this is very efficient, because globals and builtins are very rarely modified.
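Here's a rough Python sketch of the idea (the real version is implemented in C inside the dict and code objects, and watches the builtins dictionary as well):

class WatchedDict(dict):
    def __init__(self, *args, **kwargs):
        super(WatchedDict, self).__init__(*args, **kwargs)
        self.listeners = []

    def __setitem__(self, key, value):
        super(WatchedDict, self).__setitem__(key, value)
        # Tell every registered code object its cached lookups are now stale.
        for listener in self.listeners:
            listener.invalidate()

class CompiledLoadGlobal(object):
    def __init__(self, globals_dict, name):
        self.globals_dict = globals_dict
        self.name = name
        self.valid = False
        self.cached = None
        globals_dict.listeners.append(self)

    def invalidate(self):
        self.valid = False

    def load(self):
        if not self.valid:
            # The dictionary lookup happens once; the cached value is then
            # reused until the dictionary reports a modification.
            self.cached = self.globals_dict[self.name]
            self.valid = True
        return self.cached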

Hopefully you've gotten a sense of the type of work that the people behind Unladen Swallow are doing. If you're interested in reading more on this type of work I'd highly recommend taking a look at the literature listed on the Unladen Swallow wiki; as they note, there is no attempt to do any original research, since all the work being done is simply the application of existing, proven techniques to the CPython interpreter.

For the rest of this month I'm going to try to give a preview of the next day's post with each post, that way I can start thinking about it well in advance. Tomorrow I'm going to shift gears a little bit and write about the ManyToManyField refactoring I did over the summer and which was just committed to Django.

Monday, November 2, 2009

Introduction to Unladen Swallow

Unless you've been living under a rock for the past year (or have zero interest in either Python or dynamic languages, in which case why are you here?) you've probably heard of Unladen Swallow. Unladen Swallow is a Google-funded branch of the CPython interpreter, with a goal of making CPython significantly faster while retaining both API and ABI compatibility. In this post I'm going to try to explain what it is Unladen Swallow is doing to bring a new burst of speed to the Python world.

In terms of virtual machines there are a few levels of complexity, which roughly correspond to their speed. The simplest type of interpreter is an AST evaluator; these are more or less the lowest of the low on the speed totem pole, and up until YARV was merged into the main Ruby interpreter, MRI (Matz Ruby Interpreter) was this type of virtual machine. The next level of VM is a bytecode interpreter, meaning that the language is compiled to an intermediary format (bytecode) which is then executed. Strictly speaking this is an exceptionally broad category which encompasses most virtual machines today; however, for the purposes of this article I'm going to exclude any VMs with a just-in-time compiler from this category (more on them later). The current CPython VM is this type of interpreter. The most complex (and fastest) type of virtual machine is one with a just-in-time (JIT) compiler, meaning that the bytecode the virtual machine interprets is also dynamically compiled into assembly and executed. This type of VM includes modern JavaScript engines such as V8, TraceMonkey, and SquirrelFish, as well as other VMs like the HotSpot Java virtual machine.

Now that we know where CPython sits, and what the top of the totem pole looks like, it's probably clear what Unladen Swallow is looking to accomplish. However, there is a bit of prior art here that's worth taking a look at. There is actually already a JIT for CPython, named Psyco. Psyco is pretty commonly used to speed up numerical code, as that's what it's best at, but it can speed up most of the Python language. However, Psyco is extremely difficult to maintain and update. It only recently gained support for modern Python language features like generators, and it still only supports x86 CPUs. For these reasons the developers at Google chose to build their own JIT rather than work to improve the existing solution (they also chose not to use one of the alternative Python VMs; I'll be discussing those in another post).

I just said that Unladen Swallow looked to build their own JIT, but that's not entirely true. The developers have chosen not to develop their own JIT from scratch (meaning their own assembly generator, register allocator, optimizer, and everything else that goes along with a JIT); they have instead chosen to utilize the LLVM (Low Level Virtual Machine) JIT for all the code generation. What this means is that instead of doing all the work I've alluded to, the devs can translate the CPython bytecode into LLVM IR (intermediate representation) and then use LLVM's existing JIT infrastructure to do some of the heavy lifting. This gives the devs more time to focus on the interesting work of how to optimize the Python language.

Now that I've laid out the background, I'm going to dive into what exactly it is that Unladen Swallow does. Right now the CPython virtual machine looks something like this:

for opcode in opcodes:
    if opcode == BINARY_ADD:
        x, y = POP(), POP()
        z = x + y
        PUSH(z)
    elif opcode == JUMP_ABSOLUTE:
        pc = OPARG()
    # ...

This is both hugely simplified and translated into a Pythonesque pseudocode, but hopefully it makes the point clear: right now the CPython VM runs through the opcodes and, based on what each opcode is, executes some C code. This is particularly inefficient because there is a fairly substantial overhead to actually doing the dispatch on the opcode. What Unladen Swallow does is count the number of times a given Python function is called (the heuristic is actually slightly more complicated than this, but it's a good approximation of what happens), and when that count reaches 10000 (the same value the JVM uses) it stops to compile the function using LLVM. Here it essentially unrolls the interpreter loop into the LLVM IR. So if you had the bytecode:

BINARY_ADD

Unladen Swallow would generate code like:

x, y = POP(), POP()
z = x + y
PUSH(z)


This eliminates all of the overhead of the large loop in the interpreter. Unladen Swallow also performs a number of optimizations based on Python's semantics, but I'll be getting into those in another post; for now it's enough to know that LLVM runs its optimizers, which can improve the generated code somewhat, and then CPython executes the generated function. From then on, whenever this function is called, the optimized assembly version of it is used.
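The hot-function counting described above can be sketched like this (llvm_compile and interpret are placeholder names, and as noted the real heuristic is more involved):

HOTNESS_THRESHOLD = 10000

def call(func, *args):
    func.call_count += 1
    if func.native_code is None and func.call_count >= HOTNESS_THRESHOLD:
        # Translate the function's bytecode to LLVM IR, let LLVM optimize it,
        # and JIT-compile it to native code exactly once.
        func.native_code = llvm_compile(func.bytecode)
    if func.native_code is not None:
        return func.native_code(*args)
    return interpret(func, args)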

This concludes the introduction to Unladen Swallow. Hopefully you've learned something about the CPython VM, Unladen Swallow, or virtual machines in general. In future posts I'm going to be diving in to some of the optimizations Unladen Swallow does, as well as what other players are doing in this space (particularly PyPy).

Sunday, November 1, 2009

Another month of blogging?

Last year I started this blog during November's "blog every day for a month" month. This year I'm hoping to repeat the feat, blogging every single day this month. Today's post is a bit light on content, but I'm hoping to give a preview of what I'm going to be blogging about. This month I'm hoping to blog about advanced Django techniques, Comet (and other HTTP push technology), and I'm probably going to be adding some more political content to the usual technical content of this blog.

Lastly, I'm hoping to move this blog either to another hosted provider, or something on my own servers, but somewhere where I can get a tad bit more control.

In any event, here's to a productive month of blogging!