Applying a decorator to a class method results in an error

You might have seen this one before – you wrote a decorator in Python and tried to apply it to a class method (or static method, for that matter), only to see an error.

from functools import wraps

def logged(func):
    """A decorator printing a message before invoking the wrapped function."""
    @wraps(func)
    def wrapped_func(*args, **kwargs):
        print('Invoking', func)
        return func(*args, **kwargs)
    return wrapped_func


class Foo(object):
    @logged
    @classmethod
    def get_name(cls):
        return cls.__name__

As the docstring explains, the logged decorator simply prints a message before invoking the decorated function, and it is applied to the get_name() class method of the class Foo. The @wraps decorator makes sure the original function’s metadata is copied over to the wrapper function returned by the decorator (see the functools.wraps documentation).
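Applied to a plain function, the decorator works exactly as advertised – and thanks to @wraps, the wrapper also keeps the original name and docstring (greet below is just a made-up example function):

@logged
def greet():
    """Return a friendly greeting."""
    return 'Hello!'

>>> greet()
Invoking <function greet at 0x...>
'Hello!'
>>> greet.__name__, greet.__doc__
('greet', 'Return a friendly greeting.')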

But despite this essentially being a textbook example of a decorator in Python, invoking the get_name() method results in an error (Python 3 shown below):

>>> Foo.get_name()
Invoking <classmethod object at 0x7f8e7473e0f0>
Traceback (most recent call last):
    ...
TypeError: 'classmethod' object is not callable

If you just want to quickly fix this issue because it annoys you, here’s the TL;DR fix – just swap the order of the decorators, making sure that the @classmethod decorator is applied last (i.e. listed on top):

class Foo(object):
    @classmethod
    @logged
    def get_name(cls):
        return cls.__name__

>>> Foo.get_name()
Invoking <function Foo.get_name at 0x7fce90356c80>
'Foo'
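Why does the order matter? Stacked decorators are applied bottom-up, so the two variants are roughly equivalent to the following (a sketch of the desugared forms):

# broken: logged receives the classmethod object, which is not callable
get_name = logged(classmethod(get_name))

# working: logged wraps the plain function; classmethod wraps the result
get_name = classmethod(logged(get_name))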

On the other hand, if you are curious what is actually happening behind the scenes, please keep reading.

The first thing to note is the output in each example immediately after calling Foo.get_name(). Our decorator prints the object it is about to invoke on the very next line, and in the non-working example that object is not a function at all!

Invoking <classmethod object at 0x7f8e7473e0f0>

Instead, the thing our decorator tries to invoke is a “classmethod” object, and since the latter is not callable, the Python interpreter complains.

Meet descriptors

Let’s take a closer look at a stripped-down version of the Foo class:

class Foo(object):
    @classmethod
    def get_name(cls):
        return cls.__name__

>>> thing = Foo.__dict__['get_name']
>>> thing
<classmethod object at 0x7f295ffc6d30>
>>> hasattr(thing, '__get__')
True
>>> callable(thing)
False

As it turns out, get_name is an object that is not callable, i.e. we cannot just say get_name() and expect it to work. From the presence of the __get__ attribute we can also see that it is a descriptor.

Descriptors are objects that behave differently than “normal” attributes. When a descriptor is accessed, its __get__() method gets called behind the scenes, returning the actual value. The following two expressions are thus equivalent:

>>> Foo.get_name
<bound method Foo.get_name of <class '__main__.Foo'>>
>>> Foo.__dict__['get_name'].__get__(None, Foo)
<bound method Foo.get_name of <class '__main__.Foo'>>

__get__() gets called with two parameters – the object instance the attribute belongs to (None here, because we are accessing the attribute through the class), and the owner class, i.e. the one the descriptor is defined on (Foo in this case)[1].

What the classmethod descriptor does is bind the original get_name() function to its class (Foo) and return a bound method object. When the latter gets called, it invokes get_name(), passing the class Foo as the first argument (cls) along with any other arguments the bound method was originally called with.
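To make this more tangible, here is a rough pure-Python sketch of how a classmethod-like descriptor could be implemented (simplified and hypothetical – the real classmethod is implemented in C, but the binding logic is the same in spirit):

from functools import wraps

class my_classmethod(object):
    """A simplified stand-in for the built-in classmethod."""

    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner=None):
        # The owner class is what gets bound as the first argument (cls).
        if owner is None:
            owner = type(instance)
        @wraps(self.func)
        def bound(*args, **kwargs):
            return self.func(owner, *args, **kwargs)
        return bound

class Bar(object):
    @my_classmethod
    def get_name(cls):
        return cls.__name__

>>> Bar.get_name()
'Bar'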

Armed with this knowledge, it is now clear why our logged decorator from the beginning does not always work. It assumes that the object passed to it is directly callable, and does not take the descriptor protocol into account.
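In fact, we can reproduce both the error and the fix by hand using the stripped-down Foo from above – calling the raw classmethod object fails, while binding it through __get__() first gives us something callable:

>>> cm = Foo.__dict__['get_name']
>>> cm()
Traceback (most recent call last):
    ...
TypeError: 'classmethod' object is not callable
>>> cm.__get__(None, Foo)()
'Foo'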

Making it right

Describing how to adjust the logged decorator to work correctly is quite a lengthy topic and out of the scope of this post. If you are interested, you should definitely read the blog series by Graham Dumpleton, as it addresses many more aspects than just playing well with classmethods. Or just use his wrapt library for writing decorators:

import wrapt

@wrapt.decorator
def logged(wrapped, instance, args, kwargs):
    print('Invoking', wrapped)
    return wrapped(*args, **kwargs)

class Foo(object):
    @logged
    @classmethod
    def get_name(cls):
        return cls.__name__

>>> Foo.get_name()
Invoking <bound method Foo.get_name of <class 'main2.Foo'>>
'Foo'

Yup, it works.
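(One nice property of wrapt is that the object a method is bound to is passed separately through the instance argument – for the classmethod above that should be the class Foo itself – which is what lets the same decorator work unchanged on plain functions, instance methods, and class methods.)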


  [1] On the other hand, when a descriptor object is retrieved directly from the class’s __dict__, the descriptor’s __get__() method is bypassed – that’s why we used Foo.__dict__['get_name'] in a few places in the examples.

Comparing objects of different types in Python 2

On the project I currently work on, we were recently (briefly) bitten by an interesting bug that initially slipped through the tests and code review. We had a custom database-mapped type with two attributes defined – let’s call them value_low and value_high. As their names suggest, the value of the former should be lower than the value of the latter. The model type also defined a validator to enforce this rule and ensure internal consistency of the model instances.

A simplified version of the type is shown below:

from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import validates

Base = declarative_base()


class MyModel(Base):
  __tablename__ = 'my_models'

  id = Column(Integer, primary_key=True)
  value_low = Column(Integer)
  value_high = Column(Integer)

  @validates('value_low', 'value_high')
  def validate_thresholds(self, key, value):
    """Assure that relative ordering of both values remains consistent."""
    if key == 'value_low' and value >= self.value_high:
      raise ValueError('value_low must be strictly lower than value_high')
    elif key == 'value_high' and value <= self.value_low:
      raise ValueError('value_high must be strictly higher than value_low')

    return value

The model definition seems pretty sensible, and it works, too:

>>> instance = MyModel(value_low=2, value_high=10)
>>> instance.value_low, instance.value_high
(2, 10)

Except when you run it on another machine with the same Python version:

>>> instance = MyModel(value_low=2, value_high=10)
Traceback (most recent call last):
  ...
ValueError: value_low must be strictly lower than value_high

Uhm, what?

After a bit of research, the following behavior can be noticed, even on the machine where the code initially seemed to work just fine:

>>> instance = MyModel()
>>> instance.value_high = 10
>>> instance.value_low = 2
>>> instance.value_low, instance.value_high
(2, 10)

# now let's try to reverse the order of assignments
>>> instance = MyModel()
>>> instance.value_low = 2
# ValueError ...

What’s going on here? Why is the order of attribute value assignments important?

This, by the way, explains why the first example worked on one machine, but failed on another. The values passed to MyModel() as keyword arguments were applied in a different order[1].

Debugging showed that if value_low is set first, while value_high is still None, the following expression evaluates to True (key equals 'value_low'), resulting in a failed validation:

# MIND: value == 2 and self.value_high is None
if key == 'value_low' and value >= self.value_high:
  raise ValueError(...)

On the other hand, if we first set value_high while value_low is None, the corresponding if condition evaluates to False and the error does not get raised. Stripping it down to the bare essentials gives us the following[2]:

>>> 2 >= None
True
>>> 10 <= None
False

If we further explore this, it gets even better:

>>> None < -1
True
>>> None < -float('inf')
True
>>> None < 'foobar'
True
>>> 'foobar' > -1
True
>>> 'foobar' > 42
True
>>> tuple >= (lambda x: x)
True
>>> type > Ellipsis
True

Oh… of course. It’s Python 2 that we use on the project (doh).

You see, Python 2 is perfectly happy to compare objects of different types, even when that does not make much sense. Peeking into the CPython source code (the default_3way_compare() function in Objects/object.c) reveals that None is smaller than anything:

...
/* None is smaller than anything */
if (v == Py_None)
    return -1;
if (w == Py_None)
    return 1;

Different types are ordered by name, with number types being smaller than other types[3].
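A quick illustration of these rules (Python 2 only, of course):

>>> # None first, then the number, then list < str < tuple (by type name)
>>> sorted([[1], 'a', 2, None, (3,)])
[None, 2, [1], 'a', (3,)]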

Now that we know this, the bug from the beginning makes perfect sense. It was just a coincidence that value_high could always be set first on an instance: the validator’s error check (value <= self.value_low) would never return True while self.value_low was still None, because the latter is smaller than everything.

In the end the issue was, fortunately, quickly discovered, and fixing it was straightforward. We just needed to add an extra check:

if (
    key == 'value_low' and
    self.value_high is not None and  # <-- THIS
    value >= self.value_high):
    ...
# and the same for the other if...
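For completeness, the whole validator with both guards might look something like this (a sketch following the same pattern as above):

  @validates('value_low', 'value_high')
  def validate_thresholds(self, key, value):
    """Assure that relative ordering of both values remains consistent."""
    if (key == 'value_low' and
        self.value_high is not None and
        value >= self.value_high):
      raise ValueError('value_low must be strictly lower than value_high')
    elif (key == 'value_high' and
          self.value_low is not None and
          value <= self.value_low):
      raise ValueError('value_high must be strictly higher than value_low')

    return value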

And the application worked correctly and happily ever after…


  1. That’s probably because when creating a new instance of the model, SQLAlchemy loops over the kwargs dict somewhere down the rabbit hole, but the order of the dictionary keys is arbitrary. 
  2. Python2 only. In Python3 we would get TypeError: unorderable types
  3. Provided that types do not define their own custom rich comparison methods, of course. 

Hello, World!

Hey, look, I now have my own technical blog, too! It’s technical, because its posts (will) contain snippets of program code like the one below:

from __future__ import print_function

print('Hello, World!')

See?

The code above has been carefully crafted to run both under Python 2[1] and Python 3.

Expect more of this in the future, but for now I will just cut this post short, because hello world posts should be short and simple, right?


  [1] Well, Python 2.6+ at least, which is when print_function was added to the __future__ module (yeah, I know, it would also work without that import in this particular case…).