Not a complete answer but some suggestions:
Can you eliminate the value comparison? Of course that is a feature change of your implementation, but the runtime overhead will be even worse if objects more complex than integers are stored in the attributes.
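To make the trade-off concrete, here is a minimal sketch. The class name `Tracked`, the `setModified()` method, and the `_modified` flag are assumptions for illustration, not your actual code:

```python
_SENTINEL = object()  # distinguishes "attribute absent" from any real value

class Tracked:
    def __setattr__(self, name, value):
        # With the value comparison: only mark as modified on an actual
        # change. The sentinel default makes a first assignment count as one.
        if getattr(self, name, _SENTINEL) != value:
            object.__setattr__(self, name, value)
            self.setModified()
        # Without the comparison you would run the two lines above
        # unconditionally -- cheaper when values have expensive __eq__.

    def setModified(self):
        object.__setattr__(self, "_modified", True)
```

Note that the comparison itself invokes `__eq__` on the stored value, which is exactly the cost that grows with more complex objects.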
Every call to a method via `self` needs to go through full method resolution order (MRO) checking. I don't know whether Python does any MRO caching itself; probably not, because of the types-being-dynamic principle. Thus, you should be able to reduce some overhead by changing any `self.method(args)` to `classname.method(self, args)`. That removes the MRO overhead from the calls. This applies to `self.setModified()` in your `__setattr__()` implementation. In most places you have done this already with references to `object`.

Every single function call takes time. You could eliminate some of them, e.g. by moving `setModified`'s functionality into `__setattr__` itself.
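Both ideas combined might look like the following sketch. `TrackedFast` and the `_modified` flag are invented names, not your actual code:

```python
class TrackedFast:
    def __setattr__(self, name, value):
        # Instead of self.setModified() -- which pays a method lookup on
        # every attribute assignment -- call object.__setattr__ directly
        # and inline the flag update, saving one function call as well.
        object.__setattr__(self, name, value)
        object.__setattr__(self, "_modified", True)
```

Writing `_modified` through `object.__setattr__` also avoids re-entering the overridden `__setattr__` recursively.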
Let us know how the timing changes for each of these. I'd measure each change separately.
Edit: Thanks for the timing numbers.
The overhead may seem drastic (still a factor of 10, it seems). However, put that into perspective against overall runtime. In other words: how much of your overall runtime will be spent setting those tracked attributes, and how much is spent elsewhere?
In a single-threaded application, Amdahl's Law is a simple rule to set expectations straight. An illustration: if 1/3 of the time is spent setting attributes and 2/3 doing other stuff, then slowing down the attribute setting by 10x only affects that third: overall runtime goes from 1 to 2/3 + 10/3 = 4, i.e. a 4x slowdown rather than 10x. The smaller the percentage of time spent with the attributes, the less we have to care. But this may not help you at all if your percentage is high...
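The arithmetic can be checked in a couple of lines (the fractions here follow the illustration above):

```python
def overall_slowdown(p, s):
    """Amdahl-style estimate: p = fraction of runtime affected,
    s = slowdown factor of that fraction."""
    return (1 - p) + p * s

# 1/3 of runtime spent setting attributes, slowed down 10x:
print(overall_slowdown(1 / 3, 10))   # about 4x overall, not 10x
# Only 5% spent setting attributes:
print(overall_slowdown(0.05, 10))    # about 1.45x -- barely noticeable
```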