Fix other re-entrancy nits for the lru_cache.
Keep references to oldkey and oldvalue so that dropping them can't
trigger a __del__ method that reenters our code.
Move the cache[key]=link step to the end, after the link
data is in a consistent state.
Under exotic circumstances, the cache[key]=link step could
trigger reentrancy (i.e. the key would have to have a hash
exactly equal to that for another key in the cache and the
key would need a __eq__ method that makes a reentrant call
to our cached function).
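A minimal sketch of that exotic circumstance (the EvilKey class and the
cached function f below are hypothetical illustrations, not part of the
patch): every EvilKey shares the same hash, so dict operations on the
cache must fall back to __eq__, and that __eq__ makes a reentrant call
into the cached function while the cache is mid-update.

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def f(x):
    # works for both plain ints and EvilKey instances
    return getattr(x, "n", x) * 10

class EvilKey:
    # hypothetical key: all instances collide on hash 0, and equality
    # comparison reenters the cached function with a plain int key
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 0
    def __eq__(self, other):
        f(self.n)  # reentrant call during the cache's dict operations
        return isinstance(other, EvilKey) and other.n == self.n

# Fill the cache and force the "reuse the old root" eviction branch;
# every colliding lookup/insert fires the reentrant __eq__ above.
f(EvilKey(1))
f(EvilKey(2))
f(EvilKey(3))
assert f(EvilKey(1)) == 10  # correct result despite reentrancy
```

With the link data put into a consistent state before cache[key]=oldroot
runs, the reentrant calls see either the old or the new cache contents,
never a half-updated linked list.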
diff --git a/Lib/functools.py b/Lib/functools.py
index 36466f9..87c1b69 100644
--- a/Lib/functools.py
+++ b/Lib/functools.py
@@ -267,19 +267,23 @@
# computed result and update the count of misses.
pass
elif full:
- # use root to store the new key and result
- root[KEY] = key
- root[RESULT] = result
- cache[key] = root
+ # use the old root to store the new key and result
+ oldroot = root
+ oldroot[KEY] = key
+ oldroot[RESULT] = result
# empty the oldest link and make it the new root
- root = root[NEXT]
- del cache[root[KEY]]
+ root = oldroot[NEXT]
+ oldkey = root[KEY]
+ oldvalue = root[RESULT]
root[KEY] = root[RESULT] = None
+ # now update the cache dictionary for the new links
+ del cache[oldkey]
+ cache[key] = oldroot
else:
# put result in a new link at the front of the queue
last = root[PREV]
link = [last, root, key, result]
- cache[key] = last[NEXT] = root[PREV] = link
+ last[NEXT] = root[PREV] = cache[key] = link
currsize += 1
full = (currsize == maxsize)
misses += 1