* Note how dummy entry re-use benefits use cases with interspersed deletes
  and adds (see the first sketch below).

* Note that dictionary iteration is negatively impacted by additional
  sparseness (see the second sketch below).
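
Two sketches (mine, not part of the patch) of the effects noted above.
The first exercises the "dynamic mapping" churn pattern, in which every
delete leaves a dummy entry that the next insert can re-use, so the table
does not need to grow or be rebuilt for this workload:

    from timeit import timeit

    def churn(n=10000, rounds=100):
        d = dict.fromkeys(range(n))
        for _ in range(rounds):
            for k in range(n):
                del d[k]      # slot is marked with a dummy entry
                d[k] = None   # insert can re-use the freed slot
        return d

    print(timeit(churn, number=1))

The second is a rough illustration of the iteration cost: iteration has to
scan every slot, so a table that stayed large after mass deletions is slower
to walk even though it holds few live keys.  Exact numbers depend on the
interpreter version and table layout:

    from timeit import timeit

    dense = dict.fromkeys(range(1000))

    sparse = dict.fromkeys(range(1000000))
    for k in range(1000, 1000000):
        del sparse[k]         # table keeps its size; slots become dummies

    print(timeit(lambda: list(dense), number=100))
    print(timeit(lambda: list(sparse), number=100))
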
diff --git a/Objects/dictnotes.txt b/Objects/dictnotes.txt
index dcfb7a0..46dabb7 100644
--- a/Objects/dictnotes.txt
+++ b/Objects/dictnotes.txt
@@ -43,6 +43,10 @@
     Similar access patterns occur with replacement dictionaries
         such as with the % formatting operator.
 
+Dynamic Mappings
+    Characterized by deletions interspersed with adds and replacements.
+    Performance benefits greatly from the re-use of dummy entries.
+
 
 Data Layout (assuming a 32-bit box with 64 bytes per cache line)
 ----------------------------------------------------------------
@@ -91,6 +95,12 @@
 hash values of the keys (some sets of values have fewer collisions than
 others).  Any one test or benchmark is likely to prove misleading.
 
+While making a dictionary more sparse reduces collisions, it impairs
+iteration and key listing.  Those methods loop over every potential
+entry.  Doubling the size of a dictionary results in twice as many
+non-overlapping memory accesses for keys(), items(), values(),
+__iter__(), iterkeys(), iteritems(), itervalues(), and update().
+
 
 Results of Cache Locality Experiments
 -------------------------------------
@@ -165,7 +175,7 @@
 1) For example, if membership testing or lookups dominate runtime and memory
    is not at a premium, the user may benefit from setting the maximum load
    ratio at 5% or 10% instead of the usual 66.7%.  This will sharply
-   curtail the number of collisions.
+   curtail the number of collisions but will increase iteration time.
 
 2) Dictionary creation time can be shortened in cases where the ultimate
    size of the dictionary is known in advance.  The dictionary can be