XXXROUNDUP():  Turns out this fixed Andrew MacIntyre's memory-mgmt
disaster too, so this change is here to stay.  Beefed up the comments
and added some stats Andrew reported.  Also a small change to the
macro body, to make it obvious how XXXROUNDUP(0) ends up returning 0.
See SF patch 578297 for context.

Not a bugfix candidate, as the functional changes here have already
been backported to the 2.2 line (this patch just improves clarity).
diff --git a/Parser/node.c b/Parser/node.c
index 9ed34b8..780c230 100644
--- a/Parser/node.c
+++ b/Parser/node.c
@@ -34,21 +34,44 @@
 }
 
 /* A gimmick to make massive numbers of reallocs quicker.  The result is
- * a number >= the input.  For n=0 we must return 0.
- * For n=1, we return 1, to avoid wasting memory in common 1-child nodes
- * (XXX are those actually common?).
- * Else for n <= 128, round up to the closest multiple of 4.  Why 4?
- * Rounding up to a multiple of an exact power of 2 is very efficient.
- * Else call fancy_roundup() to grow proportionately to n.  We've got an
+ * a number >= the input.  In PyNode_AddChild, it's used like so, when
+ * we're about to add child number current_size + 1:
+ *
+ *     if XXXROUNDUP(current_size) < XXXROUNDUP(current_size + 1):
+ *         allocate space for XXXROUNDUP(current_size + 1) total children
+ *     else:
+ *         we already have enough space
+ *
+ * Since a node starts out empty, we must have
+ *
+ *     XXXROUNDUP(0) < XXXROUNDUP(1)
+ *
+ * so that we allocate space for the first child.  One-child nodes are very
+ * common (presumably that would change if we used a more abstract form
+ * of syntax tree), so to avoid wasting memory it's desirable that
+ * XXXROUNDUP(1) == 1.  That in turn forces XXXROUNDUP(0) == 0.
+ *
+ * Else for 2 <= n <= 128, we round up to the closest multiple of 4.  Why 4?
+ * Rounding up to a multiple of an exact power of 2 is very efficient, and
+ * most nodes with more than one child have <= 4 kids.
+ *
+ * Else we call fancy_roundup() to grow proportionately to n.  We've got an
  * extreme case then (like test_longexp.py), and on many platforms doing
  * anything less than proportional growth leads to exorbitant runtime
  * (e.g., MacPython), or extreme fragmentation of user address space (e.g.,
  * Win98).
- * This would be straightforward if a node stored its current capacity.  The
- * code is tricky to avoid that.
+ *
+ * In a run of compileall across the 2.3a0 Lib directory, Andrew MacIntyre
+ * reported that, with this scheme, 89% of PyMem_RESIZE calls in
+ * PyNode_AddChild passed 1 for the size, and 9% passed 4.  So this usually
+ * wastes very little memory, but is very effective at sidestepping
+ * platform-realloc disasters on vulnerable platforms.
+ *
+ * Note that this would be straightforward if a node stored its current
+ * capacity.  The code is tricky to avoid that.
  */
-#define XXXROUNDUP(n) ((n) == 1 ? 1 : \
-		       (n) <= 128 ? (((n) + 3) & ~3) : \
+#define XXXROUNDUP(n) ((n) <= 1 ? (n) : 		\
+		       (n) <= 128 ? (((n) + 3) & ~3) :	\
 		       fancy_roundup(n))
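
To see the no-stored-capacity trick end to end, here's a toy
add_child() in the same spirit (reusing the XXXROUNDUP/fancy_roundup
sketch above; toynode is made up, and the real PyNode_AddChild uses
PyMem_RESIZE on the node struct rather than bare realloc):

    #include <stdlib.h>

    /* Toy stand-in for the node struct: just a count and an array.
     * Note there's no capacity field; XXXROUNDUP(n_nchildren) recovers
     * the capacity from the count alone.
     */
    typedef struct {
        int n_nchildren;
        int *n_child;
    } toynode;

    static int
    add_child(toynode *n1, int value)
    {
        int nch = n1->n_nchildren;
        /* Grow only when rounding up the new count crosses a boundary:
         * 0 -> 1, 1 -> 4, 4 -> 8, ..., 128 -> fancy_roundup territory.
         */
        if (XXXROUNDUP(nch) < XXXROUNDUP(nch + 1)) {
            int *p = realloc(n1->n_child,
                             XXXROUNDUP(nch + 1) * sizeof(int));
            if (p == NULL)
                return -1;  /* the real code returns E_NOMEM */
            n1->n_child = p;
        }
        n1->n_child[nch] = value;
        n1->n_nchildren = nch + 1;
        return 0;
    }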