bpo-35808: Retire pgen and use pgen2 to generate the parser (GH-11814)

Pgen is the oldest piece of technology in the CPython repository; building it requires various #if[n]def PGEN hacks in other parts of the code, and it depends more and more on CPython internals. This commit removes the old pgen C code and replaces it with a new version implemented in pure Python: a modified and adapted version of lib2to3/pgen2 that can generate grammar files compatible with the current parser.

This commit also eliminates all the #ifdefs and code branches related to pgen, simplifying the code and making it more maintainable. The regen-grammar step now uses $(PYTHON_FOR_REGEN), which can be any version of the interpreter, so the new pgen code maintains compatibility with older versions of the interpreter (this also allows regenerating the grammar with the current CI setup, which uses Python 3.5). The new pgen Python module also makes use of the Grammar/Tokens file that holds the token specification, so the two are always kept in sync and duplicate token definitions no longer have to be maintained.
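
For reference, each line of Grammar/Tokens names a token and, for operator tokens, pairs the name with the quoted operator string; blank lines and lines starting with '#' are skipped (this is exactly what the parsing code in the diff below relies on). The entries here are a small illustrative excerpt in that format, not the full file:

    ENDMARKER
    NAME
    NUMBER
    LPAR                    '('
    RPAR                    ')'
    NOTEQUAL                '!='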
diff --git a/Parser/pgen/token.py b/Parser/pgen/token.py
new file mode 100644
index 0000000..f9d45c4
--- /dev/null
+++ b/Parser/pgen/token.py
@@ -0,0 +1,40 @@
+import itertools
+
+def generate_tokens(tokens):
+    numbers = itertools.count(0)
+    for line in tokens:
+        line = line.strip()
+
+        if not line:
+            continue
+        if line.startswith('#'):
+            continue
+
+        name = line.split()[0]
+        yield (name, next(numbers))
+
+    yield ('N_TOKENS', next(numbers))
+    yield ('NT_OFFSET', 256)
+
+def generate_opmap(tokens):
+    for line in tokens:
+        line = line.strip()
+
+        if not line:
+            continue
+        if line.startswith('#'):
+            continue
+
+        pieces = line.split()
+
+        if len(pieces) != 2:
+            continue
+
+        name, op = pieces
+        yield (op.strip("'"), name)
+
+    # Yield the '<>' operator separately. If it were listed in
+    # Grammar/Tokens, it would collide with '!=' in the token
+    # generation done by "generate_tokens", because both operators
+    # share the same token name (NOTEQUAL).
+    yield ('<>', 'NOTEQUAL')
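
As a rough usage sketch (not part of the commit), the two generators above could be driven as follows; the import path is an assumption for illustration, since the module actually lives inside the Parser/pgen package:

    # Hypothetical driver: feed the lines of Grammar/Tokens to both
    # generators and print the token numbering and the operator ->
    # token-name map. The import path below is assumed, not taken
    # from the commit.
    from pgen.token import generate_tokens, generate_opmap

    with open('Grammar/Tokens') as f:
        lines = f.readlines()

    # (name, number) pairs, ending with N_TOKENS and NT_OFFSET.
    for name, number in generate_tokens(lines):
        print(name, number)

    # (operator string, token name) pairs, ending with ('<>', 'NOTEQUAL').
    for op, name in generate_opmap(lines):
        print(op, '->', name)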