normalize latin-1 and utf-8 variant encodings like the builtin tokenizer does
diff --git a/Misc/NEWS b/Misc/NEWS
index 61f91ed..f542bcb 100644
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -87,6 +87,9 @@
 Library
 -------
 
+- Make tokenize.detect_encoding() normalize utf-8 and iso-8859-1 variant
+  encodings like the builtin tokenizer.
+
 - Issue #7048: Force Decimal.logb to round its result when that result
   is too large to fit in the current precision.
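For illustration, here is a minimal sketch of the behavior this entry describes, using the public tokenize.detect_encoding() API. The coding-cookie spellings below are assumptions chosen to exercise the normalization, not taken from the patch itself.

    import io
    import tokenize

    # A coding cookie spelled "latin-1" is expected to be reported under the
    # canonical name "iso-8859-1", matching the builtin tokenizer.
    latin_src = b"# -*- coding: latin-1 -*-\nprint('hi')\n"
    encoding, _ = tokenize.detect_encoding(io.BytesIO(latin_src).readline)
    print(encoding)  # iso-8859-1

    # Likewise, a "utf_8" spelling is expected to normalize to "utf-8".
    utf_src = b"# -*- coding: utf_8 -*-\nprint('hi')\n"
    encoding, _ = tokenize.detect_encoding(io.BytesIO(utf_src).readline)
    print(encoding)  # utf-8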