Don't accept default precision for uint

Default precision can only be specified for float, int, and sampler types
(ESSL 3.00.4 section 4.5.4). The default precision set for int also applies
to uint and uvec declarations.
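
For example, under these ESSL 3.00 rules (illustrative shader snippet, not
part of this patch):

    precision highp int;   // Accepted: also sets the default precision
                           // used by uint and uvec declarations.
    precision highp uint;  // Rejected: uint is not a valid type in a
                           // default precision statement.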

BUG=angleproject:1221
TEST=angle_unittests

Change-Id: I31fdcde80da16e2ea8771838f7c1a6ab4e478194
Reviewed-on: https://chromium-review.googlesource.com/313314
Tryjob-Request: Olli Etuaho <oetuaho@nvidia.com>
Tested-by: Olli Etuaho <oetuaho@nvidia.com>
Reviewed-by: Corentin Wallez <cwallez@chromium.org>
Reviewed-by: Zhenyao Mo <zmo@chromium.org>
Reviewed-by: Jamie Madill <jmadill@chromium.org>
diff --git a/src/compiler/translator/SymbolTable.h b/src/compiler/translator/SymbolTable.h
index 2f5eb36..3ceabdc 100644
--- a/src/compiler/translator/SymbolTable.h
+++ b/src/compiler/translator/SymbolTable.h
@@ -435,6 +435,8 @@
     {
         if (!SupportsPrecision(type.type))
             return false;
+        if (type.type == EbtUInt)
+            return false;  // Default precision cannot be set for uint (ESSL 3.00.4 section 4.5.4)
         if (type.isAggregate())
             return false; // Not allowed to set for aggregate types
         int indexOfLastElement = static_cast<int>(precisionStack.size()) - 1;