Allow GLSL processor keys to use floats with full precision.
Previously, floats were converted to int32 using a C-style cast when
added to a processor key, truncating any fractional component of the
value. As a result, two processors compiled with different float values
could generate the same key.
The float is now sk_bit_cast to a uint32 instead; this will preserve the
uniqueness of all values.
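
For illustration, a minimal standalone sketch of the difference
(bit_cast_to_u32 below stands in for sk_bit_cast from src/core/SkUtils.h;
it is not the generated code itself):

    #include <cstdint>
    #include <cstring>

    static uint32_t bit_cast_to_u32(float f) {
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof(bits));  // copy the bits, no conversion
        return bits;
    }

    // Old behavior: the C-style cast truncates, so distinct floats collide:
    //   (int32_t)0.25f == 0 and (int32_t)0.75f == 0   -> same key word
    // New behavior: the bit pattern is preserved, so distinct floats differ:
    //   bit_cast_to_u32(0.25f) == 0x3E800000
    //   bit_cast_to_u32(0.75f) == 0x3F400000           -> different key words
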
Additionally, the CPP generator will now abort if asked to add an
unsupported type to the processor key. Previously, it would do a C-style
cast to int32 for all unrecognized types and hope for the best.
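
For illustration, a hypothetical sketch of that dispatch (the helper name,
its signature, and the type strings are invented for the example; this is
not the generator's real code):

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <string>

    static uint32_t key_word(const std::string& type, double value) {
        if (type == "float") {
            float f = (float)value;
            uint32_t bits;
            std::memcpy(&bits, &f, sizeof(bits));  // like sk_bit_cast<uint32_t>(f)
            return bits;
        }
        if (type == "int")  { return (uint32_t)(int32_t)value; }
        if (type == "bool") { return value != 0 ? 1u : 0u; }
        // Previously, anything unrecognized fell through to a C-style int32
        // cast; now the generator refuses to guess and aborts instead.
        std::fprintf(stderr, "unsupported type '%s' in processor key\n",
                     type.c_str());
        std::abort();
    }
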
Change-Id: I270ae8b3036497a9e9a77747255f763d9aad1927
Bug: skia:10486
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/305572
Commit-Queue: John Stiles <johnstiles@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
diff --git a/src/gpu/gradients/generated/GrSweepGradientLayout.cpp b/src/gpu/gradients/generated/GrSweepGradientLayout.cpp
index 5ca21ae..b71b389 100644
--- a/src/gpu/gradients/generated/GrSweepGradientLayout.cpp
+++ b/src/gpu/gradients/generated/GrSweepGradientLayout.cpp
@@ -10,6 +10,7 @@
**************************************************************************************************/
#include "GrSweepGradientLayout.h"
+#include "src/core/SkUtils.h"
#include "src/gpu/GrTexture.h"
#include "src/gpu/glsl/GrGLSLFragmentProcessor.h"
#include "src/gpu/glsl/GrGLSLFragmentShaderBuilder.h"