Optimize shaders for dithered gradients
It's faster to perform the dither calculation in the vertex shader and pass
the result through a varying (letting the GPU interpolate the per-fragment
values) than to perform that calculation in the fragment shader as part of
the texture lookup.
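As an illustration only (not the hwui shader source, which is generated in
ProgramCache.cpp; names like "ditherTexCoords" are assumptions), the split
looks roughly like this in GLSL ES:

  // Vertex shader (sketch): do the coordinate math once per vertex.
  attribute vec4 position;
  uniform mat4 transform;
  uniform float ditherSize;             // 1.0 / DITHER_KERNEL_SIZE, see below
  varying mediump vec2 ditherTexCoords;

  void main() {
      gl_Position = transform * position;
      // Scale the screen-space position into the repeating dither kernel's
      // texture space; the hardware interpolates the varying for free.
      ditherTexCoords = gl_Position.xy * ditherSize;
  }

  // Fragment shader (sketch): only the texture fetch remains per fragment.
  precision mediump float;
  uniform sampler2D ditherSampler;
  uniform vec4 gradientColor;           // stand-in for the evaluated gradient
  varying mediump vec2 ditherTexCoords;

  void main() {
      float noise = texture2D(ditherSampler, ditherTexCoords).a;
      gl_FragColor = gradientColor + vec4(noise, noise, noise, 0.0);
  }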
Issue #7207600 Prime mr1 shader performance issues
Issue #7158326 Bad framerates on MR1 (Mako, Manta, Prime)
Change-Id: I15789582a6e9e2d8b9dd22aa5b0f72f0ba1cce7f
diff --git a/libs/hwui/Dither.cpp b/libs/hwui/Dither.cpp
index 5817977..e80b325 100755
--- a/libs/hwui/Dither.cpp
+++ b/libs/hwui/Dither.cpp
@@ -76,8 +76,10 @@
     bindDitherTexture();
+    float ditherSize = 1.0f / DITHER_KERNEL_SIZE;
     glUniform1i(program->getUniform("ditherSampler"), textureSlot);
-    glUniform1f(program->getUniform("ditherSize"), 1.0f / DITHER_KERNEL_SIZE);
+    glUniform1f(program->getUniform("ditherSize"), ditherSize);
+    glUniform1f(program->getUniform("ditherSizeSquared"), ditherSize * ditherSize);
 }
 }; // namespace uirenderer
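The new ditherSizeSquared uniform is just ditherSize * ditherSize
(1 / DITHER_KERNEL_SIZE^2, i.e. 1/16 for the 4x4 kernel), precomputed on the
CPU so the shader doesn't have to square it itself. One plausible consumer,
again a sketch rather than the generated hwui shader (which lives in
ProgramCache.cpp and isn't part of this diff):

  // Fragment shader (sketch): ditherSizeSquared scales the fetched kernel
  // value's amplitude down to one part in sixteen.
  precision mediump float;
  uniform sampler2D ditherSampler;
  uniform float ditherSizeSquared;      // uploaded in Dither.cpp above
  uniform vec4 gradientColor;           // stand-in for the evaluated gradient
  varying mediump vec2 ditherTexCoords;

  void main() {
      float noise = texture2D(ditherSampler, ditherTexCoords).a
              * ditherSizeSquared;
      gl_FragColor = gradientColor + vec4(noise, noise, noise, 0.0);
  }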