Convert bitmaps to sRGB/scRGB when they have a color profile

This change also fixes an issue with RGBA16F bitmaps when modulated
with a color (for instance by setting an alpha on the Paint object).

The color space conversion is currently done entirely in the
fragment shader, by performing these operations in order:

1. Sample the texture
2. Un-premultiply alpha
3. Apply the EOTF
4. Multiply by the 3x3 color space matrix
5. Apply the OETF
6. Premultiply alpha
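
The six steps above can be sketched on the CPU as follows. This is
a minimal illustrative sketch, not hwui's shader code: the real
conversion runs in GLSL, and the sRGB transfer functions and the
source-to-destination matrix below are placeholders.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Hypothetical CPU model of the per-texel pipeline (the real code is GLSL).
using vec3 = std::array<float, 3>;
using mat3 = std::array<vec3, 3>; // row-major

static float srgbEotf(float x) { // encoded -> linear
    return x <= 0.04045f ? x / 12.92f : std::pow((x + 0.055f) / 1.055f, 2.4f);
}

static float srgbOetf(float x) { // linear -> encoded
    return x <= 0.0031308f ? x * 12.92f
                           : 1.055f * std::pow(x, 1.0f / 2.4f) - 0.055f;
}

static vec3 mul(const mat3& m, const vec3& v) {
    vec3 r{};
    for (int i = 0; i < 3; i++) {
        r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
    }
    return r;
}

// Converts one premultiplied, sampled texel (step 1 already done).
vec3 convertTexel(vec3 rgb, float alpha, const mat3& srcToDst) {
    if (alpha > 0.0f && alpha < 1.0f) {          // step 2: un-premultiply
        for (float& c : rgb) c /= alpha;
    }
    for (float& c : rgb) c = srgbEotf(c);        // step 3: EOTF
    rgb = mul(srcToDst, rgb);                    // step 4: 3x3 color matrix
    for (float& c : rgb) {                       // step 5: OETF
        c = srgbOetf(std::clamp(c, 0.0f, 1.0f));
    }
    if (alpha < 1.0f) {                          // step 6: premultiply
        for (float& c : rgb) c *= alpha;
    }
    return rgb;
}
```

For an opaque texel (alpha == 1.0), steps 2 and 6 fall out naturally,
which is exactly the optimization noted below.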

Optimizations:
- Steps 2 & 6 are skipped for opaque (common) bitmaps
- Step 3 is skipped when the color space's EOTF is close
  to sRGB (Display P3 for instance). Instead, we use
  a hardware sRGB fetch (when the GPU supports it)
- When step 3 is necessary, we use one of four standard
  EOTF implementations, to save cycles when possible:
  + Linear (doesn't do anything)
  + Full parametric (ICC parametric curve type 4 as defined
    in ICC.1:2004-10, section 10.15)
  + Limited parametric (ICC parametric curve type 3)
  + Gamma (ICC parametric curve type 0)
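
The three non-trivial variants correspond to the ICC parametric
curve forms. A hedged sketch of how such curves can be evaluated
follows; the parameter names track ICC.1:2004-10, but the struct
and function are illustrative, not hwui's actual implementation:

```cpp
#include <cmath>

// Illustrative parameter set; hwui's real types differ.
struct TransferParams {
    float g, a, b, c, d, e, f;
};

// ICC.1:2004-10, section 10.15 parametric curves:
//   type 0:  Y = X^g
//   type 3:  Y = (a*X + b)^g      for X >= d,  Y = c*X      otherwise
//   type 4:  Y = (a*X + b)^g + e  for X >= d,  Y = c*X + f  otherwise
float evalCurve(int type, const TransferParams& p, float x) {
    switch (type) {
        case 0: return std::pow(x, p.g);
        case 3: return x >= p.d ? std::pow(p.a * x + p.b, p.g)
                                : p.c * x;
        case 4: return x >= p.d ? std::pow(p.a * x + p.b, p.g) + p.e
                                : p.c * x + p.f;
        default: return x; // linear: identity
    }
}
```

The sRGB EOTF, for instance, is a type 3 curve with
g = 2.4, a = 1/1.055, b = 0.055/1.055, c = 1/12.92, d = 0.04045.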

Color space conversion could be done with texture samplers
instead, for instance 3D LUTs (with or without the transfer
functions baked in) or 1D LUTs for the transfer functions alone.
This would introduce dependent texture fetches, which may or may
not be an advantage over an ALU-based implementation. The
current solution favors ALUs to save precious bandwidth.

Test: CtsUiRenderingTests, CtsGraphicsTests
Bug: 32984164
Change-Id: I10bc3db515e13973b45220f129c66b23f0f7f8fe
diff --git a/libs/hwui/Texture.h b/libs/hwui/Texture.h
index e7fbf20..052c018 100644
--- a/libs/hwui/Texture.h
+++ b/libs/hwui/Texture.h
@@ -19,6 +19,13 @@
 
 #include "GpuMemoryTracker.h"
 #include "hwui/Bitmap.h"
+#include "utils/Color.h"
+
+#include <memory>
+
+#include <math/mat3.h>
+
+#include <ui/ColorSpace.h>
 
 #include <GLES2/gl2.h>
 #include <EGL/egl.h>
@@ -42,8 +49,7 @@
 public:
     static SkBitmap uploadToN32(const SkBitmap& bitmap,
             bool hasLinearBlending, sk_sp<SkColorSpace> sRGB);
-    static bool hasUnsupportedColorType(const SkImageInfo& info,
-            bool hasLinearBlending, SkColorSpace* sRGB);
+    static bool hasUnsupportedColorType(const SkImageInfo& info, bool hasLinearBlending);
     static void colorTypeToGlFormatAndType(const Caches& caches, SkColorType colorType,
             bool needSRGB, GLint* outInternalFormat, GLint* outFormat, GLint* outType);
 
@@ -130,9 +136,26 @@
     }
 
     /**
+     * Returns nullptr if this texture does not require color space conversion
+     * to sRGB, or a valid pointer to a ColorSpaceConnector if a conversion
+     * is required.
+     */
+    constexpr const ColorSpaceConnector* getColorSpaceConnector() const {
+        return mConnector.get();
+    }
+
+    constexpr bool hasColorSpaceConversion() const {
+        return mConnector.get() != nullptr;
+    }
+
+    TransferFunctionType getTransferFunctionType() const;
+
+    /**
      * Returns true if this texture uses a linear encoding format.
      */
-    bool isLinear() const;
+    constexpr bool isLinear() const {
+        return mInternalFormat == GL_RGBA16F;
+    }
 
     /**
      * Generation of the backing bitmap,
@@ -171,8 +194,8 @@
     // and external texture wrapper
     friend class GlLayer;
 
-    // Returns true if the size changed, false if it was the same
-    bool updateSize(uint32_t width, uint32_t height, GLint internalFormat,
+    // Returns true if the texture layout (size, format, etc.) changed, false if it was the same
+    bool updateLayout(uint32_t width, uint32_t height, GLint internalFormat,
             GLint format, GLenum target);
     void uploadHardwareBitmapToTexture(GraphicBuffer* buffer);
     void resetCachedParams();
@@ -196,6 +219,8 @@
     GLenum mMagFilter = GL_LINEAR;
 
     Caches& mCaches;
+
+    std::unique_ptr<ColorSpaceConnector> mConnector;
 }; // struct Texture
 
 class AutoTexture {