Simplify analytical GPU gradient impls

1) reformulate the gradient data as a series of interpolation intervals,
   defined as tuples of (color_scale, color_bias) such that

       color(t) = t * color_scale + color_bias

   (this allows us to skip the relative_t computation and simply feed
    tiled_t into a single fast MAD (multiply-add))
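
   A minimal single-channel CPU sketch of the idea (names are hypothetical;
   the actual code works per color channel):

       #include <cassert>

       // Precompute (scale, bias) for an interval [p0, p1] mapping to
       // colors [c0, c1], so that color(t) = t * scale + bias reproduces
       // the usual lerp without computing relative_t.
       struct Interval {
           float scale;  // == 0 for constant (clamp) intervals
           float bias;
       };

       Interval makeInterval(float p0, float c0, float p1, float c1) {
           float scale = (c1 - c0) / (p1 - p0);
           return { scale, c0 - p0 * scale };
       }

       float evalInterval(const Interval& i, float t) {
           return t * i.scale + i.bias;  // single MAD
       }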

2) then, the existing specializations can be generalized as

   a) select an interpolation interval (possibly based on a threshold)
   b) compute the interpolated color using the method in #1

3) simplify the hard-edge cases by using clamp intervals
   (color_scale == 0) and relaxing the clamping step (allowing
   tiled_t < 0 or tiled_t > 1, in order to hit the clamping intervals
   during the selection step)
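
   A hypothetical single-channel sketch of #2 + #3 combined, for the
   left-edged hard-stop case (names assumed): the clamp is relaxed to
   (-inf, 1], so tiled_t < 0 falls through the threshold test into the
   constant clamp interval.

       #include <algorithm>
       #include <cassert>

       struct Interval { float scale, bias; };

       float evalLeftEdged(float t, float c0, float c1) {
           Interval clampLo = { 0.0f, c0 };       // clamp interval: scale == 0
           Interval ramp    = { c1 - c0, c0 };    // lerp over [0, 1]
           t = std::min(t, 1.0f);                 // relaxed clamp: (-inf, 1]
           const Interval& i = (t < 0.0f) ? clampLo : ramp;  // threshold == 0
           return t * i.scale + i.bias;           // single MAD
       }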

The existing specializations are converted as follows:

* kTwo_ColorType
  -> single interpolation interval, normal clamping
* kThree_ColorType
  -> two interpolation intervals, normal clamping, threshold == pos[1]
* kSingleHardStop_ColorType
  -> two interpolation intervals, normal clamping, threshold == pos[1/2]
* kHardStopLeftEdged_ColorType
  -> two interpolation intervals, clamping (-inf, 1], threshold == 0
* kHardStopRightEdged_ColorType
  -> two interpolation intervals, clamping [0, +inf), threshold == 1
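
For illustration, a hypothetical single-channel sketch of the
kSingleHardStop conversion (parameter names assumed): colors c0..c3 with
a hard stop at position p become two intervals, normal clamping, and a
threshold at p.

    #include <algorithm>
    #include <cassert>

    struct Interval { float scale, bias; };

    Interval makeInterval(float p0, float c0, float p1, float c1) {
        float scale = (c1 - c0) / (p1 - p0);
        return { scale, c0 - p0 * scale };
    }

    float evalSingleHardStop(float t, float p,
                             float c0, float c1, float c2, float c3) {
        Interval lo = makeInterval(0.0f, c0, p, c1);  // [0, p] -> c0..c1
        Interval hi = makeInterval(p, c2, 1.0f, c3);  // [p, 1] -> c2..c3
        t = std::max(0.0f, std::min(t, 1.0f));        // normal clamping
        const Interval& i = (t < p) ? lo : hi;        // threshold == p
        return t * i.scale + i.bias;                  // single MAD
    }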

This reduces the SkSL overhead in two ways:

  * the clamp stage is sometimes reduced to min/max vs. full clamp()
  * the color interpolation stage is just a MAD vs. full mix()


Change-Id: I65be84d131d56136ec5e946c2b3dba149a4473cf
Reviewed-on: https://skia-review.googlesource.com/68218
Reviewed-by: Brian Salomon <bsalomon@google.com>
Commit-Queue: Florin Malita <fmalita@chromium.org>
2 files changed