Improve performance of applyRenderTarget and applyTextures

Get rid of unnecessary GetSurfaceLevel/Release calls, move invariants out of the applyTextures loop, and fix the if in getSamplerMapping so we can avoid calling getMaximumCombinedTextureImageUnits for unmapped samplers. Takes the donuts NaCl demo from 14 to 16 fps.

BUG=
TEST=webgl conformance tests

Review URL: http://codereview.appspot.com/5248057

git-svn-id: https://angleproject.googlecode.com/svn/trunk@787 736b8ea6-26fd-11df-bfd4-992fa37f6226
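A minimal sketch of the loop-invariant hoisting described above; Context, getMaxTextureUnits, and bindTextureUnit are hypothetical stand-ins for illustration, not ANGLE's actual applyTextures code:

    // Hypothetical sketch: hoist a query whose result cannot change during
    // the loop so it is evaluated once instead of once per iteration.
    #include <cstdio>

    struct Context
    {
        int getMaxTextureUnits() const { return 16; }  // invariant query
    };

    static void bindTextureUnit(int unit)
    {
        std::printf("binding unit %d\n", unit);
    }

    // Before: the invariant query is re-evaluated on every iteration.
    static void applyTexturesBefore(const Context &ctx)
    {
        for (int unit = 0; unit < ctx.getMaxTextureUnits(); ++unit)
        {
            bindTextureUnit(unit);
        }
    }

    // After: the invariant is read once, outside the loop.
    static void applyTexturesAfter(const Context &ctx)
    {
        const int maxUnits = ctx.getMaxTextureUnits();
        for (int unit = 0; unit < maxUnits; ++unit)
        {
            bindTextureUnit(unit);
        }
    }

    int main()
    {
        Context ctx;
        applyTexturesBefore(ctx);
        applyTexturesAfter(ctx);
        return 0;
    }

A compiler can often hoist a trivial inline getter on its own, but for a virtual or driver-backed query like the ones in applyTextures the hoist has to be done by hand.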
diff --git a/src/libGLESv2/Program.cpp b/src/libGLESv2/Program.cpp
index d49fded..27e6550 100644
--- a/src/libGLESv2/Program.cpp
+++ b/src/libGLESv2/Program.cpp
@@ -202,7 +202,7 @@
 // index (0-15 for the pixel shader and 0-3 for the vertex shader).
 GLint Program::getSamplerMapping(SamplerType type, unsigned int samplerIndex)
 {
-    GLuint logicalTextureUnit = -1;
+    GLint logicalTextureUnit = -1;
 
     switch (type)
     {
@@ -225,7 +225,7 @@
       default: UNREACHABLE();
     }
 
-    if (logicalTextureUnit >= 0 && logicalTextureUnit < getContext()->getMaximumCombinedTextureImageUnits())
+    if (logicalTextureUnit >= 0 && logicalTextureUnit < (GLint)getContext()->getMaximumCombinedTextureImageUnits())
     {
         return logicalTextureUnit;
     }
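
A note on why the two hunks above help: with the old GLuint declaration, the -1 sentinel wraps to UINT_MAX, so logicalTextureUnit >= 0 is always true and the && never short-circuits; getMaximumCombinedTextureImageUnits() then runs on every lookup, even for unmapped samplers. Declaring the variable GLint restores the short-circuit, and the (GLint) cast keeps both sides of the second comparison signed. A minimal standalone sketch (the query counter is hypothetical, and the unsigned return type is an assumption consistent with the cast in the diff):

    // Hypothetical sketch of the signed/unsigned pitfall fixed above.
    #include <cstdio>

    typedef int GLint;
    typedef unsigned int GLuint;

    static int g_queryCount = 0;

    // Stand-in for the context query the commit wants to avoid; the
    // unsigned return type is an assumption based on the cast in the diff.
    static unsigned int getMaximumCombinedTextureImageUnits()
    {
        ++g_queryCount;
        return 32;
    }

    static GLint mapUnsigned(GLuint logicalTextureUnit)
    {
        // ">= 0" is vacuously true for an unsigned type, so the query below
        // always executes, even for the -1 "unmapped" sentinel (UINT_MAX).
        if (logicalTextureUnit >= 0 &&
            logicalTextureUnit < getMaximumCombinedTextureImageUnits())
        {
            return (GLint)logicalTextureUnit;
        }
        return -1;
    }

    static GLint mapSigned(GLint logicalTextureUnit)
    {
        // The sentinel now fails the first test, so && short-circuits and
        // the query is skipped; the cast keeps the second comparison signed.
        if (logicalTextureUnit >= 0 &&
            logicalTextureUnit < (GLint)getMaximumCombinedTextureImageUnits())
        {
            return logicalTextureUnit;
        }
        return -1;
    }

    int main()
    {
        mapUnsigned((GLuint)-1);
        std::printf("unsigned sentinel: %d query call(s)\n", g_queryCount);  // 1

        g_queryCount = 0;
        mapSigned(-1);
        std::printf("signed sentinel:   %d query call(s)\n", g_queryCount);  // 0
        return 0;
    }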