In generated shaders, output +INF and -INF as the largest positive and negative single-precision floating-point values.

C++ streams seem to use the representation 1.#INF for INF, which isn't valid syntax in GLSL or HLSL.
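
A minimal sketch of the idea (not ANGLE's actual output code; emitFloatLiteral is a hypothetical helper): before streaming a float literal into the generated GLSL/HLSL, replace an infinity with the largest finite single-precision value of the same sign so the stream never prints an invalid token.

    #include <cfloat>
    #include <cmath>
    #include <sstream>
    #include <string>

    // Hypothetical helper: format a float for the generated shader source,
    // clamping +/-INF to +/-FLT_MAX so the text is valid GLSL/HLSL.
    std::string emitFloatLiteral(float value)
    {
        if (std::isinf(value))
            value = std::signbit(value) ? -FLT_MAX : FLT_MAX;

        std::ostringstream out;
        out.precision(9);  // enough digits to round-trip a single-precision value
        out << value;
        return out.str();
    }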

Also preserve the sign of INF in constant expressions that divide by zero. I can't figure out what to do about 0/0, because the shader models we are using do not support NaN, so it is still treated as +INF as before.
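
The folding rule described above, written out as a standalone sketch (foldConstantDivide is a hypothetical helper, not the Intermediate.cpp code changed below): a zero divisor yields +/-FLT_MAX with the sign taken from the numerator, and 0/0 falls through to +FLT_MAX because NaN cannot be represented.

    #include <cfloat>

    // Hypothetical helper mirroring the constant-folding behavior:
    // divide-by-zero produces the largest finite float with the numerator's sign.
    float foldConstantDivide(float numerator, float divisor)
    {
        if (divisor == 0.0f)
            return numerator < 0.0f ? -FLT_MAX : FLT_MAX;  // 0/0 ends up as +FLT_MAX
        return numerator / divisor;
    }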
Review URL: https://codereview.appspot.com/7057046

git-svn-id: https://angleproject.googlecode.com/svn/branches/dx11proto@1706 736b8ea6-26fd-11df-bfd4-992fa37f6226
diff --git a/src/compiler/Intermediate.cpp b/src/compiler/Intermediate.cpp
index 9032b3a..c0b08c1 100644
--- a/src/compiler/Intermediate.cpp
+++ b/src/compiler/Intermediate.cpp
@@ -1162,7 +1162,7 @@
             case EbtFloat:
                 if (rightUnionArray[i] == 0.0f) {
                     infoSink.info.message(EPrefixWarning, "Divide by zero error during constant folding", getLine());
-                    tempConstArray[i].setFConst(FLT_MAX);
+                    tempConstArray[i].setFConst(unionArray[i].getFConst() < 0 ? -FLT_MAX : FLT_MAX);
                 } else
                     tempConstArray[i].setFConst(unionArray[i].getFConst() / rightUnionArray[i].getFConst());
                 break;