In generated shaders, output +INF and -INF as the largest and most negative finite single-precision floating-point values, respectively.

C++ streams seem to use the representation 1.$ for INF, which isn't valid syntax in GLSL or HLSL.

Also preserve the sign of INF in constant expressions that divide by zero. I can't figure out what to do about 0/0, because the shader models we are using do not support NaN, so it is treated as +INF as before.
Review URL: https://codereview.appspot.com/7057046

git-svn-id: https://angleproject.googlecode.com/svn/branches/dx11proto@1706 736b8ea6-26fd-11df-bfd4-992fa37f6226
diff --git a/src/compiler/OutputHLSL.cpp b/src/compiler/OutputHLSL.cpp
index 0a54952..e1b8092 100644
--- a/src/compiler/OutputHLSL.cpp
+++ b/src/compiler/OutputHLSL.cpp
@@ -13,6 +13,7 @@
 #include "compiler/SearchSymbol.h"
 #include "compiler/DetectDiscontinuity.h"
 
+#include <float.h>
 #include <stdio.h>
 #include <algorithm>
 
@@ -2512,7 +2513,7 @@
         {
             switch (constUnion->getType())
             {
-              case EbtFloat: out << constUnion->getFConst(); break;
+              case EbtFloat: out << std::min(FLT_MAX, std::max(-FLT_MAX, constUnion->getFConst())); break;
               case EbtInt:   out << constUnion->getIConst(); break;
               case EbtBool:  out << constUnion->getBConst(); break;
               default: UNREACHABLE();