Merge changes from GitHub.
PiperOrigin-RevId: 194031845
diff --git a/CODEOWNERS b/CODEOWNERS
index 007a304..b9f0313 100644
--- a/CODEOWNERS
+++ b/CODEOWNERS
@@ -45,7 +45,7 @@
# /tensorflow/contrib/session_bundle/ @nfiedel @sukritiramesh
# /tensorflow/contrib/slim/ @sguada @thenbasilmanran
# /tensorflow/contrib/stateless/ @girving
-# /tensorflow/contrib/tensor_forest/ @gilberthendry @thomascolthurst
+# /tensorflow/contrib/tensor_forest/ @gilberthendry @thomascolthurst @yupbank
# /tensorflow/contrib/testing/ @dandelionmane
# /tensorflow/contrib/timeseries/ @allenlavoie
# /tensorflow/contrib/tpu/ @frankchn @saeta @jhseu
diff --git a/README.md b/README.md
index 29418dc..e1a50c8 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,7 @@
the graph edges represent the multidimensional data arrays (tensors) that flow
between them. This flexible architecture enables you to deploy computation to one
or more CPUs or GPUs in a desktop, server, or mobile device without rewriting
-code. TensorFlow also includes TensorBoard, a data visualization toolkit.
+code. TensorFlow also includes [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard), a data visualization toolkit.
TensorFlow was originally developed by researchers and engineers
working on the Google Brain team within Google's Machine Intelligence Research
diff --git a/RELEASE.md b/RELEASE.md
index e845953..2717c75 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -1,3 +1,61 @@
+# Release 1.8.0
+
+## Major Features And Improvements
+* Can now pass `tf.contrib.distribute.MirroredStrategy()` to `tf.estimator.RunConfig()` to run an Estimator model on multiple GPUs on one machine (see the sketch after this list).
+* Add `tf.contrib.data.prefetch_to_device()`, which supports prefetching to GPU memory.
+* Added Gradient Boosted Trees as pre-made Estimators: BoostedTreesClassifier, BoostedTreesRegressor.
+* Add 3rd generation pipeline config for Cloud TPUs, which improves performance and usability.
+* `tf.contrib.bayesflow` is moving out to its own repo.
+* Added `tf.contrib.{proto,rpc}` to allow generic proto parsing and RPC communication.
+
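+A minimal sketch of the new `MirroredStrategy` distribution flow described above (the model and input functions here are illustrative placeholders, and the example assumes a machine with at least one GPU):
+
+```python
+import numpy as np
+import tensorflow as tf
+
+def model_fn(features, labels, mode):
+  # A tiny linear model, just to keep the sketch self-contained.
+  logits = tf.layers.dense(features["x"], 2)
+  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
+  train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
+      loss, global_step=tf.train.get_global_step())
+  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
+
+def input_fn():
+  x = np.random.rand(64, 4).astype(np.float32)
+  y = np.random.randint(0, 2, size=64).astype(np.int32)
+  return tf.data.Dataset.from_tensor_slices(({"x": x}, y)).batch(8)
+
+# Mirror the model across the GPUs available on this machine.
+distribution = tf.contrib.distribute.MirroredStrategy()
+config = tf.estimator.RunConfig(train_distribute=distribution)
+estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
+estimator.train(input_fn=input_fn, steps=10)
+```
+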
+## Bug Fixes and Other Changes
+* `tf.data`:
+ * Add `tf.contrib.data.prefetch_to_device`, which enables prefetching dataset elements to GPU memory.
+ * Add `tf.contrib.data.AUTOTUNE`, which allows the tf.data runtime to automatically tune the prefetch buffer sizes based on your system and environment.
+ * Add `tf.contrib.data.make_csv_dataset` for building datasets of CSV files.
+* Eager Execution:
+  * With eager execution Datasets can now be used as standard Python iterators (`for batch in dataset:`). Both `Dataset.__iter__()` and `Dataset.make_one_shot_iterator()` can now be used to create iterators when eager execution is enabled (a short sketch follows this list).
+  * Automatic device placement has been enabled: a GPU is used automatically when one is available, without requiring an explicit `with tf.device("/gpu:0")`. (Fixes #14133)
+ * `tf.GradientTape` has moved out of contrib.
+* `tf.keras`:
+ * Added the fashion mnist dataset.
+ * New data preprocessing functions: `image/random_brightness`, `sequence/TimeseriesGenerator`, and `text/hashing_trick`.
+* Accelerated Linear Algebra (XLA):
+ * Select and scatter in reference util and evaluator now use lexicographical order to break ties.
+* TensorFlow Debugger (tfdbg) CLI:
+ * During tensor-filter operations, allow exclusion of nodes by regular expressions.
+ * Fix spurious background colors in some text terminals.
+* `tf.contrib`:
+ * Add meta-distribution BatchReshape which reshapes batch dimensions.
+ * `tf.contrib.layers.recompute_grad` works for explicit gradient checkpointing on TPU.
+ * Add `tf.contrib.framework.argsort`.
+ * Allow `DNNBoostedTreeCombinedEstimator` to work with core versions of feature columns and losses.
+ * Add non-linear image warping ops: `tf.contrib.image.sparse_image_warp`, `tf.contrib.image.dense_image_warp`, and `tf.contrib.image.interpolate_spline`.
+ * Fix bug in `tf.contrib.opt.MultitaskOptimizerWrapper` where types of tensors were mismatched.
+* Other:
+ * Low-level graph construction now calls the TensorFlow C API. This change should be invisible to most users, but can be disabled by setting the environment variable `TF_C_API_GRAPH_CONSTRUCTION=0` in this release. Future releases will remove the ability to disable this change. Please [file a bug](https://github.com/tensorflow/tensorflow/issues/new) if you find yourself using this escape hatch.
+ * Add description of shapes and a pointer to tutorial notebook in `tf.distributions.Distribution`.
+ * Update scatter operations:
+ * Add `tf.scatter_min` and `tf.scatter_max`
+ * Extend scatter operations to work with a scalar update parameter.
+ * Move cuDNN RNN ops to core for use in TensorFlow codebase only.
+ * Add `float64` support for `Conv2d`, `Conv2dBackpropInput`, and `Conv2dBackpropFilter`.
+ * Add `float64` support for `AvgPool`/`AvgPoolGrad`.
+  * Make graph name scopes thread-local so that they work correctly in multi-threaded environments.
+ * Update nsync synchronization library to avoid slow primitives on Linux.
+ * Removed need to put nsync/public on C include path when building custom ops.
+ * Add `tf.image.psnr`, `tf.image.ssim`, `tf.image.ssim_multiscale`, `tf.image.image_gradients`, `tf.image.sobel_edges`.
+ * Add links to https://js.tensorflow.org.
+ * Fix non-uniformity of orthogonal matrices.
+ * Fix bug where multi-image Estimator eval summaries were not displayed correctly.
+
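+As a sketch of the eager-mode iteration described above (illustrative values; the commented-out line assumes a GPU is present):
+
+```python
+import tensorflow as tf
+
+tf.enable_eager_execution()
+
+dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6]).batch(2)
+
+# With eager execution enabled, a Dataset is a standard Python iterable.
+for batch in dataset:
+  print(batch.numpy())  # => [1 2], then [3 4], then [5 6]
+
+# In graph mode, elements can instead be prefetched into GPU memory:
+# dataset = dataset.apply(tf.contrib.data.prefetch_to_device("/gpu:0"))
+```
+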
+## Thanks to our Contributors
+
+This release contains contributions from many people at Google, as well as:
+
+4d55397500, Aghasy, Alan Du, Alan Lee, Alan Yee, Alex Wiltschko, Animesh Karnewar, Ankit Gupta, Anton Matosov, Aris L, Ben Barsdell, Brent Yi, Brett Koonce, Carl Thomé, cbockman, Chikanaga Tomoyuki, Chris Tava, Cédric Deltheil, Dahan Gong, Dalmo Cirne, Daniel Erenrich, David Norman, DavidNorman, Edd Wilder-James, Fanjin Zeng, Felix Abecassis, fo40225, George Sterpu, Giovanni Terlingen, Gor Baghdasaryan, Guillaume Klein, Hanchen Li, Ilya Polenov, Jakub Kolodziejczyk, Jason Sadler, Jayaram Bobba, Jerry Liu, jinghuangintel, Jiongyan Zhang (张炯衍), Joel Shor, Jong Wook Kim, Julian Eisenschlos, Karl Lessard, Krish Ravindranath, Loo Rong Jie, Lukas Geiger, Luke Iwanski, Mahmoud Abuzaina, ManHyuk, Marvin Richter, Maximilian Mitchell, Mohammad Ashraf Bhuiyan, msofka, Mustafa Kasap, Nathan Burnham, Nathan Luehr, Naveen Marri, ngc92, nio1814, Oleg Zabluda, Ou Changkun, Panos Ipeirotis, Paul Van Eck, Peter Lee, Piotr Czapla, qjivy, Rholais Lii, Rodrigo Formigone, Russell Klopfer, ryantimjohn, Sang Han, Sebastián Ramírez, shengfuintel, Siby Jose Plathottam, Silver Chan, Stanislaw Antol, Taehoon Lee, Tarang Chugh, Ted Chang, Thomas Bastiani, Xian Xu, Xiaoming (Jason) Cui, Yan Facai (颜发才), yaox12, Yashal Shakti Kanungo, Yong Tang, Yuan (Terry) Tang, Yuxin Wu, Ziyue(Louis) Lu
+
+
# Release 1.7.0
## Major Features And Improvements
diff --git a/WORKSPACE b/WORKSPACE
index 11c5cdb..4ddfb9a 100644
--- a/WORKSPACE
+++ b/WORKSPACE
@@ -2,11 +2,11 @@
http_archive(
name = "io_bazel_rules_closure",
- sha256 = "6691c58a2cd30a86776dd9bb34898b041e37136f2dc7e24cadaeaf599c95c657",
- strip_prefix = "rules_closure-08039ba8ca59f64248bb3b6ae016460fe9c9914f",
+ sha256 = "a38539c5b5c358548e75b44141b4ab637bba7c4dc02b46b1f62a96d6433f56ae",
+ strip_prefix = "rules_closure-dbb96841cc0a5fb2664c37822803b06dab20c7d1",
urls = [
- "https://mirror.bazel.build/github.com/bazelbuild/rules_closure/archive/08039ba8ca59f64248bb3b6ae016460fe9c9914f.tar.gz",
- "https://github.com/bazelbuild/rules_closure/archive/08039ba8ca59f64248bb3b6ae016460fe9c9914f.tar.gz", # 2018-01-16
+ "https://mirror.bazel.build/github.com/bazelbuild/rules_closure/archive/dbb96841cc0a5fb2664c37822803b06dab20c7d1.tar.gz",
+ "https://github.com/bazelbuild/rules_closure/archive/dbb96841cc0a5fb2664c37822803b06dab20c7d1.tar.gz", # 2018-04-13
],
)
diff --git a/tensorflow/c/c_api.h b/tensorflow/c/c_api.h
index fe85f8e..c859434 100644
--- a/tensorflow/c/c_api.h
+++ b/tensorflow/c/c_api.h
@@ -72,7 +72,7 @@
#ifdef SWIG
#define TF_CAPI_EXPORT
#else
-#if defined(COMPILER_MSVC)
+#if defined(_WIN32)
#ifdef TF_COMPILE_LIBRARY
#define TF_CAPI_EXPORT __declspec(dllexport)
#else
@@ -80,7 +80,7 @@
#endif // TF_COMPILE_LIBRARY
#else
#define TF_CAPI_EXPORT __attribute__((visibility("default")))
-#endif // COMPILER_MSVC
+#endif // _WIN32
#endif // SWIG
#ifdef __cplusplus
diff --git a/tensorflow/c/c_api_experimental.cc b/tensorflow/c/c_api_experimental.cc
index 9678ee9..d3916bc 100644
--- a/tensorflow/c/c_api_experimental.cc
+++ b/tensorflow/c/c_api_experimental.cc
@@ -184,6 +184,7 @@
return std::move(functions[0]);
}
+#if not defined(PLATFORM_WINDOWS)
// On success, returns a set of TF_Function instances encoding a dataset
// node stack that reads a Imagenet TFRecordFile dataset from `file_path`, and
// sets `dataset_name` to the created dataset name. The returned functions must
@@ -7076,7 +7077,9 @@
return CreateFunctionsFromTextProto(func_def, &mutate_proto_func, status);
#endif
}
+#endif
+#if not defined(PLATFORM_WINDOWS)
// On success, returns a set of TF_Function instances encoding a dataset
// node stack that reads an MNIST file dataset from `file_path`, and
// sets `dataset_name` to the created dataset name. The returned functions must
@@ -8221,6 +8224,7 @@
return CreateFunctionsFromTextProto(func_def, &mutate_proto_func, status);
#endif
}
+#endif
// Adds the input functions to `graph`. On success, returns the created
// IteratorGetNext node.
@@ -8314,6 +8318,13 @@
TF_Operation* TF_MakeFileBasedIteratorGetNextWithDatasets(
TF_Graph* graph, const char* file_path, int batch_size,
unsigned char is_mnist, TF_Status* status) {
+#if defined(PLATFORM_WINDOWS)
+ // TODO(ashankar): get these functions working on Windows.
+ status->status = tensorflow::errors::Unimplemented(
+ "TF_MakeFileBasedIteratorGetNextWithDatasets in the experimental C API "
+ "is not implemented for Windows");
+ return nullptr;
+#else
tensorflow::Status s;
std::string dataset_name;
@@ -8355,4 +8366,5 @@
<< graph->graph.ToGraphDefDebug().DebugString();
return getnext_node;
+#endif
}
diff --git a/tensorflow/c/c_api_experimental.h b/tensorflow/c/c_api_experimental.h
index 6663429..88cb173 100644
--- a/tensorflow/c/c_api_experimental.h
+++ b/tensorflow/c/c_api_experimental.h
@@ -35,7 +35,7 @@
#ifdef SWIG
#define TF_CAPI_EXPORT
#else
-#if defined(COMPILER_MSVC)
+#if defined(_WIN32)
#ifdef TF_COMPILE_LIBRARY
#define TF_CAPI_EXPORT __declspec(dllexport)
#else
@@ -43,7 +43,7 @@
#endif // TF_COMPILE_LIBRARY
#else
#define TF_CAPI_EXPORT __attribute__((visibility("default")))
-#endif // COMPILER_MSVC
+#endif // _WIN32
#endif // SWIG
#ifdef __cplusplus
diff --git a/tensorflow/c/eager/c_api.h b/tensorflow/c/eager/c_api.h
index 15ac0f3..ba77f3c 100644
--- a/tensorflow/c/eager/c_api.h
+++ b/tensorflow/c/eager/c_api.h
@@ -30,7 +30,7 @@
#ifdef SWIG
#define TF_CAPI_EXPORT
#else
-#if defined(COMPILER_MSVC)
+#if defined(_WIN32)
#ifdef TF_COMPILE_LIBRARY
#define TF_CAPI_EXPORT __declspec(dllexport)
#else
@@ -38,7 +38,7 @@
#endif // TF_COMPILE_LIBRARY
#else
#define TF_CAPI_EXPORT __attribute__((visibility("default")))
-#endif // COMPILER_MSVC
+#endif // _WIN32
#endif // SWIG
#ifdef __cplusplus
diff --git a/tensorflow/compiler/aot/runtime.cc b/tensorflow/compiler/aot/runtime.cc
index 5772776..5e74079 100644
--- a/tensorflow/compiler/aot/runtime.cc
+++ b/tensorflow/compiler/aot/runtime.cc
@@ -31,7 +31,7 @@
inline void* aligned_malloc(size_t size, int minimum_alignment) {
#if defined(__ANDROID__) || defined(OS_ANDROID) || defined(OS_CYGWIN)
return memalign(minimum_alignment, size);
-#elif defined(COMPILER_MSVC)
+#elif defined(_WIN32)
return _aligned_malloc(size, minimum_alignment);
#else // !__ANDROID__ && !OS_ANDROID && !OS_CYGWIN
void* ptr = nullptr;
@@ -48,7 +48,7 @@
}
inline void aligned_free(void* aligned_memory) {
-#if defined(COMPILER_MSVC)
+#if defined(_WIN32)
_aligned_free(aligned_memory);
#else
free(aligned_memory);
diff --git a/tensorflow/compiler/tests/binary_ops_test.py b/tensorflow/compiler/tests/binary_ops_test.py
index d1d7379..1e4dd32 100644
--- a/tensorflow/compiler/tests/binary_ops_test.py
+++ b/tensorflow/compiler/tests/binary_ops_test.py
@@ -360,11 +360,13 @@
np.array([2, -1], dtype=dtype),
expected=np.array([[[[3, 1], [5, 3]]]], dtype=dtype))
- self._testBinary(
- math_ops.add,
- np.array([0xffffffff, 0xfffffffff, 1, 1], dtype=np.int64),
- np.array([1, 1, 0xffffffff, 0xfffffffff], dtype=np.int64),
- expected=np.array([1 << 32, 1 << 36, 1 << 32, 1 << 36], dtype=np.int64))
+ if np.int64 in self.numeric_types:
+ self._testBinary(
+ math_ops.add,
+ np.array([0xffffffff, 0xfffffffff, 1, 1], dtype=np.int64),
+ np.array([1, 1, 0xffffffff, 0xfffffffff], dtype=np.int64),
+ expected=np.array([1 << 32, 1 << 36, 1 << 32, 1 << 36],
+ dtype=np.int64))
def testComplexOps(self):
for dtype in self.complex_types:
diff --git a/tensorflow/compiler/xla/python/xla_client_test.py b/tensorflow/compiler/xla/python/xla_client_test.py
index 6fe7b24..c073c02 100644
--- a/tensorflow/compiler/xla/python/xla_client_test.py
+++ b/tensorflow/compiler/xla/python/xla_client_test.py
@@ -1161,7 +1161,6 @@
c, expected=np.sum(input_array, axis=tuple(dims)))
_ReduceAndTest(0)
- _ReduceAndTest(0)
_ReduceAndTest(0, 1)
_ReduceAndTest(0, 2)
_ReduceAndTest(1, 2)
diff --git a/tensorflow/compiler/xla/service/gpu/cudnn_convolution_algorithm_picker.cc b/tensorflow/compiler/xla/service/gpu/cudnn_convolution_algorithm_picker.cc
index 1790c50..c4c56c5 100644
--- a/tensorflow/compiler/xla/service/gpu/cudnn_convolution_algorithm_picker.cc
+++ b/tensorflow/compiler/xla/service/gpu/cudnn_convolution_algorithm_picker.cc
@@ -97,9 +97,9 @@
const ConvolutionDimensionNumbers& dnums,
se::StreamExecutor* stream_exec) {
// Skip this check for cudnn7 and newer.
- se::port::StatusOr<std::tuple<int, int, int>> version =
+ auto version =
stream_exec->AsDnn()->GetVersion();
- if (version.ok() && std::get<0>(version.ValueOrDie()) >= 7) {
+ if (version.ok() && version.ValueOrDie().major_version() >= 7) {
return true;
}
diff --git a/tensorflow/compiler/xla/tests/dot_operation_test.cc b/tensorflow/compiler/xla/tests/dot_operation_test.cc
index 7b994a4..c4031df 100644
--- a/tensorflow/compiler/xla/tests/dot_operation_test.cc
+++ b/tensorflow/compiler/xla/tests/dot_operation_test.cc
@@ -50,6 +50,13 @@
using TypesF16F32F64 = ::testing::Types<Eigen::half, float, double>;
using TypesF16F32F64CF64 =
::testing::Types<Eigen::half, float, double, complex64>;
+#elif !defined(XLA_BACKEND_DOES_NOT_SUPPORT_FLOAT16) && \
+ defined(XLA_BACKEND_DOES_NOT_SUPPORT_FLOAT64) && \
+ defined(XLA_BACKEND_DOES_NOT_SUPPORT_COMPLEX)
+using TypesF16F32 = ::testing::Types<Eigen::half, float>;
+using TypesF16F32F64 = ::testing::Types<Eigen::half, float>;
+using TypesF16F32F64CF64 =
+ ::testing::Types<Eigen::half, float>;
#else
#error "Situation not handled yet"
#endif
diff --git a/tensorflow/contrib/autograph/converters/call_trees.py b/tensorflow/contrib/autograph/converters/call_trees.py
index 2e5590b..554f047 100644
--- a/tensorflow/contrib/autograph/converters/call_trees.py
+++ b/tensorflow/contrib/autograph/converters/call_trees.py
@@ -146,7 +146,7 @@
# Inspect the target function decorators. If any include a @convert
# or @graph_ready annotation, then they must be called as they are.
# TODO(mdan): This may be quite heavy.
- # To parse and re-analize each function for every call site could be quite
+ # To parse and re-analyze each function for every call site could be quite
# wasteful. Maybe we could cache the parsed AST?
try:
target_node, _ = parser.parse_entity(target_entity)
diff --git a/tensorflow/contrib/autograph/converters/call_trees_test.py b/tensorflow/contrib/autograph/converters/call_trees_test.py
index c666dcb..303dd54 100644
--- a/tensorflow/contrib/autograph/converters/call_trees_test.py
+++ b/tensorflow/contrib/autograph/converters/call_trees_test.py
@@ -34,7 +34,7 @@
def test_basic(self):
def test_fn_1(_):
- raise ValueError('This should not be called in the compiled verison.')
+ raise ValueError('This should not be called in the compiled version.')
def renamed_test_fn_1(a):
return a + 1
diff --git a/tensorflow/contrib/autograph/converters/decorators_test.py b/tensorflow/contrib/autograph/converters/decorators_test.py
index e67ab1c..9c01f68 100644
--- a/tensorflow/contrib/autograph/converters/decorators_test.py
+++ b/tensorflow/contrib/autograph/converters/decorators_test.py
@@ -28,7 +28,7 @@
# The Python parser only briefly captures decorators into the AST.
# The interpreter desugars them on load, and the decorated function loses any
-# trace of the decorator (which is notmally what you would expect, since
+# trace of the decorator (which is normally what you would expect, since
# they are meant to be transparent).
# However, decorators are still visible when you analyze the function
# from inside a decorator, before it was applied - as is the case
diff --git a/tensorflow/contrib/autograph/impl/api.py b/tensorflow/contrib/autograph/impl/api.py
index d874ef1..24f87b2 100644
--- a/tensorflow/contrib/autograph/impl/api.py
+++ b/tensorflow/contrib/autograph/impl/api.py
@@ -49,7 +49,7 @@
function is called. This means the parameter values are known at compilation.
Args:
- recursive: Whether to recusrively convert any functions that the decorator
+ recursive: Whether to recursively convert any functions that the decorator
function may call.
verbose: Whether to output the compiled code in the logs.
arg_types: See to_graph.
@@ -215,7 +215,7 @@
Args:
e: A Python entity.
- recursive: Whether to recusrively convert any functions that the decorator
+ recursive: Whether to recursively convert any functions that the decorator
function may call.
verbose: Whether to output the compiled code in the logs.
arg_values: A dict containing value hints for symbols like function
diff --git a/tensorflow/contrib/autograph/impl/conversion.py b/tensorflow/contrib/autograph/impl/conversion.py
index e7230a5..55a30dc 100644
--- a/tensorflow/contrib/autograph/impl/conversion.py
+++ b/tensorflow/contrib/autograph/impl/conversion.py
@@ -61,7 +61,7 @@
This object is mutable, and is updated as functions are converted.
Attributes:
- recursive: Whether to recusrively convert any functions that the decorator
+ recursive: Whether to recursively convert any functions that the decorator
function may call.
nocompile_decorators: tuple of decorator functions that toggle compilation
off.
diff --git a/tensorflow/contrib/autograph/pyct/static_analysis/activity.py b/tensorflow/contrib/autograph/pyct/static_analysis/activity.py
index b81f5c7..2c14c2c 100644
--- a/tensorflow/contrib/autograph/pyct/static_analysis/activity.py
+++ b/tensorflow/contrib/autograph/pyct/static_analysis/activity.py
@@ -162,11 +162,11 @@
self.parent.mark_returned(name)
-class ActivityAnalizer(transformer.Base):
+class ActivityAnalyzer(transformer.Base):
"""Annotates nodes with local scope information. See Scope."""
def __init__(self, context, parent_scope):
- super(ActivityAnalizer, self).__init__(context)
+ super(ActivityAnalyzer, self).__init__(context)
self.scope = Scope(parent_scope)
self._in_return_statement = False
@@ -356,4 +356,4 @@
def resolve(node, context, parent_scope=None):
- return ActivityAnalizer(context, parent_scope).visit(node)
+ return ActivityAnalyzer(context, parent_scope).visit(node)
diff --git a/tensorflow/contrib/autograph/pyct/static_analysis/activity_test.py b/tensorflow/contrib/autograph/pyct/static_analysis/activity_test.py
index d1c4a94..ef79a29 100644
--- a/tensorflow/contrib/autograph/pyct/static_analysis/activity_test.py
+++ b/tensorflow/contrib/autograph/pyct/static_analysis/activity_test.py
@@ -108,7 +108,7 @@
self.assertFalse(QN('a') in child.referenced)
-class ActivityAnalizerTest(test.TestCase):
+class ActivityAnalyzerTest(test.TestCase):
def _parse_and_analyze(self, test_fn):
node, source = parser.parse_entity(test_fn)
diff --git a/tensorflow/contrib/autograph/pyct/static_analysis/annos.py b/tensorflow/contrib/autograph/pyct/static_analysis/annos.py
index d6d9f7e..b929b35 100644
--- a/tensorflow/contrib/autograph/pyct/static_analysis/annos.py
+++ b/tensorflow/contrib/autograph/pyct/static_analysis/annos.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
-"""Annotations used by the static analizer."""
+"""Annotations used by the static analyzer."""
from __future__ import absolute_import
from __future__ import division
@@ -28,15 +28,15 @@
class NodeAnno(NoValue):
- """Additionnal annotations used by the static analyzer.
+ """Additional annotations used by the static analyzer.
These are in addition to the basic annotations declared in anno.py.
"""
# Symbols
# These flags are boolean.
- IS_LOCAL = 'Symbol is local to the function scope being analized.'
- IS_PARAM = 'Symbol is a parameter to the function being analized.'
+ IS_LOCAL = 'Symbol is local to the function scope being analyzed.'
+ IS_PARAM = 'Symbol is a parameter to the function being analyzed.'
IS_MODIFIED_SINCE_ENTRY = (
'Symbol has been explicitly replaced in the current function scope.')
diff --git a/tensorflow/contrib/autograph/utils/builtins.py b/tensorflow/contrib/autograph/utils/builtins.py
index dfc3c86..211e8ea 100644
--- a/tensorflow/contrib/autograph/utils/builtins.py
+++ b/tensorflow/contrib/autograph/utils/builtins.py
@@ -77,7 +77,7 @@
def dynamic_print(*values):
- """Implementartion of print using dynamic dispatch.
+ """Implementation of print using dynamic dispatch.
The function attempts to use tf.Print if all the values are compatible.
Otherwise, it will fall back to py_func.
diff --git a/tensorflow/contrib/bayesflow/python/ops/monte_carlo_impl.py b/tensorflow/contrib/bayesflow/python/ops/monte_carlo_impl.py
index d193a8459..032b859 100644
--- a/tensorflow/contrib/bayesflow/python/ops/monte_carlo_impl.py
+++ b/tensorflow/contrib/bayesflow/python/ops/monte_carlo_impl.py
@@ -44,15 +44,13 @@
n=None,
seed=None,
name='expectation_importance_sampler'):
- r"""Monte Carlo estimate of `\\(E_p[f(Z)] = E_q[f(Z) p(Z) / q(Z)]\\)`.
+ r"""Monte Carlo estimate of \\(E_p[f(Z)] = E_q[f(Z) p(Z) / q(Z)]\\).
- With `\\(p(z) := exp^{log_p(z)}\\)`, this `Op` returns
+ With \\(p(z) := exp^{log_p(z)}\\), this `Op` returns
- ```
\\(n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ], z_i ~ q,\\)
\\(\approx E_q[ f(Z) p(Z) / q(Z) ]\\)
\\(= E_p[f(Z)]\\)
- ```
This integral is done in log-space with max-subtraction to better handle the
often extreme values that `f(z) p(z) / q(z)` can take on.
@@ -121,14 +119,12 @@
name='expectation_importance_sampler_logspace'):
r"""Importance sampling with a positive function, in log-space.
- With `\\(p(z) := exp^{log_p(z)}\\)`, and `\\(f(z) = exp{log_f(z)}\\)`,
+ With \\(p(z) := exp^{log_p(z)}\\), and \\(f(z) = exp{log_f(z)}\\),
this `Op` returns
- ```
\\(Log[ n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ] ], z_i ~ q,\\)
\\(\approx Log[ E_q[ f(Z) p(Z) / q(Z) ] ]\\)
\\(= Log[E_p[f(Z)]]\\)
- ```
This integral is done in log-space with max-subtraction to better handle the
often extreme values that `f(z) p(z) / q(z)` can take on.
@@ -196,13 +192,11 @@
def expectation(f, samples, log_prob=None, use_reparametrization=True,
axis=0, keep_dims=False, name=None):
- """Computes the Monte-Carlo approximation of `\\(E_p[f(X)]\\)`.
+ """Computes the Monte-Carlo approximation of \\(E_p[f(X)]\\).
This function computes the Monte-Carlo approximation of an expectation, i.e.,
- ```none
\\(E_p[f(X)] \approx= m^{-1} sum_i^m f(x_j), x_j\ ~iid\ p(X)\\)
- ```
where:
@@ -216,8 +210,8 @@
parameterless distribution (e.g.,
`Normal(Y; m, s) <=> Y = sX + m, X ~ Normal(0,1)`), we can swap gradient and
expectation, i.e.,
- `grad[ Avg{ \\(s_i : i=1...n\\) } ] = Avg{ grad[\\(s_i\\)] : i=1...n }` where
- `S_n = Avg{\\(s_i\\)}` and `\\(s_i = f(x_i), x_i ~ p\\)`.
+ grad[ Avg{ \\(s_i : i=1...n\\) } ] = Avg{ grad[\\(s_i\\)] : i=1...n } where
+  S_n = Avg{\\(s_i\\)} and \\(s_i = f(x_i), x_i ~ p\\).
However, if p is not reparameterized, TensorFlow's gradient will be incorrect
since the chain-rule stops at samples of non-reparameterized distributions.
@@ -296,7 +290,7 @@
Args:
f: Python callable which can return `f(samples)`.
samples: `Tensor` of samples used to form the Monte-Carlo approximation of
- `\\(E_p[f(X)]\\)`. A batch of samples should be indexed by `axis`
+ \\(E_p[f(X)]\\). A batch of samples should be indexed by `axis`
dimensions.
log_prob: Python callable which can return `log_prob(samples)`. Must
correspond to the natural-logarithm of the pdf/pmf of each sample. Only
@@ -317,7 +311,7 @@
Returns:
approx_expectation: `Tensor` corresponding to the Monte-Carlo approximation
- of `\\(E_p[f(X)]\\)`.
+ of \\(E_p[f(X)]\\).
Raises:
ValueError: if `f` is not a Python `callable`.
@@ -329,7 +323,7 @@
if not callable(f):
raise ValueError('`f` must be a callable function.')
if use_reparametrization:
- return math_ops.reduce_mean(f(samples), axis=axis, keep_dims=keep_dims)
+ return math_ops.reduce_mean(f(samples), axis=axis, keepdims=keep_dims)
else:
if not callable(log_prob):
raise ValueError('`log_prob` must be a callable function.')
@@ -349,7 +343,7 @@
# "Is there a floating point value of x, for which x-x == 0 is false?"
# http://stackoverflow.com/q/2686644
fx += stop(fx) * (logpx - stop(logpx)) # Add zeros_like(logpx).
- return math_ops.reduce_mean(fx, axis=axis, keep_dims=keep_dims)
+ return math_ops.reduce_mean(fx, axis=axis, keepdims=keep_dims)
def _sample_mean(values):
diff --git a/tensorflow/contrib/boosted_trees/python/training/functions/gbdt_batch_test.py b/tensorflow/contrib/boosted_trees/python/training/functions/gbdt_batch_test.py
index 17dcb49..f9c2228 100644
--- a/tensorflow/contrib/boosted_trees/python/training/functions/gbdt_batch_test.py
+++ b/tensorflow/contrib/boosted_trees/python/training/functions/gbdt_batch_test.py
@@ -45,7 +45,7 @@
def _squared_loss(label, unused_weights, predictions):
"""Unweighted loss implementation."""
loss = math_ops.reduce_sum(
- math_ops.square(predictions - label), 1, keep_dims=True)
+ math_ops.square(predictions - label), 1, keepdims=True)
return loss
diff --git a/tensorflow/contrib/checkpoint/python/split_dependency_test.py b/tensorflow/contrib/checkpoint/python/split_dependency_test.py
index cb964c8..f1d9d19 100644
--- a/tensorflow/contrib/checkpoint/python/split_dependency_test.py
+++ b/tensorflow/contrib/checkpoint/python/split_dependency_test.py
@@ -73,7 +73,7 @@
class SplitTests(test.TestCase):
- @test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
+ @test_util.run_in_graph_and_eager_modes()
def testSaveRestoreSplitDep(self):
save_checkpoint = checkpointable_utils.Checkpoint(
dep=SaveTensorSlicesAsDeps())
diff --git a/tensorflow/contrib/cmake/CMakeLists.txt b/tensorflow/contrib/cmake/CMakeLists.txt
index bdf3e98..5f38a8e 100644
--- a/tensorflow/contrib/cmake/CMakeLists.txt
+++ b/tensorflow/contrib/cmake/CMakeLists.txt
@@ -31,10 +31,14 @@
option(tensorflow_BUILD_MORE_PYTHON_TESTS "Build more python unit tests for contrib packages" OFF)
option(tensorflow_BUILD_SHARED_LIB "Build TensorFlow as a shared library" OFF)
option(tensorflow_OPTIMIZE_FOR_NATIVE_ARCH "Enable compiler optimizations for the native processor architecture (if available)" ON)
-option(tensorflow_WIN_CPU_SIMD_OPTIONS "Enables CPU SIMD instructions")
option(tensorflow_ENABLE_SNAPPY_SUPPORT "Enable SNAPPY compression support" ON)
option(tensorflow_DISABLE_EIGEN_FORCEINLINE "Disable forceinline, to speed up build on windows." OFF)
+# SIMD, MKL and MKLDNN options
+option(tensorflow_WIN_CPU_SIMD_OPTIONS "Enables CPU SIMD instructions" OFF)
+option(tensorflow_ENABLE_MKL_SUPPORT "Enable Intel MKL support" OFF)
+option(tensorflow_ENABLE_MKLDNN_SUPPORT "Enable Intel MKLDNN support, requires MKL enabled" OFF)
+
# GPU, CUDA and cuDNN options
option(tensorflow_ENABLE_GPU "Enable GPU support" OFF)
set(tensorflow_CUDA_VERSION "9.0" CACHE STRING "CUDA version to build against")
@@ -124,8 +128,16 @@
add_definitions(-DEIGEN_AVOID_STL_ARRAY)
if(WIN32)
- add_definitions(-DNOMINMAX -D_WIN32_WINNT=0x0A00 -DLANG_CXX11 -DCOMPILER_MSVC)
- add_definitions(-DWIN32 -DOS_WIN -D_MBCS -DWIN64 -DWIN32_LEAN_AND_MEAN -DNOGDI -DPLATFORM_WINDOWS)
+ if(CMAKE_SIZEOF_VOID_P EQUAL 8)
+ # 64 bits
+ add_definitions(-DWIN64)
+ elseif(CMAKE_SIZEOF_VOID_P EQUAL 4)
+ # 32 bits
+ # temporary fix for #18241
+ add_definitions(-DEIGEN_DEFAULT_DENSE_INDEX_TYPE=std::int64_t)
+ endif()
+ add_definitions(-DNOMINMAX -D_WIN32_WINNT=0x0A00 -DLANG_CXX11)
+ add_definitions(-DWIN32 -DOS_WIN -D_MBCS -DWIN32_LEAN_AND_MEAN -DNOGDI -DPLATFORM_WINDOWS)
add_definitions(-DTENSORFLOW_USE_EIGEN_THREADPOOL -DEIGEN_HAS_C99_MATH)
add_definitions(-DTF_COMPILE_LIBRARY)
add_definitions(/bigobj /nologo /EHsc /GF /MP /Gm-)
@@ -162,12 +174,21 @@
# MSVC SIMD instructions
if (tensorflow_WIN_CPU_SIMD_OPTIONS)
+ include(CheckCXXCompilerFlag)
+ if (tensorflow_ENABLE_MKL_SUPPORT)
+ add_definitions(-DINTEL_MKL -DEIGEN_USE_VML)
+ if (NOT tensorflow_ENABLE_MKLDNN_SUPPORT)
+ add_definitions(-DINTEL_MKL_ML)
+ endif()
+ endif()
+ CHECK_CXX_COMPILER_FLAG("-fopenmp" COMPILER_OPT_OPENMP_SUPPORT)
+ if (COMPILER_OPT_OPENMP_SUPPORT)
+ set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fopenmp")
+ endif()
if (WIN32)
- CHECK_CXX_COMPILER_FLAG("${tensorflow_WIN_CPU_SIMD_OPTIONS}" COMPILER_OPT_WIN_CPU_SIMD_SUPPORTED)
+ CHECK_CXX_COMPILER_FLAG(${tensorflow_WIN_CPU_SIMD_OPTIONS} COMPILER_OPT_WIN_CPU_SIMD_SUPPORTED)
if(COMPILER_OPT_WIN_CPU_SIMD_SUPPORTED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${tensorflow_WIN_CPU_SIMD_OPTIONS}")
- else()
- message(FATAL_ERROR "${tensorflow_WIN_CPU_SIMD_OPTIONS} not supported")
endif()
endif()
endif()
@@ -302,6 +323,43 @@
list(APPEND tensorflow_EXTERNAL_LIBRARIES network)
endif()
+if (tensorflow_ENABLE_MKL_SUPPORT)
+ if (WIN32)
+ find_path(MKL_HOME_PLATFORM mkl
+ PATHS ${MKL_HOME} ${MKL_HOME}/../ ${MKL_HOME}/../../
+ PATH_SUFFIXES windows)
+ set(MKL_INCLUDE_DIRS ${MKL_HOME_PLATFORM}/mkl/include)
+ set(MKL_LINK_DIRS
+ ${MKL_HOME_PLATFORM}/mkl/lib/intel64
+ ${MKL_HOME_PLATFORM}/tbb/lib/intel64/vc_mt
+ ${MKL_HOME_PLATFORM}/compiler/lib/intel64
+ ${MKL_HOME_PLATFORM}/mkl/tools/builder/lib)
+ set(MKL_REDIST_DLL_DIRS
+ ${MKL_HOME_PLATFORM}/redist/intel64/mkl
+ ${MKL_HOME_PLATFORM}/redist/intel64/tbb/vc_mt
+ ${MKL_HOME_PLATFORM}/redist/intel64/compiler)
+ list(APPEND tensorflow_EXTERNAL_LIBRARIES
+ mkl_intel_lp64_dll mkl_sequential_dll mkl_core_dll mkl_rt mkl_cdll_intel64)
+ endif()
+ if (UNIX)
+    # FIXME: complete the path on Linux
+ find_path(MKL_HOME_PLATFORM mkl
+ HINTS ${MKL_HOME} ${MKL_HOME}/../ ${MKL_HOME}/../../
+ PATH_SUFFIXES linux)
+ set(MKL_INCLUDE_DIRS ${MKL_HOME_PLATFORM}/mkl/include)
+    set(MKL_LINK_DIRS) # incomplete
+    set(MKL_REDIST_SO_DIRS) # incomplete
+ endif()
+ include_directories(${MKL_INCLUDE_DIRS})
+ link_directories(${MKL_LINK_DIRS})
+ if (tensorflow_ENABLE_MKLDNN_SUPPORT)
+ include(mkldnn)
+ list(APPEND tensorflow_EXTERNAL_LIBRARIES ${mkldnn_STATIC_LIBRARIES})
+ list(APPEND tensorflow_EXTERNAL_DEPENDENCIES mkldnn)
+ include_directories(${mkldnn_INCLUDE_DIRS})
+ endif()
+endif (tensorflow_ENABLE_MKL_SUPPORT)
+
if (tensorflow_ENABLE_GPU)
if (NOT WIN32)
# Default install paths for cuda libraries in Linux
diff --git a/tensorflow/contrib/cmake/README.md b/tensorflow/contrib/cmake/README.md
index fe83bb3..0b79f71 100644
--- a/tensorflow/contrib/cmake/README.md
+++ b/tensorflow/contrib/cmake/README.md
@@ -128,6 +128,18 @@
D:\local\cuda\bin
```
+ * When building with MKL support after installing [MKL](https://software.intel.com/en-us/mkl) from Intel, append its bin directories to your PATH environment variable.
+
+ If TensorFlow fails to find the MKL DLLs during initialization, check your PATH environment variable.
+ It should contain the directories of the MKL DLLs. For example:
+
+ ```
+ D:\Tools\IntelSWTools\compilers_and_libraries\windows\redist\intel64\mkl
+ D:\Tools\IntelSWTools\compilers_and_libraries\windows\redist\intel64\compiler
+ D:\Tools\IntelSWTools\compilers_and_libraries\windows\redist\intel64\tbb\vc_mt
+ ```
+
+
* We assume that `cmake` and `git` are installed and in your `%PATH%`. If
for example `cmake` is not in your path and it is installed in
`C:\Program Files (x86)\CMake\bin\cmake.exe`, you can add this directory
@@ -166,7 +178,15 @@
More? -Dtensorflow_ENABLE_GPU=ON ^
More? -DCUDNN_HOME="D:\...\cudnn"
```
+ To build with MKL support, add "^" at the end of the last line above, followed by:
+
+ ```
+ More? -Dtensorflow_ENABLE_MKL_SUPPORT=ON ^
+ More? -DMKL_HOME="D:\...\compilers_and_libraries"
+ ```
+
To enable SIMD instructions with MSVC, such as AVX and SSE, define them as follows:
+
```
More? -Dtensorflow_WIN_CPU_SIMD_OPTIONS=/arch:AVX
```
@@ -226,6 +246,7 @@
```
ctest -C RelWithDebInfo
```
+
* `-Dtensorflow_BUILD_MORE_PYTHON_TESTS=(ON|OFF)`. Defaults to `OFF`. This enables python tests on
several major packages. This option is only valid if this and tensorflow_BUILD_PYTHON_TESTS are both set to `ON`.
After building the python wheel, you need to install the new wheel before running the tests.
@@ -234,6 +255,12 @@
ctest -C RelWithDebInfo
```
+ * `-Dtensorflow_ENABLE_MKL_SUPPORT=(ON|OFF)`. Defaults to `OFF`. Include MKL support. If MKL is enabled you need to install the [Intel Math Kernel Library](https://software.intel.com/en-us/mkl).
+ CMake expects the location of MKL in `-DMKL_HOME=path_you_install_mkl`.
+
+ * `-Dtensorflow_ENABLE_MKLDNN_SUPPORT=(ON|OFF)`. Defaults to `OFF`. Include MKL DNN support. MKL DNN is the [Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN)](https://github.com/intel/mkl-dnn). You must also set `-Dtensorflow_ENABLE_MKL_SUPPORT=ON` to include MKL DNN support.
+
+
4. Invoke MSBuild to build TensorFlow.
To build the C++ example program, which will be created as a `.exe`
@@ -251,6 +278,7 @@
D:\...\build> MSBuild /p:Configuration=Release tf_python_build_pip_package.vcxproj
```
+
Linux Continuous Integration build
==================================
diff --git a/tensorflow/contrib/cmake/external/gemmlowp.cmake b/tensorflow/contrib/cmake/external/gemmlowp.cmake
index a235442..cdaa6b7 100644
--- a/tensorflow/contrib/cmake/external/gemmlowp.cmake
+++ b/tensorflow/contrib/cmake/external/gemmlowp.cmake
@@ -14,8 +14,8 @@
# ==============================================================================
include (ExternalProject)
-set(gemmlowp_URL https://github.com/google/gemmlowp/archive/6a2a90822e8546fc2bfa7044de0faf1c1cb4862f.zip)
-set(gemmlowp_HASH SHA256=3447948d219f3270383766bbe08942888c0eb4e0ca6663c0e0548502ec5bb77d)
+set(gemmlowp_URL https://github.com/google/gemmlowp/archive/38ebac7b059e84692f53e5938f97a9943c120d98.zip)
+set(gemmlowp_HASH SHA256=b87faa7294dfcc5d678f22a59d2c01ca94ea1e2a3b488c38a95a67889ed0a658)
set(gemmlowp_BUILD ${CMAKE_CURRENT_BINARY_DIR}/gemmlowp/src/gemmlowp)
set(gemmlowp_INCLUDE_DIR ${CMAKE_CURRENT_BINARY_DIR}/gemmlowp/src/gemmlowp)
diff --git a/tensorflow/contrib/cmake/external/mkldnn.cmake b/tensorflow/contrib/cmake/external/mkldnn.cmake
new file mode 100644
index 0000000..a639fde
--- /dev/null
+++ b/tensorflow/contrib/cmake/external/mkldnn.cmake
@@ -0,0 +1,44 @@
+# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+include (ExternalProject)
+
+set(mkldnn_INCLUDE_DIRS ${CMAKE_CURRENT_BINARY_DIR}/mkldnn/src/mkldnn/include)
+set(mkldnn_URL https://github.com/01org/mkl-dnn.git)
+set(mkldnn_BUILD ${CMAKE_CURRENT_BINARY_DIR}/mkldnn/src/mkldnn/src)
+set(mkldnn_TAG 3063b2e4c943983f6bf5f2fb9a490d4a998cd291)
+
+if(WIN32)
+ if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
+ set(mkldnn_STATIC_LIBRARIES ${CMAKE_CURRENT_BINARY_DIR}/mkldnn/src/mkldnn/src/Release/mkldnn.lib)
+ else()
+ set(mkldnn_STATIC_LIBRARIES ${CMAKE_CURRENT_BINARY_DIR}/mkldnn/src/mkldnn/src/mkldnn.lib)
+ endif()
+else()
+ set(mkldnn_STATIC_LIBRARIES ${CMAKE_CURRENT_BINARY_DIR}/mkldnn/src/mkldnn/src/libmkldnn.a)
+endif()
+
+ExternalProject_Add(mkldnn
+ PREFIX mkldnn
+ GIT_REPOSITORY ${mkldnn_URL}
+ GIT_TAG ${mkldnn_TAG}
+ DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
+ BUILD_IN_SOURCE 1
+ BUILD_BYPRODUCTS ${mkldnn_STATIC_LIBRARIES}
+ INSTALL_COMMAND ""
+ CMAKE_CACHE_ARGS
+ -DCMAKE_BUILD_TYPE:STRING=Release
+ -DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF
+ -DMKLINC:STRING=${MKL_INCLUDE_DIRS}
+)
diff --git a/tensorflow/contrib/cmake/external/png.cmake b/tensorflow/contrib/cmake/external/png.cmake
index 6cd66a6..ad2af01 100644
--- a/tensorflow/contrib/cmake/external/png.cmake
+++ b/tensorflow/contrib/cmake/external/png.cmake
@@ -15,32 +15,33 @@
include (ExternalProject)
set(png_INCLUDE_DIR ${CMAKE_CURRENT_BINARY_DIR}/external/png_archive)
-set(png_URL https://storage.googleapis.com/libpng-public-archive/libpng-1.2.53.tar.gz)
-set(png_HASH SHA256=e05c9056d7f323088fd7824d8c6acc03a4a758c4b4916715924edc5dd3223a72)
+set(png_URL https://mirror.bazel.build/github.com/glennrp/libpng/archive/v1.6.34.tar.gz)
+set(png_HASH SHA256=e45ce5f68b1d80e2cb9a2b601605b374bdf51e1798ef1c2c2bd62131dfcf9eef)
set(png_BUILD ${CMAKE_BINARY_DIR}/png/src/png)
set(png_INSTALL ${CMAKE_BINARY_DIR}/png/install)
if(WIN32)
if(${CMAKE_GENERATOR} MATCHES "Visual Studio.*")
set(png_STATIC_LIBRARIES
- debug ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_staticd.lib
- optimized ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_static.lib)
+ debug ${CMAKE_BINARY_DIR}/png/install/lib/libpng16_staticd.lib
+ optimized ${CMAKE_BINARY_DIR}/png/install/lib/libpng16_static.lib)
else()
if(CMAKE_BUILD_TYPE EQUAL Debug)
set(png_STATIC_LIBRARIES
- ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_staticd.lib)
+ ${CMAKE_BINARY_DIR}/png/install/lib/libpng16_staticd.lib)
else()
set(png_STATIC_LIBRARIES
- ${CMAKE_BINARY_DIR}/png/install/lib/libpng12_static.lib)
+ ${CMAKE_BINARY_DIR}/png/install/lib/libpng16_static.lib)
endif()
endif()
else()
- set(png_STATIC_LIBRARIES ${CMAKE_BINARY_DIR}/png/install/lib/libpng12.a)
+ set(png_STATIC_LIBRARIES ${CMAKE_BINARY_DIR}/png/install/lib/libpng16.a)
endif()
set(png_HEADERS
- "${png_INSTALL}/include/libpng12/png.h"
- "${png_INSTALL}/include/libpng12/pngconf.h"
+ "${png_INSTALL}/include/libpng16/png.h"
+ "${png_INSTALL}/include/libpng16/pngconf.h"
+ "${png_INSTALL}/include/libpng16/pnglibconf.h"
)
ExternalProject_Add(png
diff --git a/tensorflow/contrib/cmake/external/sqlite.cmake b/tensorflow/contrib/cmake/external/sqlite.cmake
index 57c4ae7..7f835d2 100644
--- a/tensorflow/contrib/cmake/external/sqlite.cmake
+++ b/tensorflow/contrib/cmake/external/sqlite.cmake
@@ -15,8 +15,8 @@
include (ExternalProject)
set(sqlite_INCLUDE_DIR ${CMAKE_CURRENT_BINARY_DIR}/external/sqlite)
-set(sqlite_URL https://mirror.bazel.build/www.sqlite.org/2017/sqlite-amalgamation-3200000.zip)
-set(sqlite_HASH SHA256=208780b3616f9de0aeb50822b7a8f5482f6515193859e91ed61637be6ad74fd4)
+set(sqlite_URL https://mirror.bazel.build/www.sqlite.org/2018/sqlite-amalgamation-3230100.zip)
+set(sqlite_HASH SHA256=4239a1f69e5721d07d9a374eb84d594225229e54be4ee628da2995f4315d8dfc)
set(sqlite_BUILD ${CMAKE_CURRENT_BINARY_DIR}/sqlite/src/sqlite)
set(sqlite_INSTALL ${CMAKE_CURRENT_BINARY_DIR}/sqlite/install)
diff --git a/tensorflow/contrib/cmake/tf_core_framework.cmake b/tensorflow/contrib/cmake/tf_core_framework.cmake
index a1c3203..b47c32f 100644
--- a/tensorflow/contrib/cmake/tf_core_framework.cmake
+++ b/tensorflow/contrib/cmake/tf_core_framework.cmake
@@ -276,7 +276,7 @@
add_custom_command(OUTPUT
${VERSION_INFO_CC}
COMMAND ${PYTHON_EXECUTABLE} ${tensorflow_source_dir}/tensorflow/tools/git/gen_git_source.py
- --raw_generate ${VERSION_INFO_CC}
+ ARGS --raw_generate ${VERSION_INFO_CC} --source_dir ${tensorflow_source_dir} --git_tag_override=${GIT_TAG_OVERRIDE}
DEPENDS __force_rebuild)
set(tf_version_srcs ${tensorflow_source_dir}/tensorflow/core/util/version_info.cc)
@@ -341,9 +341,3 @@
tf_core_lib
proto_text
)
-
-if(WIN32)
- # Cmake > 3.6 will quote this as -D"__VERSION__=\"MSVC\"" which nvcc fails on.
- # Instead of defining this global, limit it to tf_core_framework where its used.
- target_compile_definitions(tf_core_framework PRIVATE __VERSION__="MSVC")
-endif()
diff --git a/tensorflow/contrib/cmake/tf_python.cmake b/tensorflow/contrib/cmake/tf_python.cmake
index f6aaf41..c4bdb69 100755
--- a/tensorflow/contrib/cmake/tf_python.cmake
+++ b/tensorflow/contrib/cmake/tf_python.cmake
@@ -554,12 +554,13 @@
set(pywrap_tensorflow_deffile "${CMAKE_CURRENT_BINARY_DIR}/pywrap_tensorflow.def")
endif()
set_source_files_properties(${pywrap_tensorflow_deffile} PROPERTIES GENERATED TRUE)
-
+ math(EXPR tensorflow_target_bitness "${CMAKE_SIZEOF_VOID_P}*8")
add_custom_command(TARGET pywrap_tensorflow_internal_static POST_BUILD
COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/tools/create_def_file.py
--input "${pywrap_tensorflow_internal_static_dependencies}"
--output "${pywrap_tensorflow_deffile}"
--target _pywrap_tensorflow_internal.pyd
+ --bitness "${tensorflow_target_bitness}"
BYPRODUCTS ${pywrap_tensorflow_deffile} # Required for Ninja
)
endif(WIN32)
@@ -589,6 +590,12 @@
${pywrap_tensorflow_deffile}
)
+# There is a bug in GCC 5 resulting in undefined reference to a __cpu_model function when
+# linking to the tensorflow library. Adding the following libraries fixes it.
+if(CMAKE_COMPILER_IS_GNUCC AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 5.0)
+ target_link_libraries(pywrap_tensorflow_internal PRIVATE gcc_s gcc)
+endif()
+
if(WIN32)
add_dependencies(pywrap_tensorflow_internal pywrap_tensorflow_internal_static)
endif(WIN32)
diff --git a/tensorflow/contrib/cmake/tf_shared_lib.cmake b/tensorflow/contrib/cmake/tf_shared_lib.cmake
index 9738bbe..38f4045 100644
--- a/tensorflow/contrib/cmake/tf_shared_lib.cmake
+++ b/tensorflow/contrib/cmake/tf_shared_lib.cmake
@@ -52,12 +52,13 @@
set(tensorflow_deffile "${CMAKE_CURRENT_BINARY_DIR}/tensorflow.def")
endif()
set_source_files_properties(${tensorflow_deffile} PROPERTIES GENERATED TRUE)
-
+ math(EXPR tensorflow_target_bitness "${CMAKE_SIZEOF_VOID_P}*8")
add_custom_command(TARGET tensorflow_static POST_BUILD
COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/tools/create_def_file.py
--input "${tensorflow_static_dependencies}"
--output "${tensorflow_deffile}"
--target tensorflow.dll
+ --bitness "${tensorflow_target_bitness}"
)
endif(WIN32)
diff --git a/tensorflow/contrib/cmake/tf_stream_executor.cmake b/tensorflow/contrib/cmake/tf_stream_executor.cmake
index 91ca33f..af48ef1 100644
--- a/tensorflow/contrib/cmake/tf_stream_executor.cmake
+++ b/tensorflow/contrib/cmake/tf_stream_executor.cmake
@@ -65,6 +65,12 @@
file(GLOB tf_stream_executor_gpu_srcs
"${tensorflow_source_dir}/tensorflow/stream_executor/cuda/*.cc"
)
+ if (NOT tensorflow_BUILD_CC_TESTS)
+ file(GLOB tf_stream_executor_gpu_tests
+ "${tensorflow_source_dir}/tensorflow/stream_executor/cuda/*_test.cc"
+ )
+ list(REMOVE_ITEM tf_stream_executor_gpu_srcs ${tf_stream_executor_gpu_tests})
+ endif()
list(APPEND tf_stream_executor_srcs ${tf_stream_executor_gpu_srcs})
endif()
diff --git a/tensorflow/contrib/cmake/tools/create_def_file.py b/tensorflow/contrib/cmake/tools/create_def_file.py
index 53c2285..cffe069 100644
--- a/tensorflow/contrib/cmake/tools/create_def_file.py
+++ b/tensorflow/contrib/cmake/tools/create_def_file.py
@@ -63,7 +63,7 @@
r"^(TFE_\w*)$|"
r"tensorflow::|"
r"functor::|"
- r"nsync_|"
+ r"\?nsync_|"
r"perftools::gputools")
# We want to identify data members explicitly in the DEF file, so that no one
@@ -87,6 +87,7 @@
required=True)
parser.add_argument("--output", help="output deffile", required=True)
parser.add_argument("--target", help="name of the target", required=True)
+ parser.add_argument("--bitness", help="build target bitness", required=True)
args = parser.parse_args()
return args
@@ -125,7 +126,10 @@
# Header for the def file.
def_fp.write("LIBRARY " + args.target + "\n")
def_fp.write("EXPORTS\n")
- def_fp.write("\t ??1OpDef@tensorflow@@UEAA@XZ\n")
+ if args.bitness == "64":
+ def_fp.write("\t??1OpDef@tensorflow@@UEAA@XZ\n")
+ else:
+ def_fp.write("\t??1OpDef@tensorflow@@UAE@XZ\n")
# Each symbols returned by undname matches the same position in candidates.
# We compare on undname but use the decorated name from candidates.
diff --git a/tensorflow/contrib/crf/python/kernel_tests/crf_test.py b/tensorflow/contrib/crf/python/kernel_tests/crf_test.py
index 721dc4d..a5e065b 100644
--- a/tensorflow/contrib/crf/python/kernel_tests/crf_test.py
+++ b/tensorflow/contrib/crf/python/kernel_tests/crf_test.py
@@ -281,6 +281,21 @@
self.assertEqual(list(tf_actual_max_sequence[:sequence_lengths]),
expected_max_sequence[:sequence_lengths])
+ def testCrfDecodeZeroSeqLength(self):
+ """
+ Test that crf_decode works when sequence_length contains one or more zeros.
+ """
+ with self.test_session() as sess:
+ inputs = constant_op.constant(np.ones([2, 10, 5],
+ dtype=np.float32))
+ transition_params = constant_op.constant(np.ones([5, 5],
+ dtype=np.float32))
+ sequence_lengths = constant_op.constant(np.zeros([2],
+ dtype=np.int32))
+ values = crf.crf_decode(inputs, transition_params, sequence_lengths)
+ tags, scores = sess.run(values)
+ self.assertEqual(len(tags.shape), 2)
+ self.assertEqual(len(scores.shape), 1)
if __name__ == "__main__":
test.main()
diff --git a/tensorflow/contrib/crf/python/ops/crf.py b/tensorflow/contrib/crf/python/ops/crf.py
index 1233c8f..e37c029 100644
--- a/tensorflow/contrib/crf/python/ops/crf.py
+++ b/tensorflow/contrib/crf/python/ops/crf.py
@@ -479,15 +479,17 @@
initial_state = array_ops.slice(potentials, [0, 0, 0], [-1, 1, -1])
initial_state = array_ops.squeeze(initial_state, axis=[1]) # [B, O]
inputs = array_ops.slice(potentials, [0, 1, 0], [-1, -1, -1]) # [B, T-1, O]
+    # Sequence lengths are not allowed to be less than zero.
+ sequence_length_less_one = math_ops.maximum(0, sequence_length - 1)
backpointers, last_score = rnn.dynamic_rnn( # [B, T - 1, O], [B, O]
crf_fwd_cell,
inputs=inputs,
- sequence_length=sequence_length - 1,
+ sequence_length=sequence_length_less_one,
initial_state=initial_state,
time_major=False,
dtype=dtypes.int32)
backpointers = gen_array_ops.reverse_sequence( # [B, T - 1, O]
- backpointers, sequence_length - 1, seq_dim=1)
+ backpointers, sequence_length_less_one, seq_dim=1)
# Computes backward decoding. Extract tag indices from backpointers.
crf_bwd_cell = CrfDecodeBackwardRnnCell(num_tags)
@@ -497,7 +499,7 @@
decode_tags, _ = rnn.dynamic_rnn( # [B, T - 1, 1]
crf_bwd_cell,
inputs=backpointers,
- sequence_length=sequence_length - 1,
+ sequence_length=sequence_length_less_one,
initial_state=initial_state,
time_major=False,
dtype=dtypes.int32)
diff --git a/tensorflow/contrib/cudnn_rnn/python/layers/cudnn_rnn.py b/tensorflow/contrib/cudnn_rnn/python/layers/cudnn_rnn.py
index 00d9544..d58198f 100644
--- a/tensorflow/contrib/cudnn_rnn/python/layers/cudnn_rnn.py
+++ b/tensorflow/contrib/cudnn_rnn/python/layers/cudnn_rnn.py
@@ -358,7 +358,8 @@
"CUDA/CuDNN generations.")
# Initialize opaque params with a tensor.
self.kernel = vs.get_variable(
- "opaque_kernel", initializer=opaque_params_t, validate_shape=False)
+ "opaque_kernel", dtype=self._plain_dtype,
+ initializer=opaque_params_t, validate_shape=False)
# Create saveable in the outer scope of the cudnn subgraph, such that
# alternative subgraph with platform-independent rnn cells can load the
# checkpoints directly.
diff --git a/tensorflow/contrib/data/python/kernel_tests/BUILD b/tensorflow/contrib/data/python/kernel_tests/BUILD
index 9d1e8b2..d59dd17 100644
--- a/tensorflow/contrib/data/python/kernel_tests/BUILD
+++ b/tensorflow/contrib/data/python/kernel_tests/BUILD
@@ -4,7 +4,7 @@
exports_files(["LICENSE"])
-load("//tensorflow:tensorflow.bzl", "py_test", "tf_py_test")
+load("//tensorflow:tensorflow.bzl", "cuda_py_test", "py_test", "tf_py_test")
py_test(
name = "batch_dataset_op_test",
@@ -482,12 +482,11 @@
],
)
-py_test(
+cuda_py_test(
name = "prefetching_ops_test",
size = "small",
srcs = ["prefetching_ops_test.py"],
- srcs_version = "PY2AND3",
- deps = [
+ additional_deps = [
"//tensorflow/contrib/data/python/ops:prefetching_ops",
"//tensorflow/core:protos_all_py",
"//tensorflow/python:client_testlib",
diff --git a/tensorflow/contrib/data/python/kernel_tests/dataset_serialization_test_base.py b/tensorflow/contrib/data/python/kernel_tests/dataset_serialization_test_base.py
index dbc3509..78ecce8 100644
--- a/tensorflow/contrib/data/python/kernel_tests/dataset_serialization_test_base.py
+++ b/tensorflow/contrib/data/python/kernel_tests/dataset_serialization_test_base.py
@@ -163,7 +163,7 @@
num_outputs,
sparse_tensors=False,
verify_exhausted=True):
- """Verifies that restoring into an already initilized iterator works.
+ """Verifies that restoring into an already initialized iterator works.
Args:
ds_fn: See `run_core_tests`.
diff --git a/tensorflow/contrib/data/python/kernel_tests/interleave_dataset_op_test.py b/tensorflow/contrib/data/python/kernel_tests/interleave_dataset_op_test.py
index f8556a1..43aa4b1 100644
--- a/tensorflow/contrib/data/python/kernel_tests/interleave_dataset_op_test.py
+++ b/tensorflow/contrib/data/python/kernel_tests/interleave_dataset_op_test.py
@@ -409,7 +409,7 @@
def _testTwoThreadsNoContentionWithRaces(self, sloppy=False):
"""Tests where all the workers race in producing elements.
- Note: this is in contrast with the prevous test which carefully sequences
+ Note: this is in contrast with the previous test which carefully sequences
the execution of the map functions.
Args:
@@ -495,7 +495,7 @@
def _testTwoThreadsNoContentionWithRacesAndBlocking(self, sloppy=False):
"""Tests where all the workers race in producing elements.
- Note: this is in contrast with the prevous test which carefully sequences
+ Note: this is in contrast with the previous test which carefully sequences
the execution of the map functions.
@@ -928,8 +928,7 @@
sess.run(next_element)
def _normalize(self, vec):
- batched = (len(vec.shape) == 2)
- return vec / vec.sum(axis=1, keepdims=True) if batched else vec / vec.sum()
+ return vec / vec.sum()
def _chi2(self, expected, actual):
actual = np.asarray(actual)
@@ -938,35 +937,43 @@
chi2 = np.sum(diff * diff / expected, axis=0)
return chi2
+ def _testSampleFromDatasetsHelper(self, weights, num_datasets, num_samples):
+ # Create a dataset that samples each integer in `[0, num_datasets)`
+ # with probability given by `weights[i]`.
+ dataset = interleave_ops.sample_from_datasets([
+ dataset_ops.Dataset.from_tensors(i).repeat(None)
+ for i in range(num_datasets)
+ ], weights)
+ dataset = dataset.take(num_samples)
+ iterator = dataset.make_one_shot_iterator()
+ next_element = iterator.get_next()
+
+ with self.test_session() as sess:
+ freqs = np.zeros([num_datasets])
+ for _ in range(num_samples):
+ freqs[sess.run(next_element)] += 1
+ with self.assertRaises(errors.OutOfRangeError):
+ sess.run(next_element)
+
+ return freqs
+
def testSampleFromDatasets(self):
- random_seed.set_random_seed(1618)
+ random_seed.set_random_seed(1619)
num_samples = 10000
- rand_probs = self._normalize(np.random.random_sample((10,)))
- rand_probs2 = self._normalize(np.random.random_sample((15,)))
+ rand_probs = self._normalize(np.random.random_sample((15,)))
- for probs in [[.5, .5], [.85, .05, .1], rand_probs, rand_probs2]:
+ # Use chi-squared test to assert that the observed distribution matches the
+ # expected distribution. Based on the implementation in
+ # "tensorflow/python/kernel_tests/multinomial_op_test.py".
+ for probs in [[.85, .05, .1], rand_probs]:
probs = np.asarray(probs)
+ classes = len(probs)
+ freqs = self._testSampleFromDatasetsHelper(probs, classes, num_samples)
+ self.assertLess(self._chi2(probs, freqs / num_samples), 1e-3)
- # Create a dataset that samples each integer in `[0, probs.shape[0])`
- # with probability given by `probs[i]`.
- dataset = interleave_ops.sample_from_datasets([
- dataset_ops.Dataset.from_tensors(i).repeat(None)
- for i in range(probs.shape[0])
- ], probs)
- dataset = dataset.take(num_samples)
- iterator = dataset.make_one_shot_iterator()
- next_element = iterator.get_next()
-
- with self.test_session() as sess:
- freqs = np.zeros_like(probs)
- for _ in range(num_samples):
- freqs[sess.run(next_element)] += 1
- with self.assertRaises(errors.OutOfRangeError):
- sess.run(next_element)
-
- # Use chi-squared test to assert that the observed distribution
- # matches the expected distribution. Based on the implementation
- # in "tensorflow/python/kernel_tests/multinomial_op_test.py".
+ # Also check that `weights` as a dataset samples correctly.
+ probs_ds = dataset_ops.Dataset.from_tensors(probs).repeat()
+ freqs = self._testSampleFromDatasetsHelper(probs_ds, classes, num_samples)
self.assertLess(self._chi2(probs, freqs / num_samples), 1e-3)
def testErrors(self):
diff --git a/tensorflow/contrib/data/python/kernel_tests/stats_dataset_ops_test.py b/tensorflow/contrib/data/python/kernel_tests/stats_dataset_ops_test.py
index 7acbc67..5c74ed6 100644
--- a/tensorflow/contrib/data/python/kernel_tests/stats_dataset_ops_test.py
+++ b/tensorflow/contrib/data/python/kernel_tests/stats_dataset_ops_test.py
@@ -201,6 +201,14 @@
lambda x: array_ops.tile([x], ops.convert_to_tensor([x]))).apply(
stats_ops.bytes_produced_stats("bytes_produced"))
+ def test_bytes_produced_stats_invalid_tag_shape(self):
+ with self.assertRaisesRegexp(
+ ValueError, 'Shape must be rank 0 but is rank 1'):
+ self.run_core_tests(
+ lambda: dataset_ops.Dataset.range(100).apply(
+ stats_ops.bytes_produced_stats(["bytes_produced"])),
+ None, 100)
+
def testBytesStatsDatasetSaveableCore(self):
num_outputs = 100
self.run_core_tests(
@@ -218,6 +226,14 @@
return dataset_ops.Dataset.range(num_elements).apply(
stats_ops.latency_stats(tag1)).apply(stats_ops.latency_stats(tag2))
+ def test_latency_stats_invalid_tag_shape(self):
+ with self.assertRaisesRegexp(
+ ValueError, 'Shape must be rank 0 but is rank 1'):
+ self.run_core_tests(
+ lambda: dataset_ops.Dataset.range(100).apply(
+ stats_ops.latency_stats(["record_latency", "record_latency_2"])),
+ None, 100)
+
def testLatencyStatsDatasetSaveableCore(self):
num_outputs = 100
diff --git a/tensorflow/contrib/data/python/ops/interleave_ops.py b/tensorflow/contrib/data/python/ops/interleave_ops.py
index 106a1ef..812a50e 100644
--- a/tensorflow/contrib/data/python/ops/interleave_ops.py
+++ b/tensorflow/contrib/data/python/ops/interleave_ops.py
@@ -200,10 +200,11 @@
Args:
datasets: A list of @{tf.data.Dataset} objects with compatible structure.
- weights: (Optional.) A list of `len(datasets)` floating-point values,
- where `weights[i]` represents the probability with which an element
- should be sampled from `datasets[i]`. Defaults to a uniform distribution
- across `datasets`.
+ weights: (Optional.) A list of `len(datasets)` floating-point values where
+ `weights[i]` represents the probability with which an element should be
+ sampled from `datasets[i]`, or a @{tf.data.Dataset} object where each
+ element is such a list. Defaults to a uniform distribution across
+ `datasets`.
seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the
random seed that will be used to create the distribution. See
@{tf.set_random_seed} for behavior.
@@ -219,24 +220,23 @@
"""
num_datasets = len(datasets)
if weights is None:
- weights = array_ops.ones(
- [num_datasets], dtype=dtypes.float32, name="weights")
- else:
+ weights = dataset_ops.Dataset.from_tensors([1.0] * num_datasets).repeat()
+ elif not isinstance(weights, dataset_ops.Dataset):
weights = ops.convert_to_tensor(weights, name="weights")
if weights.dtype not in (dtypes.float32, dtypes.float64):
raise TypeError("`weights` must be convertible to a tensor of "
"`tf.float32` or `tf.float64` elements.")
if not weights.shape.is_compatible_with([num_datasets]):
raise ValueError("`weights` must be a vector of length `len(datasets)`.")
+ weights = dataset_ops.Dataset.from_tensors(weights).repeat()
# The `stateless_multinomial()` op expects log-probabilities, as opposed to
# weights.
- logits = math_ops.log(weights, name="logits")
-
- def select_dataset(seed):
+ logits_ds = weights.map(lambda *p: math_ops.log(p, name="logits"))
+ def select_dataset(logits, seed):
return array_ops.squeeze(
- stateless.stateless_multinomial([logits], 1, seed=seed), axis=[0, 1])
-
- selector_input = random_ops.RandomDataset(seed).batch(2).map(select_dataset)
+ stateless.stateless_multinomial(logits, 1, seed=seed), axis=[0, 1])
+ selector_input = dataset_ops.Dataset.zip(
+ (logits_ds, random_ops.RandomDataset(seed).batch(2))).map(select_dataset)
return DirectedInterleaveDataset(selector_input, datasets)
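The rewrite above generalizes `weights` from a single static vector to, optionally, a `tf.data.Dataset` of per-draw weight vectors, zipped against the random seeds. A hedged sketch, assuming the public alias `tf.contrib.data.sample_from_datasets` for the function documented above:

```python
import tensorflow as tf

ds_a = tf.data.Dataset.from_tensors(0).repeat()
ds_b = tf.data.Dataset.from_tensors(1).repeat()

# One weight vector per draw; the sampling distribution alternates here.
weights = tf.data.Dataset.from_tensor_slices(
    [[0.9, 0.1], [0.1, 0.9]]).repeat()

mixed = tf.contrib.data.sample_from_datasets([ds_a, ds_b], weights=weights)
```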
diff --git a/tensorflow/contrib/data/python/ops/prefetching_ops.py b/tensorflow/contrib/data/python/ops/prefetching_ops.py
index 89c04dc..e4c9f8b 100644
--- a/tensorflow/contrib/data/python/ops/prefetching_ops.py
+++ b/tensorflow/contrib/data/python/ops/prefetching_ops.py
@@ -114,11 +114,13 @@
ret = remote_iterator.get_next()
return nest.flatten(sparse.serialize_sparse_tensors(ret))
+ iterator_device = gen_dataset_ops.iterator_get_device(
+ self._input_iterator._iterator_resource)
+
with ops.device(device):
self._buffering_resource = function_buffering_resource(
f=_prefetch_fn,
- target_device=gen_dataset_ops.iterator_get_device(
- self._input_iterator._iterator_resource),
+ target_device=iterator_device,
string_arg=input_iterator_handle,
buffer_size=buffer_size,
shared_name=shared_name)
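The hunk above creates the `iterator_get_device` op before entering the `with ops.device(device)` block, so it is placed with the input iterator instead of being pinned to the prefetch target. A minimal sketch of the placement rule this relies on:

```python
import tensorflow as tf

a = tf.constant(1.0)              # no explicit device constraint
with tf.device("/device:GPU:0"):
    b = tf.constant(2.0)          # every op created in this scope is pinned here

print(a.device)  # ''
print(b.device)  # '/device:GPU:0'
```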
diff --git a/tensorflow/contrib/data/python/ops/scan_ops.py b/tensorflow/contrib/data/python/ops/scan_ops.py
index 711a538..60ef7ef 100644
--- a/tensorflow/contrib/data/python/ops/scan_ops.py
+++ b/tensorflow/contrib/data/python/ops/scan_ops.py
@@ -57,7 +57,7 @@
self._output_shapes = None
self._output_types = None
- # Iteratively rerun the scan function until reaching a fixed pont on
+ # Iteratively rerun the scan function until reaching a fixed point on
# `self._state_shapes`.
need_to_rerun = True
while need_to_rerun:
diff --git a/tensorflow/contrib/distributions/python/kernel_tests/shape_test.py b/tensorflow/contrib/distributions/python/kernel_tests/shape_test.py
index c8d795c..243b5a0 100644
--- a/tensorflow/contrib/distributions/python/kernel_tests/shape_test.py
+++ b/tensorflow/contrib/distributions/python/kernel_tests/shape_test.py
@@ -585,7 +585,6 @@
def testDistributionShapeGetDimsStatic(self):
with self.test_session():
shaper = _DistributionShape(batch_ndims=0, event_ndims=0)
- shaper = _DistributionShape(batch_ndims=0, event_ndims=0)
x = 1
self.assertAllEqual((_empty_shape, _empty_shape, _empty_shape),
_constant(shaper.get_dims(x)))
diff --git a/tensorflow/contrib/eager/python/saver_test.py b/tensorflow/contrib/eager/python/saver_test.py
index 1a7f7b8..4032e75 100644
--- a/tensorflow/contrib/eager/python/saver_test.py
+++ b/tensorflow/contrib/eager/python/saver_test.py
@@ -102,7 +102,6 @@
# Can still restore it.
saver.restore(ckpt_prefix)
self.assertEqual(v1.read_value().numpy(), 1.0)
- self.assertEqual(v1.read_value().numpy(), 1.0)
# However, cannot restore it with default name.
with self.assertRaisesOpError('not found in checkpoint'):
saver = _saver.Saver([v1, v2]).restore(ckpt_prefix)
diff --git a/tensorflow/contrib/estimator/python/estimator/head.py b/tensorflow/contrib/estimator/python/estimator/head.py
index ae2fd8b..3dcf037 100644
--- a/tensorflow/contrib/estimator/python/estimator/head.py
+++ b/tensorflow/contrib/estimator/python/estimator/head.py
@@ -485,7 +485,7 @@
reduction=losses.Reduction.NONE)
# Averages loss over classes.
unweighted_loss = math_ops.reduce_mean(
- unweighted_loss, axis=-1, keep_dims=True)
+ unweighted_loss, axis=-1, keepdims=True)
weights = head_lib._get_weights_and_check_match_logits( # pylint:disable=protected-access,
features=features, weight_column=self._weight_column, logits=logits)
training_loss = losses.compute_weighted_loss(
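This hunk, like many below, migrates from the deprecated `keep_dims` keyword to its replacement `keepdims`; both spellings compute the same reduction. A minimal sketch:

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
m = tf.reduce_mean(x, axis=-1, keepdims=True)  # shape (2, 1): reduced axis kept
```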
diff --git a/tensorflow/contrib/estimator/python/estimator/replicate_model_fn.py b/tensorflow/contrib/estimator/python/estimator/replicate_model_fn.py
index fa26978..a8774d6 100644
--- a/tensorflow/contrib/estimator/python/estimator/replicate_model_fn.py
+++ b/tensorflow/contrib/estimator/python/estimator/replicate_model_fn.py
@@ -456,7 +456,7 @@
def _split_batch(features, labels, number_of_shards, device):
- """Split input features and labes into batches."""
+ """Split input features and labels into batches."""
def ensure_divisible_by_shards(sequence):
batch_size = ops_lib.convert_to_tensor(sequence).get_shape()[0]
@@ -602,7 +602,7 @@
def _scale_tower_loss(tower_spec, loss_reduction, number_of_towers):
- """Produce an EstimatorSpec with approproriately scaled loss."""
+ """Produce an EstimatorSpec with appropriately scaled loss."""
if tower_spec.loss is None:
return tower_spec
diff --git a/tensorflow/contrib/factorization/python/ops/gmm_ops.py b/tensorflow/contrib/factorization/python/ops/gmm_ops.py
index 5d77bc7..ccdd679 100644
--- a/tensorflow/contrib/factorization/python/ops/gmm_ops.py
+++ b/tensorflow/contrib/factorization/python/ops/gmm_ops.py
@@ -54,10 +54,10 @@
diagonal matrix just the diagonal is returned.
"""
num_points = math_ops.to_float(array_ops.shape(x)[0])
- x -= math_ops.reduce_mean(x, 0, keep_dims=True)
+ x -= math_ops.reduce_mean(x, 0, keepdims=True)
if diag:
cov = math_ops.reduce_sum(
- math_ops.square(x), 0, keep_dims=True) / (num_points - 1)
+ math_ops.square(x), 0, keepdims=True) / (num_points - 1)
else:
cov = math_ops.matmul(x, x, transpose_a=True) / (num_points - 1)
return cov
@@ -313,7 +313,7 @@
# TODO(xavigonzalvo): look into alternatives to log for
# reparametrization of variance parameters.
det_expanded = math_ops.reduce_sum(
- math_ops.log(self._covs + 1e-3), 1, keep_dims=True)
+ math_ops.log(self._covs + 1e-3), 1, keepdims=True)
diff = shard - self._means
x2 = math_ops.square(diff)
cov_expanded = array_ops.expand_dims(1.0 / (self._covs + 1e-3), 2)
@@ -351,7 +351,7 @@
shard_id: id of current shard_id.
"""
self._prior_probs[shard_id] = math_ops.reduce_logsumexp(
- self._probs[shard_id], axis=1, keep_dims=True)
+ self._probs[shard_id], axis=1, keepdims=True)
def _define_expectation_operation(self, shard_id):
# Shape broadcasting.
@@ -375,7 +375,7 @@
"""
# Soft assignment of each data point to each of the two clusters.
self._points_in_k[shard_id] = math_ops.reduce_sum(
- self._w[shard_id], 0, keep_dims=True)
+ self._w[shard_id], 0, keepdims=True)
# Partial means.
w_mul_x = array_ops.expand_dims(
math_ops.matmul(
@@ -454,7 +454,7 @@
for shard_id, prior_probs in enumerate(self._prior_probs):
op.append(prior_probs + math_ops.log(self._w[shard_id]))
self._scores = array_ops.squeeze(
- math_ops.reduce_logsumexp(op, axis=2, keep_dims=True), axis=0)
+ math_ops.reduce_logsumexp(op, axis=2, keepdims=True), axis=0)
def gmm(inp,
diff --git a/tensorflow/contrib/factorization/python/ops/kmeans.py b/tensorflow/contrib/factorization/python/ops/kmeans.py
index bfe338c..9ffdd3b 100644
--- a/tensorflow/contrib/factorization/python/ops/kmeans.py
+++ b/tensorflow/contrib/factorization/python/ops/kmeans.py
@@ -374,11 +374,11 @@
than `num_clusters`, a TensorFlow runtime error occurs.
distance_metric: The distance metric used for clustering. One of:
* `KMeansClustering.SQUARED_EUCLIDEAN_DISTANCE`: Euclidean distance
- between vectors `u` and `v` is defined as `\\(||u - v||_2\\)`
+ between vectors `u` and `v` is defined as \\(||u - v||_2\\)
which is the square root of the sum of the absolute squares of
the elements' difference.
* `KMeansClustering.COSINE_DISTANCE`: Cosine distance between vectors
- `u` and `v` is defined as `\\(1 - (u . v) / (||u||_2 ||v||_2)\\)`.
+ `u` and `v` is defined as \\(1 - (u . v) / (||u||_2 ||v||_2)\\).
random_seed: Python integer. Seed for PRNG used to initialize centers.
use_mini_batch: A boolean specifying whether to use the mini-batch k-means
algorithm. See explanation above.
diff --git a/tensorflow/contrib/framework/__init__.py b/tensorflow/contrib/framework/__init__.py
index bb4f1eb..11397e8 100644
--- a/tensorflow/contrib/framework/__init__.py
+++ b/tensorflow/contrib/framework/__init__.py
@@ -118,12 +118,13 @@
from tensorflow.python.framework.smart_cond import smart_constant_value
from tensorflow.python.framework.tensor_spec import BoundedTensorSpec
from tensorflow.python.framework.tensor_spec import TensorSpec
+from tensorflow.python.ops.array_ops import broadcast_to
from tensorflow.python.ops.init_ops import convolutional_delta_orthogonal
from tensorflow.python.ops.init_ops import convolutional_orthogonal_1d
from tensorflow.python.ops.init_ops import convolutional_orthogonal_2d
from tensorflow.python.ops.init_ops import convolutional_orthogonal_3d
from tensorflow.python.util.all_util import remove_undocumented
-_allowed_symbols = ['nest']
+_allowed_symbols = ['nest', 'broadcast_to']
remove_undocumented(__name__, allowed_exception_list=_allowed_symbols)
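With `broadcast_to` imported and whitelisted above, the op becomes reachable from the contrib namespace. A hedged usage sketch, assuming the resulting alias `tf.contrib.framework.broadcast_to`:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.contrib.framework.broadcast_to(x, [2, 3])

with tf.Session() as sess:
    print(sess.run(y))  # [[1 2 3]
                        #  [1 2 3]]
```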
diff --git a/tensorflow/contrib/framework/python/framework/tensor_util_test.py b/tensorflow/contrib/framework/python/framework/tensor_util_test.py
index a2834b6..8fc4f60 100644
--- a/tensorflow/contrib/framework/python/framework/tensor_util_test.py
+++ b/tensorflow/contrib/framework/python/framework/tensor_util_test.py
@@ -48,7 +48,7 @@
variables = variables_lib.local_variables()
self.assertEquals(2, len(variables))
self.assertRaises(errors_impl.OpError, sess.run, variables)
- variables_lib.initialize_variables(variables).run()
+ variables_lib.variables_initializer(variables).run()
self.assertAllEqual(set([value0, value1]), set(sess.run(variables)))
diff --git a/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py b/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py
index a97adf6..983b6dc 100644
--- a/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py
+++ b/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py
@@ -65,7 +65,7 @@
side_input_scale: A scalar `float32` that will be multiplied by side_input.
This is optional and defaults to 0.
side_input: A `Tensor` of the format specified by `data_format`.
- This is useful for imlementing ResNet blocks.
+ This is useful for implementing ResNet blocks.
activation_mode: (optional) currently must be the default "Relu".
Note that in qint8 mode, it also clips to 127, so acts like ReluX.
data_format: Specifies the data format.
diff --git a/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py b/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py
index bb155aa..3d0ed89 100644
--- a/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py
+++ b/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py
@@ -566,7 +566,7 @@
return Test
-def CalculateCovolvedOutputDim(input_dim, filter_dim, stride, padding_type):
+def CalculateConvolvedOutputDim(input_dim, filter_dim, stride, padding_type):
"""Calculates the size of an output dimension of a strided convolution.
Given the sizes of the corresponding dimension of the input and filter shapes,
@@ -827,10 +827,10 @@
maxval=1.0,
dtype=dtypes.float32), -1.0, 1.0, dtypes.qint8)
- output_height = CalculateCovolvedOutputDim(input_height, filter_height,
- vertical_stride, padding_type)
- output_width = CalculateCovolvedOutputDim(input_width, filter_width,
- horizontal_stride, padding_type)
+ output_height = CalculateConvolvedOutputDim(input_height, filter_height,
+ vertical_stride, padding_type)
+ output_width = CalculateConvolvedOutputDim(input_width, filter_width,
+ horizontal_stride, padding_type)
print("output_height=", output_height, ", output_width=", output_width)
side_input, _, _ = gen_array_ops.quantize_v2(
diff --git a/tensorflow/contrib/gan/python/eval/python/sliced_wasserstein_impl.py b/tensorflow/contrib/gan/python/eval/python/sliced_wasserstein_impl.py
index 4b10bc0..4b1105f 100644
--- a/tensorflow/contrib/gan/python/eval/python/sliced_wasserstein_impl.py
+++ b/tensorflow/contrib/gan/python/eval/python/sliced_wasserstein_impl.py
@@ -161,7 +161,7 @@
proj = random_ops.random_normal(
[array_ops.shape(a)[1], random_projection_dim])
proj *= math_ops.rsqrt(
- math_ops.reduce_sum(math_ops.square(proj), 0, keep_dims=True))
+ math_ops.reduce_sum(math_ops.square(proj), 0, keepdims=True))
# Project both distributions and sort them.
proj_a = math_ops.matmul(a, proj)
proj_b = math_ops.matmul(b, proj)
diff --git a/tensorflow/contrib/gan/python/features/python/virtual_batchnorm_impl.py b/tensorflow/contrib/gan/python/features/python/virtual_batchnorm_impl.py
index f8b3725..650eab9 100644
--- a/tensorflow/contrib/gan/python/features/python/virtual_batchnorm_impl.py
+++ b/tensorflow/contrib/gan/python/features/python/virtual_batchnorm_impl.py
@@ -64,11 +64,11 @@
y = math_ops.cast(x, dtypes.float32) if x.dtype == dtypes.float16 else x
# Compute true mean while keeping the dims for proper broadcasting.
- shift = array_ops.stop_gradient(math_ops.reduce_mean(y, axes, keep_dims=True))
+ shift = array_ops.stop_gradient(math_ops.reduce_mean(y, axes, keepdims=True))
- shifted_mean = math_ops.reduce_mean(y - shift, axes, keep_dims=True)
+ shifted_mean = math_ops.reduce_mean(y - shift, axes, keepdims=True)
mean = shifted_mean + shift
- mean_squared = math_ops.reduce_mean(math_ops.square(y), axes, keep_dims=True)
+ mean_squared = math_ops.reduce_mean(math_ops.square(y), axes, keepdims=True)
mean = array_ops.squeeze(mean, axes)
mean_squared = array_ops.squeeze(mean_squared, axes)
diff --git a/tensorflow/contrib/hvx/README.md b/tensorflow/contrib/hvx/README.md
index 163993a..68e34f3 100644
--- a/tensorflow/contrib/hvx/README.md
+++ b/tensorflow/contrib/hvx/README.md
@@ -42,11 +42,12 @@
### Build libhexagon\_nn\_skel.so
-Download Hexagon NN library from codeaurora.org and build it.
+Download Hexagon NN library from codeaurora.org and build it. For Hexagon SDK 3.0, you need to use a compatible version ([721b2d58f](https://source.codeaurora.org/quic/hexagon_nn/nnlib/commit/?id=721b2d58f0f4e2d5b182f41e6b7c4db5356bf0fb)) of nnlib.
```shell
git clone https://source.codeaurora.org/quic/hexagon_nn/nnlib
cd nnlib
+git reset 721b2d58f --hard
```
Just follow the instructions in `README.HOW_TO_BUILD`. You can find the file `libhexagon_nn_skel.so` in `hexagon_Release_dynamic_toolv72_v60/ship`.
diff --git a/tensorflow/contrib/image/kernels/adjust_hsv_in_yiq_op_gpu.cu.cc b/tensorflow/contrib/image/kernels/adjust_hsv_in_yiq_op_gpu.cu.cc
index 1be97ae..bbb3a3b 100644
--- a/tensorflow/contrib/image/kernels/adjust_hsv_in_yiq_op_gpu.cu.cc
+++ b/tensorflow/contrib/image/kernels/adjust_hsv_in_yiq_op_gpu.cu.cc
@@ -53,7 +53,7 @@
OP_REQUIRES_OK(ctx, ctx->allocate_temp(
DT_FLOAT, TensorShape({kChannelSize * kChannelSize}),
&tranformation_matrix));
- // TODO(huangyp): It takes about 3.5 us to comute tranformation_matrix
+ // TODO(huangyp): It takes about 3.5 us to compute tranformation_matrix
// with one thread. Improve its performance if necessary.
internal::compute_tranformation_matrix_cuda<<<1, 1, 0, cu_stream>>>(
delta_h, scale_s, scale_v, tranformation_matrix.flat<float>().data(),
diff --git a/tensorflow/contrib/image/ops/distort_image_ops.cc b/tensorflow/contrib/image/ops/distort_image_ops.cc
index b169b0b..ca49635 100644
--- a/tensorflow/contrib/image/ops/distort_image_ops.cc
+++ b/tensorflow/contrib/image/ops/distort_image_ops.cc
@@ -36,9 +36,9 @@
Adjust the YIQ hue of one or more images.
`images` is a tensor of at least 3 dimensions. The last dimension is
-interpretted as channels, and must be three.
+interpreted as channels, and must be three.
-We used linear transfomation described in:
+We use the linear transformation described in:
beesbuzz.biz/code/hsv_color_transforms.php
The input image is considered in the RGB colorspace. Conceptually, the RGB
colors are first mapped into YIQ space, rotated around the Y channel by
diff --git a/tensorflow/contrib/image/ops/image_ops.cc b/tensorflow/contrib/image/ops/image_ops.cc
index e97267f..295908d 100644
--- a/tensorflow/contrib/image/ops/image_ops.cc
+++ b/tensorflow/contrib/image/ops/image_ops.cc
@@ -137,7 +137,7 @@
If `row_to_col_match_indices[i]` is not -1, row i is matched to column
`row_to_col_match_indices[i]`.
col_to_row_match_indices: A vector of length num_columns, which is the number
- of columns of the input ditance matrix.
+ of columns of the input distance matrix.
If `col_to_row_match_indices[j]` is not -1, column j is matched to row
`col_to_row_match_indices[j]`.
)doc");
diff --git a/tensorflow/contrib/image/ops/single_image_random_dot_stereograms_ops.cc b/tensorflow/contrib/image/ops/single_image_random_dot_stereograms_ops.cc
index 8139d42..bd784c6 100755
--- a/tensorflow/contrib/image/ops/single_image_random_dot_stereograms_ops.cc
+++ b/tensorflow/contrib/image/ops/single_image_random_dot_stereograms_ops.cc
@@ -69,7 +69,7 @@
Given the 2-D tensor 'depth_values' with encoded Z values, this operation will
encode 3-D data into a 2-D image. The output of this Op is suitable for the
encode_PNG/JPG ops. Be careful with image compression as this may corrupt the
-encode 3-D data witin the image.
+encoded 3-D data within the image.
This Op is based upon:
'http://www.learningace.com/doc/4331582/b6ab058d1e206d68ab60e4e1ead2fe6e/sirds-paper'
@@ -111,7 +111,7 @@
output_data_window: Size of "DATA" window, must be equal to or smaller than 'output_image_shape', will be centered
and use 'convergence_dots_size' for best fit to avoid overlap if possible
-image:= A tensor of size 'output_image_shape' with the encloded 'depth_values'
+image:= A tensor of size 'output_image_shape' with the encoded 'depth_values'
)doc");
} // namespace tensorflow
diff --git a/tensorflow/contrib/image/python/ops/image_ops.py b/tensorflow/contrib/image/python/ops/image_ops.py
index a8d8cf8..d3c114a 100644
--- a/tensorflow/contrib/image/python/ops/image_ops.py
+++ b/tensorflow/contrib/image/python/ops/image_ops.py
@@ -438,7 +438,7 @@
of rows of the input `distance_matrix`. If `row_to_col_match_indices[i]`
is not -1, row i is matched to column `row_to_col_match_indices[i]`.
col_to_row_match_indices: A vector of length num_columns, which is the
- number of columns of the input ditance matrix.
+ number of columns of the input distance matrix.
If `col_to_row_match_indices[j]` is not -1, column j is matched to row
`col_to_row_match_indices[j]`.
"""
diff --git a/tensorflow/contrib/image/python/ops/single_image_random_dot_stereograms.py b/tensorflow/contrib/image/python/ops/single_image_random_dot_stereograms.py
index d4a6a5b..0ceb683 100755
--- a/tensorflow/contrib/image/python/ops/single_image_random_dot_stereograms.py
+++ b/tensorflow/contrib/image/python/ops/single_image_random_dot_stereograms.py
@@ -45,7 +45,7 @@
Given the 2-D tensor 'depth_values' with encoded Z values, this operation
will encode 3-D data into a 2-D image. The output of this Op is suitable
for the encode_PNG/JPG ops. Be careful with image compression as this may
- corrupt the encode 3-D data witin the image.
+  corrupt the encoded 3-D data within the image.
Based upon [this
paper](http://www.learningace.com/doc/4331582/b6ab058d1e206d68ab60e4e1ead2fe6e/sirds-paper).
diff --git a/tensorflow/contrib/kfac/python/ops/loss_functions.py b/tensorflow/contrib/kfac/python/ops/loss_functions.py
index e7d4243..42d525c 100644
--- a/tensorflow/contrib/kfac/python/ops/loss_functions.py
+++ b/tensorflow/contrib/kfac/python/ops/loss_functions.py
@@ -613,19 +613,19 @@
def multiply_fisher(self, vector):
probs = self._probs
return vector * probs - probs * math_ops.reduce_sum(
- vector * probs, axis=-1, keep_dims=True)
+ vector * probs, axis=-1, keepdims=True)
def multiply_fisher_factor(self, vector):
probs = self._probs
sqrt_probs = self._sqrt_probs
return sqrt_probs * vector - probs * math_ops.reduce_sum(
- sqrt_probs * vector, axis=-1, keep_dims=True)
+ sqrt_probs * vector, axis=-1, keepdims=True)
def multiply_fisher_factor_transpose(self, vector):
probs = self._probs
sqrt_probs = self._sqrt_probs
return sqrt_probs * vector - sqrt_probs * math_ops.reduce_sum(
- probs * vector, axis=-1, keep_dims=True)
+ probs * vector, axis=-1, keepdims=True)
def multiply_fisher_factor_replicated_one_hot(self, index):
assert len(index) == 1, "Length of index was {}".format(len(index))
diff --git a/tensorflow/contrib/kfac/python/ops/loss_functions_lib.py b/tensorflow/contrib/kfac/python/ops/loss_functions_lib.py
index 705a871..4279cb2 100644
--- a/tensorflow/contrib/kfac/python/ops/loss_functions_lib.py
+++ b/tensorflow/contrib/kfac/python/ops/loss_functions_lib.py
@@ -33,7 +33,6 @@
"CategoricalLogitsNegativeLogProbLoss",
"OnehotCategoricalLogitsNegativeLogProbLoss",
"MultiBernoulliNegativeLogProbLoss",
- "MultiBernoulliNegativeLogProbLoss",
"insert_slice_in_zeros",
]
diff --git a/tensorflow/contrib/labeled_tensor/python/ops/ops_test.py b/tensorflow/contrib/labeled_tensor/python/ops/ops_test.py
index 0727f4c..39e9d65 100644
--- a/tensorflow/contrib/labeled_tensor/python/ops/ops_test.py
+++ b/tensorflow/contrib/labeled_tensor/python/ops/ops_test.py
@@ -660,7 +660,7 @@
sum_lt = ops.reduce_sum(self.original_lt, {('channel', 'hihowareyou')})
golden_lt = core.LabeledTensor(
math_ops.reduce_sum(
- self.original_lt.tensor, 1, keep_dims=True),
+ self.original_lt.tensor, 1, keepdims=True),
[self.a0, ('channel', ['hihowareyou']), self.a2, self.a3])
self.assertLabeledTensorsEqual(sum_lt, golden_lt)
@@ -668,7 +668,7 @@
sum_lt = ops.reduce_sum(self.original_lt, ('channel', 'hihowareyou'))
golden_lt = core.LabeledTensor(
math_ops.reduce_sum(
- self.original_lt.tensor, 1, keep_dims=True),
+ self.original_lt.tensor, 1, keepdims=True),
[self.a0, ('channel', ['hihowareyou']), self.a2, self.a3])
self.assertLabeledTensorsEqual(sum_lt, golden_lt)
diff --git a/tensorflow/contrib/layers/python/kernel_tests/sparse_feature_cross_op_test.py b/tensorflow/contrib/layers/python/kernel_tests/sparse_feature_cross_op_test.py
index f701647..28ddaa6 100644
--- a/tensorflow/contrib/layers/python/kernel_tests/sparse_feature_cross_op_test.py
+++ b/tensorflow/contrib/layers/python/kernel_tests/sparse_feature_cross_op_test.py
@@ -200,7 +200,7 @@
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_large_batch(self):
- """Tests with large batch size to force multithreding.
+ """Tests with large batch size to force multithreading.
"""
batch_size = 5000
col1 = []
diff --git a/tensorflow/contrib/layers/python/layers/feature_column.py b/tensorflow/contrib/layers/python/layers/feature_column.py
index 9ccb589..3ae07ce 100644
--- a/tensorflow/contrib/layers/python/layers/feature_column.py
+++ b/tensorflow/contrib/layers/python/layers/feature_column.py
@@ -48,7 +48,7 @@
recommended.
embedded_dept_column = embedding_column(
- sparse_column_with_keys("department", ["math", "philosphy", ...]),
+ sparse_column_with_keys("department", ["math", "philosophy", ...]),
dimension=10)
* Wide (aka linear) models (`LinearClassifier`, `LinearRegressor`).
diff --git a/tensorflow/contrib/layers/python/layers/feature_column_ops.py b/tensorflow/contrib/layers/python/layers/feature_column_ops.py
index 78affea..06060b9 100644
--- a/tensorflow/contrib/layers/python/layers/feature_column_ops.py
+++ b/tensorflow/contrib/layers/python/layers/feature_column_ops.py
@@ -815,7 +815,7 @@
"""
def __init__(self, columns_to_tensors):
- """Initializes transfomer.
+ """Initializes transformer.
Args:
columns_to_tensors: A mapping from feature columns to tensors. 'string'
@@ -908,7 +908,7 @@
def _check_forbidden_sequence_columns(feature_columns):
- """Recursively cecks `feature_columns` for `_FORBIDDEN_SEQUENCE_COLUMNS`."""
+ """Recursively checks `feature_columns` for `_FORBIDDEN_SEQUENCE_COLUMNS`."""
all_feature_columns = _gather_feature_columns(feature_columns)
for feature_column in all_feature_columns:
if isinstance(feature_column, _FORBIDDEN_SEQUENCE_COLUMNS):
diff --git a/tensorflow/contrib/layers/python/layers/layers.py b/tensorflow/contrib/layers/python/layers/layers.py
index 25c3b1e..2f3e576 100644
--- a/tensorflow/contrib/layers/python/layers/layers.py
+++ b/tensorflow/contrib/layers/python/layers/layers.py
@@ -932,7 +932,8 @@
variables_collections=None,
outputs_collections=None,
trainable=True,
- scope=None):
+ scope=None,
+ conv_dims=None):
"""Adds an N-D convolution followed by an optional batch_norm layer.
It is required that 1 <= N <= 3.
@@ -993,6 +994,10 @@
trainable: If `True` also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
scope: Optional scope for `variable_scope`.
+    conv_dims: Optional convolution dimensionality. When set, the
+      corresponding convolution is used (e.g. 2 for Conv2D, 3 for Conv3D).
+      When left as None, the convolution dimensionality is inferred from
+      the input rank (i.e. Conv ND, with N = input_rank - 2).
Returns:
A tensor representing the output of the operation.
@@ -1015,6 +1020,9 @@
inputs = ops.convert_to_tensor(inputs)
input_rank = inputs.get_shape().ndims
+ if conv_dims is not None and conv_dims + 2 != input_rank:
+ raise ValueError('Convolution expects input with rank %d, got %d' %
+ (conv_dims + 2, input_rank))
if input_rank == 3:
layer_class = convolutional_layers.Convolution1D
elif input_rank == 4:
@@ -1061,10 +1069,134 @@
outputs = activation_fn(outputs)
return utils.collect_named_outputs(outputs_collections, sc.name, outputs)
+@add_arg_scope
+def convolution1d(inputs,
+ num_outputs,
+ kernel_size,
+ stride=1,
+ padding='SAME',
+ data_format=None,
+ rate=1,
+ activation_fn=nn.relu,
+ normalizer_fn=None,
+ normalizer_params=None,
+ weights_initializer=initializers.xavier_initializer(),
+ weights_regularizer=None,
+ biases_initializer=init_ops.zeros_initializer(),
+ biases_regularizer=None,
+ reuse=None,
+ variables_collections=None,
+ outputs_collections=None,
+ trainable=True,
+ scope=None):
+ return convolution(inputs,
+ num_outputs,
+ kernel_size,
+ stride,
+ padding,
+ data_format,
+ rate,
+ activation_fn,
+ normalizer_fn,
+ normalizer_params,
+ weights_initializer,
+ weights_regularizer,
+ biases_initializer,
+ biases_regularizer,
+ reuse,
+ variables_collections,
+ outputs_collections,
+ trainable,
+ scope,
+ conv_dims=1)
-convolution2d = convolution
-convolution3d = convolution
+convolution1d.__doc__ = convolution.__doc__
+@add_arg_scope
+def convolution2d(inputs,
+ num_outputs,
+ kernel_size,
+ stride=1,
+ padding='SAME',
+ data_format=None,
+ rate=1,
+ activation_fn=nn.relu,
+ normalizer_fn=None,
+ normalizer_params=None,
+ weights_initializer=initializers.xavier_initializer(),
+ weights_regularizer=None,
+ biases_initializer=init_ops.zeros_initializer(),
+ biases_regularizer=None,
+ reuse=None,
+ variables_collections=None,
+ outputs_collections=None,
+ trainable=True,
+ scope=None):
+ return convolution(inputs,
+ num_outputs,
+ kernel_size,
+ stride,
+ padding,
+ data_format,
+ rate,
+ activation_fn,
+ normalizer_fn,
+ normalizer_params,
+ weights_initializer,
+ weights_regularizer,
+ biases_initializer,
+ biases_regularizer,
+ reuse,
+ variables_collections,
+ outputs_collections,
+ trainable,
+ scope,
+ conv_dims=2)
+
+convolution2d.__doc__ = convolution.__doc__
+
+@add_arg_scope
+def convolution3d(inputs,
+ num_outputs,
+ kernel_size,
+ stride=1,
+ padding='SAME',
+ data_format=None,
+ rate=1,
+ activation_fn=nn.relu,
+ normalizer_fn=None,
+ normalizer_params=None,
+ weights_initializer=initializers.xavier_initializer(),
+ weights_regularizer=None,
+ biases_initializer=init_ops.zeros_initializer(),
+ biases_regularizer=None,
+ reuse=None,
+ variables_collections=None,
+ outputs_collections=None,
+ trainable=True,
+ scope=None):
+ return convolution(inputs,
+ num_outputs,
+ kernel_size,
+ stride,
+ padding,
+ data_format,
+ rate,
+ activation_fn,
+ normalizer_fn,
+ normalizer_params,
+ weights_initializer,
+ weights_regularizer,
+ biases_initializer,
+ biases_regularizer,
+ reuse,
+ variables_collections,
+ outputs_collections,
+ trainable,
+ scope,
+ conv_dims=3)
+
+convolution3d.__doc__ = convolution.__doc__
@add_arg_scope
def convolution2d_in_plane(
@@ -1411,7 +1543,7 @@
Args:
tensor: An `int` `Tensor` to be converted to a `Sparse`.
eos_token: An integer.
- It is part of the target label that signfies the end of a sentence.
+ It is part of the target label that signifies the end of a sentence.
outputs_collections: Collection to add the outputs.
scope: Optional scope for name_scope.
"""
@@ -1555,7 +1687,7 @@
output_collections: Collection to which the outputs will be added.
scope: Optional scope for `name_scope`.
Returns:
- A `Tensor` or `SparseTensor` conataining the same values as `inputs`, but
+ A `Tensor` or `SparseTensor` containing the same values as `inputs`, but
with innermost dimensions flattened to obtain rank `new_rank`.
Raises:
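The new `conv_dims` argument lets the `convolution1d/2d/3d` wrappers reject inputs whose rank does not match the requested dimensionality, instead of silently dispatching on rank as the generic `convolution` does. A hedged sketch, assuming the public `tf.contrib.layers.convolution{2,3}d` names:

```python
import tensorflow as tf

images_3d = tf.random_uniform((5, 6, 7, 9, 3))  # rank 5: a Conv3D-shaped input

# OK: conv_dims=3 matches the rank-5 input (3 + 2 == input_rank).
net = tf.contrib.layers.convolution3d(images_3d, 32, 3)

# A dimensionality mismatch now fails fast:
# tf.contrib.layers.convolution2d(images_3d, 32, 3)
# ValueError: Convolution expects input with rank 4, got 5
```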
diff --git a/tensorflow/contrib/layers/python/layers/layers_test.py b/tensorflow/contrib/layers/python/layers/layers_test.py
index 997f910..b01fd5d 100644
--- a/tensorflow/contrib/layers/python/layers/layers_test.py
+++ b/tensorflow/contrib/layers/python/layers/layers_test.py
@@ -310,6 +310,17 @@
class ConvolutionTest(test.TestCase):
+ def testInvalidShape(self):
+ with self.test_session():
+ images_2d = random_ops.random_uniform((5, 7, 9, 3), seed=1)
+ with self.assertRaisesRegexp(
+ ValueError, 'Convolution expects input with rank 5, got 4'):
+ layers_lib.convolution3d(images_2d, 32, 3)
+ images_3d = random_ops.random_uniform((5, 6, 7, 9, 3), seed=1)
+ with self.assertRaisesRegexp(
+ ValueError, 'Convolution expects input with rank 4, got 5'):
+ layers_lib.convolution2d(images_3d, 32, 3)
+
def testInvalidDataFormat(self):
height, width = 7, 9
with self.test_session():
@@ -3155,7 +3166,7 @@
with self.test_session():
images = np.random.uniform(size=(5, height, width, 3)).astype(np.float32)
output = _layers.repeat(images, 3, layers_lib.conv2d, 32, [3, 3])
- self.assertEqual(output.op.name, 'Repeat/convolution_3/Relu')
+ self.assertEqual(output.op.name, 'Repeat/convolution2d_3/Relu')
self.assertListEqual(output.get_shape().as_list(), [5, 3, 3, 32])
def testRepeatWithScope(self):
@@ -3749,7 +3760,7 @@
layers_lib.convolution2d, [10, 20, 30],
kernel_size=[3, 3],
padding='SAME')
- self.assertEqual(output.op.name, 'Stack/convolution_3/Relu')
+ self.assertEqual(output.op.name, 'Stack/convolution2d_3/Relu')
self.assertListEqual(output.get_shape().as_list(), [5, 3, 3, 30])
def testStackWithScope(self):
diff --git a/tensorflow/contrib/layers/python/layers/rev_block_lib_test.py b/tensorflow/contrib/layers/python/layers/rev_block_lib_test.py
index 392a490..8c11840 100644
--- a/tensorflow/contrib/layers/python/layers/rev_block_lib_test.py
+++ b/tensorflow/contrib/layers/python/layers/rev_block_lib_test.py
@@ -60,8 +60,8 @@
sess.run(variables.global_variables_initializer())
x1, x2, x1_inv, x2_inv = sess.run([x1, x2, x1_inv, x2_inv])
- self.assertAllClose(x1, x1_inv)
- self.assertAllClose(x2, x2_inv)
+ self.assertAllClose(x1, x1_inv, atol=1e-5)
+ self.assertAllClose(x2, x2_inv, atol=1e-5)
def testBackwardForward(self):
diff --git a/tensorflow/contrib/layers/python/layers/utils_test.py b/tensorflow/contrib/layers/python/layers/utils_test.py
index 3409860..645dc12 100644
--- a/tensorflow/contrib/layers/python/layers/utils_test.py
+++ b/tensorflow/contrib/layers/python/layers/utils_test.py
@@ -294,7 +294,6 @@
self.assertEqual(utils.n_positive_integers(2, 2), (2, 2))
self.assertEqual(utils.n_positive_integers(2, (2, 3)), (2, 3))
self.assertEqual(utils.n_positive_integers(3, (2, 3, 1)), (2, 3, 1))
- self.assertEqual(utils.n_positive_integers(3, (2, 3, 1)), (2, 3, 1))
self.assertEqual(
utils.n_positive_integers(3, tensor_shape.TensorShape([2, 3, 1])),
(2, 3, 1))
diff --git a/tensorflow/contrib/learn/python/learn/estimators/kmeans_test.py b/tensorflow/contrib/learn/python/learn/estimators/kmeans_test.py
index b28835a..5845569 100644
--- a/tensorflow/contrib/learn/python/learn/estimators/kmeans_test.py
+++ b/tensorflow/contrib/learn/python/learn/estimators/kmeans_test.py
@@ -36,7 +36,6 @@
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
-from tensorflow.python.ops import random_ops
from tensorflow.python.platform import benchmark
from tensorflow.python.platform import flags
from tensorflow.python.platform import test
diff --git a/tensorflow/contrib/learn/python/learn/estimators/run_config.py b/tensorflow/contrib/learn/python/learn/estimators/run_config.py
index 8c85c43..14ee2ba 100644
--- a/tensorflow/contrib/learn/python/learn/estimators/run_config.py
+++ b/tensorflow/contrib/learn/python/learn/estimators/run_config.py
@@ -299,6 +299,7 @@
# so instead of breaking compatibility with that assumption, we
# just manually initialize this field:
self._train_distribute = None
+ self._device_fn = None
gpu_options = config_pb2.GPUOptions(
per_process_gpu_memory_fraction=gpu_memory_fraction)
diff --git a/tensorflow/contrib/lite/Makefile b/tensorflow/contrib/lite/Makefile
index b4504f2..65fba52 100644
--- a/tensorflow/contrib/lite/Makefile
+++ b/tensorflow/contrib/lite/Makefile
@@ -90,7 +90,8 @@
$(wildcard tensorflow/contrib/lite/kernels/internal/*.c) \
$(wildcard tensorflow/contrib/lite/kernels/internal/optimized/*.c) \
$(wildcard tensorflow/contrib/lite/kernels/internal/reference/*.c) \
-$(wildcard tensorflow/contrib/lite/downloads/farmhash/src/farmhash.cc)
+$(wildcard tensorflow/contrib/lite/downloads/farmhash/src/farmhash.cc) \
+$(wildcard tensorflow/contrib/lite/downloads/fft2d/fftsg.c)
# Remove any duplicates.
CORE_CC_ALL_SRCS := $(sort $(CORE_CC_ALL_SRCS))
CORE_CC_EXCLUDE_SRCS := \
diff --git a/tensorflow/contrib/lite/download_dependencies.sh b/tensorflow/contrib/lite/download_dependencies.sh
index a93ed20..436c3e1 100755
--- a/tensorflow/contrib/lite/download_dependencies.sh
+++ b/tensorflow/contrib/lite/download_dependencies.sh
@@ -30,12 +30,15 @@
fi
EIGEN_URL="$(grep -o 'http.*bitbucket.org/eigen/eigen/get/.*tar\.gz' "${BZL_FILE_PATH}" | grep -v mirror.bazel | head -n1)"
-GEMMLOWP_URL="$(grep -o 'https://mirror.bazel.build/github.com/google/gemmlowp/.*zip' "${BZL_FILE_PATH}" | head -n1)"
+# TODO(yongtang): Replace the following with 'https://mirror.bazel.build/github.com/google/gemmlowp/.*zip' once
+# the archive has been propagated in mirror.bazel.build.
+GEMMLOWP_URL="$(grep -o 'https://github.com/google/gemmlowp/.*zip' "${BZL_FILE_PATH}" | head -n1)"
GOOGLETEST_URL="https://github.com/google/googletest/archive/release-1.8.0.tar.gz"
ABSL_URL="$(grep -o 'https://github.com/abseil/abseil-cpp/.*tar.gz' "${BZL_FILE_PATH}" | head -n1)"
NEON_2_SSE_URL="https://github.com/intel/ARM_NEON_2_x86_SSE/archive/master.zip"
FARMHASH_URL="https://mirror.bazel.build/github.com/google/farmhash/archive/816a4ae622e964763ca0862d9dbd19324a1eaf45.tar.gz"
FLATBUFFERS_URL="https://github.com/google/flatbuffers/archive/master.zip"
+FFT2D_URL="https://mirror.bazel.build/www.kurims.kyoto-u.ac.jp/~ooura/fft.tgz"
# TODO(petewarden): Some new code in Eigen triggers a clang bug with iOS arm64,
# so work around it by patching the source.
@@ -91,6 +94,7 @@
download_and_extract "${NEON_2_SSE_URL}" "${DOWNLOADS_DIR}/neon_2_sse"
download_and_extract "${FARMHASH_URL}" "${DOWNLOADS_DIR}/farmhash"
download_and_extract "${FLATBUFFERS_URL}" "${DOWNLOADS_DIR}/flatbuffers"
+download_and_extract "${FFT2D_URL}" "${DOWNLOADS_DIR}/fft2d"
replace_by_sed 's#static uint32x4_t p4ui_CONJ_XOR = vld1q_u32( conj_XOR_DATA );#static uint32x4_t p4ui_CONJ_XOR; // = vld1q_u32( conj_XOR_DATA ); - Removed by script#' \
"${DOWNLOADS_DIR}/eigen/Eigen/src/Core/arch/NEON/Complex.h"
diff --git a/tensorflow/contrib/lite/examples/ios/camera/tflite_camera_example.xcodeproj/project.pbxproj b/tensorflow/contrib/lite/examples/ios/camera/tflite_camera_example.xcodeproj/project.pbxproj
index b0236e9..98d3b5b 100644
--- a/tensorflow/contrib/lite/examples/ios/camera/tflite_camera_example.xcodeproj/project.pbxproj
+++ b/tensorflow/contrib/lite/examples/ios/camera/tflite_camera_example.xcodeproj/project.pbxproj
@@ -326,10 +326,6 @@
GCC_WARN_UNUSED_VARIABLE = YES;
HEADER_SEARCH_PATHS = (
"$(inherited)",
- ../../../../../../,
- ../../../downloads/flatbuffers/include/,
- ../../../downloads/eigen/,
- ../../../downloads/,
);
IPHONEOS_DEPLOYMENT_TARGET = 8.0;
MTL_ENABLE_DEBUG_INFO = YES;
@@ -373,10 +369,6 @@
GCC_WARN_UNUSED_VARIABLE = YES;
HEADER_SEARCH_PATHS = (
"$(inherited)",
- ../../../../../../,
- ../../../downloads/flatbuffers/include/,
- ../../../downloads/eigen/,
- ../../../downloads/,
);
IPHONEOS_DEPLOYMENT_TARGET = 8.0;
MTL_ENABLE_DEBUG_INFO = NO;
diff --git a/tensorflow/contrib/lite/g3doc/apis.md b/tensorflow/contrib/lite/g3doc/apis.md
index fe208e4..50cc146 100644
--- a/tensorflow/contrib/lite/g3doc/apis.md
+++ b/tensorflow/contrib/lite/g3doc/apis.md
@@ -29,7 +29,7 @@
float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.
interpreter->Invoke();
-float* output = interpreter->type_output_tensor<float>(0);
+float* output = interpreter->typed_output_tensor<float>(0);
```
### Data Alignment
diff --git a/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/Camera2BasicFragment.java b/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/Camera2BasicFragment.java
index 300786c..18f6465 100644
--- a/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/Camera2BasicFragment.java
+++ b/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/Camera2BasicFragment.java
@@ -54,6 +54,9 @@
import android.view.TextureView;
import android.view.View;
import android.view.ViewGroup;
+import android.widget.CompoundButton;
+import android.widget.NumberPicker;
import android.widget.TextView;
import android.widget.Toast;
+import android.widget.ToggleButton;
import java.io.IOException;
@@ -82,6 +85,8 @@
private boolean runClassifier = false;
private boolean checkedPermissions = false;
private TextView textView;
+ private ToggleButton toggle;
+ private NumberPicker np;
private ImageClassifier classifier;
/** Max preview width that is guaranteed by Camera2 API */
@@ -289,6 +294,24 @@
public void onViewCreated(final View view, Bundle savedInstanceState) {
textureView = (AutoFitTextureView) view.findViewById(R.id.texture);
textView = (TextView) view.findViewById(R.id.text);
+ toggle = (ToggleButton) view.findViewById(R.id.button);
+
+ toggle.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
+ public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
+ classifier.setUseNNAPI(isChecked);
+ }
+ });
+
+ np = (NumberPicker) view.findViewById(R.id.np);
+ np.setMinValue(1);
+ np.setMaxValue(10);
+ np.setWrapSelectorWheel(true);
+ np.setOnValueChangedListener(new NumberPicker.OnValueChangeListener() {
+ @Override
+      public void onValueChange(NumberPicker picker, int oldVal, int newVal) {
+ classifier.setNumThreads(newVal);
+ }
+ });
}
/** Load the model and labels. */
diff --git a/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/ImageClassifier.java b/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/ImageClassifier.java
index c57bb34..d32c077 100644
--- a/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/ImageClassifier.java
+++ b/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/ImageClassifier.java
@@ -142,6 +142,16 @@
}
}
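+  /** Enables or disables NNAPI on the underlying interpreter, if one is loaded. */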
+  public void setUseNNAPI(boolean nnapi) {
+ if (tflite != null)
+ tflite.setUseNNAPI(nnapi);
+ }
+
+ public void setNumThreads(int num_threads) {
+ if (tflite != null)
+ tflite.setNumThreads(num_threads);
+ }
+
/** Closes tflite to release resources. */
public void close() {
tflite.close();
diff --git a/tensorflow/contrib/lite/java/demo/app/src/main/res/layout/fragment_camera2_basic.xml b/tensorflow/contrib/lite/java/demo/app/src/main/res/layout/fragment_camera2_basic.xml
index 15305c4..db557ad 100644
--- a/tensorflow/contrib/lite/java/demo/app/src/main/res/layout/fragment_camera2_basic.xml
+++ b/tensorflow/contrib/lite/java/demo/app/src/main/res/layout/fragment_camera2_basic.xml
@@ -22,24 +22,59 @@
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentStart="true"
+ android:layout_alignParentLeft="true"
android:layout_alignParentTop="true" />
<FrameLayout
android:id="@+id/control"
android:layout_width="match_parent"
- android:layout_height="112dp"
+ android:layout_height="135dp"
android:layout_alignParentBottom="true"
android:layout_alignParentStart="true"
+ android:layout_alignParentLeft="true"
+ android:layout_alignParentEnd="true"
+ android:layout_alignParentRight="true"
+ android:layout_marginEnd="150dp"
+ android:layout_marginRight="150dp"
android:background="@color/control_background">
- <TextView android:id="@+id/text"
+ <TextView
+ android:id="@+id/text"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
- android:paddingLeft="80dp"
+ android:paddingLeft="20dp"
android:textColor="#FFF"
android:textSize="20sp"
android:textStyle="bold" />
</FrameLayout>
+ <RelativeLayout
+ android:id="@+id/control2"
+ android:layout_width="match_parent"
+ android:layout_height="135dp"
+ android:layout_alignParentLeft="true"
+ android:layout_alignParentStart="true"
+ android:layout_alignTop="@+id/control"
+ android:layout_marginLeft="300dp"
+ android:layout_marginStart="300dp"
+ android:background="@color/control_background">
+
+ <ToggleButton
+ android:id="@+id/button"
+ android:textOff="@string/tflite"
+ android:textOn="@string/nnapi"
+ android:layout_width="wrap_content"
+ android:layout_height="wrap_content"
+ android:layout_alignParentLeft="true"
+ android:layout_alignParentStart="true" />
+
+ <NumberPicker
+ android:id="@+id/np"
+ android:layout_width="wrap_content"
+ android:layout_height="wrap_content"
+ android:layout_below="@+id/button"
+ android:visibility="visible" />
+ </RelativeLayout>
+
</RelativeLayout>
diff --git a/tensorflow/contrib/lite/java/demo/app/src/main/res/values/strings.xml b/tensorflow/contrib/lite/java/demo/app/src/main/res/values/strings.xml
index a08ec3e..29a033b 100644
--- a/tensorflow/contrib/lite/java/demo/app/src/main/res/values/strings.xml
+++ b/tensorflow/contrib/lite/java/demo/app/src/main/res/values/strings.xml
@@ -21,4 +21,6 @@
<string name="toggle_turn_on">NN:On</string>
<string name="toggle_turn_off">NN:Off</string>
<string name="toggle">Use NNAPI</string>
+ <string name="tflite">tflite</string>
+ <string name="nnapi">NNAPI</string>
</resources>
diff --git a/tensorflow/contrib/lite/java/src/main/java/org/tensorflow/lite/Interpreter.java b/tensorflow/contrib/lite/java/src/main/java/org/tensorflow/lite/Interpreter.java
index e915e65..e84ee71 100644
--- a/tensorflow/contrib/lite/java/src/main/java/org/tensorflow/lite/Interpreter.java
+++ b/tensorflow/contrib/lite/java/src/main/java/org/tensorflow/lite/Interpreter.java
@@ -215,6 +215,13 @@
}
}
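+  /** Sets the number of threads used by the interpreter; throws if it has been closed. */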
+ public void setNumThreads(int num_threads) {
+ if (wrapper == null) {
+ throw new IllegalStateException("The interpreter has already been closed.");
+ }
+ wrapper.setNumThreads(num_threads);
+ }
+
/** Release resources associated with the {@code Interpreter}. */
@Override
public void close() {
diff --git a/tensorflow/contrib/lite/java/src/main/java/org/tensorflow/lite/NativeInterpreterWrapper.java b/tensorflow/contrib/lite/java/src/main/java/org/tensorflow/lite/NativeInterpreterWrapper.java
index dfc8ac1..2fc8037 100644
--- a/tensorflow/contrib/lite/java/src/main/java/org/tensorflow/lite/NativeInterpreterWrapper.java
+++ b/tensorflow/contrib/lite/java/src/main/java/org/tensorflow/lite/NativeInterpreterWrapper.java
@@ -153,6 +153,10 @@
useNNAPI(interpreterHandle, useNNAPI);
}
+ void setNumThreads(int num_threads) {
+ numThreads(interpreterHandle, num_threads);
+ }
+
/** Gets index of an input given its name. */
int getInputIndex(String name) {
if (inputsIndexes == null) {
@@ -324,6 +328,8 @@
private static native void useNNAPI(long interpreterHandle, boolean state);
+ private static native void numThreads(long interpreterHandle, int num_threads);
+
private static native long createErrorReporter(int size);
private static native long createModel(String modelPathOrBuffer, long errorHandle);
diff --git a/tensorflow/contrib/lite/java/src/main/native/nativeinterpreterwrapper_jni.cc b/tensorflow/contrib/lite/java/src/main/native/nativeinterpreterwrapper_jni.cc
index ccfdfd8..45f510d 100644
--- a/tensorflow/contrib/lite/java/src/main/native/nativeinterpreterwrapper_jni.cc
+++ b/tensorflow/contrib/lite/java/src/main/native/nativeinterpreterwrapper_jni.cc
@@ -320,6 +320,16 @@
interpreter->UseNNAPI(static_cast<bool>(state));
}
+JNIEXPORT void JNICALL
+Java_org_tensorflow_lite_NativeInterpreterWrapper_numThreads(JNIEnv* env,
+ jclass clazz,
+ jlong handle,
+ jint num_threads) {
+ tflite::Interpreter* interpreter = convertLongToInterpreter(env, handle);
+ if (interpreter == nullptr) return;
+ interpreter->SetNumThreads(static_cast<int>(num_threads));
+}
+
JNIEXPORT jlong JNICALL
Java_org_tensorflow_lite_NativeInterpreterWrapper_createErrorReporter(
JNIEnv* env, jclass clazz, jint size) {
diff --git a/tensorflow/contrib/lite/java/src/main/native/nativeinterpreterwrapper_jni.h b/tensorflow/contrib/lite/java/src/main/native/nativeinterpreterwrapper_jni.h
index 0e28a77..eaa765c 100644
--- a/tensorflow/contrib/lite/java/src/main/native/nativeinterpreterwrapper_jni.h
+++ b/tensorflow/contrib/lite/java/src/main/native/nativeinterpreterwrapper_jni.h
@@ -61,7 +61,7 @@
/*
* Class: org_tensorflow_lite_NativeInterpreterWrapper
* Method:
- * Signature: (JZ)
+ * Signature: (JZ)V
*/
JNIEXPORT void JNICALL
Java_org_tensorflow_lite_NativeInterpreterWrapper_useNNAPI(JNIEnv* env,
@@ -72,6 +72,16 @@
/*
* Class: org_tensorflow_lite_NativeInterpreterWrapper
* Method:
+ * Signature: (JI)V
+ */
+JNIEXPORT void JNICALL
+Java_org_tensorflow_lite_NativeInterpreterWrapper_numThreads(JNIEnv* env,
+ jclass clazz,
+ jlong handle,
+ jint num_threads);
+/*
+ * Class: org_tensorflow_lite_NativeInterpreterWrapper
+ * Method:
* Signature: (I)J
*/
JNIEXPORT jlong JNICALL
diff --git a/tensorflow/contrib/lite/kernels/add.cc b/tensorflow/contrib/lite/kernels/add.cc
index 63ea89d..e0aa070 100644
--- a/tensorflow/contrib/lite/kernels/add.cc
+++ b/tensorflow/contrib/lite/kernels/add.cc
@@ -176,7 +176,7 @@
output);
} else {
context->ReportError(context,
- "Inputs and outputs not all float|unit8 types.");
+ "Inputs and outputs not all float|uint8 types.");
return kTfLiteError;
}
diff --git a/tensorflow/contrib/lite/kernels/div.cc b/tensorflow/contrib/lite/kernels/div.cc
index 6dd243a..ec380c8 100644
--- a/tensorflow/contrib/lite/kernels/div.cc
+++ b/tensorflow/contrib/lite/kernels/div.cc
@@ -106,6 +106,8 @@
#undef TF_LITE_DIV
}
+
+
template <KernelType kernel_type>
TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
auto* params = reinterpret_cast<TfLiteDivParams*>(node->builtin_data);
@@ -118,7 +120,8 @@
if (output->type == kTfLiteFloat32) {
EvalFloat<kernel_type>(context, node, params, data, input1, input2, output);
} else {
- context->ReportError(context, "Inputs and outputs not all float types.");
+ context->ReportError(context,
+ "Div only supports FLOAT32 and quantized UINT8 now.");
return kTfLiteError;
}
diff --git a/tensorflow/contrib/lite/kernels/internal/optimized/optimized_ops.h b/tensorflow/contrib/lite/kernels/internal/optimized/optimized_ops.h
index d585bcc..9e9aba01 100644
--- a/tensorflow/contrib/lite/kernels/internal/optimized/optimized_ops.h
+++ b/tensorflow/contrib/lite/kernels/internal/optimized/optimized_ops.h
@@ -4374,7 +4374,7 @@
using FixedPointAccum = gemmlowp::FixedPoint<int32, kAccumulationIntegerBits>;
using FixedPoint0 = gemmlowp::FixedPoint<int32, 0>;
- gemmlowp::ScopedProfilingLabel label("Softmax/8bit");
+gemmlowp::ScopedProfilingLabel label("Softmax/8bit");
const int batches = MatchingArraySize(input_dims, 3, output_dims, 3);
const int height = MatchingArraySize(input_dims, 2, output_dims, 2);
const int width = MatchingArraySize(input_dims, 1, output_dims, 1);
diff --git a/tensorflow/contrib/lite/kernels/internal/reference/reference_ops.h b/tensorflow/contrib/lite/kernels/internal/reference/reference_ops.h
index ae295cc..4c8cbe4 100644
--- a/tensorflow/contrib/lite/kernels/internal/reference/reference_ops.h
+++ b/tensorflow/contrib/lite/kernels/internal/reference/reference_ops.h
@@ -1403,6 +1403,33 @@
output_data, output_dims);
}
+inline void Div(const float* input1_data, const Dims<4>& input1_dims,
+ const float* input2_data, const Dims<4>& input2_dims,
+ float output_activation_min, float output_activation_max,
+ float* output_data, const Dims<4>& output_dims) {
+ const int batches =
+ MatchingArraySize(input1_dims, 3, input2_dims, 3, output_dims, 3);
+ const int height =
+ MatchingArraySize(input1_dims, 2, input2_dims, 2, output_dims, 2);
+ const int width =
+ MatchingArraySize(input1_dims, 1, input2_dims, 1, output_dims, 1);
+ const int depth =
+ MatchingArraySize(input1_dims, 0, input2_dims, 0, output_dims, 0);
+ for (int b = 0; b < batches; ++b) {
+ for (int y = 0; y < height; ++y) {
+ for (int x = 0; x < width; ++x) {
+ for (int c = 0; c < depth; ++c) {
+ output_data[Offset(output_dims, c, x, y, b)] =
+ ActivationFunctionWithMinMax(
+ input1_data[Offset(input1_dims, c, x, y, b)] /
+ input2_data[Offset(input2_dims, c, x, y, b)],
+ output_activation_min, output_activation_max);
+ }
+ }
+ }
+ }
+}
+
// TODO(jiawen): We can implement BroadcastDiv on buffers of arbitrary
// dimensionality if the runtime code does a single loop over one dimension
// that handles broadcasting as the base case. The code generator would then
@@ -1444,18 +1471,6 @@
}
}
-inline void Div(const float* input1_data, const Dims<4>& input1_dims,
- const float* input2_data, const Dims<4>& input2_dims,
- float output_activation_min, float output_activation_max,
- float* output_data, const Dims<4>& output_dims) {
- const int flat_size = MatchingFlatSize(input1_dims, input2_dims, output_dims);
- for (int i = 0; i < flat_size; ++i) {
- output_data[i] = ActivationFunctionWithMinMax(
- input1_data[i] / input2_data[i], output_activation_min,
- output_activation_max);
- }
-}
-
inline void Sub(const float* input1_data, const Dims<4>& input1_dims,
const float* input2_data, const Dims<4>& input2_dims,
float output_activation_min, float output_activation_max,
diff --git a/tensorflow/contrib/lite/kernels/sub.cc b/tensorflow/contrib/lite/kernels/sub.cc
index 66b06ae..7c60a4f 100644
--- a/tensorflow/contrib/lite/kernels/sub.cc
+++ b/tensorflow/contrib/lite/kernels/sub.cc
@@ -174,7 +174,8 @@
EvalQuantized<kernel_type>(context, node, params, data, input1, input2,
output);
} else {
- context->ReportError(context, "Inputs and outputs not all float types.");
+ context->ReportError(context,
+ "Inputs and outputs not all float|uint8 types.");
return kTfLiteError;
}
diff --git a/tensorflow/contrib/lite/toco/graph_transformations/resolve_tensorflow_merge.cc b/tensorflow/contrib/lite/toco/graph_transformations/resolve_tensorflow_merge.cc
index 477e7f1..38e0005 100644
--- a/tensorflow/contrib/lite/toco/graph_transformations/resolve_tensorflow_merge.cc
+++ b/tensorflow/contrib/lite/toco/graph_transformations/resolve_tensorflow_merge.cc
@@ -32,7 +32,7 @@
}
// We need to yield until this Merge node has only 1 input, which will mean
- // that that is the selected input. Other graph transformations on other nodes
+  // that it is the selected input. Other graph transformations on other nodes
// such as ResolveTensorFlowSwitch, will take care of trimming the
// non-selected inputs, so that at some point there will be only 1 input left.
if (merge_op->inputs.size() > 1) {
diff --git a/tensorflow/contrib/lite/toco/model.h b/tensorflow/contrib/lite/toco/model.h
index 705a9d6..482cc71 100644
--- a/tensorflow/contrib/lite/toco/model.h
+++ b/tensorflow/contrib/lite/toco/model.h
@@ -152,9 +152,9 @@
};
// The type of the scalars in an array.
-// Note that that does not by itself tell whether the values in the array are
-// real (are literally interpreted as real numbers) or quantized (only acquire
-// a meaning as real numbers in conjunction with QuantizationParams).
+// Note that the type does not by itself tell whether the values in the array
+// are real (are literally interpreted as real numbers) or quantized (only
+// acquire a meaning as real numbers in conjunction with QuantizationParams).
//
// In practice though:
// float values are always real
diff --git a/tensorflow/contrib/losses/python/losses/loss_ops.py b/tensorflow/contrib/losses/python/losses/loss_ops.py
index 8c3a8af..bdad34a 100644
--- a/tensorflow/contrib/losses/python/losses/loss_ops.py
+++ b/tensorflow/contrib/losses/python/losses/loss_ops.py
@@ -29,6 +29,7 @@
from tensorflow.python.ops import nn_ops
from tensorflow.python.util.deprecation import deprecated
from tensorflow.python.util.deprecation import deprecated_args
+from tensorflow.python.util.deprecation import deprecated_argument_lookup
__all__ = [
"absolute_difference", "add_loss", "cosine_distance",
@@ -651,11 +652,9 @@
ValueError: If `predictions` shape doesn't match `labels` shape, or
`weights` is `None`.
"""
- if dim is not None:
- if axis is not None:
- raise ValueError("Cannot specify both 'axis' and 'dim'")
- axis = dim
- if axis is None and dim is None:
+ axis = deprecated_argument_lookup(
+ "axis", axis, "dim", dim)
+ if axis is None:
raise ValueError("You must specify 'axis'.")
with ops.name_scope(scope, "cosine_distance_loss",
[predictions, labels, weights]) as scope:
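`deprecated_argument_lookup` replaces the hand-rolled branching above: it returns the new argument's value, falls back to the deprecated one, and raises if both are supplied. A minimal sketch:

```python
from tensorflow.python.util.deprecation import deprecated_argument_lookup

def f(axis=None, dim=None):
    axis = deprecated_argument_lookup("axis", axis, "dim", dim)
    if axis is None:
        raise ValueError("You must specify 'axis'.")
    return axis

f(axis=1)          # -> 1
f(dim=1)           # -> 1, via the deprecated spelling
# f(axis=1, dim=1) # -> ValueError: both arguments specified
```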
diff --git a/tensorflow/contrib/losses/python/metric_learning/metric_loss_ops.py b/tensorflow/contrib/losses/python/metric_learning/metric_loss_ops.py
index 2b9eee4..de76acb 100644
--- a/tensorflow/contrib/losses/python/metric_learning/metric_loss_ops.py
+++ b/tensorflow/contrib/losses/python/metric_learning/metric_loss_ops.py
@@ -711,7 +711,7 @@
candidate_scores, margin_multiplier * nmi_scores)
argmax_index = math_ops.to_int32(
- math_ops.argmax(candidate_scores, dimension=0))
+ math_ops.argmax(candidate_scores, axis=0))
return candidate_ids[argmax_index]
@@ -811,7 +811,7 @@
candidate_scores = math_ops.add(scores_fac, margin_multiplier * scores_margin)
argmax_index = math_ops.to_int32(
- math_ops.argmax(candidate_scores, dimension=0))
+ math_ops.argmax(candidate_scores, axis=0))
best_medoid = math_ops.to_int32(cluster_member_ids[argmax_index])
chosen_ids = update_1d_tensor(chosen_ids, cluster_idx, best_medoid)
diff --git a/tensorflow/contrib/makefile/download_dependencies.sh b/tensorflow/contrib/makefile/download_dependencies.sh
index 48953e2..eff9081 100755
--- a/tensorflow/contrib/makefile/download_dependencies.sh
+++ b/tensorflow/contrib/makefile/download_dependencies.sh
@@ -27,7 +27,9 @@
fi
EIGEN_URL="$(grep -o 'http.*bitbucket.org/eigen/eigen/get/.*tar\.gz' "${BZL_FILE_PATH}" | grep -v mirror.bazel | head -n1)"
-GEMMLOWP_URL="$(grep -o 'https://mirror.bazel.build/github.com/google/gemmlowp/.*zip' "${BZL_FILE_PATH}" | head -n1)"
+# TODO(yongtang): Replace the following with 'https://mirror.bazel.build/github.com/google/gemmlowp/.*zip' once
+# the archive has been propagated in mirror.bazel.build.
+GEMMLOWP_URL="$(grep -o 'https://github.com/google/gemmlowp/.*zip' "${BZL_FILE_PATH}" | head -n1)"
GOOGLETEST_URL="https://github.com/google/googletest/archive/release-1.8.0.tar.gz"
NSYNC_URL="$(grep -o 'https://mirror.bazel.build/github.com/google/nsync/.*tar\.gz' "${BZL_FILE_PATH}" | head -n1)"
PROTOBUF_URL="$(grep -o 'https://mirror.bazel.build/github.com/google/protobuf/.*tar\.gz' "${BZL_FILE_PATH}" | head -n1)"
diff --git a/tensorflow/contrib/meta_graph_transform/meta_graph_transform.py b/tensorflow/contrib/meta_graph_transform/meta_graph_transform.py
index 4090c1f..f37a259 100644
--- a/tensorflow/contrib/meta_graph_transform/meta_graph_transform.py
+++ b/tensorflow/contrib/meta_graph_transform/meta_graph_transform.py
@@ -348,7 +348,7 @@
input_saver_def, input_checkpoint):
"""Converts all variables in a graph and checkpoint into constants.
- During this process, we need to retain certain initialzer nodes (e.g. table
+ During this process, we need to retain certain initializer nodes (e.g. table
initializer nodes). Instead of determining which dependencies
of the shared initializer node (e.g. group_deps) to keep, we
reconstruct the connections between the individual initializer nodes and
diff --git a/tensorflow/contrib/metrics/python/ops/metric_ops.py b/tensorflow/contrib/metrics/python/ops/metric_ops.py
index 5364e30..00a933e 100644
--- a/tensorflow/contrib/metrics/python/ops/metric_ops.py
+++ b/tensorflow/contrib/metrics/python/ops/metric_ops.py
@@ -2834,7 +2834,9 @@
name=name)
-@deprecated(None, 'Please switch to tf.metrics.mean.')
+@deprecated(None,
+ 'Please switch to tf.metrics.mean_absolute_error. Note that the '
+ 'order of the labels and predictions arguments has been switched.')
def streaming_mean_absolute_error(predictions,
labels,
weights=None,
@@ -2953,7 +2955,9 @@
updates_collections=updates_collections,
name=name)
-
+@deprecated(None,
+ 'Please switch to tf.metrics.mean_squared_error. Note that the '
+ 'order of the labels and predictions arguments has been switched.')
def streaming_mean_squared_error(predictions,
labels,
weights=None,
@@ -3011,7 +3015,10 @@
updates_collections=updates_collections,
name=name)
-
+@deprecated(
+ None,
+ 'Please switch to tf.metrics.root_mean_squared_error. Note that the '
+ 'order of the labels and predictions arguments has been switched.')
def streaming_root_mean_squared_error(predictions,
labels,
weights=None,
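
A hedged migration sketch for the deprecations above; the tensors are illustrative, and the contrib call appears only in a comment. The key point is the swapped argument order:

```python
import tensorflow as tf

labels = tf.constant([1.0, 2.0, 3.0])
predictions = tf.constant([1.1, 1.9, 3.2])

# Deprecated contrib form takes (predictions, labels):
#   tf.contrib.metrics.streaming_mean_absolute_error(predictions, labels)
# The tf.metrics replacement takes (labels, predictions):
mae, update_op = tf.metrics.mean_absolute_error(labels, predictions)
```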
@@ -3351,7 +3358,7 @@
radial_diffs = math_ops.reduce_sum(
radial_diffs, reduction_indices=[
dim,
- ], keep_dims=True)
+ ], keepdims=True)
mean_distance, update_op = streaming_mean(radial_diffs, weights, None, None,
name or 'mean_cosine_distance')
mean_distance = math_ops.subtract(1.0, mean_distance)
diff --git a/tensorflow/contrib/nn/python/ops/sampling_ops.py b/tensorflow/contrib/nn/python/ops/sampling_ops.py
index 63fc487..e659256 100644
--- a/tensorflow/contrib/nn/python/ops/sampling_ops.py
+++ b/tensorflow/contrib/nn/python/ops/sampling_ops.py
@@ -88,7 +88,7 @@
return math_ops.reduce_logsumexp(
math_ops.matmul(embeddings, reweighted_inputs, transpose_b=True),
axis=1,
- keep_dims=False)
+ keepdims=False)
# Calling this protected form of embedding_lookup allows co-locating
# the logsumexp computation with the partitioned weights, which yields
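
A minimal sketch of the `keep_dims` -> `keepdims` rename applied in these hunks, assuming a TF version where the NumPy-style spelling is available; the behavior is unchanged:

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
row_sums = tf.reduce_sum(x, axis=1, keepdims=True)  # shape (2, 1), rank kept
```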
diff --git a/tensorflow/contrib/opt/BUILD b/tensorflow/contrib/opt/BUILD
index c57c5e3..612ecc3 100644
--- a/tensorflow/contrib/opt/BUILD
+++ b/tensorflow/contrib/opt/BUILD
@@ -14,6 +14,7 @@
name = "opt_py",
srcs = [
"__init__.py",
+ "python/training/adamax.py",
"python/training/addsign.py",
"python/training/drop_stale_gradient_optimizer.py",
"python/training/elastic_average_optimizer.py",
@@ -43,12 +44,28 @@
"//tensorflow/python:util",
"//tensorflow/python:variable_scope",
"//tensorflow/python:variables",
+ "//tensorflow/python/eager:context",
"//third_party/py/numpy",
"@six_archive//:six",
],
)
py_test(
+ name = "adamax_test",
+ srcs = ["python/training/adamax_test.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":opt_py",
+ "//tensorflow/python:array_ops",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:framework_for_generated_wrappers",
+ "//tensorflow/python:math_ops",
+ "//tensorflow/python:training",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_test(
name = "external_optimizer_test",
srcs = ["python/training/external_optimizer_test.py"],
srcs_version = "PY2AND3",
diff --git a/tensorflow/contrib/opt/__init__.py b/tensorflow/contrib/opt/__init__.py
index 6c1bb1a..4c13c8e 100644
--- a/tensorflow/contrib/opt/__init__.py
+++ b/tensorflow/contrib/opt/__init__.py
@@ -19,6 +19,7 @@
from __future__ import print_function
# pylint: disable=wildcard-import
+from tensorflow.contrib.opt.python.training.adamax import *
from tensorflow.contrib.opt.python.training.addsign import *
from tensorflow.contrib.opt.python.training.drop_stale_gradient_optimizer import *
from tensorflow.contrib.opt.python.training.external_optimizer import *
@@ -36,6 +37,7 @@
_allowed_symbols = [
+ 'AdaMaxOptimizer',
'PowerSignOptimizer',
'AddSignOptimizer',
'DelayCompensatedGradientDescentOptimizer',
diff --git a/tensorflow/contrib/opt/python/training/adamax.py b/tensorflow/contrib/opt/python/training/adamax.py
new file mode 100644
index 0000000..686bac0
--- /dev/null
+++ b/tensorflow/contrib/opt/python/training/adamax.py
@@ -0,0 +1,191 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+"""AdaMax for TensorFlow."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.eager import context
+from tensorflow.python.framework import ops
+from tensorflow.python.ops import array_ops
+from tensorflow.python.ops import control_flow_ops
+from tensorflow.python.ops import math_ops
+from tensorflow.python.ops import resource_variable_ops
+from tensorflow.python.ops import state_ops
+from tensorflow.python.training import adam
+from tensorflow.python.training import training_ops
+
+
+class AdaMaxOptimizer(adam.AdamOptimizer):
+ """Optimizer that implements the AdaMax algorithm.
+
+  AdaMax is sometimes superior to Adam, especially in models with embeddings;
+ see [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)
+ ([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
+ """
+
+ def __init__(self, learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8,
+ use_locking=False, name="AdaMax"):
+ """Construct a new AdaMax optimizer.
+
+ Initialization:
+
+ ```
+    m_0 <- 0 (Initialize the 1st moment vector)
+ v_0 <- 0 (Initialize the exponentially weighted infinity norm)
+ t <- 0 (Initialize timestep)
+ ```
+
+ The update rule for `variable` with gradient `g` uses an optimization
+ described at the end of section 7.1 of the paper:
+
+ ```
+ t <- t + 1
+
+ m_t <- beta1 * m_{t-1} + (1 - beta1) * g
+ v_t <- max(beta2 * v_{t-1}, abs(g))
+ variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon)
+ ```
+
+    As in AdamOptimizer, epsilon is added for numerical stability
+    (especially to avoid division by zero when v_t = 0).
+
+    In contrast to AdamOptimizer, the sparse implementation of this algorithm
+    (used when the gradient is an IndexedSlices object, typically because of
+    `tf.gather` or an embedding lookup in the forward pass) only updates
+    variable slices and the corresponding `m_t`, `v_t` terms when that part of
+    the variable was used in the forward pass. This differs from the dense
+    behavior (and is similar to some momentum implementations that skip
+    momentum updates unless a variable slice was actually used).
+
+ Args:
+ learning_rate: A Tensor or a floating point value. The learning rate.
+ beta1: A float value or a constant float tensor.
+ The exponential decay rate for the 1st moment estimates.
+ beta2: A float value or a constant float tensor.
+ The exponential decay rate for the exponentially weighted infinity norm.
+ epsilon: A small constant for numerical stability.
+ use_locking: If True use locks for update operations.
+ name: Optional name for the operations created when applying gradients.
+ Defaults to "AdaMax".
+ """
+ super(AdaMaxOptimizer, self).__init__(learning_rate, beta1, beta2,
+ epsilon, use_locking, name)
+
+ def _get_beta_accumulators(self):
+ if context.executing_eagerly():
+ graph = None
+ else:
+ graph = ops.get_default_graph()
+ return self._get_non_slot_variable("beta1_power", graph=graph)
+
+ def _create_slots(self, var_list):
+ # Create the beta1 accumulators on the same device as the first
+ # variable. Sort the var_list to make sure this device is consistent across
+ # workers (these need to go on the same PS, otherwise some updates are
+ # silently ignored).
+ first_var = min(var_list, key=lambda x: x.name)
+ self._create_non_slot_variable(initial_value=self._beta1,
+ name="beta1_power",
+ colocate_with=first_var)
+
+ # Create slots for the first and second moments.
+ for v in var_list:
+ self._zeros_slot(v, "m", self._name)
+ self._zeros_slot(v, "v", self._name)
+
+ def _apply_dense(self, grad, var):
+ m = self.get_slot(var, "m")
+ v = self.get_slot(var, "v")
+ beta1_power = self._get_beta_accumulators()
+ return training_ops.apply_ada_max(
+ var, m, v,
+ math_ops.cast(beta1_power, var.dtype.base_dtype),
+ math_ops.cast(self._lr_t, var.dtype.base_dtype),
+ math_ops.cast(self._beta1_t, var.dtype.base_dtype),
+ math_ops.cast(self._beta2_t, var.dtype.base_dtype),
+ math_ops.cast(self._epsilon_t, var.dtype.base_dtype),
+ grad, use_locking=self._use_locking).op
+
+ def _resource_apply_dense(self, grad, var):
+ m = self.get_slot(var, "m")
+ v = self.get_slot(var, "v")
+ beta1_power = self._get_beta_accumulators()
+ return training_ops.resource_apply_ada_max(
+ var.handle, m.handle, v.handle,
+ math_ops.cast(beta1_power, grad.dtype.base_dtype),
+ math_ops.cast(self._lr_t, grad.dtype.base_dtype),
+ math_ops.cast(self._beta1_t, grad.dtype.base_dtype),
+ math_ops.cast(self._beta2_t, grad.dtype.base_dtype),
+ math_ops.cast(self._epsilon_t, grad.dtype.base_dtype),
+ grad, use_locking=self._use_locking)
+
+ def _apply_sparse_shared(self, grad, var, indices,
+ scatter_add, scatter_update):
+ beta1_power = self._get_beta_accumulators()
+ beta1_power = math_ops.cast(beta1_power, var.dtype.base_dtype)
+ lr_t = math_ops.cast(self._lr_t, var.dtype.base_dtype)
+ beta1_t = math_ops.cast(self._beta1_t, var.dtype.base_dtype)
+ beta2_t = math_ops.cast(self._beta2_t, var.dtype.base_dtype)
+ epsilon_t = math_ops.cast(self._epsilon_t, var.dtype.base_dtype)
+ # m_t = beta1 * m + (1 - beta1) * g_t
+ m = self.get_slot(var, "m")
+ m_slice = array_ops.gather(m, indices)
+ m_t_slice = m_slice * beta1_t + grad * (1 - beta1_t)
+ with ops.control_dependencies([m_t_slice]):
+ m_t = scatter_update(m, indices, m_t_slice)
+ # u_t = max(beta2 * u, abs(g_t))
+ v = self.get_slot(var, "v")
+ v_slice = array_ops.gather(v, indices)
+ v_t_slice = math_ops.maximum(v_slice * beta2_t, math_ops.abs(grad))
+ with ops.control_dependencies([v_t_slice]):
+ v_t = scatter_update(v, indices, v_t_slice)
+ # theta_t = theta - lr / (1 - beta1^t) * m_t / u_t
+ var_slice = -lr_t / (1 - beta1_power) * (m_t_slice /
+ (v_t_slice + epsilon_t))
+ with ops.control_dependencies([var_slice]):
+ var_update = scatter_add(var, indices, var_slice)
+ return control_flow_ops.group(*[var_update, m_t, v_t])
+
+ def _apply_sparse(self, grad, var):
+ return self._apply_sparse_shared(
+ grad.values, var, grad.indices,
+ lambda x, i, v: state_ops.scatter_add( # pylint: disable=g-long-lambda
+ x, i, v, use_locking=self._use_locking),
+ lambda x, i, v: state_ops.scatter_update( # pylint: disable=g-long-lambda
+ x, i, v, use_locking=self._use_locking))
+
+ def _resource_scatter_update(self, x, i, v):
+ with ops.control_dependencies(
+ [resource_variable_ops.resource_scatter_update(
+ x.handle, i, v)]):
+ return x.value()
+
+ def _resource_apply_sparse(self, grad, var, indices):
+ return self._apply_sparse_shared(
+ grad, var, indices,
+ self._resource_scatter_add, self._resource_scatter_update)
+
+ def _finish(self, update_ops, name_scope):
+ # Update the power accumulators.
+ with ops.control_dependencies(update_ops):
+ beta1_power = self._get_beta_accumulators()
+ with ops.colocate_with(beta1_power):
+ update_beta1 = beta1_power.assign(
+ beta1_power * self._beta1_t, use_locking=self._use_locking)
+ return control_flow_ops.group(*update_ops + [update_beta1],
+ name=name_scope)
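
A minimal usage sketch for the new optimizer, assuming the `tf.contrib.opt` export added in this change; the toy quadratic loss is illustrative:

```python
import tensorflow as tf
from tensorflow.contrib.opt import AdaMaxOptimizer

w = tf.Variable([1.0, 2.0])
loss = tf.reduce_sum(tf.square(w))  # toy quadratic loss
train_op = AdaMaxOptimizer(learning_rate=0.001).minimize(loss)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  for _ in range(3):  # three AdaMax steps
    sess.run(train_op)
```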
diff --git a/tensorflow/contrib/opt/python/training/adamax_test.py b/tensorflow/contrib/opt/python/training/adamax_test.py
new file mode 100644
index 0000000..bc92a70
--- /dev/null
+++ b/tensorflow/contrib/opt/python/training/adamax_test.py
@@ -0,0 +1,348 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for AdaMax."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+
+from tensorflow.contrib.opt.python.training import adamax
+from tensorflow.python.client import session
+from tensorflow.python.eager import context
+from tensorflow.python.framework import constant_op
+from tensorflow.python.framework import dtypes
+from tensorflow.python.framework import ops
+from tensorflow.python.framework import test_util
+from tensorflow.python.ops import array_ops
+from tensorflow.python.ops import math_ops
+from tensorflow.python.ops import resource_variable_ops
+from tensorflow.python.ops import variables
+from tensorflow.python.platform import test
+
+
+def adamax_update_numpy(param,
+ g_t,
+ t,
+ m,
+ v,
+ alpha=0.001,
+ beta1=0.9,
+ beta2=0.999,
+ epsilon=1e-8):
+ m_t = beta1 * m + (1 - beta1) * g_t
+ v_t = np.maximum(beta2 * v, np.abs(g_t))
+ param_t = param - (alpha / (1 - beta1**t)) * (m_t / (v_t + epsilon))
+ return param_t, m_t, v_t
+
+
+def adamax_sparse_update_numpy(param,
+ indices,
+ g_t,
+ t,
+ m,
+ v,
+ alpha=0.001,
+ beta1=0.9,
+ beta2=0.999,
+ epsilon=1e-8):
+ m_t, v_t, param_t = np.copy(m), np.copy(v), np.copy(param)
+ m_t_slice = beta1 * m[indices] + (1 - beta1) * g_t
+ v_t_slice = np.maximum(beta2 * v[indices], np.abs(g_t))
+ param_t_slice = param[indices] - ((alpha / (1 - beta1**t)) *
+ (m_t_slice / (v_t_slice + epsilon)))
+ m_t[indices] = m_t_slice
+ v_t[indices] = v_t_slice
+ param_t[indices] = param_t_slice
+ return param_t, m_t, v_t
+
+
+class AdaMaxOptimizerTest(test.TestCase):
+
+ def doTestSparse(self, use_resource=False):
+ for dtype in [dtypes.half, dtypes.float32, dtypes.float64]:
+ with self.test_session():
+ # Initialize variables for numpy implementation.
+ zero_slots = lambda: np.zeros((3), dtype=dtype.as_numpy_dtype)
+ m0, v0, m1, v1 = zero_slots(), zero_slots(), zero_slots(), zero_slots()
+ var0_np = np.array([1.0, 2.0, 3.0], dtype=dtype.as_numpy_dtype)
+ grads0_np = np.array([0.1, 0.1], dtype=dtype.as_numpy_dtype)
+ var1_np = np.array([4.0, 5.0, 6.0], dtype=dtype.as_numpy_dtype)
+ grads1_np = np.array([0.01, 0.01], dtype=dtype.as_numpy_dtype)
+
+ if use_resource:
+ var0 = resource_variable_ops.ResourceVariable(var0_np)
+ var1 = resource_variable_ops.ResourceVariable(var1_np)
+ else:
+ var0 = variables.Variable(var0_np)
+ var1 = variables.Variable(var1_np)
+ grads0_np_indices = np.array([0, 1], dtype=np.int32)
+ grads0 = ops.IndexedSlices(
+ constant_op.constant(grads0_np),
+ constant_op.constant(grads0_np_indices), constant_op.constant([2]))
+ grads1_np_indices = np.array([2, 1], dtype=np.int32)
+ grads1 = ops.IndexedSlices(
+ constant_op.constant(grads1_np),
+ constant_op.constant(grads1_np_indices), constant_op.constant([2]))
+ opt = adamax.AdaMaxOptimizer()
+ update = opt.apply_gradients(zip([grads0, grads1], [var0, var1]))
+ variables.global_variables_initializer().run()
+
+ # Fetch params to validate initial values
+ self.assertAllClose([1.0, 2.0, 3.0], var0.eval())
+ self.assertAllClose([4.0, 5.0, 6.0], var1.eval())
+
+ beta1_power = opt._get_beta_accumulators()
+
+ # Run 3 steps of AdaMax
+ for t in range(1, 4):
+ self.assertAllCloseAccordingToType(0.9**t, beta1_power.eval())
+ update.run()
+
+ var0_np, m0, v0 = adamax_sparse_update_numpy(
+ var0_np, grads0_np_indices, grads0_np, t, m0, v0)
+ var1_np, m1, v1 = adamax_sparse_update_numpy(
+ var1_np, grads1_np_indices, grads1_np, t, m1, v1)
+
+ # Validate updated params
+ self.assertAllCloseAccordingToType(var0_np, var0.eval())
+ self.assertAllCloseAccordingToType(var1_np, var1.eval())
+
+ def testSparse(self):
+ self.doTestSparse(use_resource=False)
+
+ def testResourceSparse(self):
+ self.doTestSparse(use_resource=True)
+
+ def testSparseDevicePlacement(self):
+ for index_dtype in [dtypes.int32, dtypes.int64]:
+ with self.test_session(force_gpu=test.is_gpu_available()):
+ # If a GPU is available, tests that all optimizer ops can be placed on
+ # it (i.e. they have GPU kernels).
+ var = variables.Variable([[1.0], [2.0]])
+ indices = constant_op.constant([0, 1], dtype=index_dtype)
+ gathered_sum = math_ops.reduce_sum(array_ops.gather(var, indices))
+ optimizer = adamax.AdaMaxOptimizer(3.0)
+ minimize_op = optimizer.minimize(gathered_sum)
+ variables.global_variables_initializer().run()
+ minimize_op.run()
+
+ def testSparseRepeatedIndices(self):
+ for dtype in [dtypes.half, dtypes.float32, dtypes.float64]:
+ with self.test_session():
+ repeated_index_update_var = variables.Variable(
+ [[1.0], [2.0]], dtype=dtype)
+ aggregated_update_var = variables.Variable(
+ [[1.0], [2.0]], dtype=dtype)
+ grad_repeated_index = ops.IndexedSlices(
+ constant_op.constant(
+ [0.1, 0.1], shape=[2, 1], dtype=dtype),
+ constant_op.constant([1, 1]),
+ constant_op.constant([2, 1]))
+ grad_aggregated = ops.IndexedSlices(
+ constant_op.constant(
+ [0.2], shape=[1, 1], dtype=dtype),
+ constant_op.constant([1]),
+ constant_op.constant([2, 1]))
+ repeated_update = adamax.AdaMaxOptimizer().apply_gradients(
+ [(grad_repeated_index, repeated_index_update_var)])
+ aggregated_update = adamax.AdaMaxOptimizer().apply_gradients(
+ [(grad_aggregated, aggregated_update_var)])
+ variables.global_variables_initializer().run()
+ self.assertAllClose(aggregated_update_var.eval(),
+ repeated_index_update_var.eval())
+ for _ in range(3):
+ repeated_update.run()
+ aggregated_update.run()
+ self.assertAllClose(aggregated_update_var.eval(),
+ repeated_index_update_var.eval())
+
+ def doTestBasic(self, use_resource=False):
+ for i, dtype in enumerate([dtypes.half, dtypes.float32, dtypes.float64]):
+ with self.test_session(graph=ops.Graph()):
+ # Initialize variables for numpy implementation.
+ m0, v0, m1, v1 = 0.0, 0.0, 0.0, 0.0
+ var0_np = np.array([1.0, 2.0], dtype=dtype.as_numpy_dtype)
+ grads0_np = np.array([0.1, 0.1], dtype=dtype.as_numpy_dtype)
+ var1_np = np.array([3.0, 4.0], dtype=dtype.as_numpy_dtype)
+ grads1_np = np.array([0.01, 0.01], dtype=dtype.as_numpy_dtype)
+
+ if use_resource:
+ var0 = resource_variable_ops.ResourceVariable(
+ var0_np, name="var0_%d" % i)
+ var1 = resource_variable_ops.ResourceVariable(
+ var1_np, name="var1_%d" % i)
+ else:
+ var0 = variables.Variable(var0_np)
+ var1 = variables.Variable(var1_np)
+ grads0 = constant_op.constant(grads0_np)
+ grads1 = constant_op.constant(grads1_np)
+
+ opt = adamax.AdaMaxOptimizer()
+ update = opt.apply_gradients(zip([grads0, grads1], [var0, var1]))
+ opt_variables = opt.variables()
+ beta1_power = opt._get_beta_accumulators()
+ self.assertTrue(beta1_power is not None)
+ self.assertIn(beta1_power, opt_variables)
+
+ with ops.Graph().as_default():
+ # Shouldn't return non-slot variables from other graphs.
+ self.assertEqual(0, len(opt.variables()))
+
+ if not context.executing_eagerly():
+ self.evaluate(variables.global_variables_initializer())
+ # Fetch params to validate initial values
+ self.assertAllClose([1.0, 2.0], self.evaluate(var0))
+ self.assertAllClose([3.0, 4.0], self.evaluate(var1))
+
+ beta1_power = opt._get_beta_accumulators()
+
+ # Run 3 steps of AdaMax
+ for t in range(1, 4):
+ if not context.executing_eagerly():
+ self.evaluate(update)
+ elif t > 1:
+ opt.apply_gradients(zip([grads0, grads1], [var0, var1]))
+
+ self.assertAllCloseAccordingToType(0.9**(t + 1),
+ self.evaluate(beta1_power))
+
+ var0_np, m0, v0 = adamax_update_numpy(var0_np, grads0_np, t, m0, v0)
+ var1_np, m1, v1 = adamax_update_numpy(var1_np, grads1_np, t, m1, v1)
+
+ # Validate updated params
+ self.assertAllCloseAccordingToType(var0_np, self.evaluate(var0))
+ self.assertAllCloseAccordingToType(var1_np, self.evaluate(var1))
+ if use_resource:
+ self.assertEqual("var0_%d/AdaMax:0" % (i,),
+ opt.get_slot(var=var0, name="m").name)
+
+ def testBasic(self):
+ with self.test_session():
+ self.doTestBasic(use_resource=False)
+
+ @test_util.run_in_graph_and_eager_modes(reset_test=True)
+ def testResourceBasic(self):
+ self.doTestBasic(use_resource=True)
+
+ def testTensorLearningRate(self):
+ for dtype in [dtypes.half, dtypes.float32, dtypes.float64]:
+ with self.test_session():
+ # Initialize variables for numpy implementation.
+ m0, v0, m1, v1 = 0.0, 0.0, 0.0, 0.0
+ var0_np = np.array([1.0, 2.0], dtype=dtype.as_numpy_dtype)
+ grads0_np = np.array([0.1, 0.1], dtype=dtype.as_numpy_dtype)
+ var1_np = np.array([3.0, 4.0], dtype=dtype.as_numpy_dtype)
+ grads1_np = np.array([0.01, 0.01], dtype=dtype.as_numpy_dtype)
+
+ var0 = variables.Variable(var0_np)
+ var1 = variables.Variable(var1_np)
+ grads0 = constant_op.constant(grads0_np)
+ grads1 = constant_op.constant(grads1_np)
+ opt = adamax.AdaMaxOptimizer(constant_op.constant(0.001))
+ update = opt.apply_gradients(zip([grads0, grads1], [var0, var1]))
+ variables.global_variables_initializer().run()
+
+ # Fetch params to validate initial values
+ self.assertAllClose([1.0, 2.0], var0.eval())
+ self.assertAllClose([3.0, 4.0], var1.eval())
+
+ beta1_power = opt._get_beta_accumulators()
+
+ # Run 3 steps of AdaMax
+ for t in range(1, 4):
+ self.assertAllCloseAccordingToType(0.9**t, beta1_power.eval())
+ update.run()
+
+ var0_np, m0, v0 = adamax_update_numpy(var0_np, grads0_np, t, m0, v0)
+ var1_np, m1, v1 = adamax_update_numpy(var1_np, grads1_np, t, m1, v1)
+
+ # Validate updated params
+ self.assertAllCloseAccordingToType(var0_np, var0.eval())
+ self.assertAllCloseAccordingToType(var1_np, var1.eval())
+
+ def testSharing(self):
+ for dtype in [dtypes.half, dtypes.float32, dtypes.float64]:
+ with self.test_session():
+ # Initialize variables for numpy implementation.
+ m0, v0, m1, v1 = 0.0, 0.0, 0.0, 0.0
+ var0_np = np.array([1.0, 2.0], dtype=dtype.as_numpy_dtype)
+ grads0_np = np.array([0.1, 0.1], dtype=dtype.as_numpy_dtype)
+ var1_np = np.array([3.0, 4.0], dtype=dtype.as_numpy_dtype)
+ grads1_np = np.array([0.01, 0.01], dtype=dtype.as_numpy_dtype)
+
+ var0 = variables.Variable(var0_np)
+ var1 = variables.Variable(var1_np)
+ grads0 = constant_op.constant(grads0_np)
+ grads1 = constant_op.constant(grads1_np)
+ opt = adamax.AdaMaxOptimizer()
+ update1 = opt.apply_gradients(zip([grads0, grads1], [var0, var1]))
+ update2 = opt.apply_gradients(zip([grads0, grads1], [var0, var1]))
+ variables.global_variables_initializer().run()
+
+ beta1_power = opt._get_beta_accumulators()
+
+ # Fetch params to validate initial values
+ self.assertAllClose([1.0, 2.0], var0.eval())
+ self.assertAllClose([3.0, 4.0], var1.eval())
+
+ # Run 3 steps of intertwined AdaMax1 and AdaMax2.
+ for t in range(1, 4):
+ self.assertAllCloseAccordingToType(0.9**t, beta1_power.eval())
+ if t % 2 == 0:
+ update1.run()
+ else:
+ update2.run()
+
+ var0_np, m0, v0 = adamax_update_numpy(var0_np, grads0_np, t, m0, v0)
+ var1_np, m1, v1 = adamax_update_numpy(var1_np, grads1_np, t, m1, v1)
+
+ # Validate updated params
+ self.assertAllCloseAccordingToType(var0_np, var0.eval())
+ self.assertAllCloseAccordingToType(var1_np, var1.eval())
+
+ def testTwoSessions(self):
+ optimizer = adamax.AdaMaxOptimizer()
+ g = ops.Graph()
+ with g.as_default():
+ with session.Session():
+ var0 = variables.Variable(np.array([1.0, 2.0]), name="v0")
+ grads0 = constant_op.constant(np.array([0.1, 0.1]))
+ optimizer.apply_gradients([(grads0, var0)])
+
+ gg = ops.Graph()
+ with gg.as_default():
+ with session.Session():
+ var0 = variables.Variable(np.array([1.0, 2.0]), name="v0")
+ grads0 = constant_op.constant(np.array([0.1, 0.1]))
+
+ # If the optimizer saves any state not keyed by graph the following line
+ # fails.
+ optimizer.apply_gradients([(grads0, var0)])
+
+ def testSlotsUniqueEager(self):
+ with context.eager_mode():
+ v1 = resource_variable_ops.ResourceVariable(1.)
+ v2 = resource_variable_ops.ResourceVariable(1.)
+ opt = adamax.AdaMaxOptimizer(1.)
+ opt.minimize(lambda: v1 + v2)
+ # There should be two non-slot variables, and two unique slot variables
+ # for v1 and v2 respectively.
+ self.assertEqual(5, len(set(opt.variables())))
+
+
+if __name__ == "__main__":
+ test.main()
diff --git a/tensorflow/contrib/opt/python/training/moving_average_optimizer_test.py b/tensorflow/contrib/opt/python/training/moving_average_optimizer_test.py
index 85e3e8d..ac04ad9 100644
--- a/tensorflow/contrib/opt/python/training/moving_average_optimizer_test.py
+++ b/tensorflow/contrib/opt/python/training/moving_average_optimizer_test.py
@@ -85,7 +85,7 @@
state_ops.assign_add(ema_var1, [4.0, 4.0])
])
- # Test taht saver with missing ema variables will fail.
+ # Test that saver with missing ema variables will fail.
with self.assertRaisesRegexp(ValueError, r'Variable to swap'):
opt.swapping_saver(var_list=[var0])
@@ -123,7 +123,7 @@
self.assertAllCloseAccordingToType([0.9, 1.9], ema_var0.eval())
self.assertAllCloseAccordingToType([4.98, 5.98], var1.eval())
self.assertAllCloseAccordingToType([6.99, 7.99], ema_var1.eval())
- # Restore back to previou state.
+ # Restore back to previous state.
train_saver.restore(sess, save_path)
# If updates are parallel, this is not always true after the 1st step.
diff --git a/tensorflow/contrib/optimizer_v2/checkpointable_utils_test.py b/tensorflow/contrib/optimizer_v2/checkpointable_utils_test.py
index 6ade4cc..8ac9b58 100644
--- a/tensorflow/contrib/optimizer_v2/checkpointable_utils_test.py
+++ b/tensorflow/contrib/optimizer_v2/checkpointable_utils_test.py
@@ -456,7 +456,7 @@
optimizer.apply_gradients(
[(g, v) for g, v in zip(grad, model.vars)])
- @test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
+ @test_util.run_in_graph_and_eager_modes()
def testDeferredSlotRestoration(self):
checkpoint_directory = self.get_temp_dir()
diff --git a/tensorflow/contrib/optimizer_v2/optimizer_v2.py b/tensorflow/contrib/optimizer_v2/optimizer_v2.py
index dcb5bb6..46bfbb7 100644
--- a/tensorflow/contrib/optimizer_v2/optimizer_v2.py
+++ b/tensorflow/contrib/optimizer_v2/optimizer_v2.py
@@ -564,7 +564,7 @@
### State
- Internal methods apre passed a `state` argument with the correct
+ Internal methods are passed a `state` argument with the correct
values to use for the slot and non-slot variables, and the hyper
parameters.
"""
diff --git a/tensorflow/contrib/quantize/python/fold_batch_norms.py b/tensorflow/contrib/quantize/python/fold_batch_norms.py
index 4a8f8a0..aa0ef64 100644
--- a/tensorflow/contrib/quantize/python/fold_batch_norms.py
+++ b/tensorflow/contrib/quantize/python/fold_batch_norms.py
@@ -545,7 +545,7 @@
gamma_tensor = graph.get_tensor_by_name(op.name + ':0')
if not has_scaling:
- gamma_tensor = array_ops.ones(batch_mean_tensor.shape)
+ gamma_tensor = array_ops.ones(moving_mean_tensor.shape)
return _BatchNormMatch(
layer_op=None,
diff --git a/tensorflow/contrib/seq2seq/python/kernel_tests/attention_wrapper_test.py b/tensorflow/contrib/seq2seq/python/kernel_tests/attention_wrapper_test.py
index 0232103..cd162ba 100644
--- a/tensorflow/contrib/seq2seq/python/kernel_tests/attention_wrapper_test.py
+++ b/tensorflow/contrib/seq2seq/python/kernel_tests/attention_wrapper_test.py
@@ -30,6 +30,7 @@
from tensorflow.contrib.seq2seq.python.ops import basic_decoder
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
+from tensorflow.python.layers import core as layers_core
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import math_ops
@@ -110,7 +111,12 @@
alignment_history=False,
expected_final_alignment_history=None,
attention_layer_size=6,
+ attention_layer=None,
name=''):
+ attention_layer_sizes = (
+ [attention_layer_size] if attention_layer_size is not None else None)
+ attention_layers = (
+ [attention_layer] if attention_layer is not None else None)
self._testWithMaybeMultiAttention(
is_multi=False,
create_attention_mechanisms=[create_attention_mechanism],
@@ -119,7 +125,8 @@
attention_mechanism_depths=[attention_mechanism_depth],
alignment_history=alignment_history,
expected_final_alignment_history=expected_final_alignment_history,
- attention_layer_sizes=[attention_layer_size],
+ attention_layer_sizes=attention_layer_sizes,
+ attention_layers=attention_layers,
name=name)
def _testWithMaybeMultiAttention(self,
@@ -131,6 +138,7 @@
alignment_history=False,
expected_final_alignment_history=None,
attention_layer_sizes=None,
+ attention_layers=None,
name=''):
# Allow is_multi to be True with a single mechanism to enable test for
# passing in a single mechanism in a list.
@@ -144,12 +152,18 @@
encoder_output_depth = 10
cell_depth = 9
- if attention_layer_sizes is None:
- attention_depth = encoder_output_depth * len(create_attention_mechanisms)
- else:
+ if attention_layer_sizes is not None:
# Compute sum of attention_layer_sizes. Use encoder_output_depth if None.
attention_depth = sum([attention_layer_size or encoder_output_depth
for attention_layer_size in attention_layer_sizes])
+ elif attention_layers is not None:
+ # Compute sum of attention_layers output depth.
+ attention_depth = sum(
+ attention_layer.compute_output_shape(
+ [batch_size, cell_depth + encoder_output_depth])[-1].value
+ for attention_layer in attention_layers)
+ else:
+ attention_depth = encoder_output_depth * len(create_attention_mechanisms)
decoder_inputs = array_ops.placeholder_with_default(
np.random.randn(batch_size, decoder_max_time,
@@ -171,13 +185,20 @@
with vs.variable_scope(
'root',
initializer=init_ops.random_normal_initializer(stddev=0.01, seed=3)):
+ attention_layer_size = attention_layer_sizes
+ attention_layer = attention_layers
+ if not is_multi:
+ if attention_layer_size is not None:
+ attention_layer_size = attention_layer_size[0]
+ if attention_layer is not None:
+ attention_layer = attention_layer[0]
cell = rnn_cell.LSTMCell(cell_depth)
cell = wrapper.AttentionWrapper(
cell,
attention_mechanisms if is_multi else attention_mechanisms[0],
- attention_layer_size=(attention_layer_sizes if is_multi
- else attention_layer_sizes[0]),
- alignment_history=alignment_history)
+ attention_layer_size=attention_layer_size,
+ alignment_history=alignment_history,
+ attention_layer=attention_layer)
helper = helper_py.TrainingHelper(decoder_inputs,
decoder_sequence_length)
my_decoder = basic_decoder.BasicDecoder(
@@ -260,6 +281,41 @@
expected_final_alignment_history,
final_alignment_history_info)
+ def testBahdanauNormalizedDType(self):
+ for dtype in [np.float16, np.float32, np.float64]:
+ num_units = 128
+ encoder_outputs = array_ops.placeholder(dtype, shape=[64, None, 256])
+ encoder_sequence_length = array_ops.placeholder(dtypes.int32, shape=[64])
+ decoder_inputs = array_ops.placeholder(dtype, shape=[64, None, 128])
+ decoder_sequence_length = array_ops.placeholder(dtypes.int32, shape=[64])
+ batch_size = 64
+ attention_mechanism = wrapper.BahdanauAttention(
+ num_units=num_units,
+ memory=encoder_outputs,
+ memory_sequence_length=encoder_sequence_length,
+ normalize=True,
+ dtype=dtype,
+ )
+ cell = rnn_cell.LSTMCell(num_units)
+ cell = wrapper.AttentionWrapper(cell, attention_mechanism)
+
+ helper = helper_py.TrainingHelper(decoder_inputs,
+ decoder_sequence_length)
+ my_decoder = basic_decoder.BasicDecoder(
+ cell=cell,
+ helper=helper,
+ initial_state=cell.zero_state(
+ dtype=dtype, batch_size=batch_size))
+
+ final_outputs, final_state, _ = decoder.dynamic_decode(my_decoder)
+ self.assertTrue(
+ isinstance(final_outputs, basic_decoder.BasicDecoderOutput))
+ self.assertEqual(final_outputs.rnn_output.dtype, dtype)
+ self.assertTrue(
+ isinstance(final_state, wrapper.AttentionWrapperState))
+ self.assertTrue(
+ isinstance(final_state.cell_state, rnn_cell.LSTMStateTuple))
+
def testBahdanauNotNormalized(self):
create_attention_mechanism = wrapper.BahdanauAttention
@@ -797,6 +853,48 @@
expected_final_alignment_history=expected_final_alignment_history,
name='testMultiAttention')
+ def testMultiAttentionWithLayerInstances(self):
+ create_attention_mechanisms = (
+ wrapper.BahdanauAttention, wrapper.LuongAttention)
+
+ expected_final_output = BasicDecoderOutput(
+ rnn_output=ResultSummary(
+ shape=(5, 3, 7), dtype=dtype('float32'), mean=0.0011709079),
+ sample_id=ResultSummary(
+ shape=(5, 3), dtype=dtype('int32'), mean=3.2000000000000002))
+ expected_final_state = AttentionWrapperState(
+ cell_state=LSTMStateTuple(
+ c=ResultSummary(
+ shape=(5, 9), dtype=dtype('float32'), mean=-0.0038725811),
+ h=ResultSummary(
+ shape=(5, 9), dtype=dtype('float32'), mean=-0.0019329828)),
+ attention=ResultSummary(
+ shape=(5, 7), dtype=dtype('float32'), mean=0.001174294),
+ time=3,
+ alignments=(
+ ResultSummary(shape=(5, 8), dtype=dtype('float32'), mean=0.125),
+ ResultSummary(shape=(5, 8), dtype=dtype('float32'), mean=0.125)),
+ attention_state=(
+ ResultSummary(shape=(5, 8), dtype=dtype('float32'), mean=0.125),
+ ResultSummary(shape=(5, 8), dtype=dtype('float32'), mean=0.125)),
+ alignment_history=())
+
+ expected_final_alignment_history = (
+ ResultSummary(shape=(3, 5, 8), dtype=dtype('float32'), mean=0.125),
+ ResultSummary(shape=(3, 5, 8), dtype=dtype('float32'), mean=0.125))
+
+ self._testWithMaybeMultiAttention(
+ True,
+ create_attention_mechanisms,
+ expected_final_output,
+ expected_final_state,
+ attention_mechanism_depths=[9, 9],
+ attention_layers=[layers_core.Dense(3, use_bias=False),
+ layers_core.Dense(4, use_bias=False)],
+ alignment_history=True,
+ expected_final_alignment_history=expected_final_alignment_history,
+ name='testMultiAttention')
+
def testLuongMonotonicHard(self):
# Run attention mechanism with mode='hard', make sure probabilities are hard
b, t, u, d = 10, 20, 30, 40
diff --git a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
index 8a40a7a..1c9d179 100644
--- a/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
+++ b/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
@@ -472,7 +472,8 @@
# Scalar used in weight normalization
g = variable_scope.get_variable(
"attention_g", dtype=dtype,
- initializer=math.sqrt((1. / num_units)))
+ initializer=init_ops.constant_initializer(math.sqrt((1. / num_units))),
+ shape=())
# Bias added prior to the nonlinearity
b = variable_scope.get_variable(
"attention_b", [num_units], dtype=dtype,
@@ -1082,7 +1083,8 @@
cell_input_fn=None,
output_attention=True,
initial_cell_state=None,
- name=None):
+ name=None,
+ attention_layer=None):
"""Construct the `AttentionWrapper`.
**NOTE** If you are using the `BeamSearchDecoder` with a cell wrapped in
@@ -1125,7 +1127,8 @@
(default), use the context as attention at each time step. Otherwise,
feed the context and cell output into the attention layer to generate
attention at each time step. If attention_mechanism is a list,
- attention_layer_size must be a list of the same length.
+ attention_layer_size must be a list of the same length. If
+ attention_layer is set, this must be None.
alignment_history: Python boolean, whether to store alignment history
from all time steps in the final output state (currently stored as a
time major `TensorArray` on which you must call `stack()`).
@@ -1145,12 +1148,19 @@
does not match the batch size of `initial_cell_state`, proper
behavior is not guaranteed.
name: Name to use when creating ops.
+ attention_layer: A list of `tf.layers.Layer` instances or a
+ single `tf.layers.Layer` instance taking the context and cell output as
+ inputs to generate attention at each time step. If None (default), use
+ the context as attention at each time step. If attention_mechanism is a
+ list, attention_layer must be a list of the same length. If
+        attention_layer_size is set, this must be None.
Raises:
TypeError: `attention_layer_size` is not None and (`attention_mechanism`
is a list but `attention_layer_size` is not; or vice versa).
ValueError: if `attention_layer_size` is not None, `attention_mechanism`
- is a list, and its length does not match that of `attention_layer_size`.
+ is a list, and its length does not match that of `attention_layer_size`;
+ if `attention_layer_size` and `attention_layer` are set simultaneously.
"""
super(AttentionWrapper, self).__init__(name=name)
rnn_cell_impl.assert_like_rnncell("cell", cell)
@@ -1181,6 +1191,10 @@
"cell_input_fn must be callable, saw type: %s"
% type(cell_input_fn).__name__)
+ if attention_layer_size is not None and attention_layer is not None:
+ raise ValueError("Only one of attention_layer_size and attention_layer "
+ "should be set")
+
if attention_layer_size is not None:
attention_layer_sizes = tuple(
attention_layer_size
@@ -1199,6 +1213,22 @@
dtype=attention_mechanisms[i].dtype)
for i, attention_layer_size in enumerate(attention_layer_sizes))
self._attention_layer_size = sum(attention_layer_sizes)
+ elif attention_layer is not None:
+ self._attention_layers = tuple(
+ attention_layer
+ if isinstance(attention_layer, (list, tuple))
+ else (attention_layer,))
+ if len(self._attention_layers) != len(attention_mechanisms):
+ raise ValueError(
+ "If provided, attention_layer must contain exactly one "
+ "layer per attention_mechanism, saw: %d vs %d"
+ % (len(self._attention_layers), len(attention_mechanisms)))
+ self._attention_layer_size = sum(
+ layer.compute_output_shape(
+ [None,
+ cell.output_size + mechanism.values.shape[-1].value])[-1].value
+ for layer, mechanism in zip(
+ self._attention_layers, attention_mechanisms))
else:
self._attention_layers = None
self._attention_layer_size = sum(
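
A hedged sketch of the new `attention_layer` argument, mirroring the tests above: a `tf.layers.Layer` instance replaces `attention_layer_size`; the shapes and unit counts are illustrative:

```python
import tensorflow as tf
from tensorflow.contrib.seq2seq import AttentionWrapper, BahdanauAttention

memory = tf.placeholder(tf.float32, [None, None, 256])  # encoder outputs
mechanism = BahdanauAttention(num_units=128, memory=memory)
cell = AttentionWrapper(
    tf.nn.rnn_cell.LSTMCell(128),
    mechanism,
    attention_layer=tf.layers.Dense(64, use_bias=False))  # new argument
```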
diff --git a/tensorflow/contrib/signal/python/kernel_tests/mel_ops_test.py b/tensorflow/contrib/signal/python/kernel_tests/mel_ops_test.py
index 35c4b5b..345eb6c 100644
--- a/tensorflow/contrib/signal/python/kernel_tests/mel_ops_test.py
+++ b/tensorflow/contrib/signal/python/kernel_tests/mel_ops_test.py
@@ -24,6 +24,7 @@
from tensorflow.contrib.signal.python.ops import mel_ops
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
+from tensorflow.python.ops import array_ops
from tensorflow.python.platform import test
# mel spectrum constants and functions.
@@ -173,6 +174,18 @@
rewritten_graph = test_util.grappler_optimize(g, [mel_matrix])
self.assertEqual(1, len(rewritten_graph.node))
+ def test_num_spectrogram_bins_dynamic(self):
+ with self.test_session(use_gpu=True):
+ num_spectrogram_bins = array_ops.placeholder(shape=(),
+ dtype=dtypes.int32)
+ mel_matrix_np = spectrogram_to_mel_matrix(
+ 20, 129, 8000.0, 125.0, 3800.0)
+ mel_matrix = mel_ops.linear_to_mel_weight_matrix(
+ 20, num_spectrogram_bins, 8000.0, 125.0, 3800.0)
+ self.assertAllClose(
+ mel_matrix_np,
+ mel_matrix.eval(feed_dict={num_spectrogram_bins: 129}), atol=3e-6)
+
if __name__ == "__main__":
test.main()
diff --git a/tensorflow/contrib/signal/python/ops/mel_ops.py b/tensorflow/contrib/signal/python/ops/mel_ops.py
index d1a3654..1e84006 100644
--- a/tensorflow/contrib/signal/python/ops/mel_ops.py
+++ b/tensorflow/contrib/signal/python/ops/mel_ops.py
@@ -64,14 +64,11 @@
1.0 + (frequencies_hertz / _MEL_BREAK_FREQUENCY_HERTZ))
-def _validate_arguments(num_mel_bins, num_spectrogram_bins, sample_rate,
+def _validate_arguments(num_mel_bins, sample_rate,
lower_edge_hertz, upper_edge_hertz, dtype):
"""Checks the inputs to linear_to_mel_weight_matrix."""
if num_mel_bins <= 0:
raise ValueError('num_mel_bins must be positive. Got: %s' % num_mel_bins)
- if num_spectrogram_bins <= 0:
- raise ValueError('num_spectrogram_bins must be positive. Got: %s' %
- num_spectrogram_bins)
if sample_rate <= 0.0:
raise ValueError('sample_rate must be positive. Got: %s' % sample_rate)
if lower_edge_hertz < 0.0:
@@ -122,9 +119,9 @@
Args:
num_mel_bins: Python int. How many bands in the resulting mel spectrum.
- num_spectrogram_bins: Python int. How many bins there are in the source
- spectrogram data, which is understood to be `fft_size // 2 + 1`, i.e. the
- spectrogram only contains the nonredundant FFT bins.
+ num_spectrogram_bins: An integer `Tensor`. How many bins there are in the
+ source spectrogram data, which is understood to be `fft_size // 2 + 1`,
+ i.e. the spectrogram only contains the nonredundant FFT bins.
sample_rate: Python float. Samples per second of the input signal used to
create the spectrogram. We need this to figure out the actual frequencies
for each spectrogram bin, which dictates how they are mapped into the mel
@@ -148,7 +145,10 @@
[mel]: https://en.wikipedia.org/wiki/Mel_scale
"""
with ops.name_scope(name, 'linear_to_mel_weight_matrix') as name:
- _validate_arguments(num_mel_bins, num_spectrogram_bins, sample_rate,
+    # Note: Since num_spectrogram_bins is passed to `math_ops.linspace` and
+    # validation is already done in linspace (both in the shape function and
+    # in the kernel), there is no need to validate num_spectrogram_bins here.
+ _validate_arguments(num_mel_bins, sample_rate,
lower_edge_hertz, upper_edge_hertz, dtype)
# To preserve accuracy, we compute the matrix at float64 precision and then
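
A short sketch of the relaxed argument, mirroring the new test: `num_spectrogram_bins` may now be a scalar int32 `Tensor` rather than a Python int; the parameter values here are illustrative:

```python
import tensorflow as tf

num_bins = tf.placeholder(tf.int32, shape=())
mel_matrix = tf.contrib.signal.linear_to_mel_weight_matrix(
    num_mel_bins=20, num_spectrogram_bins=num_bins, sample_rate=8000.0,
    lower_edge_hertz=125.0, upper_edge_hertz=3800.0)

with tf.Session() as sess:
  print(sess.run(mel_matrix, feed_dict={num_bins: 129}).shape)  # (129, 20)
```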
diff --git a/tensorflow/contrib/slim/README.md b/tensorflow/contrib/slim/README.md
index 40f484f..746b955 100644
--- a/tensorflow/contrib/slim/README.md
+++ b/tensorflow/contrib/slim/README.md
@@ -290,9 +290,9 @@
In addition to the types of scope mechanisms in TensorFlow
([name_scope](https://www.tensorflow.org/api_docs/python/tf/name_scope),
-[variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope),
+[variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope)),
TF-Slim adds a new scoping mechanism called
-[arg_scope](https://www.tensorflow.org/api_docs/python/tf/contrib/framework/arg_scope),
+[arg_scope](https://www.tensorflow.org/api_docs/python/tf/contrib/framework/arg_scope).
This new scope allows a user to specify one or more operations and a set of
arguments which will be passed to each of the operations defined in the
`arg_scope`. This functionality is best illustrated by example. Consider the
@@ -761,8 +761,8 @@
3. Finalization: (optionally) perform any final operation to compute metric
values. For example, computing means, mins, maxes, etc.
-For example, to compute `mean_absolute_error`, two variables, a `count` and
-`total` variable are *initialized* to zero. During *aggregation*, we observed
+For example, to compute `mean_absolute_error`, two variables (`count` and
+`total`) are *initialized* to zero. During *aggregation*, we observe
some set of predictions and labels, compute their absolute differences and add
the total to `total`. Each time we observe another value,
`count` is incremented. Finally, during *finalization*, `total` is divided
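
A minimal NumPy sketch of the three phases described above; the accumulator names are illustrative, not slim's internal API:

```python
import numpy as np

# Initialization: both accumulators start at zero.
total, count = 0.0, 0

# Aggregation: fold in one batch of predictions and labels.
predictions = np.array([1.1, 1.9, 3.2])
labels = np.array([1.0, 2.0, 3.0])
total += np.sum(np.abs(predictions - labels))
count += predictions.size

# Finalization: the running total divided by the running count.
mae = total / count  # mean absolute error so far
```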
diff --git a/tensorflow/contrib/slim/python/slim/learning.py b/tensorflow/contrib/slim/python/slim/learning.py
index 6a200de..8a2c747 100644
--- a/tensorflow/contrib/slim/python/slim/learning.py
+++ b/tensorflow/contrib/slim/python/slim/learning.py
@@ -389,7 +389,7 @@
total_loss: A `Tensor` representing the total loss.
optimizer: A tf.Optimizer to use for computing the gradients.
global_step: A `Tensor` representing the global step variable. If left as
- `_USE_GLOBAL_STEP`, then slim.variables.global_step() is used.
+ `_USE_GLOBAL_STEP`, then tf.contrib.framework.global_step() is used.
update_ops: An optional list of updates to execute. If `update_ops` is
`None`, then the update ops are set to the contents of the
`tf.GraphKeys.UPDATE_OPS` collection. If `update_ops` is not `None`, but
@@ -578,7 +578,8 @@
is_chief: Specifies whether or not the training is being run by the primary
replica during replica training.
global_step: The `Tensor` representing the global step. If left as `None`,
- then slim.variables.get_or_create_global_step() is used.
+      then training_util.get_or_create_global_step() (that is,
+      tf.contrib.framework.global_step()) is used.
number_of_steps: The max number of gradient steps to take during training,
as measured by 'global_step': training will stop if global_step is
greater than 'number_of_steps'. If the value is left as None, training
diff --git a/tensorflow/contrib/slim/python/slim/nets/resnet_v1.py b/tensorflow/contrib/slim/python/slim/nets/resnet_v1.py
index 235a595..11c4214 100644
--- a/tensorflow/contrib/slim/python/slim/nets/resnet_v1.py
+++ b/tensorflow/contrib/slim/python/slim/nets/resnet_v1.py
@@ -207,7 +207,7 @@
net = resnet_utils.stack_blocks_dense(net, blocks, output_stride)
if global_pool:
# Global average pooling.
- net = math_ops.reduce_mean(net, [1, 2], name='pool5', keep_dims=True)
+ net = math_ops.reduce_mean(net, [1, 2], name='pool5', keepdims=True)
if num_classes is not None:
net = layers.conv2d(
net,
diff --git a/tensorflow/contrib/slim/python/slim/nets/resnet_v2.py b/tensorflow/contrib/slim/python/slim/nets/resnet_v2.py
index 61665c9..19e0538 100644
--- a/tensorflow/contrib/slim/python/slim/nets/resnet_v2.py
+++ b/tensorflow/contrib/slim/python/slim/nets/resnet_v2.py
@@ -221,7 +221,7 @@
net, activation_fn=nn_ops.relu, scope='postnorm')
if global_pool:
# Global average pooling.
- net = math_ops.reduce_mean(net, [1, 2], name='pool5', keep_dims=True)
+ net = math_ops.reduce_mean(net, [1, 2], name='pool5', keepdims=True)
if num_classes is not None:
net = layers_lib.conv2d(
net,
diff --git a/tensorflow/contrib/tensor_forest/client/random_forest.py b/tensorflow/contrib/tensor_forest/client/random_forest.py
index 4abcc20..35e8c92 100644
--- a/tensorflow/contrib/tensor_forest/client/random_forest.py
+++ b/tensorflow/contrib/tensor_forest/client/random_forest.py
@@ -399,7 +399,7 @@
training ops: tf.group them.
loss: average them.
predictions: concat probabilities such that predictions[*][0-C1] are the
- probablities for output 1 (where C1 is the number of classes in output 1),
+ probabilities for output 1 (where C1 is the number of classes in output 1),
predictions[*][C1-(C1+C2)] are the probabilities for output 2 (where C2
is the number of classes in output 2), etc. Also stack predictions such
that predictions[i][j] is the class prediction for example i and output j.
diff --git a/tensorflow/contrib/tensor_forest/hybrid/core/ops/hard_routing_function_op.cc b/tensorflow/contrib/tensor_forest/hybrid/core/ops/hard_routing_function_op.cc
index cf0db78..06bfe87 100644
--- a/tensorflow/contrib/tensor_forest/hybrid/core/ops/hard_routing_function_op.cc
+++ b/tensorflow/contrib/tensor_forest/hybrid/core/ops/hard_routing_function_op.cc
@@ -80,7 +80,7 @@
regression model that translates from node features to
probabilities.
- path_probility: `path_probability[i]` gives the probability of reaching each
+ path_probability: `path_probability[i]` gives the probability of reaching each
node in `path[i]`.
path: `path[i][j]` gives the jth node in the path taken by the ith data
instance.
diff --git a/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_function_op.cc b/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_function_op.cc
index c9df09b..1a05575 100644
--- a/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_function_op.cc
+++ b/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_function_op.cc
@@ -85,7 +85,7 @@
regression model that translates from node features to
probabilities.
- path_probility: `path_probability[i]` gives the probability of reaching each
+ path_probability: `path_probability[i]` gives the probability of reaching each
node in `path[i]`.
path: `path[i][j]` gives the jth node in the path taken by the ith data
instance.
diff --git a/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_gradient_op.cc b/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_gradient_op.cc
index b0d8b83..7d092bb 100644
--- a/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_gradient_op.cc
+++ b/tensorflow/contrib/tensor_forest/hybrid/core/ops/stochastic_hard_routing_gradient_op.cc
@@ -81,7 +81,7 @@
tree_biases: `tree_biases[i]` gives the bias of the logistic
regression model that translates from node features to
probabilities.
- path_probility: `path_probability[i]` gives the probability of reaching each
+ path_probability: `path_probability[i]` gives the probability of reaching each
node in `path[i]`.
path: `path[i][j]` gives the jth node in the path taken by the ith data
instance.
diff --git a/tensorflow/contrib/tensor_forest/kernels/tree_utils.cc b/tensorflow/contrib/tensor_forest/kernels/tree_utils.cc
index 44997ec..cefcc96 100644
--- a/tensorflow/contrib/tensor_forest/kernels/tree_utils.cc
+++ b/tensorflow/contrib/tensor_forest/kernels/tree_utils.cc
@@ -421,7 +421,7 @@
const std::vector<float>& mu2) {
// Math time!!
// We are trying to minimize d = |mu1 - x|^2 + |mu2 - y|^2 over the surface.
- // Using Langrange multipliers, we get
+ // Using Lagrange multipliers, we get
// partial d / partial x = -2 mu1 + 2 x = lambda_1 1 + 2 lambda_3 x
// partial d / partial y = -2 mu2 + 2 y = lambda_2 1 - 2 lambda_3 y
// or
@@ -485,7 +485,7 @@
}
double sdiscrim = sqrt(discrim);
- // TODO(thomaswc): Analyze whetever one of these is always closer.
+  // TODO(thomaswc): Analyze whether one of these is always closer.
double v1 = (-b + sdiscrim) / (2 * a);
double v2 = (-b - sdiscrim) / (2 * a);
double dist1 = getDistanceFromLambda3(v1, mu1, mu2);
diff --git a/tensorflow/contrib/tensor_forest/kernels/tree_utils.h b/tensorflow/contrib/tensor_forest/kernels/tree_utils.h
index edbac67..03aab1b 100644
--- a/tensorflow/contrib/tensor_forest/kernels/tree_utils.h
+++ b/tensorflow/contrib/tensor_forest/kernels/tree_utils.h
@@ -123,7 +123,7 @@
const Tensor& split_squares,
int32 accumulator);
-// Performs booststrap_samples bootstrap samples of the best split's class
+// Performs bootstrap_samples bootstrap samples of the best split's class
// counts and the second best split's class counts, and returns true if at
// least dominate_fraction of the time, the former has a better (lower)
// Gini impurity. Does not take over ownership of *rand.
diff --git a/tensorflow/contrib/tensor_forest/kernels/v4/decision-tree-resource.h b/tensorflow/contrib/tensor_forest/kernels/v4/decision-tree-resource.h
index 328af28..d3edb43 100644
--- a/tensorflow/contrib/tensor_forest/kernels/v4/decision-tree-resource.h
+++ b/tensorflow/contrib/tensor_forest/kernels/v4/decision-tree-resource.h
@@ -60,7 +60,7 @@
mutex* get_mutex() { return &mu_; }
// Return the TreeNode for the leaf that the example ends up at according
- // to decsion_tree_. Also fill in that leaf's depth if it isn't nullptr.
+ // to decision_tree_. Also fill in that leaf's depth if it isn't nullptr.
int32 TraverseTree(const std::unique_ptr<TensorDataSet>& input_data,
int example, int32* depth, TreePath* path) const;
diff --git a/tensorflow/contrib/tensor_forest/kernels/v4/decision_node_evaluator.h b/tensorflow/contrib/tensor_forest/kernels/v4/decision_node_evaluator.h
index bf2b2aa..3db351c 100644
--- a/tensorflow/contrib/tensor_forest/kernels/v4/decision_node_evaluator.h
+++ b/tensorflow/contrib/tensor_forest/kernels/v4/decision_node_evaluator.h
@@ -60,7 +60,7 @@
bool include_equals_;
};
-// Evalutor for splits with multiple weighted features.
+// Evaluator for splits with multiple weighted features.
class ObliqueInequalityDecisionNodeEvaluator
: public BinaryDecisionNodeEvaluator {
public:
diff --git a/tensorflow/contrib/tensor_forest/ops/model_ops.cc b/tensorflow/contrib/tensor_forest/ops/model_ops.cc
index 3099ccc..98124d5 100644
--- a/tensorflow/contrib/tensor_forest/ops/model_ops.cc
+++ b/tensorflow/contrib/tensor_forest/ops/model_ops.cc
@@ -165,7 +165,7 @@
leaf_ids: `leaf_ids[i]` is the leaf id for input i.
input_labels: The training batch's labels as a 1 or 2-d tensor.
'input_labels[i][j]' gives the j-th label/target for the i-th input.
-input_weights: The training batch's eample weights as a 1-d tensor.
+input_weights: The training batch's example weights as a 1-d tensor.
'input_weights[i]' gives the weight for the i-th input.
)doc");
diff --git a/tensorflow/contrib/tensor_forest/ops/stats_ops.cc b/tensorflow/contrib/tensor_forest/ops/stats_ops.cc
index e8b5c5d..5be581a 100644
--- a/tensorflow/contrib/tensor_forest/ops/stats_ops.cc
+++ b/tensorflow/contrib/tensor_forest/ops/stats_ops.cc
@@ -75,7 +75,7 @@
.Attr("params: string")
.Input("tree_handle: resource")
.Input("stats_handle: resource")
- .Input("finshed_nodes: int32")
+ .Input("finished_nodes: int32")
.SetShapeFn(tensorflow::shape_inference::NoOutputs)
.Doc(R"doc(
Grows the tree for finished nodes and allocates waiting nodes.
@@ -83,7 +83,7 @@
params: A serialized TensorForestParams proto.
tree_handle: The handle to the tree.
stats_handle: The handle to the stats.
-finshed_nodes: A 1-d Tensor of finished node ids from ProcessInput.
+finished_nodes: A 1-d Tensor of finished node ids from ProcessInput.
)doc");
REGISTER_OP("ProcessInputV4")
@@ -119,7 +119,7 @@
sparse_input_shape: The shape tensor from the SparseTensor input.
input_labels: The training batch's labels as a 1 or 2-d tensor.
'input_labels[i][j]' gives the j-th label/target for the i-th input.
-input_weights: The training batch's eample weights as a 1-d tensor.
+input_weights: The training batch's example weights as a 1-d tensor.
'input_weights[i]' gives the weight for the i-th input.
finished_nodes: A 1-d tensor of node ids that have finished and are ready to
grow.
diff --git a/tensorflow/contrib/tensor_forest/python/tensor_forest.py b/tensorflow/contrib/tensor_forest/python/tensor_forest.py
index 3650b5d..b9bcbb1 100644
--- a/tensorflow/contrib/tensor_forest/python/tensor_forest.py
+++ b/tensorflow/contrib/tensor_forest/python/tensor_forest.py
@@ -212,7 +212,7 @@
self.regression = getattr(self, 'regression', False)
# Num_outputs is the actual number of outputs (a single prediction for
- # classification, a N-dimenensional point for regression).
+ # classification, a N-dimensional point for regression).
self.num_outputs = self.num_classes if self.regression else 1
# Add an extra column to classes for storing counts, which is needed for
diff --git a/tensorflow/contrib/tensorrt/BUILD b/tensorflow/contrib/tensorrt/BUILD
index 2f31676..f80b4f1 100644
--- a/tensorflow/contrib/tensorrt/BUILD
+++ b/tensorflow/contrib/tensorrt/BUILD
@@ -11,6 +11,7 @@
load(
"//tensorflow:tensorflow.bzl",
+ "py_test",
"tf_cc_test",
"tf_copts",
"tf_cuda_library",
@@ -52,7 +53,6 @@
"ops/trt_engine_op.cc",
],
deps = [
- ":trt_engine_op_kernel",
":trt_shape_function",
"//tensorflow/core:lib_proto_parsing",
] + if_tensorrt([
@@ -140,6 +140,7 @@
]),
srcs_version = "PY2AND3",
deps = [
+ "//tensorflow/contrib/util:util_py",
"//tensorflow/python:framework_for_generated_wrappers",
"//tensorflow/python:resources",
],
@@ -174,6 +175,7 @@
srcs_version = "PY2AND3",
deps = [
":wrap_conversion",
+ "//tensorflow/python:tf_optimizer",
],
)
@@ -183,6 +185,7 @@
copts = tf_copts(),
deps = [
":trt_conversion",
+ ":trt_engine_op_kernel",
"//tensorflow/core:framework_lite",
"//util/python:python_headers",
],
@@ -272,3 +275,19 @@
"//tensorflow/core:test_main",
],
)
+
+py_test(
+ name = "tf_trt_integration_test",
+ srcs = ["test/tf_trt_integration_test.py"],
+ main = "test/tf_trt_integration_test.py",
+ srcs_version = "PY2AND3",
+ tags = [
+ "manual",
+ "notap",
+ ],
+ deps = [
+ ":init_py",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:framework_test_lib",
+ ],
+)
diff --git a/tensorflow/contrib/tensorrt/README.md b/tensorflow/contrib/tensorrt/README.md
index 6eafc17..687dee0 100644
--- a/tensorflow/contrib/tensorrt/README.md
+++ b/tensorflow/contrib/tensorrt/README.md
@@ -1,59 +1,29 @@
# Using TensorRT in TensorFlow
-
-This module provides necessary bindings and introduces TRT_engine_op
-operator that wraps a subgraph in TensorRT. This is still a work in progress
-but should be useable with most common graphs.
+This module provides the necessary bindings and introduces the TRT_engine_op
+operator that wraps a subgraph in TensorRT. This is still a work in progress
+but should be usable with most common graphs.
## Compilation
-
-In order to compile the module, you need to have a local TensorRT
-installation ( libnvinfer.so and respective include files ). During the
-configuration step, TensorRT should be enabled and installation path
-should be set. If installed through package managers (deb,rpm),
-configure script should find the necessary components from the system
-automatically. If installed from tar packages, user has to set path to
-location where the library is installed during configuration.
+In order to compile the module, you need a local TensorRT installation
+(libnvinfer.so and the respective include files). During the configuration
+step, TensorRT should be enabled and the installation path should be set. If
+installed through package managers (deb, rpm), the configure script should find
+the necessary components from the system automatically. If installed from tar
+packages, the user has to set the path to the installation location during
+configuration.
```shell
bazel build --config=cuda --config=opt //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/
```
-After the installation of tensorflow package, TensorRT transformation
-will be available. An example use can be found in test/test_tftrt.py script
+After installing the tensorflow package, the TensorRT transformation will be
+available. An example use can be found in the test/test_tftrt.py script.
## Installing TensorRT 3.0.4
-In order to make use of TensorRT integration, you will need a local installation of TensorRT 3.0.4 from the [NVIDIA Developer website](https://developer.nvidia.com/tensorrt). Due to compiler compatibility, you will need to download and install the TensorRT 3.0.4 tarball for _Ubuntu 14.04_, i.e., **_TensorRT-3.0.4.Ubuntu-14.04.5.x86_64.cuda-9.0.cudnn7.0-tar.gz_**, even if you are using Ubuntu 16.04 or later.
-
-### Preparing TensorRT installation
-
-Once you have downloaded TensorRT-3.0.4.Ubuntu-14.04.5.x86_64.cuda-9.0.cudnn7.0-tar.gz, you will need to unpack it to an installation directory, which will be referred to as <install_dir>. Please replace <install_dir> with the full path of actual installation directory you choose in commands below.
-
-```shell
-cd <install_dir> && tar -zxf /path/to/TensorRT-3.0.4.Ubuntu-14.04.5.x86_64.cuda-9.0.cudnn7.0-tar.gz
-```
-
-After unpacking the binaries, you have several options to use them:
-
-#### To run TensorFlow as a user without superuser privileges
-
-For a regular user without any sudo rights, you should add TensorRT to your `$LD_LIBRARY_PATH`:
-
- ```shell
- export LD_LIBRARY_PATH=<install_dir>/TensorRT-3.0.4/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
- ```
-
-Then you are ready to use TensorFlow-TensorRT integration. `$LD_LIBRARY_PATH` must contain the path to TensorRT installation for TensorFlow-TensorRT integration to work. If you are using a VirtualEnv-like setup, you can add the command above to your `bin/activate` script or to your `.bashrc` script.
-
-#### To run TensorFlow as a superuser
-
- When running as a superuser, such as in a container or via sudo, the `$LD_LIBRARY_PATH` approach above may not work. The following is preferred when the user has superuser privileges:
-
- ```shell
- echo "<install_dir>/TensorRT-3.0.4/lib" | sudo tee /etc/ld.so.conf.d/tensorrt304.conf && sudo ldconfig
- ```
-
- Please ensure that any existing deb package installation of TensorRT is removed before following these instructions to avoid package conflicts.
\ No newline at end of file
+To make use of the TensorRT integration, you will need a local installation of
+TensorRT 3.0.4 from the [NVIDIA Developer website](https://developer.nvidia.com/tensorrt).
+Installation instructions for compatibility with TensorFlow are provided on the
+[TensorFlow Installation page](https://www.tensorflow.org/install/install_linux#nvidia_requirements_to_run_tensorflow_with_gpu_support).
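As a concrete illustration of the conversion flow described above, here is a
minimal sketch (assuming a TensorRT-enabled build; the call signature mirrors
the one exercised in test/tf_trt_integration_test.py added later in this
change):

```python
import tensorflow as tf
from tensorflow.contrib import tensorrt as trt

# Build a trivial frozen graph with a single output node named "output".
g = tf.Graph()
with g.as_default():
  x = tf.placeholder(tf.float32, shape=(None, 8), name="input")
  tf.identity(tf.nn.relu(x), name="output")

# Rewrite the graph so that supported subgraphs run inside TRT engines.
trt_graph = trt.create_inference_graph(
    input_graph_def=g.as_graph_def(),   # frozen GraphDef to convert
    outputs=["output"],                 # names of the output nodes
    max_batch_size=128,                 # largest batch the engine must serve
    max_workspace_size_bytes=1 << 25,   # scratch memory handed to TensorRT
    precision_mode="FP32",              # "FP32", "FP16" or "INT8"
    minimum_segment_size=2)             # minimum nodes per TensorRT engine
```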
diff --git a/tensorflow/contrib/tensorrt/resources/trt_resource_manager.cc b/tensorflow/contrib/tensorrt/resources/trt_resource_manager.cc
index e663eed..9c3698e 100644
--- a/tensorflow/contrib/tensorrt/resources/trt_resource_manager.cc
+++ b/tensorflow/contrib/tensorrt/resources/trt_resource_manager.cc
@@ -19,6 +19,12 @@
namespace tensorflow {
namespace tensorrt {
+std::shared_ptr<TRTResourceManager>
+tensorflow::tensorrt::TRTResourceManager::instance() {
+ static std::shared_ptr<TRTResourceManager> instance_(new TRTResourceManager);
+ return instance_;
+}
+
std::shared_ptr<tensorflow::ResourceMgr>
tensorflow::tensorrt::TRTResourceManager::getManager(const string& op_name) {
// mutex is held for lookup only. Most instantiations where mutex will be held
diff --git a/tensorflow/contrib/tensorrt/resources/trt_resource_manager.h b/tensorflow/contrib/tensorrt/resources/trt_resource_manager.h
index 5f8ad49..bc15b51 100644
--- a/tensorflow/contrib/tensorrt/resources/trt_resource_manager.h
+++ b/tensorflow/contrib/tensorrt/resources/trt_resource_manager.h
@@ -29,11 +29,7 @@
TRTResourceManager() = default;
public:
- static std::shared_ptr<TRTResourceManager> instance() {
- static std::shared_ptr<TRTResourceManager> instance_(
- new TRTResourceManager);
- return instance_;
- }
+ static std::shared_ptr<TRTResourceManager> instance();
// returns a manager for given op, if it doesn't exists it creates one
std::shared_ptr<tensorflow::ResourceMgr> getManager(const string& op_name);
diff --git a/tensorflow/contrib/tensorrt/test/tf_trt_integration_test.py b/tensorflow/contrib/tensorrt/test/tf_trt_integration_test.py
new file mode 100644
index 0000000..7a47328
--- /dev/null
+++ b/tensorflow/contrib/tensorrt/test/tf_trt_integration_test.py
@@ -0,0 +1,156 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Script to test TF-TensorRT integration."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import warnings
+import numpy as np
+
+from tensorflow.contrib import tensorrt as trt
+from tensorflow.core.protobuf import config_pb2 as cpb2
+from tensorflow.python.framework import constant_op as cop
+from tensorflow.python.framework import dtypes as dtypes
+from tensorflow.python.framework import importer as importer
+from tensorflow.python.framework import ops as ops
+from tensorflow.python.framework import test_util
+from tensorflow.python.ops import array_ops as aops
+from tensorflow.python.ops import nn as nn
+from tensorflow.python.ops import nn_ops as nn_ops
+from tensorflow.python.platform import googletest
+
+
+@test_util.with_c_api
+class IntegrationTest(test_util.TensorFlowTestCase):
+  """Class to test TensorFlow-TensorRT integration."""
+
+ def setUp(self):
+ """Setup method."""
+ super(IntegrationTest, self).setUp()
+ warnings.simplefilter("always")
+ inp_dims = (100, 24, 24, 2)
+ self._input = np.random.random_sample(inp_dims)
+ self._original_graph = self.get_simple_graph_def()
+ self._gpu_options = cpb2.GPUOptions(
+ per_process_gpu_memory_fraction=0.50)
+ self._config = cpb2.ConfigProto(gpu_options=self._gpu_options)
+ self._reference = self.run_graph(self._original_graph, self._input)
+
+ def get_simple_graph_def(self):
+ """Create a simple graph and return its graph_def."""
+ g = ops.Graph()
+ with g.as_default():
+ a = aops.placeholder(
+ dtype=dtypes.float32, shape=(None, 24, 24, 2), name="input")
+ e = cop.constant(
+ [[[[1., 0.5, 4., 6., 0.5, 1.], [1., 0.5, 1., 1., 0.5, 1.]]]],
+ name="weights",
+ dtype=dtypes.float32)
+ conv = nn.conv2d(
+ input=a,
+ filter=e,
+ strides=[1, 2, 2, 1],
+ padding="SAME",
+ name="conv")
+ b = cop.constant(
+ [4., 1.5, 2., 3., 5., 7.], name="bias", dtype=dtypes.float32)
+ t = nn.bias_add(conv, b, name="biasAdd")
+ relu = nn.relu(t, "relu")
+ idty = aops.identity(relu, "ID")
+ v = nn_ops.max_pool(
+ idty, [1, 2, 2, 1], [1, 2, 2, 1], "VALID", name="max_pool")
+ aops.squeeze(v, name="output")
+ return g.as_graph_def()
+
+ def run_graph(self, gdef, dumm_inp):
+ """Run given graphdef once."""
+ ops.reset_default_graph()
+ g = ops.Graph()
+ with g.as_default():
+ inp, out = importer.import_graph_def(
+ graph_def=gdef, return_elements=["input", "output"])
+ inp = inp.outputs[0]
+ out = out.outputs[0]
+ with self.test_session(
+ graph=g, config=self._config, use_gpu=True,
+ force_gpu=True) as sess:
+ val = sess.run(out, {inp: dumm_inp})
+ return val
+
+ # Use real data that is representative of the inference dataset
+ # for calibration. For this test script it is random data.
+ def run_calibration(self, gdef, dumm_inp):
+ """Run given calibration graph multiple times."""
+ ops.reset_default_graph()
+ g = ops.Graph()
+ with g.as_default():
+ inp, out = importer.import_graph_def(
+ graph_def=gdef, return_elements=["input", "output"])
+ inp = inp.outputs[0]
+ out = out.outputs[0]
+    # Run over real calibration data here; we are mimicking a calibration
+    # set of 30 different batches. Use as much calibration data as you want.
+ with self.test_session(
+ graph=g, config=self._config, use_gpu=True,
+ force_gpu=True) as sess:
+ for _ in range(30):
+ val = sess.run(out, {inp: dumm_inp})
+ return val
+
+ def get_trt_graph(self, mode):
+ """Return trt converted graph."""
+ if mode in ["FP32", "FP16", "INT8"]:
+ return trt.create_inference_graph(
+ input_graph_def=self._original_graph,
+ outputs=["output"],
+ max_batch_size=self._input.shape[0],
+ max_workspace_size_bytes=1 << 25,
+          precision_mode=mode,  # TRT engine precision: "FP32", "FP16" or "INT8"
+          minimum_segment_size=2  # minimum number of nodes in an engine
+ )
+ return None
+
+ def testFP32(self):
+ """Test FP32 conversion. Results should be identical to native case."""
+ trt_graph = self.get_trt_graph("FP32")
+ result = self.run_graph(trt_graph, self._input)
+ self.assertAllEqual(self._reference, result)
+ result1 = self.run_graph(trt_graph, self._input)
+ self.assertAllEqual(result1, result)
+
+ def testFP16(self):
+ """Test FP16 conversion. Results may be different from native case."""
+ trt_graph = self.get_trt_graph("FP16")
+ result = self.run_graph(trt_graph, self._input)
+ self.assertAllClose(self._reference, result, rtol=1.e-03)
+ result1 = self.run_graph(trt_graph, self._input)
+ self.assertAllEqual(result1, result)
+
+ def testINT8(self):
+ """Test INT8 conversion. Results may be different from native case."""
+ calib_graph = self.get_trt_graph("INT8")
+ result = self.run_calibration(calib_graph, self._input)
+ self.assertAllEqual(self._reference, result)
+ int8_graph = trt.calib_graph_to_infer_graph(calib_graph)
+ result = self.run_graph(int8_graph, self._input)
+ self.assertAllClose(self._reference, result, rtol=1.e-03)
+ result1 = self.run_graph(int8_graph, self._input)
+ self.assertAllEqual(result1, result)
+
+
+if __name__ == "__main__":
+ googletest.main()
diff --git a/tensorflow/contrib/timeseries/python/timeseries/math_utils.py b/tensorflow/contrib/timeseries/python/timeseries/math_utils.py
index 26793c8..9b593fe 100644
--- a/tensorflow/contrib/timeseries/python/timeseries/math_utils.py
+++ b/tensorflow/contrib/timeseries/python/timeseries/math_utils.py
@@ -60,7 +60,7 @@
# TODO(allenl): Smarter scaling here so that correlations are preserved when
# fiddling with diagonal elements.
diagonal = array_ops.matrix_diag_part(covariance_matrix)
- maximum = math_ops.reduce_max(diagonal, axis=-1, keep_dims=True)
+ maximum = math_ops.reduce_max(diagonal, axis=-1, keepdims=True)
new_diagonal = gen_math_ops.maximum(
diagonal, maximum / maximum_variance_ratio)
return array_ops.matrix_set_diag(
diff --git a/tensorflow/contrib/training/python/training/resample.py b/tensorflow/contrib/training/python/training/resample.py
index b16159b..7b8332b 100644
--- a/tensorflow/contrib/training/python/training/resample.py
+++ b/tensorflow/contrib/training/python/training/resample.py
@@ -77,7 +77,7 @@
Args:
inputs: A list of tensors, each of which has a shape of `[batch_size, ...]`
- rates: A tensor of shape `[batch_size]` contiaining the resampling rates
+ rates: A tensor of shape `[batch_size]` containing the resampling rates
for each input.
scope: Scope for the op.
seed: Random seed to use.
diff --git a/tensorflow/contrib/training/python/training/sampling_ops.py b/tensorflow/contrib/training/python/training/sampling_ops.py
index ba888f8..7140f2a 100644
--- a/tensorflow/contrib/training/python/training/sampling_ops.py
+++ b/tensorflow/contrib/training/python/training/sampling_ops.py
@@ -123,7 +123,7 @@
batch_size=batch_size,
num_threads=queue_threads)
- # Queues return a single tensor if the list of enqued tensors is one. Since
+ # Queues return a single tensor if the list of enqueued tensors is one. Since
# we want the type to always be the same, always return a list.
if isinstance(minibatch, ops.Tensor):
minibatch = [minibatch]
@@ -312,7 +312,7 @@
"""Verify that batched inputs are well-formed."""
checked_probs_list = []
for probs in probs_list:
- # Since number of classes shouldn't change at runtime, probalities shape
+ # Since number of classes shouldn't change at runtime, probabilities shape
# should be fully defined.
probs.get_shape().assert_is_fully_defined()
@@ -407,7 +407,7 @@
```
- A solution for a_i in terms of the other variabes is the following:
+ A solution for a_i in terms of the other variables is the following:
```a_i = (t_i / p_i) / max_i[t_i / p_i]```
"""
# Make list of t_i / p_i.
diff --git a/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py b/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py
index 99d486b..39d75a0 100644
--- a/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py
+++ b/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py
@@ -876,7 +876,7 @@
]):
self._length = array_ops.identity(self._length)
- # Only create barrier; enqueu and dequeue operations happen when you
+ # Only create barrier; enqueue and dequeue operations happen when you
# access prefetch_op and next_batch.
self._create_barrier()
self._scope = scope
@@ -1637,7 +1637,7 @@
For `key, value` pairs in `input_context` with `SparseTensor` `value` removes
them from `input_context` and transforms the `value` into a sequence and
- then adding `key`, transformed `value` into `input_seuqences`.
+ then adding `key`, transformed `value` into `input_sequences`.
The transformation is done by adding a new first dimension of `value_length`
equal to that of the other values in input_sequences` and tiling the `value`
every `num_unroll` steps.
diff --git a/tensorflow/core/BUILD b/tensorflow/core/BUILD
index a2ff297..ba1fd41 100644
--- a/tensorflow/core/BUILD
+++ b/tensorflow/core/BUILD
@@ -145,6 +145,7 @@
"if_static",
)
load("@local_config_cuda//cuda:build_defs.bzl", "if_cuda")
+load("@io_bazel_rules_closure//closure:defs.bzl", "closure_proto_library")
load(
"//third_party/mkl:build_defs.bzl",
"if_mkl",
@@ -247,6 +248,15 @@
deps = [":protos_all_cc"],
)
+proto_library(
+ name = "example_protos",
+ srcs = [
+ "example/example.proto",
+ "example/feature.proto",
+ ],
+ visibility = ["//visibility:public"],
+)
+
exports_files([
"framework/types.proto",
])
@@ -4066,3 +4076,9 @@
actual = ":mobile_srcs",
visibility = ["//visibility:public"],
)
+
+closure_proto_library(
+ name = "example_protos_closure",
+ visibility = ["//visibility:public"],
+ deps = [":example_protos"],
+)
diff --git a/tensorflow/core/api_def/base_api/api_def_ApplyAdaMax.pbtxt b/tensorflow/core/api_def/base_api/api_def_ApplyAdaMax.pbtxt
new file mode 100644
index 0000000..145d05d
--- /dev/null
+++ b/tensorflow/core/api_def/base_api/api_def_ApplyAdaMax.pbtxt
@@ -0,0 +1,78 @@
+op {
+ graph_op_name: "ApplyAdaMax"
+ visibility: HIDDEN
+ in_arg {
+ name: "var"
+ description: <<END
+Should be from a Variable().
+END
+ }
+ in_arg {
+ name: "m"
+ description: <<END
+Should be from a Variable().
+END
+ }
+ in_arg {
+ name: "v"
+ description: <<END
+Should be from a Variable().
+END
+ }
+ in_arg {
+ name: "beta1_power"
+ description: <<END
+Must be a scalar.
+END
+ }
+ in_arg {
+ name: "lr"
+ description: <<END
+Scaling factor. Must be a scalar.
+END
+ }
+ in_arg {
+ name: "beta1"
+ description: <<END
+Momentum factor. Must be a scalar.
+END
+ }
+ in_arg {
+ name: "beta2"
+ description: <<END
+Momentum factor. Must be a scalar.
+END
+ }
+ in_arg {
+ name: "epsilon"
+ description: <<END
+Ridge term. Must be a scalar.
+END
+ }
+ in_arg {
+ name: "grad"
+ description: <<END
+The gradient.
+END
+ }
+ out_arg {
+ name: "out"
+ description: <<END
+Same as "var".
+END
+ }
+ attr {
+ name: "use_locking"
+ description: <<END
+If `True`, updating of the var, m, and v tensors will be protected
+by a lock; otherwise the behavior is undefined, but may exhibit less
+contention.
+END
+ }
+ summary: "Update \'*var\' according to the AdaMax algorithm."
+ description: <<END
+m_t <- beta1 * m_{t-1} + (1 - beta1) * g
+v_t <- max(beta2 * v_{t-1}, abs(g))
+variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon)
+END
+}
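The update rule above is easy to check numerically; a NumPy rendering
(illustrative only, not the kernel implementation) looks like this:

```python
import numpy as np

def adamax_update(var, m, v, grad, lr, beta1, beta2, epsilon, beta1_power):
  """One AdaMax step following the pseudo-code in the description above."""
  m[:] = beta1 * m + (1.0 - beta1) * grad      # m_t
  v[:] = np.maximum(beta2 * v, np.abs(grad))   # v_t, the infinity-norm moment
  var[:] -= lr / (1.0 - beta1_power) * m / (v + epsilon)

var, m, v = np.ones(3), np.zeros(3), np.zeros(3)
adamax_update(var, m, v, grad=np.array([0.1, -0.2, 0.3]), lr=0.002,
              beta1=0.9, beta2=0.999, epsilon=1e-8,
              beta1_power=0.9)  # beta1**t after the first step
```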
diff --git a/tensorflow/core/api_def/base_api/api_def_BroadcastTo.pbtxt b/tensorflow/core/api_def/base_api/api_def_BroadcastTo.pbtxt
new file mode 100644
index 0000000..7637601
--- /dev/null
+++ b/tensorflow/core/api_def/base_api/api_def_BroadcastTo.pbtxt
@@ -0,0 +1,41 @@
+op {
+ graph_op_name: "BroadcastTo"
+ in_arg {
+ name: "input"
+ description: <<END
+A Tensor to broadcast.
+END
+ }
+ in_arg {
+ name: "shape"
+ description: <<END
+A 1-D `int` Tensor. The shape of the desired output.
+END
+ }
+ out_arg {
+ name: "output"
+ description: <<END
+A Tensor.
+END
+ }
+ summary: "Broadcast an array for a compatible shape."
+ description: <<END
+Broadcasting is the process of making arrays have compatible shapes for
+arithmetic operations. Two shapes are compatible if, for each dimension pair,
+they are either equal or one of them is one. When broadcasting a Tensor to a
+shape, comparison starts with the trailing dimensions and works its way
+forward.
+
+For example,
+```
+>>> x = tf.constant([1, 2, 3])
+>>> y = tf.broadcast_to(x, [3, 3])
+>>> sess.run(y)
+array([[1, 2, 3],
+ [1, 2, 3],
+ [1, 2, 3]], dtype=int32)
+```
+In the above example, the input Tensor with shape `[1, 3]`
+is broadcast to an output Tensor with shape `[3, 3]`.
+END
+}
diff --git a/tensorflow/core/api_def/base_api/api_def_ImageSummary.pbtxt b/tensorflow/core/api_def/base_api/api_def_ImageSummary.pbtxt
index 9b00f5b..56a3658 100644
--- a/tensorflow/core/api_def/base_api/api_def_ImageSummary.pbtxt
+++ b/tensorflow/core/api_def/base_api/api_def_ImageSummary.pbtxt
@@ -61,7 +61,7 @@
generated sequentially as '*tag*/image/0', '*tag*/image/1', etc.
The `bad_color` argument is the color to use in the generated images for
-non-finite input values. It is a `unit8` 1-D tensor of length `channels`.
+non-finite input values. It is a `uint8` 1-D tensor of length `channels`.
Each element must be in the range `[0, 255]` (It represents the value of a
pixel in the output image). Non-finite values in the input tensor are
replaced by this tensor in the output image. The default value is the color
diff --git a/tensorflow/core/api_def/base_api/api_def_ResourceApplyAdaMax.pbtxt b/tensorflow/core/api_def/base_api/api_def_ResourceApplyAdaMax.pbtxt
new file mode 100644
index 0000000..a3f2188
--- /dev/null
+++ b/tensorflow/core/api_def/base_api/api_def_ResourceApplyAdaMax.pbtxt
@@ -0,0 +1,72 @@
+op {
+ graph_op_name: "ResourceApplyAdaMax"
+ visibility: HIDDEN
+ in_arg {
+ name: "var"
+ description: <<END
+Should be from a Variable().
+END
+ }
+ in_arg {
+ name: "m"
+ description: <<END
+Should be from a Variable().
+END
+ }
+ in_arg {
+ name: "v"
+ description: <<END
+Should be from a Variable().
+END
+ }
+ in_arg {
+ name: "beta1_power"
+ description: <<END
+Must be a scalar.
+END
+ }
+ in_arg {
+ name: "lr"
+ description: <<END
+Scaling factor. Must be a scalar.
+END
+ }
+ in_arg {
+ name: "beta1"
+ description: <<END
+Momentum factor. Must be a scalar.
+END
+ }
+ in_arg {
+ name: "beta2"
+ description: <<END
+Momentum factor. Must be a scalar.
+END
+ }
+ in_arg {
+ name: "epsilon"
+ description: <<END
+Ridge term. Must be a scalar.
+END
+ }
+ in_arg {
+ name: "grad"
+ description: <<END
+The gradient.
+END
+ }
+ attr {
+ name: "use_locking"
+ description: <<END
+If `True`, updating of the var, m, and v tensors will be protected
+by a lock; otherwise the behavior is undefined, but may exhibit less
+contention.
+END
+ }
+ summary: "Update \'*var\' according to the AdaMax algorithm."
+ description: <<END
+m_t <- beta1 * m_{t-1} + (1 - beta1) * g
+v_t <- max(beta2 * v_{t-1}, abs(g))
+variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon)
+END
+}
diff --git a/tensorflow/core/api_def/base_api/api_def_StringStrip.pbtxt b/tensorflow/core/api_def/base_api/api_def_StringStrip.pbtxt
new file mode 100644
index 0000000..12fbdfd
--- /dev/null
+++ b/tensorflow/core/api_def/base_api/api_def_StringStrip.pbtxt
@@ -0,0 +1,16 @@
+op {
+ graph_op_name: "StringStrip"
+ in_arg {
+ name: "input"
+ description: <<END
+A string `Tensor` of any shape.
+END
+ }
+ out_arg {
+ name: "output"
+ description: <<END
+A string `Tensor` of the same shape as the input.
+END
+ }
+  summary: "Strip leading and trailing whitespace from the Tensor."
+}
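Assuming the auto-generated Python wrapper for this op is exposed as
`tf.string_strip` (a hypothetical name here; check the generated string_ops
wrappers in your build), usage would look roughly like:

```python
import tensorflow as tf

# Hypothetical wrapper name; the kernel itself is StringStrip on DEVICE_CPU.
stripped = tf.string_strip(tf.constant(["  hello ", "\tworld\n"]))
with tf.Session() as sess:
  print(sess.run(stripped))  # [b'hello' b'world']
```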
diff --git a/tensorflow/core/api_def/python_api/api_def_ApplyAdaMax.pbtxt b/tensorflow/core/api_def/python_api/api_def_ApplyAdaMax.pbtxt
new file mode 100644
index 0000000..e49a355
--- /dev/null
+++ b/tensorflow/core/api_def/python_api/api_def_ApplyAdaMax.pbtxt
@@ -0,0 +1,4 @@
+op {
+ graph_op_name: "ApplyAdaMax"
+ visibility: HIDDEN
+}
diff --git a/tensorflow/core/api_def/python_api/api_def_BroadcastTo.pbtxt b/tensorflow/core/api_def/python_api/api_def_BroadcastTo.pbtxt
new file mode 100644
index 0000000..083eeced
--- /dev/null
+++ b/tensorflow/core/api_def/python_api/api_def_BroadcastTo.pbtxt
@@ -0,0 +1,4 @@
+op {
+ graph_op_name: "BroadcastTo"
+ visibility: HIDDEN
+}
diff --git a/tensorflow/core/api_def/python_api/api_def_ResourceApplyAdaMax.pbtxt b/tensorflow/core/api_def/python_api/api_def_ResourceApplyAdaMax.pbtxt
new file mode 100644
index 0000000..ca679e6
--- /dev/null
+++ b/tensorflow/core/api_def/python_api/api_def_ResourceApplyAdaMax.pbtxt
@@ -0,0 +1,4 @@
+op {
+ graph_op_name: "ResourceApplyAdaMax"
+ visibility: HIDDEN
+}
diff --git a/tensorflow/core/common_runtime/bfc_allocator.h b/tensorflow/core/common_runtime/bfc_allocator.h
index b8e7735..ba5a3ee 100644
--- a/tensorflow/core/common_runtime/bfc_allocator.h
+++ b/tensorflow/core/common_runtime/bfc_allocator.h
@@ -378,7 +378,7 @@
inline int Log2FloorNonZero(uint64 n) {
#if defined(__GNUC__)
return 63 ^ __builtin_clzll(n);
-#elif defined(PLATFORM_WINDOWS)
+#elif defined(PLATFORM_WINDOWS) && (_WIN64)
unsigned long index;
_BitScanReverse64(&index, n);
return index;
diff --git a/tensorflow/core/common_runtime/mkl_cpu_allocator.h b/tensorflow/core/common_runtime/mkl_cpu_allocator.h
index b2ef51d..245320c 100644
--- a/tensorflow/core/common_runtime/mkl_cpu_allocator.h
+++ b/tensorflow/core/common_runtime/mkl_cpu_allocator.h
@@ -31,6 +31,10 @@
#include "i_malloc.h"
+#ifdef _WIN32
+typedef unsigned int uint;
+#endif
+
namespace tensorflow {
class MklSubAllocator : public SubAllocator {
diff --git a/tensorflow/core/framework/collective.h b/tensorflow/core/framework/collective.h
index 0943b85..f6fe12e 100644
--- a/tensorflow/core/framework/collective.h
+++ b/tensorflow/core/framework/collective.h
@@ -179,7 +179,7 @@
virtual void RefreshStepIdSequenceAsync(int64 graph_key,
const StatusCallback& done) = 0;
- // Returns the the step_id that should be used for initiating a new execution
+ // Returns the step_id that should be used for initiating a new execution
// on the specified graph. May return the same step_id multiple times if
// RetireStepId or RefreshStepIdReservation is not called.
virtual int64 NextStepId(int64 graph_key) = 0;
diff --git a/tensorflow/core/framework/numeric_types.h b/tensorflow/core/framework/numeric_types.h
index dab53cb..b1d0127 100644
--- a/tensorflow/core/framework/numeric_types.h
+++ b/tensorflow/core/framework/numeric_types.h
@@ -111,7 +111,7 @@
} // namespace numext
} // namespace Eigen
-#if defined(COMPILER_MSVC) && !defined(__clang__)
+#if defined(_MSC_VER) && !defined(__clang__)
namespace std {
template <>
struct hash<Eigen::half> {
@@ -120,6 +120,6 @@
}
};
} // namespace std
-#endif // COMPILER_MSVC
+#endif // _MSC_VER
#endif // TENSORFLOW_FRAMEWORK_NUMERIC_TYPES_H_
diff --git a/tensorflow/core/graph/mkl_tfconversion_pass.h b/tensorflow/core/graph/mkl_tfconversion_pass.h
index 0562d8b..84e50ee 100644
--- a/tensorflow/core/graph/mkl_tfconversion_pass.h
+++ b/tensorflow/core/graph/mkl_tfconversion_pass.h
@@ -24,6 +24,10 @@
#include <memory>
#include "tensorflow/core/graph/graph.h"
+#ifdef _WIN32
+typedef unsigned int uint;
+#endif
+
namespace tensorflow {
// Interface to invoke the pass for unit test
//
diff --git a/tensorflow/core/grappler/clusters/single_machine_test.cc b/tensorflow/core/grappler/clusters/single_machine_test.cc
index c6352c1..352f08f 100644
--- a/tensorflow/core/grappler/clusters/single_machine_test.cc
+++ b/tensorflow/core/grappler/clusters/single_machine_test.cc
@@ -196,10 +196,19 @@
TF_CHECK_OK(cluster_->Run(item.graph, item.feed, item.fetch, &metadata));
std::set<string> cost_nodes;
for (const auto& node : metadata.cost_graph().node()) {
+#ifdef INTEL_MKL
+ // Skip the special nodes inserted by TF (and MKL): these are either
+ // prefixed with an underscore or contain "/_".
+ if (node.name()[0] == '_' || node.name().find("/_") != string::npos) {
+ continue;
+ }
+ cost_nodes.insert(node.name());
+#else
// Skip nodes added by TF internally.
if (node.name()[0] != '_') {
cost_nodes.insert(node.name());
}
+#endif
}
const std::set<string> expected_cost_nodes = {
"zero", "one", "add", "square",
diff --git a/tensorflow/core/grappler/optimizers/BUILD b/tensorflow/core/grappler/optimizers/BUILD
index 3f573cd..ad2db68 100644
--- a/tensorflow/core/grappler/optimizers/BUILD
+++ b/tensorflow/core/grappler/optimizers/BUILD
@@ -243,6 +243,7 @@
deps = [
":graph_optimizer",
"//tensorflow/core:lib",
+ "//tensorflow/core:protos_all_cc",
],
)
diff --git a/tensorflow/core/grappler/optimizers/custom_graph_optimizer.h b/tensorflow/core/grappler/optimizers/custom_graph_optimizer.h
index a80d46f..4d7f8c9 100644
--- a/tensorflow/core/grappler/optimizers/custom_graph_optimizer.h
+++ b/tensorflow/core/grappler/optimizers/custom_graph_optimizer.h
@@ -18,6 +18,7 @@
#include "tensorflow/core/grappler/optimizers/graph_optimizer.h"
#include "tensorflow/core/lib/core/status.h"
+#include "tensorflow/core/protobuf/rewriter_config.pb.h"
namespace tensorflow {
namespace grappler {
@@ -26,7 +27,8 @@
class CustomGraphOptimizer : public GraphOptimizer {
public:
virtual ~CustomGraphOptimizer() {}
- virtual Status Init() = 0;
+ virtual Status Init(const tensorflow::RewriterConfig_CustomGraphOptimizer*
+ config = nullptr) = 0;
};
} // end namespace grappler
diff --git a/tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry_test.cc b/tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry_test.cc
index 629f5e8..bdb1ae8 100644
--- a/tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry_test.cc
+++ b/tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry_test.cc
@@ -32,7 +32,10 @@
class TestGraphOptimizer : public CustomGraphOptimizer {
public:
- Status Init() override { return Status::OK(); }
+ Status Init(const tensorflow::RewriterConfig_CustomGraphOptimizer* config =
+ nullptr) override {
+ return Status::OK();
+ }
string name() const override { return kTestOptimizerName; }
Status Optimize(Cluster* cluster, const GrapplerItem& item,
GraphDef* optimized_graph) override {
diff --git a/tensorflow/core/grappler/optimizers/meta_optimizer_test.cc b/tensorflow/core/grappler/optimizers/meta_optimizer_test.cc
index d9a386b..9fcf076 100644
--- a/tensorflow/core/grappler/optimizers/meta_optimizer_test.cc
+++ b/tensorflow/core/grappler/optimizers/meta_optimizer_test.cc
@@ -36,7 +36,10 @@
TestOptimizer() {}
string name() const override { return "test_optimizer"; }
- Status Init() override { return Status::OK(); }
+ Status Init(const tensorflow::RewriterConfig_CustomGraphOptimizer* config =
+ nullptr) override {
+ return Status::OK();
+ }
Status Optimize(Cluster* cluster, const GrapplerItem& item,
GraphDef* optimized_graph) override {
diff --git a/tensorflow/core/kernels/BUILD b/tensorflow/core/kernels/BUILD
index f7f6a9b..201cd35 100644
--- a/tensorflow/core/kernels/BUILD
+++ b/tensorflow/core/kernels/BUILD
@@ -617,6 +617,7 @@
":batch_space_ops",
":bcast_ops",
":bitcast_op",
+ ":broadcast_to_op",
":concat_op",
":constant_op",
":depth_space_ops",
@@ -669,6 +670,12 @@
)
tf_kernel_library(
+ name = "broadcast_to_op",
+ prefix = "broadcast_to_op",
+ deps = ARRAY_DEPS,
+)
+
+tf_kernel_library(
name = "concat_op",
prefix = "concat_op",
deps = ARRAY_DEPS,
@@ -4227,6 +4234,7 @@
":regex_replace_op",
":string_join_op",
":string_split_op",
+ ":string_strip_op",
":string_to_hash_bucket_op",
":substr_op",
],
@@ -4272,6 +4280,12 @@
)
tf_kernel_library(
+ name = "string_strip_op",
+ prefix = "string_strip_op",
+ deps = STRING_DEPS,
+)
+
+tf_kernel_library(
name = "substr_op",
prefix = "substr_op",
deps = STRING_DEPS,
@@ -5947,8 +5961,7 @@
"//tensorflow/core:lib_internal",
"//tensorflow/core:nn_ops_op_lib",
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -5963,8 +5976,7 @@
"//tensorflow/core:lib_internal",
"//tensorflow/core:nn_ops_op_lib",
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -5980,8 +5992,7 @@
"//tensorflow/core:lib_internal",
"//tensorflow/core:nn_ops_op_lib",
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -6001,8 +6012,7 @@
"//tensorflow/core:lib_internal",
"//tensorflow/core:nn_ops_op_lib",
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -6018,8 +6028,7 @@
"//tensorflow/core:nn_ops_op_lib",
"//third_party/eigen3",
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -6035,8 +6044,7 @@
"//tensorflow/core:nn_ops_op_lib",
"//third_party/eigen3",
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -6044,8 +6052,7 @@
srcs = ["mkl_fused_batch_norm_op.cc"],
deps = NN_DEPS + [
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -6053,8 +6060,7 @@
prefix = "mkl_aggregate_ops",
deps = MATH_DEPS + [
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -6062,8 +6068,7 @@
prefix = "mkl_concat_op",
deps = ARRAY_DEPS + [
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -6071,8 +6076,7 @@
prefix = "mkl_reshape_op",
deps = ARRAY_DEPS + [
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -6080,8 +6084,7 @@
prefix = "mkl_identity_op",
deps = ARRAY_DEPS + [
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
@@ -6089,8 +6092,7 @@
prefix = "mkl_lrn_op",
deps = NN_DEPS + [
"//third_party/mkl:intel_binary_blob",
- "@mkl_dnn",
- ],
+ ] + if_mkl(["@mkl_dnn"]),
)
tf_mkl_kernel_library(
diff --git a/tensorflow/core/kernels/batching_util/shared_batch_scheduler.h b/tensorflow/core/kernels/batching_util/shared_batch_scheduler.h
index edc88a0..b4bce90 100644
--- a/tensorflow/core/kernels/batching_util/shared_batch_scheduler.h
+++ b/tensorflow/core/kernels/batching_util/shared_batch_scheduler.h
@@ -136,7 +136,7 @@
// (inclusive). If there is a need to quantize the batch sizes, i.e. only
// submit batches whose size is in a small set of allowed sizes, that can be
// done by adding padding in the process-batch callback.
- int max_batch_size = 1000;
+ size_t max_batch_size = 1000;
// If a task has been enqueued for this amount of time (in microseconds),
// and a thread is available, the scheduler will immediately form a batch
@@ -157,7 +157,7 @@
// If this limit is reached, Schedule() will return an UNAVAILABLE error.
// See the class documentation above for guidelines on how to tune this
// parameter.
- int max_enqueued_batches = 10;
+ size_t max_enqueued_batches = 10;
};
Status AddQueue(const QueueOptions& options,
std::function<void(std::unique_ptr<Batch<TaskType>>)>
@@ -394,7 +394,7 @@
std::function<void(std::unique_ptr<Batch<TaskType>>)>
process_batch_callback,
std::unique_ptr<BatchScheduler<TaskType>>* queue) {
- if (options.max_batch_size <= 0) {
+ if (options.max_batch_size == 0) {
return errors::InvalidArgument("max_batch_size must be positive; was ",
options.max_batch_size);
}
diff --git a/tensorflow/core/kernels/broadcast_to_op.cc b/tensorflow/core/kernels/broadcast_to_op.cc
new file mode 100644
index 0000000..2810925
--- /dev/null
+++ b/tensorflow/core/kernels/broadcast_to_op.cc
@@ -0,0 +1,91 @@
+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+#define EIGEN_USE_THREADS
+
+#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
+
+#include "tensorflow/core/framework/op_kernel.h"
+#include "tensorflow/core/framework/register_types.h"
+#include "tensorflow/core/framework/tensor.h"
+#include "tensorflow/core/framework/types.h"
+#include "tensorflow/core/kernels/broadcast_to_op.h"
+
+namespace tensorflow {
+
+typedef Eigen::ThreadPoolDevice CPUDevice;
+typedef Eigen::GpuDevice GPUDevice;
+
+template <typename Device, typename T>
+class BroadcastToOp : public OpKernel {
+ public:
+ explicit BroadcastToOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
+
+ void Compute(OpKernelContext* ctx) override {
+ const Tensor& input_tensor = ctx->input(0);
+ const TensorShape& input_shape = input_tensor.shape();
+
+ const Tensor& shape_tensor = ctx->input(1);
+
+ TensorShape output_shape;
+ OP_REQUIRES_OK(ctx,
+ ctx->op_kernel().MakeShape(shape_tensor, &output_shape));
+
+ Tensor* output_tensor = nullptr;
+ OP_REQUIRES_OK(ctx, ctx->allocate_output(0, output_shape, &output_tensor));
+
+ const Device& d = ctx->eigen_device<Device>();
+ functor::BroadcastTo<Device, T>()(d, ctx, *output_tensor, output_shape,
+ input_tensor, input_shape);
+ }
+};
+
+// Since MakeShape handles both DT_INT32 and DT_INT64, there is no need for a
+// TypeConstraint on `Tidx`.
+#define REGISTER_KERNEL(type) \
+ REGISTER_KERNEL_BUILDER( \
+ Name("BroadcastTo").Device(DEVICE_CPU).TypeConstraint<type>("T"), \
+ BroadcastToOp<CPUDevice, type>);
+
+TF_CALL_ALL_TYPES(REGISTER_KERNEL);
+#undef REGISTER_KERNEL
+
+#if GOOGLE_CUDA
+
+namespace functor {
+#define DECLARE_GPU_TEMPLATE(Type) \
+ template <> \
+ void BroadcastTo<GPUDevice, Type>::operator()( \
+ const GPUDevice& d, OpKernelContext* ctx, Tensor& output, \
+ const TensorShape& output_shape, const Tensor& input, \
+ const TensorShape& input_shape); \
+ extern template struct BroadcastTo<GPUDevice, Type>;
+
+TF_CALL_GPU_ALL_TYPES(DECLARE_GPU_TEMPLATE);
+#undef DECLARE_GPU_TEMPLATE
+} // namespace functor
+
+#define REGISTER_KERNEL(type) \
+ REGISTER_KERNEL_BUILDER(Name("BroadcastTo") \
+ .Device(DEVICE_GPU) \
+ .TypeConstraint<type>("T") \
+ .HostMemory("shape"), \
+ BroadcastToOp<GPUDevice, type>);
+
+TF_CALL_GPU_ALL_TYPES(REGISTER_KERNEL);
+#undef REGISTER_KERNEL
+#endif
+
+} // namespace tensorflow
diff --git a/tensorflow/core/kernels/broadcast_to_op.h b/tensorflow/core/kernels/broadcast_to_op.h
new file mode 100644
index 0000000..608e9b6
--- /dev/null
+++ b/tensorflow/core/kernels/broadcast_to_op.h
@@ -0,0 +1,220 @@
+/* Copyright 2015 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+#ifndef TENSORFLOW_KERNELS_BROADCAST_TO_OP_H_
+#define TENSORFLOW_KERNELS_BROADCAST_TO_OP_H_
+
+#include "tensorflow/core/framework/op_kernel.h"
+#include "tensorflow/core/framework/tensor.h"
+#include "tensorflow/core/framework/tensor_shape.h"
+#include "tensorflow/core/framework/tensor_types.h"
+#include "tensorflow/core/framework/types.h"
+#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
+
+namespace tensorflow {
+
+namespace functor {
+
+template <typename Device, typename T>
+struct BroadcastTo {
+ void operator()(const Device &d, OpKernelContext *ctx, Tensor &output_tensor,
+ const TensorShape &output_shape, const Tensor &input_tensor,
+ const TensorShape &input_shape) {
+#define BROADCAST_SHAPE(broadcast, reshape, NDIMS, input_shape, output_shape) \
+ for (int i = 0; i < NDIMS; i++) { \
+ OP_REQUIRES(ctx, (broadcast[i] % reshape[i] == 0), \
+ errors::InvalidArgument("invalid shape to broadcast from ", \
+ input_shape.DebugString(), " to ", \
+ output_shape.DebugString())); \
+ broadcast[i] = broadcast[i] / reshape[i]; \
+ }
+
+ switch (output_shape.dims()) {
+ case 1: {
+ auto reshape = AsEigenDSizesWithPrefix<1>(input_shape);
+ auto broadcast = output_shape.AsEigenDSizes<1>();
+
+ BROADCAST_SHAPE(broadcast, reshape, 1, input_shape, output_shape);
+
+ auto output = output_tensor.tensor<T, 1>();
+ switch (input_shape.dims()) {
+ case 0: {
+ output.device(d) = output.constant(input_tensor.scalar<T>()());
+ } break;
+ case 1: {
+ auto input = input_tensor.tensor<T, 1>();
+ output.device(d) = input.broadcast(broadcast);
+ } break;
+ default:
+ ctx->CtxFailure(errors::InvalidArgument(
+ "invalid shape to broadcast from ", input_shape.DebugString(),
+ " to ", output_shape.DebugString()));
+ break;
+ }
+ } break;
+ case 2: {
+ auto reshape = AsEigenDSizesWithPrefix<2>(input_shape);
+ auto broadcast = output_shape.AsEigenDSizes<2>();
+
+ BROADCAST_SHAPE(broadcast, reshape, 2, input_shape, output_shape);
+
+ auto output = output_tensor.tensor<T, 2>();
+ switch (input_shape.dims()) {
+ case 0: {
+ output.device(d) = output.constant(input_tensor.scalar<T>()());
+ } break;
+ case 1: {
+ auto input = input_tensor.tensor<T, 1>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 2: {
+ auto input = input_tensor.tensor<T, 2>();
+ output.device(d) = input.broadcast(broadcast);
+ } break;
+ default:
+ ctx->CtxFailure(errors::InvalidArgument(
+ "invalid shape to broadcast from ", input_shape.DebugString(),
+ " to ", output_shape.DebugString()));
+ break;
+ }
+ } break;
+ case 3: {
+ auto reshape = AsEigenDSizesWithPrefix<3>(input_shape);
+ auto broadcast = output_shape.AsEigenDSizes<3>();
+
+ BROADCAST_SHAPE(broadcast, reshape, 3, input_shape, output_shape);
+
+ auto output = output_tensor.tensor<T, 3>();
+ switch (input_shape.dims()) {
+ case 0: {
+ output.device(d) = output.constant(input_tensor.scalar<T>()());
+ } break;
+ case 1: {
+ auto input = input_tensor.tensor<T, 1>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 2: {
+ auto input = input_tensor.tensor<T, 2>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 3: {
+ auto input = input_tensor.tensor<T, 3>();
+ output.device(d) = input.broadcast(broadcast);
+ } break;
+ default:
+ ctx->CtxFailure(errors::InvalidArgument(
+ "invalid shape to broadcast from ", input_shape.DebugString(),
+ " to ", output_shape.DebugString()));
+ break;
+ }
+ } break;
+ case 4: {
+ auto reshape = AsEigenDSizesWithPrefix<4>(input_shape);
+ auto broadcast = output_shape.AsEigenDSizes<4>();
+
+ BROADCAST_SHAPE(broadcast, reshape, 4, input_shape, output_shape);
+
+ auto output = output_tensor.tensor<T, 4>();
+ switch (input_shape.dims()) {
+ case 0: {
+ output.device(d) = output.constant(input_tensor.scalar<T>()());
+ } break;
+ case 1: {
+ auto input = input_tensor.tensor<T, 1>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 2: {
+ auto input = input_tensor.tensor<T, 2>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 3: {
+ auto input = input_tensor.tensor<T, 3>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 4: {
+ auto input = input_tensor.tensor<T, 4>();
+ output.device(d) = input.broadcast(broadcast);
+ } break;
+ default:
+ ctx->CtxFailure(errors::InvalidArgument(
+ "invalid shape to broadcast from ", input_shape.DebugString(),
+ " to ", output_shape.DebugString()));
+ break;
+ }
+ } break;
+ case 5: {
+ auto reshape = AsEigenDSizesWithPrefix<5>(input_shape);
+ auto broadcast = output_shape.AsEigenDSizes<5>();
+
+ BROADCAST_SHAPE(broadcast, reshape, 5, input_shape, output_shape);
+ auto output = output_tensor.tensor<T, 5>();
+ switch (input_shape.dims()) {
+ case 0: {
+ output.device(d) = output.constant(input_tensor.scalar<T>()());
+ } break;
+ case 1: {
+ auto input = input_tensor.tensor<T, 1>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 2: {
+ auto input = input_tensor.tensor<T, 2>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 3: {
+ auto input = input_tensor.tensor<T, 3>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 4: {
+ auto input = input_tensor.tensor<T, 4>();
+ output.device(d) = input.reshape(reshape).broadcast(broadcast);
+ } break;
+ case 5: {
+ auto input = input_tensor.tensor<T, 5>();
+ output.device(d) = input.broadcast(broadcast);
+ } break;
+ default:
+ ctx->CtxFailure(errors::InvalidArgument(
+ "invalid shape to broadcast from ", input_shape.DebugString(),
+ " to ", output_shape.DebugString()));
+ break;
+ }
+ } break;
+ default:
+ ctx->CtxFailure(errors::InvalidArgument(
+ "invalid shape to broadcast from ", input_shape.DebugString(),
+ " to ", output_shape.DebugString()));
+ break;
+ }
+ }
+
+ private:
+ template <int NDIMS>
+ Eigen::DSizes<Eigen::DenseIndex, NDIMS> AsEigenDSizesWithPrefix(
+ const TensorShape &shape) const {
+ Eigen::DSizes<Eigen::DenseIndex, NDIMS> dsizes;
+ for (int d = 0; d < NDIMS - shape.dims(); d++) {
+ dsizes[d] = 1;
+ }
+ for (int d = NDIMS - shape.dims(); d < NDIMS; d++) {
+ dsizes[d] = shape.dim_size(d - (NDIMS - shape.dims()));
+ }
+ return dsizes;
+ }
+};
+
+} // namespace functor
+} // namespace tensorflow
+
+#endif // TENSORFLOW_KERNELS_BROADCAST_TO_OP_H_
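The functor's strategy is to left-pad the input shape with ones
(`AsEigenDSizesWithPrefix`), check that each output dimension is a multiple of
the padded input dimension (the `BROADCAST_SHAPE` macro), and then reshape and
broadcast. A NumPy sketch of the same logic (illustrative only, not the
kernel):

```python
import numpy as np

def broadcast_to_sketch(x, output_shape):
  """Pad the shape with leading 1s, check divisibility, then tile."""
  pad = len(output_shape) - x.ndim
  reshape = (1,) * pad + x.shape                 # AsEigenDSizesWithPrefix
  for out_d, in_d in zip(output_shape, reshape):
    if out_d % in_d != 0:                        # BROADCAST_SHAPE's check
      raise ValueError("invalid shape to broadcast from %s to %s"
                       % (x.shape, output_shape))
  reps = tuple(o // i for o, i in zip(output_shape, reshape))
  return np.tile(x.reshape(reshape), reps)       # reshape().broadcast()

print(broadcast_to_sketch(np.array([1, 2, 3]), (3, 3)))
# [[1 2 3]
#  [1 2 3]
#  [1 2 3]]
```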
diff --git a/tensorflow/core/kernels/broadcast_to_op_gpu.cu.cc b/tensorflow/core/kernels/broadcast_to_op_gpu.cu.cc
new file mode 100644
index 0000000..6459571
--- /dev/null
+++ b/tensorflow/core/kernels/broadcast_to_op_gpu.cu.cc
@@ -0,0 +1,34 @@
+/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+#if GOOGLE_CUDA
+
+#define EIGEN_USE_GPU
+
+#include "tensorflow/core/kernels/broadcast_to_op.h"
+#include "tensorflow/core/framework/register_types.h"
+
+namespace tensorflow {
+
+typedef Eigen::GpuDevice GPUDevice;
+
+#define INSTANTIATE_GPU_KERNEL(Type) \
+ template class functor::BroadcastTo<GPUDevice, Type>;
+TF_CALL_GPU_ALL_TYPES(INSTANTIATE_GPU_KERNEL);
+#undef INSTANTIATE_GPU_KERNEL
+
+} // namespace tensorflow
+
+#endif // GOOGLE_CUDA
diff --git a/tensorflow/core/kernels/conv_ops_gpu.h b/tensorflow/core/kernels/conv_ops_gpu.h
index 4215c45..d2c8020 100644
--- a/tensorflow/core/kernels/conv_ops_gpu.h
+++ b/tensorflow/core/kernels/conv_ops_gpu.h
@@ -139,9 +139,8 @@
bool ShouldIncludeWinogradNonfusedAlgo(
se::StreamExecutor* stream_exec) const {
// Skip this check for cuDNN 7 and newer.
- se::port::StatusOr<std::tuple<int, int, int>> version =
- stream_exec->AsDnn()->GetVersion();
- if (version.ok() && std::get<0>(version.ValueOrDie()) >= 7) {
+ auto version = stream_exec->AsDnn()->GetVersion();
+ if (version.ok() && version.ValueOrDie().major_version() >= 7) {
return true;
}
return ShouldIncludeWinogradNonfusedAlgoPreCudnn7<T>();
diff --git a/tensorflow/core/kernels/ctc_decoder_ops.cc b/tensorflow/core/kernels/ctc_decoder_ops.cc
index 96bdb6a..8cadeac 100644
--- a/tensorflow/core/kernels/ctc_decoder_ops.cc
+++ b/tensorflow/core/kernels/ctc_decoder_ops.cc
@@ -27,6 +27,7 @@
#include "tensorflow/core/platform/macros.h"
#include "tensorflow/core/util/ctc/ctc_beam_search.h"
#include "tensorflow/core/util/sparse/sparse_tensor.h"
+#include "tensorflow/core/util/work_sharder.h"
namespace tensorflow {
@@ -213,20 +214,29 @@
// Perform best path decoding
std::vector<std::vector<std::vector<int> > > sequences(batch_size);
- for (int b = 0; b < batch_size; ++b) {
- sequences[b].resize(1);
- auto& sequence = sequences[b][0];
- int prev_indices = -1;
- for (int t = 0; t < seq_len_t(b); ++t) {
- int max_class_indices;
- log_prob_t(b, 0) += -RowMax(input_list_t[t], b, &max_class_indices);
- if (max_class_indices != blank_index &&
- !(merge_repeated_ && max_class_indices == prev_indices)) {
- sequence.push_back(max_class_indices);
+ auto decode = [&](const int64 begin, const int64 end) {
+ for (int b = begin; b < end; ++b) {
+ sequences[b].resize(1);
+        auto& sequence = sequences[b][0];
+ int prev_indices = -1;
+ for (int t = 0; t < seq_len_t(b); ++t) {
+ int max_class_indices;
+ log_prob_t(b, 0) += -RowMax(input_list_t[t], b, &max_class_indices);
+ if (max_class_indices != blank_index &&
+ !(merge_repeated_ && max_class_indices == prev_indices)) {
+ sequence.push_back(max_class_indices);
+ }
+ prev_indices = max_class_indices;
}
- prev_indices = max_class_indices;
}
- }
+ };
+
+ const int64 kCostPerUnit = 50 * max_time * num_classes;
+ const int64 total = batch_size;
+ const DeviceBase::CpuWorkerThreads& worker_threads =
+ *ctx->device()->tensorflow_cpu_worker_threads();
+ Shard(worker_threads.num_threads, worker_threads.workers, total,
+ kCostPerUnit, decode);
OP_REQUIRES_OK(
ctx, decode_helper_.StoreAllDecodedSequences(
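`Shard` splits the half-open range `[0, total)` into contiguous blocks and runs
the closure on the worker thread pool, using `kCostPerUnit` to decide how
finely to split. A rough Python analogue of the pattern (illustrative only; the
real logic lives in util/work_sharder.cc):

```python
from concurrent.futures import ThreadPoolExecutor

def shard(num_threads, total, fn):
  """Run fn(begin, end) over [0, total) in contiguous per-thread ranges."""
  block = (total + num_threads - 1) // num_threads
  ranges = [(b, min(b + block, total)) for b in range(0, total, block)]
  with ThreadPoolExecutor(max_workers=num_threads) as pool:
    for _ in pool.map(lambda r: fn(*r), ranges):
      pass  # drain the iterator so worker exceptions propagate

results = [0] * 8
def decode(begin, end):  # stands in for the per-batch decode closure above
  for b in range(begin, end):
    results[b] = b * b

shard(num_threads=4, total=8, fn=decode)
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```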
diff --git a/tensorflow/core/kernels/mkl_input_conversion_op.cc b/tensorflow/core/kernels/mkl_input_conversion_op.cc
index ea763ce..cda1402 100644
--- a/tensorflow/core/kernels/mkl_input_conversion_op.cc
+++ b/tensorflow/core/kernels/mkl_input_conversion_op.cc
@@ -312,9 +312,8 @@
VLOG(1) << "MklInputConversionOp: Shape is same, but format is "
"different, "
<< "need to convert to same format";
-
- // Convert input0, and keep input1 unchanged
- // Create MklDnnShape for output mkl tensor based on input0
+      // TODO: For now, input0 is converted and input1 is left unchanged; we
+      // should choose the optimal MKL format to convert to.
Tensor* tensor_out;
MklDnnShape mkl_output_mkl_shape;
mkl_output_mkl_shape.SetMklTensor(true);
@@ -362,7 +361,8 @@
// with MKL tensors)
VLOG(1) << "MklInputConversionOp: Broadcast needed, "
<< "converted MKL inputs to TF format";
-
+    // TODO: Clean up op_data_type and has_avx512f_ once these two parameters
+    // are removed from ConvertMklToTf.
MklToTfOp<Device, T>::ConvertMklToTf(this, context, data_format_str,
op_data_type, has_avx512f_,
kInputIndex_0);
@@ -403,19 +403,7 @@
}
// Broadcast is needed if the shapes are not the same
- bool broadcast_needed;
-
- size_t in0_size = 1;
- for (size_t i = 0; i < mkl_shape->GetDimension(); ++i)
- in0_size *= mkl_shape->TfDimSize(i);
-
- size_t in1_size = 1;
- for (size_t i = 0; i < tf_tensor->shape().dims(); ++i)
- in1_size *= tf_tensor->shape().dim_size(i);
-
- broadcast_needed = (in0_size != in1_size);
-
- if (!broadcast_needed) {
+  if (mkl_shape->GetTfShape().num_elements() ==
+      tf_tensor->shape().num_elements()) {
// Both shapes are same, convert the TF input to MKL
VLOG(1) << "MklInputConversionOp: No broadcast needed.";
VLOG(1) << "MklInputConversionOp: Converting input " << tf_tensor_index
@@ -446,10 +434,19 @@
// Create reorder between tensorflow layout and Mkl layout if necessary
std::vector<primitive> net;
- tf_input.CheckReorderToOpMem(
+ bool reordered = tf_input.CheckReorderToOpMem(
memory::primitive_desc(output_mkl_md, cpu_engine),
tensor_out, &net);
- stream(stream::kind::eager).submit(net).wait();
+    if (!reordered) {
+      // This is the case where the TF tensor has the same shape and format as
+      // the MKL tensor. However, tf_tensor cannot simply be forwarded to the
+      // output tensor, since the MKL data tensor is always one-dimensional.
+      // Tensor::CopyFrom shares the other tensor's buffer while setting the
+      // output tensor's shape.
+      tensor_out->CopyFrom(*tf_tensor, tensor_out->shape());
+    } else {
+      stream(stream::kind::eager).submit(net).wait();
+    }
// -- The tensor in MKL format passes through --
ForwardMklTensorInToOut(context, mkl_tensor_index, mkl_tensor_index);
diff --git a/tensorflow/core/kernels/mkl_relu_op.cc b/tensorflow/core/kernels/mkl_relu_op.cc
index 0a0f695..1ed4383 100644
--- a/tensorflow/core/kernels/mkl_relu_op.cc
+++ b/tensorflow/core/kernels/mkl_relu_op.cc
@@ -441,7 +441,9 @@
// Allocate output and MklDnnShape tensors separately for possible
// in-place operation
OP_REQUIRES_OK(context, context->forward_input_or_allocate_output(
- {src_index}, dst_index, tf_shape_dst, &dst_tensor));
+ {static_cast<const int>(src_index)},
+ static_cast<const int>(dst_index),
+ tf_shape_dst, &dst_tensor));
AllocateOutputSetMklShape(context, dst_index, dnn_shape_dst);
// Destination memory descriptor is same as source memory descriptor.
@@ -611,7 +613,9 @@
// Allocate diff_src and MklDnnShape tensors separately for possible
// in-place operation
OP_REQUIRES_OK(context, context->forward_input_or_allocate_output(
- {diff_dst_index}, diff_src_index, tf_shape_diff_src,
+ {static_cast<const int>(diff_dst_index)},
+ static_cast<const int>(diff_src_index),
+ tf_shape_diff_src,
&diff_src_tensor));
AllocateOutputSetMklShape(context, diff_src_index, dnn_shape_diff_src);
diff --git a/tensorflow/core/kernels/roll_op.cc b/tensorflow/core/kernels/roll_op.cc
index bcbdbee..4b63080 100644
--- a/tensorflow/core/kernels/roll_op.cc
+++ b/tensorflow/core/kernels/roll_op.cc
@@ -254,8 +254,11 @@
// total modulo sum of shifts for each dimension
gtl::InlinedVector<int, 4> shift_mod_sum(num_dims, 0);
for (int i = 0; i < num_shifts; i++) {
- const int axis = axis_flat(i);
- OP_REQUIRES(context, axis < num_dims,
+ int axis = axis_flat(i);
+ if (axis < 0) {
+ axis += num_dims;
+ }
+ OP_REQUIRES(context, 0 <= axis && axis < num_dims,
errors::InvalidArgument("axis ", axis, " is out of range"));
const int ds = std::max<int>(static_cast<int>(input.dim_size(axis)), 1);
const int sum = shift_mod_sum[axis] + static_cast<int>(shift_flat(i));
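The fix gives `axis` NumPy-style semantics: a negative value counts back from
the last dimension. The normalization reduces to a couple of lines (sketch):

```python
def normalize_axis(axis, num_dims):
  """Map a possibly negative axis into [0, num_dims), as the kernel now does."""
  if axis < 0:
    axis += num_dims
  if not 0 <= axis < num_dims:
    raise ValueError("axis %d is out of range" % axis)
  return axis

assert normalize_axis(-1, 4) == 3  # -1 means the last dimension
assert normalize_axis(2, 4) == 2
```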
diff --git a/tensorflow/core/kernels/segment_reduction_ops.h b/tensorflow/core/kernels/segment_reduction_ops.h
index 183e5a1..bedd965 100644
--- a/tensorflow/core/kernels/segment_reduction_ops.h
+++ b/tensorflow/core/kernels/segment_reduction_ops.h
@@ -16,6 +16,14 @@
#ifndef TENSORFLOW_CORE_KERNELS_SEGMENT_REDUCTION_OPS_H_
#define TENSORFLOW_CORE_KERNELS_SEGMENT_REDUCTION_OPS_H_
+
+// Unfortunately we can't add the #include below directly, since it breaks
+// compilation for non-GPU targets. This only breaks with clang, because clang
+// is stricter with template code and CudaAtomicMax is used in a template
+// context.
+
// This file requires the following include because it uses CudaAtomicMax:
// #include "tensorflow/core/util/cuda_kernel_helper.h"
diff --git a/tensorflow/core/kernels/string_strip_op.cc b/tensorflow/core/kernels/string_strip_op.cc
new file mode 100644
index 0000000..ae700f4
--- /dev/null
+++ b/tensorflow/core/kernels/string_strip_op.cc
@@ -0,0 +1,53 @@
+/* Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+// See docs in ../ops/string_ops.cc.
+
+#include <string>
+
+#include "tensorflow/core/framework/kernel_def_builder.h"
+#include "tensorflow/core/framework/op_kernel.h"
+#include "tensorflow/core/framework/tensor.h"
+#include "tensorflow/core/lib/core/errors.h"
+#include "tensorflow/core/lib/core/status.h"
+#include "tensorflow/core/lib/strings/str_util.h"
+
+namespace tensorflow {
+
+class StringStripOp : public OpKernel {
+ public:
+ explicit StringStripOp(OpKernelConstruction* context) : OpKernel(context) {}
+
+ void Compute(OpKernelContext* ctx) override {
+ const Tensor* input_tensor;
+ OP_REQUIRES_OK(ctx, ctx->input("input", &input_tensor));
+ Tensor* output_tensor;
+ OP_REQUIRES_OK(
+ ctx, ctx->allocate_output(0, input_tensor->shape(), &output_tensor));
+
+ const auto input = input_tensor->flat<string>();
+ auto output = output_tensor->flat<string>();
+
+ for (int64 i = 0; i < input.size(); ++i) {
+ StringPiece entry(input(i));
+ str_util::RemoveWhitespaceContext(&entry);
+ output(i) = entry.ToString();
+ }
+ }
+};
+
+REGISTER_KERNEL_BUILDER(Name("StringStrip").Device(DEVICE_CPU), StringStripOp);
+
+} // namespace tensorflow
diff --git a/tensorflow/core/kernels/training_ops.cc b/tensorflow/core/kernels/training_ops.cc
index f53c567..5b13b10 100644
--- a/tensorflow/core/kernels/training_ops.cc
+++ b/tensorflow/core/kernels/training_ops.cc
@@ -330,6 +330,27 @@
template <typename T>
struct ApplyAdam<CPUDevice, T> : ApplyAdamNonCuda<CPUDevice, T> {};
+template <typename Device, typename T>
+struct ApplyAdaMaxNonCuda {
+ void operator()(const Device& d, typename TTypes<T>::Flat var,
+ typename TTypes<T>::Flat m, typename TTypes<T>::Flat v,
+ typename TTypes<T>::ConstScalar beta1_power,
+ typename TTypes<T>::ConstScalar lr,
+ typename TTypes<T>::ConstScalar beta1,
+ typename TTypes<T>::ConstScalar beta2,
+ typename TTypes<T>::ConstScalar epsilon,
+ typename TTypes<T>::ConstFlat grad) {
+    m.device(d) += (grad - m) * (T(1) - beta1());
+    // Here v is u_t in Section 7.1 of the Adam paper (Kingma & Ba, 2015).
+    v.device(d) = (beta2() * v).cwiseMax(grad.abs());
+    // var is θ_t in Section 7.1 of the same paper.
+    var.device(d) -= lr() / (T(1) - beta1_power()) * (m / (v + epsilon()));
+ }
+};
+
+template <typename T>
+struct ApplyAdaMax<CPUDevice, T> : ApplyAdaMaxNonCuda<CPUDevice, T> {};
+
template <typename T>
struct ApplyRMSProp<CPUDevice, T> {
void operator()(const CPUDevice& d, typename TTypes<T>::Flat var,
@@ -2752,6 +2773,135 @@
#undef REGISTER_KERNELS
template <typename Device, typename T>
+class ApplyAdaMaxOp : public OpKernel {
+ public:
+ explicit ApplyAdaMaxOp(OpKernelConstruction* ctx) : OpKernel(ctx) {
+ OP_REQUIRES_OK(ctx, ctx->GetAttr("use_locking", &use_exclusive_lock_));
+ }
+
+ void Compute(OpKernelContext* ctx) override {
+ auto locks = MaybeLockVariableInputMutexesInOrder(ctx, use_exclusive_lock_,
+ {0, 1, 2});
+
+ Tensor var;
+ OP_REQUIRES_OK(ctx, GetInputTensorFromVariable<Device, T>(
+ ctx, 0, use_exclusive_lock_, false, &var));
+ Tensor m;
+ OP_REQUIRES_OK(ctx, GetInputTensorFromVariable<Device, T>(
+ ctx, 1, use_exclusive_lock_, false, &m));
+ Tensor v;
+ OP_REQUIRES_OK(ctx, GetInputTensorFromVariable<Device, T>(
+ ctx, 2, use_exclusive_lock_, false, &v));
+ OP_REQUIRES(
+ ctx, var.IsInitialized(),
+ errors::FailedPrecondition(
+ "Attempting to use uninitialized variables: ", requested_input(0)));
+ OP_REQUIRES(
+ ctx, m.IsInitialized(),
+ errors::FailedPrecondition(
+ "Attempting to use uninitialized variables: ", requested_input(1)));
+ OP_REQUIRES(
+ ctx, v.IsInitialized(),
+ errors::FailedPrecondition(
+ "Attempting to use uninitialized variables: ", requested_input(2)));
+
+ const Tensor& beta1_power = ctx->input(3);
+ const Tensor& lr = ctx->input(4);
+ const Tensor& beta1 = ctx->input(5);
+ const Tensor& beta2 = ctx->input(6);
+ const Tensor& epsilon = ctx->input(7);
+
+ OP_REQUIRES(ctx, TensorShapeUtils::IsScalar(beta1_power.shape()),
+ errors::InvalidArgument("beta1_power is not a scalar: ",
+ beta1_power.shape().DebugString()));
+ OP_REQUIRES(ctx, TensorShapeUtils::IsScalar(lr.shape()),
+ errors::InvalidArgument("lr is not a scalar : ",
+ lr.shape().DebugString()));
+ OP_REQUIRES(ctx, TensorShapeUtils::IsScalar(beta1.shape()),
+ errors::InvalidArgument("beta1 is not a scalar: ",
+ beta1.shape().DebugString()));
+ OP_REQUIRES(ctx, TensorShapeUtils::IsScalar(beta2.shape()),
+ errors::InvalidArgument("beta2 is not a scalar: ",
+ beta2.shape().DebugString()));
+ OP_REQUIRES(ctx, TensorShapeUtils::IsScalar(epsilon.shape()),
+ errors::InvalidArgument("epsilon is not a scalar: ",
+ epsilon.shape().DebugString()));
+
+ const Tensor& grad = ctx->input(8);
+ OP_REQUIRES(ctx, var.shape().IsSameSize(m.shape()),
+ errors::InvalidArgument("var and m do not have the same shape",
+ var.shape().DebugString(), " ",
+ m.shape().DebugString()));
+ OP_REQUIRES(ctx, var.shape().IsSameSize(v.shape()),
+ errors::InvalidArgument("var and v do not have the same shape",
+ var.shape().DebugString(), " ",
+ v.shape().DebugString()));
+ OP_REQUIRES(
+ ctx, var.shape().IsSameSize(grad.shape()),
+ errors::InvalidArgument("var and grad do not have the same shape",
+ var.shape().DebugString(), " ",
+ grad.shape().DebugString()));
+
+ const Device& device = ctx->template eigen_device<Device>();
+ functor::ApplyAdaMax<Device, T>()(
+ device, var.flat<T>(), m.flat<T>(), v.flat<T>(),
+ beta1_power.scalar<T>(), lr.scalar<T>(),
+ beta1.scalar<T>(), beta2.scalar<T>(), epsilon.scalar<T>(),
+ grad.flat<T>());
+
+ MaybeForwardRefInputToRefOutput(ctx, 0, 0);
+ }
+
+ private:
+ bool use_exclusive_lock_;
+};
+
+#define REGISTER_KERNELS(D, T) \
+ REGISTER_KERNEL_BUILDER( \
+ Name("ApplyAdaMax").Device(DEVICE_##D).TypeConstraint<T>("T"), \
+ ApplyAdaMaxOp<D##Device, T>); \
+ REGISTER_KERNEL_BUILDER(Name("ResourceApplyAdaMax") \
+ .HostMemory("var") \
+ .HostMemory("m") \
+ .HostMemory("v") \
+ .Device(DEVICE_##D) \
+ .TypeConstraint<T>("T"), \
+ ApplyAdaMaxOp<D##Device, T>);
+#define REGISTER_CPU_KERNELS(T) REGISTER_KERNELS(CPU, T);
+
+TF_CALL_half(REGISTER_CPU_KERNELS);
+TF_CALL_float(REGISTER_CPU_KERNELS);
+TF_CALL_double(REGISTER_CPU_KERNELS);
+
+#if GOOGLE_CUDA
+// Forward declarations of the functor specializations for GPU.
+namespace functor {
+#define DECLARE_GPU_SPEC(T) \
+ template <> \
+ void ApplyAdaMax<GPUDevice, T>::operator()( \
+ const GPUDevice& d, typename TTypes<T>::Flat var, \
+ typename TTypes<T>::Flat m, typename TTypes<T>::Flat v, \
+ typename TTypes<T>::ConstScalar beta1_power, \
+ typename TTypes<T>::ConstScalar lr, \
+ typename TTypes<T>::ConstScalar beta1, \
+ typename TTypes<T>::ConstScalar beta2, \
+ typename TTypes<T>::ConstScalar epsilon, \
+ typename TTypes<T>::ConstFlat grad); \
+ extern template struct ApplyAdaMax<GPUDevice, T>;
+DECLARE_GPU_SPEC(Eigen::half);
+DECLARE_GPU_SPEC(float);
+DECLARE_GPU_SPEC(double);
+#undef DECLARE_GPU_SPEC
+} // namespace functor
+
+REGISTER_KERNELS(GPU, Eigen::half);
+REGISTER_KERNELS(GPU, float);
+REGISTER_KERNELS(GPU, double);
+#endif
+#undef REGISTER_CPU_KERNELS
+#undef REGISTER_KERNELS
+
+template <typename Device, typename T>
class ApplyRMSPropOp : public OpKernel {
public:
explicit ApplyRMSPropOp(OpKernelConstruction* ctx) : OpKernel(ctx) {
diff --git a/tensorflow/core/kernels/training_ops.h b/tensorflow/core/kernels/training_ops.h
index 7ee9560..f536a61 100644
--- a/tensorflow/core/kernels/training_ops.h
+++ b/tensorflow/core/kernels/training_ops.h
@@ -140,6 +140,18 @@
};
template <typename Device, typename T>
+struct ApplyAdaMax {
+ void operator()(const Device& d, typename TTypes<T>::Flat var,
+ typename TTypes<T>::Flat m, typename TTypes<T>::Flat v,
+ typename TTypes<T>::ConstScalar beta1_power,
+ typename TTypes<T>::ConstScalar lr,
+ typename TTypes<T>::ConstScalar beta1,
+ typename TTypes<T>::ConstScalar beta2,
+ typename TTypes<T>::ConstScalar epsilon,
+ typename TTypes<T>::ConstFlat grad);
+};
+
+template <typename Device, typename T>
struct ApplyRMSProp {
void operator()(const Device& d, typename TTypes<T>::Flat var,
typename TTypes<T>::Flat ms, typename TTypes<T>::Flat mom,
diff --git a/tensorflow/core/kernels/training_ops_gpu.cu.cc b/tensorflow/core/kernels/training_ops_gpu.cu.cc
index 0376a3b..2aa17f2 100644
--- a/tensorflow/core/kernels/training_ops_gpu.cu.cc
+++ b/tensorflow/core/kernels/training_ops_gpu.cu.cc
@@ -143,6 +143,32 @@
};
template <typename T>
+struct ApplyAdaMax<GPUDevice, T> {
+ void operator()(const GPUDevice& d, typename TTypes<T>::Flat var,
+ typename TTypes<T>::Flat m, typename TTypes<T>::Flat v,
+ typename TTypes<T>::ConstScalar beta1_power,
+ typename TTypes<T>::ConstScalar lr,
+ typename TTypes<T>::ConstScalar beta1,
+ typename TTypes<T>::ConstScalar beta2,
+ typename TTypes<T>::ConstScalar epsilon,
+ typename TTypes<T>::ConstFlat grad) {
+ Eigen::array<typename TTypes<T>::Tensor::Index, 1> bcast;
+ bcast[0] = grad.dimension(0);
+ Eigen::Sizes<1> single;
+ const auto one = static_cast<T>(1.0);
+ m.device(d) =
+ m + (beta1.constant(one) - beta1).reshape(single).broadcast(bcast) *
+ (grad - m);
+ v.device(d) =
+ (beta2.reshape(single).broadcast(bcast) * v).cwiseMax(grad.abs());
+ var.device(d) -=
+ lr / (beta1_power.constant(one) -
+ beta1_power).reshape(single).broadcast(bcast) *
+ (m / (v + epsilon));
+ }
+};
+
+template <typename T>
struct ApplyRMSProp<GPUDevice, T> {
void operator()(const GPUDevice& d, typename TTypes<T>::Flat var,
typename TTypes<T>::Flat ms, typename TTypes<T>::Flat mom,
@@ -278,6 +304,10 @@
template struct functor::ApplyAdam<GPUDevice, float>;
template struct functor::ApplyAdam<GPUDevice, double>;
+template struct functor::ApplyAdaMax<GPUDevice, Eigen::half>;
+template struct functor::ApplyAdaMax<GPUDevice, float>;
+template struct functor::ApplyAdaMax<GPUDevice, double>;
+
template struct functor::ApplyRMSProp<GPUDevice, Eigen::half>;
template struct functor::ApplyRMSProp<GPUDevice, float>;
template struct functor::ApplyRMSProp<GPUDevice, double>;
diff --git a/tensorflow/core/lib/bfloat16/bfloat16.h b/tensorflow/core/lib/bfloat16/bfloat16.h
index e7c2438..2c0576f 100644
--- a/tensorflow/core/lib/bfloat16/bfloat16.h
+++ b/tensorflow/core/lib/bfloat16/bfloat16.h
@@ -88,15 +88,13 @@
: bfloat16(static_cast<float>(val)) {}
B16_DEVICE_FUNC explicit operator float() const {
- float result;
+ float result = 0;
uint16_t* q = reinterpret_cast<uint16_t*>(&result);
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
q[0] = value;
- q[1] = 0;
#else
- q[0] = 0;
q[1] = value;
#endif
return result;
diff --git a/tensorflow/core/lib/gtl/manual_constructor.h b/tensorflow/core/lib/gtl/manual_constructor.h
index 0a76e09..0176cdc 100644
--- a/tensorflow/core/lib/gtl/manual_constructor.h
+++ b/tensorflow/core/lib/gtl/manual_constructor.h
@@ -53,7 +53,7 @@
struct AlignType<0, size> {
typedef char result[size];
};
-#if defined(COMPILER_MSVC)
+#if defined(_MSC_VER)
#define TF_LIB_GTL_ALIGN_ATTRIBUTE(X) __declspec(align(X))
#define TF_LIB_GTL_ALIGN_OF(T) __alignof(T)
#elif defined(COMPILER_GCC3) || __GNUC__ >= 3 || defined(__APPLE__) || \
diff --git a/tensorflow/core/lib/strings/stringprintf.cc b/tensorflow/core/lib/strings/stringprintf.cc
index 03eba4c..bbffa06 100644
--- a/tensorflow/core/lib/strings/stringprintf.cc
+++ b/tensorflow/core/lib/strings/stringprintf.cc
@@ -22,12 +22,6 @@
namespace tensorflow {
namespace strings {
-#ifdef COMPILER_MSVC
-enum { IS_COMPILER_MSVC = 1 };
-#else
-enum { IS_COMPILER_MSVC = 0 };
-#endif
-
void Appendv(string* dst, const char* format, va_list ap) {
// First try with a small fixed size buffer
static const int kSpaceLength = 1024;
@@ -48,13 +42,13 @@
return;
}
- if (IS_COMPILER_MSVC) {
+#ifdef _MSC_VER
// Error or MSVC running out of space. MSVC 8.0 and higher
// can be asked about space needed with the special idiom below:
va_copy(backup_ap, ap);
result = vsnprintf(nullptr, 0, format, backup_ap);
va_end(backup_ap);
- }
+#endif
if (result < 0) {
// Just an error.
diff --git a/tensorflow/core/lib/strings/stringprintf_test.cc b/tensorflow/core/lib/strings/stringprintf_test.cc
index d61a1a9..02cf4cb 100644
--- a/tensorflow/core/lib/strings/stringprintf_test.cc
+++ b/tensorflow/core/lib/strings/stringprintf_test.cc
@@ -30,9 +30,9 @@
TEST(PrintfTest, Misc) {
// MSVC does not support $ format specifier.
-#if !defined(COMPILER_MSVC)
+#if !defined(_MSC_VER)
EXPECT_EQ("123hello w", Printf("%3$d%2$s %1$c", 'w', "hello", 123));
-#endif // !COMPILER_MSVC
+#endif // !_MSC_VER
}
TEST(AppendfTest, Empty) {
diff --git a/tensorflow/core/ops/array_ops.cc b/tensorflow/core/ops/array_ops.cc
index 2a8b9f9..88fc038 100644
--- a/tensorflow/core/ops/array_ops.cc
+++ b/tensorflow/core/ops/array_ops.cc
@@ -429,6 +429,58 @@
.Attr("Tidx: {int32, int64} = DT_INT32")
.SetShapeFn([](InferenceContext* c) { return Status::OK(); });
+REGISTER_OP("BroadcastTo")
+ .Input("input: T")
+ .Input("shape: Tidx")
+ .Output("output: T")
+ .Attr("T: type")
+ .Attr("Tidx: {int32, int64} = DT_INT32")
+ .SetShapeFn([](InferenceContext* c) {
+ ShapeHandle in = c->input(0);
+ ShapeHandle out;
+ TF_RETURN_IF_ERROR(c->MakeShapeFromShapeTensor(1, &out));
+
+ if (!c->RankKnown(out)) {
+ // We have no information about the shape of the output.
+ c->set_output(0, out);
+ return Status::OK();
+ }
+
+ if (!c->RankKnown(in)) {
+ // We have no information about the shape of the input,
+ // nothing to do here.
+ c->set_output(0, out);
+ return Status::OK();
+ }
+ if (c->Rank(out) < c->Rank(in)) {
+ return errors::InvalidArgument("Cannot broadcast a tensor with shape ",
+ c->DebugString(in), " shape ",
+ c->DebugString(out));
+ }
+
+ int32 in_offset = c->Rank(out) - c->Rank(in);
+ for (int32 i = 0; i < c->Rank(out); ++i) {
+ DimensionHandle dim = c->Dim(out, i);
+ if (c->ValueKnown(dim)) {
+ // The first in_offset dimensions for input will be expanded with 1,
+ // so no check needed.
+ if (i >= in_offset) {
+ DimensionHandle in_dim = c->Dim(in, i - in_offset);
+ if (c->ValueKnown(in_dim)) {
+ if (c->Value(dim) % c->Value(in_dim) != 0) {
+ return errors::InvalidArgument(
+ "Cannot broadcast a tensor with shape ", c->DebugString(in),
+ " shape ", c->DebugString(out));
+ }
+ }
+ }
+ }
+ }
+
+ c->set_output(0, out);
+ return Status::OK();
+ });
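A sketch of the behavior this shape function validates, assuming the op is eventually exposed in Python as `tf.broadcast_to` (only the C++ registration appears in this diff, so the wrapper name is an assumption):

```python
import tensorflow as tf

# Broadcasting a [1, 3] tensor to shape [2, 3]: every known output
# dimension must be compatible with the corresponding input dimension,
# which is what the shape function above checks statically.
x = tf.constant([[1, 2, 3]])
y = tf.broadcast_to(x, [2, 3])

with tf.Session() as sess:
    print(sess.run(y))  # [[1 2 3], [1 2 3]]
```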
+
// --------------------------------------------------------------------------
// TODO(josh11b): Remove the >= 2 constraint, once we can rewrite the graph
// in the N == 1 case to remove the node.
diff --git a/tensorflow/core/ops/dataset_ops.cc b/tensorflow/core/ops/dataset_ops.cc
index 67c6c58..4ba3f15 100644
--- a/tensorflow/core/ops/dataset_ops.cc
+++ b/tensorflow/core/ops/dataset_ops.cc
@@ -148,7 +148,11 @@
.Output("handle: variant")
.Attr("output_types: list(type) >= 1")
.Attr("output_shapes: list(shape) >= 1")
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+      shape_inference::ShapeHandle tag_shape;
+      // `tag` should be a scalar.
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &tag_shape));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("LatencyStatsDataset")
.Input("input_dataset: variant")
@@ -156,7 +160,11 @@
.Output("handle: variant")
.Attr("output_types: list(type) >= 1")
.Attr("output_shapes: list(shape) >= 1")
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+      shape_inference::ShapeHandle tag_shape;
+      // `tag` should be a scalar.
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &tag_shape));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("SetStatsAggregatorDataset")
.Input("input_dataset: variant")
@@ -206,7 +214,12 @@
.Output("handle: variant")
.Attr("output_types: list(type) >= 1")
.Attr("output_shapes: list(shape) >= 1")
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // buffer_size should be a scalar.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("ScanDataset")
.Input("input_dataset: variant")
@@ -290,7 +303,12 @@
.Output("handle: variant")
.Attr("output_types: list(type) >= 1")
.Attr("output_shapes: list(shape) >= 1")
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // batch_size should be a scalar.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
// TODO(mrry): move SlideDataset to contrib in the future.
REGISTER_OP("SlideDataset")
@@ -300,7 +318,13 @@
.Output("handle: variant")
.Attr("output_types: list(type) >= 1")
.Attr("output_shapes: list(shape) >= 1")
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // window_size and stride should be scalars.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("PaddedBatchDataset")
.Input("input_dataset: variant")
@@ -330,7 +354,14 @@
.Output("handle: variant")
.Attr("output_types: list(type) >= 1")
.Attr("output_shapes: list(shape) >= 1")
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // batch_size should be a scalar.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ // row_shape should be a 1-D vector.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 1, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("RangeDataset")
.Input("start: int64")
@@ -341,7 +372,14 @@
.Attr("output_shapes: list(shape) >= 1")
.SetIsStateful() // TODO(b/65524810): Source dataset ops must be marked
// stateful to inhibit constant folding.
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // start, stop, and step should be scalars.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("RandomDataset")
.Input("seed: int64")
@@ -351,7 +389,13 @@
.Attr("output_shapes: list(shape) >= 1")
.SetIsStateful() // TODO(b/65524810): Source dataset ops must be marked
// stateful to inhibit constant folding.
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+      // seed and seed2 should be scalars.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("ShuffleDataset")
.Input("input_dataset: variant")
@@ -362,7 +406,14 @@
.Attr("reshuffle_each_iteration: bool = true")
.Attr("output_types: list(type) >= 1")
.Attr("output_shapes: list(shape) >= 1")
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // buffer_size, seed, and seed2 should be scalars.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(3), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("ShuffleAndRepeatDataset")
.Input("input_dataset: variant")
@@ -373,7 +424,15 @@
.Output("handle: variant")
.Attr("output_types: list(type) >= 1")
.Attr("output_shapes: list(shape) >= 1")
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // buffer_size, seed, seed2, and count should be scalars.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(3), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(4), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("CacheDataset")
.Input("input_dataset: variant")
@@ -381,7 +440,12 @@
.Output("handle: variant")
.Attr("output_types: list(type) >= 1")
.Attr("output_shapes: list(shape) >= 1")
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // filename should be a scalar.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("TextLineDataset")
.Input("filenames: string")
@@ -390,10 +454,16 @@
.Output("handle: variant")
.SetIsStateful() // TODO(b/65524810): Source dataset ops must be marked
// stateful to inhibit constant folding.
- .SetShapeFn(shape_inference::ScalarShape); // TODO(mrry): validate
- // that `filenames` is
- // a scalar or a
- // vector.
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // `filenames` must be a scalar or a vector.
+      TF_RETURN_IF_ERROR(c->WithRankAtMost(c->input(0), 1, &unused));
+      // `compression_type` should be a scalar.
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+      // `buffer_size` should be a scalar.
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &unused));
+      return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("SqlDataset")
.Input("driver_name: string")
@@ -404,7 +474,14 @@
.Attr("output_shapes: list(shape) >= 1")
.SetIsStateful() // TODO(b/65524810): Source dataset ops must be marked
// stateful to inhibit constant folding.
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // driver_name, data_source_name, and query should be scalars.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("FixedLengthRecordDataset")
.Input("filenames: string")
@@ -415,7 +492,18 @@
.Output("handle: variant")
.SetIsStateful() // TODO(b/65524810): Source dataset ops must be marked
// stateful to inhibit constant folding.
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // `filenames` must be a scalar or a vector.
+ TF_RETURN_IF_ERROR(c->WithRankAtMost(c->input(0), 1, &unused));
+ // header_bytes, record_bytes, footer_bytes, buffer_size should be
+ // scalars.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(3), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(4), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("TFRecordDataset")
.Input("filenames: string")
@@ -424,7 +512,16 @@
.Output("handle: variant")
.SetIsStateful() // TODO(b/65524810): Source dataset ops must be marked
// stateful to inhibit constant folding.
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // `filenames` must be a scalar or a vector.
+ TF_RETURN_IF_ERROR(c->WithRankAtMost(c->input(0), 1, &unused));
+      // `compression_type` should be a scalar.
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+      // `buffer_size` should be a scalar.
+      TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("Iterator")
.Output("handle: resource")
@@ -540,7 +637,12 @@
// length of `output_types` is `N`, the `output_shapes` are
// (as far as possible to tell statically) compatible with `padded_shapes`,
// and that `padding_values` are all scalars.
- .SetShapeFn(shape_inference::ScalarShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+ // batch_size should be a scalar.
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ return shape_inference::ScalarShape(c);
+ });
REGISTER_OP("EnqueueInQueueDataset")
.Input("queue: variant")
diff --git a/tensorflow/core/ops/manip_ops.cc b/tensorflow/core/ops/manip_ops.cc
index 95b4774..e180f3d 100644
--- a/tensorflow/core/ops/manip_ops.cc
+++ b/tensorflow/core/ops/manip_ops.cc
@@ -28,6 +28,17 @@
.Attr("T: type")
.Attr("Tshift: {int32,int64}")
.Attr("Taxis: {int32,int64}")
- .SetShapeFn(shape_inference::UnchangedShape);
+ .SetShapeFn([](shape_inference::InferenceContext* c) {
+ shape_inference::ShapeHandle unused;
+      // The `input` must be 1-D or higher.
+ TF_RETURN_IF_ERROR(c->WithRankAtLeast(c->input(0), 1, &unused));
+ // The `shift` must be scalar or 1-D.
+ TF_RETURN_IF_ERROR(c->WithRankAtMost(c->input(1), 1, &unused));
+ // The `axis` must be scalar or 1-D.
+ TF_RETURN_IF_ERROR(c->WithRankAtMost(c->input(2), 1, &unused));
+      // Validate that `shift` has the same shape as `axis`.
+ TF_RETURN_IF_ERROR(c->Merge(c->input(1), c->input(2), &unused));
+ return shape_inference::UnchangedShape(c);
+ });
} // namespace tensorflow
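The new validation rejects mismatched `shift`/`axis` shapes at graph-construction time. A minimal sketch of conforming usage, assuming the op is exposed as `tf.manip.roll` as in contemporaneous releases:

```python
import tensorflow as tf

# `shift` and `axis` must have the same shape (both scalars here),
# which the Merge() call in the shape function enforces.
x = tf.constant([1, 2, 3, 4, 5])
rolled = tf.manip.roll(x, shift=2, axis=0)

with tf.Session() as sess:
    print(sess.run(rolled))  # [4 5 1 2 3]
```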
diff --git a/tensorflow/core/ops/nn_ops.cc b/tensorflow/core/ops/nn_ops.cc
index 6dc3d9d..bb46daf 100644
--- a/tensorflow/core/ops/nn_ops.cc
+++ b/tensorflow/core/ops/nn_ops.cc
@@ -1535,6 +1535,7 @@
.Attr(GetPaddingAttrString())
.Attr(GetConvnetDataFormatAttrString())
.Attr("dilations: list(int) = [1, 1, 1, 1]")
+ .SetShapeFn(shape_inference::Conv2DShape)
.Doc(R"doc(
Dummy node that enables fusing Conv2D and BiasAdd operator for MKL. This node
does not perform anything. It is just created as an intermediate output of
@@ -1561,6 +1562,7 @@
.Attr(GetPaddingAttrString())
.Attr(GetConvnetDataFormatAttrString())
.Attr("dilations: list(int) = [1, 1, 1, 1]")
+ .SetShapeFn(shape_inference::Conv2DShape)
.Doc(R"doc(
MKL version of Conv2D and BiasAdd operator. Uses MKL DNN APIs to perform
2D convolution and add Bias to the output of convolution.
@@ -1683,6 +1685,7 @@
expected to invoke these operators.
)doc");
+#ifdef INTEL_MKL_ML
REGISTER_OP("_MklConv2DWithBiasBackpropBias")
.Input("out_backprop: T")
.Input("mkl_out_backprop: uint8")
@@ -1699,6 +1702,7 @@
NOTE Do not invoke this operator directly in Python. Graph rewrite pass is
expected to invoke these operators.
)doc");
+#endif
REGISTER_OP("_MklConv2DBackpropInput")
.Input("input_sizes: int32")
@@ -2156,6 +2160,7 @@
.Output("output: T")
.Attr("T: {half, float, double}")
.Attr(GetConvnetDataFormatAttrString())
+ .SetShapeFn(shape_inference::UnknownShape)
.Doc(R"doc(
MKL operator to convert a tensor from MKL layout to TensorFlow layout.
@@ -2177,6 +2182,7 @@
"T: {half, float, double, uint8, int8, uint16, int16, int32, int64, "
"complex64, complex128}")
.Attr(GetConvnetDataFormatAttrString())
+ .SetShapeFn(shape_inference::UnknownShape)
.Doc(R"doc(
MKL operator to process the inputs to an elementwise MKL op. Both inputs
need to be either in TF or in MKL format. This op is added before every
diff --git a/tensorflow/core/ops/random_ops.cc b/tensorflow/core/ops/random_ops.cc
index f6c668f..416ce9c 100644
--- a/tensorflow/core/ops/random_ops.cc
+++ b/tensorflow/core/ops/random_ops.cc
@@ -43,7 +43,12 @@
.Attr("seed2: int = 0")
.Attr("Tout: {int32, int64}")
.Attr("T: {int32, int64}")
- .SetShapeFn(shape_inference::RandomShape);
+ .SetShapeFn([](InferenceContext* c) {
+ ShapeHandle unused;
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 0, &unused));
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(2), 0, &unused));
+ return shape_inference::RandomShape(c);
+ });
REGISTER_OP("RandomStandardNormal")
.Input("shape: T")
diff --git a/tensorflow/core/ops/string_ops.cc b/tensorflow/core/ops/string_ops.cc
index 05f216a..469f193 100644
--- a/tensorflow/core/ops/string_ops.cc
+++ b/tensorflow/core/ops/string_ops.cc
@@ -123,6 +123,11 @@
return Status::OK();
});
+REGISTER_OP("StringStrip")
+ .Input("input: string")
+ .Output("output: string")
+ .SetShapeFn(shape_inference::UnchangedShape);
+
REGISTER_OP("EncodeBase64")
.Input("input: string")
.Output("output: string")
diff --git a/tensorflow/core/ops/training_ops.cc b/tensorflow/core/ops/training_ops.cc
index 6ce9595..dc7b588 100644
--- a/tensorflow/core/ops/training_ops.cc
+++ b/tensorflow/core/ops/training_ops.cc
@@ -737,6 +737,57 @@
return ApplyAdamShapeFn(c, false /* sparse */);
});
+static Status ApplyAdaMaxShapeFn(InferenceContext* c, bool sparse) {
+ ShapeHandle unused;
+ ShapeHandle s = ShapeOrHandleShape(c, 0); // var
+ TF_RETURN_IF_ERROR(c->Merge(s, ShapeOrHandleShape(c, 1), &s)); // m
+ TF_RETURN_IF_ERROR(c->Merge(s, ShapeOrHandleShape(c, 2), &s)); // v
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(3), 0, &unused)); // beta1_power
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(4), 0, &unused)); // lr
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(5), 0, &unused)); // beta1
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(6), 0, &unused)); // beta2
+ TF_RETURN_IF_ERROR(c->WithRank(c->input(7), 0, &unused)); // epsilon
+ TF_RETURN_IF_ERROR(
+ HandleGradAndIndicesInputs(c, sparse, 8 /* grad_idx */, &s));
+ if (c->num_outputs() > 0) {
+ c->set_output(0, s);
+ }
+ return Status::OK();
+}
+
+REGISTER_OP("ApplyAdaMax")
+ .Input("var: Ref(T)")
+ .Input("m: Ref(T)")
+ .Input("v: Ref(T)")
+ .Input("beta1_power: T")
+ .Input("lr: T")
+ .Input("beta1: T")
+ .Input("beta2: T")
+ .Input("epsilon: T")
+ .Input("grad: T")
+ .Output("out: Ref(T)")
+ .Attr("T: numbertype")
+ .Attr("use_locking: bool = false")
+ .SetShapeFn([](InferenceContext* c) {
+ return ApplyAdaMaxShapeFn(c, false /* sparse */);
+ });
+
+REGISTER_OP("ResourceApplyAdaMax")
+ .Input("var: resource")
+ .Input("m: resource")
+ .Input("v: resource")
+ .Input("beta1_power: T")
+ .Input("lr: T")
+ .Input("beta1: T")
+ .Input("beta2: T")
+ .Input("epsilon: T")
+ .Input("grad: T")
+ .Attr("T: numbertype")
+ .Attr("use_locking: bool = false")
+ .SetShapeFn([](InferenceContext* c) {
+ return ApplyAdaMaxShapeFn(c, false /* sparse */);
+ });
+
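These registrations only add the C++ ops; a rough sketch of how a Python optimizer wrapper would drive them follows. The class name `tf.contrib.opt.AdaMaxOptimizer` is an assumption and may not exist in this release:

```python
import tensorflow as tf

# Hypothetical usage: an optimizer wrapper around ResourceApplyAdaMax.
var = tf.Variable([1.0, 2.0])
loss = tf.reduce_sum(tf.square(var))
opt = tf.contrib.opt.AdaMaxOptimizer(learning_rate=0.001)
train_op = opt.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)  # applies one AdaMax step to `var`
```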
static Status ApplyRMSPropShapeFn(InferenceContext* c, bool sparse) {
ShapeHandle unused;
ShapeHandle s = ShapeOrHandleShape(c, 0); // var
diff --git a/tensorflow/core/platform/default/logging.cc b/tensorflow/core/platform/default/logging.cc
index 2b874da..c6e5777 100644
--- a/tensorflow/core/platform/default/logging.cc
+++ b/tensorflow/core/platform/default/logging.cc
@@ -21,6 +21,7 @@
#include <android/log.h>
#include <iostream>
#include <sstream>
+#include <cstring>
#endif
#include <stdlib.h>
diff --git a/tensorflow/core/platform/hadoop/hadoop_file_system.cc b/tensorflow/core/platform/hadoop/hadoop_file_system.cc
index 9a71fbe..a8cb405 100644
--- a/tensorflow/core/platform/hadoop/hadoop_file_system.cc
+++ b/tensorflow/core/platform/hadoop/hadoop_file_system.cc
@@ -109,6 +109,8 @@
// in the libhdfs documentation.
#if defined(PLATFORM_WINDOWS)
const char* kLibHdfsDso = "hdfs.dll";
+#elif defined(MACOS) || defined(TARGET_OS_MAC)
+ const char* kLibHdfsDso = "libhdfs.dylib";
#else
const char* kLibHdfsDso = "libhdfs.so";
#endif
diff --git a/tensorflow/core/protobuf/rewriter_config.proto b/tensorflow/core/protobuf/rewriter_config.proto
index 9b6202e..029b27c 100644
--- a/tensorflow/core/protobuf/rewriter_config.proto
+++ b/tensorflow/core/protobuf/rewriter_config.proto
@@ -6,6 +6,8 @@
option java_multiple_files = true;
option java_package = "org.tensorflow.framework";
+import "tensorflow/core/framework/attr_value.proto";
+
message AutoParallelOptions {
bool enable = 1;
int32 num_replicas = 2;
@@ -119,4 +121,13 @@
// Custom registered optimizers will be run after the base optimizers, in
// the order that they are specified.
repeated string optimizers = 100;
+
+  // Message to describe a custom graph optimizer and its parameters.
+ message CustomGraphOptimizer {
+ string name = 1;
+ map<string, AttrValue> parameter_map = 2;
+ }
+
+  // List of custom graph optimizers to apply.
+ repeated CustomGraphOptimizer custom_optimizers = 200;
}
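A minimal sketch of populating the new message from Python through the generated protobuf bindings; the field names come from the proto above, while the optimizer name is hypothetical and must match a registered Grappler optimizer:

```python
from tensorflow.core.protobuf import rewriter_config_pb2

config = rewriter_config_pb2.RewriterConfig()

# Request a custom graph optimizer by name and pass it one parameter.
custom = config.custom_optimizers.add()
custom.name = "MyCustomOptimizer"  # hypothetical registered optimizer
custom.parameter_map["iterations"].i = 10  # AttrValue holding an int

print(config)
```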
diff --git a/tensorflow/core/public/version.h b/tensorflow/core/public/version.h
index 706968d..0ca7d84 100644
--- a/tensorflow/core/public/version.h
+++ b/tensorflow/core/public/version.h
@@ -19,12 +19,12 @@
// TensorFlow uses semantic versioning, see http://semver.org/.
#define TF_MAJOR_VERSION 1
-#define TF_MINOR_VERSION 7
+#define TF_MINOR_VERSION 8
#define TF_PATCH_VERSION 0
// TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1",
// "-beta", "-rc", "-rc.1")
-#define TF_VERSION_SUFFIX ""
+#define TF_VERSION_SUFFIX "-rc0"
#define TF_STR_HELPER(x) #x
#define TF_STR(x) TF_STR_HELPER(x)
diff --git a/tensorflow/core/util/memmapped_file_system.cc b/tensorflow/core/util/memmapped_file_system.cc
index 1fa6b8b..d3439cb 100644
--- a/tensorflow/core/util/memmapped_file_system.cc
+++ b/tensorflow/core/util/memmapped_file_system.cc
@@ -185,7 +185,7 @@
return reinterpret_cast<const uint8*>(mapped_memory_->data()) + offset;
}
-#if defined(COMPILER_MSVC)
+#if defined(_MSC_VER)
constexpr char* MemmappedFileSystem::kMemmappedPackagePrefix;
constexpr char* MemmappedFileSystem::kMemmappedPackageDefaultGraphDef;
#else
diff --git a/tensorflow/core/util/memmapped_file_system.h b/tensorflow/core/util/memmapped_file_system.h
index 76cc491..958e23d 100644
--- a/tensorflow/core/util/memmapped_file_system.h
+++ b/tensorflow/core/util/memmapped_file_system.h
@@ -53,7 +53,7 @@
public:
// Memmapped regions use this prefix to distinguish from
// the filesystem.
-#if defined(COMPILER_MSVC)
+#if defined(_MSC_VER)
static constexpr char* kMemmappedPackagePrefix =
#else
static constexpr char kMemmappedPackagePrefix[] =
@@ -61,7 +61,7 @@
"memmapped_package://";
// The default graphdef in the package.
-#if defined(COMPILER_MSVC)
+#if defined(_MSC_VER)
static constexpr char* kMemmappedPackageDefaultGraphDef =
#else
static constexpr char kMemmappedPackageDefaultGraphDef[] =
diff --git a/tensorflow/core/util/mkl_util.h b/tensorflow/core/util/mkl_util.h
index 9f58e40..bc6d2d7 100644
--- a/tensorflow/core/util/mkl_util.h
+++ b/tensorflow/core/util/mkl_util.h
@@ -45,6 +45,10 @@
using mkldnn::reorder;
#endif
+#ifdef _WIN32
+typedef unsigned int uint;
+#endif
+
// The file contains a number of utility classes and functions used by MKL
// enabled kernels
diff --git a/tensorflow/docs_src/api_guides/python/contrib.bayesflow.monte_carlo.md b/tensorflow/docs_src/api_guides/python/contrib.bayesflow.monte_carlo.md
index f3db585..74fe4a3 100644
--- a/tensorflow/docs_src/api_guides/python/contrib.bayesflow.monte_carlo.md
+++ b/tensorflow/docs_src/api_guides/python/contrib.bayesflow.monte_carlo.md
@@ -6,43 +6,39 @@
## Background
Monte Carlo integration refers to the practice of estimating an expectation with
-a sample mean. For example, given random variable `Z in \\(R^k\\)` with density `p`,
+a sample mean. For example, given a random variable Z in \\(R^k\\) with density `p`,
the expectation of function `f` can be approximated like:
-```
$$E_p[f(Z)] = \int f(z) p(z) dz$$
$$\approx S_n := n^{-1} \sum_{i=1}^n f(z_i), \quad z_i\ \text{iid samples from}\ p.$$
-```
-If `\\(E_p[|f(Z)|] < infinity\\)`, then `\\(S_n\\) --> \\(E_p[f(Z)]\\)` by the strong law of large
-numbers. If `\\(E_p[f(Z)^2] < infinity\\)`, then `\\(S_n\\)` is asymptotically normal with
-variance `\\(Var[f(Z)] / n\\)`.
+If \\(E_p[|f(Z)|] < \infty\\), then \\(S_n \to E_p[f(Z)]\\) by the strong law of large
+numbers. If \\(E_p[f(Z)^2] < \infty\\), then \\(S_n\\) is asymptotically normal with
+variance \\(Var[f(Z)] / n\\).
Practitioners of Bayesian statistics often find themselves wanting to estimate
-`\\(E_p[f(Z)]\\)` when the distribution `p` is known only up to a constant. For
+\\(E_p[f(Z)]\\) when the distribution `p` is known only up to a constant. For
example, the joint distribution `p(z, x)` may be known, but the evidence
-`\\(p(x) = \int p(z, x) dz\\)` may be intractable. In that case, a parameterized
-distribution family `\\(q_\lambda(z)\\)` may be chosen, and the optimal `\\(\lambda\\)` is the
-one minimizing the KL divergence between `\\(q_\lambda(z)\\)` and
-`\\(p(z | x)\\)`. We only know `p(z, x)`, but that is sufficient to find `\\(\lambda\\)`.
+\\(p(x) = \int p(z, x) dz\\) may be intractable. In that case, a parameterized
+distribution family \\(q_\lambda(z)\\) may be chosen, and the optimal \\(\lambda\\) is the
+one minimizing the KL divergence between \\(q_\lambda(z)\\) and
+\\(p(z | x)\\). We only know `p(z, x)`, but that is sufficient to find \\(\lambda\\).
## Log-space evaluation and subtracting the maximum
Care must be taken when the random variable lives in a high dimensional space.
-For example, the naive importance sample estimate `\\(E_q[f(Z) p(Z) / q(Z)]\\)`
-involves the ratio of two terms `\\(p(Z) / q(Z)\\)`, each of which must have tails
-dropping off faster than `\\(O(|z|^{-(k + 1)})\\)` in order to have finite integral.
+For example, the naive importance sample estimate \\(E_q[f(Z) p(Z) / q(Z)]\\)
+involves the ratio of two terms \\(p(Z) / q(Z)\\), each of which must have tails
+dropping off faster than \\(O(|z|^{-(k + 1)})\\) in order to have finite integral.
This ratio would often be zero or infinity up to numerical precision.
For that reason, we write
-```
$$\log E_q[ f(Z) p(Z) / q(Z) ]$$
$$ = \log E_q[ \exp\{\log[f(Z)] + \log[p(Z)] - \log[q(Z)] - C\} ] + C,$$ where
$$C := \max[ \log[f(Z)] + \log[p(Z)] - \log[q(Z)] ].$$
-```
The maximum value of the exponentiated term will be 0.0, and the expectation
can be evaluated in a stable manner.
diff --git a/tensorflow/docs_src/community/documentation.md b/tensorflow/docs_src/community/documentation.md
index d5bc7a5..8639656 100644
--- a/tensorflow/docs_src/community/documentation.md
+++ b/tensorflow/docs_src/community/documentation.md
@@ -402,24 +402,24 @@
For example:
- ```c++
- REGISTER_OP("PngDecode")
- .Input("contents: string")
- .Attr("channels: int = 0")
- .Output("image: uint8")
- .Doc(R"doc(
- Decodes the contents of a PNG file into a uint8 tensor.
+```c++
+REGISTER_OP("PngDecode")
+ .Input("contents: string")
+ .Attr("channels: int = 0")
+ .Output("image: uint8")
+ .Doc(R"doc(
+Decodes the contents of a PNG file into a uint8 tensor.
- contents: PNG file contents.
- channels: Number of color channels, or 0 to autodetect based on the input.
- Must be 0 for autodetect, 1 for grayscale, 3 for RGB, or 4 for RGBA.
- If the input has a different number of channels, it will be transformed
- accordingly.
- image:= A 3-D uint8 tensor of shape `[height, width, channels]`.
- If `channels` is 0, the last dimension is determined
- from the png contents.
- )doc");
- ```
+contents: PNG file contents.
+channels: Number of color channels, or 0 to autodetect based on the input.
+ Must be 0 for autodetect, 1 for grayscale, 3 for RGB, or 4 for RGBA.
+ If the input has a different number of channels, it will be transformed
+ accordingly.
+image:= A 3-D uint8 tensor of shape `[height, width, channels]`.
+ If `channels` is 0, the last dimension is determined
+ from the png contents.
+)doc");
+```
Results in this piece of Markdown:
@@ -429,12 +429,12 @@
#### Args:
- * <b>contents</b>: A string Tensor. PNG file contents.
- * <b>channels</b>: An optional int. Defaults to 0.
+ * **contents**: A string Tensor. PNG file contents.
+ * **channels**: An optional int. Defaults to 0.
Number of color channels, or 0 to autodetect based on the input.
Must be 0 for autodetect, 1 for grayscale, 3 for RGB, or 4 for RGBA. If the
input has a different number of channels, it will be transformed accordingly.
- * <b>name</b>: A name for the operation (optional).
+ * **name**: A name for the operation (optional).
#### Returns:
A 3-D uint8 tensor of shape `[height, width, channels]`. If `channels` is
@@ -442,7 +442,7 @@
Much of the argument description is added automatically. In particular, the doc
generator automatically adds the name and type of all inputs, attrs, and
-outputs. In the above example, `<b>contents</b>: A string Tensor.` was added
+outputs. In the above example, `contents: A string Tensor.` was added
automatically. You should write your additional text to flow naturally after
that description.
@@ -664,10 +664,10 @@
#### Args:
- * <b>`contents`</b>: A `Tensor` of type `string`. 0-D. The PNG-encoded
+ * **`contents`**: A `Tensor` of type `string`. 0-D. The PNG-encoded
image.
- * <b>`channels`</b>: An optional `int`. Defaults to `0`. Number of color
+ * **`channels`**: An optional `int`. Defaults to `0`. Number of color
channels for the decoded image.
- * <b>`dtype`</b>: An optional `tf.DType` from: `tf.uint8,
+ * **`dtype`**: An optional `tf.DType` from: `tf.uint8,
  tf.uint16`. Defaults to `tf.uint8`.
- * <b>`name`</b>: A name for the operation (optional).
+ * **`name`**: A name for the operation (optional).
diff --git a/tensorflow/docs_src/deploy/s3.md b/tensorflow/docs_src/deploy/s3.md
index 38f8428..ef3b030 100644
--- a/tensorflow/docs_src/deploy/s3.md
+++ b/tensorflow/docs_src/deploy/s3.md
@@ -1,22 +1,13 @@
# How to run TensorFlow on S3
-This document describes how to run TensorFlow on S3 file system.
+TensorFlow supports reading and writing data to S3. S3 is an object storage API that is nearly ubiquitous and can help in situations where data must be accessed by multiple actors, such as in distributed training.
-## S3
+This document guides you through the required setup and provides examples of usage.
-We assume that you are familiar with @{$reading_data$reading data}.
-
-To use S3 with TensorFlow, change the file paths you use to read and write
-data to an S3 path. For example:
-
-```python
-filenames = ["s3://bucketname/path/to/file1.tfrecord",
- "s3://bucketname/path/to/file2.tfrecord"]
-dataset = tf.data.TFRecordDataset(filenames)
-```
+## Configuration
When reading or writing data on S3 with your TensorFlow program, the behavior
-could be controlled by various environmental variables:
+can be controlled by various environmental variables:
* **AWS_REGION**: By default, regional endpoint is used for S3, with region
controlled by `AWS_REGION`. If `AWS_REGION` is not specified, then
@@ -28,7 +19,7 @@
* **S3_VERIFY_SSL**: If HTTPS is used, SSL verification could be disabled
with `S3_VERIFY_SSL=0`.
-To read or write objects in a bucket that is no publicly accessible,
+To read or write objects in a bucket that is not publicly accessible,
AWS credentials must be provided through one of the following methods:
* Set credentials in the AWS credentials profile file on the local system,
@@ -38,3 +29,65 @@
variables.
* If TensorFlow is deployed on an EC2 instance, specify an IAM role and then
give the EC2 instance access to that role.
+
+## Example Setup
+
+Using the above information, we can configure TensorFlow to communicate with an S3 endpoint by setting the following environment variables:
+
+```bash
+AWS_ACCESS_KEY_ID=XXXXX # Credentials only needed if connecting to a private endpoint
+AWS_SECRET_ACCESS_KEY=XXXXX
+AWS_REGION=us-east-1             # Region for the S3 bucket; not always needed. Defaults to us-east-1.
+S3_ENDPOINT=s3.us-east-1.amazonaws.com # The S3 API Endpoint to connect to. This is specified in a HOST:PORT format.
+S3_USE_HTTPS=1 # Whether or not to use HTTPS. Disable with 0.
+S3_VERIFY_SSL=1                  # If HTTPS is used, controls whether SSL should be enabled. Disable with 0.
+```
+
+## Usage
+
+Once setup is complete, TensorFlow can interact with S3 in a variety of ways. Anywhere there is a TensorFlow IO function, an S3 URL can be used.
+
+### Smoke Test
+
+To test your setup, stat a file:
+
+```python
+from tensorflow.python.lib.io import file_io
+print(file_io.stat('s3://bucketname/path/'))
+```
+
+You should see output similar to this:
+
+```console
+<tensorflow.python.pywrap_tensorflow_internal.FileStatistics; proxy of <Swig Object of type 'tensorflow::FileStatistics *' at 0x10c2171b0> >
+```
+
+### Reading Data
+
+When @{$reading_data$reading data}, change the file paths you use to read and write
+data to an S3 path. For example:
+
+```python
+filenames = ["s3://bucketname/path/to/file1.tfrecord",
+ "s3://bucketname/path/to/file2.tfrecord"]
+dataset = tf.data.TFRecordDataset(filenames)
+```
+
+### TensorFlow Tools
+
+Many TensorFlow tools, such as TensorBoard or TensorFlow Serving, can also take S3 URLs as arguments:
+
+```bash
+tensorboard --logdir s3://bucketname/path/to/model/
+tensorflow_model_server --port=9000 --model_name=model --model_base_path=s3://bucketname/path/to/model/export/
+```
+
+This enables an end-to-end workflow using S3 for all data needs.
+
+## S3 Endpoint Implementations
+
+S3 was invented by Amazon, but the S3 API has become widespread and now has several implementations. The following implementations have passed basic compatibility tests:
+
+* [Amazon S3](https://aws.amazon.com/s3/)
+* [Google Storage](https://cloud.google.com/storage/docs/interoperability)
+* [Minio](https://www.minio.io/kubernetes.html) (standalone mode only)
diff --git a/tensorflow/docs_src/extend/language_bindings.md b/tensorflow/docs_src/extend/language_bindings.md
index b9fd729..9a968d3 100644
--- a/tensorflow/docs_src/extend/language_bindings.md
+++ b/tensorflow/docs_src/extend/language_bindings.md
@@ -112,11 +112,11 @@
to interpret the `OpDef` messages.
- The C++ function `OpRegistry::Global()->GetRegisteredOps()` returns the same
list of all registered `OpDef`s (defined in
- [`tensorflow/core/framework/op.h`]). This can be used to write the generator
+ [`tensorflow/core/framework/op.h`](https://www.tensorflow.org/code/tensorflow/core/framework/op.h)). This can be used to write the generator
in C++ (particularly useful for languages that do not have protocol buffer
support).
- The ASCII-serialized version of that list is periodically checked in to
- [`tensorflow/core/ops/ops.pbtxt`] by an automated process.
+ [`tensorflow/core/ops/ops.pbtxt`](https://www.tensorflow.org/code/tensorflow/core/ops/ops.pbtxt) by an automated process.
The `OpDef` specifies the following:
@@ -159,7 +159,7 @@
useful for languages where code is expected to be generated ahead of time like
`go get` for Go and `cargo ops` for Rust. At the other end of the spectrum, for
some languages the code could be generated dynamically from
-[`tensorflow/core/ops/ops.pbtxt`].
+[`tensorflow/core/ops/ops.pbtxt`](https://www.tensorflow.org/code/tensorflow/core/ops/ops.pbtxt).
#### Handling Constants
@@ -229,6 +229,3 @@
updated when the [C API] provides necessary support.
[C API]: https://www.tensorflow.org/code/tensorflow/c/c_api.h
-[`tensorflow/core/ops/ops.pbtxt`]: https://www.tensorflow.org/code/tensorflow/core/ops/ops.pbtxt
-[`tensorflow/python/BUILD`]: https://www.tensorflow.org/code/tensorflow/python/BUILD
-[`tensorflow/core/framework/op.h`]: https://www.tensorflow.org/code/tensorflow/core/framework/op.h
diff --git a/tensorflow/docs_src/install/install_c.md b/tensorflow/docs_src/install/install_c.md
index 274413e..995b8ae 100644
--- a/tensorflow/docs_src/install/install_c.md
+++ b/tensorflow/docs_src/install/install_c.md
@@ -38,7 +38,7 @@
OS="linux" # Change to "darwin" for macOS
TARGET_DIRECTORY="/usr/local"
curl -L \
- "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.7.0.tar.gz" |
+ "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.8.0-rc0.tar.gz" |
sudo tar -C $TARGET_DIRECTORY -xz
The `tar` command extracts the TensorFlow C library into the `lib`
diff --git a/tensorflow/docs_src/install/install_go.md b/tensorflow/docs_src/install/install_go.md
index 1a09566..2938a8f 100644
--- a/tensorflow/docs_src/install/install_go.md
+++ b/tensorflow/docs_src/install/install_go.md
@@ -38,7 +38,7 @@
TF_TYPE="cpu" # Change to "gpu" for GPU support
TARGET_DIRECTORY='/usr/local'
curl -L \
- "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.7.0.tar.gz" |
+ "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.8.0-rc0.tar.gz" |
sudo tar -C $TARGET_DIRECTORY -xz
The `tar` command extracts the TensorFlow C library into the `lib`
diff --git a/tensorflow/docs_src/install/install_java.md b/tensorflow/docs_src/install/install_java.md
index cdde45a..05604d9 100644
--- a/tensorflow/docs_src/install/install_java.md
+++ b/tensorflow/docs_src/install/install_java.md
@@ -36,7 +36,7 @@
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow</artifactId>
- <version>1.7.0</version>
+ <version>1.8.0-rc0</version>
</dependency>
```
@@ -65,7 +65,7 @@
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>tensorflow</artifactId>
- <version>1.7.0</version>
+ <version>1.8.0-rc0</version>
</dependency>
</dependencies>
</project>
@@ -93,6 +93,7 @@
// Execute the "MyConst" operation in a Session.
try (Session s = new Session(g);
+     // Generally, there may be multiple output tensors; all of them must be closed to prevent resource leaks.
Tensor output = s.runner().fetch("MyConst").run().get(0)) {
System.out.println(new String(output.bytesValue(), "UTF-8"));
}
@@ -123,12 +124,12 @@
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>libtensorflow</artifactId>
- <version>1.7.0</version>
+ <version>1.8.0-rc0</version>
</dependency>
<dependency>
<groupId>org.tensorflow</groupId>
<artifactId>libtensorflow_jni_gpu</artifactId>
- <version>1.7.0</version>
+ <version>1.8.0-rc0</version>
</dependency>
```
@@ -147,7 +148,7 @@
Take the following steps to install TensorFlow for Java on Linux or macOS:
1. Download
- [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.7.0.jar),
+ [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.8.0-rc0.jar),
which is the TensorFlow Java Archive (JAR).
2. Decide whether you will run TensorFlow for Java on CPU(s) only or with
@@ -166,7 +167,7 @@
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
mkdir -p ./jni
curl -L \
- "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.7.0.tar.gz" |
+ "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.8.0-rc0.tar.gz" |
tar -xz -C ./jni
### Install on Windows
@@ -174,10 +175,10 @@
Take the following steps to install TensorFlow for Java on Windows:
1. Download
- [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.7.0.jar),
+ [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.8.0-rc0.jar),
which is the TensorFlow Java Archive (JAR).
2. Download the following Java Native Interface (JNI) file appropriate for
- [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.7.0.zip).
+ [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.8.0-rc0.zip).
3. Extract this .zip file.
@@ -207,6 +208,7 @@
// Execute the "MyConst" operation in a Session.
try (Session s = new Session(g);
+     // Generally, there may be multiple output tensors; all of them must be closed to prevent resource leaks.
Tensor output = s.runner().fetch("MyConst").run().get(0)) {
System.out.println(new String(output.bytesValue(), "UTF-8"));
}
@@ -225,7 +227,7 @@
downloaded `.jar` in your `classpath` by using the `-cp` compilation flag
as follows:
-<pre><b>javac -cp libtensorflow-1.7.0.jar HelloTF.java</b></pre>
+<pre><b>javac -cp libtensorflow-1.8.0-rc0.jar HelloTF.java</b></pre>
### Running
@@ -239,11 +241,11 @@
For example, the following command line executes the `HelloTF` program on Linux
and macOS:
-<pre><b>java -cp libtensorflow-1.7.0.jar:. -Djava.library.path=./jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.8.0-rc0.jar:. -Djava.library.path=./jni HelloTF</b></pre>
And the following command line executes the `HelloTF` program on Windows:
-<pre><b>java -cp libtensorflow-1.7.0.jar;. -Djava.library.path=jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.8.0-rc0.jar;. -Djava.library.path=jni HelloTF</b></pre>
If the program prints <tt>Hello from <i>version</i></tt>, you've successfully
installed TensorFlow for Java and are ready to use the API. If the program
diff --git a/tensorflow/docs_src/install/install_linux.md b/tensorflow/docs_src/install/install_linux.md
index 04e4242..1a349f5 100644
--- a/tensorflow/docs_src/install/install_linux.md
+++ b/tensorflow/docs_src/install/install_linux.md
@@ -65,16 +65,38 @@
<pre>
$ <b>sudo apt-get install libcupti-dev</b>
</pre>
+
* **[OPTIONAL]** For optimized inferencing performance, you can also install
- NVIDIA TensorRT 3.0. For details, see
- [NVIDIA's TensorRT documentation](http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-tar).
- Only steps 1-4 in the TensorRT Tar File installation instructions are
- required for compatibility with TensorFlow; the Python package installation
- in steps 5 and 6 can be omitted. Detailed installation instructions can be found at [package documentataion](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/tensorrt#installing-tensorrt-304)
+ **NVIDIA TensorRT 3.0**. The minimal set of TensorRT runtime components needed
+ for use with the pre-built `tensorflow-gpu` package can be installed as follows:
+
+ <pre>
+ $ <b>wget https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64/nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb</b>
+ $ <b>sudo dpkg -i nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb</b>
+ $ <b>sudo apt-get update</b>
+ $ <b>sudo apt-get install -y --allow-downgrades libnvinfer-dev libcudnn7-dev=7.0.5.15-1+cuda9.0 libcudnn7=7.0.5.15-1+cuda9.0</b>
+ </pre>
**IMPORTANT:** For compatibility with the pre-built `tensorflow-gpu`
- package, please use the Ubuntu **14.04** tar file package of TensorRT
- even when installing onto an Ubuntu 16.04 system.
+ package, please use the Ubuntu **14.04** package of TensorRT as shown above,
+ even when installing onto an Ubuntu 16.04 system.<br/>
+ <br/>
+ To build the TensorFlow-TensorRT integration module from source rather than
+ using pre-built binaries, see the [module documentation](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/tensorrt#using-tensorrt-in-tensorflow).
+ For detailed TensorRT installation instructions, see [NVIDIA's TensorRT documentation](http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html).<br/>
+ <br/>
+ To avoid cuDNN version conflicts during later system upgrades, you can hold
+ the cuDNN version at 7.0.5:
+
+ <pre>
+ $ <b> sudo apt-mark hold libcudnn7 libcudnn7-dev</b>
+ </pre>
+
+ To later allow upgrades, you can remove the hold:
+
+ <pre>
+ $ <b> sudo apt-mark unhold libcudnn7 libcudnn7-dev</b>
+ </pre>
If you have an earlier version of the preceding packages, please upgrade to
the specified versions. If upgrading is not possible, then you may still run
@@ -194,7 +216,7 @@
Virtualenv environment:
<pre>(tensorflow)$ <b>pip3 install --upgrade \
- https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.7.0-cp34-cp34m-linux_x86_64.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.8.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>
If you encounter installation problems, see
[Common Installation Problems](#common_installation_problems).
@@ -299,7 +321,7 @@
<pre>
$ <b>sudo pip3 install --upgrade \
- https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.7.0-cp34-cp34m-linux_x86_64.whl</b>
+ https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.8.0rc0-cp34-cp34m-linux_x86_64.whl</b>
</pre>
If this step fails, see
@@ -485,7 +507,7 @@
<pre>
(tensorflow)$ <b>pip install --ignore-installed --upgrade \
- https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.7.0-cp34-cp34m-linux_x86_64.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.8.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>
<a name="ValidateYourInstallation"></a>
## Validate your installation
@@ -659,14 +681,14 @@
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.7.0-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.8.0rc0-cp27-none-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.7.0-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.8.0rc0-cp27-none-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
@@ -678,14 +700,14 @@
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.7.0-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.8.0rc0-cp34-cp34m-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.7.0-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.8.0rc0-cp34-cp34m-linux_x86_64.whl
</pre>
Note that GPU support requires the NVIDIA hardware and software described in
@@ -697,14 +719,14 @@
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.7.0-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.8.0rc0-cp35-cp35m-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.7.0-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.8.0rc0-cp35-cp35m-linux_x86_64.whl
</pre>
@@ -716,14 +738,14 @@
CPU only:
<pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.7.0-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.8.0rc0-cp36-cp36m-linux_x86_64.whl
</pre>
GPU support:
<pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.7.0-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.8.0rc0-cp36-cp36m-linux_x86_64.whl
</pre>
diff --git a/tensorflow/docs_src/install/install_mac.md b/tensorflow/docs_src/install/install_mac.md
index b3e9616..a237d1a 100644
--- a/tensorflow/docs_src/install/install_mac.md
+++ b/tensorflow/docs_src/install/install_mac.md
@@ -119,7 +119,7 @@
TensorFlow in the active Virtualenv is as follows:
<pre> $ <b>pip3 install --upgrade \
- https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.7.0-py3-none-any.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0rc0-py3-none-any.whl</b></pre>
If you encounter installation problems, see
[Common Installation Problems](#common-installation-problems).
@@ -242,7 +242,7 @@
issue the following command:
<pre> $ <b>sudo pip3 install --upgrade \
- https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.7.0-py3-none-any.whl</b> </pre>
+ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0rc0-py3-none-any.whl</b> </pre>
If the preceding command fails, see
[installation problems](#common-installation-problems).
@@ -350,7 +350,7 @@
TensorFlow for Python 2.7:
<pre> (<i>targetDirectory</i>)$ <b>pip install --ignore-installed --upgrade \
- https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.7.0-py2-none-any.whl</b></pre>
+ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0rc0-py2-none-any.whl</b></pre>
<a name="ValidateYourInstallation"></a>
@@ -524,7 +524,7 @@
<pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.7.0-py2-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0rc0-py2-none-any.whl
</pre>
@@ -532,5 +532,5 @@
<pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.7.0-py3-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0rc0-py3-none-any.whl
</pre>
diff --git a/tensorflow/docs_src/install/install_sources.md b/tensorflow/docs_src/install/install_sources.md
index 26287aa..b186758 100644
--- a/tensorflow/docs_src/install/install_sources.md
+++ b/tensorflow/docs_src/install/install_sources.md
@@ -354,10 +354,10 @@
The filename of the `.whl` file depends on your platform.
For example, the following command will install the pip package
-for TensorFlow 1.7.0 on Linux:
+for TensorFlow 1.8.0rc0 on Linux:
<pre>
-$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.7.0-py2-none-any.whl</b>
+$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.8.0rc0-py2-none-any.whl</b>
</pre>
## Validate your installation
@@ -454,6 +454,8 @@
**Linux**
<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
+<tr><td>tensorflow-1.8.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.10.0</td><td>N/A</td><td>N/A</td></tr>
+<tr><td>tensorflow_gpu-1.8.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.9.0</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.7.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.10.0</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow_gpu-1.7.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.9.0</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.6.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.9.0</td><td>N/A</td><td>N/A</td></tr>
@@ -475,6 +477,7 @@
**Mac**
<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
+<tr><td>tensorflow-1.8.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.10.1</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow-1.7.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.10.1</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow-1.6.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.8.1</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow-1.5.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.8.1</td><td>N/A</td><td>N/A</td></tr>
@@ -490,6 +493,8 @@
**Windows**
<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
+<tr><td>tensorflow-1.8.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
+<tr><td>tensorflow_gpu-1.8.0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.7.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow_gpu-1.7.0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>7</td><td>9</td></tr>
<tr><td>tensorflow-1.6.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
diff --git a/tensorflow/docs_src/mobile/android_build.md b/tensorflow/docs_src/mobile/android_build.md
index 08a5fbe..c355300 100644
--- a/tensorflow/docs_src/mobile/android_build.md
+++ b/tensorflow/docs_src/mobile/android_build.md
@@ -51,7 +51,8 @@
// set to 'bazel', 'cmake', 'makefile', 'none'
def nativeBuildSystem = 'none'
-4. Click the Run button (the green arrow) or use **Run -> Run 'android'** from the top menu.
+4. Click the *Run* button (the green arrow) or select *Run > Run 'android'* from the
+ top menu. You may need to rebuild the project using *Build > Rebuild Project*.
If it asks you to use Instant Run, click **Proceed Without Instant Run**.
diff --git a/tensorflow/docs_src/performance/quantization.md b/tensorflow/docs_src/performance/quantization.md
index 411889c..2fea02d 100644
--- a/tensorflow/docs_src/performance/quantization.md
+++ b/tensorflow/docs_src/performance/quantization.md
@@ -110,7 +110,7 @@
```
# Build eval model
-logits = tf.nn.softmax_cross_entropy_with_logits(...)
+logits = tf.nn.softmax_cross_entropy_with_logits_v2(...)
# Call the eval rewrite which rewrites the graph in-place with
# FakeQuantization nodes and fold batchnorm for eval.
diff --git a/tensorflow/docs_src/programmers_guide/debugger.md b/tensorflow/docs_src/programmers_guide/debugger.md
index f5a0eb0..f7817b0 100644
--- a/tensorflow/docs_src/programmers_guide/debugger.md
+++ b/tensorflow/docs_src/programmers_guide/debugger.md
@@ -400,7 +400,7 @@
to the built-in, numerically-stable implementation of softmax cross-entropy:
```python
-diff = tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=logits)
+diff = tf.losses.softmax_cross_entropy(labels=y_, logits=logits)
```
Rerun with the `--debug` flag as follows:
diff --git a/tensorflow/docs_src/programmers_guide/graphs.md b/tensorflow/docs_src/programmers_guide/graphs.md
index aa72cae..f0dd8de 100644
--- a/tensorflow/docs_src/programmers_guide/graphs.md
+++ b/tensorflow/docs_src/programmers_guide/graphs.md
@@ -210,7 +210,7 @@
# Operations created in this context will be pinned to the GPU.
result = tf.matmul(weights, img)
```
-If you are deploying TensorFlow in a @{$deploy/distributed$typical distributed configuration},
+If you are deploying TensorFlow in a @{$distributed$typical distributed configuration},
you might specify the job name and task ID to place variables on
a task in the parameter server job (`"/job:ps"`), and the other operations on
task in the worker job (`"/job:worker"`):
@@ -362,7 +362,7 @@
@{tf.Session.run} requires you to specify a list of **fetches**, which determine
the return values, and may be a @{tf.Operation}, a @{tf.Tensor}, or
-a [tensor-like type](#tensor-like-objects) such as @{tf.Variable}. These fetches
+a [tensor-like type](#tensor-like_objects) such as @{tf.Variable}. These fetches
determine what **subgraph** of the overall @{tf.Graph} must be executed to
produce the result: this is the subgraph that contains all operations named in
the fetch list, plus all operations whose outputs are used to compute the value
@@ -505,7 +505,7 @@
As noted above, TensorFlow provides a "default graph" that is implicitly passed
to all API functions in the same context. For many applications, a single graph
is sufficient. However, TensorFlow also provides methods for manipulating
-the default graph, which can be useful in more advanced used cases. For example:
+the default graph, which can be useful in more advanced use cases. For example:
* A @{tf.Graph} defines the namespace for @{tf.Operation} objects: each
operation in a single graph must have a unique name. TensorFlow will
diff --git a/tensorflow/docs_src/programmers_guide/saved_model.md b/tensorflow/docs_src/programmers_guide/saved_model.md
index 55ee42d..c6ef87c 100644
--- a/tensorflow/docs_src/programmers_guide/saved_model.md
+++ b/tensorflow/docs_src/programmers_guide/saved_model.md
@@ -485,31 +485,7 @@
to expect and how to map them to your model's expected inputs.
By contrast, the *output* portion of the signature is determined by the model.
-
-### Perform the export
-
-To export your trained Estimator, call
-@{tf.estimator.Estimator.export_savedmodel} with the export base path and
-the `serving_input_receiver_fn`.
-
-```py
-estimator.export_savedmodel(export_dir_base, serving_input_receiver_fn,
- strip_default_attrs=True)
-```
-
-This method builds a new graph by first calling the
-`serving_input_receiver_fn()` to obtain feature `Tensor`s, and then calling
-this `Estimator`'s `model_fn()` to generate the model graph based on those
-features. It starts a fresh `Session`, and, by default, restores the most recent
-checkpoint into it. (A different checkpoint may be passed, if needed.)
-Finally it creates a time-stamped export directory below the given
-`export_dir_base` (i.e., `export_dir_base/<timestamp>`), and writes a
-SavedModel into it containing a single `MetaGraphDef` saved from this
-Session.
-
-> Note: It is your responsibility to garbage-collect old exports.
-> Otherwise, successive exports will accumulate under `export_dir_base`.
-
+<a name="specify_outputs"></a>
### Specify the outputs of a custom model
When writing a custom `model_fn`, you must populate the `export_outputs` element
@@ -541,6 +517,30 @@
indicating which `SignatureDef` will be served when an inference request
does not specify one.
+<a name="perform_export"></a>
+### Perform the export
+
+To export your trained Estimator, call
+@{tf.estimator.Estimator.export_savedmodel} with the export base path and
+the `serving_input_receiver_fn`.
+
+```py
+estimator.export_savedmodel(export_dir_base, serving_input_receiver_fn,
+ strip_default_attrs=True)
+```
+
+This method builds a new graph by first calling the
+`serving_input_receiver_fn()` to obtain feature `Tensor`s, and then calling
+this `Estimator`'s `model_fn()` to generate the model graph based on those
+features. It starts a fresh `Session`, and, by default, restores the most recent
+checkpoint into it. (A different checkpoint may be passed, if needed.)
+Finally it creates a time-stamped export directory below the given
+`export_dir_base` (i.e., `export_dir_base/<timestamp>`), and writes a
+SavedModel into it containing a single `MetaGraphDef` saved from this
+Session.
+
+> Note: It is your responsibility to garbage-collect old exports.
+> Otherwise, successive exports will accumulate under `export_dir_base`.
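+
+A minimal sketch of the export flow described above, assuming a hypothetical
+model with one numeric feature named `x`; the paths and model are placeholders,
+while `ServingInputReceiver` and `export_savedmodel` are the real APIs:
+
+```python
+import tensorflow as tf
+
+def serving_input_receiver_fn():
+  # Hypothetical receiver: serving requests feed the "x" placeholder directly.
+  inputs = {"x": tf.placeholder(dtype=tf.float32, shape=[None, 4], name="x")}
+  return tf.estimator.export.ServingInputReceiver(inputs, inputs)
+
+estimator = tf.estimator.DNNRegressor(
+    feature_columns=[tf.feature_column.numeric_column("x", shape=[4])],
+    hidden_units=[8],
+    model_dir="/tmp/model")  # must already contain a checkpoint; train first
+
+export_dir = estimator.export_savedmodel(
+    "/tmp/exports", serving_input_receiver_fn, strip_default_attrs=True)
+```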
### Serve the exported model locally
diff --git a/tensorflow/docs_src/programmers_guide/using_tpu.md b/tensorflow/docs_src/programmers_guide/using_tpu.md
index cb0d86f..5e3e49d 100644
--- a/tensorflow/docs_src/programmers_guide/using_tpu.md
+++ b/tensorflow/docs_src/programmers_guide/using_tpu.md
@@ -280,8 +280,8 @@
### Static shapes and batch size
The input pipeline generated by your `input_fn` is run on CPU. So it is mostly
-free strict static shape requirements imposed by the XLA/TPU environment. The
-one requirement is that the batches of data fed from your input pipeline to
+free from the strict static shape requirements imposed by the XLA/TPU environment.
+The one requirement is that the batches of data fed from your input pipeline to
the TPU have a static shape, as determined by the standard TensorFlow shape
inference algorithm. Intermediate tensors are free to have dynamic shapes.
If shape inference has failed but the shape is known, it is possible to
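
A sketch of one way to satisfy the static-batch requirement with the
TF 1.8-era `tf.data` API (the dataset contents here are illustrative):

```python
import tensorflow as tf

def input_fn(params):
  batch_size = params["batch_size"]  # supplied by TPUEstimator
  features = tf.random_uniform([1024, 8])
  labels = tf.random_uniform([1024], maxval=2, dtype=tf.int32)
  dataset = tf.data.Dataset.from_tensor_slices((features, labels)).repeat()
  # dataset.batch() alone leaves the batch dimension unknown (None);
  # dropping the remainder makes it statically equal to batch_size.
  return dataset.apply(tf.contrib.data.batch_and_drop_remainder(batch_size))
```

When shape inference fails but the shape is known, `Tensor.set_shape()` can
impose it manually.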
diff --git a/tensorflow/docs_src/tutorials/audio_recognition.md b/tensorflow/docs_src/tutorials/audio_recognition.md
index 7d79f43..372ab47 100644
--- a/tensorflow/docs_src/tutorials/audio_recognition.md
+++ b/tensorflow/docs_src/tutorials/audio_recognition.md
@@ -280,7 +280,7 @@
```
bazel run tensorflow/examples/wav_to_spectrogram:wav_to_spectrogram -- \
--input_wav=/tmp/speech_dataset/happy/ab00c4b2_nohash_0.wav \
---output_png=/tmp/spectrogram.png
+--output_image=/tmp/spectrogram.png
```
If you open up `/tmp/spectrogram.png` you should see something like this:
diff --git a/tensorflow/docs_src/tutorials/layers.md b/tensorflow/docs_src/tutorials/layers.md
index cadaec3..37cd2bb 100644
--- a/tensorflow/docs_src/tutorials/layers.md
+++ b/tensorflow/docs_src/tutorials/layers.md
@@ -192,8 +192,7 @@
to calculate loss, configure the training op, and generate predictions. If
you're already experienced with CNNs and @{$get_started/custom_estimators$TensorFlow `Estimator`s},
and find the above code intuitive, you may want to skim these sections or just
-skip ahead to ["Training and Evaluating the CNN MNIST
-Classifier"](#training_and_evaluating_the_cnn_mnist_classifier).
+skip ahead to ["Training and Evaluating the CNN MNIST Classifier"](#train_eval_mnist).
### Input Layer
@@ -536,8 +535,9 @@
```
> Note: For a more in-depth look at configuring training ops for Estimator model
-> functions, see @{$get_started/custom_estimators#defining_the_training_op_for_the_model$"Defining the training op for the model"}
-> in the @{$get_started/custom_estimators$"Creating Estimators in tf.estimator."} tutorial.
+> functions, see @{$get_started/custom_estimators#defining-the-training-op-for-the-model$"Defining the training op for the model"}
+> in the @{$get_started/custom_estimators$"Creating Estimators in tf.estimator"} tutorial.
+
### Add evaluation metrics
@@ -552,7 +552,8 @@
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
```
-## Training and Evaluating the CNN MNIST Classifier {#training_and_evaluating_the_cnn_mnist_classifier}
+<a id="train_eval_mnist"></a>
+## Training and Evaluating the CNN MNIST Classifier
We've coded our MNIST CNN model function; now we're ready to train and evaluate
it.
@@ -612,9 +613,9 @@
```python
# Set up logging for predictions
- tensors_to_log = {"probabilities": "softmax_tensor"}
- logging_hook = tf.train.LoggingTensorHook(
- tensors=tensors_to_log, every_n_iter=50)
+tensors_to_log = {"probabilities": "softmax_tensor"}
+logging_hook = tf.train.LoggingTensorHook(
+ tensors=tensors_to_log, every_n_iter=50)
```
We store a dict of the tensors we want to log in `tensors_to_log`. Each key is a
diff --git a/tensorflow/examples/tutorials/word2vec/word2vec_basic.py b/tensorflow/examples/tutorials/word2vec/word2vec_basic.py
index 14ae7fb..b09ee99 100644
--- a/tensorflow/examples/tutorials/word2vec/word2vec_basic.py
+++ b/tensorflow/examples/tutorials/word2vec/word2vec_basic.py
@@ -224,7 +224,7 @@
optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
# Compute the cosine similarity between minibatch examples and all embeddings.
- norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
+ norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings,
valid_dataset)
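
The `keep_dims` → `keepdims` rename above is mechanical; a quick sketch of what
the retained dimension buys in this normalization (values illustrative):

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
# keepdims=True keeps the reduced axis as size 1, so the division below
# broadcasts row-wise: shape (2, 1) against shape (2, 2).
norm = tf.sqrt(tf.reduce_sum(tf.square(x), 1, keepdims=True))
normalized = x / norm
with tf.Session() as sess:
  print(sess.run(normalized))  # each row now has unit L2 norm
```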
diff --git a/tensorflow/go/op/wrappers.go b/tensorflow/go/op/wrappers.go
index ec7d9dc..c31ca8b 100644
--- a/tensorflow/go/op/wrappers.go
+++ b/tensorflow/go/op/wrappers.go
@@ -21159,7 +21159,7 @@
// generated sequentially as '*tag*/image/0', '*tag*/image/1', etc.
//
// The `bad_color` argument is the color to use in the generated images for
-// non-finite input values. It is a `unit8` 1-D tensor of length `channels`.
+// non-finite input values. It is a `uint8` 1-D tensor of length `channels`.
// Each element must be in the range `[0, 255]` (It represents the value of a
// pixel in the output image). Non-finite values in the input tensor are
// replaced by this tensor in the output image. The default value is the color
diff --git a/tensorflow/java/src/main/java/org/tensorflow/examples/LabelImage.java b/tensorflow/java/src/main/java/org/tensorflow/examples/LabelImage.java
index 489e95c..3948991 100644
--- a/tensorflow/java/src/main/java/org/tensorflow/examples/LabelImage.java
+++ b/tensorflow/java/src/main/java/org/tensorflow/examples/LabelImage.java
@@ -101,6 +101,7 @@
b.constant("mean", mean)),
b.constant("scale", scale));
try (Session s = new Session(g)) {
+      // Generally, there may be multiple output tensors; all of them must be closed to prevent resource leaks.
return s.runner().fetch(output.op().name()).run().get(0).expect(Float.class);
}
}
@@ -110,6 +111,7 @@
try (Graph g = new Graph()) {
g.importGraphDef(graphDef);
try (Session s = new Session(g);
+      // Generally, there may be multiple output tensors; all of them must be closed to prevent resource leaks.
Tensor<Float> result =
s.runner().feed("input", image).fetch("output").run().get(0).expect(Float.class)) {
final long[] rshape = result.shape();
diff --git a/tensorflow/python/BUILD b/tensorflow/python/BUILD
index 9dc03d7..8e7f0ca 100644
--- a/tensorflow/python/BUILD
+++ b/tensorflow/python/BUILD
@@ -1946,7 +1946,8 @@
":array_ops",
":constant_op",
":dtypes",
- ":linalg_ops",
+ ":linalg_ops_gen",
+ ":linalg_ops_impl",
":math_ops",
":nn_ops",
":random_ops",
@@ -1997,7 +1998,22 @@
":array_ops",
":dtypes",
":framework_ops",
+ ":functional_ops",
":linalg_ops_gen",
+ ":linalg_ops_impl",
+ ":math_ops",
+ "//third_party/py/numpy",
+ ],
+)
+
+py_library(
+ name = "linalg_ops_impl",
+ srcs = ["ops/linalg_ops_impl.py"],
+ srcs_version = "PY2AND3",
+ deps = [
+ ":array_ops",
+ ":dtypes",
+ ":framework_ops",
":math_ops",
"//third_party/py/numpy",
],
@@ -3493,6 +3509,7 @@
"//tensorflow/core/profiler/internal:print_model_analysis",
"//tensorflow/tools/graph_transforms:transform_graph_lib",
"//tensorflow/python/eager:pywrap_tfe_lib",
+ "//tensorflow/python/eager:python_eager_op_gen",
"//util/python:python_headers",
] + (tf_additional_lib_deps() +
tf_additional_plugin_deps() +
diff --git a/tensorflow/python/debug/cli/readline_ui.py b/tensorflow/python/debug/cli/readline_ui.py
index 1516387..3296e45 100644
--- a/tensorflow/python/debug/cli/readline_ui.py
+++ b/tensorflow/python/debug/cli/readline_ui.py
@@ -19,6 +19,8 @@
import readline
+import six
+
from tensorflow.python.debug.cli import base_ui
from tensorflow.python.debug.cli import debugger_cli_common
@@ -39,11 +41,7 @@
readline.set_completer(self._readline_complete)
readline.parse_and_bind("tab: complete")
- # For Python 2-3 compatibility.
- try:
- self._input = raw_input
- except NameError:
- self._input = input
+ self._input = six.moves.input
def _readline_complete(self, text, state):
context, prefix, except_last_word = self._analyze_tab_complete_input(text)
diff --git a/tensorflow/python/debug/wrappers/grpc_wrapper.py b/tensorflow/python/debug/wrappers/grpc_wrapper.py
index fb9494f..1f9c8fa 100644
--- a/tensorflow/python/debug/wrappers/grpc_wrapper.py
+++ b/tensorflow/python/debug/wrappers/grpc_wrapper.py
@@ -21,6 +21,8 @@
import sys
import traceback
+import six
+
# Google-internal import(s).
from tensorflow.python.debug.lib import common
from tensorflow.python.debug.wrappers import framework
@@ -140,14 +142,9 @@
def _signal_handler(unused_signal, unused_frame):
- try:
- input_func = raw_input
- except NameError:
- # Python 3 does not have raw_input.
- input_func = input
-
while True:
- response = input_func("\nSIGINT received. Quit program? (Y/n): ").strip()
+ response = six.moves.input(
+ "\nSIGINT received. Quit program? (Y/n): ").strip()
if response in ("", "Y", "y"):
sys.exit(0)
elif response in ("N", "n"):
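
For context, a self-contained sketch of how such a handler is installed (the
`signal.signal` registration line is an assumption; the diff only shows the
handler body):

```python
import signal
import sys

import six

def _signal_handler(unused_signal, unused_frame):
  while True:
    # six.moves.input maps to raw_input on Python 2 and input on Python 3.
    response = six.moves.input(
        "\nSIGINT received. Quit program? (Y/n): ").strip()
    if response in ("", "Y", "y"):
      sys.exit(0)
    elif response in ("N", "n"):
      break

signal.signal(signal.SIGINT, _signal_handler)
```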
diff --git a/tensorflow/python/debug/wrappers/hooks.py b/tensorflow/python/debug/wrappers/hooks.py
index 6705cd3..5e4604f 100644
--- a/tensorflow/python/debug/wrappers/hooks.py
+++ b/tensorflow/python/debug/wrappers/hooks.py
@@ -31,15 +31,18 @@
class LocalCLIDebugHook(session_run_hook.SessionRunHook):
"""Command-line-interface debugger hook.
- Can be used as a monitor/hook for `tf.train.MonitoredSession`s and
- `tf.contrib.learn`'s `Estimator`s and `Experiment`s.
+ Can be used as a hook for `tf.train.MonitoredSession`s and
+ `tf.estimator.Estimator`s. Provides a substitute for
+ `tfdbg.LocalCLIDebugWrapperSession` in cases where the session is not directly
+ available.
"""
def __init__(self, ui_type="curses", dump_root=None, thread_name_filter=None):
"""Create a local debugger command-line interface (CLI) hook.
Args:
- ui_type: (str) user-interface type.
+ ui_type: (`str`) requested user-interface type. Currently supported:
+ (curses | readline).
dump_root: (`str`) optional path to the dump root directory. Must be a
directory that does not exist or an empty directory. If the directory
does not exist, it will be created by the debugger core during debug
@@ -153,8 +156,8 @@
class DumpingDebugHook(session_run_hook.SessionRunHook):
"""A debugger hook that dumps debug data to filesystem.
- Can be used as a monitor/hook for `tf.train.MonitoredSession`s and
- `tf.contrib.learn`'s `Estimator`s and `Experiment`s.
+ Can be used as a hook for `tf.train.MonitoredSession`s and
+ `tf.estimator.Estimator`s.
"""
def __init__(self,
@@ -229,8 +232,8 @@
  When the arguments of debug_utils.watch_graph change, strongly consider
changing arguments here too so that features are available to tflearn users.
- Can be used as a monitor/hook for `tf.train.MonitoredSession`s and
- `tf.contrib.learn`'s `Estimator`s and `Experiment`s.
+ Can be used as a hook for `tf.train.MonitoredSession`s and
+ `tf.estimator.Estimator`s.
"""
def __init__(self,
diff --git a/tensorflow/python/estimator/canned/head.py b/tensorflow/python/estimator/canned/head.py
index c365ea8..efa4bdf 100644
--- a/tensorflow/python/estimator/canned/head.py
+++ b/tensorflow/python/estimator/canned/head.py
@@ -263,9 +263,12 @@
if (dim1 is not None) and (dim1 != expected_labels_dimension):
raise ValueError(
'Mismatched label shape. '
- 'Classifier configured with n_classes=%s. Received %s. '
- 'Suggested Fix: check your n_classes argument to the estimator '
- 'and/or the shape of your label.' %
+ 'Expected labels dimension=%s. Received %s. '
          'Suggested Fix: '
          'If your classifier expects one-hot encoded labels, '
          'check your n_classes argument to the estimator '
          'and/or the shape of your label. '
+ 'Otherwise, check the shape of your label.' %
(expected_labels_dimension, dim1))
expected_labels_shape = array_ops.concat(
[logits_shape[:-1], [expected_labels_dimension]], axis=0)
diff --git a/tensorflow/python/estimator/estimator.py b/tensorflow/python/estimator/estimator.py
index 351fcb6..2f1212d 100644
--- a/tensorflow/python/estimator/estimator.py
+++ b/tensorflow/python/estimator/estimator.py
@@ -207,7 +207,8 @@
else:
self._session_config = self._config.session_config
- self._device_fn = _get_replica_device_setter(self._config)
+    self._device_fn = (self._config.device_fn or
+                       _get_replica_device_setter(self._config))
if model_fn is None:
raise ValueError('model_fn must be provided to Estimator.')
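
A minimal sketch of the override this enables (function names are
hypothetical); note that `RunConfig` validates that the callable takes exactly
one argument named `op`:

```python
import tensorflow as tf

def pin_to_cpu(op):  # the single argument must be named `op`
  # `op` is a tf.Operation; op.type or op.name can be used for routing.
  return "/cpu:0"

def my_model_fn(features, labels, mode):
  loss = tf.reduce_mean(tf.square(features["x"] - labels))
  train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
      loss, global_step=tf.train.get_global_step())
  return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

config = tf.estimator.RunConfig(device_fn=pin_to_cpu)
estimator = tf.estimator.Estimator(model_fn=my_model_fn, config=config)
```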
@@ -716,7 +717,7 @@
batch_length = batch_length or value.shape[0]
if value.shape[0] != batch_length:
        raise ValueError('Batch length of predictions should be the same. %s has '
- 'different batch length then others.' % key)
+ 'different batch length than others.' % key)
return batch_length
def _extract_keys(self, predictions, predict_keys):
diff --git a/tensorflow/python/estimator/run_config.py b/tensorflow/python/estimator/run_config.py
index dab442a..8162b24 100644
--- a/tensorflow/python/estimator/run_config.py
+++ b/tensorflow/python/estimator/run_config.py
@@ -27,11 +27,13 @@
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.training import server_lib
+from tensorflow.python.estimator import util
from tensorflow.python.util import compat_internal
from tensorflow.python.util.tf_export import tf_export
_USE_DEFAULT = object()
+_VALID_DEVICE_FN_ARGS = set(['op'])
# A list of the property names in RunConfig that the user is allowed to change.
_DEFAULT_REPLACEABLE_LIST = [
@@ -44,7 +46,8 @@
'keep_checkpoint_max',
'keep_checkpoint_every_n_hours',
'log_step_count_steps',
- 'train_distribute'
+ 'train_distribute',
+ 'device_fn'
]
_SAVE_CKPT_ERR = (
@@ -279,6 +282,11 @@
_validate('tf_random_seed', lambda seed: isinstance(seed, six.integer_types),
message='tf_random_seed must be integer.')
+ _validate('device_fn', lambda device_fn: six.callable(device_fn) and
+ set(util.fn_args(device_fn)) == _VALID_DEVICE_FN_ARGS,
+ message='device_fn must be callable with exactly'
+ ' one argument "op".')
+
class TaskType(object):
MASTER = 'master'
@@ -302,7 +310,8 @@
keep_checkpoint_max=5,
keep_checkpoint_every_n_hours=10000,
log_step_count_steps=100,
- train_distribute=None):
+ train_distribute=None,
+ device_fn=None):
"""Constructs a RunConfig.
All distributed training related properties `cluster_spec`, `is_chief`,
@@ -430,6 +439,10 @@
`tf.contrib.distribute.DistributionStrategy`. If specified,
then Estimator will distribute the user's model during training,
according to the policy specified by that strategy.
+ device_fn: A callable invoked for every `Operation` that takes the
+ `Operation` and returns the device string. If `None`, defaults to
+ the device function returned by `tf.train.replica_device_setter`
+ with round-robin strategy.
Raises:
ValueError: If both `save_checkpoints_steps` and `save_checkpoints_secs`
@@ -466,7 +479,8 @@
keep_checkpoint_max=keep_checkpoint_max,
keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours,
log_step_count_steps=log_step_count_steps,
- train_distribute=train_distribute)
+ train_distribute=train_distribute,
+ device_fn=device_fn)
self._init_distributed_setting_from_environment_var(tf_config)
@@ -569,6 +583,16 @@
return self._cluster_spec
@property
+ def device_fn(self):
+ """Returns the device_fn.
+
+ If device_fn is not `None`, it overrides the default
+ device function used in `Estimator`.
+ Otherwise the default one is used.
+ """
+ return self._device_fn
+
+ @property
def evaluation_master(self):
return self._evaluation_master
@@ -697,7 +721,8 @@
- `keep_checkpoint_max`,
- `keep_checkpoint_every_n_hours`,
- `log_step_count_steps`,
- - `train_distribute`.
+ - `train_distribute`,
+ - `device_fn`.
In addition, either `save_checkpoints_steps` or `save_checkpoints_secs`
can be set (should not be both).
diff --git a/tensorflow/python/estimator/run_config_test.py b/tensorflow/python/estimator/run_config_test.py
index a3eef4c..c8b1260 100644
--- a/tensorflow/python/estimator/run_config_test.py
+++ b/tensorflow/python/estimator/run_config_test.py
@@ -42,6 +42,7 @@
_KEEP_CKPT_MAX_ERR = 'keep_checkpoint_max should be >= 0'
_KEEP_CKPT_HOURS_ERR = 'keep_checkpoint_every_n_hours should be > 0'
_TF_RANDOM_SEED_ERR = 'tf_random_seed must be integer'
+_DEVICE_FN_ERR = 'device_fn must be callable with exactly one argument "op".'
_ONE_CHIEF_ERR = 'The "cluster" in TF_CONFIG must have only one "chief" node.'
_ONE_MASTER_ERR = 'The "cluster" in TF_CONFIG must have only one "master" node.'
_INVALID_TASK_TYPE_FOR_EVAL_MASTER = (
@@ -83,6 +84,7 @@
self.assertEqual(5, config.keep_checkpoint_max)
self.assertEqual(10000, config.keep_checkpoint_every_n_hours)
self.assertIsNone(config.service)
+ self.assertIsNone(config.device_fn)
def test_model_dir(self):
empty_config = run_config_lib.RunConfig()
@@ -93,6 +95,7 @@
def test_replace_with_allowed_properties(self):
session_config = config_pb2.ConfigProto(allow_soft_placement=True)
+ device_fn = lambda op: "/cpu:0"
config = run_config_lib.RunConfig().replace(
tf_random_seed=11,
@@ -100,13 +103,15 @@
save_checkpoints_secs=14,
session_config=session_config,
keep_checkpoint_max=16,
- keep_checkpoint_every_n_hours=17)
+ keep_checkpoint_every_n_hours=17,
+ device_fn=device_fn)
self.assertEqual(11, config.tf_random_seed)
self.assertEqual(12, config.save_summary_steps)
self.assertEqual(14, config.save_checkpoints_secs)
self.assertEqual(session_config, config.session_config)
self.assertEqual(16, config.keep_checkpoint_max)
self.assertEqual(17, config.keep_checkpoint_every_n_hours)
+ self.assertEqual(device_fn, config.device_fn)
def test_replace_none_value(self):
config = run_config_lib.RunConfig().replace(
@@ -117,7 +122,8 @@
save_checkpoints_steps=None,
session_config=None,
keep_checkpoint_max=None,
- keep_checkpoint_every_n_hours=None)
+ keep_checkpoint_every_n_hours=None,
+ device_fn=None)
self.assertIsNone(config.tf_random_seed)
self.assertIsNone(config.model_dir)
self.assertIsNone(config.save_summary_steps)
@@ -126,6 +132,7 @@
self.assertIsNone(config.session_config)
self.assertIsNone(config.keep_checkpoint_max)
self.assertIsNone(config.keep_checkpoint_every_n_hours)
+ self.assertIsNone(config.device_fn)
def test_replace_with_disallowallowed_properties(self):
config = run_config_lib.RunConfig()
@@ -166,9 +173,12 @@
config.replace(keep_checkpoint_every_n_hours=0)
with self.assertRaisesRegexp(ValueError, _TF_RANDOM_SEED_ERR):
config.replace(tf_random_seed=1.0)
+ with self.assertRaisesRegexp(ValueError, _DEVICE_FN_ERR):
+ config.replace(device_fn=lambda x, y: 0)
def test_init_with_allowed_properties(self):
session_config = config_pb2.ConfigProto(allow_soft_placement=True)
+ device_fn = lambda op: "/cpu:0"
config = run_config_lib.RunConfig(
tf_random_seed=11,
@@ -176,13 +186,15 @@
save_checkpoints_secs=14,
session_config=session_config,
keep_checkpoint_max=16,
- keep_checkpoint_every_n_hours=17)
+ keep_checkpoint_every_n_hours=17,
+ device_fn=device_fn)
self.assertEqual(11, config.tf_random_seed)
self.assertEqual(12, config.save_summary_steps)
self.assertEqual(14, config.save_checkpoints_secs)
self.assertEqual(session_config, config.session_config)
self.assertEqual(16, config.keep_checkpoint_max)
self.assertEqual(17, config.keep_checkpoint_every_n_hours)
+ self.assertEqual(device_fn, config.device_fn)
def test_init_none_value(self):
config = run_config_lib.RunConfig(
@@ -193,7 +205,8 @@
save_checkpoints_steps=None,
session_config=None,
keep_checkpoint_max=None,
- keep_checkpoint_every_n_hours=None)
+ keep_checkpoint_every_n_hours=None,
+ device_fn=None)
self.assertIsNone(config.tf_random_seed)
self.assertIsNone(config.model_dir)
self.assertIsNone(config.save_summary_steps)
@@ -202,6 +215,7 @@
self.assertIsNone(config.session_config)
self.assertIsNone(config.keep_checkpoint_max)
self.assertIsNone(config.keep_checkpoint_every_n_hours)
+ self.assertIsNone(config.device_fn)
def test_init_invalid_values(self):
with self.assertRaisesRegexp(ValueError, _MODEL_DIR_ERR):
@@ -220,6 +234,8 @@
run_config_lib.RunConfig(keep_checkpoint_every_n_hours=0)
with self.assertRaisesRegexp(ValueError, _TF_RANDOM_SEED_ERR):
run_config_lib.RunConfig(tf_random_seed=1.0)
+ with self.assertRaisesRegexp(ValueError, _DEVICE_FN_ERR):
+ run_config_lib.RunConfig(device_fn=lambda x: "/cpu:0")
class RunConfigDistributedSettingTest(test.TestCase):
diff --git a/tensorflow/python/feature_column/feature_column.py b/tensorflow/python/feature_column/feature_column.py
index a7c4eab..c16c3cd 100644
--- a/tensorflow/python/feature_column/feature_column.py
+++ b/tensorflow/python/feature_column/feature_column.py
@@ -162,7 +162,6 @@
from tensorflow.python.training import checkpoint_utils
from tensorflow.python.util import nest
from tensorflow.python.util.tf_export import tf_export
-from tensorflow.python.util.tf_export import tf_export
def _internal_input_layer(features,
diff --git a/tensorflow/python/framework/dtypes.py b/tensorflow/python/framework/dtypes.py
index 807582b..7f9ef53 100644
--- a/tensorflow/python/framework/dtypes.py
+++ b/tensorflow/python/framework/dtypes.py
@@ -700,11 +700,13 @@
if type_value.type == np.string_ or type_value.type == np.unicode_:
return string
- for key, val in _NP_TO_TF:
- try:
- if key == type_value:
- return val
- except TypeError as e:
- raise TypeError("Cannot convert {} to a dtype. {}".format(type_value, e))
+ if isinstance(type_value, (type, np.dtype)):
+ for key, val in _NP_TO_TF:
+ try:
+ if key == type_value:
+ return val
+ except TypeError as e:
+ raise TypeError("Cannot convert {} to a dtype. {}".format(
+ type_value, e))
raise TypeError("Cannot convert value %r to a TensorFlow DType." % type_value)
diff --git a/tensorflow/python/framework/graph_util_impl.py b/tensorflow/python/framework/graph_util_impl.py
index 9103643..394fac6 100644
--- a/tensorflow/python/framework/graph_util_impl.py
+++ b/tensorflow/python/framework/graph_util_impl.py
@@ -285,7 +285,7 @@
output_graph_def.node.extend([output_node])
output_graph_def.library.CopyFrom(inference_graph.library)
- print("Converted %d variables to const ops." % how_many_converted)
+ logging.info("Converted %d variables to const ops.", how_many_converted)
return output_graph_def
diff --git a/tensorflow/python/framework/graph_util_test.py b/tensorflow/python/framework/graph_util_test.py
index b618152..2dafb94 100644
--- a/tensorflow/python/framework/graph_util_test.py
+++ b/tensorflow/python/framework/graph_util_test.py
@@ -209,7 +209,7 @@
defun_node, 2.0, name="output_node")
with session.Session() as sess:
- init = variables.initialize_variables([variable_node])
+ init = variables.variables_initializer([variable_node])
sess.run(init)
output = sess.run(output_node)
self.assertNear(4.0, output, 0.00001)
diff --git a/tensorflow/python/framework/load_library.py b/tensorflow/python/framework/load_library.py
index 535c601..9a8477d 100644
--- a/tensorflow/python/framework/load_library.py
+++ b/tensorflow/python/framework/load_library.py
@@ -58,7 +58,7 @@
op_list_str = py_tf.TF_GetOpList(lib_handle)
op_list = op_def_pb2.OpList()
op_list.ParseFromString(compat.as_bytes(op_list_str))
- wrappers = py_tf.GetPythonWrappers(op_list_str)
+ wrappers = py_tf.GetEagerPythonWrappers(op_list_str)
# Delete the library handle to release any memory held in C
# that are no longer needed.
diff --git a/tensorflow/python/framework/python_op_gen.i b/tensorflow/python/framework/python_op_gen.i
index 26ec4e8..efcce2f 100644
--- a/tensorflow/python/framework/python_op_gen.i
+++ b/tensorflow/python/framework/python_op_gen.i
@@ -16,10 +16,10 @@
%include "tensorflow/python/platform/base.i"
%{
-#include "tensorflow/python/framework/python_op_gen.h"
+#include "tensorflow/python/eager/python_eager_op_gen.h"
%}
-// Input typemap for GetPythonWrappers.
+// Input typemap for GetEagerPythonWrappers.
// Accepts a python object of 'bytes' type, and converts it to
// a const char* pointer and size_t length. The default typemap
// going from python bytes to const char* tries to decode the
@@ -37,5 +37,5 @@
%ignoreall;
-%unignore tensorflow::GetPythonWrappers;
-%include "tensorflow/python/framework/python_op_gen.h"
+%unignore tensorflow::GetEagerPythonWrappers;
+%include "tensorflow/python/eager/python_eager_op_gen.h"
diff --git a/tensorflow/python/framework/test_util.py b/tensorflow/python/framework/test_util.py
index f954b9d..5a8bc43 100644
--- a/tensorflow/python/framework/test_util.py
+++ b/tensorflow/python/framework/test_util.py
@@ -1014,6 +1014,8 @@
config.graph_options.optimizer_options.opt_level = -1
config.graph_options.rewrite_options.constant_folding = (
rewriter_config_pb2.RewriterConfig.OFF)
+ config.graph_options.rewrite_options.arithmetic_optimization = (
+ rewriter_config_pb2.RewriterConfig.OFF)
return config
if graph is None:
diff --git a/tensorflow/python/grappler/layout_optimizer_test.py b/tensorflow/python/grappler/layout_optimizer_test.py
index 5a84b16..e3dd4b0 100644
--- a/tensorflow/python/grappler/layout_optimizer_test.py
+++ b/tensorflow/python/grappler/layout_optimizer_test.py
@@ -476,7 +476,7 @@
random_seed.set_random_seed(0)
x = random_ops.truncated_normal([1, 784], seed=0)
conv = _two_layer_model(x)
- reduce_sum = math_ops.reduce_sum(conv, axis=[1, 2], keep_dims=True)
+ reduce_sum = math_ops.reduce_sum(conv, axis=[1, 2], keepdims=True)
squeeze = array_ops.squeeze(reduce_sum, axis=[1, 2])
output = array_ops.identity(squeeze)
@@ -506,7 +506,7 @@
random_seed.set_random_seed(0)
x = random_ops.truncated_normal([1, 784], seed=0)
conv = _two_layer_model(x)
- reduce_sum = math_ops.reduce_sum(conv, axis=[0, 1, 2], keep_dims=True)
+ reduce_sum = math_ops.reduce_sum(conv, axis=[0, 1, 2], keepdims=True)
squeeze = array_ops.squeeze(reduce_sum, axis=[0, 1, 2])
output = array_ops.identity(squeeze)
@@ -623,7 +623,7 @@
random_seed.set_random_seed(0)
x = random_ops.truncated_normal([1, 784], seed=0)
conv = _two_layer_model(x)
- reduce_sum = math_ops.reduce_sum(conv, axis=[3], keep_dims=True)
+ reduce_sum = math_ops.reduce_sum(conv, axis=[3], keepdims=True)
output = array_ops.identity(reduce_sum)
with session.Session(config=_get_config(False)) as sess:
@@ -653,7 +653,7 @@
random_seed.set_random_seed(0)
x = random_ops.truncated_normal([1, 784], seed=0)
conv = _two_layer_model(x)
- reduce_sum = math_ops.reduce_sum(conv, axis=[2], keep_dims=True)
+ reduce_sum = math_ops.reduce_sum(conv, axis=[2], keepdims=True)
output = array_ops.identity(reduce_sum)
with session.Session(config=_get_config(False)) as sess:
@@ -682,7 +682,7 @@
random_seed.set_random_seed(0)
x = random_ops.truncated_normal([1, 784], seed=0)
conv = _two_layer_model(x)
- reduce_sum = math_ops.reduce_sum(conv, axis=[2, 3], keep_dims=True)
+ reduce_sum = math_ops.reduce_sum(conv, axis=[2, 3], keepdims=True)
output = array_ops.identity(reduce_sum)
with session.Session(config=_get_config(False)) as sess:
diff --git a/tensorflow/python/keras/_impl/keras/backend.py b/tensorflow/python/keras/_impl/keras/backend.py
index 81a4d2f..449410f 100644
--- a/tensorflow/python/keras/_impl/keras/backend.py
+++ b/tensorflow/python/keras/_impl/keras/backend.py
@@ -3448,7 +3448,7 @@
Returns:
Output tensor.
"""
- # Note: nn.softmax_cross_entropy_with_logits
+ # Note: nn.softmax_cross_entropy_with_logits_v2
# expects logits, Keras expects probabilities.
if not from_logits:
# scale preds so that the class probas of each sample sum to 1
@@ -3512,7 +3512,7 @@
Returns:
A tensor.
"""
- # Note: nn.softmax_cross_entropy_with_logits
+ # Note: nn.sigmoid_cross_entropy_with_logits
# expects logits, Keras expects probabilities.
if not from_logits:
# transform back to logits
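
A sketch of the probabilities-to-logits transform the comment alludes to (the
function name and epsilon are illustrative, not the Keras implementation
verbatim):

```python
import tensorflow as tf

def binary_crossentropy_from_probs(target, output, epsilon=1e-7):
  # Clip away exact 0 and 1, then invert the sigmoid to recover logits.
  output = tf.clip_by_value(output, epsilon, 1.0 - epsilon)
  logits = tf.log(output / (1.0 - output))
  return tf.nn.sigmoid_cross_entropy_with_logits(labels=target, logits=logits)
```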
diff --git a/tensorflow/python/keras/_impl/keras/layers/normalization.py b/tensorflow/python/keras/_impl/keras/layers/normalization.py
index 5462a95..c16fc07 100644
--- a/tensorflow/python/keras/_impl/keras/layers/normalization.py
+++ b/tensorflow/python/keras/_impl/keras/layers/normalization.py
@@ -593,9 +593,9 @@
# used during evaluation, it is more efficient to just update in one
# step and should not make a significant difference in the result.
new_mean = math_ops.reduce_mean(new_mean,
- axis=1, keep_dims=True)
+ axis=1, keepdims=True)
new_variance = math_ops.reduce_mean(new_variance,
- axis=1, keep_dims=True)
+ axis=1, keepdims=True)
def _do_update(var, value):
if in_eager_mode and not self.trainable:
diff --git a/tensorflow/python/kernel_tests/BUILD b/tensorflow/python/kernel_tests/BUILD
index ebbec39..c03c514 100644
--- a/tensorflow/python/kernel_tests/BUILD
+++ b/tensorflow/python/kernel_tests/BUILD
@@ -918,6 +918,20 @@
)
tf_py_test(
+ name = "string_strip_op_test",
+ size = "small",
+ srcs = ["string_strip_op_test.py"],
+ additional_deps = [
+ "//third_party/py/numpy",
+ "//tensorflow/python:array_ops",
+ "//tensorflow/python:client_testlib",
+ "//tensorflow/python:errors",
+ "//tensorflow/python:framework_for_generated_wrappers",
+ "//tensorflow/python:string_ops",
+ ],
+)
+
+tf_py_test(
name = "substr_op_test",
size = "small",
srcs = ["substr_op_test.py"],
@@ -1196,6 +1210,18 @@
)
cuda_py_test(
+ name = "broadcast_to_ops_test",
+ size = "small",
+ srcs = ["broadcast_to_ops_test.py"],
+ additional_deps = [
+ "//third_party/py/numpy",
+ "//tensorflow/python:array_ops",
+ "//tensorflow/python:client",
+ "//tensorflow/python:client_testlib",
+ ],
+)
+
+cuda_py_test(
name = "inplace_ops_test",
size = "small",
srcs = ["inplace_ops_test.py"],
diff --git a/tensorflow/python/kernel_tests/broadcast_to_ops_test.py b/tensorflow/python/kernel_tests/broadcast_to_ops_test.py
new file mode 100644
index 0000000..6a1bd95
--- /dev/null
+++ b/tensorflow/python/kernel_tests/broadcast_to_ops_test.py
@@ -0,0 +1,85 @@
+# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for broadcast_to ops."""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+
+from tensorflow.python.framework import constant_op
+from tensorflow.python.framework import dtypes
+from tensorflow.python.framework import test_util
+from tensorflow.python.ops import array_ops
+from tensorflow.python.platform import test as test_lib
+
+
+class BroadcastToTest(test_util.TensorFlowTestCase):
+
+ def testBroadcastToBasic(self):
+ for dtype in [np.uint8, np.uint16, np.int8, np.int16, np.int32, np.int64]:
+ with self.test_session(use_gpu=True):
+ x = np.array([1, 2, 3], dtype=dtype)
+ v_tf = array_ops.broadcast_to(constant_op.constant(x), [3, 3])
+ v_np = np.broadcast_to(x, [3, 3])
+ self.assertAllEqual(v_tf.eval(), v_np)
+
+ def testBroadcastToString(self):
+ with self.test_session(use_gpu=True):
+ x = np.array([b"1", b"2", b"3"])
+ v_tf = array_ops.broadcast_to(constant_op.constant(x), [3, 3])
+ v_np = np.broadcast_to(x, [3, 3])
+ self.assertAllEqual(v_tf.eval(), v_np)
+
+ def testBroadcastToBool(self):
+ with self.test_session(use_gpu=True):
+ x = np.array([True, False, True], dtype=np.bool)
+ v_tf = array_ops.broadcast_to(constant_op.constant(x), [3, 3])
+ v_np = np.broadcast_to(x, [3, 3])
+ self.assertAllEqual(v_tf.eval(), v_np)
+
+ def testBroadcastToShape(self):
+ for input_dim in range(1, 6):
+ for output_dim in range(input_dim, 6):
+ with self.test_session(use_gpu=True):
+ input_shape = [2] * input_dim
+ output_shape = [2] * output_dim
+ x = np.array(np.random.randint(5, size=input_shape), dtype=np.int32)
+ v_tf = array_ops.broadcast_to(constant_op.constant(x), output_shape)
+ v_np = np.broadcast_to(x, output_shape)
+ self.assertAllEqual(v_tf.eval(), v_np)
+
+ def testBroadcastToScalar(self):
+ with self.test_session(use_gpu=True):
+ x = np.array(1, dtype=np.int32)
+ v_tf = array_ops.broadcast_to(constant_op.constant(x), [3, 3])
+ v_np = np.broadcast_to(x, [3, 3])
+ self.assertAllEqual(v_tf.eval(), v_np)
+
+ def testBroadcastToShapeTypeAndInference(self):
+ for dtype in [dtypes.int32, dtypes.int64]:
+ with self.test_session(use_gpu=True):
+ x = np.array([1, 2, 3])
+ v_tf = array_ops.broadcast_to(
+ constant_op.constant(x),
+ constant_op.constant([3, 3], dtype=dtype))
+ shape = v_tf.get_shape().as_list()
+ v_np = np.broadcast_to(x, [3, 3])
+ self.assertAllEqual(v_tf.eval(), v_np)
+ # check shape inference when shape input is constant
+ self.assertAllEqual(shape, v_np.shape)
+
+if __name__ == "__main__":
+ test_lib.main()
diff --git a/tensorflow/python/kernel_tests/confusion_matrix_test.py b/tensorflow/python/kernel_tests/confusion_matrix_test.py
index 670a625..79e4198 100644
--- a/tensorflow/python/kernel_tests/confusion_matrix_test.py
+++ b/tensorflow/python/kernel_tests/confusion_matrix_test.py
@@ -19,6 +19,7 @@
from __future__ import print_function
import numpy as np
+from six.moves import xrange # pylint: disable=redefined-builtin
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
@@ -104,11 +105,7 @@
d, l, cm_out = sess.run([data, lab, cm], {m_neg: 0.0, m_pos: 1.0, s: 1.0})
truth = np.zeros([2, 2], dtype=np_dtype)
- try:
- range_builder = xrange
- except NameError: # In Python 3.
- range_builder = range
- for i in range_builder(len(d)):
+ for i in xrange(len(d)):
truth[l[i], d[i]] += 1
self.assertEqual(cm_out.dtype, np_dtype)
diff --git a/tensorflow/python/kernel_tests/constant_op_test.py b/tensorflow/python/kernel_tests/constant_op_test.py
index 749313b..107ee37 100644
--- a/tensorflow/python/kernel_tests/constant_op_test.py
+++ b/tensorflow/python/kernel_tests/constant_op_test.py
@@ -65,6 +65,11 @@
self._testCpu(x)
self._testGpu(x)
+ def testInvalidDType(self):
+ # Test case for GitHub issue 18474
+ with self.assertRaises(TypeError):
+ constant_op.constant(dtypes_lib.string, "[,]")
+
def testBFloat16(self):
bfloat16 = dtypes_lib.bfloat16.as_numpy_dtype
self._testAll(np.arange(-15, 15).reshape([2, 3, 5]).astype(bfloat16))
diff --git a/tensorflow/python/kernel_tests/conv3d_transpose_test.py b/tensorflow/python/kernel_tests/conv3d_transpose_test.py
index a8b3af5..8973a45 100644
--- a/tensorflow/python/kernel_tests/conv3d_transpose_test.py
+++ b/tensorflow/python/kernel_tests/conv3d_transpose_test.py
@@ -119,6 +119,18 @@
target = 3.0
self.assertAllClose(target, value[n, d, h, w, k])
+ def testConv3DTransposeShapeMismatch(self):
+ # Test case for GitHub issue 18460
+ x_shape = [2, 2, 3, 4, 3]
+ f_shape = [3, 3, 3, 2, 2]
+ y_shape = [2, 2, 6, 8, 6]
+ strides = [1, 1, 2, 2, 2]
+ np.random.seed(1)
+ x_value = np.random.random_sample(x_shape).astype(np.float64)
+ f_value = np.random.random_sample(f_shape).astype(np.float64)
+ nn_ops.conv3d_transpose(
+ x_value, f_value, y_shape, strides, data_format='NCDHW')
+
def testConv3DTransposeValid(self):
with self.test_session():
strides = [1, 2, 2, 2, 1]
diff --git a/tensorflow/python/kernel_tests/manip_ops_test.py b/tensorflow/python/kernel_tests/manip_ops_test.py
index b8200ac..f314267 100644
--- a/tensorflow/python/kernel_tests/manip_ops_test.py
+++ b/tensorflow/python/kernel_tests/manip_ops_test.py
@@ -20,8 +20,10 @@
import numpy as np
from tensorflow.python.framework import constant_op
+from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors_impl
from tensorflow.python.framework import test_util
+from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gradient_checker
from tensorflow.python.ops import manip_ops
from tensorflow.python.platform import test as test_lib
@@ -88,41 +90,78 @@
x = np.random.rand(3, 2, 1, 1).astype(t)
self._testAll(x + 1j * x, [2, 1, 1, 0], [0, 3, 1, 2])
+ def testNegativeAxis(self):
+ self._testAll(np.random.randint(-100, 100, (5)).astype(np.int32), 3, -1)
+ self._testAll(np.random.randint(-100, 100, (4, 4)).astype(np.int32), 3, -2)
+    # Make sure a negative axis satisfies 0 <= axis + dims < dims.
+ with self.test_session():
+ with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
+ "is out of range"):
+ manip_ops.roll(np.random.randint(-100, 100, (4, 4)).astype(np.int32),
+ 3, -10).eval()
+
+ def testInvalidInputShape(self):
+ # The input should be 1-D or higher, checked in shape function.
+ with self.assertRaisesRegexp(
+ ValueError, "Shape must be at least rank 1 but is rank 0"):
+ manip_ops.roll(7, 1, 0)
+
def testRollInputMustVectorHigherRaises(self):
- tensor = 7
+ # The input should be 1-D or higher, checked in kernel.
+ tensor = array_ops.placeholder(dtype=dtypes.int32)
shift = 1
axis = 0
with self.test_session():
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"input must be 1-D or higher"):
- manip_ops.roll(tensor, shift, axis).eval()
+ manip_ops.roll(tensor, shift, axis).eval(feed_dict={tensor: 7})
+
+ def testInvalidAxisShape(self):
+ # The axis should be a scalar or 1-D, checked in shape function.
+ with self.assertRaisesRegexp(
+ ValueError, "Shape must be at most rank 1 but is rank 2"):
+ manip_ops.roll([[1, 2], [3, 4]], 1, [[0, 1]])
def testRollAxisMustBeScalarOrVectorRaises(self):
+ # The axis should be a scalar or 1-D, checked in kernel.
tensor = [[1, 2], [3, 4]]
shift = 1
- axis = [[0, 1]]
+ axis = array_ops.placeholder(dtype=dtypes.int32)
with self.test_session():
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"axis must be a scalar or a 1-D vector"):
- manip_ops.roll(tensor, shift, axis).eval()
+ manip_ops.roll(tensor, shift, axis).eval(feed_dict={axis: [[0, 1]]})
+
+ def testInvalidShiftShape(self):
+ # The shift should be a scalar or 1-D, checked in shape function.
+ with self.assertRaisesRegexp(
+ ValueError, "Shape must be at most rank 1 but is rank 2"):
+ manip_ops.roll([[1, 2], [3, 4]], [[0, 1]], 1)
def testRollShiftMustBeScalarOrVectorRaises(self):
+ # The shift should be a scalar or 1-D, checked in kernel.
tensor = [[1, 2], [3, 4]]
- shift = [[0, 1]]
+ shift = array_ops.placeholder(dtype=dtypes.int32)
axis = 1
with self.test_session():
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"shift must be a scalar or a 1-D vector"):
- manip_ops.roll(tensor, shift, axis).eval()
+ manip_ops.roll(tensor, shift, axis).eval(feed_dict={shift: [[0, 1]]})
+
+ def testInvalidShiftAndAxisNotEqualShape(self):
+ # The shift and axis must be same size, checked in shape function.
+ with self.assertRaisesRegexp(ValueError, "both shapes must be equal"):
+ manip_ops.roll([[1, 2], [3, 4]], [1], [0, 1])
def testRollShiftAndAxisMustBeSameSizeRaises(self):
+ # The shift and axis must be same size, checked in kernel.
tensor = [[1, 2], [3, 4]]
- shift = [1]
+ shift = array_ops.placeholder(dtype=dtypes.int32)
axis = [0, 1]
with self.test_session():
with self.assertRaisesRegexp(errors_impl.InvalidArgumentError,
"shift and axis must have the same size"):
- manip_ops.roll(tensor, shift, axis).eval()
+ manip_ops.roll(tensor, shift, axis).eval(feed_dict={shift: [1]})
def testRollAxisOutOfRangeRaises(self):
tensor = [1, 2]
diff --git a/tensorflow/python/kernel_tests/norm_op_test.py b/tensorflow/python/kernel_tests/norm_op_test.py
index d85512f..3f71b32 100644
--- a/tensorflow/python/kernel_tests/norm_op_test.py
+++ b/tensorflow/python/kernel_tests/norm_op_test.py
@@ -37,17 +37,17 @@
def testBadOrder(self):
matrix = [[0., 1.], [2., 3.]]
- for ord_ in "foo", -7, -1.1, 0:
+ for ord_ in "fro", -7, -1.1, 0:
with self.assertRaisesRegexp(ValueError,
"'ord' must be a supported vector norm"):
- linalg_ops.norm(matrix, ord="fro")
+ linalg_ops.norm(matrix, ord=ord_)
- for ord_ in "foo", -7, -1.1, 0:
+ for ord_ in "fro", -7, -1.1, 0:
with self.assertRaisesRegexp(ValueError,
"'ord' must be a supported vector norm"):
linalg_ops.norm(matrix, ord=ord_, axis=-1)
- for ord_ in 1.1, 2:
+ for ord_ in "foo", -7, -1.1, 1.1:
with self.assertRaisesRegexp(ValueError,
"'ord' must be a supported matrix norm"):
linalg_ops.norm(matrix, ord=ord_, axis=[-2, -1])
@@ -69,14 +69,14 @@
if use_static_shape_:
tf_matrix = constant_op.constant(matrix)
tf_norm = linalg_ops.norm(
- tf_matrix, ord=ord_, axis=axis_, keep_dims=keep_dims_)
+ tf_matrix, ord=ord_, axis=axis_, keepdims=keep_dims_)
tf_norm_val = sess.run(tf_norm)
else:
tf_matrix = array_ops.placeholder(dtype_)
tf_norm = linalg_ops.norm(
- tf_matrix, ord=ord_, axis=axis_, keep_dims=keep_dims_)
+ tf_matrix, ord=ord_, axis=axis_, keepdims=keep_dims_)
tf_norm_val = sess.run(tf_norm, feed_dict={tf_matrix: matrix})
- self.assertAllClose(np_norm, tf_norm_val)
+ self.assertAllClose(np_norm, tf_norm_val, rtol=1e-5, atol=1e-5)
def Test(self):
is_matrix_norm = (isinstance(axis_, tuple) or
@@ -85,8 +85,6 @@
if ((not is_matrix_norm and ord_ == "fro") or
(is_matrix_norm and is_fancy_p_norm)):
self.skipTest("Not supported by neither numpy.linalg.norm nor tf.norm")
- if is_matrix_norm and ord_ == 2:
- self.skipTest("Not supported by tf.norm")
if ord_ == 'euclidean' or (axis_ is None and len(shape) > 2):
self.skipTest("Not supported by numpy.linalg.norm")
matrix = np.random.randn(*shape_).astype(dtype_)
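
The deleted skip implies `tf.norm` now handles the matrix 2-norm (largest
singular value); a small sketch, values illustrative:

```python
import tensorflow as tf

m = tf.constant([[3.0, 0.0], [0.0, 4.0]])
spectral = tf.norm(m, ord=2, axis=(-2, -1))  # largest singular value
with tf.Session() as sess:
  print(sess.run(spectral))  # 4.0
```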
diff --git a/tensorflow/python/kernel_tests/py_func_test.py b/tensorflow/python/kernel_tests/py_func_test.py
index 5b508b7..b9f44d7 100644
--- a/tensorflow/python/kernel_tests/py_func_test.py
+++ b/tensorflow/python/kernel_tests/py_func_test.py
@@ -52,6 +52,38 @@
"""Encapsulates tests for py_func and eager_py_func."""
# ----- Tests for py_func -----
+ def testRealDataTypes(self):
+ def sum_func(x, y):
+ return x + y
+ for dtype in [dtypes.float16, dtypes.float32, dtypes.float64,
+ dtypes.uint8, dtypes.int8, dtypes.uint16, dtypes.int16,
+ dtypes.int32, dtypes.int64]:
+ with self.test_session():
+ x = constant_op.constant(1, dtype=dtype)
+ y = constant_op.constant(2, dtype=dtype)
+ z = self.evaluate(script_ops.py_func(sum_func, [x, y], dtype))
+ self.assertEqual(z, 3)
+
+ def testComplexDataTypes(self):
+ def sub_func(x, y):
+ return x - y
+ for dtype in [dtypes.complex64, dtypes.complex128]:
+ with self.test_session():
+ x = constant_op.constant(1 + 1j, dtype=dtype)
+ y = constant_op.constant(2 - 2j, dtype=dtype)
+ z = self.evaluate(script_ops.py_func(sub_func, [x, y], dtype))
+ self.assertEqual(z, -1 + 3j)
+
+ def testBoolDataTypes(self):
+ def and_func(x, y):
+ return x and y
+ dtype = dtypes.bool
+ with self.test_session():
+ x = constant_op.constant(True, dtype=dtype)
+ y = constant_op.constant(False, dtype=dtype)
+ z = self.evaluate(script_ops.py_func(and_func, [x, y], dtype))
+ self.assertEqual(z, False)
+
def testSingleType(self):
with self.test_session():
x = constant_op.constant(1.0, dtypes.float32)
diff --git a/tensorflow/python/kernel_tests/random/multinomial_op_test.py b/tensorflow/python/kernel_tests/random/multinomial_op_test.py
index a9dc7b7..051c7d8 100644
--- a/tensorflow/python/kernel_tests/random/multinomial_op_test.py
+++ b/tensorflow/python/kernel_tests/random/multinomial_op_test.py
@@ -46,7 +46,7 @@
logits = array_ops.expand_dims(logits, -1)
# [batch size, num samples]
- return math_ops.argmax(logits + noise, dimension=1)
+ return math_ops.argmax(logits + noise, axis=1)
native_sampler = random_ops.multinomial
diff --git a/tensorflow/python/kernel_tests/random/random_ops_test.py b/tensorflow/python/kernel_tests/random/random_ops_test.py
index df37dd9..e4b5c38 100644
--- a/tensorflow/python/kernel_tests/random/random_ops_test.py
+++ b/tensorflow/python/kernel_tests/random/random_ops_test.py
@@ -228,6 +228,17 @@
print("count = ", count)
self.assertTrue(count < count_limit)
+ def testUniformIntsWithInvalidShape(self):
+ for dtype in dtypes.int32, dtypes.int64:
+ with self.assertRaisesRegexp(
+ ValueError, "Shape must be rank 0 but is rank 1"):
+ random_ops.random_uniform(
+ [1000], minval=[1, 2], maxval=3, dtype=dtype)
+ with self.assertRaisesRegexp(
+ ValueError, "Shape must be rank 0 but is rank 1"):
+ random_ops.random_uniform(
+ [1000], minval=1, maxval=[2, 3], dtype=dtype)
+
# Check that uniform ints actually follow a uniform distribution.
def testUniformInts(self):
minv = -2
diff --git a/tensorflow/python/kernel_tests/string_strip_op_test.py b/tensorflow/python/kernel_tests/string_strip_op_test.py
new file mode 100644
index 0000000..30fd477
--- /dev/null
+++ b/tensorflow/python/kernel_tests/string_strip_op_test.py
@@ -0,0 +1,56 @@
+# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Tests for string_strip_op."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from tensorflow.python.ops import string_ops
+from tensorflow.python.platform import test
+
+
+class StringStripOpTest(test.TestCase):
+ """ Test cases for tf.string_strip."""
+
+ def test_string_strip(self):
+ strings = ["pigs on the wing", "animals"]
+
+ with self.test_session() as sess:
+ output = string_ops.string_strip(strings)
+ output = sess.run(output)
+ self.assertAllEqual(output, [b"pigs on the wing", b"animals"])
+
+ def test_string_strip_2d(self):
+ strings = [["pigs on the wing", "animals"],
+ [" hello ", "\n\tworld \r \n"]]
+
+ with self.test_session() as sess:
+ output = string_ops.string_strip(strings)
+ output = sess.run(output)
+ self.assertAllEqual(output, [[b"pigs on the wing", b"animals"],
+ [b"hello", b"world"]])
+
+ def test_string_strip_with_empty_strings(self):
+ strings = [" hello ", "", "world ", " \t \r \n "]
+
+ with self.test_session() as sess:
+ output = string_ops.string_strip(strings)
+ output = sess.run(output)
+ self.assertAllEqual(output, [b"hello", b"", b"world", b""])
+
+
+if __name__ == "__main__":
+ test.main()
diff --git a/tensorflow/python/lib/core/py_func.cc b/tensorflow/python/lib/core/py_func.cc
index 22317a3..8c6bb79 100644
--- a/tensorflow/python/lib/core/py_func.cc
+++ b/tensorflow/python/lib/core/py_func.cc
@@ -126,6 +126,9 @@
case NPY_INT8:
*tf = DT_INT8;
break;
+ case NPY_UINT16:
+ *tf = DT_UINT16;
+ break;
case NPY_INT16:
*tf = DT_INT16;
break;
diff --git a/tensorflow/python/ops/array_ops.py b/tensorflow/python/ops/array_ops.py
index fa26e07..ceeabe0 100644
--- a/tensorflow/python/ops/array_ops.py
+++ b/tensorflow/python/ops/array_ops.py
@@ -144,6 +144,7 @@
# pylint: disable=redefined-builtin,protected-access
@tf_export("expand_dims")
+@deprecation.deprecated_args(None, "Use the `axis` argument instead", "dim")
def expand_dims(input, axis=None, name=None, dim=None):
"""Inserts a dimension of 1 into a tensor's shape.
@@ -193,11 +194,7 @@
Raises:
ValueError: if both `dim` and `axis` are specified.
"""
- # TODO(aselle): Remove argument dim
- if dim is not None:
- if axis is not None:
- raise ValueError("can't specify both 'dim' and 'axis'")
- axis = dim
+ axis = deprecation.deprecated_argument_lookup("axis", axis, "dim", dim)
return gen_array_ops.expand_dims(input, axis, name)
@@ -2581,6 +2578,8 @@
@tf_export("squeeze")
+@deprecation.deprecated_args(None, "Use the `axis` argument instead",
+ "squeeze_dims")
def squeeze(input, axis=None, name=None, squeeze_dims=None):
# pylint: disable=redefined-builtin
"""Removes dimensions of size 1 from the shape of a tensor.
@@ -2621,10 +2620,8 @@
Raises:
ValueError: When both `squeeze_dims` and `axis` are specified.
"""
- if squeeze_dims is not None:
- if axis is not None:
- raise ValueError("Cannot specify both 'squeeze_dims' and 'axis'")
- axis = squeeze_dims
+ axis = deprecation.deprecated_argument_lookup(
+ "axis", axis, "squeeze_dims", squeeze_dims)
if np.isscalar(axis):
axis = [axis]
return gen_array_ops.squeeze(input, axis, name)
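
A hedged sketch of the deprecation helpers used above (`deprecation` is
TensorFlow's internal utility module; `my_op` and its arguments are
hypothetical):

```python
from tensorflow.python.util import deprecation

@deprecation.deprecated_args(None, "Use the `new_arg` argument instead",
                             "old_arg")
def my_op(x, new_arg=None, old_arg=None):
  # Warns if old_arg is passed, errors if both spellings are passed, and
  # resolves the two spellings to a single value.
  new_arg = deprecation.deprecated_argument_lookup(
      "new_arg", new_arg, "old_arg", old_arg)
  return x if new_arg is None else x + new_arg
```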
diff --git a/tensorflow/python/ops/distributions/categorical.py b/tensorflow/python/ops/distributions/categorical.py
index 66fa9e1..8f25b11 100644
--- a/tensorflow/python/ops/distributions/categorical.py
+++ b/tensorflow/python/ops/distributions/categorical.py
@@ -311,7 +311,7 @@
nn_ops.log_softmax(self.logits) * self.probs, axis=-1)
def _mode(self):
- ret = math_ops.argmax(self.logits, dimension=self._batch_rank)
+ ret = math_ops.argmax(self.logits, axis=self._batch_rank)
ret = math_ops.cast(ret, self.dtype)
ret.set_shape(self.batch_shape)
return ret
diff --git a/tensorflow/python/ops/embedding_ops.py b/tensorflow/python/ops/embedding_ops.py
index f0120f2..9e46739 100644
--- a/tensorflow/python/ops/embedding_ops.py
+++ b/tensorflow/python/ops/embedding_ops.py
@@ -331,11 +331,11 @@
representing sharded embedding tensors. Alternatively, a
`PartitionedVariable`, created by partitioning along dimension 0. Each
element must be appropriately sized for the given `partition_strategy`.
- sp_ids: N x M SparseTensor of int64 ids (typically from FeatureValueToId),
+ sp_ids: N x M `SparseTensor` of int64 ids (typically from FeatureValueToId),
where N is typically batch size and M is arbitrary.
- sp_weights: either a SparseTensor of float / double weights, or None to
- indicate all weights should be taken to be 1. If specified, sp_weights
- must have exactly the same shape and indices as sp_ids.
+ sp_weights: either a `SparseTensor` of float / double weights, or `None` to
+ indicate all weights should be taken to be 1. If specified, `sp_weights`
+ must have exactly the same shape and indices as `sp_ids`.
partition_strategy: A string specifying the partitioning strategy, relevant
if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
is `"mod"`. See `tf.nn.embedding_lookup` for more details.
@@ -351,39 +351,43 @@
Returns:
A dense tensor representing the combined embeddings for the
- sparse ids. For each row in the dense tensor represented by sp_ids, the op
+ sparse ids. For each row in the dense tensor represented by `sp_ids`, the op
looks up the embeddings for all ids in that row, multiplies them by the
corresponding weight, and combines these embeddings as specified.
In other words, if
- shape(combined params) = [p0, p1, ..., pm]
+ `shape(combined params) = [p0, p1, ..., pm]`
and
- shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]
+ `shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]`
then
- shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].
+ `shape(output) = [d0, d1, ..., dn-1, p1, ..., pm]`.
For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are
+ ```python
[0, 0]: id 1, weight 2.0
[0, 1]: id 3, weight 0.5
[1, 0]: id 0, weight 1.0
[2, 3]: id 1, weight 3.0
+ ```
with `combiner`="mean", then the output will be a 3x20 matrix where
+ ```python
output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
output[1, :] = (params[0, :] * 1.0) / 1.0
output[2, :] = (params[1, :] * 3.0) / 3.0
+ ```
Raises:
- TypeError: If sp_ids is not a SparseTensor, or if sp_weights is neither
- None nor SparseTensor.
- ValueError: If combiner is not one of {"mean", "sqrtn", "sum"}.
+ TypeError: If `sp_ids` is not a `SparseTensor`, or if `sp_weights` is
+ neither `None` nor `SparseTensor`.
+ ValueError: If `combiner` is not one of {"mean", "sqrtn", "sum"}.
"""
if combiner is None:
logging.warn("The default value of combiner will change from \"mean\" "
diff --git a/tensorflow/python/ops/histogram_ops.py b/tensorflow/python/ops/histogram_ops.py
index 4a1ef54..ec38d89 100644
--- a/tensorflow/python/ops/histogram_ops.py
+++ b/tensorflow/python/ops/histogram_ops.py
@@ -32,7 +32,6 @@
from tensorflow.python.ops import gen_math_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.util.tf_export import tf_export
-from tensorflow.python.util.tf_export import tf_export
@tf_export('histogram_fixed_width_bins')
diff --git a/tensorflow/python/ops/image_ops_impl.py b/tensorflow/python/ops/image_ops_impl.py
index 3369fe3..601010b 100644
--- a/tensorflow/python/ops/image_ops_impl.py
+++ b/tensorflow/python/ops/image_ops_impl.py
@@ -269,17 +269,7 @@
Raises:
ValueError: if the shape of `image` not supported.
"""
- with ops.name_scope(None, 'random_flip_up_down', [image]) as scope:
- image = ops.convert_to_tensor(image, name='image')
- image = _Assert3DImage(image)
- uniform_random = random_ops.random_uniform([], 0, 1.0, seed=seed)
- mirror_cond = math_ops.less(uniform_random, .5)
- result = control_flow_ops.cond(
- mirror_cond,
- lambda: array_ops.reverse(image, [0]),
- lambda: image,
- name=scope)
- return fix_image_flip_shape(image, result)
+ return _random_flip(image, 0, seed, 'random_flip_up_down')
@tf_export('image.random_flip_left_right')
@@ -301,14 +291,34 @@
Raises:
ValueError: if the shape of `image` not supported.
"""
- with ops.name_scope(None, 'random_flip_left_right', [image]) as scope:
+ return _random_flip(image, 1, seed, 'random_flip_left_right')
+
+
+def _random_flip(image, flip_index, seed, scope_name):
+ """Randomly (50% chance) flip an image along axis `flip_index`.
+ Args:
+    image: A 3-D tensor of shape `[height, width, channels]`.
+ flip_index: The dimension along which to flip the image.
+ Vertical: 0, Horizontal: 1
+ seed: A Python integer. Used to create a random seed. See
+ @{tf.set_random_seed}
+ for behavior.
+ scope_name: Name of the scope in which the ops are added.
+
+ Returns:
+ A 3-D tensor of the same type and shape as `image`.
+
+ Raises:
+    ValueError: if the shape of `image` is not supported.
+ """
+ with ops.name_scope(None, scope_name, [image]) as scope:
image = ops.convert_to_tensor(image, name='image')
image = _Assert3DImage(image)
uniform_random = random_ops.random_uniform([], 0, 1.0, seed=seed)
mirror_cond = math_ops.less(uniform_random, .5)
result = control_flow_ops.cond(
mirror_cond,
- lambda: array_ops.reverse(image, [1]),
+ lambda: array_ops.reverse(image, [flip_index]),
lambda: image,
name=scope)
return fix_image_flip_shape(image, result)
@@ -332,16 +342,7 @@
Raises:
ValueError: if the shape of `image` not supported.
"""
- with ops.name_scope(None, 'flip_left_right', [image]):
- image = ops.convert_to_tensor(image, name='image')
- image = _AssertAtLeast3DImage(image)
- shape = image.get_shape()
- if shape.ndims == 3 or shape.ndims is None:
- return fix_image_flip_shape(image, array_ops.reverse(image, [1]))
- elif shape.ndims == 4:
- return array_ops.reverse(image, [2])
- else:
- raise ValueError('\'image\' must have either 3 or 4 dimensions.')
+ return _flip(image, 1, 'flip_left_right')
@tf_export('image.flip_up_down')
@@ -362,14 +363,35 @@
Raises:
ValueError: if the shape of `image` not supported.
"""
- with ops.name_scope(None, 'flip_up_down', [image]):
+ return _flip(image, 0, 'flip_up_down')
+
+
+def _flip(image, flip_index, scope_name):
+ """Flip an image either horizontally or vertically.
+
+ Outputs the contents of `image` flipped along the dimension `flip_index`.
+
+ See also `reverse()`.
+
+ Args:
+ image: 4-D Tensor of shape `[batch, height, width, channels]` or
+ 3-D Tensor of shape `[height, width, channels]`.
+    flip_index: 0 for vertical, 1 for horizontal.
+
+ Returns:
+ A tensor of the same type and shape as `image`.
+
+ Raises:
+    ValueError: if the shape of `image` is not supported.
+ """
+ with ops.name_scope(None, scope_name, [image]):
image = ops.convert_to_tensor(image, name='image')
image = _AssertAtLeast3DImage(image)
shape = image.get_shape()
if shape.ndims == 3 or shape.ndims is None:
- return fix_image_flip_shape(image, array_ops.reverse(image, [0]))
+ return fix_image_flip_shape(image, array_ops.reverse(image, [flip_index]))
elif shape.ndims == 4:
- return array_ops.reverse(image, [1])
+      return array_ops.reverse(image, [flip_index + 1])
else:
raise ValueError('\'image\' must have either 3 or 4 dimensions.')
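Both public flip ops now funnel into the shared helpers above; an illustrative call pattern (seed value is arbitrary):

```python
import tensorflow as tf

image = tf.zeros([64, 64, 3])                  # [height, width, channels]
up_down = tf.image.flip_up_down(image)         # deterministic, reverses axis 0
maybe_lr = tf.image.random_flip_left_right(image, seed=42)  # 50% chance, axis 1
```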
diff --git a/tensorflow/python/ops/init_ops.py b/tensorflow/python/ops/init_ops.py
index 39b7295..f93bf0a 100644
--- a/tensorflow/python/ops/init_ops.py
+++ b/tensorflow/python/ops/init_ops.py
@@ -39,10 +39,10 @@
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
-from tensorflow.python.ops import linalg_ops
+from tensorflow.python.ops import linalg_ops_impl
+from tensorflow.python.ops import gen_linalg_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
-from tensorflow.python.ops import random_ops
from tensorflow.python.util.deprecation import deprecated
from tensorflow.python.util.tf_export import tf_export
@@ -529,7 +529,7 @@
# Generate a random matrix
a = random_ops.random_normal(flat_shape, dtype=dtype, seed=self.seed)
# Compute the qr factorization
- q, r = linalg_ops.qr(a, full_matrices=False)
+ q, r = gen_linalg_ops.qr(a, full_matrices=False)
# Make Q uniform
d = array_ops.diag_part(r)
q *= math_ops.sign(d)
@@ -577,7 +577,7 @@
a = random_ops.random_normal([shape[-1], shape[-1]],
dtype=dtype, seed=self.seed)
# Compute the qr factorization
- q, r = linalg_ops.qr(a, full_matrices=False)
+ q, r = gen_linalg_ops.qr(a, full_matrices=False)
# Make Q uniform
d = array_ops.diag_part(r)
q *= math_ops.sign(d)
@@ -636,7 +636,7 @@
a = random_ops.random_normal([n, n], dtype=self.dtype, seed=self.seed)
if self.seed:
self.seed += 1
- q, r = linalg_ops.qr(a)
+ q, r = gen_linalg_ops.qr(a)
d = array_ops.diag_part(r)
# make q uniform
q *= math_ops.sign(d)
@@ -723,7 +723,7 @@
raise ValueError("The dimension of the matrices must be the same.")
n = p1.shape.as_list()[0]
kernel2x2 = {}
- eye = linalg_ops.eye(n, dtype=self.dtype)
+ eye = linalg_ops_impl.eye(n, dtype=self.dtype)
kernel2x2[0, 0] = math_ops.matmul(p1, p2)
kernel2x2[0, 1] = math_ops.matmul(p1, (eye - p2))
kernel2x2[1, 0] = math_ops.matmul((eye - p1), p2)
@@ -848,7 +848,7 @@
"""
n = projection_matrix.shape.as_list()[0]
kernel = {}
- eye = linalg_ops.eye(n, dtype=self.dtype)
+ eye = linalg_ops_impl.eye(n, dtype=self.dtype)
kernel[0] = projection_matrix
kernel[1] = eye - projection_matrix
return kernel
@@ -976,7 +976,7 @@
if p1_shape != p2.shape.as_list() or p1_shape != p3.shape.as_list():
raise ValueError("The dimension of the matrices must be the same.")
n = p1_shape[0]
- eye = linalg_ops.eye(n, dtype=self.dtype)
+ eye = linalg_ops_impl.eye(n, dtype=self.dtype)
kernel2x2x2 = {}
def matmul(p1, p2, p3):
return math_ops.matmul(math_ops.matmul(p1, p2), p3)
@@ -1084,7 +1084,7 @@
"Identity matrix initializer can only be used for 2D matrices.")
if dtype is None:
dtype = self.dtype
- initializer = linalg_ops.eye(*full_shape, dtype=dtype)
+ initializer = linalg_ops_impl.eye(*full_shape, dtype=dtype)
if partition_info is not None:
initializer = array_ops.slice(initializer, partition_info.var_offset,
shape)
diff --git a/tensorflow/python/ops/linalg_ops.py b/tensorflow/python/ops/linalg_ops.py
index 170861b..a0dfa54 100644
--- a/tensorflow/python/ops/linalg_ops.py
+++ b/tensorflow/python/ops/linalg_ops.py
@@ -24,12 +24,13 @@
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
+from tensorflow.python.ops import functional_ops
from tensorflow.python.ops import gen_linalg_ops
+from tensorflow.python.ops import linalg_ops_impl
from tensorflow.python.ops import math_ops
# pylint: disable=wildcard-import
from tensorflow.python.ops.gen_linalg_ops import *
# pylint: enable=wildcard-import
-from tensorflow.python.util import compat
from tensorflow.python.util import deprecation
from tensorflow.python.util.tf_export import tf_export
@@ -159,36 +160,11 @@
Returns:
A `Tensor` of shape `batch_shape + [num_rows, num_columns]`
"""
- with ops.name_scope(
- name, default_name='eye', values=[num_rows, num_columns, batch_shape]):
- is_square = num_columns is None
- batch_shape = [] if batch_shape is None else batch_shape
- num_columns = num_rows if num_columns is None else num_columns
- if isinstance(num_rows, ops.Tensor) or isinstance(
- num_columns, ops.Tensor) or isinstance(batch_shape, ops.Tensor):
- batch_shape = ops.convert_to_tensor(
- batch_shape, name='shape', dtype=dtypes.int32)
- diag_size = math_ops.minimum(num_rows, num_columns)
- diag_shape = array_ops.concat((batch_shape, [diag_size]), 0)
- if not is_square:
- shape = array_ops.concat((batch_shape, [num_rows, num_columns]), 0)
- else:
- if not isinstance(num_rows, compat.integral_types) or not isinstance(
- num_columns, compat.integral_types):
- raise TypeError(
- 'num_rows and num_columns must be positive integer values.')
- batch_shape = [dim for dim in batch_shape]
- is_square = num_rows == num_columns
- diag_shape = batch_shape + [np.minimum(num_rows, num_columns)]
- if not is_square:
- shape = batch_shape + [num_rows, num_columns]
-
- diag_ones = array_ops.ones(diag_shape, dtype=dtype)
- if is_square:
- return array_ops.matrix_diag(diag_ones)
- else:
- zero_matrix = array_ops.zeros(shape, dtype=dtype)
- return array_ops.matrix_set_diag(zero_matrix, diag_ones)
+ return linalg_ops_impl.eye(num_rows,
+ num_columns=num_columns,
+ batch_shape=batch_shape,
+ dtype=dtype,
+ name=name)
@tf_export('matrix_solve_ls', 'linalg.lstsq')
@@ -454,7 +430,7 @@
This function can compute several different vector norms (the 1-norm, the
Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and
- matrix norms (Frobenius, 1-norm, and inf-norm).
+  matrix norms (Frobenius, 1-norm, 2-norm, and inf-norm).
Args:
tensor: `Tensor` of types `float32`, `float64`, `complex64`, `complex128`
@@ -465,7 +441,7 @@
Some restrictions apply:
a) The Frobenius norm `fro` is not defined for vectors,
b) If axis is a 2-tuple (matrix norm), only 'euclidean', 'fro', `1`,
- `np.inf` are supported.
+ `2`, `np.inf` are supported.
See the description of `axis` on how to compute norms for a batch of
vectors or matrices stored in a tensor.
axis: If `axis` is `None` (the default), the input is considered a vector
@@ -521,8 +497,7 @@
axis[0] == axis[1]):
raise ValueError(
"'axis' must be None, an integer, or a tuple of 2 unique integers")
- # TODO(rmlarsen): Implement matrix 2-norm using tf.svd().
- supported_matrix_norms = ['euclidean', 'fro', 1, np.inf]
+ supported_matrix_norms = ['euclidean', 'fro', 1, 2, np.inf]
if ord not in supported_matrix_norms:
raise ValueError("'ord' must be a supported matrix norm in %s, got %s" %
(supported_matrix_norms, ord))
@@ -539,12 +514,34 @@
with ops.name_scope(name, 'norm', [tensor]):
tensor = ops.convert_to_tensor(tensor)
+
if ord in ['fro', 'euclidean', 2, 2.0]:
- # TODO(rmlarsen): Move 2-norm to a separate clause once we support it for
- # matrices.
- result = math_ops.sqrt(
- math_ops.reduce_sum(
- tensor * math_ops.conj(tensor), axis, keepdims=True))
+ if is_matrix_norm and ord in [2, 2.0]:
+ rank = array_ops.rank(tensor)
+ positive_axis = functional_ops.map_fn(
+ lambda i: control_flow_ops.cond(i >= 0, lambda: i, lambda: i + rank),
+ ops.convert_to_tensor(axis))
+ axes = math_ops.range(rank)
+ perm_before = array_ops.concat(
+ [array_ops.setdiff1d(axes, positive_axis)[0], positive_axis],
+ axis=0)
+ perm_after = functional_ops.map_fn(
+ lambda i: math_ops.cast(
+ array_ops.squeeze(
+ array_ops.where(math_ops.equal(perm_before, i))),
+ dtype=dtypes.int32), axes)
+ permed = array_ops.transpose(tensor, perm=perm_before)
+ matrix_2_norm = array_ops.expand_dims(
+ math_ops.reduce_max(
+ math_ops.abs(gen_linalg_ops.svd(permed, compute_uv=False)[0]),
+ axis=-1,
+ keepdims=True),
+ axis=-1)
+ result = array_ops.transpose(matrix_2_norm, perm=perm_after)
+ else:
+ result = math_ops.sqrt(
+ math_ops.reduce_sum(
+ tensor * math_ops.conj(tensor), axis, keepdims=True))
else:
result = math_ops.abs(tensor)
if ord == 1:
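With the SVD branch above, `ord=2` over a 2-tuple axis now returns the spectral norm (largest singular value); a small sanity sketch, assuming the TF 1.8 `tf.norm` endpoint:

```python
import tensorflow as tf

a = tf.constant([[3.0, 0.0], [0.0, 4.0]])
spectral = tf.norm(a, ord=2, axis=(-2, -1))       # 4.0: largest singular value
frobenius = tf.norm(a, ord='fro', axis=(-2, -1))  # 5.0: sqrt(3**2 + 4**2)
```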
diff --git a/tensorflow/python/ops/linalg_ops_impl.py b/tensorflow/python/ops/linalg_ops_impl.py
new file mode 100644
index 0000000..e7c89f6
--- /dev/null
+++ b/tensorflow/python/ops/linalg_ops_impl.py
@@ -0,0 +1,73 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Operations for linear algebra."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+
+from tensorflow.python.framework import dtypes
+from tensorflow.python.framework import ops
+from tensorflow.python.ops import array_ops
+from tensorflow.python.ops import math_ops
+from tensorflow.python.util import compat
+
+# Names below are lower_case.
+# pylint: disable=invalid-name
+
+
+def eye(num_rows,
+ num_columns=None,
+ batch_shape=None,
+ dtype=dtypes.float32,
+ name=None):
+ """Construct an identity matrix, or a batch of matrices.
+
+ See `linalg_ops.eye`.
+ """
+ with ops.name_scope(
+ name, default_name='eye', values=[num_rows, num_columns, batch_shape]):
+ is_square = num_columns is None
+ batch_shape = [] if batch_shape is None else batch_shape
+ num_columns = num_rows if num_columns is None else num_columns
+ if isinstance(num_rows, ops.Tensor) or isinstance(
+ num_columns, ops.Tensor) or isinstance(batch_shape, ops.Tensor):
+ batch_shape = ops.convert_to_tensor(
+ batch_shape, name='shape', dtype=dtypes.int32)
+ diag_size = math_ops.minimum(num_rows, num_columns)
+ diag_shape = array_ops.concat((batch_shape, [diag_size]), 0)
+ if not is_square:
+ shape = array_ops.concat((batch_shape, [num_rows, num_columns]), 0)
+ else:
+ if not isinstance(num_rows, compat.integral_types) or not isinstance(
+ num_columns, compat.integral_types):
+ raise TypeError(
+ 'num_rows and num_columns must be positive integer values.')
+ batch_shape = [dim for dim in batch_shape]
+ is_square = num_rows == num_columns
+ diag_shape = batch_shape + [np.minimum(num_rows, num_columns)]
+ if not is_square:
+ shape = batch_shape + [num_rows, num_columns]
+
+ diag_ones = array_ops.ones(diag_shape, dtype=dtype)
+ if is_square:
+ return array_ops.matrix_diag(diag_ones)
+ else:
+ zero_matrix = array_ops.zeros(shape, dtype=dtype)
+ return array_ops.matrix_set_diag(zero_matrix, diag_ones)
+
+# pylint: enable=invalid-name
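The public `tf.eye` endpoint is unchanged by the move; a brief usage sketch of the relocated helper:

```python
import tensorflow as tf

square = tf.eye(2)                  # 2x2 identity
rect = tf.eye(2, num_columns=3)     # 2x3: ones on the main diagonal
batch = tf.eye(2, batch_shape=[4])  # four 2x2 identity matrices
```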
diff --git a/tensorflow/python/ops/losses/losses_impl.py b/tensorflow/python/ops/losses/losses_impl.py
index 34ca1ad..9fc545c 100644
--- a/tensorflow/python/ops/losses/losses_impl.py
+++ b/tensorflow/python/ops/losses/losses_impl.py
@@ -29,6 +29,7 @@
from tensorflow.python.ops import weights_broadcast_ops
from tensorflow.python.ops.losses import util
from tensorflow.python.util.deprecation import deprecated_args
+from tensorflow.python.util.deprecation import deprecated_argument_lookup
from tensorflow.python.util.tf_export import tf_export
@@ -306,11 +307,8 @@
ValueError: If `predictions` shape doesn't match `labels` shape, or
`axis`, `labels`, `predictions` or `weights` is `None`.
"""
- if dim is not None:
- if axis is not None:
- raise ValueError("Cannot specify both 'axis' and 'dim'")
- axis = dim
- if axis is None and dim is None:
+ axis = deprecated_argument_lookup("axis", axis, "dim", dim)
+ if axis is None:
raise ValueError("You must specify 'axis'.")
if labels is None:
raise ValueError("labels must not be None.")
@@ -696,7 +694,7 @@
onehot_labels, logits, weights=1.0, label_smoothing=0, scope=None,
loss_collection=ops.GraphKeys.LOSSES,
reduction=Reduction.SUM_BY_NONZERO_WEIGHTS):
- """Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits.
+ """Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits_v2.
`weights` acts as a coefficient for the loss. If a scalar is provided,
then the loss is simply scaled by the given value. If `weights` is a
@@ -707,11 +705,16 @@
new_onehot_labels = onehot_labels * (1 - label_smoothing)
+ label_smoothing / num_classes
+ Note that `onehot_labels` and `logits` must have the same shape,
+ e.g. `[batch_size, num_classes]`. The shape of `weights` must be
+  broadcastable to the loss, whose shape is determined by the shape of
+  `logits`. If the shape of `logits` is `[batch_size, num_classes]`, the
+  loss is a `Tensor` of shape `[batch_size]`.
+
Args:
- onehot_labels: `[batch_size, num_classes]` target one-hot-encoded labels.
- logits: `[batch_size, num_classes]` logits outputs of the network .
- weights: Optional `Tensor` whose rank is either 0, or rank 1 and is
- broadcastable to the loss which is a `Tensor` of shape `[batch_size]`.
+ onehot_labels: One-hot-encoded labels.
+ logits: Logits outputs of the network.
+ weights: Optional `Tensor` that is broadcastable to loss.
label_smoothing: If greater than 0 then smooth the labels.
scope: the scope for the operations performed in computing the loss.
loss_collection: collection to which the loss will be added.
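A shape-only sketch of the contract documented above (values are arbitrary):

```python
import tensorflow as tf

logits = tf.random_normal([8, 5])        # [batch_size, num_classes]
onehot = tf.one_hot(tf.range(8) % 5, 5)  # must match logits' shape
loss = tf.losses.softmax_cross_entropy(onehot_labels=onehot, logits=logits)
```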
diff --git a/tensorflow/python/ops/math_ops.py b/tensorflow/python/ops/math_ops.py
index 2b04866..2feb88c 100644
--- a/tensorflow/python/ops/math_ops.py
+++ b/tensorflow/python/ops/math_ops.py
@@ -211,11 +211,9 @@
name=None,
dimension=None,
output_type=dtypes.int64):
- if dimension is not None:
- if axis is not None:
- raise ValueError("Cannot specify both 'axis' and 'dimension'")
- axis = dimension
- elif axis is None:
+ axis = deprecation.deprecated_argument_lookup(
+ "axis", axis, "dimension", dimension)
+ if axis is None:
axis = 0
return gen_math_ops.arg_max(input, axis, name=name, output_type=output_type)
@@ -231,11 +229,9 @@
name=None,
dimension=None,
output_type=dtypes.int64):
- if dimension is not None:
- if axis is not None:
- raise ValueError("Cannot specify both 'axis' and 'dimension'")
- axis = dimension
- elif axis is None:
+ axis = deprecation.deprecated_argument_lookup(
+ "axis", axis, "dimension", dimension)
+ if axis is None:
axis = 0
return gen_math_ops.arg_min(input, axis, name=name, output_type=output_type)
@@ -761,13 +757,25 @@
tf.cast(x, tf.int32) # [1, 2], dtype=tf.int32
```
+ The operation supports data types (for `x` and `dtype`) of
+ `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `float16`, `float32`,
+ `float64`, `complex64`, `complex128`, `bfloat16`. In case of casting from
+ complex types (`complex64`, `complex128`) to real types, only the real part
+ of `x` is returned. In case of casting from real types to complex types
+ (`complex64`, `complex128`), the imaginary part of the returned value is set
+ to `0`. The handling of complex types here matches the behavior of numpy.
+
Args:
- x: A `Tensor` or `SparseTensor`.
- dtype: The destination type.
+ x: A `Tensor` or `SparseTensor` of numeric type. It could be
+ `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`,
+ `float16`, `float32`, `float64`, `complex64`, `complex128`, `bfloat16`.
+    dtype: The destination type. The list of supported dtypes is the same
+      as for `x`.
name: A name for the operation (optional).
Returns:
- A `Tensor` or `SparseTensor` with same shape as `x`.
+    A `Tensor` or `SparseTensor` with the same shape as `x` and
+      the same type as `dtype`.
Raises:
TypeError: If `x` cannot be cast to the `dtype`.
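The complex-type behavior documented above, in a short example (matches numpy):

```python
import tensorflow as tf

z = tf.constant([1 + 2j, 3 - 4j], dtype=tf.complex64)
re = tf.cast(z, tf.float32)       # [1.0, 3.0]: imaginary parts are dropped
back = tf.cast(re, tf.complex64)  # imaginary parts are set to 0
```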
@@ -1634,7 +1642,7 @@
tensor with a single element is returned.
Args:
- input_tensor: The tensor to reduce. Should have numeric type.
+ input_tensor: The tensor to reduce. Should have real numeric type.
axis: The dimensions to reduce. If `None` (the default),
reduces all dimensions. Must be in the range
`[-rank(input_tensor), rank(input_tensor))`.
@@ -1683,7 +1691,7 @@
tensor with a single element is returned.
Args:
- input_tensor: The tensor to reduce. Should have numeric type.
+ input_tensor: The tensor to reduce. Should have real numeric type.
axis: The dimensions to reduce. If `None` (the default),
reduces all dimensions. Must be in the range
`[-rank(input_tensor), rank(input_tensor))`.
diff --git a/tensorflow/python/ops/nn.py b/tensorflow/python/ops/nn.py
index 244702d..1d0d9a5 100644
--- a/tensorflow/python/ops/nn.py
+++ b/tensorflow/python/ops/nn.py
@@ -98,6 +98,7 @@
@@fixed_unigram_candidate_sampler
@@compute_accidental_hits
@@quantized_conv2d
+@@quantized_relu
@@quantized_relu_x
@@quantized_max_pool
@@quantized_avg_pool
diff --git a/tensorflow/python/ops/nn_impl.py b/tensorflow/python/ops/nn_impl.py
index 47cc4da..d0d5ed0 100644
--- a/tensorflow/python/ops/nn_impl.py
+++ b/tensorflow/python/ops/nn_impl.py
@@ -987,7 +987,7 @@
class biases.
labels: A `Tensor` of type `int64` and shape `[batch_size,
num_true]`. The target classes. Note that this format differs from
- the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
+ the `labels` argument of `nn.softmax_cross_entropy_with_logits_v2`.
inputs: A `Tensor` of shape `[batch_size, dim]`. The forward
activations of the input network.
num_sampled: An `int`. The number of classes to randomly sample per batch.
@@ -1012,7 +1012,7 @@
out_logits: `Tensor` object with shape
`[batch_size, num_true + num_sampled]`, for passing to either
`nn.sigmoid_cross_entropy_with_logits` (NCE) or
- `nn.softmax_cross_entropy_with_logits` (sampled softmax).
+ `nn.softmax_cross_entropy_with_logits_v2` (sampled softmax).
out_labels: A Tensor object with the same shape as `out_logits`.
"""
@@ -1285,7 +1285,7 @@
logits = tf.matmul(inputs, tf.transpose(weights))
logits = tf.nn.bias_add(logits, biases)
labels_one_hot = tf.one_hot(labels, n_classes)
- loss = tf.nn.softmax_cross_entropy_with_logits(
+ loss = tf.nn.softmax_cross_entropy_with_logits_v2(
labels=labels_one_hot,
logits=logits)
```
@@ -1303,7 +1303,7 @@
biases: A `Tensor` of shape `[num_classes]`. The class biases.
labels: A `Tensor` of type `int64` and shape `[batch_size,
num_true]`. The target classes. Note that this format differs from
- the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
+ the `labels` argument of `nn.softmax_cross_entropy_with_logits_v2`.
inputs: A `Tensor` of shape `[batch_size, dim]`. The forward
activations of the input network.
num_sampled: An `int`. The number of classes to randomly sample per batch.
@@ -1340,7 +1340,8 @@
partition_strategy=partition_strategy,
name=name,
seed=seed)
- sampled_losses = nn_ops.softmax_cross_entropy_with_logits(
+ labels = array_ops.stop_gradient(labels, name="labels_stop_gradient")
+ sampled_losses = nn_ops.softmax_cross_entropy_with_logits_v2(
labels=labels, logits=logits)
# sampled_losses is a [batch_size] tensor.
return sampled_losses
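Why the new `stop_gradient` matters, as a sketch (reasoning inferred from the `_v2` migration): `softmax_cross_entropy_with_logits_v2` backpropagates into `labels` by default, so the sampled labels are frozen to preserve the previous behavior:

```python
import tensorflow as tf

logits = tf.random_normal([4, 3])
labels_one_hot = tf.one_hot([0, 1, 2, 0], 3)
labels = tf.stop_gradient(labels_one_hot)  # no gradient flows into labels
loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits)
```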
diff --git a/tensorflow/python/ops/nn_ops.py b/tensorflow/python/ops/nn_ops.py
index bb454b3..cd07550 100644
--- a/tensorflow/python/ops/nn_ops.py
+++ b/tensorflow/python/ops/nn_ops.py
@@ -1155,7 +1155,7 @@
Returns:
A `Tensor` with the same type as `value`.
- Output shape with `'VALID`` padding is:
+ Output shape with `'VALID'` padding is:
[batch, height - 2 * (filter_width - 1),
width - 2 * (filter_height - 1), out_channels].
@@ -1458,10 +1458,10 @@
if isinstance(output_shape, (list, np.ndarray)):
# output_shape's shape should be == [5] if reached this point.
- if not filter.get_shape()[3].is_compatible_with(output_shape[4]):
+ if not filter.get_shape()[3].is_compatible_with(output_shape[axis]):
raise ValueError(
"output_shape does not match filter's output channels, "
- "{} != {}".format(output_shape[4],
+ "{} != {}".format(output_shape[axis],
filter.get_shape()[3]))
if padding != "VALID" and padding != "SAME":
@@ -1986,7 +1986,7 @@
must provide a single specific index for the true class for each row of
`logits` (each minibatch entry). For soft softmax classification with
a probability distribution for each entry, see
- `softmax_cross_entropy_with_logits`.
+ `softmax_cross_entropy_with_logits_v2`.
**WARNING:** This op expects unscaled logits, since it performs a `softmax`
on `logits` internally for efficiency. Do not call this op with the
diff --git a/tensorflow/python/ops/rnn_cell_impl.py b/tensorflow/python/ops/rnn_cell_impl.py
index 9251e98..86dc053 100644
--- a/tensorflow/python/ops/rnn_cell_impl.py
+++ b/tensorflow/python/ops/rnn_cell_impl.py
@@ -617,9 +617,9 @@
Args:
inputs: `2-D` tensor with shape `[batch_size, input_size]`.
state: An `LSTMStateTuple` of state tensors, each shaped
- `[batch_size, self.state_size]`, if `state_is_tuple` has been set to
+ `[batch_size, num_units]`, if `state_is_tuple` has been set to
`True`. Otherwise, a `Tensor` shaped
- `[batch_size, 2 * self.state_size]`.
+ `[batch_size, 2 * num_units]`.
Returns:
A pair containing the new hidden state, and the new state (either a
diff --git a/tensorflow/python/profiler/tfprof_logger_test.py b/tensorflow/python/profiler/tfprof_logger_test.py
index 141144f..caf3869 100644
--- a/tensorflow/python/profiler/tfprof_logger_test.py
+++ b/tensorflow/python/profiler/tfprof_logger_test.py
@@ -38,7 +38,7 @@
return math_ops.matmul(a, b)
# pylint: disable=pointless-string-statement
- """# TODO(xpan): This this out of core so it doesn't depend on contrib.
+ """# TODO(xpan): This out of core so it doesn't depend on contrib.
def testFillMissingShape(self):
a, b, y = self._BuildSmallPlaceholderlModel()
run_options = config_pb2.RunOptions(
diff --git a/tensorflow/python/tools/saved_model_cli.py b/tensorflow/python/tools/saved_model_cli.py
index b88be4a..73ea85a 100644
--- a/tensorflow/python/tools/saved_model_cli.py
+++ b/tensorflow/python/tools/saved_model_cli.py
@@ -41,6 +41,7 @@
from tensorflow.python.framework import meta_graph as meta_graph_lib
from tensorflow.python.framework import ops as ops_lib
from tensorflow.python.platform import app # pylint: disable=unused-import
+from tensorflow.python.lib.io import file_io
from tensorflow.python.saved_model import loader
from tensorflow.python.tools import saved_model_utils
@@ -543,7 +544,7 @@
input_examples = preprocess_input_examples_arg_string(input_examples_str)
for input_tensor_key, (filename, variable_name) in inputs.items():
- data = np.load(filename)
+ data = np.load(file_io.FileIO(filename, mode='r'))
# When a variable_name key is specified for the input file
if variable_name:
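Routing `np.load` through `file_io.FileIO` lets `--inputs` reference any filesystem TensorFlow supports, not just local paths; a sketch with a hypothetical GCS path:

```python
import numpy as np
from tensorflow.python.lib.io import file_io

# 'gs://my-bucket/inputs.npy' is a made-up path for illustration.
data = np.load(file_io.FileIO('gs://my-bucket/inputs.npy', mode='r'))
```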
diff --git a/tensorflow/python/training/saver_test.py b/tensorflow/python/training/saver_test.py
index 3867c0d..7049529 100644
--- a/tensorflow/python/training/saver_test.py
+++ b/tensorflow/python/training/saver_test.py
@@ -2731,7 +2731,7 @@
# The rest of the variables.
rest_variables = list(
set(variables.global_variables()) - set(var_list.keys()))
- init_rest_op = variables.initialize_variables(rest_variables)
+ init_rest_op = variables.variables_initializer(rest_variables)
with self.test_session(graph=graph) as sess:
saver = saver_module.Saver(var_list=var_list, max_to_keep=1)
diff --git a/tensorflow/python/util/compat.py b/tensorflow/python/util/compat.py
index 4163fca..3358ffe 100644
--- a/tensorflow/python/util/compat.py
+++ b/tensorflow/python/util/compat.py
@@ -42,10 +42,8 @@
from tensorflow.python.util.all_util import remove_undocumented
from tensorflow.python.util.tf_export import tf_export
-from tensorflow.python.util.tf_export import tf_export
-@tf_export('compat.as_bytes', 'compat.as_str')
def as_bytes(bytes_or_text, encoding='utf-8'):
"""Converts either bytes or unicode to `bytes`, using utf-8 encoding for text.
@@ -68,7 +66,6 @@
(bytes_or_text,))
-@tf_export('compat.as_text')
def as_text(bytes_or_text, encoding='utf-8'):
"""Returns the given argument as a unicode string.
@@ -93,8 +90,12 @@
# Convert an object to a `str` in both Python 2 and 3.
if _six.PY2:
as_str = as_bytes
+ tf_export('compat.as_bytes', 'compat.as_str')(as_bytes)
+ tf_export('compat.as_text')(as_text)
else:
as_str = as_text
+ tf_export('compat.as_bytes')(as_bytes)
+ tf_export('compat.as_text', 'compat.as_str')(as_text)
@tf_export('compat.as_str_any')
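The conditional exports above exist because `compat.as_str` is a version-dependent alias; its observable behavior, in brief:

```python
from tensorflow.python.util import compat

compat.as_bytes(u'caf\u00e9')   # b'caf\xc3\xa9' on both Python 2 and 3
compat.as_text(b'caf\xc3\xa9')  # u'caf\u00e9' on both
# compat.as_str is as_bytes on Python 2 and as_text on Python 3.
```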
diff --git a/tensorflow/stream_executor/cuda/cuda_dnn.cc b/tensorflow/stream_executor/cuda/cuda_dnn.cc
index 640f270..102419a 100644
--- a/tensorflow/stream_executor/cuda/cuda_dnn.cc
+++ b/tensorflow/stream_executor/cuda/cuda_dnn.cc
@@ -524,11 +524,12 @@
ToString(status))};
}
-port::StatusOr<std::tuple<int, int, int>> CudnnSupport::GetVersion() {
+port::StatusOr<perftools::gputools::dnn::VersionInfo>
+CudnnSupport::GetVersion() {
CudnnVersion version;
TF_RETURN_IF_ERROR(GetLoadedCudnnVersion(&version));
- return std::make_tuple(version.major_version, version.minor_version,
- version.patch_level);
+ return perftools::gputools::dnn::VersionInfo(
+ version.major_version, version.minor_version, version.patch_level);
}
// Turns a BatchDescriptor structure into a cudnn tensor handle within a scope.
diff --git a/tensorflow/stream_executor/cuda/cuda_dnn.h b/tensorflow/stream_executor/cuda/cuda_dnn.h
index e6d12bf..5ded7cf 100644
--- a/tensorflow/stream_executor/cuda/cuda_dnn.h
+++ b/tensorflow/stream_executor/cuda/cuda_dnn.h
@@ -45,7 +45,7 @@
~CudnnSupport() override;
port::Status Init() override;
- port::StatusOr<std::tuple<int, int, int>> GetVersion() override;
+ port::StatusOr<perftools::gputools::dnn::VersionInfo> GetVersion() override;
port::StatusOr<std::unique_ptr<dnn::RnnDescriptor>> createRnnDescriptor(
int num_layers, int hidden_size, int input_size,
diff --git a/tensorflow/stream_executor/cuda/cuda_driver.cc b/tensorflow/stream_executor/cuda/cuda_driver.cc
index fedf4f5..71cab14 100644
--- a/tensorflow/stream_executor/cuda/cuda_driver.cc
+++ b/tensorflow/stream_executor/cuda/cuda_driver.cc
@@ -37,14 +37,6 @@
#include "tensorflow/stream_executor/platform/port.h"
#include "tensorflow/stream_executor/lib/inlined_vector.h"
-#if defined(PLATFORM_WINDOWS)
-// TODO: in windows ARRAYSIZE is defined in winnt.h but including it
-// here creates a conflict with cuda.h - for now define it here.
-#define ARRAYSIZE(a) \
- ((sizeof(a) / sizeof(*(a))) / \
- static_cast<size_t>(!(sizeof(a) % sizeof(*(a)))))
-#endif
-
bool FLAGS_gpuexec_cuda_driver_inject_init_error = false;
bool FLAGS_gpuexec_cuda_sync_around_driver_calls = false;
bool FLAGS_gpuexec_cuda_device_0_only = false;
@@ -719,15 +711,15 @@
port::bit_cast<void *>(uintptr_t(info_log_buffer_bytes)),
port::bit_cast<void *>(info_log_buffer.data()),
port::bit_cast<void *>(uintptr_t(log_verbose))};
- CHECK(ARRAYSIZE(options) == ARRAYSIZE(option_values));
+ CHECK(TF_ARRAYSIZE(options) == TF_ARRAYSIZE(option_values));
CUresult res;
{
// TODO(leary) Need to see if NVIDIA can expunge the leakiness in their
// module loading: see http://b/13248943
- res = cuModuleLoadDataEx(module, ptx_data, ARRAYSIZE(options), options,
- option_values);
+ res = cuModuleLoadDataEx(module, ptx_data, TF_ARRAYSIZE(options),
+ options, option_values);
}
// The PTX JIT mutates the values in the option values array to reflect the
diff --git a/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc b/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc
index 9700dac..7c87d33 100644
--- a/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc
+++ b/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc
@@ -1126,7 +1126,7 @@
builder.set_name(device_name);
}
- for (size_t i = 0; i < ARRAYSIZE(kAllUnqueryableDeviceParams); i++) {
+ for (size_t i = 0; i < TF_ARRAYSIZE(kAllUnqueryableDeviceParams); i++) {
const auto ¶ms = kAllUnqueryableDeviceParams[i];
if (params.cc_major == cc_major_ && params.cc_minor == cc_minor_) {
builder.set_blocks_per_core_limit(params.blocks_per_core_limit);
diff --git a/tensorflow/stream_executor/dnn.h b/tensorflow/stream_executor/dnn.h
index 8e202d1..39f21d8 100644
--- a/tensorflow/stream_executor/dnn.h
+++ b/tensorflow/stream_executor/dnn.h
@@ -875,6 +875,22 @@
string ElementwiseOperationString(ElementwiseOperation op);
+// A simple class representing the version of the backing library, to
+// work around the "too perfect forwarding" issue in gcc6+ compilers.
+// See PR#16309 and issue #18402 for links discussing the issue.
+class VersionInfo {
+ public:
+ VersionInfo(int major = 0, int minor = 0, int patch = 0)
+ : major_(major), minor_(minor), patch_(patch) {}
+ int major_version() { return major_; }
+ int minor_version() { return minor_; }
+ int patch() { return patch_; }
+ private:
+ int major_;
+ int minor_;
+ int patch_;
+};
+
// Suite of operations typically used for implementing Deep/Convolutional Neural
// Nets. Note: A false return value of an operation indicates the
// implementation is not available.
@@ -885,8 +901,8 @@
virtual port::Status Init() = 0;
- // Gets the version of the backing library, as a {major, minor, patch} tuple.
- virtual port::StatusOr<std::tuple<int, int, int>> GetVersion() {
+ // Gets the version of the backing library, as a VersionInfo object.
+ virtual port::StatusOr<VersionInfo> GetVersion() {
return port::UnimplementedError(
"DnnSupport::GetVersion not implemented on this platform.");
}
diff --git a/tensorflow/stream_executor/platform/port.h b/tensorflow/stream_executor/platform/port.h
index 259cf38..57ad965 100644
--- a/tensorflow/stream_executor/platform/port.h
+++ b/tensorflow/stream_executor/platform/port.h
@@ -38,12 +38,6 @@
using std::string;
#endif
-#if !defined(COMPILER_MSVC)
-#define ARRAYSIZE(a) \
- ((sizeof(a) / sizeof(*(a))) / \
- static_cast<size_t>(!(sizeof(a) % sizeof(*(a)))))
-#endif
-
using tensorflow::LinkerInitialized;
using tensorflow::LINKER_INITIALIZED;
diff --git a/tensorflow/tensorflow.bzl b/tensorflow/tensorflow.bzl
index 528f811..51e856b 100644
--- a/tensorflow/tensorflow.bzl
+++ b/tensorflow/tensorflow.bzl
@@ -163,7 +163,6 @@
def get_win_copts(is_external=False):
WINDOWS_COPTS = [
- "/D__VERSION__=\\\"MSVC\\\"",
"/DPLATFORM_WINDOWS",
"/DEIGEN_HAS_C99_MATH",
"/DTENSORFLOW_USE_EIGEN_THREADPOOL",
@@ -1704,7 +1703,7 @@
],
outs=["util/version_info.cc"],
cmd=
- "$(location //tensorflow/tools/git:gen_git_source.py) --generate $(SRCS) \"$@\"",
+ "$(location //tensorflow/tools/git:gen_git_source.py) --generate $(SRCS) \"$@\" --git_tag_override=$${GIT_TAG_OVERRIDE:-}",
local=1,
tools=[clean_dep("//tensorflow/tools/git:gen_git_source.py")],)
diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.-run-config.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.-run-config.pbtxt
index 05e603e..c8da55d 100644
--- a/tensorflow/tools/api/golden/tensorflow.estimator.-run-config.pbtxt
+++ b/tensorflow/tools/api/golden/tensorflow.estimator.-run-config.pbtxt
@@ -7,6 +7,10 @@
mtype: "<type \'property\'>"
}
member {
+ name: "device_fn"
+ mtype: "<type \'property\'>"
+ }
+ member {
name: "evaluation_master"
mtype: "<type \'property\'>"
}
@@ -84,7 +88,7 @@
}
member_method {
name: "__init__"
- argspec: "args=[\'self\', \'model_dir\', \'tf_random_seed\', \'save_summary_steps\', \'save_checkpoints_steps\', \'save_checkpoints_secs\', \'session_config\', \'keep_checkpoint_max\', \'keep_checkpoint_every_n_hours\', \'log_step_count_steps\', \'train_distribute\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'100\', \'<object object instance>\', \'<object object instance>\', \'None\', \'5\', \'10000\', \'100\', \'None\'], "
+ argspec: "args=[\'self\', \'model_dir\', \'tf_random_seed\', \'save_summary_steps\', \'save_checkpoints_steps\', \'save_checkpoints_secs\', \'session_config\', \'keep_checkpoint_max\', \'keep_checkpoint_every_n_hours\', \'log_step_count_steps\', \'train_distribute\', \'device_fn\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'100\', \'<object object instance>\', \'<object object instance>\', \'None\', \'5\', \'10000\', \'100\', \'None\', \'None\'], "
}
member_method {
name: "replace"
diff --git a/tensorflow/tools/api/golden/tensorflow.pbtxt b/tensorflow/tools/api/golden/tensorflow.pbtxt
index c662499..0b12bc0 100644
--- a/tensorflow/tools/api/golden/tensorflow.pbtxt
+++ b/tensorflow/tools/api/golden/tensorflow.pbtxt
@@ -1981,6 +1981,10 @@
argspec: "args=[\'source\', \'delimiter\', \'skip_empty\'], varargs=None, keywords=None, defaults=[\' \', \'True\'], "
}
member_method {
+ name: "string_strip"
+ argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], "
+ }
+ member_method {
name: "string_to_hash_bucket"
argspec: "args=[\'string_tensor\', \'num_buckets\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], "
}
diff --git a/tensorflow/tools/ci_build/builds/pip.sh b/tensorflow/tools/ci_build/builds/pip.sh
index 82042b9..5fa75e1 100755
--- a/tensorflow/tools/ci_build/builds/pip.sh
+++ b/tensorflow/tools/ci_build/builds/pip.sh
@@ -123,6 +123,10 @@
BAZEL_FLAGS=$(str_strip "${BAZEL_FLAGS}")
+if [[ -z "$GIT_TAG_OVERRIDE" ]]; then
+ BAZEL_FLAGS+=" --action_env=GIT_TAG_OVERRIDE"
+fi
+
echo "Using Bazel flags: ${BAZEL_FLAGS}"
PIP_BUILD_TARGET="//tensorflow/tools/pip_package:build_pip_package"
diff --git a/tensorflow/tools/ci_build/builds/test_user_ops.sh b/tensorflow/tools/ci_build/builds/test_user_ops.sh
index caa3a40..c342367 100755
--- a/tensorflow/tools/ci_build/builds/test_user_ops.sh
+++ b/tensorflow/tools/ci_build/builds/test_user_ops.sh
@@ -213,27 +213,34 @@
echo "Invoking user op ${USER_OP} defined in file ${USER_OP_SO} "\
"via pip installation"
-ORIG_OUTPUT=$("${PYTHON_BIN_PATH}" -c "import tensorflow as tf; print(tf.Session('').run(tf.load_op_library('./${USER_OP_SO}').${USER_OP}(${OP_INPUT})))")
+function run_op() {
+ local ORIG_OUTPUT=$1
+ local ADDITIONAL_LOG=$2
-# Format OUTPUT for analysis
-if [[ -z $(echo "${ORIG_OUTPUT}" | grep -o ',') ]]; then
- if [[ ${IS_MAC} == "1" ]]; then
- OUTPUT=$(echo "${ORIG_OUTPUT}" | sed -E -e 's/[ \t]+/,/g')
+ # Format OUTPUT for analysis
+ if [[ -z $(echo "${ORIG_OUTPUT}" | grep -o ',') ]]; then
+ if [[ ${IS_MAC} == "1" ]]; then
+ local OUTPUT=$(echo "${ORIG_OUTPUT}" | sed -E -e 's/[ \t]+/,/g')
+ else
+ local OUTPUT=$(echo "${ORIG_OUTPUT}" | sed -r -e 's/[ \t]+/,/g')
+ fi
else
- OUTPUT=$(echo "${ORIG_OUTPUT}" | sed -r -e 's/[ \t]+/,/g')
+ local OUTPUT="${ORIG_OUTPUT}"
fi
-else
- OUTPUT="${ORIG_OUTPUT}"
-fi
-EQUALS_EXPECTED=$("${PYTHON_BIN_PATH}" -c "print(${OUTPUT} == ${EXPECTED_OUTPUT})")
+ local EQUALS_EXPECTED=$("${PYTHON_BIN_PATH}" -c "print(${OUTPUT} == ${EXPECTED_OUTPUT})")
-if [[ "${EQUALS_EXPECTED}" != "True" ]]; then
- die "FAILED: Output from user op (${OUTPUT}) does not match expected "\
-"output ${EXPECTED_OUTPUT}"
-else
- echo "Output from user op (${OUTPUT}) matches expected output"
-fi
+ if [[ "${EQUALS_EXPECTED}" != "True" ]]; then
+ local ERROR="FAILED: Output from user op (${OUTPUT}) does not match expected "\
+ "output ${EXPECTED_OUTPUT}"${ADDITIONAL_LOG}
+ die ${ERROR}
+ else
+ echo "Output from user op (${OUTPUT}) matches expected output"
+ fi
+}
+
+run_op $("${PYTHON_BIN_PATH}" -c "import tensorflow as tf; print(tf.Session('').run(tf.load_op_library('./${USER_OP_SO}').${USER_OP}(${OP_INPUT})))")
+run_op $("${PYTHON_BIN_PATH}" -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.load_op_library('./${USER_OP_SO}').${USER_OP}(${OP_INPUT}))") " in eager mode"
popd
diff --git a/tensorflow/tools/ci_build/linux/cpu/run_mkl.sh b/tensorflow/tools/ci_build/linux/cpu/run_mkl.sh
index dbf376b..2a9f295 100755
--- a/tensorflow/tools/ci_build/linux/cpu/run_mkl.sh
+++ b/tensorflow/tools/ci_build/linux/cpu/run_mkl.sh
@@ -30,7 +30,10 @@
yes "" | $PYTHON_BIN_PATH configure.py
# Run bazel test command. Double test timeouts to avoid flakes.
+# Setting KMP_BLOCKTIME to 0 lets OpenMP threads sleep right after parallel execution
+# in an MKL primitive. This reduces the effects of oversubscribing OpenMP threads
+# caused by executing multiple tests concurrently.
bazel test --test_tag_filters=-no_oss,-oss_serial,-gpu,-benchmark-test --test_lang_filters=py -k \
--jobs=${N_JOBS} --test_timeout 300,450,1200,3600 --build_tests_only \
- --config=mkl --config=opt --test_output=errors -- \
+ --config=mkl --test_env=KMP_BLOCKTIME=0 --config=opt --test_output=errors -- \
//tensorflow/... -//tensorflow/compiler/... -//tensorflow/contrib/...
diff --git a/tensorflow/tools/ci_build/windows/gpu/cmake/run_py.bat b/tensorflow/tools/ci_build/windows/gpu/cmake/run_py.bat
index 9782989..3b437d3 100644
--- a/tensorflow/tools/ci_build/windows/gpu/cmake/run_py.bat
+++ b/tensorflow/tools/ci_build/windows/gpu/cmake/run_py.bat
@@ -31,6 +31,9 @@
:: Set ctest binary location.
IF DEFINED CTEST_EXE (ECHO CTEST_EXE is set to %CTEST_EXE%) ELSE (SET CTEST_EXE="C:\Program Files\cmake\bin\ctest.exe")
+:: Install absl-py.
+%PIP_EXE% install --upgrade absl-py
+
:: Run the CMAKE build to build the pip package.
CALL %REPO_ROOT%\tensorflow\tools\ci_build\windows\gpu\cmake\run_build.bat
if %errorlevel% neq 0 exit /b %errorlevel%
@@ -40,9 +43,6 @@
set /p WHEEL_FILENAME=<wheel_filename_file
del wheel_filename_file
-:: Install absl-py.
-%PIP_EXE% install --upgrade absl-py
-
:: Install the pip package.
echo Installing PIP package...
%PIP_EXE% install --upgrade --no-deps %WHEEL_FILENAME% -v -v
diff --git a/tensorflow/tools/docker/Dockerfile.devel b/tensorflow/tools/docker/Dockerfile.devel
index b3dbe47..390d744 100644
--- a/tensorflow/tools/docker/Dockerfile.devel
+++ b/tensorflow/tools/docker/Dockerfile.devel
@@ -72,7 +72,7 @@
# Download and build TensorFlow.
WORKDIR /tensorflow
-RUN git clone --branch=r1.7 --depth=1 https://github.com/tensorflow/tensorflow.git .
+RUN git clone --branch=r1.8 --depth=1 https://github.com/tensorflow/tensorflow.git .
# TODO(craigcitro): Don't install the pip package, since it makes it
# more difficult to experiment with local changes. Instead, just add
diff --git a/tensorflow/tools/docker/Dockerfile.devel-cpu-mkl b/tensorflow/tools/docker/Dockerfile.devel-cpu-mkl
index 037d131..c65e0b7 100644
--- a/tensorflow/tools/docker/Dockerfile.devel-cpu-mkl
+++ b/tensorflow/tools/docker/Dockerfile.devel-cpu-mkl
@@ -3,7 +3,7 @@
LABEL maintainer="Clayne Robison<clayne.b.robison@intel.com>"
# These arguments are parameterized. Use --build-args to override.
-ARG TF_BRANCH=r1.7
+ARG TF_BRANCH=r1.8
ARG WHL_DIR=/whl
RUN apt-get update && apt-get install -y --no-install-recommends \
diff --git a/tensorflow/tools/docker/Dockerfile.devel-gpu b/tensorflow/tools/docker/Dockerfile.devel-gpu
index bfb96da..293028d 100644
--- a/tensorflow/tools/docker/Dockerfile.devel-gpu
+++ b/tensorflow/tools/docker/Dockerfile.devel-gpu
@@ -81,7 +81,7 @@
# Download and build TensorFlow.
WORKDIR /tensorflow
-RUN git clone --branch=r1.7 --depth=1 https://github.com/tensorflow/tensorflow.git .
+RUN git clone --branch=r1.8 --depth=1 https://github.com/tensorflow/tensorflow.git .
# Configure the build for our CUDA configuration.
ENV CI_BUILD_PYTHON python
diff --git a/tensorflow/tools/git/gen_git_source.py b/tensorflow/tools/git/gen_git_source.py
index 78d5119..73dee98 100755
--- a/tensorflow/tools/git/gen_git_source.py
+++ b/tensorflow/tools/git/gen_git_source.py
@@ -139,7 +139,7 @@
print("gen_git_source.py: spec is %r" % spec)
-def get_git_version(git_base_path):
+def get_git_version(git_base_path, git_tag_override):
"""Get the git version from the repository.
This function runs `git describe ...` in the path given as `git_base_path`.
@@ -152,6 +152,9 @@
Args:
git_base_path: where the .git directory is located
+ git_tag_override: Override the value for the git tag. This is useful for
+ releases where we want to build the release before the git tag is
+ created.
Returns:
A bytestring representing the git version
"""
@@ -161,6 +164,14 @@
"git", str("--git-dir=%s/.git" % git_base_path),
str("--work-tree=" + git_base_path), "describe", "--long", "--tags"
]).strip())
+ if git_tag_override:
+ split_val = val.split("-")
+ if len(split_val) != 3:
+ raise Exception(
+ ("Expected git version in format 'TAG-COMMITS AFTER TAG-HASH' "
+ "but got '%s'") % val)
+ split_val[0] = git_tag_override
+ val = bytes("-".join(split_val))
return val if val else unknown_label
except (subprocess.CalledProcessError, OSError):
return unknown_label
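An illustration of the override logic above, with hypothetical values: `git describe --long --tags` yields `TAG-COMMITS-HASH`, and only the `TAG` field is replaced:

```python
val = "v1.7.0-1234-g1e5f32d"  # hypothetical `git describe` output
git_tag_override = "v1.8.0-rc0"
split_val = val.split("-")    # ['v1.7.0', '1234', 'g1e5f32d']
split_val[0] = git_tag_override
print("-".join(split_val))    # v1.8.0-rc0-1234-g1e5f32d
```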
@@ -178,7 +189,15 @@
contents = """/* Generated by gen_git_source.py */
#include <string>
const char* tf_git_version() {return "%s";}
-const char* tf_compiler_version() {return __VERSION__;}
+const char* tf_compiler_version() {
+#ifdef _MSC_VER
+#define STRINGIFY(x) #x
+#define TOSTRING(x) STRINGIFY(x)
+ return "MSVC " TOSTRING(_MSC_FULL_VER);
+#else
+ return __VERSION__;
+#endif
+}
const int tf_cxx11_abi_flag() {
#ifdef _GLIBCXX_USE_CXX11_ABI
return _GLIBCXX_USE_CXX11_ABI;
@@ -197,7 +216,7 @@
open(filename, "w").write(contents)
-def generate(arglist):
+def generate(arglist, git_tag_override=None):
"""Generate version_info.cc as given `destination_file`.
Args:
@@ -217,6 +236,10 @@
`ref_symlink` is unused in this script but passed, because the build
system uses that file to detect when commits happen.
+ git_tag_override: Override the value for the git tag. This is useful for
+ releases where we want to build the release before the git tag is
+ created.
+
Raises:
RuntimeError: If ./configure needs to be run, RuntimeError will be raised.
"""
@@ -234,11 +257,11 @@
raise RuntimeError(
"Run ./configure again, branch was '%s' but is now '%s'" %
(old_branch, new_branch))
- git_version = get_git_version(data["path"])
+ git_version = get_git_version(data["path"], git_tag_override)
write_version_info(dest_file, git_version)
-def raw_generate(output_file):
+def raw_generate(output_file, source_dir, git_tag_override=None):
"""Simple generator used for cmake/make build systems.
This does not create any symlinks. It requires the build system
@@ -246,9 +269,13 @@
Args:
output_file: Output filename for the version info cc
+ source_dir: Base path of the source code
+ git_tag_override: Override the value for the git tag. This is useful for
+ releases where we want to build the release before the git tag is
+ created.
"""
- git_version = get_git_version(".")
+ git_version = get_git_version(source_dir, git_tag_override)
write_version_info(output_file, git_version)
@@ -271,6 +298,11 @@
help="Root path to place generated git files (created by --configure).")
parser.add_argument(
+ "--git_tag_override", type=str,
+ help="Override git tag value in the __git_version__ string. Useful when "
+ "creating release builds before the release tag is created.")
+
+parser.add_argument(
"--generate",
type=str,
help="Generate given spec-file, HEAD-symlink-file, ref-symlink-file",
@@ -281,6 +313,11 @@
type=str,
help="Generate version_info.cc (simpler version used for cmake/make)")
+parser.add_argument(
+ "--source_dir",
+ type=str,
+ help="Base path of the source code (used for cmake/make)")
+
args = parser.parse_args()
if args.configure is not None:
@@ -288,9 +325,12 @@
raise RuntimeError("Must pass --gen_root_path arg when running --configure")
configure(args.configure, args.gen_root_path, debug=args.debug)
elif args.generate is not None:
- generate(args.generate)
+ generate(args.generate, args.git_tag_override)
elif args.raw_generate is not None:
- raw_generate(args.raw_generate)
+ source_path = "."
+ if args.source_dir is not None:
+ source_path = args.source_dir
+ raw_generate(args.raw_generate, source_path, args.git_tag_override)
else:
raise RuntimeError("--configure or --generate or --raw_generate "
"must be used")
diff --git a/tensorflow/tools/git/gen_git_source.sh b/tensorflow/tools/git/gen_git_source.sh
index db20bb0..cd128af 100755
--- a/tensorflow/tools/git/gen_git_source.sh
+++ b/tensorflow/tools/git/gen_git_source.sh
@@ -28,7 +28,15 @@
cat <<EOF > ${OUTPUT_FILENAME}
#include <string>
const char* tf_git_version() {return "${GIT_VERSION}";}
-const char* tf_compiler_version() {return __VERSION__;}
+const char* tf_compiler_version() {
+#ifdef _MSC_VER
+#define STRINGIFY(x) #x
+#define TOSTRING(x) STRINGIFY(x)
+ return "MSVC " TOSTRING(_MSC_FULL_VER);
+#else
+ return __VERSION__;
+#endif
+}
const int tf_cxx11_abi_flag() {
#ifdef _GLIBCXX_USE_CXX11_ABI
return _GLIBCXX_USE_CXX11_ABI;
diff --git a/tensorflow/tools/graph_transforms/transform_graph.cc b/tensorflow/tools/graph_transforms/transform_graph.cc
index 28387c2..8ce8f5e 100644
--- a/tensorflow/tools/graph_transforms/transform_graph.cc
+++ b/tensorflow/tools/graph_transforms/transform_graph.cc
@@ -24,6 +24,9 @@
#include "tensorflow/core/util/command_line_flags.h"
#include "tensorflow/tools/graph_transforms/file_utils.h"
#include "tensorflow/tools/graph_transforms/transform_utils.h"
+#if !defined(PLATFORM_WINDOWS)
+#include <pwd.h>
+#endif
namespace tensorflow {
namespace graph_transforms {
@@ -130,16 +133,64 @@
return Status::OK();
}
+std::string ExpandPath(const std::string& path_string) {
+#if defined(PLATFORM_WINDOWS)
+ return path_string;
+#else
+ if (path_string.empty() || path_string[0] != '~') {
+ return path_string;
+ }
+
+ const char* home = NULL;
+ std::string::size_type prefix = path_string.find_first_of('/');
+ if (path_string.length() == 1 || prefix == 1) {
+ // The value of $HOME, e.g., ~/foo
+ home = getenv("HOME");
+ if (!home) {
+ // If HOME is not available, get uid
+ struct passwd* pw = getpwuid(getuid());
+ if (pw) {
+ home = pw->pw_dir;
+ }
+ }
+ } else {
+ // The value of ~user, e.g., ~user/foo
+ std::string user(path_string, 1, (prefix == std::string::npos)
+ ? std::string::npos
+ : prefix - 1);
+ struct passwd* pw = getpwnam(user.c_str());
+ if (pw) {
+ home = pw->pw_dir;
+ }
+ }
+
+ if (!home) {
+ return path_string;
+ }
+
+ string path(home);
+ if (prefix == std::string::npos) {
+ return path;
+ }
+
+ if (path.length() == 0 || path[path.length() - 1] != '/') {
+ path += '/';
+ }
+ path += path_string.substr(prefix + 1);
+ return path;
+#endif
+}
+
int ParseFlagsAndTransformGraph(int argc, char* argv[], bool init_main) {
- string in_graph = "";
- string out_graph = "";
+ string in_graph_string = "";
+ string out_graph_string = "";
string inputs_string = "";
string outputs_string = "";
string transforms_string = "";
bool output_as_text = false;
std::vector<Flag> flag_list = {
- Flag("in_graph", &in_graph, "input graph file name"),
- Flag("out_graph", &out_graph, "output graph file name"),
+ Flag("in_graph", &in_graph_string, "input graph file name"),
+ Flag("out_graph", &out_graph_string, "output graph file name"),
Flag("inputs", &inputs_string, "inputs"),
Flag("outputs", &outputs_string, "outputs"),
Flag("transforms", &transforms_string, "list of transforms"),
@@ -166,11 +217,11 @@
LOG(ERROR) << "Unknown argument " << argv[1] << ".\n" << usage;
return -1;
}
- if (in_graph.empty()) {
+ if (in_graph_string.empty()) {
LOG(ERROR) << "in_graph graph can't be empty.\n" << usage;
return -1;
}
- if (out_graph.empty()) {
+ if (out_graph_string.empty()) {
LOG(ERROR) << "out_graph graph can't be empty.\n" << usage;
return -1;
}
@@ -179,6 +230,9 @@
return -1;
}
+ string in_graph = ExpandPath(in_graph_string);
+ string out_graph = ExpandPath(out_graph_string);
+
std::vector<string> inputs = str_util::Split(inputs_string, ',');
std::vector<string> outputs = str_util::Split(outputs_string, ',');
TransformParameters transform_params;
@@ -197,7 +251,7 @@
GraphDef graph_def;
Status load_status = LoadTextOrBinaryGraphFile(in_graph, &graph_def);
if (!load_status.ok()) {
- LOG(ERROR) << "Loading graph '" << in_graph << "' failed with "
+ LOG(ERROR) << "Loading graph '" << in_graph_string << "' failed with "
<< load_status.error_message();
LOG(ERROR) << usage;
return -1;
@@ -219,7 +273,7 @@
save_status = WriteBinaryProto(Env::Default(), out_graph, graph_def);
}
if (!save_status.ok()) {
- LOG(ERROR) << "Saving graph '" << out_graph << "' failed with "
+ LOG(ERROR) << "Saving graph '" << out_graph_string << "' failed with "
<< save_status.error_message();
return -1;
}
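For comparison, the new C++ `ExpandPath` mirrors the POSIX tilde expansion Python exposes as `os.path.expanduser` (a behavioral analogy, not shared code; `~alice` is a made-up user):

```python
import os.path

os.path.expanduser('~/graphs/in.pb')  # $HOME/graphs/in.pb
os.path.expanduser('~alice/in.pb')    # alice's home dir, if that user exists
os.path.expanduser('/tmp/in.pb')      # unchanged: no leading '~'
```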
diff --git a/tensorflow/tools/pip_package/setup.py b/tensorflow/tools/pip_package/setup.py
index 211f932..f84a91d 100644
--- a/tensorflow/tools/pip_package/setup.py
+++ b/tensorflow/tools/pip_package/setup.py
@@ -31,7 +31,7 @@
# This version string is semver compatible, but incompatible with pip.
# For pip, we will remove all '-' characters from this string, and use the
# result for pip.
-_VERSION = '1.7.0'
+_VERSION = '1.8.0-rc0'
REQUIRED_PACKAGES = [
'absl-py >= 0.1.6',
diff --git a/tensorflow/workspace.bzl b/tensorflow/workspace.bzl
index bbef4b9..8b26a32 100644
--- a/tensorflow/workspace.bzl
+++ b/tensorflow/workspace.bzl
@@ -167,11 +167,12 @@
tf_http_archive(
name = "gemmlowp",
urls = [
- "https://mirror.bazel.build/github.com/google/gemmlowp/archive/7c7c744640ddc3d0af18fb245b4d23228813a71b.zip",
- "https://github.com/google/gemmlowp/archive/7c7c744640ddc3d0af18fb245b4d23228813a71b.zip",
+ # TODO (yongtang): uncomment once mirror.bazel.build is propagated.
+ # "https://mirror.bazel.build/github.com/google/gemmlowp/archive/38ebac7b059e84692f53e5938f97a9943c120d98.zip",
+ "https://github.com/google/gemmlowp/archive/38ebac7b059e84692f53e5938f97a9943c120d98.zip",
],
- sha256 = "b852cc90259a7357c8a323f108f2cec6e85979fc3b18b5590b99e0130044b2cf",
- strip_prefix = "gemmlowp-7c7c744640ddc3d0af18fb245b4d23228813a71b",
+ sha256 = "b87faa7294dfcc5d678f22a59d2c01ca94ea1e2a3b488c38a95a67889ed0a658",
+ strip_prefix = "gemmlowp-38ebac7b059e84692f53e5938f97a9943c120d98",
)
tf_http_archive(
diff --git a/third_party/repo.bzl b/third_party/repo.bzl
index aa178fa..36f5aa5 100644
--- a/third_party/repo.bzl
+++ b/third_party/repo.bzl
@@ -17,6 +17,7 @@
_SINGLE_URL_WHITELIST = depset([
"arm_compiler",
"ortools_archive",
+ "gemmlowp",
])
def _is_windows(ctx):
@@ -68,7 +69,7 @@
_execute_and_check_ret_code(ctx, cmd)
def _tf_http_archive(ctx):
- if ("mirror.bazel.build" not in ctx.attr.urls[0] or
+ if ("mirror.bazel.build" not in ctx.attr.urls[0] and
(len(ctx.attr.urls) < 2 and
ctx.attr.name not in _SINGLE_URL_WHITELIST)):
fail("tf_http_archive(urls) must have redundant URLs. The " +