* Fixes a security vulnerability where converting a Python string to a `tf.float16` value produces a segmentation fault (CVE-2020-5215)
* Updates `curl` to 7.66.0 to handle CVE-2019-5482 and CVE-2019-5481
* Updates `sqlite3` to 3.30.01 to handle CVE-2019-19646, CVE-2019-19645 and CVE-2019-16168

TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support officially ends on January 1, 2020. As announced earlier, TensorFlow will also stop supporting Python 2 starting January 1, 2020, and no more releases are expected in 2019.
* The `tensorflow` pip package now includes GPU support by default (same as `tensorflow-gpu`) for both Linux and Windows. This runs on machines with and without NVIDIA GPUs. `tensorflow-gpu` is still available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size.
* Windows users: officially-released `tensorflow` pip packages are now built with Visual Studio 2019 version 16.4 in order to take advantage of the new `/d2ReducedOptimizeHugeFunctions` compiler flag. To use these new packages, you must install "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019", available from Microsoft's website here.
  * Builds enabling `EIGEN_STRONG_INLINE` can take over 48 hours to compile without this flag. Refer to `configure.py` for more information about `EIGEN_STRONG_INLINE` and `/d2ReducedOptimizeHugeFunctions`.
  * If either of the required DLLs, `msvcp140.dll` (old) or `msvcp140_1.dll` (new), is missing on your machine, `import tensorflow` will print a warning message.
* The `tensorflow` pip package is built with CUDA 10.1 and cuDNN 7.6.

`tf.keras`:
* Introduced the `TextVectorization` layer, which takes as input raw strings and takes care of text standardization, tokenization, n-gram generation, and vocabulary indexing. See this end-to-end text classification example.
* Keras `.compile`, `.fit`, `.evaluate`, and `.predict` are allowed to be outside of the DistributionStrategy scope, as long as the model was constructed inside of a scope.
* Experimental support for Keras `.compile`, `.fit`, `.evaluate`, and `.predict` is available for Cloud TPUs, for all types of Keras models (sequential, functional and subclassed models).
* Automatic outside compilation is now enabled for Cloud TPUs, allowing `tf.summary` to be used more conveniently with Cloud TPUs.
* Support for `.fit`, `.evaluate`, and `.predict` on TPU using numpy data, in addition to `tf.data.Dataset`.

`tf.data`:

* Changes rebatching for `tf.data` datasets + DistributionStrategy for better performance. Note that the dataset also behaves slightly differently, in that the rebatched dataset cardinality will always be a multiple of the number of replicas.
* `tf.data.Dataset` now supports automatic data distribution and sharding in distributed environments, including on TPU pods.
* Distribution policies for `tf.data.Dataset` can now be tuned with:
  1. `tf.data.experimental.AutoShardPolicy` (`OFF`, `AUTO`, `FILE`, `DATA`)
  2. `tf.data.experimental.ExternalStatePolicy` (`WARN`, `IGNORE`, `FAIL`)
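The difference between the `FILE` and `DATA` auto-shard policies can be sketched without TensorFlow: `FILE` hands each worker a disjoint subset of the input files, while `DATA` has every worker read everything but keep only every n-th element. The function names below are illustrative, not the TF implementation:

```python
def shard_by_file(files, num_workers, worker_index):
    # FILE policy: each worker reads a disjoint subset of the input files.
    return [f for i, f in enumerate(files) if i % num_workers == worker_index]

def shard_by_data(elements, num_workers, worker_index):
    # DATA policy: every worker produces the full stream, then keeps every n-th element.
    return elements[worker_index::num_workers]

files = ["a.tfrecord", "b.tfrecord", "c.tfrecord", "d.tfrecord"]
assert shard_by_file(files, 2, 0) == ["a.tfrecord", "c.tfrecord"]
assert shard_by_data(list(range(6)), 2, 1) == [1, 3, 5]
```

`FILE` avoids redundant reads but needs at least as many files as workers; `DATA` works for any input but wastes I/O, which is why `AUTO` prefers file-based sharding when it can.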
`tf.debugging`:

* Add `tf.debugging.enable_check_numerics()` and `tf.debugging.disable_check_numerics()` to help debug the root causes of issues involving infinities and `NaN`s.

`tf.distribute`:

* Custom training loop support on TPUs, and also TPU pods, is available in `strategy.experimental_distribute_dataset`, `strategy.experimental_distribute_datasets_from_function`, `strategy.experimental_run_v2`, and `strategy.reduce`.
* Support for a global distribution strategy through `tf.distribute.experimental_set_strategy()`, in addition to `strategy.scope()`.

`TensorRT`:

* Introduced the new conversion API `tf.experimental.tensorrt.Converter`.

Environment variable `TF_DETERMINISTIC_OPS` has been added. When set to "true" or "1", this environment variable makes `tf.nn.bias_add` operate deterministically (i.e. reproducibly), but currently only when XLA JIT compilation is not enabled. Setting `TF_DETERMINISTIC_OPS` to "true" or "1" also makes cuDNN convolution and max-pooling operate deterministically. This makes Keras Conv*D and MaxPool*D layers operate deterministically in both the forward and backward directions when running on a CUDA-enabled GPU.
* Deletes `Operation.traceback_with_start_lines`, for which we know of no usages.
* Removed `id` from `tf.Tensor.__repr__()`, as `id` is not useful other than for internal debugging.
* Some `tf.assert_*` methods now raise assertions at operation creation time if the input tensors' values are known at that time, not during `session.run()`. This only changes behavior when the graph execution would have resulted in an error. When this happens, a no-op is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
* The following APIs are no longer experimental: `tf.config.list_logical_devices`, `tf.config.list_physical_devices`, `tf.config.get_visible_devices`, `tf.config.set_visible_devices`, `tf.config.get_logical_device_configuration`, `tf.config.set_logical_device_configuration`.
* `tf.config.experimental.VirtualDeviceConfiguration` has been renamed to `tf.config.LogicalDeviceConfiguration`.
* `tf.config.experimental_list_devices` has been removed; please use `tf.config.list_logical_devices`.

`tf.data`:
* Fixes concurrency issue with `tf.data.experimental.parallel_interleave` with `sloppy=True`.
* Add `tf.data.experimental.dense_to_ragged_batch()`.
* Extend `tf.data` parsing ops to support `RaggedTensors`.

`tf.distribute`:

* Fix issue where GRU would crash or give incorrect output when a `tf.distribute.Strategy` was used.

`tf.estimator`:

* Added option in `tf.estimator.CheckpointSaverHook` to not save the `GraphDef`.

`tf.keras`:

* Export `depthwise_conv2d` in `tf.keras.backend`.
* In Keras Layers and Models, variables in `trainable_weights`, `non_trainable_weights`, and `weights` are explicitly deduplicated.
* Keras `model.load_weights` now accepts `skip_mismatch` as an argument. This was available in external Keras, and has now been copied over to `tf.keras`.
* The `Model.fit_generator`, `Model.evaluate_generator`, `Model.predict_generator`, `Model.train_on_batch`, `Model.test_on_batch`, and `Model.predict_on_batch` methods now respect the `run_eagerly` property, and will correctly run using `tf.function` by default. Note that `Model.fit_generator`, `Model.evaluate_generator`, and `Model.predict_generator` are deprecated endpoints. They are subsumed by `Model.fit`, `Model.evaluate`, and `Model.predict`, which now support generators and Sequences.

`tf.lite`:
* Legalization for `NMS` ops in TFLite.
* Add `narrow_range` and `axis` to `quantize_v2` and `dequantize` ops.
* Added support for `FusedBatchNormV3` in converter.
* Add an `errno`-like field to the `NNAPI` delegate for detecting `NNAPI` errors, for fallback behaviour.
* Refactors the `NNAPI` delegate to support detailed reasons why an operation is not accelerated.

Other:

* TPUs can now be re-initialized multiple times, using `tf.tpu.experimental.initialize_tpu_system`.
* Add `RaggedTensor.merge_dims()`.
* Added new `uniform_row_length` row-partitioning tensor to `RaggedTensor`.
* Add `shape` arg to `RaggedTensor.to_tensor`; improve speed of `RaggedTensor.to_tensor`.
* `tf.io.parse_sequence_example` and `tf.io.parse_single_sequence_example` now support ragged features.
* Fix `while_v2` with variables in custom gradient.
* Support taking gradients of V2 `tf.cond` and `tf.while_loop` using `LookupTable`.
* Fix bug where `vectorized_map` failed on inputs with unknown static shape.
* Tensor equality with `None` now behaves as expected.
* Make calls to `tf.function(f)()`, `tf.function(f).get_concrete_function` and `tf.function(f).get_initialization_function` thread-safe.
* Extend `tf.identity` to work with CompositeTensors (such as SparseTensor).
* Added support for more `dtypes` and zero-sized inputs to the `Einsum` op, and improved its performance.
* Enable multi-worker `NCCL` `all-reduce` inside functions executing eagerly.
* Added complex128 support to `RFFT`, `RFFT2D`, `RFFT3D`, `IRFFT`, `IRFFT2D`, and `IRFFT3D`.
* Add `pfor` converter for `SelfAdjointEigV2`.
* Add `tf.math.ndtri` and `tf.math.erfinv`.
* Add `tf.config.experimental.enable_mlir_bridge` to allow using the MLIR compiler bridge in eager mode.
* Added `tf.autodiff.ForwardAccumulator` for forward-mode autodiff.
* Add `LinearOperatorPermutation`.
* … `tf.reduce_logsumexp`.
* … `AUC` metric.
* … `zeros_like`.
* … `None` or types with an `__index__` method.
* Add `tf.random.uniform` microbenchmark.
* Use `_protogen` suffix for proto library targets instead of `_cc_protogen` suffix.
* Moving the checkpoint reader from `swig` to `pybind11`.
* `tf.device` & `MirroredStrategy` now support passing in a `tf.config.LogicalDevice`.
* … `.bazelversion` file at the root of the project directory.

This release contains contributions from many people at Google, as well as:
8bitmp3, Aaron Ma, AbdüLhamit Yilmaz, Abhai Kollara, aflc, Ag Ramesh, Albert Z. Guo, Alex Torres, amoitra, Andrii Prymostka, angeliand, Anshuman Tripathy, Anthony Barbier, Anton Kachatkou, Anubh-V, Anuja Jakhade, Artem Ryabov, autoih, Bairen Yi, Bas Aarts, Basit Ayantunde, Ben Barsdell, Bhavani Subramanian, Brett Koonce, candy.dc, Captain-Pool, caster, cathy, Chong Yan, Choong Yin Thong, Clayne Robison, Colle, Dan Ganea, David Norman, David Refaeli, dengziming, Diego Caballero, Divyanshu, djshen, Douman, Duncan Riach, EFanZh, Elena Zhelezina, Eric Schweitz, Evgenii Zheltonozhskii, Fei Hu, fo40225, Fred Reiss, Frederic Bastien, Fredrik Knutsson, fsx950223, fwcore, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, giuros01, Gomathi Ramamurthy, Guozhong Zhuang, Haifeng Jin, Haoyu Wu, HarikrishnanBalagopal, HJYOO, Huang Chen-Yi, Ilham Firdausi Putra, Imran Salam, Jared Nielsen, Jason Zaman, Jasper Vicenti, Jeff Daily, Jeff Poznanovic, Jens Elofsson, Jerry Shih, jerryyin, Jesper Dramsch, jim.meyer, Jongwon Lee, Jun Wan, Junyuan Xie, Kaixi Hou, kamalkraj, Kan Chen, Karthik Muthuraman, Keiji Ariyama, Kevin Rose, Kevin Wang, Koan-Sin Tan, kstuedem, Kwabena W. 
Agyeman, Lakshay Tokas, latyas, Leslie-Fang-Intel, Li, Guizi, Luciano Resende, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manuel Freiberger, Mark Ryan, Martin Mlostek, Masaki Kozuki, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Muhwan Kim, Nagy Mostafa, nammbash, Nathan Luehr, Nathan Wells, Niranjan Hasabnis, Oleksii Volkovskyi, Olivier Moindrot, olramde, Ouyang Jin, OverLordGoldDragon, Pallavi G, Paul Andrey, Paul Wais, pkanwar23, Pooya Davoodi, Prabindh Sundareson, Rajeshwar Reddy T, Ralovich, Kristof, Refraction-Ray, Richard Barnes, richardbrks, Robert Herbig, Romeo Kienzler, Ryan Mccormick, saishruthi, Saket Khandelwal, Sami Kama, Sana Damani, Satoshi Tanaka, Sergey Mironov, Sergii Khomenko, Shahid, Shawn Presser, ShengYang1, Siddhartha Bagaria, Simon Plovyt, skeydan, srinivasan.narayanamoorthy, Stephen Mugisha, sunway513, Takeshi Watanabe, Taylor Jakobson, TengLu, TheMindVirus, ThisIsIsaac, Tim Gates, Timothy Liu, Tomer Gafner, Trent Lo, Trevor Hickey, Trevor Morris, vcarpani, Wei Wang, Wen-Heng (Jack) Chung, wenshuai, Wenshuai-Xiaomi, wenxizhu, william, William D. Irons, Xinan Jiang, Yannic, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Youwei Song, Zaccharie Ramzi, Zhang, Zhenyu Guo, 王振华 (Zhenhua Wang), 韩董, 이중건 Isaac Lee
This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year.
* The `tensorflow` pip package will by default include GPU support (same as `tensorflow-gpu` now) for the platforms for which we currently have GPU support (Linux and Windows). It will work on machines with and without Nvidia GPUs. `tensorflow-gpu` will still be available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size.
* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2` module. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1` module. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. This enables writing forward-compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.
* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.
* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.
* Adds `enable_tensor_equality()`, which switches the behavior such that:
  * Tensors are no longer hashable.
  * Tensors can be compared with `==` and `!=`, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0.
* The pip package has been split into `tensorflow_core`, containing all the code (in the future it will contain only the private implementation), and `tensorflow`, which is a virtual pip package that forwards to `tensorflow_core` (and in the future will contain only the public API of TensorFlow). We don't expect this to be breaking, unless you were importing directly from the implementation.
* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.
`tf.keras`:

* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use the `tf.config.threading` APIs.
* `tf.keras.model.save_model` and `model.save` now default to saving a TensorFlow SavedModel.
* `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed; a bug in the resizing implementation was fixed.
* Layers now default to `float32`, and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow 2, and a warning will be issued that starts with: Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.
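Why the silent float64 → float32 cast matters: IEEE-754 binary32 has roughly 7 decimal digits of precision, so values that are distinct in float64 can collapse after the cast. A framework-free sketch using the standard `struct` module:

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python float (binary64) through IEEE-754 binary32."""
    return struct.unpack("f", struct.pack("f", x))[0]

assert to_float32(0.5) == 0.5            # 0.5 is exactly representable in float32
assert to_float32(1.0000000001) == 1.0   # sub-float32-precision detail is silently lost
```

If your model genuinely needs float64 (e.g. ill-conditioned numerical work), the `set_floatx('float64')` fix above restores the old behavior globally.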
Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when the Python line executes) if the input tensors' values are known at that time, not during `session.run()`. When this happens, a no-op is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).

`tf.estimator`:
* `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
* … `DenseFeatures` usability in TF2.

`tf.data`:

* Promoting `unbatch` from experimental to core API.
* Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices`, and batching and unbatching of nested datasets.

`tf.keras`:

* `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
* Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.
* Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.
* Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers, using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models.
* Enable the `experimental_run_tf_function` flag by default. This flag enables the single training/eval/predict execution path. With this:
  1. All input types are converted to `Dataset`.
  2. When distribution strategy is not specified, this goes through the no-op distribution strategy path.
  3. Execution is wrapped in `tf.function` unless `run_eagerly=True` is set in compile.
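The dispatch described above (wrap the step in `tf.function` unless `run_eagerly=True`) can be sketched framework-free; `compile_step` is a hypothetical stand-in for `tf.function`, not the Keras implementation:

```python
def compile_step(fn):
    """Hypothetical stand-in for tf.function: mark fn as graph-compiled."""
    fn.compiled = True
    return fn

def make_train_step(step_fn, run_eagerly=False):
    # Execution is wrapped unless run_eagerly=True is set in compile.
    return step_fn if run_eagerly else compile_step(step_fn)

step = make_train_step(lambda batch: sum(batch))
assert getattr(step, "compiled", False)          # wrapped by default
eager = make_train_step(lambda batch: sum(batch), run_eagerly=True)
assert not getattr(eager, "compiled", False)     # left eager on request
```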
* Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence.

`tf.lite`:
* Add `GATHER` support to NN API delegate.
* … `QUANTIZE`.
* … `QUANTIZED_16BIT_LSTM`.
* Defaults the `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.
* `parallel_for`: Add converter for `MatrixDiag`.
* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.
* Added new op: `tf.strings.unsorted_segment_join`.
* Add HW acceleration support for `topK_v2`.
* Added new `TypeSpec` classes.
* Expose `Head` as public API.
* Update docstring for gather to properly describe the non-empty `batch_dims` case.
* Added `tf.sparse.from_dense` utility function.
* Improved ragged tensor support in `TensorFlowTestCase`.
* `ResizeInputTensor` now works for all delegates.
* Add `EXPAND_DIMS` support to NN API delegate TEST: expand_dims_test
* `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources.
* `tf.cond`, `tf.while`, and `if` and `while` in AutoGraph now accept a nonscalar predicate if it has a single element. This does not affect non-V2 control flow.
* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.
* … `LogSoftMax`.
* … `nested_value_rowids` for ragged tensors.
* Add `tf.math.cumulative_logsumexp` operation.
* Add `tf.ragged.stack`.
* … `AddNewInputConstantTensor`.
* … `MemoryAllocation::MemoryAllocation()`.
* Extracts `NNAPIDelegateKernel` from nnapi_delegate.cc
* Added support for `FusedBatchNormV3` in converter.
* … `tf.gradients()`.

This release contains contributions from many people at Google, as well as:
a6802739, Aaron Ma, Abdullah Selek, Abolfazl Shahbazi, Ag Ramesh, Albert Z. Guo, Albin Joy, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Amit Srivastava, amoitra, Andrew Lihonosov, Andrii Prymostka, Anuj Rawat, Astropeak, Ayush Agrawal, Bairen Yi, Bas Aarts, Bastian Eichenberger, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bryan Cutler, candy.dc, Cao Zongyan, Captain-Pool, Casper Da Costa-Luis, Chen Guoyin, Cheng Chang, chengchingwen, Chong Yan, Choong Yin Thong, Christopher Yeh, Clayne Robison, Coady, Patrick, Dan Ganea, David Norman, Denis Khalikov, Deven Desai, Diego Caballero, Duncan Dean, Duncan Riach, Dwight J Lyle, Eamon Ito-Fisher, eashtian3, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Fangjun Kuang, Fei Hu, fo40225, formath, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. Hussain Chinoy, Gabriel, gehring, George Grzegorz Pawelczak, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, haison, Haraldur TóMas HallgríMsson, HarikrishnanBalagopal, HåKon Sandsmark, I-Hong, Ilham Firdausi Putra, Imran Salam, Jason Zaman, Jason Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, Jeroen BéDorf, Jerry Shih, jerryyin, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Joon, Josh Beal, Julian Niedermeier, Jun Wan, Junqin Zhang, Junyuan Xie, Justin Tunis, Kaixi Hou, Karl Lessard, Karthik Muthuraman, Kbhute-Ibm, khanhlvg, Koock Yoon, kstuedem, Kyuwon Kim, Lakshay Tokas, leike666666, leonard951, Leslie-Fang, Leslie-Fang-Intel, Li, Guizi, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manraj Singh Grover, Margaret Maynard-Reid, Mark Ryan, Matt Conley, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. 
Tarnowski, minds, mpppk, musikisomorphie, Nagy Mostafa, Nayana Thorat, Neil, Niels Ole Salscheider, Niklas SilfverströM, Niranjan Hasabnis, ocjosen, olramde, Pariksheet Pinjari, Patrick J. Lopresti, Patrik Gustavsson, per1234, PeterLee, Phan Van Nguyen Duc, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, Rajeshwar Reddy T, Ramon ViñAs, Rasmus Diederichsen, Reuben Morais, richardbrks, robert, RonLek, Ryan Jiang, saishruthi, Saket Khandelwal, Saleem Abdulrasool, Sami Kama, Sana-Damani, Sergii Khomenko, Severen Redwood, Shubham Goyal, Sigrid Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, Srini511, srinivasan.narayanamoorthy, Sumesh Udayakumaran, Sungmann Cho, Tae-Hwan Jung, Taehoon Lee, Takeshi Watanabe, TengLu, terryky, TheMindVirus, ThisIsIsaac, Till Hoffmann, Timothy Liu, Tomer Gafner, Tongxuan Liu, Trent Lo, Trevor Morris, Uday Bondhugula, Vasileios Lioutas, vbvg2008, Vishnuvardhan Janapati, Vivek Suryamurthy, Wei Wang, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xinan Jiang, Xinping Wang, Yann-Yy, Yasir Modak, Yong Tang, Yongfeng Gu, Yuchen Ying, Yuxin Wu, zyeric, 王振华 (Zhenhua Wang)
TensorFlow 2.0 focuses on simplicity and ease of use. For details on best practices with 2.0, see the Effective 2.0 guide. For information on upgrading your existing TensorFlow 1.x models, please refer to our Upgrade and Migration guides. We have also released a collection of tutorials and getting started guides.
* `tf.data`, for building scalable input pipelines. Check out the guide for additional details.
* Use the `tf.distribute.Strategy` API to distribute training with minimal code changes, yielding great out-of-the-box performance. It supports distributed training with Keras `model.fit`, as well as with custom training loops. Multi-GPU support is available, along with experimental support for multi-worker and Cloud TPUs. Check out the guide for more details.
* Use of `tf.Session` is discouraged, replaced by writing regular Python functions. Using the `tf.function` decorator, such functions can be turned into graphs which can be executed remotely, serialized, and optimized for performance.
* Unification of `tf.train.Optimizers` and `tf.keras.Optimizers`. Use `tf.keras.Optimizers` for TF 2.0. `compute_gradients` is removed as public API; use `GradientTape` to compute gradients.
* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.
* Deprecated `tf.app`, `tf.flags`, and `tf.logging` in favor of absl-py.
* No more global variables with helper methods like `tf.global_variables_initializer` and `tf.get_global_step`.
* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.
* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
* … `__init__.py` files.
* Auto Mixed-Precision graph optimizer simplifies converting models to `float16` for acceleration on Volta and Turing Tensor Cores. This feature can be enabled by wrapping an optimizer class with `tf.train.experimental.enable_mixed_precision_graph_rewrite()`.
* Add environment variable `TF_CUDNN_DETERMINISTIC`. Setting it to "true" or "1" forces the selection of deterministic cuDNN convolution and max-pooling algorithms. When this is enabled, the algorithm selection procedure itself is also deterministic.

Many backwards-incompatible API changes have been made to clean up the APIs and make them more consistent.
Toolchains:

* Removed the `freeze_graph` command line tool; `SavedModel` should be used in place of frozen graphs.

`tf.contrib`:

* `tf.contrib` has been deprecated, and functionality has been either migrated to the core TensorFlow API, to an ecosystem project such as tensorflow/addons or tensorflow/io, or removed entirely.
* Remove `tf.contrib.timeseries` dependency on TF distributions.
* Replace contrib references with `tf.estimator.experimental.*` for APIs in `early_stopping.py`.

`tf.estimator`:
* Premade estimators in the tf.estimator.DNN/Linear/DNNLinearCombined family have been updated to use `tf.keras.optimizers` instead of the `tf.compat.v1.train.Optimizer`s. If you do not pass in an `optimizer=` arg or if you use a string, the premade estimator will use the Keras optimizer. This is checkpoint breaking, as the optimizers have separate variables. A checkpoint converter tool for converting optimizers is included with the release, but if you want to avoid any change, switch to the v1 version of the estimator: `tf.compat.v1.estimator.DNN/Linear/DNNLinearCombined*`.
* Default aggregation for canned Estimators is now `SUM_OVER_BATCH_SIZE`. To maintain previous default behavior, please pass `SUM` as the loss aggregation method.
* Canned Estimators don't support the `input_layer_partitioner` arg in the API. If you have this arg, you will have to switch to `tf.compat.v1` canned Estimators.
* `Estimator.export_savedmodel` has been renamed to `export_saved_model`.
* … `tf.compat.v1.Estimator`.
* `tf.feature_column.input_layer` has been deprecated in favor of `tf.keras.layers.DenseFeatures`. v1 feature columns have direct analogues in v2 except for `shared_embedding_columns`, which are not cross-compatible with v1 and v2. Use `tf.feature_column.shared_embeddings` instead.

`tf.keras`:
* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use the `tf.config.threading` APIs.
* `tf.keras.model.save_model` and `model.save` now default to saving a TensorFlow SavedModel. HDF5 files are still supported.
* Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.
* Layers now default to `float32`, and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow 2, and a warning will be issued that starts with: Layer <layer-name> is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.

`tf.lite`:

* Removed `lite.OpHint`, `lite.experimental`, and `lite.constant` from the 2.0 API.

Tensors are no longer hashable, but instead compare element-wise with `==` and `!=`. Use `tf.compat.v1.disable_tensor_equality()` to return to the previous behavior.
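The new semantics can be sketched with plain lists: `==` no longer answers "same tensor object?" with a single bool, but compares element-wise. This is illustrative only, not the TF implementation:

```python
def elementwise_eq(a, b):
    # TF 2.0-style ==: one boolean per element, not one boolean per tensor.
    return [x == y for x, y in zip(a, b)]

assert elementwise_eq([1, 2, 3], [1, 0, 3]) == [True, False, True]

# Consequence: such values cannot be dict keys, mirroring "no longer hashable".
assert not isinstance([True, False, True], bool)
```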
When performing equality operations on Tensors or Variables with incompatible shapes, an exception is no longer thrown. Instead `__eq__` returns False and `__ne__` returns True.
Removed `tf.string_split` from v2 API.

Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.
Add `UnifiedGRU` as the new GRU implementation for TF 2.0. Change the default recurrent activation function for GRU from `hard_sigmoid` to `sigmoid`, and `reset_after` to True in 2.0. Historically, the recurrent activation was `hard_sigmoid` since it is faster than `sigmoid`. With the new unified backend between CPU and GPU modes, and since the CuDNN kernel uses `sigmoid`, we change the default for CPU mode to `sigmoid` as well. With that, the default GRU will be compatible with both the CPU and GPU kernels. This will enable users with GPUs to use the CuDNN kernel by default and get a 10x performance boost in training. Note that this is a checkpoint-breaking change. If users want to use their 1.x pre-trained checkpoints, please construct the layer with GRU(recurrent_activation='hard_sigmoid', reset_after=False) to fall back to 1.x behavior.
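The checkpoint incompatibility stems from the activation change itself. As a rough sketch of the difference, using Keras' piecewise-linear definition of `hard_sigmoid` (clip(0.2·x + 0.5, 0, 1)):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def hard_sigmoid(x: float) -> float:
    # Keras-style piecewise-linear approximation: clip(0.2*x + 0.5, 0, 1)
    return max(0.0, min(1.0, 0.2 * x + 0.5))

# Identical at the midpoint, only approximately equal elsewhere:
assert sigmoid(0.0) == hard_sigmoid(0.0) == 0.5
assert abs(sigmoid(1.0) - hard_sigmoid(1.0)) < 0.05
```

Because a trained GRU's weights are tuned to one of these curves, swapping the activation without retraining (or without the fallback constructor arguments above) degrades the restored model.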
`CUDNN_INSTALL_PATH`, `TENSORRT_INSTALL_PATH`, `NCCL_INSTALL_PATH`, and `NCCL_HDR_PATH` are deprecated. Use `TF_CUDA_PATHS` instead, which supports a comma-separated list of base paths that are searched to find CUDA libraries and headers.
Refer to our public project status tracker and issues tagged with 2.0
on GitHub for insight into recent issues and development progress.
If you experience any snags when using TF 2.0, please let us know at the TF 2.0 Testing User Group. We have a support mailing list as well as weekly testing meetings, and would love to hear your migration feedback and questions.
`tf.contrib`:

* Expose `tf.contrib.proto.*` ops in `tf.io` (they will exist in TF2)

`tf.data`:

* Add support for TensorArrays to `tf.data` Dataset.
* Integrate Ragged Tensors with `tf.data`.
* Extending the TF 2.0 support for `shuffle(..., reshuffle_each_iteration=True)` and `cache()` to work across different Python iterators for the same dataset.
* Removing the `experimental_numa_aware` option from `tf.data.Options`.
* Add `num_parallel_reads` and passing in a Dataset containing filenames into `TextLineDataset` and `FixedLengthRecordDataset`.
* Defaults the `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.
* Promoting `tf.data.experimental.enumerate_dataset` to core as `tf.data.Dataset.enumerate`.
* Promoting `tf.data.experimental.unbatch` to core as `tf.data.Dataset.unbatch`.
* Adds option for introducing slack in the pipeline to reduce CPU contention, via `tf.data.Options().experimental_slack = True`
* Added experimental support for parallel batching to `batch()` and `padded_batch()`. This functionality can be enabled through `tf.data.Options()`.
* Support cancellation of long-running `reduce`.
* Now we use the `dataset` node name as prefix instead of the op name, to identify the component correctly in metrics, for pipelines with repeated components.
* Improve the performance of datasets using `from_tensors()`.
* Promoting `unbatch` from experimental to core API.
* Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices`, and batching and unbatching of nested datasets.

`tf.distribute`:
* Enable `tf.distribute.experimental.MultiWorkerMirroredStrategy` working in eager mode.
* Callbacks are supported in `MultiWorkerMirroredStrategy`.
* Disable `run_eagerly` and distribution strategy if there are symbolic tensors added to the model using `add_metric` or `add_loss`.
* Loss and gradients should now more reliably be correctly scaled w.r.t. the global batch size when using a `tf.distribute.Strategy`.
* Set default loss reduction as `AUTO` for improving reliability of loss scaling with distribution strategy and custom training loops. `AUTO` indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used in distribution strategy scope, outside of built-in training loops such as `tf.keras` `compile` and `fit`, we expect the reduction value to be 'None' or 'SUM'. Using other values will raise an error.
* Support for multi-host `ncclAllReduce` in Distribution Strategy.

`tf.estimator`:

* Replace `tf.contrib.estimator.add_metrics` with `tf.estimator.add_metrics`.
* Use `tf.compat.v1.estimator.inputs` instead of `tf.estimator.inputs`.
* Replace contrib references with `tf.estimator.experimental.*` for APIs in `early_stopping.py` in Estimator.
* … `tf.train.Optimizers` to `tf.keras.optimizers`.
* … `tf.nn.compute_average_loss`, `tf.nn.scale_regularization_loss`.

`tf.keras`:
* `model.save` and `tf.saved_model.save` may now save to the TensorFlow SavedModel format. The model can be restored using `tf.keras.models.load_model`. HDF5 files are still supported, and may be used by specifying `save_format="h5"` when saving.
* Added support for passing a list of lists to the `metrics` argument in Keras `compile`.
* Add `tf.keras.layers.AbstractRNNCell` as the preferred implementation for RNN cells in TF v2. Users can use it to implement RNN cells with custom behavior.
* Updated Keras `fit/evaluate/predict` execution to use only a single unified path by default, unless eager execution has been explicitly disabled, regardless of input type. This unified path places an eager-friendly training step inside of a `tf.function`. With this:
  1. All input types are converted to `Dataset`.
  2. Execution is wrapped in `tf.function` unless `run_eagerly=True` is set in compile.

  The single-path execution code does not yet support all use cases. We fall back to the existing v1 execution paths if your model contains the following:
  1. `sample_weight_mode` in compile
  2. `weighted_metrics` in compile

  You can set `experimental_run_tf_function=False` in compile meanwhile. We have seen a couple of use cases where the model usage pattern is not as expected and would not work with this change.
* Mark Keras `set_session` as `compat.v1` only.
* `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
* `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed; a bug in the resizing implementation was fixed.
* Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers, using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models.
* Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence.
* Update `keras.backend.name_scope` to use TF 2.0 `name_scope`.
* Add v2 module aliases: `tf.losses = tf.keras.losses` & `tf.metrics = tf.keras.metrics` & `tf.initializers = tf.keras.initializers` & `tf.optimizers = tf.keras.optimizers`.
* Add `cumsum` and `cumprod` keras backend functions.
* Raise `ValueError` if an integer is passed to the training APIs.
* Added support for Keras `model.fit()` with `MultiWorkerMirroredStrategy`; tutorial available.
* When using `tf.distribute`, the Keras API is recommended over estimator.
* `steps_per_epoch` and `steps` arguments are supported with numpy arrays.
* … `tf.nn.compute_average_loss`, `tf.nn.scale_regularization_loss`.
* The `Layer` apply and add_variable APIs are deprecated.
* Error out if `add_update`, `add_metric`, `add_loss`, or activity regularizers are used inside of a control flow branch.
* Loss reduction types:
  1. `AUTO`: Indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of built-in training loops such as `tf.keras` `compile` and `fit`, we expect the reduction value to be `SUM` or `NONE`. Using `AUTO` in that case will raise an error.
  2. `NONE`: Weighted losses with one dimension reduced (axis=-1, or axis specified by the loss function). When this reduction type is used with built-in Keras training loops like `fit`/`evaluate`, the unreduced vector loss is passed to the optimizer, but the reported loss will be a scalar value.
  3. `SUM`: Scalar sum of weighted losses.
  4. `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by number of elements in losses. This reduction type is not supported when used with `tf.distribute.Strategy` outside of built-in training loops like `tf.keras` `compile`/`fit`.
* Wraps losses passed to the `compile` API (strings and v1 losses) which are not instances of the v2 `Loss` class in a `LossWrapper` class. => All losses will now use `SUM_OVER_BATCH_SIZE` reduction as default.
* `model.add_loss(symbolic_tensor)` should work in ambient eager.
* … `weighted` prefix from weighted metric names.
* Added `AUCCurve` and `AUCSummationMethod` enums.
* `add_update` can now be passed a zero-arg callable in order to support turning off the update when setting `trainable=False` on a Layer of a Model compiled with `run_eagerly=True`.
* Standardize the LayerNormalization API by replacing the args `norm_axis` and `params_axis` with `axis`.

`tf.lite`:
* Added evaluation script for COCO minival.
* … `QUANTIZE`.
* Add `GATHER` support to NN API delegate.
* Added `EXPAND_DIMS` support to NN API delegate TEST.
* Added `narrow_range` attribute to QuantizeAndDequantizeV2 and V3.
* … `tflite_convert` command line tool in 2.0.
* Added support for `QUANTIZED_16BIT_LSTM`.
* Extracts `NNAPIDelegateKernel` from nnapi_delegate.cc

`TensorRT`:
* Add `TrtGraphConverterV2` API for TensorRT conversion. TensorRT initialization arguments are now passed wrapped in a named-tuple, `TrtConversionParams`, rather than as separate arguments as in `TrtGraphConverter`.
* Changed API to optimize TensorRT engines during graph optimization. This is now done by calling `converter.build()` where previously `is_dynamic_op=False` would be set.
* `converter.convert()` no longer returns a `tf.function`. Now the function must be accessed from the saved model.
* The `converter.calibrate()` method has been removed. To trigger calibration, a `calibration_input_fn` should be provided to `converter.convert()`.

Other:
tf.gradients()
.gather_nd
.ResourceVariable
and Variable
no longer accepts constraint
in the constructor, nor expose it as a @property.SparseToDense
op.image.resize
in 2.0 now supports gradients for the new resize kernels.image.resize
now considers proper pixel centers and has new kernels (incl. anti-aliasing).tf.image
functions to remove duplicate "image" where it is redundant.StringViewVariantWrapper
.Fingerprint64Map
op registrationtf.matmul
.BatchMatMulV2
.tf.math.cumulative_logsumexp
operation.tf.einsum()
.nest.*
methods.strings.byte_split
.tf.strings.split
.tf.string_split
and tf.strings_split
.tf.strings.split
to support inputs with any rank.tf.random.binomial
.key
and skip
methods to random.experimental.Generator
.tf.function
with basic support for CompositeTensors arguments (such as SparseTensor
and RaggedTensor
).parallel_for.pfor
: add converters for Softmax, LogSoftmax, IsNaN, All, Any, and MatrixSetDiag.parallel_for
: add converters for LowerTriangularSolve and Cholesky.parallel_for
: add converters for LogMatrixDeterminant
and MatrixBandPart
.parallel_for
: Add converter for MatrixDiag
.parallel_for
: Add converters for OneHot
, LowerBound
, UpperBound
.parallel_for
: add converter for BroadcastTo
.pfor
converter for Squeeze
.RaggedTensor.placeholder()
.tf.squeeze
.LinearOperator.solve
to take a LinearOperator
.LinearOperatorCirculant
.LinearOperatorHouseholder
.TensorSpec
support for CompositeTensors.tf.linalg.tridiagonal_solve
op.tf.linalg.tridiagonal_solve
.tf.linalg.tridiagonal_solve
.tf.linalg.tridiagonal_mul op
.tf.linalg.tridiagonal_matmul
.LinearOperatorToeplitz
.tf.ragged.boolean_mask
.tf.switch_case
added, which selects a branch_fn based on a branch_index.trainable
arg of tf.Variable.EagerTensor
now supports numpy buffer interface for tensors.FullyConnected
Op to 5.tf.strings.unsorted_segment_join
.topK_v2
.Head
as public API.tf.sparse.from_dense
utility function.TensorFlowTestCase
.nested_value_rowids
for ragged tensors.tf.ragged.stack
.ResizeInputTensor
now works for all delegates.tf.cond
emits a StatelessIf op if the branch functions are stateless and do not touch any resources._TridiagonalSolveGrad
.LogSoftMax
.AddNewInputConstantTensor
.tf.while_loop
emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.tf.cond
, tf.while
and if and while in AutoGraph now accept a nonscalar predicate if it has a single element. This does not affect non-V2 control flow.dynamic
constructor argument in Layer and Model, which should be set to True
when using imperative control flow in the call
method.batch_dims
argument to tf.gather
.tf.gather
is now correct when axis=None
and batch_dims<0
.batch_dims
case.tf.math.nextafter
op.--define=tensorflow_mkldnn_contraction_kernel=0
.tf.linspace(start, stop, num)
now always uses "stop" as the last value (for num > 1).pooling_ops
were removed. Some users may need to add explicit dependencies on :pooling_ops
if they reference the operators from that library.CompositeTensor
base class.Tensor::UnsafeCopyFromInternal
deprecated in favor Tensor::BitcastFrom
.map_vectorization
optimization, reduce the degree of parallelism in the vectorized map node.absl::string_view
.dtype.as_datatype_enum
instead of int(dtype)
to get the same result.LinearOperator.adjoint
and LinearOperator.H
(alias).tf.CriticalSection
.SignatureDef
util functions have been deprecated.Fingerprint64Map
to use aliasesadd_metric
in the graph function mode.precision_mode
argument to TrtGraphConverter
is now case insensitive.This release contains contributions from many people at Google, as well as:
1e100, a6802739, 4d55397500, a6802739, Abdullah Selek, abenmao, Abolfazl Shahbazi, Adam Richter, Adam Weiss, Ag Ramesh, Alan Du, Albin Joy, Alex, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Aman Patel, Amit, Amit Kumar Jaiswal, Amit Srivastava, amoitra, Andreas Eberle, Andrew Lihonosov, Andy Craze, Anshuman Tripathy, Anthony Hsu, Anthony Platanios, Anuj Rawat, arp95, Arpit Shah, Armen Poghosov, armenpoghosov, Astropeak, Ashwin Ramaswami, Arpit Shah, Augustina Ragwitz, Aurelien Geron, AuréLien Geron, avasid, aweers, awesomealex1, Ayush Agrawal, Bas Aarts, Bastian Eichenberger, Bairen Yi, Bayberry Z, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bin Fan, blairhan, BléNesi Attila, Bodin-E, Brandon Carter, Bryan Cutler, candy.dc, Cao Zongyan, Casper Da Costa-Luis, Chao Liu, Chen Guoyin, chenchc, chengchingwen, chie8842, Christian Hansen, Christoph Boeddeker, Christopher Yeh, Clayne Robison, Coady, Patrick, crafet, csukuangfj, ctiijima, Dan Jarvis, Dan Lazewatsky, Daniel Ingram, Daniel Rasmussen, Daniel Salvadori, Dave Airlie, David Norman, Dayananda V, delock, Denis Khalikov, Deven Desai, Dheeraj Rajaram Reddy, Diego Caballero, dmitrievanthony, Donovan Ong, Drew Szurko, Duncan Dean, Duncan Riach, Dustin Neighly, Dwight J Lyle, Eamon Ito-Fisher, eashtian3, Edward Forgacs, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Evgeniy Polyakov, Fangjun Kuang, Federico Martinez, Fei Hu, Felix Lemke, Filip Matzner, FlashTek, fo40225, formath, FrançOis Chollet, frreiss, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. 
Hussain Chinoy, Gabriel, Gautam, gehring, Geoffrey Irving, George Grzegorz Pawelczak, Grzegorz Pawelczak, George Sterpu, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, Gyoung-Yoon Ryoo, haison, Hanton Yang, HanGuo97, Haraldur TóMas HallgríMsson, Hari Shankar, hehongliang, Heungsub Lee, Hoeseong Kim, Huan Li (李卓桓), HåKon Sandsmark, I-Hong, I-Hong Jhuo, Ilham Firdausi Putra, Ilango R, Imran Salam, Innovimax, Jacky Ko, Irene Dea, Ivan Habernal, Jakub Lipinski, Jacky, Jason Zaman, Jason Zavaglia, jayhpark530, jcf94, jefby, Jeff Daily, Jeff Poznanovic, Jeffrey Poznanovic, Jekyll Lai, jer, Jeroen BéDorf, jerryyin, jhalakp, jiakai, Jia Qingtong, Jiankang, JiangXIAO, Joe Bowser, Joe Q, Joe Quadrino, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Jonas Rauber, Jonathan Kyl, Jonathan, Joon, Joppe Geluykens, Joseph Friedman, Josh Beal, jtressle, Julian Niedermeier, Junqin Zhang, Justin Dujardin, Justin Tunis, jwu, K. Hodges, kaixih, Kaixi Hou, kjopek, Karl Lessard, Karl Weinmeister, Karthik Muthuraman, Kashif Rasul, Kay Zhu, Kbhute-Ibm, KDR, Keno Fischer, Kevin Mader, khanhlvg, Kilaru Yasaswi Sri Chandra Gandhi, Koan-Sin Tan, Koock Yoon, kouml, ktaebum, Kyuwon Kim, Lakshay Tokas, Laurent Le Brun, leike666666, leonard951, Leslie-Fang, Letian Kang, Li, Guizi, Loo Rong Jie, Lucas Hendren, Lukas Folle, Lukas Geiger, Luke Han, luxupu, lvli, Ma, Guokai, Mahmoud Abuzaina, Maksym Kysylov, Mandar Deshpande, manhyuk, Manraj Singh Grover, Marco Gaido, Marek Drozdowski, Margaret Maynard-Reid, Mark Ryan, mars20, Mateusz Chudyk, Matt Conley, mbhuiyan, mdfaijul, Mei Jie, Melissa Grueter, merturl, MichaelKonobeev, Michael KäUfl, Michal W. Tarnowski, MickaëL Schoentgen, Miguel Morin, Mihail Salnikov, Mikalai Drabovich, Mike Arpaia, Mike Holcomb, minds, monklof, Moses Marin, mpppk, Mr. 
Metal, Mshr-H, musikisomorphie, nammbash, Natalia Gimelshein, Nathan Luehr, Nayana-Ibm, Nayana Thorat, neargye, Neeraj Pradhan, Nehal J Wani, Neil, Nick, Nick Lewycky, Niels Ole Salscheider, Niklas SilfverströM, Niranjan Hasabnis, Nuka-137, Nutti, ocjosen, olicht, omeir1, P Sudeepam, Paige Bailey, Palmer Lao, Pan Daoxin, Pariksheet Pinjari, Pasquale Minervini, Patrick J. Lopresti, Patrik Gustavsson, Pavel Akhtyamov, Pavel Samolysov, PENGWA, per1234, PeterLee, Phan Van Nguyen Duc, Philipp Jund, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, R S Nikhil Krishna, Rajeshwar Reddy T, Ramon ViñAs, Rasmus Diederichsen, Reuben Morais, robert, Rohit Gupta, Roland Zimmermann, Roman Soldatow, RonLek, Ruizhe, Ryan Jiang, saishruthi, Saleem Abdulrasool, Samantha Andow, Sami Kama, Sana-Damani, Saurabh Deoras, sdamani, Sean Morgan, seanshpark, Sebastien Iooss, Serv-Inc, Severen Redwood, Shahzad Lone, Shashank Gupta, shashvat, Shashvat Chand Shahi, Shubham Goyal, Shashi, Sigrid Keydana, Siju, Siju Samuel, sleighsoft, smilu97, Snease-Abq, Son Tran, Spencer Schaber, sremedios, Srini511, srinivasan.narayanamoorthy, Steve Lang, Steve Nesae, Subin, Sumesh Udayakumaran, Sungmann Cho, sunway513, Supriya Rao, sxwang, Tae-Hwan Jung, Taehoon Lee, Takeo Sawada, Taylor Jakobson, Taylor Thornton, Ted Chang, TengLu, terryky, ThisIsIsaac, ThisIsPIRI, Thomas Deegan, Thomas Hagebols, tianyapiaozi, Till Hoffmann, Tim Zaman, tomguluson92, Tongxuan Liu, Trent Lo, Trevor Morris, TungJerry, Tyorden, Uday Bondhugula, v1incent, Vagif, Vasileios Lioutas, vbvg2008, vcarpani, Vijay Ravichandran, Vikram Tiwari,Viktor Gal, Vishwak Srinivasan, Vincent, Vishnuvardhan Janapati, Vitor-Alves, Vivek Suryamurthy, wangsiyu, wateryzephyr, WeberXie, Wei Wang, WeijieSun, Wen-Heng (Jack) Chung, wenxizhu, Will Battel, William D. 
Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xiaoquan Kong, Xin, Xinping Wang, Yan Facai (颜发才), Yann-Yy, Yasir Modak, Yasuhiro Matsumoto, ymodak, Yong Tang, Yongfeng Gu, Younes Khoudli, Yuan Lin, Yuan (Terry) Tang, Yuchen Ying, Yves-Noel Weweler, zhangyujing, zjjott, zyeric, 王振华 (Zhenhua Wang), 黄鑫
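The loss-reduction options described earlier in these notes (NONE, SUM, SUM_OVER_BATCH_SIZE) can be illustrated with a minimal pure-Python sketch of their arithmetic. This is not the TensorFlow API; the function name is illustrative only:

```python
def reduce_loss(weighted_losses, reduction):
    """Illustrative model of tf.keras.losses.Reduction semantics."""
    if reduction == "NONE":
        # NONE: return the per-sample (already axis-reduced) losses unchanged.
        return weighted_losses
    if reduction == "SUM":
        # SUM: scalar sum of the weighted losses.
        return sum(weighted_losses)
    if reduction == "SUM_OVER_BATCH_SIZE":
        # SUM_OVER_BATCH_SIZE: scalar SUM divided by the number of elements.
        return sum(weighted_losses) / len(weighted_losses)
    raise ValueError("Unsupported reduction: " + reduction)

losses = [0.5, 1.5, 2.0, 4.0]
print(reduce_loss(losses, "SUM"))                  # 8.0
print(reduce_loss(losses, "SUM_OVER_BATCH_SIZE"))  # 2.0
```

In real code the choice is made via the `reduction` argument of a `tf.keras.losses.Loss` subclass; the sketch only shows why SUM and SUM_OVER_BATCH_SIZE differ by a factor of the batch size.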
AUTO
for improving reliability of loss scaling with distribution strategy and custom training loops. AUTO
indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE
. When used in distribution strategy scope, outside of built-in training loops such as tf.keras
compile
and fit
, we expect the reduction value to be NONE or SUM. Using other values will raise an error.compile
API (strings and v1 losses) which are not instances of v2 Loss
class in LossWrapper
class. => All losses will now use SUM_OVER_BATCH_SIZE
reduction as default.run_eagerly
and distribution strategy if there are symbolic tensors added to the model using add_metric
or add_loss
.ResourceVariable
and Variable
no longer accepts constraint
in the constructor, nor expose it as a @property.map_vectorization
optimization, reduce the degree of parallelism in the vectorized map node.norm_axis
and params_axis
with axis
.clear_losses
API to be able to clear losses at the end of forward pass in a custom training loop in eager.metrics
param in Keras compile
.cumsum
and cumprod
keras backend functions.dynamic
constructor argument in Layer and Model, which should be set to True when using imperative control flow in the call
method.add_metric
in the graph function mode.add_update
can now be passed a zero-arg callable in order to support turning off the update when setting trainable=False
on a Layer of a Model compiled with run_eagerly=True
.weighted
prefix from weighted metric names.defun
, providing an escape hatch to continue using the legacy Defun
.tensorflow_core
and tensorflow
is just a virtual pip package. No code changes are needed for projects using TensorFlow; the change is transparent.This release contains contributions from many people at Google, as well as:
1e100, 4d55397500, a6802739, abenmao, Adam Weiss, Ag Ramesh, Alan Du, Albin Joy, Alex, Aman Patel, Amit, Amit Kumar Jaiswal, Amit Srivastava, Andreas Eberle, Andy Craze, Anthony Platanios, Armen Poghosov, armenpoghosov, arp95, Arpit Shah, Ashwin Ramaswami, Aurelien Geron, AuréLien Geron, aweers, awesomealex1, Ayush Agrawal, Ben Barsdell, Bharat Raghunathan, Bhavani Subramanian, blairhan, BléNesi Attila, Brandon Carter, candy.dc, Chao Liu, chenchc, chie8842, Christian Hansen, Christian Sigg, Clayne Robison, crafet, csukuangfj, ctiijima, Dan Jarvis, Dan Lazewatsky, Daniel Ingram, Daniel Salvadori, Dave Airlie, David Norman, Dayananda V, Dayananda-V, delock, Denis Khalikov, Deven Desai, Dheeraj Rajaram Reddy, dmitrievanthony, Donovan Ong, Drew Szurko, Duncan Riach, Dustin Neighly, Edward Forgacs, EFanZh, Fei Hu, Felix Lemke, Filip Matzner, fo40225, frreiss, Gautam, gehring, Geoffrey Irving, Grzegorz George Pawelczak, Grzegorz Pawelczak, Gyoung-Yoon Ryoo, HanGuo97, Hanton Yang, Hari Shankar, hehongliang, Heungsub Lee, Hoeseong Kim, I-Hong Jhuo, Ilango R, Innovimax, Irene Dea, Jacky Ko, Jakub Lipinski, Jason Zaman, jcf94, Jeffrey Poznanovic, Jens Elofsson, Jeroen BéDorf, Jia Qingtong, Jiankang, Joe Q, Joe Quadrino, Joeran Beel, Jonas Rauber, Jonathan, Jonathan Kyl, Joppe Geluykens, Joseph Friedman, jtressle, jwu, K Yasaswi Sri Chandra Gandhi, K. 
Hodges, Kaixi Hou, Karl Lessard, Karl Weinmeister, Karthik Muthuraman, Kashif Rasul, KDR, Keno Fischer, Kevin Mader, kjopek, Koan-Sin Tan, kouml, ktaebum, Lakshay Tokas, Laurent Le Brun, Letian Kang, Li, Guizi, Loo Rong Jie, Lucas Hendren, Lukas Geiger, Luke Han, luxupu, Ma, Guokai, Mahmoud Abuzaina, Mandar Deshpande, manhyuk, Marco Gaido, Marek Drozdowski, Mark Collier, Mark Ryan, mars20, Mateusz Chudyk, Matt Conley, MattConley, mbhuiyan, mdfaijul, Melissa Grueter, Michael KäUfl, MickaëL Schoentgen, Miguel Morin, Mihail Salnikov, Mike Arpaia, Mike Holcomb, monklof, Moses Marin, Mshr-H, nammbash, Natalia Gimelshein, Nayana-Ibm, neargye, Neeraj Pradhan, Nehal J Wani, Nick, Niels Ole Salscheider, Niranjan Hasabnis, nlewycky, Nuka-137, Nutti, olicht, P Sudeepam, Palmer Lao, Pan Daoxin, Pariksheet Pinjari, Pavel Samolysov, PENGWA, Pooya Davoodi, R S Nikhil Krishna, Rohit Gupta, Roman Soldatow, rthadur, Ruizhe, Ryan Jiang, Samantha Andow, Sami Kama, Sana-Damani, Saurabh Deoras, sdamani, seanshpark, Sebastien Iooss, Serv-Inc, Shahzad Lone, Shashank Gupta, Shashi, shashvat, shashvatshahi1998, Siju, Siju Samuel, Snease-Abq, Spencer Schaber, sremedios, srinivasan.narayanamoorthy, Steve Lang, Steve Nesae, Sumesh Udayakumaran, Supriya Rao, Taylor Jakobson, Taylor Thornton, Ted Chang, ThisIsPIRI, Thomas Deegan, Thomas Hagebols, tianyapiaozi, Tim Zaman, tomguluson92, Tongxuan Liu, TungJerry, v1incent, Vagif, vcarpani, Vikram Tiwari, Vishwak Srinivasan, Vitor-Alves, wangsiyu, wateryzephyr, WeberXie, WeijieSun, Wen-Heng (Jack) Chung, wenxizhu, Will Battel, William D. Irons, wyzhao, Xin, Yasuhiro Matsumoto, ymodak, Yong Tang, Younes Khoudli, Yuan Lin, Yves-Noel Weweler, Zantares, zjjott, 卜居, 王振华 (Wang Zhenhua), 黄鑫
png_archive
dependency to 1.6.37 to not be affected by CVE-2019-7317, CVE-2018-13785, and CVE-2018-14048.sqlite
dependency to 3.28.0 to not be affected by CVE-2018-20506, CVE-2018-20346, and CVE-2018-20505.tf.lite
and source code is now under tensorflow/lite
rather than tensorflow/contrib/lite
.tf.constant
.gain
argument of convolutional orthogonal initializers (convolutional_delta_orthogonal
, convolutional_orthogonal_1D
, convolutional_orthogonal_2D
, convolutional_orthogonal_3D
) have consistent behavior with the tf.initializers.orthogonal
initializer, i.e. scale the output l2-norm by gain
and NOT by sqrt(gain)
. (Note that these functions are currently in tf.contrib
which is not guaranteed backward compatible).tf.acos
, tf.acosh
, tf.add
, tf.as_string
, tf.asin
, tf.asinh
, tf.atan
, tf.atan2
, tf.atanh
, tf.cos
, tf.cosh
, tf.equal
, tf.exp
, tf.floor
, tf.greater
, tf.greater_equal
, tf.less
, tf.less_equal
, tf.log
, tf.log1p
, tf.logical_and
, tf.logical_not
, tf.logical_or
, tf.maximum
, tf.minimum
, tf.not_equal
, tf.sin
, tf.sinh
, tf.tan
tf.data.Dataset.shard
.saved_model.loader.load
which is replaced by saved_model.load
and saved_model.main_op
, which will be replaced by saved_model.main_op
in V2.Variable.count_up_to
and tf.count_up_to
in favor of Dataset.range
.confusion_matrix
op as tf.math.confusion_matrix
instead of tf.train.confusion_matrix
.tf.dtypes.
endpoint for every constant in dtypes.py. Moving endpoints in versions.py to corresponding endpoints in tf.sysconfig.
and tf.version.
. Moving all constants under tf.saved_model
submodules to tf.saved_model
module. New endpoints are added in V1 and V2 but existing endpoint removals are only applied in V2.tf.register_tensor_conversion_function
.tf.contrib.saved_model.save_keras_model
.LinearOperator.matmul
now returns a new LinearOperator
.ignore_unknown
argument to parse_values
which suppresses ValueError for unknown hyperparameter types (such hyperparameters are ignored). Add tf.linalg.matvec
convenience function.tf.einsum()
raises ValueError
for unsupported equations like "ii->"
.tf.signal.dct
and tf.signal.idct
.round_mode
to QuantizeAndDequantizeV2
op to select rounding algorithm.unicode_encode
, unicode_decode
, unicode_decode_with_offsets
, unicode_split
, unicode_split_with_offset
, and unicode_transcode
ops. Amongst other things, this Op adds the ability to encode, decode, and transcode a variety of input text encoding formats into the main Unicode encodings (UTF-8, UTF-16-BE, UTF-32-BE)SpaceToDepth
supports uint8 data type.tf.nn.safe_embedding_lookup_sparse
, tf.nn.sampled_softmax
and tf.nn.nce_loss
.tf.spectral
into tf.signal
for TensorFlow 2.0.tensorflow/contrib/lite
to tensorflow/lite
.tf.contrib
:rate
argument, keep_prob
is deprecated.tf.contrib.estimator
were changed to tf.estimator
:tf.contrib.estimator.BaselineEstimator
with tf.estimator.BaselineEstimator
tf.contrib.estimator.DNNLinearCombinedEstimator
with tf.estimator.DNNLinearCombinedEstimator
tf.contrib.estimator.DNNEstimator
with tf.estimator.DNNEstimator
tf.contrib.estimator.LinearEstimator
with tf.estimator.LinearEstimator
tf.contrib.estimator.InMemoryEvaluatorHook
and tf.estimator.experimental.InMemoryEvaluatorHook.tf.contrib.estimator.make_stop_at_checkpoint_step_hook
with tf.estimator.experimental.make_stop_at_checkpoint_step_hook
.tf.contrib.signal
to tf.signal
(preserving aliases in tf.contrib.signal).tf.contrib.estimator.export_all_saved_models
and related should switch to tf.estimator.Estimator.experimental_export_all_saved_models
.tf.data.experimental.StatsOptions()
, to configure options to collect statistics from tf.data.Dataset
pipeline using StatsAggregator
. Add nested option, experimental_stats
(which takes a tf.data.experimental.StatsOptions
object), to tf.data.Options
. Deprecates tf.data.experimental.set_stats_aggregator
.tf.data.experimental.OptimizationOptions()
, to configure options to enable tf.data
performance optimizations. Add nested option, experimental_optimization
(which takes a tf.data.experimental.OptimizationOptions
object), to tf.data.Options
. Remove performance optimization options from tf.data.Options
, and add them under tf.data.experimental.OptimizationOptions
instead.map_and_batch_fusion
and noop_elimination
optimizations by default. They can be disabled by configuring tf.data.experimental.OptimizationOptions
to set map_and_batch = False
or noop_elimination = False
respectively. To disable all default optimizations, set apply_default_optimizations = False
.map_and_filter_fusion
.tf.Variable
s.tf.data.Dataset.make_one_shot_iterator()
in V1, removed it from V2, and added tf.compat.v1.data.make_one_shot_iterator().tf.data.Dataset.make_initializable_iterator()
in V1, removed it from V2, and added tf.compat.v1.data.make_initializable_iterator()
.tf.data
transformations.tf.data.Dataset
implementers: Added tf.data.Dataset._element_structure property
to replace Dataset.output_{types,shapes,classes}
.num_parallel_calls
of tf.data.Dataset.interleave
and tf.data.Dataset.map
work in Eager mode.EVP_MD_CTX_destroy
.:android_tensorflow_lib_selective_registration*
targets, use :android_tensorflow_lib_lite*
targets instead.RoundToEven
function to xla/client/lib/math.h.TF_XLA_DEBUG_OPTIONS_PASSTHROUGH
set to "1" or "true" allows the debug options passed within an XRTCompile op to be passed directly to the XLA compilation backend. If such variable is not set (service side), only a restricted set will be passed through.tf.contrib.estimator.BaselineEstimator
with tf.estimator.BaselineEstimator
tf.contrib.estimator.DNNLinearCombinedEstimator
with tf.estimator.DNNLinearCombinedEstimator
tf.contrib.estimator.DNNEstimator
with tf.estimator.DNNEstimator
tf.contrib.estimator.LinearEstimator
with tf.estimator.LinearEstimator
tf.contrib.estimator.export_all_saved_models
and related should switch to tf.estimator.Estimator.experimental_export_all_saved_models
.regression_head
to the new Head API for Canned Estimator V2.multi_class_head
to Head API for Canned Estimator V2.tf.contrib.estimator.InMemoryEvaluatorHook
and tf.contrib.estimator.make_stop_at_checkpoint_step_hook
with tf.estimator.experimental.InMemoryEvaluatorHook
and tf.estimator.experimental.make_stop_at_checkpoint_step_hook
This release contains contributions from many people at Google, as well as:
Abhinav Upadhyay, Ag Ramesh, akikaaa, Alexis Louis, Anders Huss, Andreas Madsen, Andrew Banchich, Andy Craze, Anton Dmitriev, Artem Malykh, Avijit-Nervana, Balint Cristian, Benjamin Tan Wei Hao, Bhavani Subramanian, Brendan Finan, Brian Nemsick, Bryan Cutler, By Shen, Cao Zongyan, Castiel, Chris Antaki, Christian Goll, Cibifang, Clayne Robison, Codrut Grosu, Cong Xu, Dalmo Cirne, Daniel Hunter, Dougal J. Sutherland, Edvard Fagerholm, EFanZh, Erik Smistad, Evgeniy Polyakov, Feiyang Chen, franklin5, Fred Reiss, Gautam, gehring, Geoffrey Irving, George Sterpu, Gitea, Grzegorz George Pawelczak, Guozhong Zhuang, himkt, Hoeseong Kim, Huan Li (李卓桓), HuiyangFei, hyunyoung, Isaac Burbank, jackonan, Jacky Ko, Jason Furmanek, Jason Zaman, Javier Luraschi, Jiang,Zhoulong, joaak, John Lin, Jonathan Wyatt Hoech, josephyearsley, Josh Gordon, Julian Niedermeier, Karl Lessard, Keno Fischer, lanhin, Leon Graser, leondgarse, Li, Guizi, Li, Yiqiang, lxl910915, Mahmoud Abuzaina, manhyuk, Marcela Morales Quispe, margaretmz, Matt Conley, Max Pumperla, mbhuiyan, mdfaijul, Meng, Peng, Michael, Michael Gielda, mrTsjolder, Muhammad Wildan, neargye, Nehal J Wani, NEWPLAN, Niranjan Hasabnis, Nutti, olicht, Pan Daoxin, Pedro Monreal, Peng Yu, pillarpond, Pooya Davoodi, qiezi, Rholais Lii, Richard Yu, Rin Arakaki, Roger Iyengar, sahilbadyal, Sami Kama, Sandip Giri, Scott Leishman, Serge Panev, Seunghoon Park, Shafi Dayatar, shengfuintel, Shimin Guo, Siju, silent567, Stefan Dyulgerov, steven, Tao Wei, Thor Johnsen, Tingbo Lu, tomguluson92, Tongxuan Liu, Trevor Morris, Ubuntu, Vadim Borisov, vanderliang, wangsiyu, Wen Yun, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, Xiaoming (Jason) Cui, Yan Facai (颜发才), Yanbo Liang, Yaniv Blumenfeld, Yash Gaurkar, Yicheng Fan, Yong Tang, Yongjoon Lee, Yuan (Terry) Tang, Yuxin Wu, zldrobit
tf.contrib.saved_model.save_keras_model()
) and used with Tensorflow Serving.tf.data.Dataset
.tf.data.Options()
, tf.data.Dataset.options()
, and tf.data.Dataset.with_options()
respectively.tf.data.Dataset.reduce()
API allows users to reduce a finite dataset to a single element using a user-provided reduce function.tf.data.Dataset.window()
API allows users to create finite windows of input dataset; when combined with the tf.data.Dataset.reduce()
API, this allows users to implement customized batching.tensorflow::data
namespace.num_parallel_calls
to tf.data.Dataset.interleave
.tf.contrib
:tf.contrib.linalg
. tf.linalg
should be used instead.tf.contrib.get_signature_def_by_key(metagraph_def, signature_def_key)
with meta_graph_def.signature_def[signature_def_key]
. Catching a ValueError exception thrown by tf.contrib.get_signature_def_by_key
should be replaced by catching a KeyError exception.tf.contrib.data
tf.nn.softplus
and tf.nn.softsign
OpDefs. This is a bugfix; these ops were never meant to support integers.tf.GraphKeys.GLOBAL_VARIABLES
.This release contains contributions from many people at Google, as well as:
(David) Siu-Kei Muk, Ag Ramesh, Anton Dmitriev, Artem Sobolev, Avijit-Nervana, Bairen Yi, Bruno Goncalves, By Shen, candy.dc, Cheng Chen, Clayne Robison, coder3101, Dao Zhang, Elms, Fei Hu, feiquan, Geoffrey Irving, Guozhong Zhuang, hellcom, Hoeseong Kim, imsheridan, Jason Furmanek, Jason Zaman, Jenny Sahng, jiefangxuanyan, Johannes Bannhofer, Jonathan Homer, Koan-Sin Tan, kouml, Loo Rong Jie, Lukas Geiger, manipopopo, Ming Li, Moritz KröGer, Naurril, Niranjan Hasabnis, Pan Daoxin, Peng Yu, pengwa, rasmi, Roger Xin, Roland Fernandez, Sami Kama, Samuel Matzek, Sangjung Woo, Sergei Lebedev, Sergii Khomenko, shaohua, Shaohua Zhang, Shujian2015, Sunitha Kambhampati, tomguluson92, ViníCius Camargo, wangsiyu, weidankong, Wen-Heng (Jack) Chung, William D. Irons, Xin Jin, Yan Facai (颜发才), Yanbo Liang, Yash Katariya, Yong Tang, 在原佐为
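The `window()`/`reduce()` combination described in the notes above (finite windows of a dataset, each reduced to a single element, to implement customized batching) can be sketched in pure Python. These list-based stand-ins are illustrative, not the tf.data API:

```python
def window(elements, size):
    # Yield consecutive finite windows of the input, in the spirit of
    # tf.data.Dataset.window() (list-based stand-in, not the TF API).
    for i in range(0, len(elements), size):
        yield elements[i:i + size]

def reduce_window(win, reduce_fn, initial):
    # Collapse one window to a single element, like Dataset.reduce()
    # applies a user-provided reduce function over a finite dataset.
    acc = initial
    for x in win:
        acc = reduce_fn(acc, x)
    return acc

# Customized batching: sum each window of three elements.
batches = [reduce_window(w, lambda a, b: a + b, 0)
           for w in window(list(range(7)), 3)]
print(batches)  # [3, 12, 6]
```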
fit
, evaluate
and predict
to distribute their model on multiple GPUs.RandomUniform
, RandomNormal
, and TruncatedNormal
initializers have been changed to match those in external Keras.model.get_config()
on a Sequential model now returns a config dictionary (consistent with other Model instances) instead of a list of configs for the underlying layers.num_parallel_parser_calls
argument from tf.contrib.data.make_csv_dataset()
. [tf.data] Remove num_parallel_parser_calls
argument from tf.contrib.data.make_csv_dataset()
.tf.data.Dataset.list_files()
raises an exception at initialization time if the argument matches no files.tf.contrib.data.reduce_dataset
which can be used to reduce a dataset to a single element.tf.contrib.data.sliding_window_batch
.tf.contrib
:implementation
argument to tf.keras.layers.LocallyConnected2D
and tf.keras.layers.LocallyConnected1D
. The new mode (implementation=2
) performs the forward pass as a single dense matrix multiplication, allowing dramatic speedups in certain scenarios (but worse performance in others; see the docstring). The option also allows using padding=same
.TFDBG_DISK_BYTES_LIMIT
to allow adjustment of this upper limit.This release contains contributions from many people at Google, as well as:
Aapeli, adoda, Ag Ramesh, Amogh Mannekote, Andrew Gibiansky, Andy Craze, Anirudh Koul, Aurelien Geron, Avijit, Avijit-Nervana, Ben, Benjamin H. Myara, bhack, Brett Koonce, Cao Zongyan, cbockman, cheerss, Chikanaga Tomoyuki, Clayne Robison, cosine0, Cui Wei, Dan J, David, David Norman, Dmitry Klimenkov, Eliel Hojman, Florian Courtial, fo40225, formath, Geoffrey Irving, gracehoney, Grzegorz Pawelczak, Guoliang Hua, Guozhong Zhuang, Herman Zvonimir DošIlović, HuiyangFei, Jacker, Jan HüNnemeyer, Jason Taylor, Jason Zaman, Jesse, Jiang,Zhoulong, Jiawei Zhang, Jie, Joe Yearsley, Johannes Schmitz, Jon Perl, Jon Triebenbach, Jonathan, Jonathan Hseu, Jongmin Park, Justin Shenk, karl@kubx.ca, Kate Hodesdon, Kb Sriram, Keishi Hattori, Kenneth Blomqvist, Koan-Sin Tan, Li Liangbin, Li, Yiqiang, Loo Rong Jie, Madiyar, Mahmoud Abuzaina, Mark Ryan, Matt Dodge, mbhuiyan, melvinljy96, Miguel Mota, Nafis Sadat, Nathan Luehr, naurril, Nehal J Wani, Niall Moran, Niranjan Hasabnis, Nishidha Panpaliya, npow, olicht, Pei Zhang, Peng Wang (Simpeng), Peng Yu, Philipp Jund, Pradeep Banavara, Pratik Kalshetti, qwertWZ, Rakesh Chada, Randy West, Ray Kim, Rholais Lii, Robin Richtsfeld, Rodrigo Silveira, Ruizhi, Santosh Kumar, Seb Bro, Sergei Lebedev, sfujiwara, Shaba Abhiram, Shashi, SneakyFish5, Soila Kavulya, Stefan Dyulgerov, Steven Winston, Sunitha Kambhampati, Surry Shome, Taehoon Lee, Thor Johnsen, Tristan Rice, TShapinsky, tucan, tucan9389, Vicente Reyes, Vilmar-Hillow, Vitaly Lavrukhin, wangershi, weidan.kong, weidankong, Wen-Heng (Jack) Chung, William D. Irons, Wim Glenn, XFeiF, Yan Facai (颜发才), Yanbo Liang, Yong Tang, Yoshihiro Yamazaki, Yuan (Terry) Tang, Yuan, Man, zhaoyongke, ÁRon Ricardo Perez-Lopez, 张天启, 张晓飞
tf.keras
:tf.lite
runtime now supports complex64
.tf.data
.tf.estimator.train_and_evaluate
which does not reload checkpoints for evaluation.RunConfig
now sets device_filters to restrict how workers and PS can communicate. This can speed up training and ensure clean shutdowns in some situations. But if you have jobs that require communication between workers, you will have to set custom session_options in your RunConfig
.tf.contrib.distributions
to Tensorflow Probability (TFP). tf.contrib.distributions
is now deprecated and will be removed by the end of 2018.tf.debugging
, tf.dtypes
, tf.image
, tf.io
, tf.linalg
, tf.manip
, tf.math
, tf.quantization
, tf.strings
tf.data
:tf.contrib.data.group_by_reducer()
is now available via the public API.tf.contrib.data.choose_from_datasets()
is now available via the public API.drop_remainder
argument to tf.data.Dataset.batch()
and tf.data.Dataset.padded_batch()
, deprecating tf.contrib.data.batch_and_drop_remainder()
and tf.contrib.data.padded_batch_and_drop_remainder()
.tf.estimator
:Estimator
s now use custom savers included in EstimatorSpec
scaffolds for saving SavedModels during export.EstimatorSpec
will now add a default prediction output for export if no export_output
is provided, eliminating the need to explicitly include a PredictOutput
object in the model_fn
for simple use-cases.DNNClassifier
, DNNRegressor
, and DNNEstimator
.synchronization
and aggregation
args to get_variable(). These args will be used for distributed variables.synchronization
and aggregation
args to the layer add_weight()
API. These args will be used for distributed variables.tf.losses.*
do not add to the global collection when executing eagerly (to avoid leaking memory).tf.train.MonitoredTrainingSession()
.tf.contrib.rnn
.tf.random_gamma
with respect to the alpha parameter.tf.igamma(a, x)
and tf.igammac(a, x)
with respect to a.tf.spectral.idct(type=2|3)
.TimeDistributed
.WALSComputePartialLhsAndRhsOp
.tf.image
namespace: tf.image.extract_image_patches
tf.debugging
namespace: tf.debugging.check_numerics
, tf.debugging.is_finite
, tf.debugging.is_inf
, tf.debugging.is_nan
.tf.dtypes
namespace: tf.dtypes.as_string
.tf.io
namespace: tf.io.decode_base64
, tf.io.decode_compressed
, tf.io.decode_json_example
, tf.io.decode_raw
, tf.io.encode_base64
, tf.io.matching_files
, tf.io.parse_tensor
, tf.io.read_file,
tf.io.write_file.tf.linalg.cross
, tf.linalg.tensor_diag
(corresponds to tf.diag
), tf.linalg.tensor_diag_part
(corresponds to tf.diag_part
).tf.manip.batch_to_space_nd
, tf.manip.gather_nd
, tf.manip.reshape
, tf.manip.reverse
, tf.manip.scatter_nd
, tf.manip.space_to_batch_nd
, tf.manip.tile
tf.math.acos
, tf.math.acosh
, tf.math.add
, tf.math.asin
, tf.math.asinh
, tf.math.atan
, tf.math.atan2
, tf.math.atanh
, tf.math.betainc
, tf.math.ceil
, tf.math.cos
, tf.math.cosh
, tf.math.digamma
, tf.math.equal
, tf.math.erfc
, tf.math.exp
, tf.math.expm1
, tf.math.floor
, tf.math.greater
, tf.math.greater_equal
, tf.math.igamma
, tf.math.igammac
, tf.math.invert_permutation
, tf.math.less
, tf.math.less_equal
, tf.math.lgamma
, tf.math.log
, tf.math.log1p
, tf.math.logical_and
, tf.math.logical_not
, tf.math.logical_or
, tf.math.maximum
, tf.math.minimum
, tf.math.not_equal
, tf.math.polygamma
, tf.math.reciprocal
, tf.math.rint
, tf.math.rsqrt
, tf.math.segment_max
, tf.math.segment_mean
, tf.math.segment_min
, tf.math.segment_prod
, tf.math.segment_sum
, tf.math.sin
, tf.math.sinh
, tf.math.softplus
, tf.math.softsign
, tf.math.squared_difference
, tf.math.tan
, tf.math.unsorted_segment_max
, tf.math.unsorted_segment_min
, tf.math.unsorted_segment_prod
, tf.math.unsorted_segment_sum
, tf.math.zeta
.tf.quantization
namespace: tf.quantization.dequantize
, tf.quantization.fake_quant_with_min_max_args
, tf.quantization.fake_quant_with_min_max_args_gradient
, tf.quantization.fake_quant_with_min_max_vars
, tf.quantization.fake_quant_with_min_max_vars_gradient
, tf.quantization.fake_quant_with_min_max_vars_per_channel
, tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient
.tf.strings.join
(corresponds to tf.string_join
), tf.strings.regex_replace
, tf.strings.to_number
(corresponds to tf.string_to_number
), tf.strings.strip
(corresponds to tf.string_strip
), tf.strings.substr
, tf.strings.to_hash_bucket
(corresponds to tf.string_to_hash_bucket
), tf.strings.to_hash_bucket_fast
(corresponds to tf.string_to_hash_bucket_fast
), tf.strings.to_hash_bucket_strong
(corresponds to tf.string_to_hash_bucket_strong
).This release contains contributions from many people at Google, as well as:
Ag Ramesh, Alex Wiltschko, Alexander Pantyukhin, Amogh Mannekote, An Jiaoyang, Andrei Nigmatulin, Andrew Ginns, BjøRn Moholt, Brett Koonce, Chengzhi Chen, Chinmay Das, Christian Ertler, Christoph Boeddeker, Clayne Robison, Courtial Florian, ctiijima, Dan Douthit, Dan J, Dan Ringwalt, EFanZh, Emanuele Ballarin, eqy, Evgeniy Zheltonozhskiy, Freedom" Koan-Sin Tan, FréDéRic Branchaud-Charron, G K, gracehoney, Guillaume Klein, Guozhong Zhuang, Hsien-Yang Li, hsm207, ImSheridan, Jayaram Bobba, Jiandong Ruan, Jie, Joel Shor, Jonas Rauber, Jongmin Baek, jsawruk, Karan Kaw, Karl Lessard, karl@kubx.ca, Kb Sriram, KinmanLam, leiiwang, Li, Yiqiang, Loo Rong Jie, Mahmoud Abuzaina, Mahmoud Aslan, ManHyuk, Martin Patz, Martin Zeitler, mktozk, Mohammad Ashraf Bhuiyan, mrTsjolder, Naman Bhalla, Nick Felt, Nicolas Lopez, Niranjan Hasabnis, Nishidha Panpaliya, Nitish, nrstott, Nutti, Parag Jain, PeterLee, Philipp Jund, Rach L, Rafal Wojdyla, Roland Zimmermann, Sergei Lebedev, SneakyFish5, Soila Kavulya, Sriram Veturi, Steven Schmatz, Taehoon Lee, Tang, Wenyi, Taras Sereda, Ted Chang, Tim Zaman, Tristan Rice, tucan, vchigrin, Vikram Tiwari, Vincent, WeberXie, William D. Irons, Yan Facai (颜发才), Yong Tang, Yu Yi, Yuxin Wu, Zé ViníCius
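The `drop_remainder` argument added to `batch()` and `padded_batch()` in this release controls what happens when the input size is not a multiple of the batch size. A pure-Python sketch of the semantics (illustrative only, not the tf.data implementation):

```python
def batch(elements, batch_size, drop_remainder=False):
    # Group consecutive elements into batches, like Dataset.batch().
    batches = [elements[i:i + batch_size]
               for i in range(0, len(elements), batch_size)]
    # With drop_remainder=True, a final short batch is discarded, which
    # guarantees every emitted batch has exactly batch_size elements
    # (useful when a static batch dimension is required).
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

data = list(range(10))
print(batch(data, 4))                       # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
print(batch(data, 4, drop_remainder=True))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```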
* `tf.keras`: New Keras-based get started, and programmers guide page.
* Update `tf.keras` to the Keras 2.1.6 API.
* Added `tf.keras.layers.CuDNNGRU` and `tf.keras.layers.CuDNNLSTM` layers. Try it.
* The TFLite converter command-line tools (`toco`, `tflite_convert`) are once again included in the standard `pip` installation.
* Breaking change: if you have been opening empty variable scopes, replace `variable_scope('', ...)` by `variable_scope(tf.get_variable_scope(), ...)`.
* `tfe.Network` is deprecated. Please inherit from `tf.keras.Model`.
* Layered variable names have changed when using `tf.keras.layers` with custom variable scopes, or when using `tf.layers` in a subclassed `tf.keras.Model` class. See here for more details.
* `tf.data`:
  * `Dataset.from_generator()` now accepts an `args` list, in order to create nested generators.
  * `Dataset.list_files()` now produces deterministic results when `shuffle=False` or a `seed` is passed.
  * `tf.contrib.data.sample_from_datasets()` and `tf.contrib.data.choose_from_datasets()` make it easier to sample or deterministically choose elements from multiple datasets.
  * `tf.contrib.data.make_csv_dataset()` now supports line breaks in quoted strings, and two infrequently used arguments have been removed.
  * `DatasetBase::DebugString()` is now `const`.
  * `DatasetBase::MakeIterator()` has been renamed to `DatasetBase::MakeIteratorInternal()`.
  * An `IteratorBase::Initialize()` method was added to support raising errors during iterator construction.
* Added the ability to pause recording operations for gradient computation via `tf.GradientTape.stop_recording`.
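The `args` list guards against a classic Python pitfall: generators or closures created in a loop capture loop variables late, so every instance sees the final value. A minimal pure-Python sketch of the pitfall (no TensorFlow required; `Dataset.from_generator(gen, ..., args=(n,))` instead passes the value explicitly to each nested generator):

```python
# Late binding: each closure reads `i` when called, after the loop ended.
late = [lambda: list(range(i)) for i in range(1, 4)]
assert [f() for f in late] == [[0, 1, 2], [0, 1, 2], [0, 1, 2]]

# Binding the value as an argument freezes it per closure, which is the
# behavior that explicit argument passing (as with `args`) gives you.
bound = [lambda n=i: list(range(n)) for i in range(1, 4)]
assert [f() for f in bound] == [[0], [0, 1], [0, 1, 2]]
```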
* `tf.keras`:
  * `tf.keras.Model.save_weights` now saves in TensorFlow format by default.
  * Dataset iterators can now be passed to `tf.keras.Model` training/eval methods.
* `tf.contrib`:
  * `tf.contrib.framework.zero_initializer` supports ResourceVariable.
* Wrapped `MakeIterator` to enable propagating error status.
* Fixed the `tf.reduce_prod` gradient for complex dtypes.
* Repeated ids are now deduplicated in `nn.embedding_lookup_sparse`. This helps to reduce RPC calls for looking up the embeddings when there are repeated ids in the batch.
* Prevented `tf.gradients()` from backpropagating through integer tensors.
* Additions to `tensorflow.linalg`.
* Added `tf.train.Checkpoint` for reading/writing object-based checkpoints.

This release contains contributions from many people at Google, as well as:
Abdullah Alrasheed, Achal Shah, Ad-530, ADiegoCAlonso, Aditya Yogi, Ag Ramesh, akindyakov, Andy Kernahan, Anya Petrova, Aurelien Geron, Ben, Ben Barsdell, Bhavani-Subramanian, braincodercn, Brett Koonce, Brian Nemsick, Brian Zier, Bryan Heden, candy.dc, cclauss, Clayne Robison, ctiijima, Dalmo Cirne, David Norman, David T.H. Kao, DosLin, ekelsen, Elson Rodriguez, Erik Smistad, Felix Abecassis, Fergal Cotter, fo40225, foo0x29a, Freedom" Koan-Sin Tan, FréDéRic Branchaud-Charron, gdh1995, Geoffrey Irving, Giuseppe, gracehoney, Guido Zuidhof, Guillaume Klein, Guozhong Zhuang, Haggai, Harald Husum, imsheridan, Ivan Zhang, Jan Zikes, Jayaram Bobba, Jesse Benson, Jesse Gumz, Jiajia Li, Jie, jinghuangintel, Jingwen, jjsjann123, Joe Yearsley, Joel Hestness, Joel Shor, josephyearsley, Junpeng Lao, Karol M. Langner, Kb Sriram, krantideep95, Krish Ravindranath, Letian Feng, Loo Rong Jie, Lukas Geiger, Maciej, Mahmoud Abuzaina, ManHyuk, Mark Ryan, mbhuiyan, Michal Turek, Mostafa Alaa, Myungsung Kwak, Nand Dalal, Nehal J Wani, Neil Tenenholtz, ngc92, Nicholas Nadeau, P.Eng., Avs, Niranjan Hasabnis, P-Hidringer, Paul Van Eck, Peng Yu, Qing Zhao, Qingying Chen, Quanlong, Rajendra Arora, Rholais Lii, rmanyari, Robin Richtsfeld, Russell Klopfer, Sagi, Sam Sendelbach, Sandeep N Gupta, Sandip Giri, Sarah Edkins, Scott Tseng, Sdalbsoo, Sergii Khomenko, Seungwoo Choi (Biggie), Seyed Majid Azimi, Shaoning Zeng, shengfuintel, Siu Kei, Muk, Smit Shilu, soonson, Stefan Schweter, Sukhwan Kim, Sunitha Kambhampati, Taehoon Lee, tamimaddari82, Tang, Wenyi, Ted Chang, u2takey, Utkarsh Upadhyay, Vadim Markovtsev, voegtlel, Wai Hon Law, wangsiyu, Wenhao Hu, wenhao.hu, William D. Irons, Yan Facai (颜发才), Yanbo Liang, Yihong Wang, Yilei (Dolee) Yang, Yong Tang, Yuan (Terry) Tang
* Can now pass `tf.contrib.distribute.MirroredStrategy()` to `tf.estimator.RunConfig()` to run an Estimator model on multiple GPUs on one machine.
* Added `tf.contrib.data.prefetch_to_device()`, which supports prefetching to GPU memory.
* `tf.contrib.bayesflow` is moving out to its own repo.
* Added `tf.contrib.{proto,rpc}` to allow generic proto parsing and RPC communication[1].
* `tf.data`:
  * Added `tf.contrib.data.prefetch_to_device`, which enables prefetching dataset elements to GPU memory.
  * Added `tf.contrib.data.AUTOTUNE`, which allows the tf.data runtime to automatically tune the prefetch buffer sizes based on your system and environment.
  * Added `tf.contrib.data.make_csv_dataset` for building datasets of CSV files.
* Eager execution:
  * Datasets can now be used as standard Python iterators (`for batch in dataset:`). Both `Dataset.__iter__()` and `Dataset.make_one_shot_iterator()` can now be used to create iterators when eager execution is enabled.
  * Automatic device placement has been enabled, so a GPU is used if available without requiring an explicit `with tf.device("/gpu:0")` (Fixes #14133).
  * `tf.GradientTape` has moved out of contrib.
* `tf.keras`:
  * New data preprocessing functions: `image/random_brightness`, `sequence/TimeseriesGenerator`, and `text/hashing_trick`.
* `tf.contrib`:
  * `tf.contrib.layers.recompute_grad` works for explicit gradient checkpointing on TPU.
  * Added `tf.contrib.framework.argsort`.
  * Allow `DNNBoostedTreeCombinedEstimator` to work with core versions of feature columns and losses.
  * Added non-linear image warping ops: `tf.contrib.image.sparse_image_warp`, `tf.contrib.image.dense_image_warp`, and `tf.contrib.image.interpolate_spline`.
  * Fixed a bug in `tf.contrib.opt.MultitaskOptimizerWrapper` where types of tensors were mismatched.
* Graph construction now uses the C API by default; it can be disabled by setting the environment variable `TF_C_API_GRAPH_CONSTRUCTION=0` in this release. Future releases will remove the ability to disable this change. Please file a bug if you find yourself using this escape hatch.
* Updates to `tf.distributions.Distribution`.
* Added `tf.scatter_min` and `tf.scatter_max`.
* Added `float64` support for `Conv2d`, `Conv2dBackpropInput`, and `Conv2dBackpropFilter`.
* Added `float64` support for `AvgPool`/`AvgPoolGrad`.
* Added `tf.image.psnr`, `tf.image.ssim`, `tf.image.ssim_multiscale`, `tf.image.image_gradients`, `tf.image.sobel_edges`.

[1] The cancellation logic of the RPC op contains a concurrency error. A fix has been submitted to master and will be part of the next release.
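For intuition, `tf.image.psnr` implements the standard peak signal-to-noise ratio, `10 * log10(MAX^2 / MSE)`. A pure-Python sketch over flat pixel lists (the real op works on image tensors; this is just the definition):

```python
import math

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel lists."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return 10.0 * math.log10(max_val ** 2 / mse)

# Every pixel off by exactly 1 gives MSE = 1, so PSNR = 20 * log10(255).
score = psnr([100, 120, 140, 160], [101, 119, 141, 159])
assert abs(score - 20 * math.log10(255)) < 1e-9
```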
This release contains contributions from many people at Google, as well as:
4d55397500, Aghasy, Alan Du, Alan Lee, Alan Yee, Alex Wiltschko, Animesh Karnewar, Ankit Gupta, Anton Matosov, Aris L, Ben Barsdell, Brent Yi, Brett Koonce, Carl Thomé, cbockman, Chikanaga Tomoyuki, Chris Tava, CéDric Deltheil, Dahan Gong, Dalmo Cirne, Daniel Erenrich, David Norman, DavidNorman, Edd Wilder-James, Fanjin Zeng, Felix Abecassis, fo40225, George Sterpu, Giovanni Terlingen, Gor Baghdasaryan, Guillaume Klein, Hanchen Li, Ilya Polenov, Jakub Kolodziejczyk, Jason Sadler, Jayaram Bobba, Jerry Liu, jinghuangintel, Jiongyan Zhang (张炯衍), Joel Shor, Jong Wook Kim, Julian Eisenschlos, Karl Lessard, Krish Ravindranath, Loo Rong Jie, Lukas Geiger, Luke Iwanski, Mahmoud Abuzaina, ManHyuk, Marvin Richter, Maximilian Mitchell, Mohammad Ashraf Bhuiyan, msofka, Mustafa Kasap, Nathan Burnham, Nathan Luehr, Naveen Marri, ngc92, nio1814, Oleg Zabluda, Ou Changkun, Panos Ipeirotis, Paul Van Eck, Peter Lee, Piotr Czapla, qjivy, Rholais Lii, Rodrigo Formigone, Russell Klopfer, ryantimjohn, Sang Han, SebastiáN RamíRez, shengfuintel, Siby Jose Plathottam, Silver Chan, Stanislaw Antol, Taehoon Lee, Tarang Chugh, Ted Chang, Thomas Bastiani, Xian Xu, Xiaoming (Jason) Cui, Yan Facai (颜发才), yaox12, Yashal Shakti Kanungo, Yong Tang, Yuan (Terry) Tang, Yuxin Wu, Ziyue(Louis) Lu
* Eager mode is moving out of contrib; try `tf.enable_eager_execution()`.
* Graph rewrites emulating fixed-point quantization, supported by the new `tf.contrib.quantize` package.
* Easily customize gradient computation with `tf.custom_gradient`.
* Experimental support for reading a SQLite database as a `Dataset` with the new `tf.contrib.data.SqlDataset`.
* Added `tf.contrib.framework.CriticalSection`.
* Better text processing with `tf.regex_replace`.
* Sequence input pipelines with `tf.contrib.data.bucket_by_sequence_length`.
* Initial release of `tf.contrib.tensorrt` that enables native TensorRT in TensorFlow.
* Added `MaxPoolGradGrad` support for XLA.
* `tf.data`:
  * Custom `tf.data.Dataset` implementations can now be loaded via the `tf.load_op_library()` mechanism.
  * `Dataset.list_files()` now shuffles its output by default.
  * `Dataset.shuffle(..., seed=tf.constant(0, dtype=tf.int64))` now yields the same sequence of elements as `Dataset.shuffle(..., seed=0)`.
  * Added a `num_parallel_reads` argument to `tf.data.TFRecordDataset`.
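The guarantee behind the `Dataset.shuffle` note above is plain seeded-PRNG determinism: equal seed values, however represented, must produce the identical ordering. A pure-Python analogy (standard library only, not the tf.data implementation):

```python
import random

def shuffled(seq, seed):
    """Shuffle a copy of `seq` with a seeded PRNG, leaving `seq` untouched."""
    rng = random.Random(seed)
    out = list(seq)
    rng.shuffle(out)
    return out

# Equal seeds give identical orderings, and the result is a permutation.
assert shuffled(range(10), seed=0) == shuffled(range(10), seed=0)
assert sorted(shuffled(range(10), seed=0)) == list(range(10))
```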
* `tf.contrib`:
  * `tf.contrib.bayesflow.halton_sequence` now supports randomization.
  * Added `tf.contrib.all_reduce`.
  * Added `effective_sample_size` to `tf.contrib.bayesflow.mcmc_diagnostics`.
  * Added `potential_scale_reduction` to `tf.contrib.bayesflow.mcmc_diagnostics`.
  * Added `BatchNormalization` and `Kumaraswamy` bijectors.
  * Deprecated `tf.contrib.learn`. Please check contrib/learn/README.md for instructions on how to convert existing code.
  * `tf.contrib.data`:
    * Removed the deprecated `tf.contrib.data.Dataset`, `tf.contrib.data.Iterator`, `tf.contrib.data.FixedLengthRecordDataset`, `tf.contrib.data.TextLineDataset`, and `tf.contrib.data.TFRecordDataset` classes.
    * Added `bucket_by_sequence_length`, `sliding_window_batch`, and `make_batched_features_dataset`.
  * Removed `tf.contrib.ndlstm`. You can find it externally at https://github.com/tmbarchive/tfndlstm.
  * Moved most of `tf.contrib.bayesflow` to its own repo: `tfp`.
* `TPUClusterResolver` now works with GKE's integration for Cloud TPUs.
* Fixed the `MomentumOptimizer` lambda.
* Reduced `tfp.layers` boilerplate via programmable docstrings.
* Added `auc_with_confidence_intervals`, a method for computing the AUC and confidence interval with linearithmic time complexity.
* `regression_head` now accepts a customized link function, so users can define their own link function when `array_ops.identity` does not meet their needs.
* Fixed `initialized_value` and `initial_value` behaviors for `ResourceVariables` created from `VariableDef` protos.
* Added support for the `float16` `dtype` in `tf.linalg.*`.
* Added `tf.estimator.export.TensorServingInputReceiver`, which allows `tf.estimator.Estimator.export_savedmodel` to pass raw tensors to model functions.

This release contains contributions from many people at Google, as well as:
4d55397500, Abe, Alistair Low, Andy Kernahan, Appledore, Ben, Ben Barsdell, Boris Pfahringer, Brad Wannow, Brett Koonce, Carl Thomé, cclauss, Chengzhi Chen, Chris Drake, Christopher Yeh, Clayne Robison, Codrut Grosu, Daniel Trebbien, Danny Goodman, David Goodwin, David Norman, Deron Eriksson, Donggeon Lim, Donny Viszneki, DosLin, DylanDmitri, Francisco Guerrero, Fred Reiss, gdh1995, Giuseppe, Glenn Weidner, gracehoney, Guozhong Zhuang, Haichen "Hc" Li, Harald Husum, harumitsu.nobuta, Henry Spivey, hsm207, Jekyll Song, Jerome, Jiongyan Zhang, jjsjann123, John Sungjin Park, Johnson145, JoshVarty, Julian Wolff, Jun Wang, June-One, Kamil Sindi, Kb Sriram, Kdavis-Mozilla, Kenji, lazypanda1, Liang-Chi Hsieh, Loo Rong Jie, Mahesh Bhosale, MandarJKulkarni, ManHyuk, Marcus Ong, Marshal Hayes, Martin Pool, matthieudelaro, mdfaijul, mholzel, Michael Zhou, Ming Li, Minmin Sun, Myungjoo Ham, MyungsungKwak, Naman Kamra, Peng Yu, Penghao Cen, Phil, Raghuraman-K, resec, Rohin Mohanadas, Sandeep N Gupta, Scott Tseng, seaotterman, Seo Sanghyeon, Sergei Lebedev, Ted Chang, terrytangyuan, Tim H, tkunic, Tod, vihanjain, Yan Facai (颜发才), Yin Li, Yong Tang, Yukun Chen, Yusuke Yamada
* `tf.estimator.{FinalExporter,LatestExporter}` now export stripped SavedModels. This improves forward compatibility of the SavedModel.
* Updates to the `resize_images.align_corners` parameter.
* Added a `FlushCaches()` method to the FileSystem interface, with an implementation for GcsFileSystem.
* Added `tf.contrib.distributions.Kumaraswamy`.
* `RetryingFileSystem::FlushCaches()` calls the base FileSystem's `FlushCaches()`.
* Added `auto_correlation` to distributions.
* Added `tf.contrib.distributions.Autoregressive`.
* If both inputs of `tf.matmul` are bfloat16, it returns bfloat16, instead of float32.
* Added `tf.contrib.image.connected_components`.
* Added `tf.contrib.framework.CriticalSection`, which allows atomic variable access.
* In the `pt` and `eval` commands, allow writing tensor values to the filesystem as numpy files.
* Extended `parallel_interleave` to support 2 kinds of prefetching.
* Added a `prepare_variance` boolean with default setting to False for backward compatibility.
* Renamed `layers_dense_variational_impl.py` to `layers_dense_variational.py`.

Known issue: Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or `CUDA_ILLEGAL_ADDRESS` failures.

Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA 9 and CUDA 9.1 sometimes does not properly compute the carry bit when decomposing 64-bit address calculations with large offsets (e.g. `load [x + large_constant]`) into 32-bit arithmetic in SASS.

As a result, these versions of `ptxas` miscompile most XLA programs which use more than 4GB of temp memory. This results in garbage results and/or `CUDA_ERROR_ILLEGAL_ADDRESS` failures.

A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a fix for CUDA 9.0.x. Until the fix is available, the only workaround is to downgrade to CUDA 8.0.x or disable XLA:GPU.

TensorFlow will print a warning if you use XLA:GPU with a known-bad version of CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122.
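The carry-bit failure is easiest to see by doing the decomposition by hand: a 64-bit add split into two 32-bit halves is only correct if the carry out of the low half is propagated into the high half. A pure-Python sketch (a hypothetical helper illustrating the arithmetic, not the actual ptxas logic):

```python
MASK32 = (1 << 32) - 1

def add64_via_32bit(x, y):
    """Decompose a 64-bit add into two 32-bit adds, propagating the carry."""
    lo = (x & MASK32) + (y & MASK32)
    carry = lo >> 32
    hi = ((x >> 32) + (y >> 32) + carry) & MASK32
    return (hi << 32) | (lo & MASK32)

# A base plus a large offset whose low words overflow: dropping `carry`
# (the miscompile described above) would make the result wrong by 1 << 32.
base, offset = 0xFFFF_F000, 0x1_0000
assert add64_via_32bit(base, offset) == base + offset
assert add64_via_32bit(3, 4) == 7  # no-carry case
```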
This release contains contributions from many people at Google, as well as:
4d55397500, Ag Ramesh, Aiden Scandella, Akimasa Kimura, Alex Rothberg, Allen Goodman, amilioto, Andrei Costinescu, Andrei Nigmatulin, Anjum Sayed, Anthony Platanios, Anush Elangovan, Armando Fandango, Ashish Kumar Ram, Ashwini Shukla, Ben, Bhavani Subramanian, Brett Koonce, Carl Thomé, cclauss, Cesc, Changming Sun, Christoph Boeddeker, Clayne Robison, Clemens Schulz, Clint (Woonhyuk Baek), codrut3, Cole Gerdemann, Colin Raffel, Daniel Trebbien, Daniel Ylitalo, Daniel Zhang, Daniyar, Darjan Salaj, Dave Maclachlan, David Norman, Dong--Jian, dongsamb, dssgsra, Edward H, eladweiss, elilienstein, Eric Lilienstein, error.d, Eunji Jeong, fanlu, Florian Courtial, fo40225, Fred, Gregg Helt, Guozhong Zhuang, Hanchen Li, hsm207, hyunyoung2, ImSheridan, Ishant Mrinal Haloi, Jacky Ko, Jay Young, Jean Flaherty, Jerome, JerrikEph, Jesse Kinkead, jfaath, Jian Lin, jinghuangintel, Jiongyan Zhang, Joel Hestness, Joel Shor, Johnny Chan, Julian Niedermeier, Julian Wolff, JxKing, K-W-W, Karl Lessard, Kasper Marstal, Keiji Ariyama, Koan-Sin Tan, Loki Der Quaeler, Loo Rong Jie, Luke Schaefer, Lynn Jackson, ManHyuk, Matt Basta, Matt Smith, Matthew Schulkind, Michael, michaelkhan3, Miguel Piedrafita, Mikalai Drabovich, Mike Knapp, mjwen, mktozk, Mohamed Aly, Mohammad Ashraf Bhuiyan, Myungjoo Ham, Naman Bhalla, Namrata-Ibm, Nathan Luehr, nathansilberman, Netzeband, Niranjan Hasabnis, Omar Aflak, Ozge Yalcinkaya, Parth P Panchal, patrickzzy, Patryk Chrabaszcz, Paul Van Eck, Paweł Kapica, Peng Yu, Philip Yang, Pierre Blondeau, Po-Hsien Chu, powderluv, Puyu Wang, Rajendra Arora, Rasmus, Renat Idrisov, resec, Robin Richtsfeld, Ronald Eddy Jr, Sahil Singh, Sam Matzek, Sami Kama, sandipmgiri, Santiago Castro, Sayed Hadi Hashemi, Scott Tseng, Sergii Khomenko, Shahid, Shengpeng Liu, Shreyash Sharma, Shrinidhi Kl, Simone Cirillo, simsicon, Stanislav Levental, starsblinking, Stephen Lumenta, Steven Hickson, Su Tang, Taehoon Lee, Takuya Wakisaka, Ted Chang, Ted Ying, Tijmen Verhulsdonck, Timofey 
Kondrashov, vade, vaibhav, Valentin Khrulkov, vchigrin, Victor Costan, Viraj Navkal, Vivek Rane, wagonhelm, Yan Facai (颜发才), Yanbo Liang, Yaroslav Bulatov, yegord, Yong Tang, Yoni Tsafir, yordun, Yuan (Terry) Tang, Yuxin Wu, zhengdi, Zhengsheng Wei, 田传武
* Added `complex64` support to the XLA compiler.
* `bfloat` support is now added to XLA infrastructure.
* Made `ClusterSpec` propagation work with XLA devices.
* `tf.contrib`:
  * `tf.contrib.distributions`:
    * Added `tf.contrib.distributions.Autoregressive`.
    * Made `tf.contrib.distributions` QuadratureCompound classes support batch.
    * Infer `tf.contrib.distributions.RelaxedOneHotCategorical` `dtype` from arguments.
    * Made the `tf.contrib.distributions` quadrature family parameterized by `quadrature_grid_and_prob` vs `quadrature_degree`.
    * `auto_correlation` added to `tf.contrib.distributions`.
  * Added `tf.contrib.bayesflow.layers`, a collection of probabilistic (neural) layers.
  * Added `tf.contrib.bayesflow.halton_sequence`.
  * Added `tf.contrib.data.make_saveable_from_iterator`.
  * Added `tf.contrib.data.shuffle_and_repeat`.
  * Added new custom transformation: `tf.contrib.data.scan()`.
  * `tf.contrib.distributions.bijectors`:
    * Added `tf.contrib.distributions.bijectors.MaskedAutoregressiveFlow`.
    * Added `tf.contrib.distributions.bijectors.Permute`.
    * Added `tf.contrib.distributions.bijectors.Gumbel`.
    * Added `tf.contrib.distributions.bijectors.Reshape`.
  * Added `streaming_precision_recall_at_equal_thresholds`, a method for computing streaming precision and recall with `O(num_thresholds + size of predictions)` time and space complexity.
* Changed `RunConfig` default behavior to not set a random seed, making random behavior independently random on distributed workers. We expect this to generally improve training performance. Models that do rely on determinism should set a random seed explicitly.
* Replaced the implementation of `tf.flags` with `absl.flags`.
* Added support for `CUBLAS_TENSOR_OP_MATH` in fp16 GEMM.
* Changes to how `Estimator`s save checkpoints.
* Improvements to the `tf2xla` bridge.
* Added support for `SpaceToDepth` and `DepthToSpace`.
* Fixed documentation in `mfcc_mel_filterbank.h` and `mfcc.h` to clarify that the input domain is squared magnitude spectra and the weighting is done on linear magnitude spectra (sqrt of inputs).
* Changed `tf.contrib.distributions` docstring examples to use the `tfd` alias rather than `ds`, `bs`.
* Updates to `tf.distributions.bijectors.Bijector`.
* `tf.assert_equal` no longer raises `ValueError`. It now raises `InvalidArgumentError`, as documented.
* Fixed `import_meta_graph`'s handling of partitioned variables when importing into a scope. Before this change, all partitions of an integer variable were initialized with the shape of the unpartitioned variable; after this change they are initialized correctly. WARNING: This may break loading checkpoints of graphs with partitioned variables saved after using `import_meta_graph` with a non-empty `import_scope` argument.
* Added the `WorkerService.DeleteWorkerSession` method to the gRPC interface, to fix a memory leak. Ensure that your master and worker servers are running the same version of TensorFlow to avoid compatibility issues.
* Fixed `log_det_jacobian` to match `log_prob` in `TransformedDistribution`.
* Ensure `tf.distributions.Multinomial` doesn't underflow in `log_prob`.
* Added the `DenseFlipout` probabilistic layer.
* A new flag `ignore_live_threads` is available on train. If set to `True`, it will ignore threads that remain running when tearing down infrastructure after successfully completing training, instead of throwing a RuntimeError.
* Restructured `DenseVariational` as a simpler template for other probabilistic layers.
* `tf.data` now supports `tf.SparseTensor` components in dataset elements.
* It is now possible to iterate over `Tensor`s.
* Allow `SparseSegmentReduction` ops to have missing segment IDs.
* `Conv2D`, `Conv2DBackpropInput`, and `Conv2DBackpropFilter` now support arbitrary dilations with GPU and cuDNNv6 support.
* `Estimator` now supports `Dataset`: `input_fn` can return a `Dataset` instead of `Tensor`s.
* Added `RevBlock`, a memory-efficient implementation of reversible residual layers.
* Added `cross_entropy` and `kl_divergence` to `tf.distributions.Distribution`.
* Added `tf.nn.softmax_cross_entropy_with_logits_v2`, which enables backprop w.r.t. the labels.
* The GPU backend now uses `ptxas` to compile generated PTX.
* `BufferAssignment`'s protocol buffer dump is now deterministic.
* Improvements to `DynamicStitch`.
* Added `quantile` to `tf.distributions.TransformedDistribution`.
* Added `NCHW_VECT_C` support for `tf.depth_to_space` on GPU.
* Added `NCHW_VECT_C` support for `tf.space_to_depth` on GPU.
* Renamed the `SqueezeDims` attribute to `Axis` in the C++ API for the Squeeze op.
* `Stream::BlockHostUntilDone` now returns Status rather than bool.
* Moved stats files from `stochastic` to `common` and removed `stochastic`.
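The point of `softmax_cross_entropy_with_logits_v2` above is that the loss is also differentiated with respect to the labels; since the loss is linear in the labels, that gradient is simply `-log_softmax(logits)`. A pure-Python check using a central finite difference in place of autodiff (no TensorFlow required):

```python
import math

def log_softmax(logits):
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def xent(labels, logits):
    """Softmax cross-entropy with soft labels."""
    return -sum(l * ls for l, ls in zip(labels, log_softmax(logits)))

labels, logits = [0.7, 0.2, 0.1], [2.0, 1.0, 0.1]
eps = 1e-6
for i in range(len(labels)):
    up = list(labels); up[i] += eps
    dn = list(labels); dn[i] -= eps
    fd = (xent(up, logits) - xent(dn, logits)) / (2 * eps)
    # Analytic gradient w.r.t. labels[i] is -log_softmax(logits)[i].
    assert abs(fd - (-log_softmax(logits)[i])) < 1e-4
```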
Known issue: Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or `CUDA_ILLEGAL_ADDRESS` failures.

Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA 9 and CUDA 9.1 sometimes does not properly compute the carry bit when decomposing 64-bit address calculations with large offsets (e.g. `load [x + large_constant]`) into 32-bit arithmetic in SASS.

As a result, these versions of `ptxas` miscompile most XLA programs which use more than 4GB of temp memory. This results in garbage results and/or `CUDA_ERROR_ILLEGAL_ADDRESS` failures.

A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a fix for CUDA 9.0.x. Until the fix is available, the only workaround is to downgrade to CUDA 8.0.x or disable XLA:GPU.

TensorFlow will print a warning if you use XLA:GPU with a known-bad version of CUDA; see e00ba24c4038e7644da417ddc639169b6ea59122.
This release contains contributions from many people at Google, as well as:
Adam Zahran, Ag Ramesh, Alan Lee, Alan Yee, Alex Sergeev, Alexander, Amir H. Jadidinejad, Amy, Anastasios Doumoulakis, Andrei Costinescu, Andrei Nigmatulin, Anthony Platanios, Anush Elangovan, arixlin, Armen Donigian, ArtëM Sobolev, Atlas7, Ben Barsdell, Bill Prin, Bo Wang, Brett Koonce, Cameron Thomas, Carl Thomé, Cem Eteke, cglewis, Changming Sun, Charles Shenton, Chi-Hung, Chris Donahue, Chris Filo Gorgolewski, Chris Hoyean Song, Chris Tava, Christian Grail, Christoph Boeddeker, cinqS, Clayne Robison, codrut3, concerttttt, CQY, Dan Becker, Dan Jarvis, Daniel Zhang, David Norman, dmaclach, Dmitry Trifonov, Donggeon Lim, dongpilYu, Dr. Kashif Rasul, Edd Wilder-James, Eric Lv, fcharras, Felix Abecassis, FirefoxMetzger, formath, FredZhang, Gaojin Cao, Gary Deer, Guenther Schmuelling, Hanchen Li, Hanmin Qin, hannesa2, hyunyoung2, Ilya Edrenkin, Jackson Kontny, Jan, Javier Luraschi, Jay Young, Jayaram Bobba, Jeff, Jeff Carpenter, Jeremy Sharpe, Jeroen BéDorf, Jimmy Jia, Jinze Bai, Jiongyan Zhang, Joe Castagneri, Johan Ju, Josh Varty, Julian Niedermeier, JxKing, Karl Lessard, Kb Sriram, Keven Wang, Koan-Sin Tan, Kyle Mills, lanhin, LevineHuang, Loki Der Quaeler, Loo Rong Jie, Luke Iwanski, LáSzló Csomor, Mahdi Abavisani, Mahmoud Abuzaina, ManHyuk, Marek ŠUppa, MathSquared, Mats Linander, Matt Wytock, Matthew Daley, Maximilian Bachl, mdymczyk, melvyniandrag, Michael Case, Mike Traynor, miqlas, Namrata-Ibm, Nathan Luehr, Nathan Van Doorn, Noa Ezra, Nolan Liu, Oleg Zabluda, opensourcemattress, Ouwen Huang, Paul Van Eck, peisong, Peng Yu, PinkySan, pks, powderluv, Qiao Hai-Jun, Qiao Longfei, Rajendra Arora, Ralph Tang, resec, Robin Richtsfeld, Rohan Varma, Ryohei Kuroki, SaintNazaire, Samuel He, Sandeep Dcunha, sandipmgiri, Sang Han, scott, Scott Mudge, Se-Won Kim, Simon Perkins, Simone Cirillo, Steffen Schmitz, Suvojit Manna, Sylvus, Taehoon Lee, Ted Chang, Thomas Deegan, Till Hoffmann, Tim, Toni Kunic, Toon Verstraelen, Tristan Rice, Urs KöSter, Utkarsh Upadhyay, Vish 
(Ishaya) Abrams, Winnie Tsang, Yan Chen, Yan Facai (颜发才), Yi Yang, Yong Tang, Youssef Hesham, Yuan (Terry) Tang, Zhengsheng Wei, zxcqwe4906, 张志豪, 田传武
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
* `LinearClassifier` fix.
* `tf.keras` is now part of the core TensorFlow API.
* `tf.data` is now part of the core TensorFlow API.
  * For a guide to migrating from the `tf.contrib.data` API, see the README.
  * Major new additions include `Dataset.from_generator()` (for building an input pipeline from a Python generator), and the `Dataset.apply()` method for applying custom transformation functions.
  * Added `tf.contrib.data.batch_and_drop_remainder()` and `tf.contrib.data.sloppy_interleave()`.
* Added `train_and_evaluate` for simple distributed `Estimator` training.
* Added `tf.spectral.dct` for computing the DCT-II.
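For reference, the type-II DCT in the common unnormalized convention (assumed here) is `y[k] = 2 * sum_j x[j] * cos(pi * k * (2j + 1) / (2n))`. A pure-Python sketch of that definition, not the TensorFlow kernel:

```python
import math

def dct2(x):
    """Unnormalized type-II DCT of a list of floats."""
    n = len(x)
    return [2.0 * sum(x[j] * math.cos(math.pi * k * (2 * j + 1) / (2 * n))
                      for j in range(n))
            for k in range(n)]

# The k = 0 coefficient is twice the plain sum in this convention.
out = dct2([1.0, 2.0, 3.0, 4.0])
assert abs(out[0] - 20.0) < 1e-9
```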
* Added Mel-Frequency Cepstral Coefficient support to `tf.contrib.signal` (with GPU and gradient support).
* Added a self-check on `import tensorflow` for Windows DLL issues.
* Added NCHW support to `tf.depth_to_space` on GPU.
* tfdbg: Added the `eval` command to allow evaluation of arbitrary Python/numpy expressions in the tfdbg command-line interface. See Debugging TensorFlow Programs for more details.
* tfdbg: The frequently used tensor filter `has_inf_or_nan` is now added to `Session` wrappers and hooks by default, so there is no need for clients to call `.add_tensor_filter(tf_debug.has_inf_or_nan)` anymore.
* Additions to `contrib.distributions`.
* Made `GANEstimator` opensource.
* `Estimator.export_savedmodel()` now includes all valid serving signatures that can be constructed from the Serving Input Receiver and all available ExportOutputs. For instance, a classifier may provide regression- and prediction-flavored outputs, in addition to the classification-flavored one. Building signatures from these allows TF Serving to honor requests using the different APIs (Classify, Regress, and Predict). Furthermore, `serving_input_receiver_fn()` may now specify alternative subsets of nodes that may act as inputs. This allows, for instance, producing a prediction signature for a classifier that accepts raw `Tensors` instead of a serialized `tf.Example`.
* Added `tf.contrib.bayesflow.hmc`.
* Added `tf.contrib.distributions.MixtureSameFamily`.
* `Dataset.shuffle()` always reshuffles after each iteration by default.
* Added `tf.contrib.bayesflow.metropolis_hastings`.
* Added a `log_rate` parameter to `tf.contrib.distributions.Poisson`.
* Extended the `tf.contrib.distributions.bijector` API to handle some non-injective transforms.
* Java: Generics (e.g., `Tensor<Integer>`) for improved type-safety (courtesy @andrewcmyers).
* Changes in `tf.contrib` on Linux and OS X.
* `tf.nn.rnn_cell.DropoutWrapper` is now more careful about dropping out LSTM states. Specifically, it no longer ever drops the `c` (memory) state of an `LSTMStateTuple`. The new behavior leads to proper dropout behavior for LSTMs and stacked LSTMs. This bug fix follows recommendations from published literature, but is a behavioral change. State dropout behavior may be customized via the new `dropout_state_filter_visitor` argument.
* Removed `tf.contrib.training.python_input`. The same behavior, in a more flexible and reproducible package, is available via the new `tf.contrib.data.Dataset.from_generator` method!
* Fixed `tf.contrib.distributions.Affine` incorrectly computing log-det-jacobian.
* Fixed `tf.random_gamma` incorrectly handling non-batch, scalar draws.
* The library location is exposed via `tf.sysconfig.get_lib()`.
* Changed `RunConfig` default behavior to not set a random seed, making random behavior independently random on distributed workers. We expect this to generally improve training performance. Models that do rely on determinism should set a random seed explicitly.
* The signature of the `tf.contrib.data.rejection_resample()` function has been changed. It now returns a function that can be used as an argument to `Dataset.apply()`.
* Removed the `tf.contrib.data.Iterator.from_dataset()` method. Use `Dataset.make_initializable_iterator()` instead.
* Removed `tf.contrib.data.Iterator.dispose_op()`.
* Known issue: `Dataset.from_generator()` does not support Unicode strings. You must convert any strings to bytes objects before yielding them from the generator.

This release contains contributions from many people at Google, as well as:
4d55397500, Abdullah Alrasheed, abenmao, Adam Salvail, Aditya Dhulipala, Ag Ramesh, Akimasa Kimura, Alan Du, Alan Yee, Alexander, Amit Kushwaha, Amy, Andrei Costinescu, Andrei Nigmatulin, Andrew Erlichson, Andrew Myers, Andrew Stepanov, Androbin, AngryPowman, Anish Shah, Anton Daitche, Artsiom Chapialiou, asdf2014, Aseem Raj Baranwal, Ash Hall, Bart Kiers, Batchu Venkat Vishal, ben, Ben Barsdell, Bill Piel, Carl Thomé, Catalin Voss, Changming Sun, Chengzhi Chen, Chi Zeng, Chris Antaki, Chris Donahue, Chris Oelmueller, Chris Tava, Clayne Robison, Codrut, Courtial Florian, Dalmo Cirne, Dan J, Darren Garvey, David Kristoffersson, David Norman, David RöThlisberger, DavidNorman, Dhruv, DimanNe, Dorokhov, Duncan Mac-Vicar P, EdwardDixon, EMCP, error.d, FAIJUL, Fan Xia, Francois Xavier, Fred Reiss, Freedom" Koan-Sin Tan, Fritz Obermeyer, Gao, Xiang, Guenther Schmuelling, Guo Yejun (郭叶军), Hans Gaiser, HectorSVC, Hyungsuk Yoon, James Pruegsanusak, Jay Young, Jean Wanka, Jeff Carpenter, Jeremy Rutman, Jeroen BéDorf, Jett Jones, Jimmy Jia, jinghuangintel, jinze1994, JKurland, Joel Hestness, joetoth, John B Nelson, John Impallomeni, John Lawson, Jonas, Jonathan Dekhtiar, joshkyh, Jun Luan, Jun Mei, Kai Sasaki, Karl Lessard, karl@kubx.ca, Kb Sriram, Kenichi Ueno, Kevin Slagle, Kongsea, Lakshay Garg, lhlmgr, Lin Min, liu.guangcong, Loki Der Quaeler, Louie Helm, lucasmoura, Luke Iwanski, Lyndon White, Mahmoud Abuzaina, Marcel Puyat, Mark Aaron Shirley, Michele Colombo, MtDersvan, Namrata-Ibm, Nathan Luehr, Naurril, Nayana Thorat, Nicolas Lopez, Niranjan Hasabnis, Nolan Liu, Nouce, Oliver Hennigh, osdamv, Patrik Erdes, Patryk Chrabaszcz, Pavel Christof, Penghao Cen, postBG, Qingqing Cao, Qingying Chen, qjivy, Raphael, Rasmi, raymondxyang, Renze Yu, resec, Roffel, Ruben Vereecken, Ryohei Kuroki, sandipmgiri, Santiago Castro, Scott Kirkland, Sean Vig, Sebastian Raschka, Sebastian Weiss, Sergey Kolesnikov, Sergii Khomenko, Shahid, Shivam Kotwalia, Stuart Berg, Sumit Gouthaman, 
superzerg, Sven Mayer, tetris, Ti Zhou, Tiago Freitas Pereira, Tian Jin, Tomoaki Oiki, Vaibhav Sood, vfdev, Vivek Rane, Vladimir Moskva, wangqr, Weber Xie, Will Frey, Yan Facai (颜发才), yanivbl6, Yaroslav Bulatov, Yixing Lao, Yong Tang, youkaichao, Yuan (Terry) Tang, Yue Zhang, Yuxin Wu, Ziming Dong, ZxYuan, 黄璞
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
See also TensorBoard 0.1.4 release notes.

* Canned estimators: `DNNClassifier`, `DNNRegressor`, `LinearClassifier`, `LinearRegressor`, `DNNLinearCombinedClassifier`, and `DNNLinearCombinedRegressor`.
* `import tensorflow` now goes much faster.
* Updates to `tf.gather`.
* Added a `constant_values` keyword argument to `tf.pad`.
* Added the `Dataset.interleave` transformation.
* Added `ConcatenateDataset` to concatenate two datasets.
* Added the `Dataset.list_files` API.
* tfdbg:
  * Added a `-s` flag to the command `print_tensor` or `pt`.
  * Added the `print_feed` or `pf` command and clickable links in the curses UI.
  * Added the `run -p` command.
* Added `tf.distributions`.
* Improvements to `tf.where` and `tf.nn.top_k`.
* Additions to `tf.contrib.seq2seq`.
* Added `tf.contrib.signal`, a library for signal processing primitives.
* Added `tf.contrib.resampler`, containing CPU and GPU ops for differentiable resampling of images.
* `tf.RewriterConfig` was removed from the Python API after being available in 1.2 release candidates (it was never in an actual release). Graph rewriting is still available, just not as `tf.RewriterConfig`. Instead add an explicit import.
* Breaking change to `tf.contrib.data.Dataset` APIs that expect a nested structure. Lists are now converted to `tf.Tensor` implicitly. You may need to change uses of lists to tuples in existing code. In addition, dicts are now supported as a nested structure.
* `tf.contrib.metrics.{streaming_covariance,streaming_pearson_correlation}` modified to return NaN when they have seen less than or equal to 1 unit of weight.
* Fixed a `strides` and `begin` dtype mismatch when slicing using an int64 Tensor index in Python.
* `saved_model.utils` now supports SparseTensors transparently.
* Improvements to `saver.restore`.
* Added `tf.spectral.rfft` & `tf.spectral.irfft`.
* Improved the performance of `tf.layers.conv2d` when setting `use_bias=True` by 2x by using `nn.bias_add`.
* `tf.summary` ops now allow controlling the tab name used in Tensorboard for organizing summaries.
* Updates to `tf.Session.make_callable`.

This release contains contributions from many people at Google, as well as:
4F2E4A2E, Adriano Carmezim, Adrià Arrufat, Alan Yee, Alex Lattas, Alex Rothberg, Alexandr Baranezky, Ali Siddiqui, Andreas Solleder, Andrei Costinescu, Andrew Hundt, Androbin, Andy Kernahan, Anish Shah, Anthony Platanios, Arvinds-Ds, b1rd, Baptiste Arnaud, Ben Mabey, Benedikt Linse, Beomsu Kim, Bo Wang, Boyuan Deng, Brett Koonce, Bruno Rosa, Carl Thomé, Changming Sun, Chase Roberts, Chirag Bhatia, Chris Antaki, Chris Hoyean Song, Chris Tava, Christos Nikolaou, Croath Liu, cxx, Czxck001, Daniel Ylitalo, Danny Goodman, Darren Garvey, David Brailovsky, David Norman, DavidNorman, davidpham87, ddurham2, Dhruv, DimanNe, Drew Hintz, Dustin Tran, Earthson Lu, ethiraj, Fabian Winnen, Fei Sun, Freedom" Koan-Sin Tan, Fritz Obermeyer, Gao, Xiang, Gautam, Guenther Schmuelling, Gyu-Ho Lee, Hauke Brammer, horance, Humanity123, J Alammar, Jayeol Chun, Jeroen BéDorf, Jianfei Wang, jiefangxuanyan, Jing Jun Yin, Joan Puigcerver, Joel Hestness, Johannes Mayer, John Lawson, Johnson145, Jon Malmaud, Jonathan Alvarez-Gutierrez, Juang, Yi-Lin, Julian Viereck, Kaarthik Sivashanmugam, Karl Lessard, karl@kubx.ca, Kevin Carbone, Kevin Van Der Burgt, Kongsea, ksellesk, lanhin, Lef Ioannidis, Liangliang He, Louis Tiao, Luke Iwanski, LáSzló Csomor, magixsno, Mahmoud Abuzaina, Marcel Hlopko, Mark Neumann, Maxwell Paul Brickner, mdfaijul, MichaëL Defferrard, Michał JastrzęBski, Michele Colombo, Mike Brodie, Mosnoi Ion, mouradmourafiq, myPrecious, Nayana Thorat, Neeraj Kashyap, Nelson Liu, Niranjan Hasabnis, Olivier Moindrot, orome, Pankaj Gupta, Paul Van Eck, peeyush18, Peng Yu, Pierre, preciousdp11, qjivy, Raingo, raoqiyu, ribx, Richard S. 
Imaoka, Rishabh Patel, Robert Walecki, Rockford Wei, Ryan Kung, Sahil Dua, Sandip Giri, Sayed Hadi Hashemi, sgt101, Shitian Ni, Shuolongbj, Siim PõDer, Simon Perkins, sj6077, SOLARIS, Spotlight0xff, Steffen Eberbach, Stephen Fox, superryanguo, Sven Mayer, Tapan Prakash, Tiago Morais Morgado, Till Hoffmann, Tj Rana, Vadim Markovtsev, vhasanov, Wei Wu, windead, Yan (Asta) Li, Yan Chen, Yann Henon, Yi Wang, Yong Tang, yorkie, Yuan (Terry) Tang, Yuxin Wu, zhengjiajin, zhongzyd, 黄璞
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
Python 3.6 support on Windows.
Added tf.layers.conv3d_transpose
layer for spatio temporal deconvolution.
Added tf.Session.make_callable()
, which provides a lower overhead means of running a similar step multiple times.
Added libverbs-based RDMA support to contrib (courtesy @junshi15 from Yahoo).
Bring `tf.feature_column.*` into the API. Non-deprecated functionality from `tf.contrib.layers.*` is moved to `tf.feature_column.*` with cosmetic changes.
`RNNCell` objects now subclass `tf.layers.Layer`. The strictness described in the TensorFlow 1.1 release is gone: the first time an `RNNCell` is used, it caches its scope, and all future uses of the `RNNCell` will reuse variables from that same scope. This is a breaking change from the behavior of `RNNCell`s in TensorFlow versions <= 1.0.1. TensorFlow 1.1 had checks in place to ensure old code works correctly with the new semantics; this version allows more flexible uses of `RNNCell` but can lead to subtle errors if using code meant for TensorFlow <= 1.0.1. For example, writing `MultiRNNCell([lstm] * 5)` will now build a 5-layer LSTM stack where each layer shares the same parameters. To get 5 layers each with their own parameters, write `MultiRNNCell([LSTMCell(...) for _ in range(5)])`. If at all unsure, first test your code with TF 1.1; ensure it raises no errors, and then upgrade to TF 1.2.
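The parameter-sharing pitfall above is really just Python list aliasing: `[lstm] * 5` holds five references to one object, so a scope cached on that object is reused by every layer. A plain-Python sketch (no TensorFlow required; `FakeCell` is a hypothetical stand-in for an `RNNCell`, not the real class):

```python
# Sketch of why MultiRNNCell([lstm] * 5) shares parameters under the new
# caching semantics: the list holds five references to ONE cell object.
class FakeCell:
    """Hypothetical stand-in for an RNNCell that caches its scope on first use."""
    def __init__(self):
        self.scope = None  # set the first time the cell is called

    def __call__(self, scope_name):
        if self.scope is None:
            self.scope = scope_name  # cache on first use
        return self.scope            # later calls reuse the cached scope

lstm = FakeCell()
shared = [lstm] * 5                        # five references to the same object
distinct = [FakeCell() for _ in range(5)]  # five independent objects

shared_scopes = [c(f"layer_{i}") for i, c in enumerate(shared)]
distinct_scopes = [c(f"layer_{i}") for i, c in enumerate(distinct)]

print(shared_scopes)    # every layer reuses the scope cached on first use
print(distinct_scopes)  # each cell caches its own scope
```

The list-comprehension form recommended above avoids the aliasing entirely, which is why it yields five independently parameterized layers.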
RNNCells' variable names have been renamed for consistency with Keras layers. Specifically, the previous variable names "weights" and "biases" have been changed to "kernel" and "bias", respectively. This may break backward compatibility with your old checkpoints containing such RNN cells, in which case you can use the checkpoint_convert script to convert the variable names in your old checkpoints.
Many of the RNN functions and classes that were in the `tf.nn` namespace before the 1.0 release and which were moved to `tf.contrib.rnn` have now been moved back to the core namespace. This includes `RNNCell`, `LSTMCell`, `GRUCell`, and a number of other cells. These now reside in `tf.nn.rnn_cell` (with aliases in `tf.contrib.rnn` for backwards compatibility). The original `tf.nn.rnn` function is now `tf.nn.static_rnn`, and the bidirectional static and state saving static rnn functions are also now back in the `tf.nn` namespace.
Notable exceptions are the `EmbeddingWrapper`, `InputProjectionWrapper` and `OutputProjectionWrapper`, which will slowly be moved to deprecation in `tf.contrib.rnn`. These are inefficient wrappers that should often be replaced by calling `embedding_lookup` or `layers.dense` as pre- or post-processing of the RNN. For RNN decoding, this functionality has been replaced with an alternative API in `tf.contrib.seq2seq`.
Intel MKL Integration (https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture). Intel developed a number of optimized deep learning primitives. In addition to matrix multiplication and convolution, these building blocks include: direct batched convolution; pooling (maximum, minimum, average); normalization (LRN, batch normalization); activation (rectified linear unit, ReLU); and data manipulation (multi-dimensional transposition/conversion, split, concat, sum and scale).
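As a plain-Python illustration of one of the primitives listed above, here is 1-D max pooling with window and stride `k` (illustrative only; the MKL implementations are vectorized native kernels, and `max_pool_1d` is a hypothetical helper, not a TensorFlow or MKL API):

```python
# Illustrative 1-D max pooling (window size k, stride k), one of the
# primitive operations listed above. Real MKL kernels are native code.
def max_pool_1d(xs, k):
    """Take the max over each non-overlapping window of length k."""
    return [max(xs[i:i + k]) for i in range(0, len(xs) - k + 1, k)]

signal = [1, 3, 2, 9, 4, 4, 6, 5]
print(max_pool_1d(signal, 2))  # [3, 9, 4, 6]
```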
TensorForest Estimator now supports SavedModel export for serving.
Support client-provided ClusterSpec's and propagate them to all workers to enable the creation of dynamic TensorFlow clusters.
TensorFlow C library now available for Windows.
We released a new open-source version of TensorBoard.
SavedModel CLI tool available to inspect and execute MetaGraph in SavedModel.
Android releases of TensorFlow are now pushed to jcenter for easier integration into apps. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/android/inference_interface/README.md for more details.
`org.tensorflow.contrib.android.TensorFlowInferenceInterface` now throws exceptions where possible and has simplified method signatures.
`tf.contrib.util.create_example`.
`tf.contrib.image`.
`tf.contrib.stateless` for random ops with custom seed control.
`tf.contrib.kernel_methods` module with Ops and estimators for primal (explicit) kernel methods in TensorFlow.
`Operation.get_attr` on type attributes returns the Python DType version of the type to match expected get_attr documentation rather than the protobuf enum.
`categorical_column_with_vocabulary_file`.
`reduction` arg to losses.
`tf.placeholder` can represent scalar shapes and partially known ones.
`tf.summary.text` for outputting text to TensorBoard.
`tf.string_to_number` now supports int64 and float64 outputs.
This release contains contributions from many people at Google, as well as:
4F2E4A2E, Aaron Schumacher, Abhi Agg, admcrae, Adriano Carmezim, Adrià Arrufat, agramesh1, Akimitsu Seo, Alan Mosca, Alex Egg, Alex Rothberg, Alexander Heinecke, Alexander Matyasko, Alexandr Baranezky, Alexandre Caulier, Ali Siddiqui, Anand Venkat, Andrew Hundt, Androbin, Anmol Sharma, Arie, Arno Leist, Arron Cao, AuréLien Geron, Bairen Yi, Beomsu Kim, Carl Thomé, cfperez, Changming Sun, Corey Wharton, critiqjo, Dalei Li, Daniel Rasmussen, Daniel Trebbien, DaríO Hereñú, David Eng, David Norman, David Y. Zhang, Davy Song, ddurham2, Deepak Subburam, Dmytro Kyrychuk, Dominic Rossi, Dominik SchlöSser, Dustin Tran, Eduardo Pinho, Egil Martinsson, Elliot Saba, Eric Bigelow, Erik Smistad, Evan Klitzke, Fabrizio Milo, Falcon Dai, Fei Gao, FloopCZ, Fung Lam, Gautam, GBLin5566, Greg Peatfield, Gu Wang, Guenther Schmuelling, Hans Pabst, Harun Gunaydin, Huaizheng, Ido Shamay, Ikaro Silva, Ilya Edrenkin, Immexxx, James Mishra, Jamie Cooke, Jay Young, Jayaram Bobba, Jianfei Wang, jinghua2, Joey Meyer, John Maidens, Jonghoon Jin, Julian Villella, Jun Kim, Jun Shi, Junwei Pan, jyegerlehner, Karan Desai, Karel Van De Plassche, Kb Sriram, KhabarlakKonstantin, Koan-Sin Tan, krivard, Kwotsin, Leandro Gracia Gil, Li Chen, Liangliang He, Louie Helm, lspvic, Luiz Henrique Soares, LáSzló Csomor, Mark Wong, Mathew Wicks, Matthew Rahtz, Maxwell Paul Brickner, Michael Hofmann, Miguel Flores Ruiz De Eguino, MikeTam1021, Mortada Mehyar, Mycosynth, Namnamseo, Nate Harada, Neven Miculinic, Nghia Tran, Nick Lyu, Niranjan Hasabnis, Nishidha, Oleksii Kuchaiev, Oyesh Mann Singh, Panmari, Patrick, Paul Van Eck, Piyush Chaudhary, Quim Llimona, Raingo, Richard Davies, Ruben Vereecken, Sahit Chintalapudi, Sam Abrahams, Santiago Castro, Scott Sievert, Sean O'Keefe, Sebastian Schlecht, Shane, Shubhankar Deshpande, Spencer Schaber, Sunyeop Lee, t13m, td2014, Thomas H. P. 
Andersen, Toby Petty, Umang Mehta, Vadim Markovtsev, Valentin Iovene, Vincent Zhao, Vit Stepanovs, Vivek Rane, Vu Pham, wannabesrevenge, weipingpku, wuhaixutab, wydwww, Xiang Gao, Xiaolin Lin, xiaoyaozhuzi, Yaroslav Bulatov, Yi Liu, Yoshihiro Sugi, Yuan (Terry) Tang, Yuming Wang, Yuxin Wu, Zader Zheng, Zhaojun Zhang, zhengjiajin, ZhipengShen, Ziming Dong, zjj2wry
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
`tf.spectral` module. Moved existing FFT ops to `tf.spectral` while keeping an alias in the old location (`tf.*`).
`tf.spectral`.
`tf.bincount` function.
`RecordInput`.
`tf.contrib.image.compose_transforms` function.
Bring `tf.estimator.*` into the API. Non-deprecated functionality from `tf.contrib.learn.Estimator` is moved to `tf.estimator.Estimator` with cosmetic changes.
(`print_source` / `ps`)
(`invoke_stepper`) now uses intermediate tensor dumps. It also uses `TensorHandles` as direct feeds during successive `cont` calls for improved performance and reduced memory consumption.
`reuse=True`.
`pmf`, `pdf`, `log_pmf`, `log_pdf`.
`bayesflow.special_math` to distributions.
`tf.contrib.tensor_forest.python.tensor_forest.RandomForestDeviceAssigner` removed.
`tf.contrib.distributions.MultivariateNormalFull` replaced by `tf.contrib.distributions.MultivariateNormalTriL`.
`tf.contrib.distributions.MultivariateNormalCholesky` replaced by `tf.contrib.distributions.MultivariateNormalTriL`.
`tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev` replaced by `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale`.
`tf.contrib.distributions.MultivariateNormalDiag` arguments changed from `mu`, `diag_stddev` to `loc`, `scale_diag`.
`tf.contrib.distributions.MultivariateNormalDiagPlusVDVT` removed.
`tf.contrib.distributions.MultivariateNormalDiagPlusLowRank` added.
`tf.contrib.layers.sparse_column_with_keys`.
`tf.set_random_seed(0)` to be deterministic for all ops.
`tf.matching_files`.
`LogMessage` now includes a timestamp at the beginning of a message.
`StagingArea`.
`sparse_matmul_op` reenabled for Android builds.
(`TF_GraphImportGraphDefWithReturnOutputs()`)
`tf.while_loops`.
This release contains contributions from many people at Google, as well as:
A. Besir Kurtulmus, Adal Chiriliuc, @akash, Alec-Desouza, Alex Rothberg, Alex Sergeev, Alexander Heinecke, Allen Guo, Andreas Madsen, Ankesh Anand, Anton Loss, @Aravind, @Arie, Ashutosh Das, AuréLien Geron, Bairen Yi, @bakunyo, Ben Visser, Brady Zhou, Calpa Liu, Changming Sun, Chih Cheng Liang, Christopher Berner, Clark Zinzow, @Conchylicultor, Dan Ellis, Dan J, Dan Jarvis, Daniel Ylitalo, Darren Garvey, David Norman, David Truong, @DavidNorman, Dimitar Pavlov, Dmitry Persiyanov, @Eddie, @elirex, Erfan Noury, Eron Wright, Evgeny Mazovetskiy, Fabrizio (Misto) Milo, @fanlu, Fisher Coder, Florian Courtial, Franck Dernoncourt, Gagan Goel, Gao, Xiang, @Gautam, Gefu Tang, @guilherme, @guschmue, Hannah Provenza, Hans Pabst, @hartb, Hsiao Yi, Huazuo Gao, Igor ChorążEwicz, Ivan Smirnov, Jakub Kolodziejczyk, Jason Gavris, Jason Morton, Jay Young, Jayaram Bobba, Jeremy Sawruk, Jiaming Liu, Jihun Choi, @jiqiu, Joan Thibault, John C F, Jojy George Varghese, Jon Malmaud, Julian Berman, Julian Niedermeier, Junpeng Lao, Kai Sasaki, @Kankroc, Karl Lessard, Kyle Bostelmann, @Lezcano, Li Yi, Luo Yun, @lurker, Mahmoud-Abuzaina, Mandeep Singh, Marek Kolodziej, Mark Szepieniec, Martial Hue, Medhat Omr, Memo Akten, Michael Gharbi, MichaëL Defferrard, Milan Straka, @MircoT, @mlucool, Muammar Ibn Faisal, Nayana Thorat, @nghiattran, Nicholas Connor, Nikolaas Steenbergen, Niraj Patel, Niranjan Hasabnis, @Panmari, Pavel Bulanov, Philip Pries Henningsen, Philipp Jund, @polonez, Prayag Verma, Rahul Kavi, Raphael Gontijo Lopes, @rasbt, Raven Iqqe, Reid Pryzant, Richard Shin, Rizwan Asif, Russell Kaplan, Ryo Asakura, RüDiger Busche, Saisai Shao, Sam Abrahams, @sanosay, Sean Papay, @seaotterman, @selay01, Shaurya Sharma, Sriram Narayanamoorthy, Stefano Probst, @taknevski, @tbonza, @teldridge11, Tim Anglade, Tomas Reimers, Tomer Gafner, Valentin Iovene, Vamsi Sripathi, Viktor Malyi, Vit Stepanovs, Vivek Rane, Vlad Firoiu, @wangg12, @will, Xiaoyu Tao, Yaroslav Bulatov, Yi Liu, Yuan (Terry) Tang, 
@Yufeng, Yuming Wang, Yuxin Wu, Zafar Takhirov, Ziming Dong
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
`tf.core` and `tf.python` modules removed from the API. These were never intended to be exposed. Please use the same objects through the top-level `tf` module instead.
`pip install tensorflow` command.
To help you upgrade your existing TensorFlow Python code to match the API changes below, we have prepared a conversion script.
Division and modulus operators (`/`, `//`, `%`) now match Python (flooring) semantics. This applies to `tf.div` and `tf.mod` as well. To obtain forced integer truncation based behaviors you can use `tf.truncatediv` and `tf.truncatemod`.
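The difference between the two behaviors is exactly Python's own floor division versus C-style truncation, and can be checked in plain Python (illustrative; `trunc_div` and `trunc_mod` are hypothetical stand-ins for `tf.truncatediv` and `tf.truncatemod`, not their implementations):

```python
import math

# Python's // floors toward negative infinity -- the new tf.div/tf.mod behavior.
print(-7 // 2)           # -4
print(-7 % 2)            # 1

# C-style truncation toward zero -- the behavior tf.truncatediv/tf.truncatemod keep.
def trunc_div(a, b):
    return math.trunc(a / b)

def trunc_mod(a, b):
    return a - b * trunc_div(a, b)

print(trunc_div(-7, 2))  # -3
print(trunc_mod(-7, 2))  # -1
```

Note the two conventions only disagree when the operands have opposite signs.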
`tf.divide()` is now the recommended division function. `tf.div()` will remain, but its semantics do not respond to Python 3 or `from future` mechanisms.
`tf.reverse(a, [True, False, True])` must now be written as `tf.reverse(a, [0, 2])`. `tf.reverse_v2()` will remain until 1.0 final.
`tf.mul`, `tf.sub` and `tf.neg` are deprecated in favor of `tf.multiply`, `tf.subtract` and `tf.negative`.
`tf.pack` and `tf.unpack` are deprecated in favor of `tf.stack` and `tf.unstack`.
`TensorArray.pack` and `TensorArray.unpack` are getting deprecated in favor of `TensorArray.stack` and `TensorArray.unstack`.
Many Python functions have had their keyword arguments changed to use `axis` when referring to specific dimensions. We have kept the old keyword arguments for compatibility currently, but we will be removing them well before the final 1.0:
`tf.argmax`: `dimension` becomes `axis`
`tf.argmin`: `dimension` becomes `axis`
`tf.count_nonzero`: `reduction_indices` becomes `axis`
`tf.expand_dims`: `dim` becomes `axis`
`tf.reduce_all`: `reduction_indices` becomes `axis`
`tf.reduce_any`: `reduction_indices` becomes `axis`
`tf.reduce_join`: `reduction_indices` becomes `axis`
`tf.reduce_logsumexp`: `reduction_indices` becomes `axis`
`tf.reduce_max`: `reduction_indices` becomes `axis`
`tf.reduce_mean`: `reduction_indices` becomes `axis`
`tf.reduce_min`: `reduction_indices` becomes `axis`
`tf.reduce_prod`: `reduction_indices` becomes `axis`
`tf.reduce_sum`: `reduction_indices` becomes `axis`
`tf.reverse_sequence`: `batch_dim` becomes `batch_axis`, `seq_dim` becomes `seq_axis`
`tf.sparse_concat`: `concat_dim` becomes `axis`
`tf.sparse_reduce_sum`: `reduction_axes` becomes `axis`
`tf.sparse_reduce_sum_sparse`: `reduction_axes` becomes `axis`
`tf.sparse_split`: `split_dim` becomes `axis`
`tf.listdiff` has been renamed to `tf.setdiff1d` to match NumPy naming.
`tf.inv` has been renamed to `tf.reciprocal` (component-wise reciprocal) to avoid confusion with `np.inv`, which is matrix inversion.
`tf.split` now takes arguments in a reversed order and with different keywords. In particular, we now match NumPy order as `tf.split(value, num_or_size_splits, axis)`.
`tf.sparse_split` now takes arguments in reversed order and with different keywords. In particular we now match NumPy order as `tf.sparse_split(sp_input, num_split, axis)`. NOTE: we have temporarily made `tf.sparse_split` require keyword arguments.
`tf.concat` now takes arguments in reversed order and with different keywords. In particular we now match NumPy order as `tf.concat(values, axis, name)`.
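Since the new `tf.concat(values, axis, name)` signature deliberately mirrors NumPy's `np.concatenate(arrays, axis)`, the axis semantics can be sketched in plain Python with nested lists (no TensorFlow or NumPy required; `concat2d` is a hypothetical helper for illustration only):

```python
# Hypothetical pure-Python helper mirroring the axis semantics shared by
# the new tf.concat(values, axis, name) and np.concatenate(arrays, axis).
def concat2d(values, axis):
    """Concatenate a list of 2-D nested lists along axis 0 or 1."""
    if axis == 0:
        # Stack row blocks: chain the row lists together.
        return [row for block in values for row in block]
    # axis == 1: extend each row with the corresponding rows of later blocks.
    return [sum((block[i] for block in values), []) for i in range(len(values[0]))]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(concat2d([a, b], axis=0))  # [[1, 2], [3, 4], [5, 6], [7, 8]]
print(concat2d([a, b], axis=1))  # [[1, 2, 5, 6], [3, 4, 7, 8]]
```

The values-first, axis-second order is the whole point of the breaking change: code reads the same whether the backend is TensorFlow or NumPy.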
`tf.image.decode_jpeg` by default uses the faster DCT method, sacrificing a little fidelity for improved speed. One can revert to the old behavior by specifying the attribute `dct_method='INTEGER_ACCURATE'`.
`tf.complex_abs` has been removed from the Python interface. `tf.abs` supports complex tensors and should be used instead.
`var_scope` property renamed to `.variable_scope`.
`tf.zeros_initializer()` and `tf.ones_initializer()` now return a callable that must be called with initializer arguments; in your code, replace `tf.zeros_initializer` with `tf.zeros_initializer()`.
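The shape of the initializer change can be sketched in plain Python (no TensorFlow needed; `zeros_initializer` below is a hypothetical stand-in, not the real implementation): the symbol is now a factory that must be called once to obtain the actual initializer callable.

```python
# Hypothetical sketch of the factory-style initializer API.
def zeros_initializer():
    """Factory: returns a callable that builds a zero-filled value."""
    def _init(shape):
        # Build a nested list of zeros for a 1-D or 2-D shape.
        if len(shape) == 1:
            return [0.0] * shape[0]
        return [[0.0] * shape[1] for _ in range(shape[0])]
    return _init

# Old style passed the bare symbol around; new style calls the factory first,
# then calls the result with the initializer arguments.
init = zeros_initializer()
print(init([2, 3]))  # [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
```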
`SparseTensor.shape` has been renamed to `SparseTensor.dense_shape`. Same for `SparseTensorValue.shape`.
`_ref` dtypes removed from the Python API.
`{softmax,sparse_softmax,sigmoid}_cross_entropy_with_logits` changed to be (labels, predictions), and force use of named args.
`tf.nn.sampled_softmax_loss` and `tf.nn.nce_loss` have both changed their API such that you need to switch the `inputs, labels` to `labels, inputs` parameters.
The `SparseTensor` constructor changes its name to `dense_shape` between TensorFlow 0.12 and TensorFlow 1.0.
`parallel_stack`.
`sparse_column_with_vocabulary_file`, to specify a feature column that transforms string features to IDs, where the mapping is defined by a vocabulary file.
`index_to_string_table`, which returns a lookup table that maps indices to strings.
`string_to_index_table`, which returns a lookup table that matches strings to indices.
`ParallelForWithWorkerId` function.
`contrib/session_bundle`.
`tf.contrib.framework.filter_variables` as a convenience function to filter lists of variables based on regular expressions.
`make_template()` takes an optional `custom_getter_` param.
`recursive_create_dir`.
`contrib/android/cmake`
`tf.saved_model`.
`reduce_join` to treat `reduction_indices` in the same way as other `reduce_` ops.
`TensorForestEstimator` to `contrib/tensor_forest`.
`tf.divide` now honors the name field.
`StagingArea` and new ops: `stage` and `unstage`.
This release contains contributions from many people at Google, as well as:
Aaron Hu, Abhishek Aggarwal, Adam Michael, Adriano Carmezim, @AfirSraftGarrier, Alexander Novikov, Alexander Rosenberg Johansen, Andrew Gibiansky, Andrew Hundt, Anish Shah, Anton Loss, @b0noI, @BoyuanJiang, Carl Thomé, Chad Kennedy, Comic Chang, Connor Braa, Daniel N. Lang, Daniel Trebbien, @danielgordon10, Darcy Liu, Darren Garvey, Dmitri Lapin, Eron Wright, Evan Cofer, Fabrizio Milo, Finbarr Timbers, Franck Dernoncourt, Garrett Smith, @guschmue, Hao Wei, Henrik Holst, Huazuo Gao, @Ian, @Issac, Jacob Israel, Jangsoo Park, Jin Kim, Jingtian Peng, John Pope, Kye Bostelmann, Liangliang He, Ling Zhang, Luheng He, Luke Iwanski, @lvli, Michael Basilyan, Mihir Patel, Mikalai Drabovich, Morten Just, @newge, Nick Butlin, Nishant Shukla, Pengfei Ni, Przemyslaw Tredak, @rasbt, @Ronny, Rudolf Rosa, @RustingSword, Sam Abrahams, Sam Putnam, @SeongAhJo, Shi Jiaxin, @skavulya, Steffen MüLler, @TheUSER123, @tiriplicamihai, @vhasanov, Victor Costan, Vit Stepanovs, Wangda Tan, Wenjian Huang, Xingdong Zuo, Yaroslav Bulatov, Yota Toyama, Yuan (Terry) Tang, Yuxin Wu
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
`tf.train.Saver`. Old V1 checkpoints continue to be readable; controlled by the `write_version` argument, `tf.train.Saver` now by default writes out in the new V2 format. It significantly reduces the peak memory required and latency incurred during restore.
`matrix_solve_ls` and `self_adjoint_eig`.
`tf.contrib.integrate.odeint`.
`tf.contrib.labeled_tensor`.
`BusAdjacency` enum replaced with a protocol buffer `DeviceLocality`. PCI bus indexing now starts from 1 instead of 0, and `bus_id==0` is used where previously `BUS_ANY` was used.
`Env::FileExists` and `FileSystem::FileExists` now return a `tensorflow::Status` instead of a bool. Any callers to this function can be converted to a bool by adding `.ok()` to the call.
`TF_SessionWithGraph` has been renamed to `TF_Session`, indicating its preferred use in language bindings for TensorFlow. What was previously `TF_Session` has been renamed to `TF_DeprecatedSession`.
`TF_Port` to `TF_Output` in the C API.
`bus_id==0` is used where previously `BUS_ANY` was used.
(`tf.contrib.layers`). This means old checkpoints written using this code will not load after this change without providing `Saver` a list of variable renames. Examples of variable scope changes include `RNN` -> `rnn` in `tf.nn.rnn`, `tf.nn.dynamic_rnn` and moving from `Linear/Matrix` -> `weights` and `Linear/Bias` -> `biases` in most RNN cells.
`SparseTensor.shape` has been renamed to `SparseTensor.dense_shape`. Same for `SparseTensorValue.shape`.
`Env::FileExists` and `FileSystem::FileExists` now return a `tensorflow::Status` instead of a bool. Any callers to this function can be converted to a bool by adding `.ok()` to the call.
`TF_SessionWithGraph` has been renamed to `TF_Session`, indicating its preferred use in language bindings for TensorFlow. What was previously `TF_Session` has been renamed to `TF_DeprecatedSession`.
`TF_Port` to `TF_Output`.
`TF_Tensor` objects provided to `TF_Run`, `TF_SessionRun`, `TF_SetAttrTensor` etc.
`tf.image.per_image_whitening()` to `tf.image.per_image_standardization()`.
`tf.summary` submodule.
`histogram_summary`, `audio_summary`, `scalar_summary`, `image_summary`, `merge_summary`, and `merge_all_summaries`.
`batch_*` and regular version of linear algebra and FFT ops. The regular op now handles batches as well. All `batch_*` Python interfaces were removed.
`tf.all_variables`, `tf.VARIABLES` and `tf.initialize_all_variables` renamed to `tf.global_variables`, `tf.GLOBAL_VARIABLES` and `tf.global_variables_initializer` respectively.
`tf.zeros_initializer()` and `tf.ones_initializer()` now return a callable that must be called with initializer arguments; in your code, replace `tf.zeros_initializer` with `tf.zeros_initializer()`.
`lgamma` function.
`tf.sqrt` handling of negative arguments.
`batch_matmul` on multi-core CPUs.
`matrix_set_diag`, `matrix_diag_part` and their gradients to work for rectangular matrices.
This release contains contributions from many people at Google, as well as:
@a7744hsc, Abhi Agg, @admcrae, Adriano Carmezim, Aki Sukegawa, Alex Kendall, Alexander Rosenberg Johansen, @amcrae, Amlan Kar, Andre Simpelo, Andreas Eberle, Andrew Hundt, Arnaud Lenglet, @b0noI, Balachander Ramachandran, Ben Barsdell, Ben Guidarelli, Benjamin Mularczyk, Burness Duan, @c0g, Changming Sun, @chanis, Corey Wharton, Dan J, Daniel Trebbien, Darren Garvey, David Brailovsky, David Jones, Di Zeng, @DjangoPeng, Dr. Kashif Rasul, @drag0, Fabrizio (Misto) Milo, FabríCio Ceschin, @fp, @Ghedeon, @guschmue, Gökçen Eraslan, Haosdent Huang, Haroen Viaene, Harold Cooper, Henrik Holst, @hoangmit, Ivan Ukhov, Javier Dehesa, Jingtian Peng, Jithin Odattu, Joan Pastor, Johan Mathe, Johannes Mayer, Jongwook Choi, Justus Schwabedal, Kai Wolf, Kamil Hryniewicz, Kamran Amini, Karen Brems, Karl Lattimer, @kborer, Ken Shirriff, Kevin Rose, Larissa Laich, Laurent Mazare, Leonard Lee, Liang-Chi Hsieh, Liangliang He, Luke Iwanski, Marek Kolodziej, Moustafa Alzantot, @MrQianjinsi, @nagachika, Neil Han, Nick Meehan, Niels Ole Salscheider, Nikhil Mishra, @nschuc, Ondrej Skopek, OndřEj Filip, @OscarDPan, Pablo Moyano, Przemyslaw Tredak, @qitaishui, @Quarazy, @raix852, Philipp Helo, Sam Abrahams, @SriramRamesh, Till Hoffmann, Tushar Soni, @tvn, @tyfkda, Uwe Schmidt, Victor Villas, Vit Stepanovs, Vladislav Gubarev, @wujingyue, Xuesong Yang, Yi Liu, Yilei Yang, @youyou3, Yuan (Terry) Tang, Yuming Wang, Zafar Takhirov, @zhongyuk, Ziming Dong, @guotong1988
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
`tensorflow/contrib/cudnn_rnn`.
`foo[1, 2:4, tf.newaxis, ..., :-3:-1, :]` are now supported. In addition we have preliminary (non-broadcasting) support for sliced assignment to variables. In particular one can write `var[1:3].assign([1,11,111])`.
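The trailing `:-3:-1` piece of that slice is ordinary Python extended-slice notation, which can be checked on a plain list (no TensorFlow needed); only the `tf.newaxis` insertion and `...` expansion are TensorFlow-specific:

```python
# Plain-Python refresher on the extended slice used in the example above.
data = list(range(10))   # [0, 1, ..., 9]

print(data[2:4])         # [2, 3] -- half-open range
print(data[:-3:-1])      # [9, 8] -- walk backwards from the end, stop before index -3
print(data[::-1][:2])    # [9, 8] -- the same result spelled differently
```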
`tf.op_scope` and `tf.variable_op_scope` are deprecated in favor of a unified `tf.name_scope` and `tf.variable_scope`. The new argument order of `tf.variable_scope` is incompatible with previous versions.
`core/util/tensor_bundle` module: a module to efficiently serialize/deserialize tensors to disk. Will be used in TF's new checkpoint format.
`self_adjoint_eig` or `self_adjoint_eigvals`.
Deprecated `batch_*` methods for most linear algebra and FFT ops and promoted the non-batch version of the ops to handle batches of matrices.
`TF_GraphGetTensorNumDims` and `TF_GraphGetTensorShape`.
`REGISTER_OP(...).SetShapeFn(...)`. Python shape inference RegisterShape calls use the C++ shape functions with `common_shapes.call_cpp_shape_fn`. A future release will remove `RegisterShape` from Python.
`tensorflow.__git_version__` now allows users to identify the version of the code that TensorFlow was compiled with. We also have `tensorflow.__git_compiler__` which identifies the compiler used to compile TensorFlow's core.
`batch_matmul`.
`state_is_tuple=True`. For a quick fix while transitioning to the new default, simply pass the argument `state_is_tuple=False`.
`uniform_unit_scaling_initializer()` no longer takes a `full_shape` arg, instead relying on the partition info passed to the initializer function when it's called.
`node_def.proto` instead of `graph.proto`.
`ops.NoGradient` was renamed `ops.NotDifferentiable`. `ops.NoGradient` will be removed soon.
`dot.h` / DotGraph was removed (it was an early analysis tool prior to TensorBoard, no longer that useful). It remains in history should someone find the code useful.
This release contains contributions from many people at Google, as well as:
Abid K, @afshinrahimi, @AidanGG, Ajay Rao, Aki Sukegawa, Alex Rothberg, Alexander Rosenberg Johansen, Andrew Gibiansky, Andrew Thomas, @Appleholic, Bastiaan Quast, Ben Dilday, Bofu Chen, Brandon Amos, Bryon Gloden, Cissp®, @chanis, Chenyang Liu, Corey Wharton, Daeyun Shin, Daniel Julius Lasiman, Daniel Waterworth, Danijar Hafner, Darren Garvey, Denis Gorbachev, @DjangoPeng, Egor-Krivov, Elia Palme, Eric Platon, Fabrizio Milo, Gaetan Semet, Georg Nebehay, Gu Wang, Gustav Larsson, @haosdent, Harold Cooper, Hw-Zz, @ichuang, Igor Babuschkin, Igor Macedo Quintanilha, Ilya Edrenkin, @ironhead, Jakub Kolodziejczyk, Jennifer Guo, Jihun Choi, Jonas Rauber, Josh Bleecher Snyder, @jpangburn, Jules Gagnon-Marchand, Karen Brems, @kborer, Kirill Bobyrev, Laurent Mazare, Longqi Yang, Malith Yapa, Maniteja Nandana, Martin Englund, Matthias Winkelmann, @mecab, Mu-Ik Jeon, Nand Dalal, Niels Ole Salscheider, Nikhil Mishra, Park Jiin, Pieter De Rijk, @raix852, Ritwik Gupta, Sahil Sharma, Sangheum Hwang, @SergejsRk, Shinichiro Hamaji, Simon Denel, @Steve, @suiyuan2009, Tiago Jorge, Tijmen Tieleman, @tvn, @tyfkda, Wang Yang, Wei-Ting Kuo, Wenjian Huang, Yan Chen, @YenChenLin, Yuan (Terry) Tang, Yuncheng Li, Yunfeng Wang, Zack Polizzi, @zhongzyd, Ziming Dong, @perhapszzy
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
tf.contrib.slim
This release contains contributions from many people at Google, as well as:
Alex Rothberg, Andrew Royer, Austin Marshall, @BlackCoal, Bob Adolf, Brian Diesel, Charles-Emmanuel Dias, @chemelnucfin, Chris Lesniewski, Daeyun Shin, Daniel Rodriguez, Danijar Hafner, Darcy Liu, Kristinn R. Thórisson, Daniel Castro, Dmitry Savintsev, Kashif Rasul, Dylan Paiton, Emmanuel T. Odeke, Ernest Grzybowski, Gavin Sherry, Gideon Dresdner, Gregory King, Harold Cooper, @heinzbeinz, Henry Saputra, Huarong Huo, Huazuo Gao, Igor Babuschkin, Igor Macedo Quintanilha, Ivan Ukhov, James Fysh, Jan Wilken Dörrie, Jihun Choi, Johnny Lim, Jonathan Raiman, Justin Francis, @lilac, Li Yi, Marc Khoury, Marco Marchesi, Max Melnick, Micael Carvalho, @mikowals, Mostafa Gazar, Nico Galoppo, Nishant Agrawal, Petr Janda, Yuncheng Li, @raix852, Robert Rose, @Robin-des-Bois, Rohit Girdhar, Sam Abrahams, satok16, Sergey Kishchenko, Sharkd Tu, @shotat, Siddharth Agrawal, Simon Denel, @sono-bfio, SunYeop Lee, Thijs Vogels, @tobegit3hub, @Undo1, Wang Yang, Wenjian Huang, Yaroslav Bulatov, Yuan Tang, Yunfeng Wang, Ziming Dong
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
The RNN API is finally "official" (see, e.g., `tf.nn.dynamic_rnn`, `tf.nn.rnn`, and the classes in `tf.nn.rnn_cell`).
`tf.nn.moments()` now accepts a `shift` argument. Shifting by a good estimate of the mean improves numerical stability. Also changes the behavior of the `shift` argument to `tf.nn.sufficient_statistics()`.
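The shift trick rests on the identity Var(x) = E[(x - s)^2] - (E[x - s])^2, which holds for any shift s but is numerically far better behaved when s is near the mean. A plain-Python sketch (illustrative only; `moments` is a hypothetical helper, not TensorFlow's actual kernel):

```python
# Illustrative shifted-moments computation, not TensorFlow's kernel.
def moments(xs, shift=0.0):
    """Return (mean, variance) via the shifted one-pass formula."""
    n = len(xs)
    m = sum(x - shift for x in xs) / n           # E[x - s]
    sq = sum((x - shift) ** 2 for x in xs) / n   # E[(x - s)^2]
    return m + shift, sq - m * m                 # mean, variance

xs = [1e9 + 1.0, 1e9 + 2.0, 1e9 + 3.0]
naive_mean, naive_var = moments(xs, shift=0.0)      # catastrophic cancellation
good_mean, good_var = moments(xs, shift=1e9 + 2.0)  # shift near the mean

print(good_var)  # close to 2/3, the true variance of [1, 2, 3]
```

With no shift, squaring values near 1e9 loses the low-order bits that carry the entire variance; shifting first keeps the intermediate quantities small.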
This release contains contributions from many people at Google, as well as:
Aaron Schumacher, Aidan Dang, Akihiko ITOH, Aki Sukegawa, Arbit Chen, Aziz Alto, Danijar Hafner, Erik Erwitt, Fabrizio Milo, Felix Maximilian Möller, Henry Saputra, Sung Kim, Igor Babuschkin, Jan Zikes, Jeremy Barnes, Jesper Steen Møller, Johannes Mayer, Justin Harris, Kashif Rasul, Kevin Robinson, Loo Rong Jie, Lucas Moura, Łukasz Bieniasz-Krzywiec, Mario Cho, Maxim Grechkin, Michael Heilman, Mostafa Rahmani, Mourad Mourafiq, @ninotoshi, Orion Reblitz-Richardson, Yuncheng Li, @raoqiyu, Robert DiPietro, Sam Abrahams, Sebastian Raschka, Siddharth Agrawal, @snakecharmer1024, Stephen Roller, Sung Kim, SunYeop Lee, Thijs Vogels, Till Hoffmann, Victor Melo, Ville Kallioniemi, Waleed Abdulla, Wenjian Huang, Yaroslav Bulatov, Yeison Rodriguez, Yuan Tang, Yuxin Wu, @zhongzyd, Ziming Dong, Zohar Jackson
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
`contrib/learn`
`contrib/linear_optimizer`
`contrib/tensor_forest`
`contrib/ctc`
`half` data type
(`contrib/`)
`TENSORFLOW_USE_EIGEN_THREADPOOL` define
`bool`-strictness: Tensors have to be explicitly compared to `None`.
`tf.while_loop` (deprecated `control_flow_ops.While`)
This release contains contributions from many people at Google, as well as:
Abhinav Upadhyay, Aggelos Avgerinos, Alan Wu, Alexander G. de G. Matthews, Aleksandr Yahnev, @amchercashin, Andy Kitchen, Aurelien Geron, Awni Hannun, @BanditCat, Bas Veeling, Cameron Chen, @cg31, Cheng-Lung Sung, Christopher Bonnett, Dan Becker, Dan Van Boxel, Daniel Golden, Danijar Hafner, Danny Goodman, Dave Decker, David Dao, David Kretch, Dongjoon Hyun, Dustin Dorroh, @e-lin, Eurico Doirado, Erik Erwitt, Fabrizio Milo, @gaohuazuo, Iblis Lin, Igor Babuschkin, Isaac Hodes, Isaac Turner, Iván Vallés, J Yegerlehner, Jack Zhang, James Wexler, Jan Zikes, Jay Young, Jeff Hodges, @jmtatsch, Johnny Lim, Jonas Meinertz Hansen, Kanit Wongsuphasawat, Kashif Rasul, Ken Shirriff, Kenneth Mitchner, Kenta Yonekura, Konrad Magnusson, Konstantin Lopuhin, @lahwran, @lekaha, @liyongsea, Lucas Adams, @makseq, Mandeep Singh, @manipopopo, Mark Amery, Memo Akten, Michael Heilman, Michael Peteuil, Nathan Daly, Nicolas Fauchereau, @ninotoshi, Olav Nymoen, @panmari, @papelita1234, Pedro Lopes, Pranav Sailesh Mani, RJ Ryan, Rob Culliton, Robert DiPietro, @ronrest, Sam Abrahams, Sarath Shekkizhar, Scott Graham, Sebastian Raschka, Sung Kim, Surya Bhupatiraju, Syed Ahmed, Till Hoffmann, @timsl, @urimend, @vesnica, Vlad Frolov, Vlad Zagorodniy, Wei-Ting Kuo, Wenjian Huang, William Dmitri Breaden Madden, Wladimir Schmidt, Yuan Tang, Yuwen Yan, Yuxin Wu, Yuya Kusakabe, @zhongzyd, @znah.
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
`contrib/` directory for unsupported or experimental features, including higher level `layers` module
`MetaGraphDef` which makes it easier to save graphs with metadata
`GraphDef`s to ensure compatibility
`BUILD` files and cleaned up C++ headers
(`*fft`, `*_matrix_solve`)
`AdjustContrast` kernel deprecated, new kernel `AdjustContrastv2` takes and outputs float only. `adjust_contrast` now takes all data types.
`adjust_brightness`'s `delta` argument is now always assumed to be in `[0,1]` (as is the norm for images in floating point formats), independent of the data type of the input image.
The image processing ops do not take `min` and `max` inputs any more; casting safety is handled by `saturate_cast`, which makes sure over- and underflows are handled before casting to data types with smaller ranges.
`IsLegacyScalar` and `IsLegacyVector` are now gone from `TensorShapeUtils` since TensorFlow is scalar strict within Google (for example, the shape argument to `tf.reshape` can't be a scalar anymore). The open source release was already scalar strict, so outside Google `IsScalar` and `IsVector` are exact replacements.
The following files have been moved or removed from `tensorflow/core/public/`:
`env.h` -> `../platform/env.h`
`status.h` -> `../lib/core/status.h`
`tensor.h` -> `../framework/tensor.h`
`tensor_shape.h` -> `../framework/tensor_shape.h`
`partial_tensor_shape.h` -> `../framework/partial_tensor_shape.h`
`tensorflow_server.h` deleted
`TensorShape::ShortDebugString` has been renamed to `DebugString`, and the previous `DebugString` behavior is gone (it was needlessly verbose and produced a confusing empty string for scalars).
`GraphOptions.skip_common_subexpression_elimination` has been removed. All graph optimizer options are now specified via `GraphOptions.OptimizerOptions`.
`ASSERT_OK` / `EXPECT_OK` macros conflicted with external projects, so they were renamed `TF_ASSERT_OK`, `TF_EXPECT_OK`. The existing macros are currently maintained for short-term compatibility but will be removed.
`nn.rnn` and the various `nn.seq2seq` methods now return just the final state instead of the list of all states.
`tf.scatter_update` now no longer guarantees that lexicographically largest index be used for update when duplicate entries exist.
`tf.image.random_crop(image, [height, width])` is now `tf.random_crop(image, [height, width, depth])`, and `tf.random_crop` works for any rank (not just 3-D images). The C++ `RandomCrop` op has been replaced with pure Python.
Renamed `tf.test.GetTempDir` and `tf.test.IsBuiltWithCuda` to `tf.test.get_temp_dir` and `tf.test.is_built_with_cuda` for PEP-8 compatibility.
`parse_example`'s interface has changed, the old interface is accessible in `legacy_parse_example` (same for related functions).
`Variable`s are not added to the same collection several times even if a list with duplicates is passed to the constructor.
The `list` member of `AttrValue` is no longer set in constructed `GraphDef` messages for empty lists. The serialization of some graphs will change, but the change is both forwards and backwards compatible. It will break tests that compare a generated `GraphDef` to a golden serialized `GraphDef` (which is discouraged).
This release contains contributions from many people at Google, as well as:
Akiomi Kamakura, Alex Vig, Alexander Rosenberg Johansen, Andre Cruz, Arun Ahuja, Bart Coppens, Bernardo Pires, Carl Vondrick, Cesar Salgado, Chen Yu, Christian Jauvin, Damien Aymeric, Dan Vanderkam, Denny Britz, Dongjoon Hyun, Eren Güven, Erik Erwitt, Fabrizio Milo, G. Hussain Chinoy, Jim Fleming, Joao Felipe Santos, Jonas Meinertz Hansen, Joshi Rekha, Julian Viereck, Keiji Ariyama, Kenton Lee, Krishna Sankar, Kristina Chodorow, Linchao Zhu, Lukas Krecan, Mark Borgerding, Mark Daoust, Moussa Taifi, Nathan Howell, Naveen Sundar Govindarajulu, Nick Sweeting, Niklas Riekenbrauck, Olivier Grisel, Patrick Christ, Povilas Liubauskas, Rainer Wasserfuhr, Romain Thouvenin, Sagan Bolliger, Sam Abrahams, Taehoon Kim, Timothy J Laurent, Vlad Zavidovych, Yangqing Jia, Yi-Lin Juang, Yuxin Wu, Zachary Lipton, Zero Chen, Alan Wu, @brchiu, @emmjaykay, @jalammar, @Mandar-Shinde, @nsipplswezey, @ninotoshi, @panmari, @prolearner and @rizzomichaelg.
We are also grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions.
Python 3.3+ support via changes to python codebase and ability to specify python version via ./configure.
Some improvements to GPU performance and memory usage: convnet benchmarks roughly equivalent with native cudnn v2 performance. Improvements mostly due to moving to 32-bit indices, faster shuffling kernels. More improvements to come in later releases.
Lots of fixes to documentation and tutorials, many contributed by the public.
271 closed issues on github issues.
`tf.nn.fixed_unigram_candidate_sampler` changed its default 'distortion' attribute from 0.0 to 1.0. This was a bug in the original release that is now fixed.
Initial release of TensorFlow.