Releases: keras-team/keras
Keras 2.3.1
Keras 2.3.1 is a minor bug-fix release. In particular, it fixes an issue with using Keras models across multiple threads.
Changes
- Bug fixes
- Documentation fixes
- No API changes
- No breaking changes
Keras 2.3.0
Keras 2.3.0 is the first release of multi-backend Keras that supports TensorFlow 2.0. It maintains compatibility with TensorFlow 1.14, 1.13, as well as Theano and CNTK.
This release brings the API in sync with the tf.keras API as of TensorFlow 2.0. Note, however, that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use tf.keras.
This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0. It implements the same Keras 2.3.0 API (so switching should be as easy as changing the Keras import statements), but it has many advantages for TensorFlow users, such as support for eager execution, distribution, TPU training, and generally far better integration between low-level TensorFlow and high-level concepts like Layer and Model. It is also better maintained.
Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported.
API changes
- Add `size(x)` to backend API.
- `add_metric` method added to Layer / Model (used in a similar way as `add_loss`, but for metrics), as well as the `metrics` property.
- Variables set as attributes of a Layer are now tracked in `layer.weights` (including `layer.trainable_weights` or `layer.non_trainable_weights` as appropriate).
- Layers set as attributes of a Layer are now tracked (so the weights/metrics/losses/etc. of a sublayer are tracked by parent layers). This behavior already existed for Model specifically and is now extended to all Layer subclasses.
- Introduce class-based losses (inheriting from `Loss` base class). This enables losses to be parameterized via constructor arguments. Loss classes added: `MeanSquaredError`, `MeanAbsoluteError`, `MeanAbsolutePercentageError`, `MeanSquaredLogarithmicError`, `BinaryCrossentropy`, `CategoricalCrossentropy`, `SparseCategoricalCrossentropy`, `Hinge`, `SquaredHinge`, `CategoricalHinge`, `Poisson`, `LogCosh`, `KLDivergence`, `Huber`.
- Introduce class-based metrics (inheriting from `Metric` base class). This enables metrics to be stateful (e.g. required to support AUC) and to be parameterized via constructor arguments. Metric classes added: `Accuracy`, `MeanSquaredError`, `Hinge`, `CategoricalHinge`, `SquaredHinge`, `FalsePositives`, `TruePositives`, `FalseNegatives`, `TrueNegatives`, `BinaryAccuracy`, `CategoricalAccuracy`, `TopKCategoricalAccuracy`, `LogCoshError`, `Poisson`, `KLDivergence`, `CosineSimilarity`, `MeanAbsoluteError`, `MeanAbsolutePercentageError`, `MeanSquaredLogarithmicError`, `RootMeanSquaredError`, `BinaryCrossentropy`, `CategoricalCrossentropy`, `Precision`, `Recall`, `AUC`, `SparseCategoricalAccuracy`, `SparseTopKCategoricalAccuracy`, `SparseCategoricalCrossentropy`.
- Add `reset_metrics` argument to `train_on_batch` and `test_on_batch`. Set this to False to maintain metric state across different batches when writing lower-level training/evaluation loops. If True (the default), the metric value reported as output of the method call will be the value for the current batch only.
- Add `model.reset_metrics()` method to Model. Use this at the start of an epoch to clear metric state when writing lower-level training/evaluation loops.
- Rename `lr` to `learning_rate` for all optimizers.
- Deprecate argument `decay` for all optimizers. For learning rate decay, use `LearningRateSchedule` objects in tf.keras.
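To illustrate the stateful pattern behind the new class-based metrics, here is a minimal pure-Python sketch (not the actual Keras implementation) of a `BinaryAccuracy`-style metric that accumulates state across batches, mirroring the `update_state` / `result` / `reset_states` interface:

```python
class RunningBinaryAccuracy:
    """Sketch of the stateful-metric pattern: accumulate counts across batches."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.correct = 0
        self.total = 0

    def update_state(self, y_true, y_pred):
        # Count correct predictions in this batch and fold them into running totals.
        for t, p in zip(y_true, y_pred):
            self.correct += int((p >= self.threshold) == bool(t))
            self.total += 1

    def result(self):
        return self.correct / self.total if self.total else 0.0

    def reset_states(self):
        self.correct = 0
        self.total = 0

metric = RunningBinaryAccuracy()
metric.update_state([1, 0], [0.9, 0.8])   # batch 1: one correct
metric.update_state([1, 1], [0.7, 0.6])   # batch 2: two correct
print(metric.result())  # accuracy over both batches: 3/4 = 0.75
```

In Keras itself you would instead pass instances such as `keras.metrics.BinaryAccuracy()` to `model.compile(metrics=[...])`, and the framework calls these methods for you.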
Breaking changes
- TensorBoard callback:
  - `batch_size` argument is deprecated (ignored) when used with TF 2.0
  - `write_grads` is deprecated (ignored) when used with TF 2.0
  - `embeddings_freq`, `embeddings_layer_names`, `embeddings_metadata`, `embeddings_data` are deprecated (ignored) when used with TF 2.0
- Change loss aggregation mechanism to sum over batch size. This may change reported loss values if you were using sample weighting or class weighting. You can achieve the old behavior by making sure your sample weights sum to 1 for each batch.
- Metrics and losses are now reported under the exact name specified by the user (e.g. if you pass `metrics=['acc']`, your metric will be reported under the string "acc", not "accuracy"; conversely, `metrics=['accuracy']` will be reported under the string "accuracy").
- Change default recurrent activation to `sigmoid` (from `hard_sigmoid`) in all RNN layers.
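Regarding the loss aggregation change above: if you relied on the old weighted-average behavior, normalizing each batch's sample weights to sum to 1 restores it. A small numpy sketch (variable names are illustrative, not a Keras API):

```python
import numpy as np

def normalize_sample_weights(sample_weights):
    # Scale a batch's weights so they sum to 1, recovering the pre-2.3.0
    # weighted-average loss from the new sum-over-batch-size aggregation.
    sample_weights = np.asarray(sample_weights, dtype=float)
    return sample_weights / sample_weights.sum()

per_sample_loss = np.array([0.2, 0.4, 0.6, 0.8])
raw_weights = np.array([1.0, 1.0, 1.0, 1.0])

new_style = np.sum(per_sample_loss * normalize_sample_weights(raw_weights))
old_style = np.average(per_sample_loss, weights=raw_weights)
print(new_style, old_style)  # both 0.5
```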
Keras 2.2.5
Keras 2.2.5 is the last release of Keras that implements the 2.2.* API. It is the last release to only support TensorFlow 1 (as well as Theano and CNTK).
The next release will be 2.3.0, which makes significant API changes and adds support for TensorFlow 2.0. The 2.3.0 release will be the last major release of multi-backend Keras. Multi-backend Keras is superseded by tf.keras.
At this time, we recommend that Keras users who use multi-backend Keras with the TensorFlow backend switch to tf.keras in TensorFlow 2.0. tf.keras is better maintained and has better integration with TensorFlow features.
API Changes
- Add new Applications: `ResNet101`, `ResNet152`, `ResNet50V2`, `ResNet101V2`, `ResNet152V2`.
- Callbacks: enable callbacks to be passed in `evaluate` and `predict`.
  - Add `callbacks` argument (list of callback instances) in `evaluate` and `predict`.
  - Add callback methods `on_train_batch_begin`, `on_train_batch_end`, `on_test_batch_begin`, `on_test_batch_end`, `on_predict_batch_begin`, `on_predict_batch_end`, as well as `on_test_begin`, `on_test_end`, `on_predict_begin`, `on_predict_end`. Methods `on_batch_begin` and `on_batch_end` are now aliases for `on_train_batch_begin` and `on_train_batch_end`.
- Allow file pointers in `save_model` and `load_model` (in place of the filepath).
- Add `name` argument in Sequential constructor.
- Add `validation_freq` argument in `fit`, controlling the frequency of validation (e.g. setting `validation_freq=3` would run validation every 3 epochs).
- Allow Python generators (or Keras `Sequence` objects) to be passed in `fit`, `evaluate`, and `predict`, instead of having to use `*_generator` methods.
  - Add generator-related arguments `max_queue_size`, `workers`, `use_multiprocessing` to these methods.
- Add `dilation_rate` argument in layer `DepthwiseConv2D`.
- MaxNorm constraint: rename argument `m` to `max_value`.
- Add `dtype` argument in base layer (default dtype for layer's weights).
- Add Google Cloud Storage support for `model.save_weights` and `model.load_weights`.
- Add JSON-serialization to the `Tokenizer` class.
- Add `H5Dict` and `model_to_dot` to utils.
- Allow default Keras path to be specified at startup via environment variable `KERAS_HOME`.
- Add arguments `expand_nested`, `dpi` to `plot_model`.
- Add `update_sub`, `stack`, `cumsum`, `cumprod`, `foldl`, `foldr` to CNTK backend.
- Add `merge_repeated` argument to `ctc_decode` in TensorFlow backend.
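As a sketch of what `validation_freq` controls (illustrative logic only, not the Keras internals): with an integer `validation_freq`, validation runs on every epoch whose 1-based index is a multiple of that value.

```python
def validation_epochs(epochs, validation_freq):
    # Epochs (1-based) on which validation would run, mimicking fit()'s
    # behavior when validation_freq is an integer.
    return [epoch for epoch in range(1, epochs + 1)
            if epoch % validation_freq == 0]

print(validation_epochs(10, 3))  # [3, 6, 9]
```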
Thanks to the 89 committers who contributed code to this release!
Keras 2.2.4
This is a bugfix release, addressing two issues:
- Ability to save a model when a file with the same name already exists.
- Issue with loading legacy config files for the `Sequential` model.
See here for the changelog since 2.2.2.
Keras 2.2.3
Areas of improvement
- API completeness & usability improvements
- Bug fixes
- Documentation improvements
API changes
- Keras models can now be safely pickled.
- Consolidate the functionality of the activation layers `ThresholdedReLU` and `LeakyReLU` into the `ReLU` layer.
- As a result, the `ReLU` layer now takes new arguments `negative_slope` and `threshold`, and the `relu` function in the backend takes a new `threshold` argument.
- Add `update_freq` argument in `TensorBoard` callback, controlling how often to write TensorBoard logs.
- Add the `exponential` function to `keras.activations`.
- Add `data_format` argument in all 4 `Pooling1D` layers.
- Add `interpolation` argument in `UpSampling2D` layer and in `resize_images` backend function, supporting modes `"nearest"` (previous behavior, and new default) and `"bilinear"` (new).
- Add `dilation_rate` argument in `Conv2DTranspose` layer and in `conv2d_transpose` backend function.
- The `LearningRateScheduler` now receives the `lr` key as part of the `logs` argument in `on_epoch_end` (current value of the learning rate).
- Make `GlobalAveragePooling1D` layer support masking.
- The `filepath` argument of `save_model` and `model.save()` can now be an `h5py.Group` instance.
- Add argument `restore_best_weights` to `EarlyStopping` callback (optionally reverts to the weights that obtained the highest monitored score value).
- Add `dtype` argument to `keras.utils.to_categorical`.
- Support `run_options` and `run_metadata` as optional session arguments in `model.compile()` for the TensorFlow backend.
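To make the new `ReLU` arguments concrete, here is a numpy sketch of the generalized activation (mirroring the backend `relu` semantics; this is an illustration, not the Keras implementation): values at or above `threshold` pass through, optionally clipped at `max_value`, and values below it are scaled by the negative slope.

```python
import numpy as np

def relu(x, negative_slope=0.0, max_value=None, threshold=0.0):
    # Generalized ReLU: identity above `threshold`, a scaled slope below it,
    # optionally clipped at `max_value`.
    x = np.asarray(x, dtype=float)
    out = np.where(x >= threshold, x, negative_slope * (x - threshold))
    if max_value is not None:
        out = np.minimum(out, max_value)
    return out

print(relu([-2.0, 0.5, 3.0], negative_slope=0.1, max_value=2.0, threshold=0.5))
```

With all defaults this reduces to the ordinary `max(x, 0)` ReLU.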
Breaking changes
- Modify the return value of `Sequential.get_config()`. Previously, the return value was a list of the config dictionaries of the layers of the model. Now, the return value is a dictionary with keys `layers`, `name`, and an optional key `build_input_shape`. The old config is equivalent to `new_config['layers']`. This makes the output of `get_config` consistent across all model classes.
Credits
Thanks to our 38 contributors whose commits are featured in this release:
@BertrandDechoux, @ChrisGll, @Dref360, @JamesHinshelwood, @MarcoAndreaBuchmann, @ageron, @alfasst, @blue-atom, @chasebrignac, @cshubhamrao, @danFromTelAviv, @datumbox, @farizrahman4u, @fchollet, @fuzzythecat, @gabrieldemarmiesse, @hadifar, @heytitle, @hsgkim, @jankrepl, @joelthchao, @knightXun, @kouml, @linjinjin123, @lvapeab, @nikoladze, @ozabluda, @qlzh727, @roywei, @rvinas, @sriyogesh94, @tacaswell, @taehoonlee, @tedyu, @xuhdev, @yanboliang, @yongzx, @yuanxiaosc
Keras 2.2.2
This is a bugfix release, fixing a significant bug in `multi_gpu_model`.
For changes since version 2.2.0, see release notes for Keras 2.2.1.
Keras 2.2.1
Areas of improvement
- Bug fixes
- Performance improvements
- Documentation improvements
API changes
- Add `output_padding` argument in `Conv2DTranspose` (to override default padding behavior).
- Enable automatic shape inference when using Lambda layers with the CNTK backend.
Breaking changes
No breaking changes recorded.
Credits
Thanks to our 33 contributors whose commits are featured in this release:
@Ajk4, @Anner-deJong, @Atcold, @Dref360, @EyeBool, @ageron, @briannemsick, @cclauss, @davidtvs, @dstine, @eTomate, @ebatuhankaynak, @eliberis, @farizrahman4u, @fchollet, @fuzzythecat, @gabrieldemarmiesse, @jlopezpena, @kamil-kaczmarek, @kbattocchi, @kmader, @kvechera, @maxpumperla, @mkaze, @pavithrasv, @rvinas, @sachinruk, @seriousmac, @soumyac1999, @taehoonlee, @yanboliang, @yongzx, @yuyang-huang
Keras 2.2.0
Areas of improvement
- New model definition API: `Model` subclassing.
- New input mode: ability to call models on TensorFlow tensors directly (TensorFlow backend only).
- Improve feature coverage of Keras with the Theano and CNTK backends.
- Bug fixes and performance improvements.
- Large refactors improving code structure, code health, and reducing test time. In particular:
- The Keras engine now follows a much more modular structure.
- The `Sequential` model is now a plain subclass of `Model`.
- The modules `applications` and `preprocessing` are now externalized to their own repositories (keras-applications and keras-preprocessing).
API changes
- Add `Model` subclassing API (details below).
- Allow symbolic tensors to be fed to models, with TensorFlow backend (details below).
- Enable CNTK and Theano support for layers `SeparableConv1D`, `SeparableConv2D`, as well as backend methods `separable_conv1d` and `separable_conv2d` (previously only available for TensorFlow).
- Enable CNTK and Theano support for applications `Xception` and `MobileNet` (previously only available for TensorFlow).
- Add `MobileNetV2` application (available for all backends).
- Enable loading external (non built-in) backends by changing your `~/.keras/keras.json` configuration file (e.g. PlaidML backend).
- Add `sample_weight` in `ImageDataGenerator`.
- Add `preprocessing.image.save_img` utility to write images to disk.
- Default `Flatten` layer's `data_format` argument to `None` (which defaults to global Keras config).
- `Sequential` is now a plain subclass of `Model`. The attribute `sequential.model` is deprecated.
- Add `baseline` argument in `EarlyStopping` (stop training if a given baseline isn't reached).
- Add `data_format` argument to `Conv1D`.
- Make the model returned by `multi_gpu_model` serializable.
- Support input masking in `TimeDistributed` layer.
- Add an `advanced_activation` layer `ReLU`, making the ReLU activation easier to configure while retaining easy serialization capabilities.
- Add `axis=-1` argument in backend crossentropy functions specifying the class prediction axis in the input tensor.
New model definition API: Model subclassing
In addition to the Sequential API and the functional `Model` API, you may now define models by subclassing the `Model` class and writing your own forward pass in a `call` method:
```python
import keras

class SimpleMLP(keras.Model):

    def __init__(self, use_bn=False, use_dp=False, num_classes=10):
        super(SimpleMLP, self).__init__(name='mlp')
        self.use_bn = use_bn
        self.use_dp = use_dp
        self.num_classes = num_classes
        self.dense1 = keras.layers.Dense(32, activation='relu')
        self.dense2 = keras.layers.Dense(num_classes, activation='softmax')
        if self.use_dp:
            self.dp = keras.layers.Dropout(0.5)
        if self.use_bn:
            self.bn = keras.layers.BatchNormalization(axis=-1)

    def call(self, inputs):
        x = self.dense1(inputs)
        if self.use_dp:
            x = self.dp(x)
        if self.use_bn:
            x = self.bn(x)
        return self.dense2(x)

model = SimpleMLP()
model.compile(...)
model.fit(...)
```

Layers are defined in `__init__(self, ...)`, and the forward pass is specified in `call(self, inputs)`. In `call`, you may specify custom losses by calling `self.add_loss(loss_tensor)` (like you would in a custom layer).
New input mode: symbolic TensorFlow tensors
With Keras 2.2.0 and TensorFlow 1.8 or higher, you may fit, evaluate and predict using symbolic TensorFlow tensors (that are expected to yield data indefinitely). The API is similar to that of `fit_generator` and the other generator methods:
```python
iterator = training_dataset.make_one_shot_iterator()
x, y = iterator.get_next()
model.fit(x, y, steps_per_epoch=100, epochs=10)

iterator = validation_dataset.make_one_shot_iterator()
x, y = iterator.get_next()
model.evaluate(x, y, steps=50)
```

This is achieved by dynamically rewiring the TensorFlow graph to feed the input tensors to the existing model placeholders. There is no performance loss compared to building your model on top of the input tensors in the first place.
Breaking changes
- Remove legacy `Merge` layers and associated functionality (remnant of Keras 0), which were deprecated in May 2016, with full removal initially scheduled for August 2017. Models from the Keras 0 API using these layers cannot be loaded with Keras 2.2.0 and above.
- The `truncated_normal` base initializer now returns values that are scaled by ~0.9 (resulting in correct variance value after truncation). This has a small chance of affecting initial convergence behavior on some models.
Credits
Thanks to our 46 contributors whose commits are featured in this release:
@ASvyatkovskiy, @AmirAlavi, @Anirudh-Swaminathan, @davidariel, @Dref360, @JonathanCMitchell, @KuzMenachem, @PeterChe1990, @Saharkakavand, @StefanoCappellini, @ageron, @askskro, @bileschi, @bonlime, @bottydim, @brge17, @briannemsick, @bzamecnik, @christian-lanius, @clemens-tolboom, @dschwertfeger, @dynamicwebpaige, @farizrahman4u, @fchollet, @fuzzythecat, @ghostplant, @giuscri, @huyu398, @jnphilipp, @masstomato, @morenoh149, @mrTsjolder, @nittanycolonial, @r-kellerm, @reidjohnson, @roatienza, @Sbebo, @stevemurr, @taehoonlee, @tiferet, @tkoivisto, @tzerrell, @vkk800, @wangkechn, @wouterdobbels, @zwang36wang
Keras 2.1.6
Areas of improvement
- Bug fixes
- Documentation improvements
- Minor usability improvements
API changes
- In callback `ReduceLROnPlateau`, rename `epsilon` argument to `min_delta` (backwards-compatible).
- In callback `RemoteMonitor`, add argument `send_as_json`.
- In backend `softmax` function, add argument `axis`.
- In `Flatten` layer, add argument `data_format`.
- In `save_model` (`Model.save`) and `load_model` functions, allow the `filepath` argument to be a `h5py.File` object.
- In `Model.evaluate_generator`, add `verbose` argument.
- In `Bidirectional` wrapper layer, add `constants` argument.
- In `multi_gpu_model` function, add arguments `cpu_merge` and `cpu_relocation` (controlling whether to force the template model's weights to be on CPU, and whether to operate merge operations on CPU or GPU).
- In `ImageDataGenerator`, allow argument `width_shift_range` to be int or 1D array-like.
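To illustrate what `min_delta` means in `ReduceLROnPlateau`: the monitored quantity must improve on the best value seen so far by more than `min_delta` to count as progress. A hypothetical sketch of that comparison for a loss-like, 'min'-mode monitor (illustrative only, not the actual callback code):

```python
def is_improvement(current, best, min_delta=1e-4):
    # 'min' mode: current must undercut the best seen value by more than min_delta.
    return current < best - min_delta

print(is_improvement(0.4990, 0.5000))   # True: beats best by 0.001 > min_delta
print(is_improvement(0.49995, 0.5000))  # False: within the min_delta tolerance
```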
Breaking changes
This release does not include any known breaking changes.
Credits
Thanks to our 37 contributors whose commits are featured in this release:
@Dref360, @FirefoxMetzger, @Naereen, @NiharG15, @StefanoCappellini, @WindQAQ, @dmadeka, @edrogers, @eltronix, @farizrahman4u, @fchollet, @gabrieldemarmiesse, @ghostplant, @jedrekfulara, @jlherren, @joeyearsley, @johanahlqvist, @johnyf, @jsaporta, @kalkun, @lucasdavid, @masstomato, @mrlzla, @myutwo150, @nisargjhaveri, @obi1kenobi, @olegantonyan, @ozabluda, @pasky, @Planck35, @sotlampr, @souptc, @srjoglekar246, @stamate, @taehoonlee, @vkk800, @xuhdev
Keras 2.1.5
Areas of improvement
- Bug fixes.
- New APIs: sequence generation API `TimeseriesGenerator`, and new layer `DepthwiseConv2D`.
- Unit tests / CI improvements.
- Documentation improvements.
API changes
- Add new sequence generation API `keras.preprocessing.sequence.TimeseriesGenerator`.
- Add new convolutional layer `keras.layers.DepthwiseConv2D`.
- Allow weights from `keras.layers.CuDNNLSTM` to be loaded into a `keras.layers.LSTM` layer (e.g. for inference on CPU).
- Add `brightness_range` data augmentation argument in `keras.preprocessing.image.ImageDataGenerator`.
- Add `validation_split` API in `keras.preprocessing.image.ImageDataGenerator`. You can pass `validation_split` to the constructor (a float), then select between training/validation subsets by passing the argument `subset='validation'` or `subset='training'` to the methods `flow` and `flow_from_directory`.
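As a rough numpy sketch of the sliding-window samples that `TimeseriesGenerator` produces (for the stride-1, `sampling_rate=1` case; this illustrates the windowing only, not the actual generator class):

```python
import numpy as np

def timeseries_samples(data, targets, length):
    # Each sample is a window of `length` consecutive steps; its target is
    # the value immediately following the window.
    x = np.stack([data[i:i + length] for i in range(len(data) - length)])
    y = np.asarray(targets[length:])
    return x, y

data = np.arange(6)
x, y = timeseries_samples(data, data, length=2)
print(x.tolist())  # [[0, 1], [1, 2], [2, 3], [3, 4]]
print(y.tolist())  # [2, 3, 4, 5]
```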
Breaking changes
- As a side effect of a refactor of `ConvLSTM2D` to a modular implementation, recurrent dropout support in Theano has been dropped for this layer.
Credits
Thanks to our 28 contributors whose commits are featured in this release:
@DomHudson, @Dref360, @VitamintK, @abrad1212, @ahundt, @bojone, @brainnoise, @bzamecnik, @caisq, @cbensimon, @davinnovation, @farizrahman4u, @fchollet, @gabrieldemarmiesse, @khosravipasha, @ksindi, @lenjoy, @masstomato, @mewwts, @ozabluda, @paulpister, @sandpiturtle, @saralajew, @srjoglekar246, @stefangeneralao, @taehoonlee, @tiangolo, @treszkai