@tensorflow/tfjs-layers
- Version 4.22.0
- Published
- 31.1 MB
- No dependencies
- Apache-2.0 AND MIT license
Install
npm i @tensorflow/tfjs-layers
yarn add @tensorflow/tfjs-layers
pnpm add @tensorflow/tfjs-layers
Overview
TensorFlow layers API in JavaScript
Index
Variables
Functions
Classes
LayersModel
- checkTrainableWeightsConsistency()
- className
- compile()
- dispose()
- evaluate()
- evaluateDataset()
- execute()
- fit()
- fitDataset()
- fitLoop()
- getDedupedMetricsNames()
- getNamedWeights()
- getTrainingConfig()
- getUserDefinedMetadata()
- history
- isOptimizerOwned
- isTraining
- loadTrainingConfig()
- loss
- lossFunctions
- makeTrainFunction()
- metrics
- metricsNames
- metricsTensors
- optimizer
- optimizer_
- predict()
- predictOnBatch()
- save()
- setUserDefinedMetadata()
- standardizeUserData()
- standardizeUserDataXY()
- stopTraining
- stopTraining_
- summary()
- trainOnBatch()
Interfaces
Type Aliases
Namespaces
layers
- activation()
- add()
- alphaDropout()
- average()
- averagePooling1d()
- averagePooling2d()
- averagePooling3d()
- avgPool1d()
- avgPool2d()
- avgPool3d()
- avgPooling1d()
- avgPooling2d()
- avgPooling3d()
- batchNormalization()
- bidirectional()
- categoryEncoding()
- centerCrop()
- concatenate()
- conv1d()
- conv2d()
- conv2dTranspose()
- conv3d()
- conv3dTranspose()
- convLstm2d()
- convLstm2dCell()
- cropping2D()
- dense()
- depthwiseConv2d()
- dot()
- dropout()
- elu()
- embedding()
- flatten()
- gaussianDropout()
- gaussianNoise()
- globalAveragePooling1d()
- globalAveragePooling2d()
- globalMaxPool1d
- globalMaxPool2d
- globalMaxPooling1d()
- globalMaxPooling2d()
- gru()
- gruCell()
- input()
- inputLayer()
- Layer
- layerNormalization()
- leakyReLU()
- lstm()
- lstmCell()
- masking()
- maximum()
- maxPool1d
- maxPool2d
- maxPooling1d()
- maxPooling2d()
- maxPooling3d()
- minimum()
- multiply()
- permute()
- prelu()
- randomWidth()
- reLU()
- repeatVector()
- rescaling()
- reshape()
- resizing()
- rnn()
- RNN
- RNNCell
- separableConv2d()
- simpleRNN()
- simpleRNNCell()
- softmax()
- spatialDropout1d()
- stackedRNNCells()
- thresholdedReLU()
- timeDistributed()
- upSampling2d()
- zeroPadding2d()
Variables
variable callbacks
const callbacks: { earlyStopping: typeof earlyStopping };
variable version_layers
const version_layers: string;
Functions
function input
input: (config: InputConfig) => SymbolicTensor;
Used to instantiate an input to a model as a `tf.SymbolicTensor`. Users should call the `input` factory function for consistency with other generator functions.

Example:

```js
// Defines a simple logistic regression model with 32 dimensional input
// and 3 dimensional output.
const x = tf.input({shape: [32]});
const y = tf.layers.dense({units: 3, activation: 'softmax'}).apply(x);
const model = tf.model({inputs: x, outputs: y});
model.predict(tf.ones([2, 32])).print();
```

Note: `input` is only necessary when using `model`. When using `sequential`, specify `inputShape` for the first layer or use `inputLayer` as the first layer.

{heading: 'Models', subheading: 'Inputs'}
function loadLayersModel
loadLayersModel: ( pathOrIOHandler: string | io.IOHandler, options?: io.LoadOptions) => Promise<LayersModel>;
Load a model composed of Layer objects, including its topology and optionally weights. See the Tutorial named "How to import a Keras Model" for usage examples.
This method is applicable to:
1. Models created with the `tf.layers.*`, `tf.sequential`, and `tf.model` APIs of TensorFlow.js and later saved with the `tf.LayersModel.save` method.
2. Models converted from Keras or TensorFlow tf.keras using the [tensorflowjs_converter](https://github.com/tensorflow/tfjs/tree/master/tfjs-converter).

This mode is *not* applicable to TensorFlow `SavedModel`s or their converted forms. For those models, use `tf.loadGraphModel`.

Example 1. Load a model from an HTTP server.

```js
const model = await tf.loadLayersModel(
    'https://storage.googleapis.com/tfjs-models/tfjs/iris_v1/model.json');
model.summary();
```

Example 2. Save `model`'s topology and weights to browser [local storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage); then load it back.

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
console.log('Prediction from original model:');
model.predict(tf.ones([1, 3])).print();

const saveResults = await model.save('localstorage://my-model-1');

const loadedModel = await tf.loadLayersModel('localstorage://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(tf.ones([1, 3])).print();
```

Example 3. Save `model`'s topology and weights to browser [IndexedDB](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API); then load it back.

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
console.log('Prediction from original model:');
model.predict(tf.ones([1, 3])).print();

const saveResults = await model.save('indexeddb://my-model-1');

const loadedModel = await tf.loadLayersModel('indexeddb://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(tf.ones([1, 3])).print();
```

Example 4. Load a model from user-selected files from HTML [file input elements](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/file).

```js
// Note: this code snippet will not work without the HTML elements in the
// page.
const jsonUpload = document.getElementById('json-upload');
const weightsUpload = document.getElementById('weights-upload');

const model = await tf.loadLayersModel(
    tf.io.browserFiles([jsonUpload.files[0], weightsUpload.files[0]]));
```

Parameter pathOrIOHandler

Can be either of two formats:
1. A string path to the `ModelAndWeightsConfig` JSON describing the model in the canonical TensorFlow.js format. For file:// (tfjs-node-only), http://, and https:// schemas, the path can be either absolute or relative. The content of the JSON file is assumed to be a JSON object with the following fields and values:
   - 'modelTopology': A JSON object that can be either of: 1. a model architecture JSON consistent with the format of the return value of `keras.Model.to_json()`, or 2. a full model JSON in the format of `keras.models.save_model()`.
   - 'weightsManifest': A TensorFlow.js weights manifest. See the Python converter function `save_model()` for more details. It is also assumed that model weights can be accessed from relative paths described by the `paths` fields in the weights manifest.
2. A `tf.io.IOHandler` object that loads model artifacts with its `load` method.

Parameter options

Optional configuration arguments for the model loading, including:
- `strict`: Require that the provided weights exactly match those required by the layers. Default: true. Passing false means that both extra weights and missing weights will be silently ignored.
- `onProgress`: A progress callback of the form: `(fraction: number) => void`. This callback can be used to monitor the model-loading process.

Returns

A `Promise` of `tf.LayersModel`, with the topology and weights loaded.

{heading: 'Models', subheading: 'Loading'}
function model
model: (args: ContainerArgs) => LayersModel;
A model is a data structure that consists of `Layers` and defines inputs and outputs.

The key difference between `tf.model` and `tf.sequential` is that `tf.model` is more generic, supporting an arbitrary graph (without cycles) of layers. `tf.sequential` is less generic and supports only a linear stack of layers.

When creating a `tf.LayersModel`, specify its input(s) and output(s). Layers are used to wire input(s) to output(s).

For example, the following code snippet defines a model consisting of two `dense` layers, with 10 and 4 units, respectively.

```js
// Define input, which has a size of 5 (not including batch dimension).
const input = tf.input({shape: [5]});

// First dense layer uses relu activation.
const denseLayer1 = tf.layers.dense({units: 10, activation: 'relu'});
// Second dense layer uses softmax activation.
const denseLayer2 = tf.layers.dense({units: 4, activation: 'softmax'});

// Obtain the output symbolic tensor by applying the layers on the input.
const output = denseLayer2.apply(denseLayer1.apply(input));

// Create the model based on the inputs.
const model = tf.model({inputs: input, outputs: output});

// The model can be used for training, evaluation and prediction.
// For example, the following line runs prediction with the model on
// some fake data.
model.predict(tf.ones([2, 5])).print();
```

See also: `tf.sequential`, `tf.loadLayersModel`.

{heading: 'Models', subheading: 'Creation'}
function registerCallbackConstructor
registerCallbackConstructor: ( verbosityLevel: number, callbackConstructor: BaseCallbackConstructor) => void;
function sequential
sequential: (config?: SequentialArgs) => Sequential;
Creates a `tf.Sequential` model. A sequential model is any model where the outputs of one layer are the inputs to the next layer, i.e. the model topology is a simple 'stack' of layers, with no branching or skipping.

This means that the first layer passed to a `tf.Sequential` model should have a defined input shape. What that means is that it should have received an `inputShape` or `batchInputShape` argument, or for some type of layers (recurrent, Dense...) an `inputDim` argument.

The key difference between `tf.model` and `tf.sequential` is that `tf.sequential` is less generic, supporting only a linear stack of layers. `tf.model` is more generic and supports an arbitrary graph (without cycles) of layers.

Examples:

```js
const model = tf.sequential();

// First layer must have an input shape defined.
model.add(tf.layers.dense({units: 32, inputShape: [50]}));
// Afterwards, TF.js does automatic shape inference.
model.add(tf.layers.dense({units: 4}));

// Inspect the inferred shape of the model's output, which equals
// `[null, 4]`. The 1st dimension is the undetermined batch dimension; the
// 2nd is the output size of the model's last layer.
console.log(JSON.stringify(model.outputs[0].shape));
```

It is also possible to specify a batch size (with potentially undetermined batch dimension, denoted by "null") for the first layer using the `batchInputShape` key. The following example is equivalent to the above:

```js
const model = tf.sequential();

// First layer must have a defined input shape.
model.add(tf.layers.dense({units: 32, batchInputShape: [null, 50]}));
// Afterwards, TF.js does automatic shape inference.
model.add(tf.layers.dense({units: 4}));

// Inspect the inferred shape of the model's output.
console.log(JSON.stringify(model.outputs[0].shape));
```

You can also use an `Array` of already-constructed `Layer`s to create a `tf.Sequential` model:

```js
const model = tf.sequential({
  layers: [tf.layers.dense({units: 32, inputShape: [50]}),
           tf.layers.dense({units: 4})]
});
console.log(JSON.stringify(model.outputs[0].shape));
```

{heading: 'Models', subheading: 'Creation'}
Classes
class Callback
abstract class Callback extends BaseCallback {}
class CallbackList
class CallbackList {}
Container abstracting a list of callbacks.
constructor
constructor(callbacks?: BaseCallback[], queueLength?: number);
Constructor of CallbackList.
Parameter callbacks
Array of `Callback` instances.

Parameter queueLength
Queue length for keeping running statistics over callback execution time.
property callbacks
callbacks: BaseCallback[];
property queueLength
queueLength: number;
method append
append: (callback: BaseCallback) => void;
method onBatchBegin
onBatchBegin: (batch: number, logs?: UnresolvedLogs) => Promise<void>;
Called right before processing a batch.
Parameter batch
Index of batch within the current epoch.
Parameter logs
Dictionary of logs.
method onBatchEnd
onBatchEnd: (batch: number, logs?: UnresolvedLogs) => Promise<void>;
Called at the end of a batch.
Parameter batch
Index of batch within the current epoch.
Parameter logs
Dictionary of logs.
method onEpochBegin
onEpochBegin: (epoch: number, logs?: UnresolvedLogs) => Promise<void>;
Called at the start of an epoch.
Parameter epoch
Index of epoch.
Parameter logs
Dictionary of logs.
method onEpochEnd
onEpochEnd: (epoch: number, logs?: UnresolvedLogs) => Promise<void>;
Called at the end of an epoch.
Parameter epoch
Index of epoch.
Parameter logs
Dictionary of logs.
method onTrainBegin
onTrainBegin: (logs?: UnresolvedLogs) => Promise<void>;
Called at the beginning of training.
Parameter logs
Dictionary of logs.
method onTrainEnd
onTrainEnd: (logs?: UnresolvedLogs) => Promise<void>;
Called at the end of training.
Parameter logs
Dictionary of logs.
method setModel
setModel: (model: Container) => void;
method setParams
setParams: (params: Params) => void;
class CustomCallback
class CustomCallback extends BaseCallback {}
Custom callback for training.
constructor
constructor(args: CustomCallbackArgs, yieldEvery?: YieldEveryOptions);
property batchBegin
protected readonly batchBegin: ( batch: number, logs?: Logs) => void | Promise<void>;
property batchEnd
protected readonly batchEnd: ( batch: number, logs?: Logs) => void | Promise<void>;
property epochBegin
protected readonly epochBegin: ( epoch: number, logs?: Logs) => void | Promise<void>;
property epochEnd
protected readonly epochEnd: ( epoch: number, logs?: Logs) => void | Promise<void>;
property nextFrameFunc
nextFrameFunc: Function;
property nowFunc
nowFunc: Function;
property trainBegin
protected readonly trainBegin: (logs?: Logs) => void | Promise<void>;
property trainEnd
protected readonly trainEnd: (logs?: Logs) => void | Promise<void>;
property yield
protected readonly yield: ( epoch: number, batch: number, logs: Logs) => void | Promise<void>;
method maybeWait
maybeWait: (epoch: number, batch: number, logs: UnresolvedLogs) => Promise<void>;
method onBatchBegin
onBatchBegin: (batch: number, logs?: UnresolvedLogs) => Promise<void>;
method onBatchEnd
onBatchEnd: (batch: number, logs?: UnresolvedLogs) => Promise<void>;
method onEpochBegin
onEpochBegin: (epoch: number, logs?: UnresolvedLogs) => Promise<void>;
method onEpochEnd
onEpochEnd: (epoch: number, logs?: UnresolvedLogs) => Promise<void>;
method onTrainBegin
onTrainBegin: (logs?: UnresolvedLogs) => Promise<void>;
method onTrainEnd
onTrainEnd: (logs?: UnresolvedLogs) => Promise<void>;
class EarlyStopping
class EarlyStopping extends Callback {}
A Callback that stops training when a monitored quantity has stopped improving.
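A minimal sketch of wiring early stopping into training via the `tf.callbacks.earlyStopping` factory; the data, `validationSplit`, and `patience` values below are illustrative assumptions:

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});

// Stop training when the validation loss has not improved for 3 epochs.
await model.fit(tf.ones([8, 10]), tf.ones([8, 1]), {
  epochs: 20,
  validationSplit: 0.25,
  callbacks: [tf.callbacks.earlyStopping({monitor: 'val_loss', patience: 3})]
});
```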
constructor
constructor(args?: EarlyStoppingCallbackArgs);
property baseline
protected readonly baseline: number;
property minDelta
protected readonly minDelta: number;
property mode
protected readonly mode: 'auto' | 'min' | 'max';
property monitor
protected readonly monitor: string;
property monitorFunc
protected monitorFunc: (currVal: number, prevVal: number) => boolean;
property patience
protected readonly patience: number;
property verbose
protected readonly verbose: number;
method onEpochEnd
onEpochEnd: (epoch: number, logs?: Logs) => Promise<void>;
method onTrainBegin
onTrainBegin: (logs?: Logs) => Promise<void>;
method onTrainEnd
onTrainEnd: (logs?: Logs) => Promise<void>;
class History
class History extends BaseCallback {}
Callback that records events into a `History` object. This callback is automatically applied to every TF.js Layers model. The `History` object gets returned by the `fit` method of models.
property epoch
epoch: number[];
property history
history: { [key: string]: any[] };
method onEpochEnd
onEpochEnd: (epoch: number, logs?: UnresolvedLogs) => Promise<void>;
method onTrainBegin
onTrainBegin: (logs?: UnresolvedLogs) => Promise<void>;
method syncData
syncData: () => Promise<void>;
Await the values of all losses and metrics.
class InputSpec
class InputSpec {}
Specifies the ndim, dtype and shape of every input to a layer.
Every layer should expose (if appropriate) an `inputSpec` attribute: a list of instances of InputSpec (one per input tensor).

A null entry in a shape is compatible with any dimension; a null shape is compatible with any shape.
constructor
constructor(args: InputSpecArgs);
property axes
axes?: { [axis: number]: number };
Dictionary mapping integer axes to a specific dimension value.
property dtype
dtype?: DataType;
Expected datatype of the input.
property maxNDim
maxNDim?: number;
Maximum rank of the input.
property minNDim
minNDim?: number;
Minimum rank of the input.
property ndim
ndim?: number;
Expected rank of the input.
property shape
shape?: Shape;
Expected shape of the input (may include null for unchecked axes).
class LayersModel
class LayersModel extends Container implements tfc.InferenceModel {}
A `tf.LayersModel` is a directed, acyclic graph of `tf.Layer`s plus methods for training, evaluation, prediction and saving.

`tf.LayersModel` is the basic unit of training, inference and evaluation in TensorFlow.js. To create a `tf.LayersModel`, use `tf.model`.

See also: `tf.Sequential`, `tf.loadLayersModel`.

{heading: 'Models', subheading: 'Classes'}
constructor
constructor(args: ContainerArgs);
property className
static className: string;
property history
history: History;
property isOptimizerOwned
protected isOptimizerOwned: boolean;
property isTraining
protected isTraining: boolean;
property loss
loss: | string | string[] | { [outputName: string]: string } | LossOrMetricFn | LossOrMetricFn[] | { [outputName: string]: LossOrMetricFn };
property lossFunctions
lossFunctions: LossOrMetricFn[];
property metrics
metrics: | string | LossOrMetricFn | (string | LossOrMetricFn)[] | { [outputName: string]: string | LossOrMetricFn };
property metricsNames
metricsNames: string[];
property metricsTensors
metricsTensors: [LossOrMetricFn, number][];
property optimizer
optimizer: Optimizer;
property optimizer_
protected optimizer_: Optimizer;
property stopTraining
stopTraining: boolean;
property stopTraining_
protected stopTraining_: boolean;
method checkTrainableWeightsConsistency
protected checkTrainableWeightsConsistency: () => void;
Check trainable weights count consistency.
This will raise a warning if `this.trainableWeights` and `this.collectedTrainableWeights` are inconsistent (i.e., have different numbers of parameters). Inconsistency will typically arise when one modifies `model.trainable` without calling `model.compile()` again.
method compile
compile: (args: ModelCompileArgs) => void;
Configures and prepares the model for training and evaluation. Compiling outfits the model with an optimizer, loss, and/or metrics. Calling `fit` or `evaluate` on an un-compiled model will throw an error.

Parameter args

A `ModelCompileArgs` specifying the loss, optimizer, and metrics to be used for fitting and evaluating this model.

{heading: 'Models', subheading: 'Classes'}
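A minimal sketch of a typical `compile()` call; the particular optimizer, loss, and metric choices below are illustrative:

```js
const model = tf.sequential({
  layers: [tf.layers.dense({units: 3, inputShape: [5], activation: 'softmax'})]
});

// Outfit the model with an optimizer, a loss, and a metric before calling
// fit() or evaluate().
model.compile({
  optimizer: 'adam',
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy']
});
console.log(model.metricsNames);  // e.g. ['loss', 'acc']
```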
method dispose
dispose: () => DisposeResult;
method evaluate
evaluate: ( x: Tensor | Tensor[], y: Tensor | Tensor[], args?: ModelEvaluateArgs) => Scalar | Scalar[];
Returns the loss value & metrics values for the model in test mode.
Loss and metrics are specified during `compile()`, which needs to happen before calls to `evaluate()`.

Computation is done in batches.

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
const result = model.evaluate(
    tf.ones([8, 10]), tf.ones([8, 1]), {batchSize: 4});
result.print();
```

Parameter x

`tf.Tensor` of test data, or an `Array` of `tf.Tensor`s if the model has multiple inputs.

Parameter y

`tf.Tensor` of target data, or an `Array` of `tf.Tensor`s if the model has multiple outputs.

Parameter args

A `ModelEvaluateArgs`, containing optional fields.

Returns

`Scalar` test loss (if the model has a single output and no metrics) or `Array` of `Scalar`s (if the model has multiple outputs and/or metrics). The attribute `model.metricsNames` will give you the display labels for the scalar outputs.

{heading: 'Models', subheading: 'Classes'}
method evaluateDataset
evaluateDataset: ( dataset: Dataset<{}>, args?: ModelEvaluateDatasetArgs) => Promise<Scalar | Scalar[]>;
Evaluate model using a dataset object.
Note: Unlike `evaluate()`, this method is asynchronous (`async`).

Parameter dataset

A dataset object. Its `iterator()` method is expected to generate a dataset iterator object, the `next()` method of which is expected to produce data batches for evaluation. The return value of the `next()` call ought to contain a boolean `done` field and a `value` field. The `value` field is expected to be an array of two `tf.Tensor`s or an array of two nested `tf.Tensor` structures. The former case is for models with exactly one input and one output (e.g. a sequential model). The latter case is for models with multiple inputs and/or multiple outputs. Of the two items in the array, the first is the input feature(s) and the second is the output target(s).

Parameter args

A configuration object for the dataset-based evaluation.

Returns

Loss and metric values as an Array of `Scalar` objects.

{heading: 'Models', subheading: 'Classes'}
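A minimal sketch of dataset-based evaluation, assuming a `tf.data` pipeline in the same `{xs, ys}` form used in the `fitDataset` example later in this document:

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [9]})]});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});

// Four 9-element feature vectors and four scalar targets.
const xDataset = tf.data.array([[1, 1, 1, 1, 1, 1, 1, 1, 1],
                                [2, 2, 2, 2, 2, 2, 2, 2, 2],
                                [3, 3, 3, 3, 3, 3, 3, 3, 3],
                                [4, 4, 4, 4, 4, 4, 4, 4, 4]]);
const yDataset = tf.data.array([1, 2, 3, 4]);
const xyDataset = tf.data.zip({xs: xDataset, ys: yDataset}).batch(4);

const result = await model.evaluateDataset(xyDataset);
result.print();  // Scalar loss (no metrics were specified in compile()).
```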
method execute
execute: ( inputs: Tensor | Tensor[] | NamedTensorMap, outputs: string | string[]) => Tensor | Tensor[];
Execute internal tensors of the model with input data feed.
Parameter inputs
Input data feed. Must match the inputs of the model.
Parameter outputs
Names of the output tensors to be fetched. Must match names of the SymbolicTensors that belong to the graph.
Returns
Fetched values for `outputs`.
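A minimal sketch of fetching an intermediate activation by name; looking the name up via `model.layers[...].output.name` is an assumption about how a valid symbolic-tensor name would typically be obtained:

```js
const input = tf.input({shape: [5]});
const hidden = tf.layers.dense({units: 10, activation: 'relu'}).apply(input);
const output = tf.layers.dense({units: 2}).apply(hidden);
const model = tf.model({inputs: input, outputs: output});

// Name of the hidden layer's symbolic output tensor
// (model.layers[0] is the InputLayer).
const hiddenName = model.layers[1].output.name;

// Fetch the hidden activations for a batch of fake data.
const hiddenActivations = model.execute(tf.ones([3, 5]), hiddenName);
hiddenActivations.print();
```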
method fit
fit: (x: any, y: any, args?: ModelFitArgs) => Promise<History>;
Trains the model for a fixed number of epochs (iterations on a dataset).
```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
for (let i = 1; i < 5; ++i) {
  const h = await model.fit(tf.ones([8, 10]), tf.ones([8, 1]), {
    batchSize: 4,
    epochs: 3
  });
  console.log("Loss after Epoch " + i + " : " + h.history.loss[0]);
}
```

Parameter x

`tf.Tensor` of training data, or an array of `tf.Tensor`s if the model has multiple inputs. If all inputs in the model are named, you can also pass a dictionary mapping input names to `tf.Tensor`s.

Parameter y

`tf.Tensor` of target (label) data, or an array of `tf.Tensor`s if the model has multiple outputs. If all outputs in the model are named, you can also pass a dictionary mapping output names to `tf.Tensor`s.

Parameter args

A `ModelFitArgs`, containing optional fields.

Returns

A `History` instance. Its `history` attribute contains all information collected during training.

ValueError In case of mismatch between the provided input data and what the model expects.

{heading: 'Models', subheading: 'Classes'}
method fitDataset
fitDataset: <T>( dataset: Dataset<T>, args: ModelFitDatasetArgs<T>) => Promise<History>;
Trains the model using a dataset object.
Parameter dataset
A dataset object. Its `iterator()` method is expected to generate a dataset iterator object, the `next()` method of which is expected to produce data batches for training. The return value of the `next()` call ought to contain a boolean `done` field and a `value` field. The `value` field is expected to be an array of two `tf.Tensor`s or an array of two nested `tf.Tensor` structures. The former case is for models with exactly one input and one output (e.g. a sequential model). The latter case is for models with multiple inputs and/or multiple outputs. Of the two items in the array, the first is the input feature(s) and the second is the output target(s).

Parameter args

A `ModelFitDatasetArgs`, containing optional fields.

Returns

A `History` instance. Its `history` attribute contains all information collected during training.

{heading: 'Models', subheading: 'Classes'}
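A minimal sketch of training a functional model from a dataset, mirroring the `{xs, ys}` dataset construction shown in the `Sequential.fitDataset` example later in this document:

```js
// A tiny dataset of four 2-element feature vectors and scalar targets.
const xDataset = tf.data.array([[0, 0], [0, 1], [1, 0], [1, 1]]);
const yDataset = tf.data.array([0, 1, 1, 0]);
// Each element yielded by the zipped, batched dataset has `xs` and `ys`.
const xyDataset = tf.data.zip({xs: xDataset, ys: yDataset}).batch(4);

const input = tf.input({shape: [2]});
const output = tf.layers.dense({units: 1, activation: 'sigmoid'}).apply(input);
const model = tf.model({inputs: input, outputs: output});
model.compile({optimizer: 'sgd', loss: 'binaryCrossentropy'});

const history = await model.fitDataset(xyDataset, {epochs: 5});
console.log(history.history.loss);
```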
method fitLoop
fitLoop: ( f: (data: Tensor[]) => Scalar[], ins: Tensor[], outLabels?: string[], batchSize?: number, epochs?: number, verbose?: number, callbacks?: BaseCallback[], valF?: (data: Tensor[]) => Scalar[], valIns?: Tensor[], shuffle?: boolean | string, callbackMetrics?: string[], initialEpoch?: number, stepsPerEpoch?: number, validationSteps?: number) => Promise<History>;
Abstract fit function for `f(ins)`.

Parameter f

A Function returning a list of tensors. For training, this function is expected to perform the updates to the variables.

Parameter ins

List of tensors to be fed to `f`.

Parameter outLabels

List of strings, display names of the outputs of `f`.

Parameter batchSize

Integer batch size or `== null` if unknown. Default: 32.

Parameter epochs

Number of times to iterate over the data. Default: 1.

Parameter verbose

Verbosity mode: 0, 1, or 2. Default: 1.

Parameter callbacks

List of callbacks to be called during training.

Parameter valF

Function to call for validation.

Parameter valIns

List of tensors to be fed to `valF`.

Parameter shuffle

Whether to shuffle the data at the beginning of every epoch. Default: true.

Parameter callbackMetrics

List of strings, the display names of the metrics passed to the callbacks. They should be the concatenation of the display names of the outputs of `f` and the list of display names of the outputs of `valF`.

Parameter initialEpoch

Epoch at which to start training (useful for resuming a previous training run). Default: 0.

Parameter stepsPerEpoch

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. Ignored with the default value of `undefined` or `null`.

Parameter validationSteps

Number of steps to run validation for (only if doing validation from data tensors). Not applicable for tfjs-layers.

Returns

A `History` object.
method getDedupedMetricsNames
protected getDedupedMetricsNames: () => string[];
method getNamedWeights
protected getNamedWeights: (config?: io.SaveConfig) => NamedTensor[];
Extract weight values of the model.
Parameter config
An instance of `io.SaveConfig`, which specifies model-saving options such as whether only trainable weights are to be saved.

Returns

A `NamedTensorMap` mapping original weight names (i.e., non-uniqueified weight names) to their values.
method getTrainingConfig
protected getTrainingConfig: () => TrainingConfig;
method getUserDefinedMetadata
getUserDefinedMetadata: () => {};
Get user-defined metadata.
The metadata is supplied via one of the two routes: 1. By calling `setUserDefinedMetadata()`. 2. Loaded during model loading (if the model is constructed via `tf.loadLayersModel()`).

If no user-defined metadata is available from either of the two routes, this function will return `undefined`.
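A minimal sketch of the set/get round trip; the metadata content below is an arbitrary illustration:

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [4]})]});

// Attach arbitrary JSON-serializable metadata to the model. It will be
// saved alongside the topology and weights during save() calls.
model.setUserDefinedMetadata({author: 'alice', version: '1.2.0'});

console.log(model.getUserDefinedMetadata());
// {author: 'alice', version: '1.2.0'}
```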
method loadTrainingConfig
loadTrainingConfig: (trainingConfig: TrainingConfig) => void;
method makeTrainFunction
protected makeTrainFunction: () => (data: Tensor[]) => Scalar[];
Creates a function that performs the following actions:
1. Computes the losses.
2. Sums them to get the total loss.
3. Calls the optimizer, which computes the gradients of the LayersModel's trainable weights w.r.t. the total loss and updates the variables.
4. Calculates the metrics.
5. Returns the values of the losses and metrics.
method predict
predict: (x: Tensor | Tensor[], args?: ModelPredictArgs) => Tensor | Tensor[];
Generates output predictions for the input samples.
Computation is done in batches.
Note: the "step" mode of predict() is currently not supported. This is because the TensorFlow.js core backend is imperative only.
```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.predict(tf.ones([8, 10]), {batchSize: 4}).print();
```

Parameter x

The input data, as a Tensor, or an `Array` of `tf.Tensor`s if the model has multiple inputs.

Parameter args

A `ModelPredictArgs` object containing optional fields.

Returns

Prediction results as a `tf.Tensor`(s).

ValueError In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.

{heading: 'Models', subheading: 'Classes'}
method predictOnBatch
predictOnBatch: (x: Tensor | Tensor[]) => Tensor | Tensor[];
Returns predictions for a single batch of samples.
```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.predictOnBatch(tf.ones([8, 10])).print();
```

Parameter x

Input samples, as a Tensor (for models with exactly one input) or an array of Tensors (for models with more than one input).

Returns

Tensor(s) of predictions.

{heading: 'Models', subheading: 'Classes'}
method save
save: ( handlerOrURL: io.IOHandler | string, config?: io.SaveConfig) => Promise<io.SaveResult>;
Save the configuration and/or weights of the LayersModel.
An `IOHandler` is an object that has a `save` method of the proper signature defined. The `save` method manages the storing or transmission of serialized data ("artifacts") that represent the model's topology and weights onto or via a specific medium, such as file downloads, local storage, IndexedDB in the web browser and HTTP requests to a server. TensorFlow.js provides `IOHandler` implementations for a number of frequently used saving mediums, such as `tf.io.browserDownloads` and `tf.io.browserLocalStorage`. See `tf.io` for more details.

This method also allows you to refer to certain types of `IOHandler`s as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.

Example 1. Save `model`'s topology and weights to browser [local storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage); then load it back.

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
console.log('Prediction from original model:');
model.predict(tf.ones([1, 3])).print();

const saveResults = await model.save('localstorage://my-model-1');

const loadedModel = await tf.loadLayersModel('localstorage://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(tf.ones([1, 3])).print();
```

Example 2. Save `model`'s topology and weights to browser [IndexedDB](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API); then load it back.

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
console.log('Prediction from original model:');
model.predict(tf.ones([1, 3])).print();

const saveResults = await model.save('indexeddb://my-model-1');

const loadedModel = await tf.loadLayersModel('indexeddb://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(tf.ones([1, 3])).print();
```

Example 3. Save `model`'s topology and weights as two files (`my-model-1.json` and `my-model-1.weights.bin`) downloaded from the browser.

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
const saveResults = await model.save('downloads://my-model-1');
```

Example 4. Send `model`'s topology and weights to an HTTP server. See the documentation of `tf.io.http` for more details including specifying request parameters and implementation of the server.

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
const saveResults = await model.save('http://my-server/model/upload');
```

Parameter handlerOrURL

An instance of `IOHandler` or a URL-like, scheme-based string shortcut for `IOHandler`.

Parameter config

Options for saving the model.

Returns

A `Promise` of `SaveResult`, which summarizes the result of the saving, such as byte sizes of the saved artifacts for the model's topology and weight values.

{heading: 'Models', subheading: 'Classes', ignoreCI: true}
method setUserDefinedMetadata
setUserDefinedMetadata: (userDefinedMetadata: {}) => void;
Set user-defined metadata.
The set metadata will be serialized together with the topology and weights of the model during `save()` calls.

Parameter userDefinedMetadata
method standardizeUserData
protected standardizeUserData: ( x: any, y: any, sampleWeight?: any, classWeight?: ClassWeight | ClassWeight[] | ClassWeightMap, checkBatchAxis?: boolean, batchSize?: number) => Promise<[Tensor[], Tensor[], Tensor[]]>;
method standardizeUserDataXY
protected standardizeUserDataXY: ( x: any, y: any, checkBatchAxis?: boolean, batchSize?: number) => [Tensor[], Tensor[]];
method summary
summary: ( lineLength?: number, positions?: number[], printFn?: (message?: any, ...optionalParams: any[]) => void) => void;
Print a text summary of the model's layers.
The summary includes:
- Name and type of all layers that comprise the model.
- Output shape(s) of the layers.
- Number of weight parameters of each layer.
- If the model has non-sequential-like topology, the inputs each layer receives.
- The total number of trainable and non-trainable parameters of the model.

```js
const input1 = tf.input({shape: [10]});
const input2 = tf.input({shape: [20]});
const dense1 = tf.layers.dense({units: 4}).apply(input1);
const dense2 = tf.layers.dense({units: 8}).apply(input2);
const concat = tf.layers.concatenate().apply([dense1, dense2]);
const output =
    tf.layers.dense({units: 3, activation: 'softmax'}).apply(concat);

const model = tf.model({inputs: [input1, input2], outputs: output});
model.summary();
```

Parameter lineLength

Custom line length, in number of characters.

Parameter positions

Custom widths of each of the columns, as either fractions of `lineLength` (e.g., `[0.5, 0.75, 1]`) or absolute number of characters (e.g., `[30, 50, 65]`). Each number corresponds to the right-most (i.e., ending) position of a column.

Parameter printFn

Custom print function. Can be used to replace the default `console.log`. For example, you can use `x => {}` to mute the printed messages in the console.

{heading: 'Models', subheading: 'Classes'}
method trainOnBatch
trainOnBatch: (x: any, y: any) => Promise<number | number[]>;
Runs a single gradient update on a single batch of data.
This method differs from `fit()` and `fitDataset()` in the following regards:
- It operates on exactly one batch of data.
- It returns only the loss and metric values, instead of returning the batch-by-batch loss and metric values.
- It doesn't support fine-grained options such as verbosity and callbacks.

Parameter x

Input data. It could be one of the following:
- A `tf.Tensor`, or an Array of `tf.Tensor`s (in case the model has multiple inputs).
- An Object mapping input names to corresponding `tf.Tensor` (if the model has named inputs).

Parameter y

Target data. It could be either a `tf.Tensor` or multiple `tf.Tensor`s. It should be consistent with `x`.

Returns

Training loss or losses (in case the model has multiple outputs), along with metrics (if any), as numbers.

{heading: 'Models', subheading: 'Classes'}
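A minimal sketch of a custom training loop built on `trainOnBatch()`; the data and number of steps are illustrative:

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});

const xs = tf.ones([4, 10]);
const ys = tf.ones([4, 1]);

// Run a few single-batch gradient updates and log the loss each time.
for (let step = 0; step < 3; ++step) {
  const loss = await model.trainOnBatch(xs, ys);
  console.log(`Step ${step}: loss = ${loss}`);
}
```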
class LayerVariable
class LayerVariable {}
A `tf.layers.LayerVariable` is similar to a `tf.Tensor` in that it has a dtype and shape, but its value is mutable. The value is itself represented as a `tf.Tensor`, and can be read with the `read()` method and updated with the `write()` method.
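A minimal sketch of reading and writing a layer's weight variables; obtaining them via `trainableWeights` on an explicitly built `dense` layer is an assumption about typical usage:

```js
// Build a dense layer so that its kernel and bias variables exist.
const layer = tf.layers.dense({units: 2, inputShape: [3]});
layer.build([null, 3]);

// trainableWeights is a list of LayerVariable instances.
const kernel = layer.trainableWeights[0];
console.log(kernel.name, kernel.shape);  // e.g. '.../kernel', [3, 2]

kernel.read().print();                   // Snapshot of the current value.
kernel.write(tf.zeros(kernel.shape));    // Overwrite with zeros.
kernel.read().print();
```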
constructor
constructor( val: Tensor, dtype?: DataType, name?: string, trainable?: boolean, constraint?: Constraint);
Construct Variable from a `tf.Tensor`.

If not explicitly named, the Variable will be given a name with the prefix 'Variable'. Variable names are unique. In the case of name collision, suffixes '_' will be added to the name.

Parameter val

Initial value of the Variable.

Parameter name

Name of the variable. If `null` or `undefined` is provided, it will default to a name with the prefix 'Variable'.

Parameter constraint

Optional projection function to be applied to the variable after optimizer updates.

Throws

ValueError if `name` is `null` or `undefined`.
property constraint
readonly constraint: Constraint;
property dtype
readonly dtype: DataType;
property id
readonly id: number;
property name
readonly name: string;
property originalName
readonly originalName: string;
property shape
readonly shape: Shape;
property trainable
trainable: boolean;
property val
protected readonly val: tfc.Variable;
method assertNotDisposed
protected assertNotDisposed: () => void;
method dispose
dispose: () => void;
Dispose this LayersVariable instance from memory.
method read
read: () => Tensor;
Get a snapshot of the Variable's value.
The returned value is a snapshot of the Variable's value at the time of the invocation. Future mutations in the value of the tensor will only be reflected by future calls to this method.
method write
write: (newVal: Tensor) => this;
Update the value of the Variable.
Parameter newVal

The new value to update to. Must be consistent with the dtype and shape of the Variable.

Returns

This Variable.
class RNN
class RNN extends Layer {}
constructor
constructor(args: RNNLayerArgs);
property cell
readonly cell: RNNCell;
property className
static className: string;
property goBackwards
readonly goBackwards: boolean;
property keptStates
protected keptStates: Tensor[][];
property nonTrainableWeights
readonly nonTrainableWeights: LayerVariable[];
property returnSequences
readonly returnSequences: boolean;
property returnState
readonly returnState: boolean;
property states
states: Tensor[];
Get the current state tensors of the RNN.
If the state hasn't been set, return an array of `null`s of the correct length.
property states_
protected states_: Tensor[];
property stateSpec
stateSpec: InputSpec[];
property trainableWeights
readonly trainableWeights: LayerVariable[];
property unroll
readonly unroll: boolean;
method apply
apply: ( inputs: Tensor | Tensor[] | SymbolicTensor | SymbolicTensor[], kwargs?: Kwargs) => Tensor | Tensor[] | SymbolicTensor | SymbolicTensor[];
method build
build: (inputShape: Shape | Shape[]) => void;
method call
call: (inputs: Tensor | Tensor[], kwargs: Kwargs) => Tensor | Tensor[];
method computeMask
computeMask: ( inputs: Tensor | Tensor[], mask?: Tensor | Tensor[]) => Tensor | Tensor[];
method computeOutputShape
computeOutputShape: (inputShape: Shape | Shape[]) => Shape | Shape[];
method fromConfig
static fromConfig: <T extends serialization.Serializable>( cls: serialization.SerializableConstructor<T>, config: serialization.ConfigDict, customObjects?: serialization.ConfigDict) => T;
method getConfig
getConfig: () => serialization.ConfigDict;
method getInitialState
getInitialState: (inputs: Tensor) => Tensor[];
method getStates
getStates: () => Tensor[];
method resetStates
resetStates: (states?: Tensor | Tensor[], training?: boolean) => void;
Reset the state tensors of the RNN.
If the `states` argument is `undefined` or `null`, will set the state tensor(s) of the RNN to all-zero tensors of the appropriate shape(s).

If `states` is provided, will set the state tensors of the RNN to its value.

Parameter states

Optional externally-provided initial states.

Parameter training

Whether this call is done during training. For stateful RNNs, this affects whether the old states are kept or discarded. In particular, if `training` is `true`, the old states will be kept so that subsequent backpropagation through time (BPTT) may work properly. Else, the old states will be discarded.
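A minimal sketch of a stateful RNN whose state persists across `predict()` calls until it is explicitly reset; the shapes and batch size below are illustrative assumptions:

```js
// A stateful LSTM requires a fully specified batchInputShape.
const lstm = tf.layers.lstm(
    {units: 4, stateful: true, batchInputShape: [1, 3, 2]});
const model = tf.sequential({layers: [lstm]});

// Each predict() call starts from the state left by the previous call...
model.predict(tf.ones([1, 3, 2]), {batchSize: 1});
model.predict(tf.ones([1, 3, 2]), {batchSize: 1});

// ...until the states are reset to all-zero tensors.
lstm.resetStates();
```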
method setFastWeightInitDuringBuild
setFastWeightInitDuringBuild: (value: boolean) => void;
method setStates
setStates: (states: Tensor[]) => void;
class Sequential
class Sequential extends LayersModel {}
A model with a stack of layers, feeding linearly from one to the next.

`tf.sequential` is a factory function that creates an instance of `tf.Sequential`.

```js
// Define a model for linear regression.
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));

// Prepare the model for training: Specify the loss and the optimizer.
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});

// Generate some synthetic data for training.
const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);

// Train the model using the data then do inference on a data point the
// model hasn't seen:
await model.fit(xs, ys);
model.predict(tf.tensor2d([5], [1, 1])).print();
```

{heading: 'Models', subheading: 'Classes'}
constructor
constructor(args?: SequentialArgs);
property className
static className: string;
property optimizer
optimizer: Optimizer;
property stopTraining
stopTraining: boolean;
method add
add: (layer: Layer) => void;
Adds a layer instance on top of the layer stack.
```js
const model = tf.sequential();
model.add(tf.layers.dense({units: 8, inputShape: [1]}));
model.add(tf.layers.dense({units: 4, activation: 'relu6'}));
model.add(tf.layers.dense({units: 1, activation: 'relu6'}));
// Note that the untrained model is random at this point.
model.predict(tf.randomNormal([10, 1])).print();
```

Parameter layer

Layer instance.

ValueError In case the `layer` argument does not know its input shape. ValueError In case the `layer` argument has multiple output tensors, or is already connected somewhere else (forbidden in `Sequential` models).

{heading: 'Models', subheading: 'Classes'}
method build
build: (inputShape?: Shape | Shape[]) => void;
method call
call: (inputs: Tensor | Tensor[], kwargs: Kwargs) => Tensor | Tensor[];
method compile
compile: (args: ModelCompileArgs) => void;
See `LayersModel.compile`.

Parameter args
method countParams
countParams: () => number;
method evaluate
evaluate: ( x: Tensor | Tensor[], y: Tensor | Tensor[], args?: ModelEvaluateArgs) => Scalar | Scalar[];
Returns the loss value & metrics values for the model in test mode.
Loss and metrics are specified during `compile()`, which needs to happen before calls to `evaluate()`.

Computation is done in batches.

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
const result = model.evaluate(
    tf.ones([8, 10]), tf.ones([8, 1]), {batchSize: 4});
result.print();
```

Parameter x

`tf.Tensor` of test data, or an `Array` of `tf.Tensor`s if the model has multiple inputs.

Parameter y

`tf.Tensor` of target data, or an `Array` of `tf.Tensor`s if the model has multiple outputs.

Parameter args

A `ModelEvaluateArgs`, containing optional fields.

Returns

`Scalar` test loss (if the model has a single output and no metrics) or `Array` of `Scalar`s (if the model has multiple outputs and/or metrics). The attribute `model.metricsNames` will give you the display labels for the scalar outputs.

{heading: 'Models', subheading: 'Classes'}
method evaluateDataset
evaluateDataset: ( dataset: Dataset<{}>, args: ModelEvaluateDatasetArgs) => Promise<Scalar | Scalar[]>;
Evaluate model using a dataset object.
Note: Unlike `evaluate()`, this method is asynchronous (`async`).

Parameter dataset

A dataset object. Its `iterator()` method is expected to generate a dataset iterator object, the `next()` method of which is expected to produce data batches for evaluation. The return value of the `next()` call ought to contain a boolean `done` field and a `value` field. The `value` field is expected to be an array of two `tf.Tensor`s or an array of two nested `tf.Tensor` structures. The former case is for models with exactly one input and one output (e.g. a sequential model). The latter case is for models with multiple inputs and/or multiple outputs. Of the two items in the array, the first is the input feature(s) and the second is the output target(s).

Parameter args

A configuration object for the dataset-based evaluation.

Returns

Loss and metric values as an Array of `Scalar` objects.

{heading: 'Models', subheading: 'Classes'}
method fit
fit: (x: any, y: any, args?: ModelFitArgs) => Promise<History>;
Trains the model for a fixed number of epochs (iterations on a dataset).
```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
const history = await model.fit(tf.ones([8, 10]), tf.ones([8, 1]), {
  batchSize: 4,
  epochs: 3
});
console.log(history.history.loss[0]);
```

Parameter x

`tf.Tensor` of training data, or an array of `tf.Tensor`s if the model has multiple inputs. If all inputs in the model are named, you can also pass a dictionary mapping input names to `tf.Tensor`s.

Parameter y

`tf.Tensor` of target (label) data, or an array of `tf.Tensor`s if the model has multiple outputs. If all outputs in the model are named, you can also pass a dictionary mapping output names to `tf.Tensor`s.

Parameter args

A `ModelFitArgs`, containing optional fields.

Returns

A `History` instance. Its `history` attribute contains all information collected during training.

ValueError In case of mismatch between the provided input data and what the model expects.

{heading: 'Models', subheading: 'Classes'}
method fitDataset
fitDataset: <T>( dataset: Dataset<T>, args: ModelFitDatasetArgs<T>) => Promise<History>;
Trains the model using a dataset object.
```js
const xArray = [
  [1, 1, 1, 1, 1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1, 1, 1, 1, 1],
  [1, 1, 1, 1, 1, 1, 1, 1, 1],
];
const yArray = [1, 1, 1, 1];

// Create a dataset from the JavaScript array.
const xDataset = tf.data.array(xArray);
const yDataset = tf.data.array(yArray);

// Zip combines the `x` and `y` Datasets into a single Dataset, the
// iterator of which will return an object containing two tensors,
// corresponding to `x` and `y`. The call to `batch(4)` will bundle
// four such samples into a single object, with the same keys now pointing
// to tensors that hold 4 examples, organized along the batch dimension.
// The call to `shuffle(4)` causes each iteration through the dataset to
// happen in a different order. The size of the shuffle window is 4.
const xyDataset = tf.data.zip({xs: xDataset, ys: yDataset})
    .batch(4)
    .shuffle(4);

const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [9]})]});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});

const history = await model.fitDataset(xyDataset, {
  epochs: 4,
  callbacks: {onEpochEnd: (epoch, logs) => console.log(logs.loss)}
});
```

Parameter dataset

A dataset object. Its `iterator()` method is expected to generate a dataset iterator object, the `next()` method of which is expected to produce data batches for evaluation. The return value of the `next()` call ought to contain a boolean `done` field and a `value` field.

The `value` field is expected to be an object with fields `xs` and `ys`, which point to the feature tensor and the target tensor, respectively. This case is for models with exactly one input and one output (e.g. a sequential model). For example:

```js
{value: {xs: xsTensor, ys: ysTensor}, done: false}
```

If the model has multiple inputs, the `xs` field of `value` should be an object mapping input names to their respective feature tensors. For example:

```js
{
  value: {
    xs: {
      input_1: xsTensor1,
      input_2: xsTensor2
    },
    ys: ysTensor
  },
  done: false
}
```

If the model has multiple outputs, the `ys` field of `value` should be an object mapping output names to their respective target tensors. For example:

```js
{
  value: {
    xs: xsTensor,
    ys: {
      output_1: ysTensor1,
      output_2: ysTensor2
    },
  },
  done: false
}
```

Parameter args

A `ModelFitDatasetArgs`, containing optional fields.

Returns

A `History` instance. Its `history` attribute contains all information collected during training.

{heading: 'Models', subheading: 'Classes', ignoreCI: true}
method fromConfig
static fromConfig: <T extends serialization.Serializable>( cls: serialization.SerializableConstructor<T>, config: serialization.ConfigDict, customObjects?: serialization.ConfigDict, fastWeightInit?: boolean) => T;
method getConfig
getConfig: () => any;
method pop
pop: () => void;
Removes the last layer in the model.
TypeError if there are no layers in the model.
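A minimal sketch of removing the most recently added layer:

```js
const model = tf.sequential();
model.add(tf.layers.dense({units: 8, inputShape: [4]}));
model.add(tf.layers.dense({units: 1}));
console.log(model.layers.length);  // 2

// Remove the last layer in the stack.
model.pop();
console.log(model.layers.length);  // 1
```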
method predict
predict: (x: Tensor | Tensor[], args?: ModelPredictArgs) => Tensor | Tensor[];
Generates output predictions for the input samples.
Computation is done in batches.
Note: the "step" mode of predict() is currently not supported. This is because the TensorFlow.js core backend is imperative only.
```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.predict(tf.ones([2, 10])).print();
```

Parameter x

The input data, as a Tensor, or an `Array` of `tf.Tensor`s if the model has multiple inputs.

Parameter args

A `ModelPredictConfig` object containing optional fields.

Returns

`tf.Tensor`(s) of predictions.

ValueError In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.

{heading: 'Models', subheading: 'Classes'}
method predictOnBatch
predictOnBatch: (x: Tensor) => Tensor | Tensor[];
Returns predictions for a single batch of samples.
Parameter x

Input samples, as a Tensor, or list of Tensors (if the model has multiple inputs).

Returns

Tensor(s) of predictions.
method setWeights
setWeights: (weights: Tensor[]) => void;
Sets the weights of the model.
Parameter weights
Should be a list of Tensors with shapes and types matching the output of `model.getWeights()`.
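A minimal sketch of copying weights between two identically structured models via `getWeights()`/`setWeights()`:

```js
const makeModel = () => tf.sequential(
    {layers: [tf.layers.dense({units: 2, inputShape: [3]})]});

const source = makeModel();
const target = makeModel();

// Copy the (trained or initial) weights from one model to the other.
target.setWeights(source.getWeights());
```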
method summary
summary: ( lineLength?: number, positions?: number[], printFn?: (message?: any, ...optionalParams: any[]) => void) => void;
Print a text summary of the Sequential model's layers.
The summary includes:
- Name and type of all layers that comprise the model.
- Output shape(s) of the layers.
- Number of weight parameters of each layer.
- The total number of trainable and non-trainable parameters of the model.

```js
const model = tf.sequential();
model.add(
    tf.layers.dense({units: 100, inputShape: [10], activation: 'relu'}));
model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}));
model.summary();
```

Parameter lineLength

Custom line length, in number of characters.

Parameter positions

Custom widths of each of the columns, as either fractions of `lineLength` (e.g., `[0.5, 0.75, 1]`) or absolute number of characters (e.g., `[30, 50, 65]`). Each number corresponds to the right-most (i.e., ending) position of a column.

Parameter printFn

Custom print function. Can be used to replace the default `console.log`. For example, you can use `x => {}` to mute the printed messages in the console.

{heading: 'Models', subheading: 'Classes'}
method trainOnBatch
trainOnBatch: (x: any, y: any) => Promise<number | number[]>;
Runs a single gradient update on a single batch of data.
This method differs from `fit()` and `fitDataset()` in the following regards:
- It operates on exactly one batch of data.
- It returns only the loss and metric values, instead of returning the batch-by-batch loss and metric values.
- It doesn't support fine-grained options such as verbosity and callbacks.

Parameter x

Input data. It could be one of the following:
- A `tf.Tensor`, or an Array of `tf.Tensor`s (in case the model has multiple inputs).
- An Object mapping input names to corresponding `tf.Tensor` (if the model has named inputs).

Parameter y

Target data. It could be either a `tf.Tensor` or multiple `tf.Tensor`s. It should be consistent with `x`.

Returns

Training loss or losses (in case the model has multiple outputs), along with metrics (if any), as numbers.

{heading: 'Models', subheading: 'Classes'}
class SymbolicTensor
class SymbolicTensor {}
`tf.SymbolicTensor` is a placeholder for a Tensor without any concrete value.

They are most often encountered when building a graph of `Layer`s for a `tf.LayersModel`, when the input data's shape, but not its values, is known.

{heading: 'Models', subheading: 'Classes'}
constructor
constructor( dtype: DataType, shape: Shape, sourceLayer: Layer, inputs: SymbolicTensor[], callArgs: Kwargs, name?: string, outputTensorIndex?: number);
Parameter dtype
Parameter shape
Parameter sourceLayer
The Layer that produced this symbolic tensor.
Parameter inputs
The inputs passed to sourceLayer's __call__() method.
Parameter nodeIndex
Parameter tensorIndex
Parameter callArgs
The keyword arguments passed to the __call__() method.
Parameter name
Parameter outputTensorIndex
The index of this tensor in the list of outputs returned by apply().
property callArgs
readonly callArgs: Kwargs;
property dtype
readonly dtype: DataType;
property id
readonly id: number;
property inputs
readonly inputs: SymbolicTensor[];
property name
readonly name: string;
property nodeIndex
nodeIndex: number;
Replacement for _keras_history.
property originalName
readonly originalName?: string;
property outputTensorIndex
readonly outputTensorIndex?: number;
property rank
readonly rank: number;
Rank/dimensionality of the tensor.
property shape
readonly shape: Shape;
property sourceLayer
sourceLayer: Layer;
property tensorIndex
tensorIndex: number;
Replacement for _keras_history.
Interfaces
interface CustomCallbackArgs
interface CustomCallbackArgs {}
property nextFrameFunc
nextFrameFunc?: Function;
property nowFunc
nowFunc?: Function;
property onBatchBegin
onBatchBegin?: (batch: number, logs?: Logs) => void | Promise<void>;
property onBatchEnd
onBatchEnd?: (batch: number, logs?: Logs) => void | Promise<void>;
property onEpochBegin
onEpochBegin?: (epoch: number, logs?: Logs) => void | Promise<void>;
property onEpochEnd
onEpochEnd?: (epoch: number, logs?: Logs) => void | Promise<void>;
property onTrainBegin
onTrainBegin?: (logs?: Logs) => void | Promise<void>;
property onTrainEnd
onTrainEnd?: (logs?: Logs) => void | Promise<void>;
property onYield
onYield?: (epoch: number, batch: number, logs: Logs) => void | Promise<void>;
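A minimal sketch of supplying these hooks directly to `fit()`; the logged fields are illustrative:

```js
const model = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [10]})]});
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});

await model.fit(tf.ones([8, 10]), tf.ones([8, 1]), {
  epochs: 2,
  callbacks: {
    onEpochBegin: (epoch) => console.log(`Starting epoch ${epoch}`),
    onEpochEnd: (epoch, logs) => console.log(`Epoch ${epoch} loss:`, logs.loss)
  }
});
```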
interface EarlyStoppingCallbackArgs
interface EarlyStoppingCallbackArgs {}
property baseline
baseline?: number;
Baseline value of the monitored quantity.
If specified, training will be stopped if the model doesn't show improvement over the baseline.
property minDelta
minDelta?: number;
Minimum change in the monitored quantity to qualify as improvement, i.e., an absolute change of less than `minDelta` will count as no improvement.

Defaults to 0.
property mode
mode?: 'auto' | 'min' | 'max';
Mode: one of 'min', 'max', and 'auto'. - In 'min' mode, training will be stopped when the quantity monitored has stopped decreasing. - In 'max' mode, training will be stopped when the quantity monitored has stopped increasing. - In 'auto' mode, the direction is inferred automatically from the name of the monitored quantity.
Defaults to 'auto'.
property monitor
monitor?: string;
Quantity to be monitored.
Defaults to 'val_loss'.
property patience
patience?: number;
Number of epochs with no improvement after which training will be stopped.
Defaults to 0.
property restoreBestWeights
restoreBestWeights?: boolean;
Whether to restore model weights from the epoch with the best value of the monitored quantity. If `False`, the model weights obtained at the last step of training are used.

**`True` is not supported yet.**
property verbose
verbose?: number;
Verbosity mode.
interface GRUCellLayerArgs
interface GRUCellLayerArgs extends SimpleRNNCellLayerArgs {}
property implementation
implementation?: number;
Implementation mode, either 1 or 2.
Mode 1 will structure its operations as a larger number of smaller dot products and additions.
Mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this configuration field.
property recurrentActivation
recurrentActivation?: ActivationIdentifier;
Activation function to use for the recurrent step.
Defaults to hard sigmoid (`hardSigmoid`).

If `null`, no activation is applied.
property resetAfter
resetAfter?: boolean;
GRU convention (whether to apply reset gate after or before matrix multiplication). false = "before", true = "after" (only false is supported).
interface GRULayerArgs
interface GRULayerArgs extends SimpleRNNLayerArgs {}
property implementation
implementation?: number;
Implementation mode, either 1 or 2.
Mode 1 will structure its operations as a larger number of smaller dot products and additions.
Mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this configuration field.
property recurrentActivation
recurrentActivation?: ActivationIdentifier;
Activation function to use for the recurrent step.
Defaults to hard sigmoid (`hardSigmoid`).

If `null`, no activation is applied.
interface LSTMCellLayerArgs
interface LSTMCellLayerArgs extends SimpleRNNCellLayerArgs {}
property implementation
implementation?: number;
Implementation mode, either 1 or 2.
Mode 1 will structure its operations as a larger number of smaller dot products and additions.
Mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this configuration field.
property recurrentActivation
recurrentActivation?: ActivationIdentifier;
Activation function to use for the recurrent step.
Defaults to hard sigmoid (`hardSigmoid`).

If `null`, no activation is applied.
property unitForgetBias
unitForgetBias?: boolean;
If `true`, add 1 to the bias of the forget gate at initialization. Setting it to `true` will also force `biasInitializer = 'zeros'`. This is recommended in [Jozefowicz et al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).
interface LSTMLayerArgs
interface LSTMLayerArgs extends SimpleRNNLayerArgs {}
property implementation
implementation?: number;
Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions, whereas mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
Note: For superior performance, TensorFlow.js always uses implementation 2, regardless of the actual value of this config field.
property recurrentActivation
recurrentActivation?: ActivationIdentifier;
Activation function to use for the recurrent step.
Defaults to hard sigmoid (
hardSigmoid
).If
null
, no activation is applied.
property unitForgetBias
unitForgetBias?: boolean;
If `true`, add 1 to the bias of the forget gate at initialization. Setting it to `true` will also force `biasInitializer = 'zeros'`. This is recommended in [Jozefowicz et al.](http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf).
interface ModelAndWeightsConfig
interface ModelAndWeightsConfig {}
Options for loading a saved model in TensorFlow.js format.
property modelTopology
modelTopology: PyJsonDict;
A JSON object or JSON string containing the model config.
This can be in either of the following two formats:
- A model architecture-only config, i.e., a format consistent with the return value of `keras.Model.to_json()`.
- A full model config, containing not only the model architecture but also training options and state, i.e., a format consistent with the return value of `keras.models.save_model()`.
property pathPrefix
pathPrefix?: string;
Path to prepend to the paths in `weightsManifest` before fetching. The path may optionally end in a slash ('/').
property weightsManifest
weightsManifest?: io.WeightsManifestConfig;
A weights manifest in TensorFlow.js format.
interface ModelCompileArgs
interface ModelCompileArgs {}
Configuration for calls to `LayersModel.compile()`.
property loss
loss: | string | string[] | { [outputName: string]: string; } | LossOrMetricFn | LossOrMetricFn[] | { [outputName: string]: LossOrMetricFn; };
Objective function(s) or name(s) of objective function(s). If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or an Array of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.
property metrics
metrics?: | string | LossOrMetricFn | Array<string | LossOrMetricFn> | { [outputName: string]: string | LossOrMetricFn; };
List of metrics to be evaluated by the model during training and testing. Typically you will use `metrics: ['accuracy']`. To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary.
property optimizer
optimizer: string | Optimizer;
An instance of `tf.train.Optimizer` or a string name for an Optimizer.
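A minimal sketch showing the three fields together (the layer sizes and the 'sgd' optimizer name are illustrative choices, not defaults):

```js
const model = tf.sequential();
model.add(tf.layers.dense({units: 8, activation: 'relu', inputShape: [4]}));
model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}));

model.compile({
  optimizer: 'sgd',             // or an Optimizer instance, e.g. tf.train.adam(0.01)
  loss: 'binaryCrossentropy',   // a single loss for the single output
  metrics: ['accuracy'],
});
```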
interface ModelEvaluateArgs
interface ModelEvaluateArgs {}
property batchSize
batchSize?: number;
Batch size (Integer). If unspecified, it will default to 32.
property sampleWeight
sampleWeight?: Tensor;
Tensor of weights to weight the contribution of different samples to the loss and metrics.
property steps
steps?: number;
Integer: total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of `undefined`.
property verbose
verbose?: ModelLoggingVerbosity;
Verbosity mode.
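A minimal usage sketch (assumes a compiled `LayersModel` named `model` with a 4-dimensional input, e.g. the one from the compile() sketch above; the test tensors are synthetic):

```js
// Evaluate on a small synthetic test set. With metrics configured, evaluate()
// returns an array of scalars [loss, metric1, ...]; otherwise a single scalar.
const xTest = tf.randomNormal([64, 4]);
const yTest = tf.randomUniform([64, 1]).round();
const result = model.evaluate(xTest, yTest, {batchSize: 32, verbose: 0});
(Array.isArray(result) ? result : [result]).forEach(t => t.print());
```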
interface ModelFitArgs
interface ModelFitArgs {}
Interface for configuring model training based on data as `tf.Tensor`s.
property batchSize
batchSize?: number;
Number of samples per gradient update. If unspecified, it will default to 32.
property callbacks
callbacks?: BaseCallback[] | CustomCallbackArgs | CustomCallbackArgs[];
List of callbacks to be called during training. Can have one or more of the following callbacks:
- `onTrainBegin(logs)`: called when training starts.
- `onTrainEnd(logs)`: called when training ends.
- `onEpochBegin(epoch, logs)`: called at the start of every epoch.
- `onEpochEnd(epoch, logs)`: called at the end of every epoch.
- `onBatchBegin(batch, logs)`: called at the start of every batch.
- `onBatchEnd(batch, logs)`: called at the end of every batch.
- `onYield(epoch, batch, logs)`: called every `yieldEvery` milliseconds with the current epoch, batch and logs. The logs are the same as in `onBatchEnd()`. Note that `onYield` can skip batches or epochs. See also docs for `yieldEvery` below.
property classWeight
classWeight?: ClassWeight | ClassWeight[] | ClassWeightMap;
Optional object mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
If the model has multiple outputs, a class weight can be specified for each of the outputs by setting this field to an array of weight objects or an object that maps model output names (e.g., `model.outputNames[0]`) to weight objects.
property epochs
epochs?: number;
Integer number of times to iterate over the training data arrays.
property initialEpoch
initialEpoch?: number;
Epoch at which to start training (useful for resuming a previous training run). When this is used, `epochs` is the index of the "final epoch". The model is not trained for a number of iterations given by `epochs`, but merely until the epoch of index `epochs` is reached.
property sampleWeight
sampleWeight?: Tensor;
Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequenceLength), to apply a different weight to every timestep of every sample. In this case you should make sure to specify sampleWeightMode="temporal" in compile().
property shuffle
shuffle?: boolean;
Whether to shuffle the training data before each epoch. Has no effect when `stepsPerEpoch` is not `null`.
property stepsPerEpoch
stepsPerEpoch?: number;
Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default `null` is equal to the number of unique samples in your dataset divided by the batch size, or 1 if that cannot be determined.
property validationData
validationData?: | [Tensor | Tensor[], Tensor | Tensor[]] | [Tensor | Tensor[], Tensor | Tensor[], Tensor | Tensor[]];
Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. This could be a tuple `[xVal, yVal]` or a tuple `[xVal, yVal, valSampleWeights]`. `validationData` will override `validationSplit`.
property validationSplit
validationSplit?: number;
Float between 0 and 1: fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the `x` and `y` data provided, before shuffling.
property validationSteps
validationSteps?: number;
Only relevant if `stepsPerEpoch` is specified. Total number of steps (batches of samples) to validate before stopping.
property verbose
verbose?: ModelLoggingVerbosity | 2;
Verbosity level.
Expected to be 0, 1, or 2. Default: 1.
0 - No printed message during fit() call. 1 - In Node.js (tfjs-node), prints the progress bar, together with real-time updates of loss and metric values and training speed. In the browser: no action. This is the default. 2 - Not implemented yet.
property yieldEvery
yieldEvery?: YieldEveryOptions;
Configures the frequency of yielding the main thread to other tasks.
In the browser environment, yielding the main thread can improve the responsiveness of the page during training. In the Node.js environment, it can ensure tasks queued in the event loop can be handled in a timely manner.
The value can be one of the following:
- `'auto'`: Yielding happens at a certain frame rate (currently set at 125 ms). This is the default.
- `'batch'`: yield every batch.
- `'epoch'`: yield every epoch.
- any `number`: yield every `number` milliseconds.
- `'never'`: never yield. (Yielding can still happen through `await nextFrame()` calls in custom callbacks.)
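A minimal sketch tying several of these fields together (assumes a compiled model with 4 input features, such as the one from the compile() sketch above; shapes, epoch counts and callback bodies are arbitrary):

```js
const xs = tf.randomNormal([100, 4]);
const ys = tf.randomUniform([100, 1]).round();

// Inside an async function:
const history = await model.fit(xs, ys, {
  epochs: 5,
  batchSize: 32,
  validationSplit: 0.2,   // hold out the last 20% of samples for validation
  shuffle: true,
  yieldEvery: 'batch',    // yield the main thread after every batch
  callbacks: {
    onEpochEnd: (epoch, logs) => console.log(epoch, logs.loss, logs.val_loss),
  },
});
console.log(history.history.loss);
```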
interface ModelFitDatasetArgs
interface ModelFitDatasetArgs<T> {}
Interface for configuring model training based on a dataset object.
property batchesPerEpoch
batchesPerEpoch?: number;
(Optional) Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. It should typically be equal to the number of samples of your dataset divided by the batch size, so that the `fitDataset()` call can utilize the entire dataset. If it is not provided, the `done` return value of `iterator.next()` is used as the signal to finish an epoch.
property callbacks
callbacks?: BaseCallback[] | CustomCallbackArgs | CustomCallbackArgs[];
List of callbacks to be called during training. Can have one or more of the following callbacks:
- `onTrainBegin(logs)`: called when training starts.
- `onTrainEnd(logs)`: called when training ends.
- `onEpochBegin(epoch, logs)`: called at the start of every epoch.
- `onEpochEnd(epoch, logs)`: called at the end of every epoch.
- `onBatchBegin(batch, logs)`: called at the start of every batch.
- `onBatchEnd(batch, logs)`: called at the end of every batch.
- `onYield(epoch, batch, logs)`: called every `yieldEvery` milliseconds with the current epoch, batch and logs. The logs are the same as in `onBatchEnd()`. Note that `onYield` can skip batches or epochs. See also docs for `yieldEvery` below.
property classWeight
classWeight?: ClassWeight | ClassWeight[] | ClassWeightMap;
Optional object mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
If the model has multiple outputs, a class weight can be specified for each of the outputs by setting this field to an array of weight objects or an object that maps model output names (e.g., `model.outputNames[0]`) to weight objects.
property epochs
epochs: number;
Integer number of times to iterate over the training dataset.
property initialEpoch
initialEpoch?: number;
Epoch at which to start training (useful for resuming a previous training run). When this is used, `epochs` is the index of the "final epoch". The model is not trained for a number of iterations given by `epochs`, but merely until the epoch of index `epochs` is reached.
property validationBatches
validationBatches?: number;
(Optional) Only relevant if `validationData` is specified and is a dataset object. Total number of batches of samples to draw from `validationData` for validation purposes before stopping at the end of every epoch. If not specified, `evaluateDataset` will use `iterator.next().done` as the signal to stop validation.
property validationBatchSize
validationBatchSize?: number;
Optional batch size for validation.
Used only if `validationData` is an array of `tf.Tensor` objects, i.e., not a dataset object. If not specified, its value defaults to 32.
property validationData
validationData?: | [TensorOrArrayOrMap, TensorOrArrayOrMap] | [TensorOrArrayOrMap, TensorOrArrayOrMap, TensorOrArrayOrMap] | Dataset<T>;
Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. This could be any of the following:
- An array `[xVal, yVal]`, where the two values may be `tf.Tensor`, an array of Tensors, or a map of string to Tensor.
- Similarly, an array `[xVal, yVal, valSampleWeights]` (not implemented yet).
- A `Dataset` object with elements of the form `{xs: xVal, ys: yVal}`, where `xs` and `ys` are the feature and label tensors, respectively.

If `validationData` is an Array of Tensor objects, each `tf.Tensor` will be sliced into batches during validation, using the parameter `validationBatchSize` (which defaults to 32). The entirety of the `tf.Tensor` objects will be used in the validation.

If `validationData` is a dataset object and the `validationBatches` parameter is specified, the validation will use `validationBatches` batches drawn from the dataset object. If the `validationBatches` parameter is not specified, the validation will stop when the dataset is exhausted.
property verbose
verbose?: ModelLoggingVerbosity;
Verbosity level.
Expected to be 0, 1, or 2. Default: 1.
0 - No printed message during fit() call. 1 - In Node.js (tfjs-node), prints the progress bar, together with real-time updates of loss and metric values and training speed. In the browser: no action. This is the default. 2 - Not implemented yet.
property yieldEvery
yieldEvery?: YieldEveryOptions;
Configures the frequency of yielding the main thread to other tasks.
In the browser environment, yielding the main thread can improve the responsiveness of the page during training. In the Node.js environment, it can ensure tasks queued in the event loop can be handled in a timely manner.
The value can be one of the following:
- `'auto'`: Yielding happens at a certain frame rate (currently set at 125 ms). This is the default.
- `'batch'`: yield every batch.
- `'epoch'`: yield every epoch.
- a `number`: yield every `number` milliseconds.
- `'never'`: never yield. (But yielding can still happen through `await nextFrame()` calls in custom callbacks.)
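A minimal sketch of `fitDataset()` (assumes the `tf.data` API from the @tensorflow/tfjs union package and a compiled model with 4 input features; the generator and counts are arbitrary illustrations):

```js
// A dataset of {xs, ys} elements, batched and repeated indefinitely.
const dataset = tf.data.generator(function* () {
  for (let i = 0; i < 100; i++) {
    yield {xs: tf.randomNormal([4]), ys: tf.randomUniform([1]).round()};
  }
}).batch(32).repeat();

// Inside an async function:
await model.fitDataset(dataset, {
  epochs: 3,
  batchesPerEpoch: 4,   // ~100 samples / 32 per batch
  callbacks: {onEpochEnd: (epoch, logs) => console.log(epoch, logs.loss)},
});
```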
interface RNNLayerArgs
interface RNNLayerArgs extends BaseRNNLayerArgs {}
RNNLayerConfig is identical to BaseRNNLayerConfig, except it makes the `cell` property required. This interface is to be used with constructors of concrete RNN layer subtypes.
property cell
cell: RNNCell | RNNCell[];
interface SequentialArgs
interface SequentialArgs {}
Configuration for a Sequential model.
interface SimpleRNNCellLayerArgs
interface SimpleRNNCellLayerArgs extends LayerArgs {}
property activation
activation?: ActivationIdentifier;
Activation function to use. Default: hyperbolic tangent (`tanh`). If you pass `null`, 'linear' activation will be applied.
property biasConstraint
biasConstraint?: ConstraintIdentifier | Constraint;
Constraint function applied to the bias vector.
property biasInitializer
biasInitializer?: InitializerIdentifier | Initializer;
Initializer for the bias vector.
property biasRegularizer
biasRegularizer?: RegularizerIdentifier | Regularizer;
Regularizer function applied to the bias vector.
property dropout
dropout?: number;
Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
property dropoutFunc
dropoutFunc?: Function;
This is added for dependency injection (DI) in tests.
property kernelConstraint
kernelConstraint?: ConstraintIdentifier | Constraint;
Constraint function applied to the `kernel` weights matrix.
property kernelInitializer
kernelInitializer?: InitializerIdentifier | Initializer;
Initializer for the `kernel` weights matrix, used for the linear transformation of the inputs.
property kernelRegularizer
kernelRegularizer?: RegularizerIdentifier | Regularizer;
Regularizer function applied to the `kernel` weights matrix.
property recurrentConstraint
recurrentConstraint?: ConstraintIdentifier | Constraint;
Constraint function applied to the `recurrentKernel` weights matrix.
property recurrentDropout
recurrentDropout?: number;
Float number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
property recurrentInitializer
recurrentInitializer?: InitializerIdentifier | Initializer;
Initializer for the `recurrentKernel` weights matrix, used for linear transformation of the recurrent state.
property recurrentRegularizer
recurrentRegularizer?: RegularizerIdentifier | Regularizer;
Regularizer function applied to the `recurrentKernel` weights matrix.
property units
units: number;
Positive integer, dimensionality of the output space.
property useBias
useBias?: boolean;
Whether the layer uses a bias vector.
interface SimpleRNNLayerArgs
interface SimpleRNNLayerArgs extends BaseRNNLayerArgs {}
property activation
activation?: ActivationIdentifier;
Activation function to use.
Defaults to hyperbolic tangent (`tanh`). If you pass `null`, no activation will be applied.
property biasConstraint
biasConstraint?: ConstraintIdentifier | Constraint;
Constraint function applied to the bias vector.
property biasInitializer
biasInitializer?: InitializerIdentifier | Initializer;
Initializer for the bias vector.
property biasRegularizer
biasRegularizer?: RegularizerIdentifier | Regularizer;
Regularizer function applied to the bias vector.
property dropout
dropout?: number;
Number between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
property dropoutFunc
dropoutFunc?: Function;
This is added for dependency injection (DI) in tests.
property kernelConstraint
kernelConstraint?: ConstraintIdentifier | Constraint;
Constraint function applied to the kernel weights matrix.
property kernelInitializer
kernelInitializer?: InitializerIdentifier | Initializer;
Initializer for the `kernel` weights matrix, used for the linear transformation of the inputs.
property kernelRegularizer
kernelRegularizer?: RegularizerIdentifier | Regularizer;
Regularizer function applied to the kernel weights matrix.
property recurrentConstraint
recurrentConstraint?: ConstraintIdentifier | Constraint;
Constraint function applied to the recurrentKernel weights matrix.
property recurrentDropout
recurrentDropout?: number;
Number between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
property recurrentInitializer
recurrentInitializer?: InitializerIdentifier | Initializer;
Initializer for the `recurrentKernel` weights matrix, used for linear transformation of the recurrent state.
property recurrentRegularizer
recurrentRegularizer?: RegularizerIdentifier | Regularizer;
Regularizer function applied to the recurrentKernel weights matrix.
property units
units: number;
Positive integer, dimensionality of the output space.
property useBias
useBias?: boolean;
Whether the layer uses a bias vector.
Type Aliases
type ClassWeight
type ClassWeight = { [classIndex: number]: number;};
For multi-class classification problems, this object is designed to store a mapping from class index to the "weight" of the class, where higher weighted classes have larger impact on loss, accuracy, and other metrics.
This is useful for cases in which you want the model to "pay more attention" to examples from an under-represented class, e.g., in unbalanced datasets.
type ClassWeightMap
type ClassWeightMap = { [outputName: string]: ClassWeight;};
Class weighting for a model with multiple outputs.
This object maps each output name to a class-weighting object.
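For example (a minimal sketch; the weight values are illustrative, and `model`, `xs`, `ys` are assumed to exist as in the fit() sketch above):

```js
// Pay 5x more attention to the under-represented class 1.
const classWeight = {0: 1, 1: 5};
await model.fit(xs, ys, {epochs: 3, classWeight});

// For a multi-output model, use a ClassWeightMap keyed by output name instead:
// {[model.outputNames[0]]: {0: 1, 1: 5}}
```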
type Logs
type Logs = { [key: string]: number;};
Logs in which values can only be numbers.
Used when calling client-provided custom callbacks.
type Shape
type Shape = Array<null | number>;
Namespaces
namespace constraints
module 'dist/exports_constraints.d.ts' {}
Copyright 2018 Google LLC
Use of this source code is governed by an MIT-style license that can be found in the LICENSE file or at https://opensource.org/licenses/MIT. =============================================================================
function maxNorm
maxNorm: (args: MaxNormArgs) => Constraint;
MaxNorm weight constraint.
Constrains the weights incident to each hidden unit to have a norm less than or equal to a desired value.
References - [Dropout: A Simple Way to Prevent Neural Networks from Overfitting Srivastava, Hinton, et al. 2014](http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf)
{heading: 'Constraints',namespace: 'constraints'}
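A brief usage sketch (the `maxValue` and layer sizes are arbitrary):

```js
// Constrain each hidden unit's incoming weight vector to a norm of at most 2.
const constrained = tf.layers.dense({
  units: 16,
  inputShape: [8],
  kernelConstraint: tf.constraints.maxNorm({maxValue: 2}),
});
```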
function minMaxNorm
minMaxNorm: (config: MinMaxNormArgs) => Constraint;
{heading: 'Constraints', namespace: 'constraints'}
function nonNeg
nonNeg: () => Constraint;
Constrains the weight to be non-negative.
{heading: 'Constraints', namespace: 'constraints'}
function unitNorm
unitNorm: (args: UnitNormArgs) => Constraint;
Constrains the weights incident to each hidden unit to have unit norm.
{heading: 'Constraints', namespace: 'constraints'}
namespace initializers
module 'dist/exports_initializers.d.ts' {}
Copyright 2018 Google LLC
Use of this source code is governed by an MIT-style license that can be found in the LICENSE file or at https://opensource.org/licenses/MIT. =============================================================================
function constant
constant: (args: ConstantArgs) => Initializer;
Initializer that generates values initialized to some constant.
{heading: 'Initializers', namespace: 'initializers'}
function glorotNormal
glorotNormal: (args: SeedOnlyInitializerArgs) => Initializer;
Glorot normal initializer, also called Xavier normal initializer. It draws samples from a truncated normal distribution centered on 0 with `stddev = sqrt(2 / (fan_in + fan_out))`, where `fan_in` is the number of input units in the weight tensor and `fan_out` is the number of output units in the weight tensor.

Reference: Glorot & Bengio, AISTATS 2010 http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
{heading: 'Initializers', namespace: 'initializers'}
function glorotUniform
glorotUniform: (args: SeedOnlyInitializerArgs) => Initializer;
Glorot uniform initializer, also called Xavier uniform initializer. It draws samples from a uniform distribution within `[-limit, limit]`, where `limit` is `sqrt(6 / (fan_in + fan_out))`, `fan_in` is the number of input units in the weight tensor, and `fan_out` is the number of output units in the weight tensor.

Reference: Glorot & Bengio, AISTATS 2010 http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
{heading: 'Initializers', namespace: 'initializers'}
function heNormal
heNormal: (args: SeedOnlyInitializerArgs) => Initializer;
He normal initializer.
It draws samples from a truncated normal distribution centered on 0 with `stddev = sqrt(2 / fanIn)`, where `fanIn` is the number of input units in the weight tensor.

Reference: He et al., http://arxiv.org/abs/1502.01852
{heading: 'Initializers', namespace: 'initializers'}
function heUniform
heUniform: (args: SeedOnlyInitializerArgs) => Initializer;
He uniform initializer.
It draws samples from a uniform distribution within `[-limit, limit]`, where `limit` is `sqrt(6 / fanIn)` and `fanIn` is the number of input units in the weight tensor.

Reference: He et al., http://arxiv.org/abs/1502.01852
{heading: 'Initializers',namespace: 'initializers'}
function identity
identity: (args: IdentityArgs) => Initializer;
Initializer that generates the identity matrix. Only use for square 2D matrices.
{heading: 'Initializers', namespace: 'initializers'}
function leCunNormal
leCunNormal: (args: SeedOnlyInitializerArgs) => Initializer;
LeCun normal initializer.
It draws samples from a truncated normal distribution centered on 0 with `stddev = sqrt(1 / fanIn)`, where `fanIn` is the number of input units in the weight tensor.

References: [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515), [Efficient Backprop](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)
{heading: 'Initializers', namespace: 'initializers'}
function leCunUniform
leCunUniform: (args: SeedOnlyInitializerArgs) => Initializer;
LeCun uniform initializer.
It draws samples from a uniform distribution in the interval `[-limit, limit]` with `limit = sqrt(3 / fanIn)`, where `fanIn` is the number of input units in the weight tensor.

{heading: 'Initializers', namespace: 'initializers'}
function ones
ones: () => Initializer;
Initializer that generates tensors initialized to 1.
{heading: 'Initializers', namespace: 'initializers'}
function orthogonal
orthogonal: (args: OrthogonalArgs) => Initializer;
Initializer that generates a random orthogonal matrix.
Reference: [Saxe et al., http://arxiv.org/abs/1312.6120](http://arxiv.org/abs/1312.6120)
{heading: 'Initializers', namespace: 'initializers'}
function randomNormal
randomNormal: (args: RandomNormalArgs) => Initializer;
Initializer that generates random values initialized to a normal distribution.
{heading: 'Initializers', namespace: 'initializers'}
function randomUniform
randomUniform: (args: RandomUniformArgs) => Initializer;
Initializer that generates random values initialized to a uniform distribution.
Values will be distributed uniformly between the configured minval and maxval.
{heading: 'Initializers', namespace: 'initializers'}
function truncatedNormal
truncatedNormal: (args: TruncatedNormalArgs) => Initializer;
Initializer that generates random values initialized to a truncated normal distribution.
These values are similar to values from a `RandomNormal`, except that values more than two standard deviations from the mean are discarded and re-drawn. This is the recommended initializer for neural network weights and filters.

{heading: 'Initializers', namespace: 'initializers'}
function varianceScaling
varianceScaling: (config: VarianceScalingArgs) => Initializer;
Initializer capable of adapting its scale to the shape of weights. With `distribution = NORMAL`, samples are drawn from a truncated normal distribution centered on zero, with `stddev = sqrt(scale / n)`, where `n` is:
- the number of input units in the weight tensor, if `mode = FAN_IN`;
- the number of output units, if `mode = FAN_OUT`;
- the average of the numbers of input and output units, if `mode = FAN_AVG`.

With `distribution = UNIFORM`, samples are drawn from a uniform distribution within `[-limit, limit]`, with `limit = sqrt(3 * scale / n)`.

{heading: 'Initializers', namespace: 'initializers'}
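A brief usage sketch (the argument values are chosen for illustration, not defaults):

```js
// Use a variance-scaling initializer for a dense layer's kernel.
const init = tf.initializers.varianceScaling({
  scale: 1.0,
  mode: 'fanIn',
  distribution: 'truncatedNormal',
});
const denseLayer = tf.layers.dense({units: 10, inputShape: [20], kernelInitializer: init});
```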
function zeros
zeros: () => Zeros;
Initializer that generates tensors initialized to 0.
{heading: 'Initializers', namespace: 'initializers'}
namespace layers
module 'dist/exports_layers.d.ts' {}
Copyright 2018 Google LLC
Use of this source code is governed by an MIT-style license that can be found in the LICENSE file or at https://opensource.org/licenses/MIT. =============================================================================
variable globalMaxPool1d
const globalMaxPool1d: (args?: LayerArgs) => GlobalMaxPooling1D;
variable globalMaxPool2d
const globalMaxPool2d: (args: GlobalPooling2DLayerArgs) => GlobalMaxPooling2D;
variable maxPool1d
const maxPool1d: (args: Pooling1DLayerArgs) => MaxPooling1D;
variable maxPool2d
const maxPool2d: (args: Pooling2DLayerArgs) => MaxPooling2D;
function activation
activation: (args: ActivationLayerArgs) => Activation;
Applies an activation function to an output.
This layer applies an element-wise activation function. Other layers, notably `dense`, can also apply activation functions. Use this isolated activation layer to extract the values before and after the activation. For instance:

```js
const input = tf.input({shape: [5]});
const denseLayer = tf.layers.dense({units: 1});
const activationLayer = tf.layers.activation({activation: 'relu6'});

// Obtain the output symbolic tensors by applying the layers in order.
const denseOutput = denseLayer.apply(input);
const activationOutput = activationLayer.apply(denseOutput);

// Create the model based on the inputs.
const model = tf.model({
  inputs: input,
  outputs: [denseOutput, activationOutput]
});

// Collect both outputs and print separately.
const [denseOut, activationOut] = model.predict(tf.randomNormal([6, 5]));
denseOut.print();
activationOut.print();
```

{heading: 'Layers', subheading: 'Basic', namespace: 'layers'}
function add
add: (args?: LayerArgs) => Add;
Layer that performs element-wise addition on an `Array` of inputs.

It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). The inputs are specified as an `Array` when the `apply` method of the `Add` layer instance is called. For example:

```js
const input1 = tf.input({shape: [2, 2]});
const input2 = tf.input({shape: [2, 2]});
const addLayer = tf.layers.add();
const sum = addLayer.apply([input1, input2]);
console.log(JSON.stringify(sum.shape));
// You get [null, 2, 2], with the first dimension as the undetermined batch
// dimension.
```

{heading: 'Layers', subheading: 'Merge', namespace: 'layers'}
function alphaDropout
alphaDropout: (args: AlphaDropoutArgs) => AlphaDropout;
Applies Alpha Dropout to the input.
As it is a regularization layer, it is only active at training time.
Alpha Dropout is a `Dropout` that keeps the mean and variance of its inputs at their original values, in order to ensure the self-normalizing property even after this dropout. Alpha Dropout fits well with Scaled Exponential Linear Units by randomly setting activations to the negative saturation value.

Arguments:
- `rate`: float, drop probability (as with `Dropout`). The multiplicative noise will have standard deviation `sqrt(rate / (1 - rate))`.
- `noise_shape`: A 1-D `Tensor` of type `int32`, representing the shape for randomly generated keep/drop flags.

Input shape: Arbitrary. Use the keyword argument `inputShape` (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape: Same shape as input.
References: - [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)
{heading: 'Layers', subheading: 'Noise', namespace: 'layers'}
function average
average: (args?: LayerArgs) => Average;
Layer that performs element-wise averaging on an
Array
of inputs.It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). For example:
const input1 = tf.input({shape: [2, 2]});const input2 = tf.input({shape: [2, 2]});const averageLayer = tf.layers.average();const average = averageLayer.apply([input1, input2]);console.log(JSON.stringify(average.shape));// You get [null, 2, 2], with the first dimension as the undetermined batch// dimension.{heading: 'Layers', subheading: 'Merge', namespace: 'layers'}
function averagePooling1d
averagePooling1d: (args: Pooling1DLayerArgs) => AveragePooling1D;
Average pooling operation for spatial data.
Input shape:
[batchSize, inLength, channels]
Output shape:
[batchSize, pooledLength, channels]
`tf.avgPool1d` is an alias.

{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
function averagePooling2d
averagePooling2d: (args: Pooling2DLayerArgs) => AveragePooling2D;
Average pooling operation for spatial data.
Input shape:
- If `dataFormat === CHANNEL_LAST`: 4D tensor with shape `[batchSize, rows, cols, channels]`
- If `dataFormat === CHANNEL_FIRST`: 4D tensor with shape `[batchSize, channels, rows, cols]`

Output shape:
- If `dataFormat === CHANNEL_LAST`: 4D tensor with shape `[batchSize, pooledRows, pooledCols, channels]`
- If `dataFormat === CHANNEL_FIRST`: 4D tensor with shape `[batchSize, channels, pooledRows, pooledCols]`

`tf.avgPool2d` is an alias.

{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
function averagePooling3d
averagePooling3d: (args: Pooling3DLayerArgs) => AveragePooling3D;
Average pooling operation for 3D data.
Input shape:
- If `dataFormat === channelsLast`: 5D tensor with shape `[batchSize, depths, rows, cols, channels]`
- If `dataFormat === channelsFirst`: 5D tensor with shape `[batchSize, channels, depths, rows, cols]`

Output shape:
- If `dataFormat === channelsLast`: 5D tensor with shape `[batchSize, pooledDepths, pooledRows, pooledCols, channels]`
- If `dataFormat === channelsFirst`: 5D tensor with shape `[batchSize, channels, pooledDepths, pooledRows, pooledCols]`
{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
function avgPool1d
avgPool1d: (args: Pooling1DLayerArgs) => AveragePooling1D;
function avgPool2d
avgPool2d: (args: Pooling2DLayerArgs) => AveragePooling2D;
function avgPool3d
avgPool3d: (args: Pooling3DLayerArgs) => AveragePooling3D;
function avgPooling1d
avgPooling1d: (args: Pooling1DLayerArgs) => AveragePooling1D;
function avgPooling2d
avgPooling2d: (args: Pooling2DLayerArgs) => AveragePooling2D;
function avgPooling3d
avgPooling3d: (args: Pooling3DLayerArgs) => AveragePooling3D;
function batchNormalization
batchNormalization: (args?: BatchNormalizationLayerArgs) => BatchNormalization;
Batch normalization layer (Ioffe and Szegedy, 2014).
Normalize the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.
Input shape: Arbitrary. Use the keyword argument `inputShape` (Array of integers, does not include the sample axis) when calling the constructor of this class, if this layer is used as a first layer in a model.

Output shape: Same shape as input.
References: - [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167)
{heading: 'Layers', subheading: 'Normalization', namespace: 'layers'}
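A minimal usage sketch (layer sizes are arbitrary):

```js
const model = tf.sequential();
model.add(tf.layers.dense({units: 32, inputShape: [10]}));
model.add(tf.layers.batchNormalization());  // normalize the dense activations
model.add(tf.layers.dense({units: 1}));
model.summary();
```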
function bidirectional
bidirectional: (args: BidirectionalLayerArgs) => Bidirectional;
{heading: 'Layers', subheading: 'Wrapper', namespace: 'layers'}
function categoryEncoding
categoryEncoding: (args: CategoryEncodingArgs) => CategoryEncoding;
A preprocessing layer which encodes integer features.
This layer provides options for condensing data into a categorical encoding when the total number of tokens is known in advance. It accepts integer values as inputs, and it outputs a dense representation of those inputs.

Arguments:

numTokens: The total number of tokens the layer should support. All inputs to the layer must be integers in the range `0 <= value < numTokens`, or an error will be thrown.

outputMode: Specification for the output of the layer. Defaults to `multiHot`. Values can be `oneHot`, `multiHot` or `count`, configuring the layer as follows:
- oneHot: Encodes each individual element in the input into an array of `numTokens` size, containing a 1 at the element index. If the last dimension is size 1, will encode on that dimension. If the last dimension is not size 1, will append a new dimension for the encoded output.
- multiHot: Encodes each sample in the input into a single array of `numTokens` size, containing a 1 for each vocabulary term present in the sample. Treats the last dimension as the sample dimension; if the input shape is `(..., sampleLength)`, the output shape will be `(..., numTokens)`.
- count: Like `multiHot`, but the int array contains a count of the number of times the token at that index appeared in the sample.

For all output modes, currently only output up to rank 2 is supported.

Call arguments:
- inputs: A 1D or 2D tensor of integer inputs.
- countWeights: A tensor in the same shape as `inputs` indicating the weight for each sample value when summing up in `count` mode. Not used in `multiHot` or `oneHot` modes.

{heading: 'Layers', subheading: 'CategoryEncoding', namespace: 'layers'}
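A minimal usage sketch (the token count and input values are arbitrary illustrations):

```js
const encoder = tf.layers.categoryEncoding({numTokens: 4, outputMode: 'multiHot'});
const encoded = encoder.apply(tf.tensor2d([[0, 1, 1], [2, 3, 3]], [2, 3], 'int32'));
encoded.print();  // expected: [[1, 1, 0, 0], [0, 0, 1, 1]]
```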
function centerCrop
centerCrop: (args?: CenterCropArgs) => CenterCrop;
A preprocessing layer which center crops images.
This layer crops the central portion of the images to a target size. If an image is smaller than the target size, it will be resized and cropped so as to return the largest possible window in the image that matches the target aspect ratio.

Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and of integer or floating point dtype. If the input height/width is even and the target height/width is odd (or inversely), the input image is left-padded by 1 pixel.

Arguments:
- `height`: Integer, the height of the output shape.
- `width`: Integer, the width of the output shape.

Input shape: 3D (unbatched) or 4D (batched) tensor with shape `(..., height, width, channels)`, in `channelsLast` format.

Output shape: 3D (unbatched) or 4D (batched) tensor with shape `(..., targetHeight, targetWidth, channels)`.

{heading: 'Layers', subheading: 'CenterCrop', namespace: 'layers'}
function concatenate
concatenate: (args?: ConcatenateLayerArgs) => Concatenate;
Layer that concatenates an
Array
of inputs.It takes a list of tensors, all of the same shape except for the concatenation axis, and returns a single tensor, the concatenation of all inputs. For example:
const input1 = tf.input({shape: [2, 2]});const input2 = tf.input({shape: [2, 3]});const concatLayer = tf.layers.concatenate();const output = concatLayer.apply([input1, input2]);console.log(JSON.stringify(output.shape));// You get [null, 2, 5], with the first dimension as the undetermined batch// dimension. The last dimension (5) is the result of concatenating the// last dimensions of the inputs (2 and 3).{heading: 'Layers', subheading: 'Merge', namespace: 'layers'}
function conv1d
conv1d: (args: ConvLayerArgs) => Conv1D;
1D convolution layer (e.g., temporal convolution).
This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs.
If `useBias` is true, a bias vector is created and added to the outputs.

If `activation` is not `null`, it is applied to the outputs as well.

When using this layer as the first layer in a model, provide an `inputShape` argument (`Array` or `null`). For example, `inputShape` would be:
- `[10, 128]` for sequences of 10 vectors, each of 128 dimensions
- `[null, 128]` for variable-length sequences of 128-dimensional vectors.

{heading: 'Layers', subheading: 'Convolutional', namespace: 'layers'}
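A minimal usage sketch (filter count and kernel size are arbitrary):

```js
const model = tf.sequential();
model.add(tf.layers.conv1d({
  filters: 8,
  kernelSize: 3,
  activation: 'relu',
  inputShape: [10, 128],   // 10 time steps of 128-dimensional vectors
}));
console.log(JSON.stringify(model.outputShape));  // [null, 8, 8] with default 'valid' padding
```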
function conv2d
conv2d: (args: ConvLayerArgs) => Conv2D;
2D convolution layer (e.g. spatial convolution over images).
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs.
If `useBias` is true, a bias vector is created and added to the outputs.

If `activation` is not `null`, it is applied to the outputs as well.

When using this layer as the first layer in a model, provide the keyword argument `inputShape` (Array of integers, does not include the sample axis), e.g. `inputShape=[128, 128, 3]` for 128x128 RGB pictures in `dataFormat='channelsLast'`.

{heading: 'Layers', subheading: 'Convolutional', namespace: 'layers'}
function conv2dTranspose
conv2dTranspose: (args: ConvLayerArgs) => Conv2DTranspose;
Transposed convolutional layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
When using this layer as the first layer in a model, provide the configuration
inputShape
(Array
of integers, does not include the sample axis), e.g.,inputShape: [128, 128, 3]
for 128x128 RGB pictures indataFormat: 'channelsLast'
.Input shape: 4D tensor with shape:
[batch, channels, rows, cols]
ifdataFormat
is'channelsFirst'
. or 4D tensor with shape[batch, rows, cols, channels]
ifdataFormat
is'channelsLast'
.Output shape: 4D tensor with shape:
[batch, filters, newRows, newCols]
ifdataFormat
is'channelsFirst'
. or 4D tensor with shape:[batch, newRows, newCols, filters]
ifdataFormat
is'channelsLast'
.References: - [A guide to convolution arithmetic for deep learning](https://arxiv.org/abs/1603.07285v1) - [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf)
{heading: 'Layers', subheading: 'Convolutional', namespace: 'layers'}
function conv3d
conv3d: (args: ConvLayerArgs) => Conv3D;
3D convolution layer (e.g. spatial convolution over volumes).
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs.
If
useBias
is True, a bias vector is created and added to the outputs.If
activation
is notnull
, it is applied to the outputs as well.When using this layer as the first layer in a model, provide the keyword argument
inputShape
(Array of integers, does not include the sample axis), e.g.inputShape=[128, 128, 128, 1]
for 128x128x128 grayscale volumes indataFormat='channelsLast'
.{heading: 'Layers', subheading: 'Convolutional', namespace: 'layers'}
function conv3dTranspose
conv3dTranspose: (args: ConvLayerArgs) => Layer;
function convLstm2d
convLstm2d: (args: ConvLSTM2DArgs) => ConvLSTM2D;
{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
function convLstm2dCell
convLstm2dCell: (args: ConvLSTM2DCellArgs) => ConvLSTM2DCell;
{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
function cropping2D
cropping2D: (args: Cropping2DLayerArgs) => Cropping2D;
Cropping layer for 2D input (e.g., image).
This layer can crop an input at the top, bottom, left and right side of an image tensor.
Input shape: 4D tensor with shape: - If
dataFormat
is"channelsLast"
:[batch, rows, cols, channels]
- Ifdata_format
is"channels_first"
:[batch, channels, rows, cols]
.Output shape: 4D with shape: - If
dataFormat
is"channelsLast"
:[batch, croppedRows, croppedCols, channels]
- IfdataFormat
is"channelsFirst"
:[batch, channels, croppedRows, croppedCols]
.Examples
const model = tf.sequential();model.add(tf.layers.cropping2D({cropping:[[2, 2], [2, 2]],inputShape: [128, 128, 3]}));//now output shape is [batch, 124, 124, 3]{heading: 'Layers', subheading: 'Convolutional', namespace: 'layers'}
function dense
dense: (args: DenseLayerArgs) => Dense;
Creates a dense (fully connected) layer.
This layer implements the operation `output = activation(dot(input, kernel) + bias)`, where `activation` is the element-wise activation function passed as the `activation` argument, `kernel` is a weights matrix created by the layer, and `bias` is a bias vector created by the layer (only applicable if `useBias` is `true`).

**Input shape:** nD `tf.Tensor` with shape `(batchSize, ..., inputDim)`. The most common situation would be a 2D input with shape `(batchSize, inputDim)`.

**Output shape:** nD tensor with shape `(batchSize, ..., units)`. For instance, for a 2D input with shape `(batchSize, inputDim)`, the output would have shape `(batchSize, units)`.

Note: if the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with the kernel.
{heading: 'Layers', subheading: 'Basic', namespace: 'layers'}
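A minimal usage sketch in the functional (apply) style (unit counts are arbitrary):

```js
const input = tf.input({shape: [32]});
const output = tf.layers.dense({units: 4, activation: 'relu'}).apply(input);
console.log(JSON.stringify(output.shape));  // [null, 4]
```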
function depthwiseConv2d
depthwiseConv2d: (args: DepthwiseConv2DLayerArgs) => DepthwiseConv2D;
Depthwise separable 2D convolution.
Depthwise separable convolutions consist of performing just the first step of a depthwise spatial convolution (which acts on each input channel separately). The `depthMultiplier` argument controls how many output channels are generated per input channel in the depthwise step.

{heading: 'Layers', subheading: 'Convolutional', namespace: 'layers'}
function dot
dot: (args: DotLayerArgs) => Dot;
Layer that computes a dot product between samples in two tensors.
E.g., if applied to a list of two tensors
a
andb
both of shape[batchSize, n]
, the output will be a tensor of shape[batchSize, 1]
, where each entry at index[i, 0]
will be the dot product betweena[i, :]
andb[i, :]
.Example:
const dotLayer = tf.layers.dot({axes: -1});const x1 = tf.tensor2d([[10, 20], [30, 40]]);const x2 = tf.tensor2d([[-1, -2], [-3, -4]]);// Invoke the layer's apply() method in eager (imperative) mode.const y = dotLayer.apply([x1, x2]);y.print();{heading: 'Layers', subheading: 'Merge', namespace: 'layers'}
function dropout
dropout: (args: DropoutLayerArgs) => Dropout;
Applies [dropout](http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf) to the input.
Dropout consists of randomly setting a fraction `rate` of input units to 0 at each update during training time, which helps prevent overfitting.

{heading: 'Layers', subheading: 'Basic', namespace: 'layers'}
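A minimal usage sketch (rate and layer sizes are arbitrary):

```js
const model = tf.sequential();
model.add(tf.layers.dense({units: 16, activation: 'relu', inputShape: [8]}));
model.add(tf.layers.dropout({rate: 0.25}));  // active only during training
model.add(tf.layers.dense({units: 1}));
```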
function elu
elu: (args?: ELULayerArgs) => ELU;
Exponential Linear Unit (ELU).
It follows: `f(x) = alpha * (exp(x) - 1.)` for `x < 0`, `f(x) = x` for `x >= 0`.

Input shape: Arbitrary. Use the configuration `inputShape` when using this layer as the first layer in a model.

Output shape: Same shape as the input.
References: - [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)](https://arxiv.org/abs/1511.07289v1)
{ heading: 'Layers', subheading: 'Advanced Activation', namespace: 'layers' }
function embedding
embedding: (args: EmbeddingLayerArgs) => Embedding;
Maps positive integers (indices) into dense vectors of fixed size. E.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]
**Input shape:** 2D tensor with shape `[batchSize, sequenceLength]`.

**Output shape:** 3D tensor with shape `[batchSize, sequenceLength, outputDim]`.
{heading: 'Layers', subheading: 'Basic', namespace: 'layers'}
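A minimal usage sketch (vocabulary size and dimensions are arbitrary):

```js
const embed = tf.layers.embedding({inputDim: 1000, outputDim: 16, inputLength: 10});
const out = embed.apply(tf.ones([2, 10], 'int32'));  // two sequences of 10 token ids
console.log(JSON.stringify(out.shape));  // [2, 10, 16]
```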
function flatten
flatten: (args?: FlattenLayerArgs) => Flatten;
Flattens the input. Does not affect the batch size.
A
Flatten
layer flattens each batch in its inputs to 1D (making the output 2D).For example:
const input = tf.input({shape: [4, 3]});const flattenLayer = tf.layers.flatten();// Inspect the inferred output shape of the flatten layer, which// equals `[null, 12]`. The 2nd dimension is 4 * 3, i.e., the result of the// flattening. (The 1st dimension is the undermined batch size.)console.log(JSON.stringify(flattenLayer.apply(input).shape));{heading: 'Layers', subheading: 'Basic', namespace: 'layers'}
function gaussianDropout
gaussianDropout: (args: GaussianDropoutArgs) => GaussianDropout;
Apply multiplicative 1-centered Gaussian noise.
As it is a regularization layer, it is only active at training time.
Arguments:
- `rate`: float, drop probability (as with `Dropout`). The multiplicative noise will have standard deviation `sqrt(rate / (1 - rate))`.

Input shape: Arbitrary. Use the keyword argument `inputShape` (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape: Same shape as input.
References: - [Dropout: A Simple Way to Prevent Neural Networks from Overfitting]( http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf)
{heading: 'Layers', subheading: 'Noise', namespace: 'layers'}
function gaussianNoise
gaussianNoise: (args: GaussianNoiseArgs) => GaussianNoise;
Apply additive zero-centered Gaussian noise.
As it is a regularization layer, it is only active at training time.
This is useful to mitigate overfitting (you could see it as a form of random data augmentation). Gaussian Noise (GS) is a natural choice as corruption process for real valued inputs.
Arguments: stddev: float, standard deviation of the noise distribution.

Input shape: Arbitrary. Use the keyword argument `inputShape` (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape: Same shape as input.
{heading: 'Layers', subheading: 'Noise', namespace: 'layers'}
function globalAveragePooling1d
globalAveragePooling1d: (args?: LayerArgs) => GlobalAveragePooling1D;
Global average pooling operation for temporal data.
Input shape: 3D tensor with shape `[batchSize, steps, features]`.

Output shape: 2D tensor with shape `[batchSize, features]`.

{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
function globalAveragePooling2d
globalAveragePooling2d: ( args: GlobalPooling2DLayerArgs) => GlobalAveragePooling2D;
Global average pooling operation for spatial data.
Input shape:
- If `dataFormat` is `CHANNEL_LAST`: 4D tensor with shape `[batchSize, rows, cols, channels]`.
- If `dataFormat` is `CHANNEL_FIRST`: 4D tensor with shape `[batchSize, channels, rows, cols]`.

Output shape: 2D tensor with shape `[batchSize, channels]`.

{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
function globalMaxPooling1d
globalMaxPooling1d: (args?: LayerArgs) => GlobalMaxPooling1D;
Global max pooling operation for temporal data.
Input shape: 3D tensor with shape `[batchSize, steps, features]`.

Output shape: 2D tensor with shape `[batchSize, features]`.

{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
function globalMaxPooling2d
globalMaxPooling2d: (args: GlobalPooling2DLayerArgs) => GlobalMaxPooling2D;
Global max pooling operation for spatial data.
Input shape:
- If `dataFormat` is `CHANNEL_LAST`: 4D tensor with shape `[batchSize, rows, cols, channels]`.
- If `dataFormat` is `CHANNEL_FIRST`: 4D tensor with shape `[batchSize, channels, rows, cols]`.

Output shape: 2D tensor with shape `[batchSize, channels]`.

{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
function gru
gru: (args: GRULayerArgs) => GRU;
Gated Recurrent Unit - Cho et al. 2014.
This is an `RNN` layer consisting of one `GRUCell`. However, unlike the underlying `GRUCell`, the `apply` method of `GRU` operates on a sequence of inputs. The shape of the input (not including the first, batch dimension) needs to be at least 2-D, with the first dimension being time steps. For example:

```js
const rnn = tf.layers.gru({units: 8, returnSequences: true});

// Create an input with 10 time steps.
const input = tf.input({shape: [10, 20]});
const output = rnn.apply(input);

console.log(JSON.stringify(output.shape));
// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the
// same as the sequence length of `input`, due to `returnSequences`: `true`;
// 3rd dimension is the `GRUCell`'s number of units.
```

{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
function gruCell
gruCell: (args: GRUCellLayerArgs) => GRUCell;
Cell class for
GRU
.GRUCell
is distinct from theRNN
subclassGRU
in that itsapply
method takes the input data of only a single time step and returns the cell's output at the time step, whileGRU
takes the input data over a number of time steps. For example:const cell = tf.layers.gruCell({units: 2});const input = tf.input({shape: [10]});const output = cell.apply(input);console.log(JSON.stringify(output.shape));// [null, 10]: This is the cell's output at a single time step. The 1st// dimension is the unknown batch size.Instance(s) of
GRUCell
can be used to constructRNN
layers. The most typical use of this workflow is to combine a number of cells into a stacked RNN cell (i.e.,StackedRNNCell
internally) and use it to create an RNN. For example:const cells = [tf.layers.gruCell({units: 4}),tf.layers.gruCell({units: 8}),];const rnn = tf.layers.rnn({cell: cells, returnSequences: true});// Create an input with 10 time steps and a length-20 vector at each step.const input = tf.input({shape: [10, 20]});const output = rnn.apply(input);console.log(JSON.stringify(output.shape));// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the// same as the sequence length of `input`, due to `returnSequences`: `true`;// 3rd dimension is the last `gruCell`'s number of units.To create an
RNN
consisting of only *one*GRUCell
, use thetf.layers.gru
.{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
function input
input: (config: InputConfig) => SymbolicTensor;
Used to instantiate an input to a model as a
tf.SymbolicTensor
.Users should call the
input
factory function for consistency with other generator functions.Example:
// Defines a simple logistic regression model with 32 dimensional input// and 3 dimensional output.const x = tf.input({shape: [32]});const y = tf.layers.dense({units: 3, activation: 'softmax'}).apply(x);const model = tf.model({inputs: x, outputs: y});model.predict(tf.ones([2, 32])).print();Note:
input
is only necessary when usingmodel
. When usingsequential
, specifyinputShape
for the first layer or useinputLayer
as the first layer.{heading: 'Models', subheading: 'Inputs'}
function inputLayer
inputLayer: (args: InputLayerArgs) => InputLayer;
An input layer is an entry point into a
tf.LayersModel
.InputLayer
is generated automatically fortf.Sequential
models by specifying theinputshape
orbatchInputShape
for the first layer. It should not be specified explicitly. However, it can be useful sometimes, e.g., when constructing a sequential model from a subset of another sequential model's layers. Like the code snippet below shows.// Define a model which simply adds two inputs.const model1 = tf.sequential();model1.add(tf.layers.dense({inputShape: [4], units: 3, activation: 'relu'}));model1.add(tf.layers.dense({units: 1, activation: 'sigmoid'}));model1.summary();model1.predict(tf.zeros([1, 4])).print();// Construct another model, reusing the second layer of `model1` while// not using the first layer of `model1`. Note that you cannot add the second// layer of `model` directly as the first layer of the new sequential model,// because doing so will lead to an error related to the fact that the layer// is not an input layer. Instead, you need to create an `inputLayer` and add// it to the new sequential model before adding the reused layer.const model2 = tf.sequential();// Use an inputShape that matches the input shape of `model1`'s second// layer.model2.add(tf.layers.inputLayer({inputShape: [3]}));model2.add(model1.layers[1]);model2.summary();model2.predict(tf.zeros([1, 3])).print();{heading: 'Layers', subheading: 'Inputs', namespace: 'layers'}
function layerNormalization
layerNormalization: (args?: LayerNormalizationLayerArgs) => LayerNormalization;
Layer-normalization layer (Ba et al., 2016).
Normalizes the activations of the previous layer for each given example in a batch independently, instead of across a batch like in `batchNormalization`. In other words, this layer applies a transformation that maintains the mean activation within each example close to 0 and the activation variance close to 1.

Input shape: Arbitrary. Use the argument `inputShape` when using this layer as the first layer in a model.

Output shape: Same as input.
References: - [Layer Normalization](https://arxiv.org/abs/1607.06450)
{heading: 'Layers', subheading: 'Normalization', namespace: 'layers'}
function leakyReLU
leakyReLU: (args?: LeakyReLULayerArgs) => LeakyReLU;
Leaky version of a rectified linear unit.
It allows a small gradient when the unit is not active:
f(x) = alpha * x for x < 0.
f(x) = x for x >= 0.
Input shape: Arbitrary. Use the configuration `inputShape` when using this layer as the first layer in a model.

Output shape: Same shape as the input.
{ heading: 'Layers', subheading: 'Advanced Activation', namespace: 'layers' }
function lstm
lstm: (args: LSTMLayerArgs) => LSTM;
Long Short-Term Memory layer - Hochreiter 1997.
This is an
RNN
layer consisting of oneLSTMCell
. However, unlike the underlyingLSTMCell
, theapply
method ofLSTM
operates on a sequence of inputs. The shape of the input (not including the first, batch dimension) needs to be at least 2-D, with the first dimension being time steps. For example:```js const lstm = tf.layers.lstm({units: 8, returnSequences: true});
// Create an input with 10 time steps. const input = tf.input({shape: [10, 20]}); const output = lstm.apply(input);
console.log(JSON.stringify(output.shape)); // [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the // same as the sequence length of
input
, due toreturnSequences
:true
; // 3rd dimension is theLSTMCell
's number of units.{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
function lstmCell
lstmCell: (args: LSTMCellLayerArgs) => LSTMCell;
Cell class for
LSTM
.LSTMCell
is distinct from theRNN
subclassLSTM
in that itsapply
method takes the input data of only a single time step and returns the cell's output at the time step, whileLSTM
takes the input data over a number of time steps. For example:const cell = tf.layers.lstmCell({units: 2});const input = tf.input({shape: [10]});const output = cell.apply(input);console.log(JSON.stringify(output.shape));// [null, 10]: This is the cell's output at a single time step. The 1st// dimension is the unknown batch size.Instance(s) of
LSTMCell
can be used to constructRNN
layers. The most typical use of this workflow is to combine a number of cells into a stacked RNN cell (i.e.,StackedRNNCell
internally) and use it to create an RNN. For example:const cells = [tf.layers.lstmCell({units: 4}),tf.layers.lstmCell({units: 8}),];const rnn = tf.layers.rnn({cell: cells, returnSequences: true});// Create an input with 10 time steps and a length-20 vector at each step.const input = tf.input({shape: [10, 20]});const output = rnn.apply(input);console.log(JSON.stringify(output.shape));// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the// same as the sequence length of `input`, due to `returnSequences`: `true`;// 3rd dimension is the last `lstmCell`'s number of units.To create an
RNN
consisting of only *one*LSTMCell
, use thetf.layers.lstm
.{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
function masking
masking: (args?: MaskingArgs) => Masking;
Masks a sequence by using a mask value to skip timesteps.
If all features for a given sample timestep are equal to `maskValue`, then the sample timestep will be masked (skipped) in all downstream layers (as long as they support masking).

If any downstream layer does not support masking yet receives such an input mask, an exception will be raised.

Arguments:
- `maskValue`: Either `null` or the mask value to skip.

Input shape: Arbitrary. Use the keyword argument `inputShape` (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape: Same shape as input.
{heading: 'Layers', subheading: 'Mask', namespace: 'layers'}
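A minimal usage sketch (shapes, mask value, and unit count are arbitrary):

```js
const model = tf.sequential();
// Timesteps whose 4 features are all equal to 0 are skipped by the LSTM.
model.add(tf.layers.masking({maskValue: 0, inputShape: [10, 4]}));
model.add(tf.layers.lstm({units: 8}));
```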
function maximum
maximum: (args?: LayerArgs) => Maximum;
Layer that computes the element-wise maximum of an
Array
of inputs.It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). For example:
const input1 = tf.input({shape: [2, 2]});const input2 = tf.input({shape: [2, 2]});const maxLayer = tf.layers.maximum();const max = maxLayer.apply([input1, input2]);console.log(JSON.stringify(max.shape));// You get [null, 2, 2], with the first dimension as the undetermined batch// dimension.{heading: 'Layers', subheading: 'Merge', namespace: 'layers'}
function maxPooling1d
maxPooling1d: (args: Pooling1DLayerArgs) => MaxPooling1D;
Max pooling operation for temporal data.
Input shape:
[batchSize, inLength, channels]
Output shape:
[batchSize, pooledLength, channels]
{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
function maxPooling2d
maxPooling2d: (args: Pooling2DLayerArgs) => MaxPooling2D;
Max pooling operation for spatial data.
Input shape:
- If `dataFormat === CHANNEL_LAST`: 4D tensor with shape `[batchSize, rows, cols, channels]`
- If `dataFormat === CHANNEL_FIRST`: 4D tensor with shape `[batchSize, channels, rows, cols]`

Output shape:
- If `dataFormat === CHANNEL_LAST`: 4D tensor with shape `[batchSize, pooledRows, pooledCols, channels]`
- If `dataFormat === CHANNEL_FIRST`: 4D tensor with shape `[batchSize, channels, pooledRows, pooledCols]`
{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
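For example, a minimal sketch with the default 'channelsLast' data format and 'valid' padding (the pool size is illustrative):

```js
const input = tf.input({shape: [8, 8, 3]});
const pool = tf.layers.maxPooling2d({poolSize: [2, 2]});
const output = pool.apply(input);
// Rows and cols are halved; the channel count is unchanged.
console.log(JSON.stringify(output.shape)); // [null, 4, 4, 3]
```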
function maxPooling3d
maxPooling3d: (args: Pooling3DLayerArgs) => MaxPooling3D;
Max pooling operation for 3D data.
Input shape - If
dataFormat === channelsLast
: 5D tensor with shape:[batchSize, depths, rows, cols, channels]
- IfdataFormat === channelsFirst
: 5D tensor with shape:[batchSize, channels, depths, rows, cols]
Output shape - If
dataFormat=channelsLast
: 5D tensor with shape:[batchSize, pooledDepths, pooledRows, pooledCols, channels]
- IfdataFormat=channelsFirst
: 5D tensor with shape:[batchSize, channels, pooledDepths, pooledRows, pooledCols]
{heading: 'Layers', subheading: 'Pooling', namespace: 'layers'}
function minimum
minimum: (args?: LayerArgs) => Minimum;
Layer that computes the element-wise minimum of an
Array
of inputs.It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). For example:
const input1 = tf.input({shape: [2, 2]});const input2 = tf.input({shape: [2, 2]});const minLayer = tf.layers.minimum();const min = minLayer.apply([input1, input2]);console.log(JSON.stringify(min.shape));// You get [null, 2, 2], with the first dimension as the undetermined batch// dimension.{heading: 'Layers', subheading: 'Merge', namespace: 'layers'}
function multiply
multiply: (args?: LayerArgs) => Multiply;
Layer that multiplies (element-wise) an
Array
of inputs.It takes as input an Array of tensors, all of the same shape, and returns a single tensor (also of the same shape). For example:
```js
const input1 = tf.input({shape: [2, 2]});
const input2 = tf.input({shape: [2, 2]});
const input3 = tf.input({shape: [2, 2]});
const multiplyLayer = tf.layers.multiply();
const product = multiplyLayer.apply([input1, input2, input3]);
console.log(product.shape);
// You get [null, 2, 2], with the first dimension as the undetermined batch
// dimension.
```
{heading: 'Layers', subheading: 'Merge', namespace: 'layers'}
function permute
permute: (args: PermuteLayerArgs) => Permute;
Permutes the dimensions of the input according to a given pattern.
Useful for, e.g., connecting RNNs and convnets together.
Example:
const model = tf.sequential();model.add(tf.layers.permute({dims: [2, 1],inputShape: [10, 64]}));console.log(model.outputShape);// Now model's output shape is [null, 64, 10], where null is the// unpermuted sample (batch) dimension.Input shape: Arbitrary. Use the configuration field
inputShape
when using this layer as the first layer in a model.Output shape: Same rank as the input shape, but with the dimensions re-ordered (i.e., permuted) according to the
dims
configuration of this layer.{heading: 'Layers', subheading: 'Basic', namespace: 'layers'}
function prelu
prelu: (args?: PReLULayerArgs) => PReLU;
Parameterized version of a leaky rectified linear unit.
It follows
f(x) = alpha * x for x < 0.
f(x) = x for x >= 0.
whereinalpha
is a trainable weight.Input shape: Arbitrary. Use the configuration
inputShape
when using this layer as the first layer in a model.Output shape: Same shape as the input.
{ heading: 'Layers', subheading: 'Advanced Activation', namespace: 'layers' }
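For example, a minimal sketch (assuming the default all-zeros `alpha` initializer, so negative inputs initially map to 0):

```js
const model = tf.sequential();
model.add(tf.layers.prelu({inputShape: [4]}));
// With alpha initialized to zeros, the negative entries come out as 0.
model.predict(tf.tensor2d([[-1, -0.5, 0, 1]])).print();
```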
function randomWidth
randomWidth: (args: RandomWidthArgs) => RandomWidth;
A preprocessing layer which randomly varies image width during training.
This layer randomly adjusts the width of a batch of images by a random factor during training.
The input should be a 3D (unbatched) or 4D (batched) tensor in the
"channels_last"
image data format. Input pixel values can be of any range (e.g.[0., 1.)
or[0, 255]
) and of integer or floating point dtype. By default, the layer will output floats. By default, this layer is inactive during inference. For an overview and full list of preprocessing layers, see the preprocessing [guide] (https://www.tensorflow.org/guide/keras/preprocessing_layers).Arguments:
factor: A positive float (fraction of original width), or a tuple of size 2 representing the lower and upper bound for resizing horizontally. When represented as a single float, this value is used for both the upper and lower bound. For instance,
factor=(0.2, 0.3)
results in an output with width changed by a random amount in the range[20%, 30%]
.factor=(-0.2, 0.3)
results in an output with width changed by a random amount in the range[-20%, +30%]
.factor=0.2
results in an output with width changed by a random amount in the range[-20%, +20%]
. interpolation: String, the interpolation method. Defaults tobilinear
. Supports"bilinear"
,"nearest"
. The tf methods"bicubic"
,"area"
,"lanczos3"
,"lanczos5"
,"gaussian"
,"mitchellcubic"
are unimplemented in tfjs. seed: Integer. Used to create a random seed.Input shape: 3D (unbatched) or 4D (batched) tensor with shape:
(..., height, width, channels)
, in"channels_last"
format. Output shape: 3D (unbatched) or 4D (batched) tensor with shape:(..., height, random_width, channels)
.{heading: 'Layers', subheading: 'RandomWidth', namespace: 'layers'}
function reLU
reLU: (args?: ReLULayerArgs) => ReLU;
Rectified Linear Unit activation function.
Input shape: Arbitrary. Use the config field
inputShape
(Array of integers, does not include the sample axis) when using this layer as the first layer in a model.Output shape: Same shape as the input.
{ heading: 'Layers', subheading: 'Advanced Activation', namespace: 'layers' }
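For example, a minimal sketch using the optional `maxValue` cap (the input values are illustrative):

```js
const relu = tf.layers.reLU({maxValue: 6});
// Negative values are clipped to 0; values above maxValue are clipped to 6.
relu.apply(tf.tensor1d([-2, -1, 0, 3, 8])).print(); // [0, 0, 0, 3, 6]
```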
function repeatVector
repeatVector: (args: RepeatVectorLayerArgs) => RepeatVector;
Repeats the input n times in a new dimension.
const model = tf.sequential();model.add(tf.layers.repeatVector({n: 4, inputShape: [2]}));const x = tf.tensor2d([[10, 20]]);// Use the model to do inference on a data point the model hasn't seenmodel.predict(x).print();// output shape is now [batch, 2, 4]{heading: 'Layers', subheading: 'Basic', namespace: 'layers'}
function rescaling
rescaling: (args?: RescalingArgs) => Rescaling;
A preprocessing layer which rescales input values to a new range.
This layer rescales every value of an input (often an image) by multiplying by
scale
and addingoffset
.For instance: 1. To rescale an input in the `[0, 255]` range to be in the
[0, 1]
range, you would passscale=1/255
. 2. To rescale an input in the `[0, 255]` range to be in the[-1, 1]
range, you would passscale=1./127.5, offset=-1
. The rescaling is applied both during training and inference. Inputs can be of integer or floating point dtype, and by default the layer will output floats.Arguments: -
scale
: Float, the scale to apply to the inputs. -offset
: Float, the offset to apply to the inputs.Input shape: Arbitrary.
Output shape: Same as input.
{heading: 'Layers', subheading: 'Rescaling', namespace: 'layers'}
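For example, a minimal sketch that maps `[0, 255]` pixel values into `[0, 1]`:

```js
const rescale = tf.layers.rescaling({scale: 1 / 255});
rescale.apply(tf.tensor1d([0, 127.5, 255])).print(); // [0, 0.5, 1]
```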
function reshape
reshape: (args: ReshapeLayerArgs) => Reshape;
Reshapes an input to a certain shape.
const input = tf.input({shape: [4, 3]});const reshapeLayer = tf.layers.reshape({targetShape: [2, 6]});// Inspect the inferred output shape of the Reshape layer, which// equals `[null, 2, 6]`. (The 1st dimension is the undetermined batch size.)console.log(JSON.stringify(reshapeLayer.apply(input).shape));Input shape: Arbitrary, although all dimensions in the input shape must be fixed. Use the configuration
inputShape
when using this layer as the first layer in a model.Output shape: [batchSize, targetShape[0], targetShape[1], ..., targetShape[targetShape.length - 1]].
{heading: 'Layers', subheading: 'Basic', namespace: 'layers'}
function resizing
resizing: (args?: ResizingArgs) => Resizing;
A preprocessing layer which resizes images. This layer resizes an image input to a target height and width. The input should be a 4D (batched) or 3D (unbatched) tensor in
"channels_last"
format. Input pixel values can be of any range (e.g.[0., 1.)
or `[0, 255]`) and of integer or floating point dtype. By default, the layer will output floats.Arguments: -
height
: number, the height for the output tensor. -width
: number, the width for the output tensor. -interpolation
: string, the method for image resizing interpolation. -cropToAspectRatio
: boolean, whether to keep image aspect ratio.Input shape: Arbitrary.
Output shape: height, width, num channels.
{heading: 'Layers', subheading: 'Resizing', namespace: 'layers'}
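For example, a minimal sketch that resizes an unbatched `"channels_last"` image (the target size is illustrative):

```js
const resize = tf.layers.resizing({height: 4, width: 4});
const image = tf.zeros([8, 8, 3]); // 3D (unbatched) input
const resized = resize.apply(image);
console.log(JSON.stringify(resized.shape)); // [4, 4, 3]
```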
function rnn
rnn: (args: RNNLayerArgs) => RNN;
Base class for recurrent layers.
Input shape: 3D tensor with shape
[batchSize, timeSteps, inputDim]
.Output shape: - if
returnState
, an Array of tensors (i.e.,tf.Tensor
s). The first tensor is the output. The remaining tensors are the states at the last time step, each with shape[batchSize, units]
. - ifreturnSequences
, the output will have shape[batchSize, timeSteps, units]
. - else, the output will have shape[batchSize, units]
.Masking: This layer supports masking for input data with a variable number of timesteps. To introduce masks to your data, use an embedding layer with the
maskZero
parameter set to `true`
.Notes on using statefulness in RNNs: You can set RNN layers to be 'stateful', which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches.
To enable statefulness: - specify
stateful: true
in the layer constructor. - specify a fixed batch size for your model: for a sequential model, pass `batchInputShape: [...]` to the first layer of the model; for a functional model with one or more Input layers, pass `batchShape: [...]` to all of the first layers. This is the expected shape of your inputs *including the batch size*. It should be an Array of integers, e.g. `[32, 10, 100]`. - specify `shuffle: false`
when calling fit().To reset the states of your model, call
.resetStates()
on either a specific layer, or on your entire model.Note on specifying the initial state of RNNs You can specify the initial state of RNN layers symbolically by calling them with the option
initialState
. The value ofinitialState
should be a tensor or list of tensors representing the initial state of the RNN layer.You can specify the initial state of RNN layers numerically by calling
resetStates
with the keyword argumentstates
. The value ofstates
should be a tensor or an Array of tensors representing the initial state of the RNN layer.Note on passing external constants to RNNs You can pass "external" constants to the cell using the
constants
keyword argument ofRNN.call
method. This requires that thecell.call
method accepts the same keyword argumentconstants
. Such constants can be used to condition the cell transformation on additional static inputs (not changing over time), a.k.a. an attention mechanism.{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
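A minimal sketch of the stateful workflow described above (the batch size, sequence length, and loss are illustrative):

```js
// A fixed batch size of 4 is declared via batchInputShape.
const lstm = tf.layers.lstm({
  units: 8,
  stateful: true,
  batchInputShape: [4, 10, 20],
});
const model = tf.sequential();
model.add(lstm);
model.compile({optimizer: 'adam', loss: 'meanSquaredError'});
// Train with `shuffle: false` so that batch k+1 follows batch k, then
// clear the carried-over states before starting a new set of sequences.
lstm.resetStates();
```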
function separableConv2d
separableConv2d: (args: SeparableConvLayerArgs) => SeparableConv2D;
Depthwise separable 2D convolution.
Separable convolution consists of first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes together the resulting output channels. The
depthMultiplier
argument controls how many output channels are generated per input channel in the depthwise step.Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels, or as an extreme version of an Inception block.
Input shape: 4D tensor with shape:
[batch, channels, rows, cols]
if data_format='channelsFirst' or 4D tensor with shape:[batch, rows, cols, channels]
if data_format='channelsLast'.Output shape: 4D tensor with shape:
[batch, filters, newRows, newCols]
if data_format='channelsFirst' or 4D tensor with shape:[batch, newRows, newCols, filters]
if data_format='channelsLast'.rows
andcols
values might have changed due to padding.{heading: 'Layers', subheading: 'Convolutional', namespace: 'layers'}
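For example, a minimal sketch with the default 'valid' padding (the filter count and kernel size are illustrative):

```js
const model = tf.sequential();
model.add(tf.layers.separableConv2d({
  filters: 8,
  kernelSize: 3,
  inputShape: [28, 28, 3],
}));
// With 'valid' padding, rows and cols shrink from 28 to 26.
console.log(JSON.stringify(model.outputs[0].shape)); // [null, 26, 26, 8]
```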
function simpleRNN
simpleRNN: (args: SimpleRNNLayerArgs) => SimpleRNN;
Fully-connected RNN where the output is to be fed back to input.
This is an
RNN
layer consisting of oneSimpleRNNCell
. However, unlike the underlyingSimpleRNNCell
, theapply
method ofSimpleRNN
operates on a sequence of inputs. The shape of the input (not including the first, batch dimension) needs to be at least 2-D, with the first dimension being time steps. For example:const rnn = tf.layers.simpleRNN({units: 8, returnSequences: true});// Create an input with 10 time steps.const input = tf.input({shape: [10, 20]});const output = rnn.apply(input);console.log(JSON.stringify(output.shape));// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the// same as the sequence length of `input`, due to `returnSequences`: `true`;// 3rd dimension is the `SimpleRNNCell`'s number of units.{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
function simpleRNNCell
simpleRNNCell: (args: SimpleRNNCellLayerArgs) => SimpleRNNCell;
Cell class for
SimpleRNN
.SimpleRNNCell
is distinct from theRNN
subclassSimpleRNN
in that itsapply
method takes the input data of only a single time step and returns the cell's output at the time step, whileSimpleRNN
takes the input data over a number of time steps. For example:const cell = tf.layers.simpleRNNCell({units: 2});const input = tf.input({shape: [10]});const output = cell.apply(input);console.log(JSON.stringify(output.shape));// [null, 10]: This is the cell's output at a single time step. The 1st// dimension is the unknown batch size.Instance(s) of
SimpleRNNCell
can be used to constructRNN
layers. The most typical use of this workflow is to combine a number of cells into a stacked RNN cell (i.e.,StackedRNNCell
internally) and use it to create an RNN. For example:const cells = [tf.layers.simpleRNNCell({units: 4}),tf.layers.simpleRNNCell({units: 8}),];const rnn = tf.layers.rnn({cell: cells, returnSequences: true});// Create an input with 10 time steps and a length-20 vector at each step.const input = tf.input({shape: [10, 20]});const output = rnn.apply(input);console.log(JSON.stringify(output.shape));// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the// same as the sequence length of `input`, due to `returnSequences`: `true`;// 3rd dimension is the last `SimpleRNNCell`'s number of units.To create an
RNN
consisting of only *one*SimpleRNNCell
, use thetf.layers.simpleRNN
.{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
function softmax
softmax: (args?: SoftmaxLayerArgs) => Softmax;
Softmax activation layer.
Input shape: Arbitrary. Use the configuration
inputShape
when using this layer as the first layer in a model.Output shape: Same shape as the input.
{ heading: 'Layers', subheading: 'Advanced Activation', namespace: 'layers' }
function spatialDropout1d
spatialDropout1d: (args: SpatialDropout1DLayerConfig) => SpatialDropout1D;
Spatial 1D version of Dropout.
This Layer type performs the same function as the Dropout layer, but it drops entire 1D feature maps instead of individual elements. For example, if an input example consists of 3 timesteps and the feature map for each timestep has a size of 4, a
spatialDropout1d
layer may zero out the feature maps of the 1st and 2nd timesteps completely while sparing all feature elements of the 3rd timestep.If adjacent frames (timesteps) are strongly correlated (as is normally the case in early convolution layers), regular dropout will not regularize the activations and will merely result in an effective learning-rate decrease. In this case,
spatialDropout1d
will help promote independence among feature maps and should be used instead.**Arguments:** rate: A floating-point number >=0 and <=1. Fraction of the input elements to drop.
**Input shape:** 3D tensor with shape
(samples, timesteps, channels)
.**Output shape:** Same as the input shape.
References: - [Efficient Object Localization Using Convolutional Networks](https://arxiv.org/abs/1411.4280)
{heading: 'Layers', subheading: 'Basic', namespace: 'layers'}
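For example, a minimal sketch placing the layer before a 1D convolution (the rate and shapes are illustrative); note that dropout is only active during training, e.g. inside `fit()`:

```js
const model = tf.sequential();
model.add(tf.layers.spatialDropout1d({rate: 0.5, inputShape: [3, 4]}));
model.add(tf.layers.conv1d({filters: 2, kernelSize: 2}));
// Dropout has no effect on the static output shape.
console.log(JSON.stringify(model.outputs[0].shape)); // [null, 2, 2]
```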
function stackedRNNCells
stackedRNNCells: (args: StackedRNNCellsArgs) => StackedRNNCells;
Wrapper allowing a stack of RNN cells to behave as a single cell.
Used to implement efficient stacked RNNs.
{heading: 'Layers', subheading: 'Recurrent', namespace: 'layers'}
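For example, a minimal sketch wrapping two GRU cells into a single cell and driving it with `tf.layers.rnn()` (the unit counts are illustrative):

```js
const stacked = tf.layers.stackedRNNCells({
  cells: [tf.layers.gruCell({units: 4}), tf.layers.gruCell({units: 8})],
});
const rnn = tf.layers.rnn({cell: stacked, returnSequences: false});
const output = rnn.apply(tf.input({shape: [10, 20]}));
// The last cell in the stack determines the output size.
console.log(JSON.stringify(output.shape)); // [null, 8]
```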
function thresholdedReLU
thresholdedReLU: (args?: ThresholdedReLULayerArgs) => ThresholdedReLU;
Thresholded Rectified Linear Unit.
It follows:
f(x) = x for x > theta
,f(x) = 0 otherwise
.Input shape: Arbitrary. Use the configuration
inputShape
when using this layer as the first layer in a model.Output shape: Same shape as the input.
References: - [Zero-Bias Autoencoders and the Benefits of Co-Adapting Features](http://arxiv.org/abs/1402.3337)
{ heading: 'Layers', subheading: 'Advanced Activation', namespace: 'layers' }
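For example, a minimal sketch with `theta: 1.0` (the input values are illustrative):

```js
const layer = tf.layers.thresholdedReLU({theta: 1.0});
// Values <= theta become 0; values above theta pass through unchanged.
layer.apply(tf.tensor1d([-1, 0.5, 1.5, 3])).print(); // [0, 0, 1.5, 3]
```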
function timeDistributed
timeDistributed: (args: WrapperLayerArgs) => TimeDistributed;
This wrapper applies a layer to every temporal slice of an input.
The input should be at least 3D, and the dimension of the index
1
will be considered to be the temporal dimension.Consider a batch of 32 samples, where each sample is a sequence of 10 vectors of 16 dimensions. The batch input shape of the layer is then `[32, 10, 16]`, and the `inputShape`, not including the sample dimension, is `[10, 16]`
.You can then use
TimeDistributed
to apply aDense
layer to each of the 10 timesteps, independently:const model = tf.sequential();model.add(tf.layers.timeDistributed({layer: tf.layers.dense({units: 8}),inputShape: [10, 16],}));// Now model.outputShape = [null, 10, 8].// The output will then have shape `[32, 10, 8]`.// In subsequent layers, there is no need for `inputShape`:model.add(tf.layers.timeDistributed({layer: tf.layers.dense({units: 32})}));console.log(JSON.stringify(model.outputs[0].shape));// Now model.outputShape = [null, 10, 32].The output will then have shape
[32, 10, 32]
.TimeDistributed
can be used with arbitrary layers, not justDense
, for instance aConv2D
layer.const model = tf.sequential();model.add(tf.layers.timeDistributed({layer: tf.layers.conv2d({filters: 64, kernelSize: [3, 3]}),inputShape: [10, 299, 299, 3],}));console.log(JSON.stringify(model.outputs[0].shape));{heading: 'Layers', subheading: 'Wrapper', namespace: 'layers'}
function upSampling2d
upSampling2d: (args: UpSampling2DLayerArgs) => UpSampling2D;
Upsampling layer for 2D inputs.
Repeats the rows and columns of the data by size[0] and size[1] respectively.
Input shape: 4D tensor with shape: - If
dataFormat
is"channelsLast"
:[batch, rows, cols, channels]
- IfdataFormat
is"channelsFirst"
:[batch, channels, rows, cols]
Output shape: 4D tensor with shape: - If
dataFormat
is"channelsLast"
:[batch, upsampledRows, upsampledCols, channels]
- IfdataFormat
is"channelsFirst"
:[batch, channels, upsampledRows, upsampledCols]
{heading: 'Layers', subheading: 'Convolutional', namespace: 'layers'}
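For example, a minimal sketch that doubles the rows and columns (the size is illustrative):

```js
const up = tf.layers.upSampling2d({size: [2, 2]});
const output = up.apply(tf.input({shape: [4, 4, 3]}));
console.log(JSON.stringify(output.shape)); // [null, 8, 8, 3]
```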
function zeroPadding2d
zeroPadding2d: (args?: ZeroPadding2DLayerArgs) => ZeroPadding2D;
Zero-padding layer for 2D input (e.g., image).
This layer can add rows and columns of zeros at the top, bottom, left and right side of an image tensor.
Input shape: 4D tensor with shape: - If
dataFormat
is"channelsLast"
:[batch, rows, cols, channels]
- Ifdata_format
is"channels_first"
:[batch, channels, rows, cols]
.Output shape: 4D with shape: - If
dataFormat
is"channelsLast"
:[batch, paddedRows, paddedCols, channels]
- IfdataFormat
is"channelsFirst"
:[batch, channels, paddedRows, paddedCols]
.{heading: 'Layers', subheading: 'Padding', namespace: 'layers'}
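For example, a minimal sketch with asymmetric row/column padding (the padding amounts are illustrative):

```js
// Pad 1 row of zeros on the top and bottom, and 2 columns on the left and right.
const pad = tf.layers.zeroPadding2d({padding: [[1, 1], [2, 2]]});
const output = pad.apply(tf.input({shape: [4, 4, 3]}));
console.log(JSON.stringify(output.shape)); // [null, 6, 8, 3]
```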
class Layer
abstract class Layer extends serialization.Serializable {}
A layer is a grouping of operations and weights that can be composed to create a
tf.LayersModel
.Layers are constructed by using the functions under the [tf.layers](#Layers-Basic) namespace.
{heading: 'Layers', subheading: 'Classes', namespace: 'layers'}
constructor
constructor(args?: LayerArgs);
property activityRegularizer
activityRegularizer: Regularizer;
property batchInputShape
batchInputShape: Shape;
property built
built: boolean;
property dtype
dtype: DataType;
property id
readonly id: number;
property inboundNodes
inboundNodes: Node[];
property initialWeights
initialWeights: Tensor[];
property input
readonly input: SymbolicTensor | SymbolicTensor[];
Retrieves the input tensor(s) of a layer.
Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.
Input tensor or list of input tensors.
AttributeError if the layer is connected to more than one incoming layer.
property inputSpec
inputSpec: InputSpec[];
List of InputSpec class instances.
Each entry describes one required input: - ndim - dtype A layer with
n
input tensors must have aninputSpec
of lengthn
.
property losses
readonly losses: RegularizerFn[];
property name
name: string;
Name for this layer. Must be unique within a model.
property nonTrainableWeights
nonTrainableWeights: LayerVariable[];
property outboundNodes
outboundNodes: Node[];
property output
readonly output: SymbolicTensor | SymbolicTensor[];
Retrieves the output tensor(s) of a layer.
Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.
Output tensor or list of output tensors.
AttributeError if the layer is connected to more than one incoming layer.
property outputShape
readonly outputShape: Shape | Shape[];
Retrieves the output shape(s) of a layer.
Only applicable if the layer has only one inbound node, or if all inbound nodes have the same output shape.
Returns
Output shape or shapes.
Throws
AttributeError: if the layer is connected to more than one incoming node.
{heading: 'Models', 'subheading': 'Classes'}
property stateful
readonly stateful: boolean;
property supportsMasking
supportsMasking: boolean;
property trainable
trainable: boolean;
property trainable_
protected trainable_: boolean;
Whether the layer weights will be updated during training.
property trainableWeights
trainableWeights: LayerVariable[];
property updates
readonly updates: Tensor[];
property weights
readonly weights: LayerVariable[];
The concatenation of the lists trainableWeights and nonTrainableWeights (in this order).
method addLoss
addLoss: (losses: RegularizerFn | RegularizerFn[]) => void;
Add losses to the layer.
The loss may potentially be conditional on some inputs tensors, for instance activity losses are conditional on the layer's inputs.
{heading: 'Models', 'subheading': 'Classes'}
method addWeight
protected addWeight: ( name: string, shape: Shape, dtype?: DataType, initializer?: Initializer, regularizer?: Regularizer, trainable?: boolean, constraint?: Constraint, getInitializerFunc?: Function) => LayerVariable;
Adds a weight variable to the layer.
Parameter name
Name of the new weight variable.
Parameter shape
The shape of the weight.
Parameter dtype
The dtype of the weight.
Parameter initializer
An initializer instance.
Parameter regularizer
A regularizer instance.
Parameter trainable
Whether the weight should be trained via backprop or not (assuming that the layer itself is also trainable).
Parameter constraint
An optional constraint.

Returns

The created weight variable.
{heading: 'Models', 'subheading': 'Classes'}
method apply
apply: ( inputs: Tensor | Tensor[] | SymbolicTensor | SymbolicTensor[], kwargs?: Kwargs) => Tensor | Tensor[] | SymbolicTensor | SymbolicTensor[];
Builds or executes a
Layer
's logic.When called with
tf.Tensor
(s), execute theLayer
's computation and return Tensor(s). For example:const denseLayer = tf.layers.dense({units: 1,kernelInitializer: 'zeros',useBias: false});// Invoke the layer's apply() method with a `tf.Tensor` (with concrete// numeric values).const input = tf.ones([2, 2]);const output = denseLayer.apply(input);// The output's value is expected to be [[0], [0]], due to the fact that// the dense layer has a kernel initialized to all-zeros and does not have// a bias.output.print();When called with
tf.SymbolicTensor
(s), this will prepare the layer for future execution. This entails internal book-keeping on shapes of expected Tensors, wiring layers together, and initializing weights.Calling
apply
withtf.SymbolicTensor
s is typically done during the building of non-tf.Sequential
models. For example:const flattenLayer = tf.layers.flatten();const denseLayer = tf.layers.dense({units: 1});// Use tf.layers.input() to obtain a SymbolicTensor as input to apply().const input = tf.input({shape: [2, 2]});const output1 = flattenLayer.apply(input);// output1.shape is [null, 4]. The first dimension is the undetermined// batch size. The second dimension comes from flattening the [2, 2]// shape.console.log(JSON.stringify(output1.shape));// The output SymbolicTensor of the flatten layer can be used to call// the apply() of the dense layer:const output2 = denseLayer.apply(output1);// output2.shape is [null, 1]. The first dimension is the undetermined// batch size. The second dimension matches the number of units of the// dense layer.console.log(JSON.stringify(output2.shape));// The input and output can be used to construct a model that consists// of the flatten and dense layers.const model = tf.model({inputs: input, outputs: output2});Parameter inputs
a
tf.Tensor
ortf.SymbolicTensor
or an Array of them.Parameter kwargs
Additional keyword arguments to be passed to
call()
.Output of the layer's
call
method.ValueError in case the layer is missing shape information for its
build
call.{heading: 'Models', 'subheading': 'Classes'}
method assertInputCompatibility
protected assertInputCompatibility: ( inputs: Tensor | Tensor[] | SymbolicTensor | SymbolicTensor[]) => void;
Checks compatibility between the layer and provided inputs.
This checks that the tensor(s)
input
verify the input assumptions of the layer (if any). If not, exceptions are raised.Parameter inputs
Input tensor or list of input tensors.
ValueError in case of mismatch between the provided inputs and the expectations of the layer.
method assertNotDisposed
protected assertNotDisposed: () => void;
method build
build: (inputShape: Shape | Shape[]) => void;
Creates the layer weights.
Must be implemented on all layers that have weights.
Called when apply() is called to construct the weights.
Parameter inputShape
A
Shape
or array ofShape
(unused).{heading: 'Models', 'subheading': 'Classes'}
method calculateLosses
calculateLosses: () => Scalar[];
Retrieves the Layer's current loss values.
Used for regularizers during training.
method call
call: (inputs: Tensor | Tensor[], kwargs: Kwargs) => Tensor | Tensor[];
This is where the layer's logic lives.
Parameter inputs
Input tensor, or list/tuple of input tensors.
Parameter kwargs
Additional keyword arguments.
A tensor or list/tuple of tensors.
method clearCallHook
clearCallHook: () => void;
Clear call hook. This is currently used for testing only.
method computeMask
computeMask: ( inputs: Tensor | Tensor[], mask?: Tensor | Tensor[]) => Tensor | Tensor[];
Computes an output mask tensor.
Parameter inputs
Tensor or list of tensors.
Parameter mask
Tensor or list of tensors.
null or a tensor (or list of tensors, one per output tensor of the layer).
method computeOutputShape
computeOutputShape: (inputShape: Shape | Shape[]) => Shape | Shape[];
Computes the output shape of the layer.
Assumes that the layer will be built to match that input shape provided.
Parameter inputShape
A shape (tuple of integers) or a list of shape tuples (one per output tensor of the layer). Shape tuples can include null for free dimensions, instead of an integer.
{heading: 'Models', 'subheading': 'Classes'}
method countParams
countParams: () => number;
Counts the total number of numbers (e.g., float32, int32) in the weights.
Returns
An integer count.
Throws
RuntimeError: If the layer is not built yet (in which case its weights are not defined yet.)
{heading: 'Models', 'subheading': 'Classes'}
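For example, a minimal sketch (the layer must be built, e.g. by calling `apply()`, before its parameters can be counted):

```js
const dense = tf.layers.dense({units: 3, inputShape: [4]});
dense.apply(tf.input({shape: [4]})); // builds the layer's weights
// 4 * 3 kernel entries + 3 bias entries = 15.
console.log(dense.countParams()); // 15
```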
method dispose
dispose: () => DisposeResult;
Attempt to dispose layer's weights.
This method decreases the reference count of the Layer object by 1.
A Layer is reference-counted. Its reference count is incremented by 1 the first time its
apply()
method is called and when it becomes a part of a newNode
(through calling theapply()
method on atf.SymbolicTensor
).If the reference count of a Layer becomes 0, all the weights will be disposed and the underlying memory (e.g., the textures allocated in WebGL) will be freed.
Note: If the reference count is greater than 0 after the decrement, the weights of the Layer will *not* be disposed.
After a Layer is disposed, it cannot be used in calls such as
apply()
,getWeights()
orsetWeights()
anymore.Returns
A DisposeResult Object with the following fields: - refCountAfterDispose: The reference count of the Container after this
dispose()
call. - numDisposedVariables: Number oftf.Variable
s (i.e., weights) disposed during thisdispose()
call.Throws
{Error} If the layer is not built yet, or if the layer has already been disposed.
{heading: 'Models', 'subheading': 'Classes'}
method disposeWeights
protected disposeWeights: () => number;
Dispose the weight variables that this Layer instance holds.
Returns
{number} Number of disposed variables.
method getConfig
getConfig: () => serialization.ConfigDict;
Returns the config of the layer.
A layer config is a TS dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by 'Container' (one layer of abstraction above).
Porting Note: The TS dictionary follows TS naming standards for keys, and uses tfjs-layers type-safe Enums. Serialization methods should use a helper function to convert to the pythonic storage standard. (see serialization_utils.convertTsToPythonic)
Returns
TS dictionary of configuration.
{heading: 'Models', 'subheading': 'Classes'}
method getInputAt
getInputAt: (nodeIndex: number) => SymbolicTensor | SymbolicTensor[];
Retrieves the input tensor(s) of a layer at a given node.
Parameter nodeIndex
Integer, index of the node from which to retrieve the attribute. E.g.
nodeIndex=0
will correspond to the first time the layer was called.A tensor (or list of tensors if the layer has multiple inputs).
method getOutputAt
getOutputAt: (nodeIndex: number) => SymbolicTensor | SymbolicTensor[];
Retrieves the output tensor(s) of a layer at a given node.
Parameter nodeIndex
Integer, index of the node from which to retrieve the attribute. E.g.
nodeIndex=0
will correspond to the first time the layer was called.A tensor (or list of tensors if the layer has multiple outputs).
method getWeights
getWeights: (trainableOnly?: boolean) => Tensor[];
Returns the current values of the weights of the layer.
Parameter trainableOnly
Whether to get the values of only trainable weights.
Returns
Weight values as an
Array
oftf.Tensor
s.{heading: 'Models', 'subheading': 'Classes'}
method invokeCallHook
protected invokeCallHook: (inputs: Tensor | Tensor[], kwargs: Kwargs) => void;
method nodeKey
protected static nodeKey: (layer: Layer, nodeIndex: number) => string;
Converts a layer and its index to a unique (immutable type) name. This function is used internally with
this.containerNodes
.Parameter layer
The layer.
Parameter nodeIndex
The layer's position (e.g. via enumerate) in a list of nodes.
Returns
The unique name.
method resetStates
resetStates: () => void;
Reset the states of the layer.
This method of the base Layer class is essentially a no-op. Subclasses that are stateful (e.g., stateful RNNs) should override this method.
method setCallHook
setCallHook: (callHook: CallHook) => void;
Set call hook. This is currently used for testing only.
Parameter callHook
method setFastWeightInitDuringBuild
setFastWeightInitDuringBuild: (value: boolean) => void;
Set the fast-weight-initialization flag.
In cases where the initialized weight values will be immediately overwritten by loaded weight values during model loading, setting the flag to
true
saves unnecessary calls to potentially expensive initializers and speeds up the loading process.Parameter value
Target value of the flag.
method setWeights
setWeights: (weights: Tensor[]) => void;
Sets the weights of the layer, from Tensors.
Parameter weights
a list of Tensors. The number of tensors and their shapes must match the number and shapes of the layer's weights (i.e., it should match the output of
getWeights
).ValueError If the provided weights list does not match the layer's specifications.
{heading: 'Models', 'subheading': 'Classes'}
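For example, a minimal sketch that copies the weights of one dense layer into another of the same shape (the layer sizes are illustrative):

```js
const layerA = tf.layers.dense({units: 2, inputShape: [3]});
const layerB = tf.layers.dense({units: 2, inputShape: [3]});
// Build both layers so their kernels and biases exist.
layerA.apply(tf.input({shape: [3]}));
layerB.apply(tf.input({shape: [3]}));
// Copy layerA's kernel and bias tensors into layerB.
layerB.setWeights(layerA.getWeights());
```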
method warnOnIncompatibleInputShape
protected warnOnIncompatibleInputShape: (inputShape: Shape) => void;
Check compatibility between input shape and this layer's batchInputShape.
Print warning if any incompatibility is found.
Parameter inputShape
Input shape to be checked.
class RNN
class RNN extends Layer {}
constructor
constructor(args: RNNLayerArgs);
property cell
readonly cell: RNNCell;
property className
static className: string;
property goBackwards
readonly goBackwards: boolean;
property keptStates
protected keptStates: Tensor[][];
property nonTrainableWeights
readonly nonTrainableWeights: LayerVariable[];
property returnSequences
readonly returnSequences: boolean;
property returnState
readonly returnState: boolean;
property states
states: Tensor[];
Get the current state tensors of the RNN.
If the state hasn't been set, return an array of
null
s of the correct length.
property states_
protected states_: Tensor[];
property stateSpec
stateSpec: InputSpec[];
property trainableWeights
readonly trainableWeights: LayerVariable[];
property unroll
readonly unroll: boolean;
method apply
apply: ( inputs: Tensor | Tensor[] | SymbolicTensor | SymbolicTensor[], kwargs?: Kwargs) => Tensor | Tensor[] | SymbolicTensor | SymbolicTensor[];
method build
build: (inputShape: Shape | Shape[]) => void;
method call
call: (inputs: Tensor | Tensor[], kwargs: Kwargs) => Tensor | Tensor[];
method computeMask
computeMask: ( inputs: Tensor | Tensor[], mask?: Tensor | Tensor[]) => Tensor | Tensor[];
method computeOutputShape
computeOutputShape: (inputShape: Shape | Shape[]) => Shape | Shape[];
method fromConfig
static fromConfig: <T extends serialization.Serializable>( cls: serialization.SerializableConstructor<T>, config: serialization.ConfigDict, customObjects?: serialization.ConfigDict) => T;
method getConfig
getConfig: () => serialization.ConfigDict;
method getInitialState
getInitialState: (inputs: Tensor) => Tensor[];
method getStates
getStates: () => Tensor[];
method resetStates
resetStates: (states?: Tensor | Tensor[], training?: boolean) => void;
Reset the state tensors of the RNN.
If the
states
argument isundefined
ornull
, will set the state tensor(s) of the RNN to all-zero tensors of the appropriate shape(s).If
states
is provided, will set the state tensors of the RNN to its value.Parameter states
Optional externally-provided initial states.
Parameter training
Whether this call is done during training. For stateful RNNs, this affects whether the old states are kept or discarded. In particular, if
training
istrue
, the old states will be kept so that subsequent backpropagation through time (BPTT) may work properly. Else, the old states will be discarded.
method setFastWeightInitDuringBuild
setFastWeightInitDuringBuild: (value: boolean) => void;
method setStates
setStates: (states: Tensor[]) => void;
class RNNCell
abstract class RNNCell extends Layer {}
An RNNCell layer.
{heading: 'Layers', subheading: 'Classes'}
property dropoutMask
dropoutMask: any;
property recurrentDropoutMask
recurrentDropoutMask: any;
property stateSize
abstract stateSize: number | number[];
Size(s) of the states. For RNN cells with only a single state, this is a single integer.
namespace metrics
module 'dist/exports_metrics.d.ts' {}
function binaryAccuracy
binaryAccuracy: (yTrue: Tensor, yPred: Tensor) => Tensor;
Binary accuracy metric function.
yTrue
andyPred
can have 0-1 values. Example:const x = tf.tensor2d([[1, 1, 1, 1], [0, 0, 0, 0]], [2, 4]);const y = tf.tensor2d([[1, 0, 1, 0], [0, 0, 0, 1]], [2, 4]);const accuracy = tf.metrics.binaryAccuracy(x, y);accuracy.print();yTrue
andyPred
can also have floating-number values between 0 and 1, in which case the values will be thresholded at 0.5 to yield 0-1 values (i.e., a value >= 0.5 and <= 1.0 is interpreted as 1).Example:
const x = tf.tensor1d([1, 1, 1, 1, 0, 0, 0, 0]);const y = tf.tensor1d([0.2, 0.4, 0.6, 0.8, 0.2, 0.3, 0.4, 0.7]);const accuracy = tf.metrics.binaryAccuracy(x, y);accuracy.print();Parameter yTrue
Binary Tensor of truth.
Parameter yPred
Binary Tensor of prediction. Accuracy Tensor.
{heading: 'Metrics', namespace: 'metrics'}
function binaryCrossentropy
binaryCrossentropy: (yTrue: Tensor, yPred: Tensor) => Tensor;
Binary crossentropy metric function.
Example:
const x = tf.tensor2d([[0], [1], [1], [1]]);const y = tf.tensor2d([[0], [0], [0.5], [1]]);const crossentropy = tf.metrics.binaryCrossentropy(x, y);crossentropy.print();Parameter yTrue
Binary Tensor of truth.
Parameter yPred
Binary Tensor of prediction, probabilities for the
1
case. Accuracy Tensor.{heading: 'Metrics', namespace: 'metrics'}
function categoricalAccuracy
categoricalAccuracy: (yTrue: Tensor, yPred: Tensor) => Tensor;
Categorical accuracy metric function.
Example:
const x = tf.tensor2d([[0, 0, 0, 1], [0, 0, 0, 1]]);const y = tf.tensor2d([[0.1, 0.8, 0.05, 0.05], [0.1, 0.05, 0.05, 0.8]]);const accuracy = tf.metrics.categoricalAccuracy(x, y);accuracy.print();Parameter yTrue
Binary Tensor of truth: one-hot encoding of categories.
Parameter yPred
Binary Tensor of prediction: probabilities or logits for the same categories as in
yTrue
. Accuracy Tensor.{heading: 'Metrics', namespace: 'metrics'}
function categoricalCrossentropy
categoricalCrossentropy: (yTrue: Tensor, yPred: Tensor) => Tensor;
Categorical crossentropy between an output tensor and a target tensor.
Parameter target
A tensor of the same shape as
output
.Parameter output
A tensor resulting from a softmax (unless
fromLogits
istrue
, in which caseoutput
is expected to be the logits).Parameter fromLogits
Boolean, whether
output
is the result of a softmax, or is a tensor of logits.{heading: 'Metrics', namespace: 'metrics'}
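Example (a minimal sketch; the one-hot labels and predicted probabilities are illustrative):

```js
const yTrue = tf.tensor2d([[0, 1, 0], [1, 0, 0]]);
const yPred = tf.tensor2d([[0.1, 0.8, 0.1], [0.6, 0.3, 0.1]]);
const crossentropy = tf.metrics.categoricalCrossentropy(yTrue, yPred);
crossentropy.print();
```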
function cosineProximity
cosineProximity: (yTrue: Tensor, yPred: Tensor) => Tensor;
Loss or metric function: Cosine proximity.
Mathematically, cosine proximity is defined as:
-sum(l2Normalize(yTrue) * l2Normalize(yPred))
, whereinl2Normalize()
normalizes the L2 norm of the input to 1 and*
represents element-wise multiplication.const yTrue = tf.tensor2d([[1, 0], [1, 0]]);const yPred = tf.tensor2d([[1 / Math.sqrt(2), 1 / Math.sqrt(2)], [0, 1]]);const proximity = tf.metrics.cosineProximity(yTrue, yPred);proximity.print();Parameter yTrue
Truth Tensor.
Parameter yPred
Prediction Tensor. Cosine proximity Tensor.
{heading: 'Metrics', namespace: 'metrics'}
function mape
mape: (yTrue: Tensor, yPred: Tensor) => Tensor;
function MAPE
MAPE: (yTrue: Tensor, yPred: Tensor) => Tensor;
function meanAbsoluteError
meanAbsoluteError: (yTrue: Tensor, yPred: Tensor) => Tensor;
Loss or metric function: Mean absolute error.
Mathematically, mean absolute error is defined as:
mean(abs(yPred - yTrue))
, wherein themean
is applied over feature dimensions.const yTrue = tf.tensor2d([[0, 1], [0, 0], [2, 3]]);const yPred = tf.tensor2d([[0, 1], [0, 1], [-2, -3]]);const mse = tf.metrics.meanAbsoluteError(yTrue, yPred);mse.print();Parameter yTrue
Truth Tensor.
Parameter yPred
Prediction Tensor. Mean absolute error Tensor.
{heading: 'Metrics', namespace: 'metrics'}
function meanAbsolutePercentageError
meanAbsolutePercentageError: (yTrue: Tensor, yPred: Tensor) => Tensor;
Loss or metric function: Mean absolute percentage error.
const yTrue = tf.tensor2d([[0, 1], [10, 20]]);const yPred = tf.tensor2d([[0, 1], [11, 24]]);const mse = tf.metrics.meanAbsolutePercentageError(yTrue, yPred);mse.print();Aliases:
tf.metrics.MAPE
,tf.metrics.mape
.Parameter yTrue
Truth Tensor.
Parameter yPred
Prediction Tensor. Mean absolute percentage error Tensor.
{heading: 'Metrics', namespace: 'metrics'}
function meanSquaredError
meanSquaredError: (yTrue: Tensor, yPred: Tensor) => Tensor;
Loss or metric function: Mean squared error.
const yTrue = tf.tensor2d([[0, 1], [3, 4]]);const yPred = tf.tensor2d([[0, 1], [-3, -4]]);const mse = tf.metrics.meanSquaredError(yTrue, yPred);mse.print();Aliases:
tf.metrics.MSE
,tf.metrics.mse
.Parameter yTrue
Truth Tensor.
Parameter yPred
Prediction Tensor. Mean squared error Tensor.
{heading: 'Metrics', namespace: 'metrics'}
function mse
mse: (yTrue: Tensor, yPred: Tensor) => Tensor;
function MSE
MSE: (yTrue: Tensor, yPred: Tensor) => Tensor;
function precision
precision: (yTrue: Tensor, yPred: Tensor) => Tensor;
Computes the precision of the predictions with respect to the labels.
Example:
const x = tf.tensor2d([[0, 0, 0, 1],[0, 1, 0, 0],[0, 0, 0, 1],[1, 0, 0, 0],[0, 0, 1, 0]]);const y = tf.tensor2d([[0, 0, 1, 0],[0, 1, 0, 0],[0, 0, 0, 1],[0, 1, 0, 0],[0, 1, 0, 0]]);const precision = tf.metrics.precision(x, y);precision.print();Parameter yTrue
The ground truth values. Expected to contain only 0-1 values.
Parameter yPred
The predicted values. Expected to contain only 0-1 values. Precision Tensor.
{heading: 'Metrics', namespace: 'metrics'}
function r2Score
r2Score: (yTrue: Tensor, yPred: Tensor) => Tensor;
Computes R2 score.
const yTrue = tf.tensor2d([[0, 1], [3, 4]]);const yPred = tf.tensor2d([[0, 1], [-3, -4]]);const r2Score = tf.metrics.r2Score(yTrue, yPred);r2Score.print();Parameter yTrue
Truth Tensor.
Parameter yPred
Prediction Tensor. R2 score Tensor.
{heading: 'Metrics', namespace: 'metrics'}
function recall
recall: (yTrue: Tensor, yPred: Tensor) => Tensor;
Computes the recall of the predictions with respect to the labels.
Example:
const x = tf.tensor2d([[0, 0, 0, 1],[0, 1, 0, 0],[0, 0, 0, 1],[1, 0, 0, 0],[0, 0, 1, 0]]);const y = tf.tensor2d([[0, 0, 1, 0],[0, 1, 0, 0],[0, 0, 0, 1],[0, 1, 0, 0],[0, 1, 0, 0]]);const recall = tf.metrics.recall(x, y);recall.print();Parameter yTrue
The ground truth values. Expected to contain only 0-1 values.
Parameter yPred
The predicted values. Expected to contain only 0-1 values. Recall Tensor.
{heading: 'Metrics', namespace: 'metrics'}
function sparseCategoricalAccuracy
sparseCategoricalAccuracy: (yTrue: Tensor, yPred: Tensor) => Tensor;
Sparse categorical accuracy metric function.
Example:
const yTrue = tf.tensor1d([1, 1, 2, 2, 0]);const yPred = tf.tensor2d([[0, 1, 0], [1, 0, 0], [0, 0.4, 0.6], [0, 0.6, 0.4], [0.7, 0.3, 0]]);const crossentropy = tf.metrics.sparseCategoricalAccuracy(yTrue, yPred);crossentropy.print();Parameter yTrue
True labels: indices.
Parameter yPred
Predicted probabilities or logits.
Returns
Accuracy tensor.
{heading: 'Metrics', namespace: 'metrics'}
namespace models
module 'dist/exports_models.d.ts' {}
function modelFromJSON
modelFromJSON: ( modelAndWeightsConfig: ModelAndWeightsConfig | PyJsonDict, customObjects?: serialization.ConfigDict) => Promise<LayersModel>;
Parses a JSON model configuration file and returns a model instance.
// This example shows how to serialize a model using `toJSON()` and// deserialize it as another model using `tf.models.modelFromJSON()`.// Note: this example serializes and deserializes only the topology// of the model; the weights of the loaded model will be different// from those of the the original model, due to random weight// initialization.// To load the topology and weights of a model, use `tf.loadLayersModel()`.const model1 = tf.sequential();model1.add(tf.layers.repeatVector({inputShape: [2], n: 4}));// Serialize `model1` as a JSON object.const model1JSON = model1.toJSON(null, false);model1.summary();const model2 = await tf.models.modelFromJSON(model1JSON);model2.summary();Parameter modelAndWeightsConfig
JSON object or string encoding a model and weights configuration. It can also be only the topology JSON of the model, in which case the weights will not be loaded.
Parameter customObjects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
A TensorFlow.js Layers
tf.LayersModel
instance (uncompiled).
namespace regularizers
module 'dist/exports_regularizers.d.ts' {}
Regularizer for L1 and L2 regularization.
Adds a term to the loss to penalize large weights: loss += sum(l1 * abs(x)) + sum(l2 * x^2)
{heading: 'Regularizers', namespace: 'regularizers'}
function l1
l1: (config?: L1Args) => Regularizer;
Regularizer for L1 regularization.
Adds a term to the loss to penalize large weights: loss += sum(l1 * abs(x))
Parameter args
l1 config.
{heading: 'Regularizers', namespace: 'regularizers'}
function l1l2
l1l2: (config?: L1L2Args) => Regularizer;
Regularizer for L1 and L2 regularization.
Adds a term to the loss to penalize large weights: loss += sum(l1 * abs(x)) + sum(l2 * x^2)
{heading: 'Regularizers', namespace: 'regularizers'}
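For example, a minimal sketch attaching a combined L1/L2 penalty to a dense layer's kernel (the coefficients are illustrative):

```js
const layer = tf.layers.dense({
  units: 4,
  inputShape: [8],
  // Adds sum(0.01 * abs(w)) + sum(0.01 * w^2) over the kernel to the loss.
  kernelRegularizer: tf.regularizers.l1l2({l1: 0.01, l2: 0.01}),
});
```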
function l2
l2: (config?: L2Args) => Regularizer;
Regularizer for L2 regularization.
Adds a term to the loss to penalize large weights: loss += sum(l2 * x^2)
Parameter args
l2 config.
{heading: 'Regularizers', namespace: 'regularizers'}
Package Files (21)
- dist/base_callbacks.d.ts
- dist/callbacks.d.ts
- dist/engine/topology.d.ts
- dist/engine/training.d.ts
- dist/engine/training_dataset.d.ts
- dist/engine/training_tensors.d.ts
- dist/engine/training_utils.d.ts
- dist/exports.d.ts
- dist/exports_constraints.d.ts
- dist/exports_initializers.d.ts
- dist/exports_layers.d.ts
- dist/exports_metrics.d.ts
- dist/exports_models.d.ts
- dist/exports_regularizers.d.ts
- dist/index.d.ts
- dist/keras_format/common.d.ts
- dist/layers/recurrent.d.ts
- dist/logs.d.ts
- dist/models.d.ts
- dist/variables.d.ts
- dist/version.d.ts
Dependencies (0)
No dependencies.
Dev Dependencies (2)
Peer Dependencies (1)