@tensorflow/tfjs-converter
- Version 4.21.0
- 25 MB
- No dependencies
- Apache-2.0 license
Install
npm i @tensorflow/tfjs-converter
yarn add @tensorflow/tfjs-converter
pnpm add @tensorflow/tfjs-converter
Overview
TensorFlow model converter for JavaScript.
Index
Variables
Functions
Classes
Interfaces
Variables
variable version_converter
const version_converter: string;
See the LICENSE file.
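A minimal usage sketch: the exported constant mirrors the published package version.

import {version_converter} from '@tensorflow/tfjs-converter';

console.log(version_converter);  // e.g. '4.21.0'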
Functions
function deregisterOp
deregisterOp: (name: string) => void;
Deregisters the Op for the graph model executor.
Parameter name
The Tensorflow Op name.
{heading: 'Models', subheading: 'Op Registry'}
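A one-line usage sketch, assuming a custom 'MatMul' executor was previously registered via registerOp (see below):

// Remove the custom executor; the built-in op implementation is used again.
tf.deregisterOp('MatMul');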
function loadGraphModel
loadGraphModel: (modelUrl: string | io.IOHandler, options?: io.LoadOptions, tfio?: any) => Promise<GraphModel>;
Load a graph model given a URL to the model definition.
Example of loading MobileNetV2 from a URL and making a prediction with a zeros input:
const modelUrl =
    'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
const zeros = tf.zeros([1, 224, 224, 3]);
model.predict(zeros).print();

Example of loading MobileNetV2 from a TF Hub URL and making a prediction with a zeros input:

const modelUrl =
    'https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/classification/2';
const model = await tf.loadGraphModel(modelUrl, {fromTFHub: true});
const zeros = tf.zeros([1, 224, 224, 3]);
model.predict(zeros).print();

Parameter modelUrl
The url or an io.IOHandler that loads the model.
Parameter options
Options for the HTTP request, which allow sending credentials and custom headers.
{heading: 'Models', subheading: 'Loading'}
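Since the options parameter is only described above, here is a hedged sketch of passing io.LoadOptions; requestInit and onProgress are standard tf.io.LoadOptions fields, while the URL and header values are hypothetical:

const model = await tf.loadGraphModel('https://example.com/private/model.json', {
  requestInit: {
    credentials: 'include',                       // send cookies with the request
    headers: {'Authorization': 'Bearer <token>'}  // hypothetical custom header
  },
  onProgress: (fraction) => console.log(`loading: ${Math.round(fraction * 100)}%`)
});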
function loadGraphModelSync
loadGraphModelSync: (modelSource: io.IOHandlerSync | io.ModelArtifacts | [io.ModelJSON, ArrayBuffer]) => GraphModel<io.IOHandlerSync>;
Load a graph model given a synchronous IO handler with a 'load' method.
Parameter modelSource
The io.IOHandlerSync that loads the model, or the io.ModelArtifacts that encode the model, or a tuple of [io.ModelJSON, ArrayBuffer] of which the first element encodes the model and the second contains the weights.
{heading: 'Models', subheading: 'Loading'}
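A minimal sketch of the tuple form in Node.js, assuming the converter produced a model.json plus a single weights.bin (paths hypothetical):

const fs = require('fs');

const modelJSON = JSON.parse(fs.readFileSync('model/model.json', 'utf8'));
const raw = fs.readFileSync('model/weights.bin');
// Slice so the ArrayBuffer covers exactly this Buffer's bytes.
const weights = raw.buffer.slice(raw.byteOffset, raw.byteOffset + raw.byteLength);
const model = tf.loadGraphModelSync([modelJSON, weights]);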
function registerOp
registerOp: (name: string, opFunc: OpExecutor) => void;
Register an Op for the graph model executor. This allows you to register a TensorFlow custom op or override an existing op.
Here is an example of registering a new MatMul Op:

const customMatmul = (node) =>
    tf.matMul(
        node.inputs[0], node.inputs[1],
        node.attrs['transpose_a'], node.attrs['transpose_b']);
tf.registerOp('MatMul', customMatmul);

The inputs and attrs of the node object are based on the TensorFlow op registry.
Parameter name
The Tensorflow Op name.
Parameter opFunc
An op function which is called with the current graph node during execution and needs to return a tensor or a list of tensors. The node has the following attributes:
- attr: A map from attribute name to its value
- inputs: A list of input tensors
{heading: 'Models', subheading: 'Op Registry'}
Classes
class GraphModel
class GraphModel<ModelURL extends Url = string | io.IOHandler> implements InferenceModel {}
A tf.GraphModel is a directed, acyclic graph built from a SavedModel GraphDef and allows inference execution.
A tf.GraphModel can only be created by loading from a model converted from a [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) using the command line converter tool and loaded via tf.loadGraphModel.
{heading: 'Models', subheading: 'Classes'}
constructor
constructor(modelUrl: {}, loadOptions?: io.LoadOptions, tfio?: any);
Parameter modelUrl
url for the model, or an io.IOHandler.
Parameter weightManifestUrl
url for the weight file generated by the scripts/convert.py script.
Parameter requestOption
options for Request, which allow sending credentials and custom headers.
Parameter onProgress
Optional, progress callback function, fired periodically before the load is completed.
property inputNodes
readonly inputNodes: string[];
property inputs
readonly inputs: TensorInfo[];
property metadata
readonly metadata: {};
property modelSignature
readonly modelSignature: {};
property modelStructuredOutputKeys
readonly modelStructuredOutputKeys: {};
property modelVersion
readonly modelVersion: string;
property outputNodes
readonly outputNodes: string[];
property outputs
readonly outputs: TensorInfo[];
property weights
readonly weights: NamedTensorsMap;
method dispose
dispose: () => void;
Releases the memory used by the weight tensors and resourceManager.
{heading: 'Models', subheading: 'Classes'}
method disposeIntermediateTensors
disposeIntermediateTensors: () => void;
Disposes intermediate tensors when the model runs in debugging mode (i.e., the KEEP_INTERMEDIATE_TENSORS flag is true).
{heading: 'Models', subheading: 'Classes'}
method execute
execute: (inputs: Tensor | Tensor[] | NamedTensorMap, outputs?: string | string[]) => Tensor | Tensor[];
Executes inference for the model for given input tensors.
Parameter inputs
tensor, tensor array or tensor map of the inputs for the model, keyed by the input node names.
Parameter outputs
Output node names from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.
Returns
A single tensor if a single output is requested, or if no outputs are specified and there is only one default output; otherwise a tensor array. The order of the tensor array matches the outputs if provided, otherwise the order of the model's outputNodes attribute.
{heading: 'Models', subheading: 'Classes'}
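A sketch of requesting an intermediate node alongside the final output; the node names below are hypothetical, so inspect model.outputNodes or the converted model.json for real ones:

const [intermediate, logits] = model.execute(
    tf.zeros([1, 224, 224, 3]),
    ['model/intermediate_node', 'model/output_node']);  // hypothetical names
intermediate.print();
logits.print();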
method executeAsync
executeAsync: (inputs: Tensor | Tensor[] | NamedTensorMap, outputs?: string | string[]) => Promise<Tensor | Tensor[]>;
Executes inference for the model for the given input tensors asynchronously. Use this method when your model contains control flow ops.
Parameter inputs
tensor, tensor array or tensor map of the inputs for the model, keyed by the input node names.
Parameter outputs
Output node names from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.
Returns
A Promise of a single tensor if a single output is requested, or if no outputs are specified and there is only one default output; otherwise a Promise of a tensor array.
{heading: 'Models', subheading: 'Classes'}
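The async variant is called the same way, just awaited (a sketch):

const output = await model.executeAsync(tf.zeros([1, 224, 224, 3]));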
method getIntermediateTensors
getIntermediateTensors: () => NamedTensorsMap;
Gets intermediate tensors when the model runs in debugging mode (i.e., the KEEP_INTERMEDIATE_TENSORS flag is true).
{heading: 'Models', subheading: 'Classes'}
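A sketch of the debugging workflow implied by the two methods above, using the KEEP_INTERMEDIATE_TENSORS flag named in the docs:

tf.env().set('KEEP_INTERMEDIATE_TENSORS', true);
model.execute(tf.zeros([1, 224, 224, 3]));
const intermediates = model.getIntermediateTensors();
console.log(Object.keys(intermediates));  // names of intermediate nodes
model.disposeIntermediateTensors();       // free them when done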
method load
load: () => UrlIOHandler<ModelURL> extends io.IOHandlerSync ? boolean : Promise<boolean>;
Loads the model and weight files, constructs the in-memory weight map and compiles the inference graph.
method loadSync
loadSync: (artifacts: io.ModelArtifacts) => boolean;
Synchronously constructs the in-memory weight map and compiles the inference graph.
{heading: 'Models', subheading: 'Classes', ignoreCI: true}
method predict
predict: (inputs: Tensor | Tensor[] | NamedTensorMap, config?: ModelPredictConfig) => Tensor | Tensor[] | NamedTensorMap;
Execute the inference for the input tensors.
Parameter inputs
The input tensors. When there is a single input for the model, the inputs param should be a tf.Tensor. For models with multiple inputs, the inputs param should be a tf.Tensor[] if the input order is fixed, or otherwise in NamedTensorMap format.
For models with multiple inputs, we recommend you use NamedTensorMap as the input type; if you use tf.Tensor[], the order of the array needs to follow the order of the inputNodes array.
Parameter config
Prediction configuration for specifying the batch size. Currently the batch size option is ignored for graph models.
Returns
Inference result tensors. If the model is converted and it originally had structured_outputs in tensorflow, then a NamedTensorMap will be returned matching the structured_outputs. If no structured_outputs are present, the output will be a single tf.Tensor if the model has a single output node, otherwise Tensor[].
{heading: 'Models', subheading: 'Classes'}
See Also
You can also feed any intermediate nodes using the NamedTensorMap as the input type. For example, given the graph InputNode => Intermediate => OutputNode, you can execute the subgraph Intermediate => OutputNode by calling model.execute({'IntermediateNode': tf.tensor(...)});
This is useful for models that use tf.dynamic_rnn, where the intermediate state needs to be fed manually.
For batch inference execution, the tensors for each input need to be concatenated together. For example, with MobileNet, the required input shape is [1, 224, 224, 3], which represents [batch, height, width, channel]. If we provide batched data of 100 images, the input tensor should have the shape [100, 224, 224, 3].
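A sketch of the batching described above, assuming images is an array of 100 tensors of shape [224, 224, 3]:

const batch = tf.stack(images);           // shape [100, 224, 224, 3]
const predictions = model.predict(batch);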
method predictAsync
predictAsync: (inputs: Tensor | Tensor[] | NamedTensorMap, config?: ModelPredictConfig) => Promise<Tensor | Tensor[] | NamedTensorMap>;
Executes inference for the input tensors asynchronously. Use this method when your model contains control flow ops.
Parameter inputs
The input tensors. When there is a single input for the model, the inputs param should be a tf.Tensor. For models with multiple inputs, the inputs param should be a tf.Tensor[] if the input order is fixed, or otherwise in NamedTensorMap format.
For models with multiple inputs, we recommend you use NamedTensorMap as the input type; if you use tf.Tensor[], the order of the array needs to follow the order of the inputNodes array.
Parameter config
Prediction configuration for specifying the batch size. Currently the batch size option is ignored for graph models.
Returns
A Promise of inference result tensors. If the model is converted and it originally had structured_outputs in tensorflow, then a NamedTensorMap will be returned matching the structured_outputs. If no structured_outputs are present, the output will be a single tf.Tensor if the model has a single output node, otherwise Tensor[].
{heading: 'Models', subheading: 'Classes'}
See Also
You can also feed any intermediate nodes using the NamedTensorMap as the input type. For example, given the graph InputNode => Intermediate => OutputNode, you can execute the subgraph Intermediate => OutputNode by calling model.execute({'IntermediateNode': tf.tensor(...)});
This is useful for models that use tf.dynamic_rnn, where the intermediate state needs to be fed manually.
For batch inference execution, the tensors for each input need to be concatenated together. For example, with MobileNet, the required input shape is [1, 224, 224, 3], which represents [batch, height, width, channel]. If we provide batched data of 100 images, the input tensor should have the shape [100, 224, 224, 3].
method save
save: (handlerOrURL: io.IOHandler | string, config?: io.SaveConfig) => Promise<io.SaveResult>;
Save the configuration and/or weights of the GraphModel.
An IOHandler is an object that has a save method of the proper signature defined. The save method manages the storing or transmission of serialized data ("artifacts") that represent the model's topology and weights onto or via a specific medium, such as file downloads, local storage, IndexedDB in the web browser and HTTP requests to a server. TensorFlow.js provides IOHandler implementations for a number of frequently used saving mediums, such as tf.io.browserDownloads and tf.io.browserLocalStorage. See tf.io for more details.
This method also allows you to refer to certain types of IOHandlers as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.
Example 1: Save model's topology and weights to browser [local storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage); then load it back.

const modelUrl =
    'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
const model = await tf.loadGraphModel(modelUrl);
const zeros = tf.zeros([1, 224, 224, 3]);
model.predict(zeros).print();

const saveResults = await model.save('localstorage://my-model-1');

const loadedModel = await tf.loadGraphModel('localstorage://my-model-1');
console.log('Prediction from loaded model:');
loadedModel.predict(zeros).print();

Parameter handlerOrURL
An instance of IOHandler or a URL-like, scheme-based string shortcut for IOHandler.
Parameter config
Options for saving the model.
Returns
A Promise of SaveResult, which summarizes the result of the saving, such as byte sizes of the saved artifacts for the model's topology and weight values.
{heading: 'Models', subheading: 'Classes', ignoreCI: true}
Interfaces
interface GraphNode
interface GraphNode {}
interface IAttrValue
interface IAttrValue {}
Properties of an AttrValue.
property b
b?: boolean | null;
AttrValue b
property f
f?: number | null;
AttrValue f
property func
func?: INameAttrList | null;
AttrValue func
property i
i?: number | string | null;
AttrValue i
property list
list?: AttrValue.IListValue | null;
AttrValue list
property placeholder
placeholder?: string | null;
AttrValue placeholder
property s
s?: string | null;
AttrValue s
property shape
shape?: ITensorShape | null;
AttrValue shape
property tensor
tensor?: ITensor | null;
AttrValue tensor
property type
type?: DataType | null;
AttrValue type
interface INameAttrList
interface INameAttrList {}
Properties of a NameAttrList.
interface INodeDef
interface INodeDef {}
Properties of a NodeDef.
interface ITensor
interface ITensor {}
Properties of a Tensor.
property boolVal
boolVal?: boolean[] | null;
Tensor boolVal
property doubleVal
doubleVal?: number[] | null;
Tensor doubleVal
property dtype
dtype?: DataType | null;
Tensor dtype
property floatVal
floatVal?: number[] | null;
Tensor floatVal
property int64Val
int64Val?: (number | string)[] | null;
Tensor int64Val
property intVal
intVal?: number[] | null;
Tensor intVal
property scomplexVal
scomplexVal?: number[] | null;
Tensor scomplexVal
property stringVal
stringVal?: Uint8Array[] | null;
Tensor stringVal
property tensorContent
tensorContent?: Uint8Array | null;
Tensor tensorContent
property tensorShape
tensorShape?: ITensorShape | null;
Tensor tensorShape
property uint32Val
uint32Val?: number[] | null;
Tensor uint32Val
property uint64Val
uint64Val?: (number | string)[] | null;
Tensor uint64Val
property versionNumber
versionNumber?: number | null;
Tensor versionNumber
interface ITensorShape
interface ITensorShape {}
Properties of a TensorShape.
property dim
dim?: TensorShape.IDim[] | null;
TensorShape dim
property unknownRank
unknownRank?: boolean | null;
TensorShape unknownRank
interface OpExecutor
interface OpExecutor {}
call signature
(node: GraphNode): Tensor | Tensor[] | Promise<Tensor | Tensor[]>;
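Since the call signature above allows returning a Promise, an OpExecutor can be async. A sketch with a hypothetical op name:

// Hypothetical 'MyAbs' op: awaits the input's data, then returns its absolute value.
const myAbs = async (node) => {
  await node.inputs[0].data();   // e.g. await tensor data if the op needs it
  return tf.abs(node.inputs[0]);
};
tf.registerOp('MyAbs', myAbs);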