@tensorflow/tfjs-converter

  • Version 3.11.0
  • 21.8 MB
  • No dependencies
  • Apache-2.0 license

Install

npm i @tensorflow/tfjs-converter
yarn add @tensorflow/tfjs-converter
pnpm add @tensorflow/tfjs-converter

Overview

TensorFlow model converter for JavaScript

Index

Variables

variable version_converter

const version_converter: string;
  • See the LICENSE file.

Functions

function deregisterOp

deregisterOp: (name: string) => void;
  • Deregisters an Op from the graph model executor.

    Parameter name

    The TensorFlow Op name.
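
    For example, a previously registered custom op can be removed once it is no longer needed (a minimal sketch; the op name 'MyCustomOp' below is hypothetical):

    // Remove a custom op that was registered earlier with tf.registerOp.
    tf.deregisterOp('MyCustomOp');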

function loadGraphModel

loadGraphModel: (
    modelUrl: string | io.IOHandler,
    options?: any
) => Promise<GraphModel>;
  • Load a graph model given a URL to the model definition.

    Example of loading MobileNetV2 from a URL and making a prediction with a zeros input:

    const modelUrl =
        'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
    const model = await tf.loadGraphModel(modelUrl);
    const zeros = tf.zeros([1, 224, 224, 3]);
    model.predict(zeros).print();

    Example of loading MobileNetV2 from a TF Hub URL and making a prediction with a zeros input:

    const modelUrl =
        'https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/classification/2';
    const model = await tf.loadGraphModel(modelUrl, {fromTFHub: true});
    const zeros = tf.zeros([1, 224, 224, 3]);
    model.predict(zeros).print();

    Parameter modelUrl

    The URL of the model definition, or an io.IOHandler that loads the model.

    Parameter options

    Options for the HTTP request, which allow you to send credentials and custom headers.
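
    For instance, credentials and a progress callback can be passed through the options object. A sketch, assuming the tf.io.LoadOptions fields requestInit and onProgress, and a hypothetical model URL:

    // A sketch: pass fetch credentials and track download progress.
    const model = await tf.loadGraphModel('https://example.com/model.json', {
        requestInit: {credentials: 'include'},
        onProgress: (fraction) => console.log(`loaded ${fraction * 100}%`),
    });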

function registerOp

registerOp: (name: string, opFunc: OpExecutor) => void;
  • Registers an Op for the graph model executor. This allows you to register a TensorFlow custom op or override an existing op.

    Here is an example of registering a new MatMul Op.

    const customMatmul = (node) =>
        tf.matMul(
            node.inputs[0], node.inputs[1],
            node.attrs['transpose_a'], node.attrs['transpose_b']);

    tf.registerOp('MatMul', customMatmul);

    The inputs and attrs of the node object are based on the TensorFlow op registry.

    Parameter name

    The TensorFlow Op name.

    Parameter opFunc

    An op function which is called with the current graph node during execution and needs to return a tensor or a list of tensors. The node has the following attributes:
    - attrs: A map from attribute name to its value
    - inputs: A list of input tensors
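
    Since an OpExecutor may also return a Promise (see the call signature under Interfaces below), asynchronous custom ops are possible. A minimal sketch; the op name 'AsyncSquare' is hypothetical:

    // A sketch of an async custom op.
    const asyncSquare = async (node) => {
        // Reading tensor values is asynchronous on GPU backends.
        const values = await node.inputs[0].data();
        return tf.tensor(
            Array.from(values, (v) => v * v), node.inputs[0].shape);
    };
    tf.registerOp('AsyncSquare', asyncSquare);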

Classes

class GraphModel

class GraphModel implements InferenceModel {}
  • A tf.GraphModel is a directed acyclic graph built from a SavedModel GraphDef and allows inference execution.

    A tf.GraphModel can only be created by loading from a model converted from a [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) using the command line converter tool and loaded via tf.loadGraphModel.

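
    Once loaded, the graph's interface can be inspected before running inference. A sketch reusing the MobileNetV2 URL from the loadGraphModel examples:

    const model = await tf.loadGraphModel(
        'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json');
    console.log(model.inputNodes);   // names of the graph's input nodes
    console.log(model.outputNodes);  // names of the graph's output nodes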

constructor

constructor(modelUrl: any, loadOptions?: any);
  • Parameter modelUrl

    url for the model, or an io.IOHandler.

    Parameter loadOptions

    options for loading the model, such as HTTP request options (which allow you to send credentials and custom headers) and an optional progress callback fired periodically before the load is completed.

property inputNodes

readonly inputNodes: string[];

property inputs

readonly inputs: TensorInfo[];

property metadata

readonly metadata: {};

property modelSignature

readonly modelSignature: {};

property modelVersion

readonly modelVersion: string;

property outputNodes

readonly outputNodes: string[];

property outputs

readonly outputs: TensorInfo[];

property weights

readonly weights: NamedTensorsMap;

method dispose

dispose: () => void;
  • Releases the memory used by the weight tensors and resourceManager.
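
    A typical lifecycle, sketched here (assumes the MobileNetV2 model loaded in the examples above):

    const output = model.predict(tf.zeros([1, 224, 224, 3]));
    output.print();
    model.dispose(); // weights are freed; further model calls are invalid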

method execute

execute: (
    inputs: Tensor | Tensor[] | NamedTensorMap,
    outputs?: string | string[]
) => Tensor | Tensor[];
  • Executes inference for the model for the given input tensors.

    Parameter inputs

    Tensor, tensor array, or tensor map of the inputs for the model, keyed by the input node names.

    Parameter outputs

    Output node name(s) from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.

    Returns

    A single tensor if a single output is provided, or if no outputs are provided and there is only one default output; otherwise a tensor array. The order of the tensor array is the same as the outputs, if provided; otherwise it follows the order of the model's outputNodes attribute.
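
    For example, an intermediate activation can be fetched alongside a final output. A sketch; the node names below are hypothetical and depend on the converted graph:

    const zeros = tf.zeros([1, 224, 224, 3]);
    const [intermediate, logits] = model.execute(
        zeros, ['Intermediate/Relu', 'Output/Softmax']);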

method executeAsync

executeAsync: (
    inputs: Tensor | Tensor[] | NamedTensorMap,
    outputs?: string | string[]
) => Promise<Tensor | Tensor[]>;
  • Executes inference for the model for the given input tensors asynchronously. Use this method when your model contains control flow ops.

    Parameter inputs

    Tensor, tensor array, or tensor map of the inputs for the model, keyed by the input node names.

    Parameter outputs

    Output node name(s) from the TensorFlow model. If no outputs are specified, the default outputs of the model are used. You can inspect intermediate nodes of the model by adding them to the outputs array.

    Returns

    A Promise of a single tensor if a single output is provided, or if no outputs are provided and there is only one default output; otherwise a tensor array.
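
    A sketch for a model containing control flow ops; the model URL and the input node name 'input' are hypothetical:

    const model = await tf.loadGraphModel('https://example.com/rnn-model.json');
    // Await the result instead of calling execute(), since control flow
    // ops resolve asynchronously.
    const result = await model.executeAsync({'input': tf.zeros([1, 224, 224, 3])});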

method load

load: () => Promise<boolean>;
  • Loads the model and weight files, constructs the in-memory weight map, and compiles the inference graph.

method loadSync

loadSync: (artifacts: any) => boolean;
  • Synchronously constructs the in-memory weight map and compiles the inference graph. Also initializes hash tables, if any.

method predict

predict: (
    inputs: Tensor | Tensor[] | NamedTensorMap,
    config?: any
) => Tensor | Tensor[] | NamedTensorMap;
  • Executes inference for the input tensors.

    Parameter inputs

    The input tensors. When the model has a single input, the inputs param should be a tf.Tensor. For models with multiple inputs, the inputs param should be a tf.Tensor[] if the input order is fixed, or a NamedTensorMap otherwise.

    For models with multiple inputs, we recommend using NamedTensorMap as the input type; if you use tf.Tensor[], the order of the array needs to follow the order of the inputNodes array.

    Parameter config

    Prediction configuration for specifying the batch size and output node names. Currently the batch size option is ignored for graph models.

    Returns

    Inference result tensors. The output is a single tf.Tensor if the model has a single output node; otherwise a Tensor[] or NamedTensorMap is returned for models with multiple outputs.

    See Also

    • GraphModel.inputNodes

      You can also feed any intermediate nodes using the NamedTensorMap as the input type. For example, given the graph InputNode => Intermediate => OutputNode, you can execute the subgraph Intermediate => OutputNode by calling model.execute({'IntermediateNode': tf.tensor(...)});

      This is useful for models that use tf.dynamic_rnn, where the intermediate state needs to be fed manually.

      For batch inference execution, the tensors for each input need to be concatenated together. For example with MobileNet, the required input shape is [1, 224, 224, 3], which represents [batch, height, width, channel]. If we provide batched data of 100 images, the input tensor should have the shape [100, 224, 224, 3], as sketched below.
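
      A minimal sketch of batched inference, reusing the MobileNetV2 model loaded in the examples above:

      const batch = tf.zeros([100, 224, 224, 3]); // [batch, height, width, channel]
      const predictions = model.predict(batch);   // batched output, one row per image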

method save

save: (
    handlerOrURL: io.IOHandler | string,
    config?: any
) => Promise<io.SaveResult>;
  • Saves the configuration and/or weights of the GraphModel.

    An IOHandler is an object that has a save method of the proper signature defined. The save method manages the storing or transmission of serialized data ("artifacts") that represent the model's topology and weights onto or via a specific medium, such as file downloads, local storage, IndexedDB in the web browser, or HTTP requests to a server. TensorFlow.js provides IOHandler implementations for a number of frequently used saving mediums, such as tf.io.browserDownloads and tf.io.browserLocalStorage. See tf.io for more details.

    This method also allows you to refer to certain types of IOHandlers as URL-like string shortcuts, such as 'localstorage://' and 'indexeddb://'.

    Example 1: Save the model's topology and weights to browser [local storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage); then load it back.

    const modelUrl =
        'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
    const model = await tf.loadGraphModel(modelUrl);
    const zeros = tf.zeros([1, 224, 224, 3]);
    model.predict(zeros).print();
    const saveResults = await model.save('localstorage://my-model-1');
    const loadedModel = await tf.loadGraphModel('localstorage://my-model-1');
    console.log('Prediction from loaded model:');
    loadedModel.predict(zeros).print();

    Parameter handlerOrURL

    An instance of IOHandler or a URL-like, scheme-based string shortcut for IOHandler.

    Parameter config

    Options for saving the model.

    Returns

    A Promise of SaveResult, which summarizes the result of the saving, such as byte sizes of the saved artifacts for the model's topology and weight values.
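
    The 'indexeddb://' shortcut mentioned above works the same way; a sketch (the key 'my-model-1' is arbitrary):

    const saveResult = await model.save('indexeddb://my-model-1');
    // SaveResult summarizes the saved artifacts, e.g. their byte sizes.
    console.log(saveResult.modelArtifactsInfo);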

Interfaces

interface GraphNode

interface GraphNode {}

property attrs

attrs: {
    [key: string]: ValueType;
};

property inputs

inputs: Tensor[];

interface OpExecutor

interface OpExecutor {}

call signature

(node: GraphNode): Tensor | Tensor[] | Promise<Tensor | Tensor[]>;
                            Package Files (5)

                            Dependencies (0)

                            No dependencies.

                            Dev Dependencies (35)

                            Peer Dependencies (1)

                            Badge

To add a badge like this one to your package's README, use the codes available below.

                            You may also use Shields.io to create a custom badge linking to https://www.jsdocs.io/package/@tensorflow/tfjs-converter.

                            • Markdown
                              [![jsDocs.io](https://img.shields.io/badge/jsDocs.io-reference-blue)](https://www.jsdocs.io/package/@tensorflow/tfjs-converter)
                            • HTML
                              <a href="https://www.jsdocs.io/package/@tensorflow/tfjs-converter"><img src="https://img.shields.io/badge/jsDocs.io-reference-blue" alt="jsDocs.io"></a>