@google-cloud/bigquery
- Version 6.2.0
- 615 kB
- 13 dependencies
- Apache-2.0 license
Install
npm i @google-cloud/bigquery
yarn add @google-cloud/bigquery
pnpm add @google-cloud/bigquery
Overview
Google BigQuery Client Library for Node.js
Index
Variables
Classes
BigQuery
- createDataset()
- createJob()
- createQueryJob()
- createQueryStream()
- dataset()
- date()
- datetime()
- decodeIntegerValue_()
- geography()
- getDatasets()
- getDatasetsStream()
- getJobs()
- getJobsStream()
- getTypeDescriptorFromProvidedType_()
- getTypeDescriptorFromValue_()
- int()
- job()
- location
- mergeSchemaWithRows_()
- query()
- queryAsStream_()
- time()
- timestamp()
- valueToQueryParameter_()
Table
- bigQuery
- copy()
- copyFrom()
- createCopyFromJob()
- createCopyJob()
- createExtractJob()
- createInsertStream()
- createLoadJob()
- createQueryJob()
- createQueryStream()
- createReadStream()
- createSchemaFromString_()
- createWriteStream()
- createWriteStream_()
- dataset
- encodeValue_()
- extract()
- formatMetadata_()
- getIamPolicy()
- getRows()
- insert()
- load()
- location
- query()
- rowQueue
- setIamPolicy()
- setMetadata()
- testIamPermissions()
Interfaces
Type Aliases
- CancelCallback
- CancelResponse
- CopyTableMetadata
- CreateCopyJobMetadata
- CreateDatasetOptions
- CreateExtractJobOptions
- DatasetCallback
- DatasetResource
- DatasetResponse
- DatasetsCallback
- DatasetsResponse
- FormattedMetadata
- GetDatasetsOptions
- GetJobsCallback
- GetJobsOptions
- GetJobsResponse
- GetModelsCallback
- GetModelsOptions
- GetModelsResponse
- GetPolicyOptions
- GetRoutinesCallback
- GetRoutinesOptions
- GetRoutinesResponse
- GetRowsOptions
- GetTablesCallback
- GetTablesOptions
- GetTablesResponse
- InsertRowsCallback
- InsertRowsOptions
- InsertRowsResponse
- InsertRowsStreamResponse
- IntegerTypeCastValue
- JobCallback
- JobLoadMetadata
- JobMetadata
- JobMetadataCallback
- JobMetadataResponse
- JobOptions
- JobRequest
- JobResponse
- PagedRequest
- PagedResponse
- PermissionsCallback
- PermissionsResponse
- Policy
- PolicyCallback
- PolicyRequest
- PolicyResponse
- ProvidedTypeArray
- Query
- QueryOptions
- QueryParameter
- QueryResultsOptions
- QueryRowsCallback
- QueryRowsResponse
- QueryStreamOptions
- RoutineCallback
- RoutineMetadata
- RoutineResponse
- RowMetadata
- RowsCallback
- RowsResponse
- SetPolicyOptions
- SetTableMetadataOptions
- SimpleQueryRowsCallback
- SimpleQueryRowsResponse
- TableCallback
- TableField
- TableMetadata
- TableResponse
- TableRow
- TableRowField
- TableRowValue
- TableSchema
- ValueType
- ViewDefinition
Variables
variable PROTOCOL_REGEX
const PROTOCOL_REGEX: RegExp;
Classes
class BigQuery
class BigQuery extends Service {}
In the following examples from this page and the other modules (Dataset, Table, etc.), we are going to be using a dataset from data.gov of higher education institutions. We will create a table with the correct schema, import the public CSV file into that table, and query it for data.
Parameter options
Constructor options.
Example 1
Install the client library with npm:
npm install @google-cloud/bigquery
Example 2
Import the client library
const {BigQuery} = require('@google-cloud/bigquery');
Example 3
Create a client that uses Application Default Credentials (ADC):
const bigquery = new BigQuery();
Example 4
Create a client with explicit credentials:
const bigquery = new BigQuery({
  projectId: 'your-project-id',
  keyFilename: '/path/to/keyfile.json'
});
constructor
constructor(options?: BigQueryOptions);
property location
location?: string;
method createDataset
createDataset: { (id: string, options?: DatasetResource): Promise<DatasetResponse>; (id: string, options: bigquery.IDataset, callback: DatasetCallback): void; (id: string, callback: DatasetCallback): void;};
Create a dataset.
Parameter id
ID of the dataset to create.
Parameter options
See a Dataset resource.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Dataset} callback.dataset The newly created dataset
Parameter
{object} callback.apiResponse The full API response.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

bigquery.createDataset('my-dataset', function(err, dataset, apiResponse) {});

//-
// If the callback is omitted, we'll return a Promise.
//-
bigquery.createDataset('my-dataset').then(function(data) {
  const dataset = data[0];
  const apiResponse = data[1];
});
method createJob
createJob: { (options: JobOptions): Promise<JobResponse>; (options: JobOptions, callback: JobCallback): void;};
Creates a job. Typically when creating a job you'll have a very specific task in mind. For this we recommend one of the following methods:
- BigQuery.createQueryJob
- Table#createCopyJob
- Table#createCopyFromJob
- Table#createExtractJob
- Table#createLoadJob
However, in the event you need a finer level of control over job creation, you can use this method to pass in a raw Job resource object.
Parameter options
Object in the form of a Job resource.
Parameter
{string} [options.jobId] Custom job id.
Parameter
{string} [options.jobPrefix] Prefix to apply to the job id.
Parameter
{string} [options.location] The geographic location of the job. Required except for US and EU.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request.
Parameter
{Job} callback.job The newly created job.
Parameter
{object} callback.apiResponse The full API response.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const options = {
  configuration: {
    query: {
      query: 'SELECT url FROM `publicdata.samples.github_nested` LIMIT 100'
    }
  }
};

bigquery.createJob(options, function(err, job) {
  if (err) {
    // Error handling omitted.
  }
  job.getQueryResults(function(err, rows) {});
});

//-
// If the callback is omitted, we'll return a Promise.
//-
bigquery.createJob(options).then(function(data) {
  const job = data[0];
  return job.getQueryResults();
});
method createQueryJob
createQueryJob: { (options: Query | string): Promise<JobResponse>; (options: string | Query, callback: JobCallback): void;};
Run a query as a job. No results are immediately returned. Instead, your callback will be executed with a Job object that you must ping for the results. See the Job documentation for explanations of how to check on the status of the job.
Parameter options
The configuration object. This must be in the format of the `configuration.query` property of a Jobs resource. If a string is provided, this is used as the query string, and all other options are defaulted.
Parameter
{Table} [options.destination] The table to save the query's results to. If omitted, a new table will be created.
Parameter
{boolean} [options.dryRun] If set, don't actually run this job. A valid query will update the job with processing statistics. These can be accessed via `job.metadata`.
Parameter
{object} [options.labels] String key/value pairs to be attached as labels to the newly created Job.
Parameter
{string} [options.location] The geographic location of the job. Required except for US and EU.
Parameter
{number} [options.jobTimeoutMs] Job timeout in milliseconds. If this time limit is exceeded, BigQuery might attempt to stop the job.
Parameter
{string} [options.jobId] Custom job id.
Parameter
{string} [options.jobPrefix] Prefix to apply to the job id.
Parameter
{string} options.query A query string, following the BigQuery query syntax, of the query to execute.
Parameter
{boolean} [options.useLegacySql=false] Option to use legacy SQL syntax.
Parameter
{object} [options.defaultDataset] The dataset. This must be in the format of the `DatasetReference`.
Parameter
{boolean} [options.wrapIntegers] Optionally wrap INT64 in BigQueryInt or custom INT64 value type.
Parameter
{object|array} [options.params] Option to provide query parameters.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request.
Parameter
{Job} callback.job The newly created job for your query.
Parameter
{object} callback.apiResponse The full API response.
Throws
{Error} If a query is not specified.
Throws
{Error} If a Table is not provided as a destination.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const query = 'SELECT url FROM `publicdata.samples.github_nested` LIMIT 100';

//-
// You may pass only a query string, having a new table created to store the
// results of the query.
//-
bigquery.createQueryJob(query, function(err, job) {});

//-
// You can also control the destination table by providing a
// {@link Table} object.
//-
bigquery.createQueryJob({
  destination: bigquery.dataset('higher_education').table('institutions'),
  query: query
}, function(err, job) {});

//-
// After you have run `createQueryJob`, your query will execute in a job. Your
// callback is executed with a {@link Job} object so that you may
// check for the results.
//-
bigquery.createQueryJob(query, function(err, job) {
  if (!err) {
    job.getQueryResults(function(err, rows, apiResponse) {});
  }
});

//-
// If the callback is omitted, we'll return a Promise.
//-
bigquery.createQueryJob(query).then(function(data) {
  const job = data[0];
  const apiResponse = data[1];
  return job.getQueryResults();
});
method createQueryStream
createQueryStream: (options?: Query | string) => ResourceStream<any>;
method dataset
dataset: (id: string, options?: DatasetOptions) => Dataset;
Create a reference to a dataset.
Parameter id
ID of the dataset.
Parameter options
Dataset options.
Parameter
{string} [options.location] The geographic location of the dataset. Required except for US and EU.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('higher_education');
method date
static date: (value: BigQueryDateOptions | string) => BigQueryDate;
The `DATE` type represents a logical calendar date, independent of time zone. It does not represent a specific 24-hour time period. Rather, a given DATE value represents a different 24-hour period when interpreted in different time zones, and may represent a shorter or longer day during Daylight Savings Time transitions.
Parameter value
The date. If a string, this should be in the format the API describes: `YYYY-[M]M-[D]D`. Otherwise, provide an object.
Parameter
{string|number} value.year Four digits.
Parameter
{string|number} value.month One or two digits.
Parameter
{string|number} value.day One or two digits.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const date = bigquery.date('2017-01-01');

//-
// Alternatively, provide an object.
//-
const date2 = bigquery.date({
  year: 2017,
  month: 1,
  day: 1
});
Parameter value
The date. If a string, this should be in the format the API describes: `YYYY-[M]M-[D]D`. Otherwise, provide an object.
Parameter
{string|number} value.year Four digits.
Parameter
{string|number} value.month One or two digits.
Parameter
{string|number} value.day One or two digits.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const date = BigQuery.date('2017-01-01');

//-
// Alternatively, provide an object.
//-
const date2 = BigQuery.date({
  year: 2017,
  month: 1,
  day: 1
});
method datetime
static datetime: (value: BigQueryDatetimeOptions | string) => BigQueryDatetime;
A `DATETIME` data type represents a point in time. Unlike a `TIMESTAMP`, this does not refer to an absolute instance in time. Instead, it is the civil time, or the time that a user would see on a watch or calendar.
Parameter value
The time. If a string, this should be in the format the API describes: `YYYY-[M]M-[D]D[ [H]H:[M]M:[S]S[.DDDDDD]]`. Otherwise, provide an object.
Parameter
{string|number} value.year Four digits.
Parameter
{string|number} value.month One or two digits.
Parameter
{string|number} value.day One or two digits.
Parameter
{string|number} [value.hours] One or two digits (`00`-`23`).
Parameter
{string|number} [value.minutes] One or two digits (`00`-`59`).
Parameter
{string|number} [value.seconds] One or two digits (`00`-`59`).
Parameter
{string|number} [value.fractional] Up to six digits for microsecond precision.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const datetime = bigquery.datetime('2017-01-01 13:00:00');

//-
// Alternatively, provide an object.
//-
const datetime2 = bigquery.datetime({
  year: 2017,
  month: 1,
  day: 1,
  hours: 14,
  minutes: 0,
  seconds: 0
});
method decodeIntegerValue_
static decodeIntegerValue_: (value: IntegerTypeCastValue) => number;
Convert an INT64 value to Number.
Parameter value
The INT64 value to convert.
method geography
static geography: (value: string) => Geography;
A geography value represents a surface area on the Earth in Well-known Text (WKT) format.
Parameter value
The geospatial data.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const geography = bigquery.geography('POINT(1 2)');
method getDatasets
getDatasets: { (options?: GetDatasetsOptions): Promise<DatasetsResponse>; (options: GetDatasetsOptions, callback: DatasetsCallback): void; (callback: DatasetsCallback): void;};
List all or some of the datasets in your project.
Parameter options
Configuration object.
Parameter
{boolean} [options.all] List all datasets, including hidden ones.
Parameter
{boolean} [options.autoPaginate] Have pagination handled automatically. Default: true.
Parameter
{number} [options.maxApiCalls] Maximum number of API calls to make.
Parameter
{number} [options.maxResults] Maximum number of results to return.
Parameter
{string} [options.pageToken] Token returned from a previous call, to request the next page of results.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Dataset[]} callback.datasets The list of datasets in your project.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

bigquery.getDatasets(function(err, datasets) {
  if (!err) {
    // datasets is an array of Dataset objects.
  }
});

//-
// To control how many API requests are made and page through the results
// manually, set `autoPaginate` to `false`.
//-
function manualPaginationCallback(err, datasets, nextQuery, apiResponse) {
  if (nextQuery) {
    // More results exist.
    bigquery.getDatasets(nextQuery, manualPaginationCallback);
  }
}

bigquery.getDatasets({autoPaginate: false}, manualPaginationCallback);

//-
// If the callback is omitted, we'll return a Promise.
//-
bigquery.getDatasets().then(function(datasets) {});
method getDatasetsStream
getDatasetsStream: (options?: GetDatasetsOptions) => ResourceStream<Dataset>;
method getJobs
getJobs: { (options?: GetJobsOptions): Promise<GetJobsResponse>; (options: GetJobsOptions, callback: GetJobsCallback): void; (callback: GetJobsCallback): void;};
Get all of the jobs from your project.
Parameter options
Configuration object.
Parameter
{boolean} [options.allUsers] Display jobs owned by all users in the project.
Parameter
{boolean} [options.autoPaginate] Have pagination handled automatically. Default: true.
Parameter
{number} [options.maxApiCalls] Maximum number of API calls to make.
Parameter
{number} [options.maxResults] Maximum number of results to return.
Parameter
{string} [options.pageToken] Token returned from a previous call, to request the next page of results.
Parameter
{string} [options.projection] Restrict information returned to a set of selected fields. Acceptable values are "full", for all job data, and "minimal", to not include the job configuration.
Parameter
{string} [options.stateFilter] Filter for job state. Acceptable values are "done", "pending", and "running". Sending an array to this option performs a disjunction.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Job[]} callback.jobs The list of jobs in your project.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

bigquery.getJobs(function(err, jobs) {
  if (!err) {
    // jobs is an array of Job objects.
  }
});

//-
// To control how many API requests are made and page through the results
// manually, set `autoPaginate` to `false`.
//-
function manualPaginationCallback(err, jobs, nextQuery, apiResponse) {
  if (nextQuery) {
    // More results exist.
    bigquery.getJobs(nextQuery, manualPaginationCallback);
  }
}

bigquery.getJobs({autoPaginate: false}, manualPaginationCallback);

//-
// If the callback is omitted, we'll return a Promise.
//-
bigquery.getJobs().then(function(data) {
  const jobs = data[0];
});
method getJobsStream
getJobsStream: (options?: GetJobsOptions) => ResourceStream<Job>;
method getTypeDescriptorFromProvidedType_
static getTypeDescriptorFromProvidedType_: ( providedType: string | ProvidedTypeStruct | ProvidedTypeArray) => ValueType;
Return a value's provided type.
Parameter providedType
The type.
Returns
{string} The valid type provided.
Throws
{error} If the type provided is invalid.
See Data Type
method getTypeDescriptorFromValue_
static getTypeDescriptorFromValue_: (value: unknown) => ValueType;
Detect a value's type.
Parameter value
The value.
Returns
{string} The type detected from the value.
Throws
{error} If the type could not be detected.
See Data Type
method int
static int: ( value: string | number | IntegerTypeCastValue, typeCastOptions?: IntegerTypeCastOptions) => BigQueryInt;
A BigQueryInt wraps 'INT64' values. Can be used to maintain precision.
Parameter value
The INT64 value to convert.
Parameter typeCastOptions
Configuration to convert value. Must provide an `integerTypeCastFunction` to handle conversion.
Returns
{BigQueryInt}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const largeIntegerValue = Number.MAX_SAFE_INTEGER + 1;

const options = {
  integerTypeCastFunction: value => value.split(),
};

const bqInteger = bigquery.int(largeIntegerValue, options);

const customValue = bqInteger.valueOf();
// customValue is the value returned from your `integerTypeCastFunction`.
method job
job: (id: string, options?: JobOptions) => Job;
Create a reference to an existing job.
Parameter id
ID of the job.
Parameter options
Configuration object.
Parameter
{string} [options.location] The geographic location of the job. Required except for US and EU.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const myExistingJob = bigquery.job('job-id');
method mergeSchemaWithRows_
static mergeSchemaWithRows_: ( schema: TableSchema | TableField, rows: TableRow[], wrapIntegers: boolean | IntegerTypeCastOptions, selectedFields?: string[]) => any[];
Merge a rowset returned from the API with a table schema.
Parameter schema
Parameter rows
Parameter wrapIntegers
Wrap values of 'INT64' type in BigQueryInt objects. If a `boolean`, this will wrap values in BigQueryInt objects. If an `object`, this will return a value returned by `wrapIntegers.integerTypeCastFunction`. Please see IntegerTypeCastOptions for options descriptions.
Parameter selectedFields
List of fields to return. If unspecified, all fields are returned.
Returns
Fields using their matching names from the table's schema.
method query
query: { (query: string, options?: QueryOptions): Promise<QueryRowsResponse>; ( query: Query, options?: QueryResultsOptions ): Promise<SimpleQueryRowsResponse>; ( query: string, options: QueryResultsOptions, callback?: QueryRowsCallback ): void; ( query: Query, options: QueryResultsOptions, callback?: SimpleQueryRowsCallback ): void; (query: string, callback?: QueryRowsCallback): void; (query: Query, callback?: SimpleQueryRowsCallback): void;};
Run a query scoped to your project. For manual pagination please refer to BigQuery.createQueryJob.
Parameter query
A string SQL query or configuration object. For all available options, see Jobs: query request body.
Parameter
{string} [query.location] The geographic location of the job. Required except for US and EU.
Parameter
{string} [query.jobId] Custom id for the underlying job.
Parameter
{string} [query.jobPrefix] Prefix to apply to the underlying job id.
Parameter
{object|Array<*>} query.params For positional SQL parameters, provide an array of values. For named SQL parameters, provide an object which maps each named parameter to its value. The supported types are integers, floats, BigQuery.date objects, BigQuery.datetime objects, BigQuery.time objects, BigQuery.timestamp objects, Strings, Booleans, and Objects.
Parameter
{string} query.query A query string, following the BigQuery query syntax, of the query to execute.
Parameter
{object|Array<*>} query.types Provided types for query parameters. For positional SQL parameters, provide an array of types. For named SQL parameters, provide an object which maps each named parameter to its type.
Parameter
{boolean} [query.useLegacySql=false] Option to use legacy SQL syntax.
Parameter options
Configuration object for query results.
Parameter
{number} [options.maxResults] Maximum number of results to read.
Parameter
{number} [options.timeoutMs] How long to wait for the query to complete, in milliseconds, before returning. Default is 10 seconds. If the timeout passes before the job completes, an error will be returned and the 'jobComplete' field in the response will be false.
Parameter
{boolean|IntegerTypeCastOptions} [options.wrapIntegers=false] Wrap values of 'INT64' type in BigQueryInt objects. If a `boolean`, this will wrap values in BigQueryInt objects. If an `object`, this will return a value returned by `wrapIntegers.integerTypeCastFunction`. Please see IntegerTypeCastOptions for options descriptions.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{array} callback.rows The list of results from your query.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const query = 'SELECT url FROM `publicdata.samples.github_nested` LIMIT 100';

bigquery.query(query, function(err, rows) {
  if (!err) {
    // rows is an array of results.
  }
});

//-
// Positional SQL parameters are supported.
//-
bigquery.query({
  query: [
    'SELECT url',
    'FROM `publicdata.samples.github_nested`',
    'WHERE repository.owner = ?'
  ].join(' '),
  params: ['google']
}, function(err, rows) {});

//-
// Or if you prefer to name them, that's also supported.
//-
bigquery.query({
  query: [
    'SELECT url',
    'FROM `publicdata.samples.github_nested`',
    'WHERE repository.owner = @owner'
  ].join(' '),
  params: {owner: 'google'}
}, function(err, rows) {});

//-
// Providing types for SQL parameters is supported.
//-
bigquery.query({
  query: [
    'SELECT url',
    'FROM `publicdata.samples.github_nested`',
    'WHERE repository.owner = ?'
  ].join(' '),
  params: [null],
  types: ['string']
}, function(err, rows) {});

//-
// If you need to use a `DATE`, `DATETIME`, `TIME`, or `TIMESTAMP` type in
// your query, see {@link BigQuery.date}, {@link BigQuery.datetime},
// {@link BigQuery.time}, and {@link BigQuery.timestamp}.
//-

//-
// If the callback is omitted, we'll return a Promise.
//-
bigquery.query(query).then(function(data) {
  const rows = data[0];
});
method queryAsStream_
queryAsStream_: (query: Query, callback?: SimpleQueryRowsCallback) => void;
This method will be called by `createQueryStream()`. It is required to properly set the `autoPaginate` option value.
method time
static time: (value: BigQueryTimeOptions | string) => BigQueryTime;
A `TIME` data type represents a time, independent of a specific date.
Parameter value
The time. If a string, this should be in the format the API describes: `[H]H:[M]M:[S]S[.DDDDDD]`. Otherwise, provide an object.
Parameter
{string|number} [value.hours] One or two digits (`00`-`23`).
Parameter
{string|number} [value.minutes] One or two digits (`00`-`59`).
Parameter
{string|number} [value.seconds] One or two digits (`00`-`59`).
Parameter
{string|number} [value.fractional] Up to six digits for microsecond precision.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const time = bigquery.time('14:00:00'); // 2:00 PM

//-
// Alternatively, provide an object.
//-
const time2 = bigquery.time({
  hours: 14,
  minutes: 0,
  seconds: 0
});
method timestamp
static timestamp: ( value: Date | PreciseDate | string | number) => BigQueryTimestamp;
A timestamp represents an absolute point in time, independent of any time zone or convention such as Daylight Savings Time.
Parameter value
The time.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const timestamp = bigquery.timestamp(new Date());
method valueToQueryParameter_
static valueToQueryParameter_: ( value: any, providedType?: string | ProvidedTypeStruct | ProvidedTypeArray) => bigquery.IQueryParameter;
Convert a value into a `queryParameter` object.
Parameter value
The value.
Parameter providedType
Provided query parameter type.
Returns
{object} A properly-formed `queryParameter` object.
class BigQueryDate
class BigQueryDate {}
Date class for BigQuery.
constructor
constructor(value: string | BigQueryDateOptions);
property value
value: string;
class BigQueryDatetime
class BigQueryDatetime {}
Datetime class for BigQuery.
constructor
constructor(value: string | BigQueryDatetimeOptions);
property value
value: string;
class BigQueryInt
class BigQueryInt extends Number {}
Build a BigQueryInt object. For long integers, a string can be provided.
Parameter value
The 'INT64' value.
Parameter typeCastOptions
Configuration to convert values of 'INT64' type to a custom value. Must provide an `integerTypeCastFunction` to handle conversion.
Parameter
{function} typeCastOptions.integerTypeCastFunction A custom, user-provided function to convert the value.
Parameter
{string|string[]} [typeCastOptions.fields] Schema field names to be converted using `integerTypeCastFunction`.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const anInt = bigquery.int(7);
constructor
constructor( value: string | number | IntegerTypeCastValue, typeCastOptions?: IntegerTypeCastOptions);
property type
type: string;
property typeCastFunction
typeCastFunction?: Function;
property value
value: string;
method toJSON
toJSON: () => Json;
method valueOf
valueOf: () => any;
class BigQueryTime
class BigQueryTime {}
Time class for BigQuery.
constructor
constructor(value: string | BigQueryTimeOptions);
property value
value: string;
class BigQueryTimestamp
class BigQueryTimestamp {}
Timestamp class for BigQuery.
constructor
constructor(value: any);
property value
value: string;
method fromFloatValue_
fromFloatValue_: (value: number) => PreciseDate;
class Dataset
class Dataset extends ServiceObject {}
Interact with your BigQuery dataset. Create a Dataset instance with BigQuery#createDataset or BigQuery#dataset.
Parameter bigQuery
BigQuery instance.
Parameter id
The ID of the Dataset.
Parameter options
Dataset options.
Parameter
{string} [options.location] The geographic location of the dataset. Defaults to US.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');
constructor
constructor(bigQuery: BigQuery, id: string, options?: DatasetOptions);
property bigQuery
bigQuery: BigQuery;
property location
location?: string;
property projectId
projectId?: string;
method createQueryJob
createQueryJob: { (options: string | Query): Promise<JobResponse>; (options: string | Query, callback: JobCallback): void;};
Run a query as a job. No results are immediately returned. Instead, your callback will be executed with a Job object that you must ping for the results. See the Job documentation for explanations of how to check on the status of the job.
See BigQuery#createQueryJob for full documentation of this method.
Parameter options
See BigQuery#createQueryJob for full documentation of this method.
Parameter callback
See BigQuery#createQueryJob for full documentation of this method.
Returns
{Promise} See BigQuery#createQueryJob for full documentation of this method.
method createQueryStream
createQueryStream: (options: Query | string) => Duplex;
Run a query scoped to your dataset as a readable object stream.
See BigQuery#createQueryStream for full documentation of this method.
Parameter options
See BigQuery#createQueryStream for full documentation of this method.
Returns
{stream}
method createRoutine
createRoutine: { (id: string, config: RoutineMetadata): Promise<RoutineResponse>; (id: string, config: bigquery.IRoutine, callback: RoutineCallback): void;};
Create a Routine.
Parameter id
The routine ID.
Parameter config
A routine resource (https://cloud.google.com/bigquery/docs/reference/rest/v2/routines#Routine).
Parameter callback
The callback function.
Returns
{Promise}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');

const id = 'my-routine';
const config = {
  arguments: [{
    name: 'x',
    dataType: {
      typeKind: 'INT64'
    }
  }],
  definitionBody: 'x * 3',
  routineType: 'SCALAR_FUNCTION',
  returnType: {
    typeKind: 'INT64'
  }
};

dataset.createRoutine(id, config, (err, routine, apiResponse) => {
  if (!err) {
    // The routine was created successfully.
  }
});
Example 2
If the callback is omitted, a Promise will be returned.
const [routine, apiResponse] = await dataset.createRoutine(id, config);
method createTable
createTable: { (id: string, options: TableMetadata): Promise<TableResponse>; (id: string, options: TableMetadata, callback: TableCallback): void; (id: string, callback: TableCallback): void;};
Create a Table given a tableId or configuration object.
Parameter id
Table id.
Parameter options
See a Table resource.
Parameter
{string|object} [options.schema] A comma-separated list of name:type pairs. Valid types are "string", "integer", "float", "boolean", and "timestamp". If the type is omitted, it is assumed to be "string". Example: "name:string, age:integer". Schemas can also be specified as a JSON array of fields, which allows for nested and repeated fields. See a Table resource for more detailed information.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Table} callback.table The newly created table.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

const tableId = 'institution_data';

const options = {
  // From the data.gov CSV dataset (http://goo.gl/kSE7z6):
  schema: 'UNITID,INSTNM,ADDR,CITY,STABBR,ZIP,FIPS,OBEREG,CHFNM,...'
};

dataset.createTable(tableId, options, (err, table, apiResponse) => {});

//-
// If the callback is omitted, we'll return a Promise.
//-
dataset.createTable(tableId, options).then((data) => {
  const table = data[0];
  const apiResponse = data[1];
});
method delete
delete: { (options?: DatasetDeleteOptions): Promise<[Metadata]>; (options: DatasetDeleteOptions, callback: DeleteCallback): void; (callback: DeleteCallback): void;};
Delete the dataset.
Parameter options
The configuration object.
Parameter
{boolean} [options.force=false] Force delete dataset and all tables.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

//-
// Delete the dataset, only if it does not have any tables.
//-
dataset.delete((err, apiResponse) => {});

//-
// Delete the dataset and any tables it contains.
//-
dataset.delete({ force: true }, (err, apiResponse) => {});

//-
// If the callback is omitted, we'll return a Promise.
//-
dataset.delete().then((data) => {
  const apiResponse = data[0];
});
method getModels
getModels: { (options?: GetModelsOptions): Promise<GetModelsResponse>; (options: GetModelsOptions, callback: GetModelsCallback): void; (callback: GetModelsCallback): void;};
Get a list of Model resources.
Parameter options
Configuration object.
Parameter
{boolean} [options.autoPaginate=true] Have pagination handled automatically.
Parameter
{number} [options.maxApiCalls] Maximum number of API calls to make.
Parameter
{number} [options.maxResults] Maximum number of results to return.
Parameter
{string} [options.pageToken] Token returned from a previous call, to request the next page of results.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Model[]} callback.models The list of models from your Dataset.
Parameter
{GetModelsOptions} callback.nextQuery If autoPaginate is set to true, this will be a prepared query for the next page of results.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

dataset.getModels((err, models) => {
  // models is an array of `Model` objects.
});
Example 2
To control how many API requests are made and page through the results manually, set autoPaginate to false.
function manualPaginationCallback(err, models, nextQuery, apiResponse) {
  if (nextQuery) {
    // More results exist.
    dataset.getModels(nextQuery, manualPaginationCallback);
  }
}

dataset.getModels({autoPaginate: false}, manualPaginationCallback);
Example 3
If the callback is omitted, we'll return a Promise.
dataset.getModels().then((data) => {
  const models = data[0];
});
method getModelsStream
getModelsStream: (options?: GetModelsOptions) => ResourceStream<Model>;
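This method has no example in this reference; a minimal sketch of consuming the returned ResourceStream (the 'institutions' dataset name is illustrative, matching the surrounding examples):

```javascript
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

dataset.getModelsStream()
  .on('error', console.error)
  .on('data', (model) => {
    // model is a `Model` object.
  })
  .on('end', () => {
    // All models have been retrieved.
  });
```

Because pagination is handled inside the stream, this avoids buffering every page of results in memory at once.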
method getRoutines
getRoutines: { (options?: GetRoutinesOptions): Promise<GetRoutinesResponse>; (options: GetRoutinesOptions, callback: GetRoutinesCallback): void; (callback: GetRoutinesCallback): void;};
Get a list of routines.
Parameter options
Request options.
Parameter
{boolean} [options.autoPaginate=true] Have pagination handled automatically.
Parameter
{number} [options.maxApiCalls] Maximum number of API calls to make.
Parameter
{number} [options.maxResults] Maximum number of results to return.
Parameter
{string} [options.pageToken] Token returned from a previous call, to request the next page of results.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Routine[]} callback.routines The list of routines from your Dataset.
Parameter
{GetRoutinesOptions} callback.nextQuery If autoPaginate is set to true, this will be a prepared query for the next page of results.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

dataset.getRoutines((err, routines) => {
  // routines is an array of `Routine` objects.
});
Example 2
To control how many API requests are made and page through the results manually, set autoPaginate to false.
function manualPaginationCallback(err, routines, nextQuery, apiResponse) {
  if (nextQuery) {
    // More results exist.
    dataset.getRoutines(nextQuery, manualPaginationCallback);
  }
}

dataset.getRoutines({autoPaginate: false}, manualPaginationCallback);
Example 3
If the callback is omitted, a Promise will be returned.
const [routines] = await dataset.getRoutines();
method getRoutinesStream
getRoutinesStream: (options?: GetRoutinesOptions) => ResourceStream<Routine>;
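This method has no example in this reference; a minimal sketch of consuming the returned ResourceStream (dataset name illustrative):

```javascript
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

dataset.getRoutinesStream()
  .on('error', console.error)
  .on('data', (routine) => {
    // routine is a `Routine` object.
  })
  .on('end', () => {
    // All routines have been retrieved.
  });
```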
method getTables
getTables: { (options?: GetTablesOptions): Promise<GetTablesResponse>; (options: GetTablesOptions, callback: GetTablesCallback): void; (callback: GetTablesCallback): void;};
Get a list of Table resources.
Parameter options
Configuration object.
Parameter
{boolean} [options.autoPaginate=true] Have pagination handled automatically.
Parameter
{number} [options.maxApiCalls] Maximum number of API calls to make.
Parameter
{number} [options.maxResults] Maximum number of results to return.
Parameter
{string} [options.pageToken] Token returned from a previous call, to request the next page of results.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Table[]} callback.tables The list of tables from your Dataset.
Parameter
{GetTablesOptions} callback.nextQuery If autoPaginate is set to true, this will be a prepared query for the next page of results.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

dataset.getTables((err, tables) => {
  // tables is an array of `Table` objects.
});

//-
// To control how many API requests are made and page through the results
// manually, set `autoPaginate` to `false`.
//-
function manualPaginationCallback(err, tables, nextQuery, apiResponse) {
  if (nextQuery) {
    // More results exist.
    dataset.getTables(nextQuery, manualPaginationCallback);
  }
}

dataset.getTables({autoPaginate: false}, manualPaginationCallback);

//-
// If the callback is omitted, we'll return a Promise.
//-
dataset.getTables().then((data) => {
  const tables = data[0];
});
method getTablesStream
getTablesStream: (options?: GetTablesOptions) => ResourceStream<Table>;
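This method has no example in this reference; a minimal sketch of consuming the returned ResourceStream (dataset name illustrative):

```javascript
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

dataset.getTablesStream()
  .on('error', console.error)
  .on('data', (table) => {
    // table is a `Table` object.
  })
  .on('end', () => {
    // All tables have been retrieved.
  });
```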
method model
model: (id: string) => Model;
Create a Model object.
Parameter id
The ID of the model.
Returns
{Model}
Throws
{TypeError} if model ID is missing.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

const model = dataset.model('my-model');
method query
query: { (options: Query): Promise<QueryRowsResponse>; (options: string): Promise<QueryRowsResponse>; (options: Query, callback: SimpleQueryRowsCallback): void; (options: string, callback: SimpleQueryRowsCallback): void;};
Run a query scoped to your dataset.
See BigQuery#query for full documentation of this method.
Parameter options
See BigQuery#query for full documentation of this method.
Parameter callback
See BigQuery#query for full documentation of this method.
Returns
{Promise} See BigQuery#query for full documentation of this method.
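No example accompanies this method here; a minimal sketch of a dataset-scoped query (the table name `institution_data` is illustrative; because the query is scoped to the dataset, unqualified table names resolve against it):

```javascript
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

dataset.query('SELECT * FROM institution_data LIMIT 10')
  .then(([rows]) => {
    // rows is an array of query results.
  });
```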
method routine
routine: (id: string) => Routine;
Create a Routine object.
Parameter id
The ID of the routine.
Returns
{Routine}
Throws
{TypeError} if routine ID is missing.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

const routine = dataset.routine('my_routine');
method table
table: (id: string, options?: TableOptions) => Table;
Create a Table object.
Parameter id
The ID of the table.
Parameter options
Table options.
Parameter
{string} [options.location] The geographic location of the table, by default this value is inherited from the dataset. This can be used to configure the location of all jobs created through a table instance. It cannot be used to set the actual location of the table. This value will be superseded by any API responses containing location data for the table.
Returns
{Table}
Throws
{TypeError} if table ID is missing.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('institutions');

const institutions = dataset.table('institution_data');
class Geography
class Geography {}
Geography class for BigQuery.
constructor
constructor(value: string);
property value
value: string;
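The class has no example in this reference. A minimal sketch using the BigQuery#geography factory listed in the index (the WKT coordinates are illustrative):

```javascript
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

// Wrap a WKT string so it can be bound as a GEOGRAPHY query parameter.
const point = bigquery.geography('POINT(-122.33 47.61)');
console.log(point.value);
```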
class Job
class Job extends Operation {}
Job objects are returned from various places in the BigQuery API:
- BigQuery#getJobs
- BigQuery#job
- BigQuery#query
- BigQuery#createJob
- Table#copy
- Table#createWriteStream
- Table#extract
- Table#load
They can be used to check the status of a running job or to fetch the results of a previously-executed one.
Parameter bigQuery
BigQuery instance.
Parameter id
The ID of the job.
Parameter options
Configuration object.
Parameter
{string} [options.location] The geographic location of the job. Required except for US and EU.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const job = bigquery.job('job-id');

//-
// All jobs are event emitters. The status of each job is polled
// continuously, starting only after you register a "complete" listener.
//-
job.on('complete', (metadata) => {
  // The job is complete.
});

//-
// Be sure to register an error handler as well to catch any issues which
// impeded the job.
//-
job.on('error', (err) => {
  // An error occurred during the job.
});

//-
// To force the Job object to stop polling for updates, simply remove any
// "complete" listeners you've registered.
//
// The easiest way to do this is with `removeAllListeners()`.
//-
job.removeAllListeners();
constructor
constructor(bigQuery: BigQuery, id: string, options?: JobOptions);
property bigQuery
bigQuery: BigQuery;
property location
location?: string;
property projectId
projectId?: string;
method cancel
cancel: { (): Promise<CancelResponse>; (callback: CancelCallback): void };
Cancel a job. Use Job#getMetadata to see if the cancel completes successfully. See an example implementation below.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const job = bigquery.job('job-id');

job.cancel((err, apiResponse) => {
  // Check to see if the job completes successfully.
  job.on('error', (err) => {});
  job.on('complete', (metadata) => {});
});

//-
// If the callback is omitted, we'll return a Promise.
//-
job.cancel().then((data) => {
  const apiResponse = data[0];
});
method getQueryResults
getQueryResults: { (options?: QueryResultsOptions): Promise<QueryRowsResponse>; (options: QueryResultsOptions, callback: QueryRowsCallback): void; (callback: QueryRowsCallback): void;};
Get the results of a job.
Parameter options
Configuration object.
Parameter
{boolean} [options.autoPaginate=true] Have pagination handled automatically.
Parameter
{number} [options.maxApiCalls] Maximum number of API calls to make.
Parameter
{number} [options.maxResults] Maximum number of results to read.
Parameter
{string} [options.pageToken] Page token, returned by a previous call, to request the next page of results. Note: This is automatically added to the nextQuery argument of your callback.
Parameter
{number} [options.startIndex] Zero-based index of the starting row.
Parameter
{number} [options.timeoutMs] How long to wait for the query to complete, in milliseconds, before returning. Default is 10 seconds. If the timeout passes before the job completes, an error will be returned and the 'jobComplete' field in the response will be false.
Parameter
{boolean|IntegerTypeCastOptions} [options.wrapIntegers=false] Wrap values of 'INT64' type in BigQueryInt objects. If a boolean, this will wrap values in BigQueryInt objects. If an object, this will return a value returned by wrapIntegers.integerTypeCastFunction.
Parameter callback
The callback function. If autoPaginate is set to false, a ManualQueryResultsCallback should be used.
Returns
{Promise}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const job = bigquery.job('job-id');

//-
// Get all of the results of a query.
//-
job.getQueryResults((err, rows) => {
  if (!err) {
    // rows is an array of results.
  }
});

//-
// Customize the results you want to fetch.
//-
job.getQueryResults({
  maxResults: 100
}, (err, rows) => {});

//-
// To control how many API requests are made and page through the results
// manually, set `autoPaginate` to `false`.
//-
function manualPaginationCallback(err, rows, nextQuery, apiResponse) {
  if (nextQuery) {
    // More results exist.
    job.getQueryResults(nextQuery, manualPaginationCallback);
  }
}

job.getQueryResults({
  autoPaginate: false
}, manualPaginationCallback);

//-
// If the callback is omitted, we'll return a Promise.
//-
job.getQueryResults().then((data) => {
  const rows = data[0];
});
method getQueryResultsAsStream_
getQueryResultsAsStream_: ( options: QueryResultsOptions, callback: QueryRowsCallback) => void;
This method will be called by getQueryResultsStream(). It is required to properly set the autoPaginate option value.
method getQueryResultsStream
getQueryResultsStream: (options?: QueryResultsOptions) => ResourceStream<any>;
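This method has no example in this reference; a minimal sketch of streaming query results row by row (the job ID is illustrative):

```javascript
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const job = bigquery.job('job-id');

job.getQueryResultsStream()
  .on('error', console.error)
  .on('data', (row) => {
    // row is a single query result.
  })
  .on('end', () => {
    // All rows have been retrieved.
  });
```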
method poll_
poll_: (callback: MetadataCallback) => void;
Poll for a status update. Execute the callback:
- callback(err): Job failed
- callback(): Job incomplete
- callback(null, metadata): Job complete
Parameter callback
class Model
class Model extends ServiceObject {}
Model objects are returned by methods such as Dataset#model and Dataset#getModels.
Parameter dataset
Dataset instance.
Parameter id
The ID of the model.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');

const model = dataset.model('my-model');
constructor
constructor(dataset: Dataset, id: string);
property bigQuery
bigQuery: BigQuery;
property dataset
dataset: Dataset;
method createExtractJob
createExtractJob: { ( destination: string | File, options?: CreateExtractJobOptions ): Promise<JobResponse>; ( destination: string | File, options: CreateExtractJobOptions, callback: JobCallback ): void; (destination: string | File, callback: JobCallback): void;};
Export model to Cloud Storage.
Parameter destination
Where the model should be exported to. A string or a File object.
Parameter options
The configuration object. For all extract job options, see [CreateExtractJobOptions](https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationExtract).
Parameter
{string} [options.format] The format to export the data in. Allowed options are "ML_TF_SAVED_MODEL" or "ML_XGBOOST_BOOSTER". Default: "ML_TF_SAVED_MODEL".
Parameter
{string} [options.jobId] Custom job id.
Parameter
{string} [options.jobPrefix] Prefix to apply to the job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request.
Parameter
{Job} callback.job The job used to export the model.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If a destination isn't a string or File object.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const model = dataset.model('my-model');

const extractedModel = 'gs://my-bucket/extracted-model';

function callback(err, job, apiResponse) {
  // `job` is a Job object that can be used to check the status of the
  // request.
}

//-
// To use the default options, just pass a string or a {@link
// https://googleapis.dev/nodejs/storage/latest/File.html File} object.
//
// Note: The default format is 'ML_TF_SAVED_MODEL'.
//-
model.createExtractJob(extractedModel, callback);

//-
// If you need more customization, pass an `options` object.
//-
const options = {
  format: 'ML_TF_SAVED_MODEL',
  jobId: '123abc'
};

model.createExtractJob(extractedModel, options, callback);

//-
// If the callback is omitted, we'll return a Promise.
//-
model.createExtractJob(extractedModel, options).then((data) => {
  const job = data[0];
  const apiResponse = data[1];
});
method extract
extract: { ( destination: string | File, options?: CreateExtractJobOptions ): Promise<JobMetadataResponse>; ( destination: string | File, options: CreateExtractJobOptions, callback?: JobMetadataCallback ): void; (destination: string | File, callback?: JobMetadataCallback): void;};
Export model to Cloud Storage.
Parameter destination
Where the model should be exported to. A string or a File object.
Parameter options
The configuration object. For all extract job options, see [CreateExtractJobOptions](https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationExtract).
Parameter
{string} [options.format] The format to export the data in. Allowed options are "ML_TF_SAVED_MODEL" or "ML_XGBOOST_BOOSTER". Default: "ML_TF_SAVED_MODEL".
Parameter
{string} [options.jobId] Custom id for the underlying job.
Parameter
{string} [options.jobPrefix] Prefix to apply to the underlying job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If destination isn't a string or File object.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const model = dataset.model('my-model');

const extractedModel = 'gs://my-bucket/extracted-model';

function callback(err, apiResponse) {}

//-
// To use the default options, just pass a string or a {@link
// https://googleapis.dev/nodejs/storage/latest/File.html File} object.
//
// Note: The default format is 'ML_TF_SAVED_MODEL'.
//-
model.extract(extractedModel, callback);

//-
// If you need more customization, pass an `options` object.
//-
const options = {
  format: 'ML_TF_SAVED_MODEL',
  jobId: '123abc'
};

model.extract(extractedModel, options, callback);

//-
// If the callback is omitted, we'll return a Promise.
//-
model.extract(extractedModel, options).then((data) => {
  const apiResponse = data[0];
});
class Routine
class Routine extends ServiceObject {}
Routine objects are returned by methods such as Dataset#routine, Dataset#createRoutine, and Dataset#getRoutines.
Parameter dataset
Dataset instance.
Parameter id
The ID of the routine.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');

const routine = dataset.routine('my_routine');
constructor
constructor(dataset: Dataset, id: string);
method setMetadata
setMetadata: { (metadata: RoutineMetadata): Promise<SetMetadataResponse>; (metadata: bigquery.IRoutine, callback: ResponseCallback): void;};
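This method has no example in this reference; a minimal sketch of updating a routine's metadata (the `description` field is part of the REST Routine resource; the dataset and routine names are illustrative):

```javascript
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const routine = dataset.routine('my_routine');

const metadata = {
  description: 'A routine that does something useful.'
};

routine.setMetadata(metadata).then(([apiResponse]) => {
  // Metadata has been updated.
});
```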
class RowBatch
class RowBatch {}
Class used to help batch rows.
Parameter options
The batching options.
constructor
constructor(options: RowBatchOptions);
property batchOptions
batchOptions: RowBatchOptions;
property bytes
bytes: number;
property callbacks
callbacks: InsertRowsCallback[];
property created
created: number;
property rows
rows: any[];
method add
add: (row: RowMetadata, callback?: InsertRowsCallback) => void;
Adds a row to the current batch.
Parameter row
The row to insert.
Parameter callback
The callback function.
method canFit
canFit: (row: RowMetadata) => boolean;
Indicates if a given row can fit in the batch.
Parameter row
The row in question.
Returns
{boolean}
method isAtMax
isAtMax: () => boolean;
Checks to see if this batch is at the maximum allowed payload size.
Returns
{boolean}
method isFull
isFull: () => boolean;
Indicates if the batch is at capacity.
Returns
{boolean}
class RowQueue
class RowQueue {}
Standard row queue used for inserting rows.
Parameter table
The table.
Parameter dup
Row stream.
Parameter options
Insert and batch options.
constructor
constructor(table: Table, dup: Stream, options?: InsertStreamOptions);
property batch
batch: RowBatch;
property batchOptions
batchOptions?: RowBatchOptions;
property inFlight
inFlight: boolean;
property insertRowsOptions
insertRowsOptions: InsertRowsOptions;
property pending
pending?: number;
property stream
stream: Stream;
property table
table: Table;
method add
add: (row: RowMetadata, callback: InsertRowsCallback) => void;
Adds a row to the queue.
Parameter row
The row to insert.
Parameter callback
The insert callback.
method getOptionDefaults
getOptionDefaults: () => RowBatchOptions;
method insert
insert: (callback?: InsertRowsCallback) => void;
Cancels any pending inserts and calls _insert immediately.
method setOptions
setOptions: (options?: RowBatchOptions) => void;
Sets the batching options.
Parameter options
The batching options.
class Table
class Table extends ServiceObject {}
Table objects are returned by methods such as Dataset#table, Dataset#createTable, and Dataset#getTables.
Parameter dataset
Dataset instance.
Parameter id
The ID of the table.
Parameter options
Table options.
Parameter
{string} [options.location] The geographic location of the table, by default this value is inherited from the dataset. This can be used to configure the location of all jobs created through a table instance. It cannot be used to set the actual location of the table. This value will be superseded by any API responses containing location data for the table.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');

const table = dataset.table('my-table');
constructor
constructor(dataset: Dataset, id: string, options?: TableOptions);
property bigQuery
bigQuery: BigQuery;
property dataset
dataset: Dataset;
property location
location?: string;
property rowQueue
rowQueue?: RowQueue;
method copy
copy: { ( destination: Table, metadata?: CopyTableMetadata ): Promise<JobMetadataResponse>; ( destination: Table, metadata: CopyTableMetadata, callback: JobMetadataCallback ): void; (destination: Table, callback: JobMetadataCallback): void;};
Copy data from one table to another, optionally creating that table.
Parameter destination
The destination table.
Parameter metadata
Metadata to set with the copy operation. The metadata object should be in the format of a `JobConfigurationTableCopy` object.
Parameter
{string} [metadata.jobId] Custom id for the underlying job.
Parameter
{string} [metadata.jobPrefix] Prefix to apply to the underlying job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If a destination other than a Table object is provided.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');

const table = dataset.table('my-table');
const yourTable = dataset.table('your-table');

table.copy(yourTable, (err, apiResponse) => {});

//-
// See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationTableCopy
// for all available options.
//-
const metadata = {
  createDisposition: 'CREATE_NEVER',
  writeDisposition: 'WRITE_TRUNCATE'
};

table.copy(yourTable, metadata, (err, apiResponse) => {});

//-
// If the callback is omitted, we'll return a Promise.
//-
table.copy(yourTable, metadata).then((data) => {
  const apiResponse = data[0];
});
method copyFrom
copyFrom: { ( sourceTables: Table | Table[], metadata?: CopyTableMetadata ): Promise<JobMetadataResponse>; ( sourceTables: Table | Table[], metadata: CopyTableMetadata, callback: JobMetadataCallback ): void; (sourceTables: Table | Table[], callback: JobMetadataCallback): void;};
Copy data from multiple tables into this table.
Parameter sourceTables
The source table(s) to copy data from.
Parameter metadata
Metadata to set with the copy operation. The metadata object should be in the format of a `JobConfigurationTableCopy` object.
Parameter
{string} [metadata.jobId] Custom id for the underlying job.
Parameter
{string} [metadata.jobPrefix] Prefix to apply to the underlying job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If a source other than a Table object is provided.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const table = dataset.table('my-table');

const sourceTables = [
  dataset.table('your-table'),
  dataset.table('your-second-table')
];

table.copyFrom(sourceTables, (err, apiResponse) => {});

//-
// See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationTableCopy
// for all available options.
//-
const metadata = {
  createDisposition: 'CREATE_NEVER',
  writeDisposition: 'WRITE_TRUNCATE'
};

table.copyFrom(sourceTables, metadata, (err, apiResponse) => {});

//-
// If the callback is omitted, we'll return a Promise.
//-
table.copyFrom(sourceTables, metadata).then((data) => {
  const apiResponse = data[0];
});
method createCopyFromJob
createCopyFromJob: { ( source: Table | Table[], metadata?: CopyTableMetadata ): Promise<JobResponse>; ( source: Table | Table[], metadata: CopyTableMetadata, callback: JobCallback ): void; (source: Table | Table[], callback: JobCallback): void;};
Copy data from multiple tables into this table.
Parameter sourceTables
The source table(s) to copy data from.
Parameter metadata
Metadata to set with the copy operation. The metadata object should be in the format of a `JobConfigurationTableCopy` object.
Parameter
{string} [metadata.jobId] Custom job id.
Parameter
{string} [metadata.jobPrefix] Prefix to apply to the job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Job} callback.job The job used to copy your table.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If a source other than a Table object is provided.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const table = dataset.table('my-table');

const sourceTables = [
  dataset.table('your-table'),
  dataset.table('your-second-table')
];

const callback = (err, job, apiResponse) => {
  // `job` is a Job object that can be used to check the status of the
  // request.
};

table.createCopyFromJob(sourceTables, callback);

//-
// See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationTableCopy
// for all available options.
//-
const metadata = {
  createDisposition: 'CREATE_NEVER',
  writeDisposition: 'WRITE_TRUNCATE'
};

table.createCopyFromJob(sourceTables, metadata, callback);

//-
// If the callback is omitted, we'll return a Promise.
//-
table.createCopyFromJob(sourceTables, metadata).then((data) => {
  const job = data[0];
  const apiResponse = data[1];
});
method createCopyJob
createCopyJob: { (destination: Table, metadata?: CreateCopyJobMetadata): Promise<JobResponse>; ( destination: Table, metadata: CopyTableMetadata, callback: JobCallback ): void; (destination: Table, callback: JobCallback): void;};
Copy data from one table to another, optionally creating that table.
Parameter destination
The destination table.
Parameter metadata
Metadata to set with the copy operation. The metadata object should be in the format of a `JobConfigurationTableCopy` object.
Parameter
{string} [metadata.jobId] Custom job id.
Parameter
{string} [metadata.jobPrefix] Prefix to apply to the job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Job} callback.job The job used to copy your table.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If a destination other than a Table object is provided.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');

const table = dataset.table('my-table');
const yourTable = dataset.table('your-table');

table.createCopyJob(yourTable, (err, job, apiResponse) => {
  // `job` is a Job object that can be used to check the status of the
  // request.
});

//-
// See https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationTableCopy
// for all available options.
//-
const metadata = {
  createDisposition: 'CREATE_NEVER',
  writeDisposition: 'WRITE_TRUNCATE'
};

table.createCopyJob(yourTable, metadata, (err, job, apiResponse) => {});

//-
// If the callback is omitted, we'll return a Promise.
//-
table.createCopyJob(yourTable, metadata).then((data) => {
  const job = data[0];
  const apiResponse = data[1];
});
method createExtractJob
createExtractJob: { (destination: File, options?: CreateExtractJobOptions): Promise<JobResponse>; ( destination: File, options: CreateExtractJobOptions, callback: JobCallback ): void; (destination: File, callback: JobCallback): void;};
Export table to Cloud Storage.
Parameter destination
Where the file should be exported to. A File object, or an array of File objects.
Parameter options
The configuration object.
Parameter
{string} [options.format] The format to export the data in. Allowed options are "CSV", "JSON", "AVRO", or "PARQUET". Default: "CSV".
Parameter
{boolean} [options.gzip] Specify if you would like the file compressed with GZIP. Default: false.
Parameter
{string} [options.jobId] Custom job id.
Parameter
{string} [options.jobPrefix] Prefix to apply to the job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Job} callback.job The job used to export the table.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If destination isn't a File object.
Throws
{Error} If destination format isn't recognized.
Example 1
const {Storage} = require('@google-cloud/storage');
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const table = dataset.table('my-table');

const storage = new Storage({
  projectId: 'grape-spaceship-123'
});
const extractedFile = storage.bucket('institutions').file('2014.csv');

function callback(err, job, apiResponse) {
  // `job` is a Job object that can be used to check the status of the
  // request.
}

//-
// To use the default options, just pass a {@link
// https://googleapis.dev/nodejs/storage/latest/File.html File} object.
//
// Note: The exported format type will be inferred by the file's extension.
// If you wish to override this, or provide an array of destination files,
// you must provide an `options` object.
//-
table.createExtractJob(extractedFile, callback);

//-
// If you need more customization, pass an `options` object.
//-
const options = {
  format: 'json',
  gzip: true
};

table.createExtractJob(extractedFile, options, callback);

//-
// You can also specify multiple destination files.
//-
table.createExtractJob([
  storage.bucket('institutions').file('2014.json'),
  storage.bucket('institutions-copy').file('2014.json')
], options, callback);

//-
// If the callback is omitted, we'll return a Promise.
//-
table.createExtractJob(extractedFile, options).then((data) => {
  const job = data[0];
  const apiResponse = data[1];
});
method createInsertStream
createInsertStream: (options?: InsertStreamOptions) => Writable;
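This method has no example in this reference; a minimal sketch of writing rows through the returned Writable stream (the row fields `name` and `age` are illustrative, and assume a matching table schema):

```javascript
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const table = dataset.table('my-table');

// Rows written to the stream are batched and inserted by the
// underlying RowQueue.
const insertStream = table.createInsertStream();

insertStream.on('error', console.error);

insertStream.write({name: 'Ada', age: 36});
insertStream.end({name: 'Grace', age: 45});
```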
method createLoadJob
createLoadJob: { ( source: string | File | File[], metadata?: JobLoadMetadata ): Promise<JobResponse>; ( source: string | File | File[], metadata: JobLoadMetadata, callback: JobCallback ): void; (source: string | File | File[], callback: JobCallback): void;};
Load data from a local file or a Cloud Storage File object.
By loading data this way, you create a load job that will run your data load asynchronously. If you would like instantaneous access to your data, insert it using Table#insert.
Note: The file type will be inferred by the given file's extension. If you wish to override this, you must provide metadata.format.
Parameter source
The source file to load. A string (path) to a local file, or one or more objects.
Parameter metadata
Metadata to set with the load operation. The metadata object should be in the format of the `configuration.load` property of a Jobs resource.
Parameter
{string} [metadata.format] The format the data being loaded is in. Allowed options are "AVRO", "CSV", "JSON", "ORC", or "PARQUET".
Parameter
{string} [metadata.jobId] Custom job id.
Parameter
{string} [metadata.jobPrefix] Prefix to apply to the job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{Job} callback.job The job used to load your data.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If the source isn't a string file name or a File instance.
Example 1
const {Storage} = require('@google-cloud/storage');
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const table = dataset.table('my-table');

//-
// Load data from a local file.
//-
const callback = (err, job, apiResponse) => {
  // `job` is a Job object that can be used to check the status of the
  // request.
};

table.createLoadJob('./institutions.csv', callback);

//-
// You may also pass in metadata in the format of a Jobs resource. See
// (https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad)
// for a full list of supported values.
//-
const metadata = {
  encoding: 'ISO-8859-1',
  sourceFormat: 'NEWLINE_DELIMITED_JSON'
};

table.createLoadJob('./my-data.csv', metadata, callback);

//-
// Load data from a file in your Cloud Storage bucket.
//-
const storage = new Storage({projectId: 'grape-spaceship-123'});
const data = storage.bucket('institutions').file('data.csv');
table.createLoadJob(data, callback);

//-
// Load data from multiple files in your Cloud Storage bucket(s).
//-
table.createLoadJob([
  storage.bucket('institutions').file('2011.csv'),
  storage.bucket('institutions').file('2012.csv')
], callback);

//-
// If the callback is omitted, we'll return a Promise.
//-
table.createLoadJob(data).then((data) => {
  const job = data[0];
  const apiResponse = data[1];
});
method createQueryJob
createQueryJob: { (options: Query): Promise<JobResponse>; (options: Query, callback: JobCallback): void;};
Run a query as a job. No results are immediately returned. Instead, your callback will be executed with a Job object that you must poll for results. See the Job documentation for explanations of how to check on the status of the job.
See BigQuery#createQueryJob for full documentation of this method.
method createQueryStream
createQueryStream: (query: Query) => Duplex;
Run a query scoped to your dataset as a readable object stream.
See BigQuery#createQueryStream for full documentation of this method.
Parameter query
See BigQuery#createQueryStream for full documentation of this method.
Returns
{stream} See BigQuery#createQueryStream for full documentation of this method.
method createReadStream
createReadStream: (options?: GetRowsOptions) => ResourceStream<any>;
method createSchemaFromString_
static createSchemaFromString_: (str: string) => TableSchema;
Convert a comma-separated name:type string to a table schema object.
Parameter str
Comma-separated schema string.
Returns
{object} Table schema in the format the API expects.
method createWriteStream
createWriteStream: (metadata: JobLoadMetadata | string) => Writable;
Load data into your table from a readable stream of AVRO, CSV, JSON, ORC, or PARQUET data.
Parameter metadata
Metadata to set with the load operation. The metadata object should be in the format of the `configuration.load` property of a Jobs resource. If a string is given, it will be used as the filetype.
Parameter
{string} [metadata.jobId] Custom job id.
Parameter
{string} [metadata.jobPrefix] Prefix to apply to the job id.
Returns
{WritableStream}
Throws
{Error} If source format isn't recognized.
Example 1
const {BigQuery} = require('@google-cloud/bigquery');const bigquery = new BigQuery();const dataset = bigquery.dataset('my-dataset');const table = dataset.table('my-table');//-// Load data from a CSV file.//-const request = require('request');const csvUrl = 'http://goo.gl/kSE7z6';const metadata = {allowJaggedRows: true,skipLeadingRows: 1};request.get(csvUrl).pipe(table.createWriteStream(metadata)).on('job', (job) => {// `job` is a Job object that can be used to check the status of the// request.}).on('complete', (job) => {// The job has completed successfully.});//-// Load data from a JSON file.//-const fs = require('fs');fs.createReadStream('./test/testdata/testfile.json').pipe(table.createWriteStream('json')).on('job', (job) => {// `job` is a Job object that can be used to check the status of the// request.}).on('complete', (job) => {// The job has completed successfully.});
method createWriteStream_
createWriteStream_: (metadata: JobLoadMetadata | string) => Writable;
Creates a write stream. Unlike the public version, this will not automatically poll the underlying job.
Parameter metadata
Metadata to set with the load operation. The metadata object should be in the format of the `configuration.load` property of a Jobs resource. If a string is given, it will be used as the filetype.
Parameter
{string} [metadata.jobId] Custom job id.
Parameter
{string} [metadata.jobPrefix] Prefix to apply to the job id.
Returns
{WritableStream}
method encodeValue_
static encodeValue_: (value?: {} | null) => {} | null;
Convert a row entry from native types to their encoded types that the API expects.
Parameter value
The value to be converted.
Returns
{*} The converted value.
method extract
extract: { ( destination: File, options?: CreateExtractJobOptions ): Promise<JobMetadataResponse>; ( destination: File, options: CreateExtractJobOptions, callback?: JobMetadataCallback ): void; (destination: File, callback?: JobMetadataCallback): void;};
Export table to Cloud Storage.
Parameter destination
Where the file should be exported to. A string or a File object.
Parameter options
The configuration object.
Parameter
{string} [options.format="CSV"] The format to export the data in. Allowed options are "AVRO", "CSV", "JSON", "ORC" or "PARQUET".
Parameter
{boolean} [options.gzip] Specify if you would like the file compressed with GZIP. Default: false.
Parameter
{string} [options.jobId] Custom id for the underlying job.
Parameter
{string} [options.jobPrefix] Prefix to apply to the underlying job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If destination isn't a File object.
Throws
{Error} If destination format isn't recognized.
Example 1
const {Storage} = require('@google-cloud/storage');
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const table = dataset.table('my-table');

const storage = new Storage({projectId: 'grape-spaceship-123'});
const extractedFile = storage.bucket('institutions').file('2014.csv');

//-
// To use the default options, just pass a
// {@link https://googleapis.dev/nodejs/storage/latest/File.html File} object.
//
// Note: The exported format type will be inferred by the file's extension.
// If you wish to override this, or provide an array of destination files,
// you must provide an `options` object.
//-
table.extract(extractedFile, (err, apiResponse) => {});

//-
// If you need more customization, pass an `options` object.
//-
const options = {
  format: 'json',
  gzip: true
};

table.extract(extractedFile, options, (err, apiResponse) => {});

//-
// You can also specify multiple destination files.
//-
table.extract([
  storage.bucket('institutions').file('2014.json'),
  storage.bucket('institutions-copy').file('2014.json')
], options, (err, apiResponse) => {});

//-
// If the callback is omitted, we'll return a Promise.
//-
table.extract(extractedFile, options).then((data) => {
  const apiResponse = data[0];
});
method formatMetadata_
static formatMetadata_: (options: TableMetadata) => FormattedMetadata;
method getIamPolicy
getIamPolicy: { ( optionsOrCallback?: GetPolicyOptions | PolicyCallback ): Promise<PolicyResponse>; (options: bigquery.IGetPolicyOptions, callback: PolicyCallback): void;};
Get the IAM access control policy for the table.
Returns
{Promise}
method getRows
getRows: { (options?: GetRowsOptions): Promise<RowsResponse>; (options: GetRowsOptions, callback: RowsCallback): void; (callback: RowsCallback): void;};
Retrieves table data from a specified set of rows. The resolved RowsResponse is an array whose first element contains the rows.
method insert
insert: { ( rows: RowMetadata | RowMetadata[], options?: InsertRowsOptions ): Promise<InsertRowsResponse>; (rows: any, options: InsertRowsOptions, callback: InsertRowsCallback): void; (rows: any, callback: InsertRowsCallback): void;};
Stream data into BigQuery one record at a time without running a load job.
If you need to create an entire table from a file, consider using Table#load instead.
Note: if a table was recently created, inserts may fail until the table is consistent within BigQuery. If a `schema` is supplied, this method will automatically retry those failed inserts, and it will even create the table with the provided schema if it does not exist.
See Tabledata: insertAll API Documentation
See Streaming Insert Limits
See Troubleshooting Errors
Parameter rows
The rows to insert into the table.
Parameter options
Configuration object.
Parameter
{boolean} [options.createInsertId=true] Automatically insert a default row id when one is not provided.
Parameter
{boolean} [options.ignoreUnknownValues=false] Accept rows that contain values that do not match the schema. The unknown values are ignored.
Parameter
{number} [options.partialRetries=3] Number of times to retry inserting rows for cases of partial failures.
Parameter
{boolean} [options.raw] If `true`, the `rows` argument is expected to be formatted according to the specification.
Parameter
{string|object} [options.schema] If provided will automatically create a table if it doesn't already exist. Note that this can take longer than 2 minutes to complete. A comma-separated list of name:type pairs. Valid types are "string", "integer", "float", "boolean", and "timestamp". If the type is omitted, it is assumed to be "string". Example: "name:string, age:integer". Schemas can also be specified as a JSON array of fields, which allows for nested and repeated fields. See a Table resource for more detailed information.
Parameter
{boolean} [options.skipInvalidRows=false] Insert all valid rows of a request, even if invalid rows exist.
Parameter
{string} [options.templateSuffix] Treat the destination table as a base template, and insert the rows into an instance table named "{destination}{templateSuffix}". BigQuery will manage creation of the instance table, using the schema of the base template table. See Automatic table creation using template tables for considerations when working with template tables.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request.
Parameter
{object[]} callback.err.errors If present, these represent partial failures. It's possible for part of your request to be completed successfully, while the other part was not.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const table = dataset.table('my-table');

//-
// Insert a single row.
//-
table.insert({
  INSTNM: 'Motion Picture Institute of Michigan',
  CITY: 'Troy',
  STABBR: 'MI'
}, insertHandler);

//-
// Insert multiple rows at a time.
//-
const rows = [
  {
    INSTNM: 'Motion Picture Institute of Michigan',
    CITY: 'Troy',
    STABBR: 'MI'
  },
  // ...
];

table.insert(rows, insertHandler);

//-
// Insert a row according to the <a href="https://cloud.google.com/bigquery/docs/reference/v2/tabledata/insertAll">specification</a>.
//-
const row = {
  insertId: '1',
  json: {
    INSTNM: 'Motion Picture Institute of Michigan',
    CITY: 'Troy',
    STABBR: 'MI'
  }
};

const options = {
  raw: true
};

table.insert(row, options, insertHandler);

//-
// Handling the response. See <a href="https://developers.google.com/bigquery/troubleshooting-errors">Troubleshooting Errors</a> for best practices on how to handle errors.
//-
function insertHandler(err, apiResponse) {
  if (err) {
    // An API error or partial failure occurred.
    if (err.name === 'PartialFailureError') {
      // Some rows failed to insert, while others may have succeeded.
      // err.errors (object[]):
      // err.errors[].row (original row object passed to `insert`)
      // err.errors[].errors[].reason
      // err.errors[].errors[].message
    }
  }
}

//-
// If the callback is omitted, we'll return a Promise.
//-
table.insert(rows)
  .then((data) => {
    const apiResponse = data[0];
  })
  .catch((err) => {
    // An API error or partial failure occurred.
    if (err.name === 'PartialFailureError') {
      // Some rows failed to insert, while others may have succeeded.
      // err.errors (object[]):
      // err.errors[].row (original row object passed to `insert`)
      // err.errors[].errors[].reason
      // err.errors[].errors[].message
    }
  });
method load
load: { ( source: string | File | File[], metadata?: JobLoadMetadata ): Promise<JobMetadataResponse>; ( source: string | File | File[], metadata: JobLoadMetadata, callback: JobMetadataCallback ): void; (source: string | File | File[], callback: JobMetadataCallback): void;};
Load data from a local file or from Cloud Storage.
By loading data this way, you create a load job that will run your data load asynchronously. If you would like instantaneous access to your data, insert it using Table#insert.
Note: The file type will be inferred by the given file's extension. If you wish to override this, you must provide `metadata.format`.
Parameter source
The source file to load. A filepath as a string or a File object.
Parameter metadata
Metadata to set with the load operation. The metadata object should be in the format of the `configuration.load` property of a Jobs resource.
Parameter
{string} [metadata.format] The format the data being loaded is in. Allowed options are "AVRO", "CSV", "JSON", "ORC", or "PARQUET".
Parameter
{string} [metadata.jobId] Custom id for the underlying job.
Parameter
{string} [metadata.jobPrefix] Prefix to apply to the underlying job id.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise}
Throws
{Error} If the source isn't a string file name or a File instance.
Example 1
const {Storage} = require('@google-cloud/storage');
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const table = dataset.table('my-table');

//-
// Load data from a local file.
//-
table.load('./institutions.csv', (err, apiResponse) => {});

//-
// You may also pass in metadata in the format of a Jobs resource. See
// (https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad)
// for a full list of supported values.
//-
const metadata = {
  encoding: 'ISO-8859-1',
  sourceFormat: 'NEWLINE_DELIMITED_JSON'
};

table.load('./my-data.csv', metadata, (err, apiResponse) => {});

//-
// Load data from a file in your Cloud Storage bucket.
//-
const storage = new Storage({projectId: 'grape-spaceship-123'});
const data = storage.bucket('institutions').file('data.csv');
table.load(data, (err, apiResponse) => {});

//-
// Load data from multiple files in your Cloud Storage bucket(s).
//-
table.load([
  storage.bucket('institutions').file('2011.csv'),
  storage.bucket('institutions').file('2012.csv')
], (err, apiResponse) => {});

//-
// If the callback is omitted, we'll return a Promise.
//-
table.load(data).then((data) => {
  const apiResponse = data[0];
});
method query
query: { (query: Query): Promise<SimpleQueryRowsResponse>; (query: string): Promise<SimpleQueryRowsResponse>; (query: Query, callback: SimpleQueryRowsCallback): void;};
Run a query scoped to your dataset.
See BigQuery#query for full documentation of this method.
Parameter query
See BigQuery#query for full documentation of this method.
Parameter callback
See BigQuery#query for full documentation of this method.
Returns
{Promise}
method setIamPolicy
setIamPolicy: { (policy: Policy, options?: SetPolicyOptions): Promise<PolicyResponse>; ( policy: bigquery.IPolicy, options: SetPolicyOptions, callback: PolicyCallback ): void; (policy: bigquery.IPolicy, callback: PolicyCallback): void;};
Set the IAM access control policy for the table.
Returns
{Promise}
method setMetadata
setMetadata: { (metadata: SetTableMetadataOptions): Promise<SetMetadataResponse>; (metadata: TableMetadata, callback: ResponseCallback): void;};
Set the metadata on the table.
Parameter metadata
The metadata key/value object to set.
Parameter
{string} metadata.description A user-friendly description of the table.
Parameter
{string} metadata.name A descriptive name for the table.
Parameter
{string|object} metadata.schema A comma-separated list of name:type pairs. Valid types are "string", "integer", "float", "boolean", "bytes", "record", and "timestamp". If the type is omitted, it is assumed to be "string". Example: "name:string, age:integer". Schemas can also be specified as a JSON array of fields, which allows for nested and repeated fields. See a Table resource for more detailed information.
Parameter callback
The callback function.
Parameter
{?error} callback.err An error returned while making this request.
Parameter
{object} callback.apiResponse The full API response.
Returns
{Promise<common.SetMetadataResponse>}
Example 1
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();
const dataset = bigquery.dataset('my-dataset');
const table = dataset.table('my-table');

const metadata = {
  name: 'My recipes',
  description: 'A table for storing my recipes.',
  schema: 'name:string, servings:integer, cookingTime:float, quick:boolean'
};

table.setMetadata(metadata, (err, metadata, apiResponse) => {});

//-
// If the callback is omitted, we'll return a Promise.
//-
table.setMetadata(metadata).then((data) => {
  const metadata = data[0];
  const apiResponse = data[1];
});
method testIamPermissions
testIamPermissions: { (permissions: string | string[]): Promise<PermissionsResponse>; (permissions: string | string[], callback: PermissionsCallback): void;};
Test the IAM permissions for the table.
Returns
{Promise}
Interfaces
interface BigQueryDateOptions
interface BigQueryDateOptions {}
interface BigQueryDatetimeOptions
interface BigQueryDatetimeOptions {}
interface BigQueryOptions
interface BigQueryOptions extends GoogleAuthOptions {}
property apiEndpoint
apiEndpoint?: string;
The API endpoint of the service used to make requests. Defaults to `bigquery.googleapis.com`.
property autoRetry
autoRetry?: boolean;
Automatically retry requests if the response is related to rate limits or certain intermittent server errors. We will exponentially backoff subsequent requests by default.
Defaults to `true`.
property location
location?: string;
The geographic location of all datasets and jobs referenced and created through the client.
property maxRetries
maxRetries?: number;
Maximum number of automatic retries attempted before returning the error.
Defaults to 3.
property userAgent
userAgent?: string;
The value to be prepended to the User-Agent header in API requests.
interface BigQueryTimeOptions
interface BigQueryTimeOptions {}
property fractional
fractional?: number | string;
property hours
hours?: number | string;
property minutes
minutes?: number | string;
property seconds
seconds?: number | string;
interface DatasetDeleteOptions
interface DatasetDeleteOptions {}
property force
force?: boolean;
interface DatasetOptions
interface DatasetOptions {}
interface File
interface File {}
property bucket
bucket: any;
property generation
generation?: number;
property kmsKeyName
kmsKeyName?: string;
property name
name: string;
property userProject
userProject?: string;
interface InsertRow
interface InsertRow {}
interface InsertStreamOptions
interface InsertStreamOptions {}
property batchOptions
batchOptions?: RowBatchOptions;
property insertRowsOptions
insertRowsOptions?: InsertRowsOptions;
interface IntegerTypeCastOptions
interface IntegerTypeCastOptions {}
property fields
fields?: string | string[];
property integerTypeCastFunction
integerTypeCastFunction: Function;
interface Json
interface Json {}
index signature
[field: string]: string;
interface PagedCallback
interface PagedCallback<T, Q, R> {}
call signature
( err: Error | null, resource?: T[] | null, nextQuery?: Q | null, response?: R | null): void;
interface PartialInsertFailure
interface PartialInsertFailure {}
interface ProvidedTypeStruct
interface ProvidedTypeStruct {}
index signature
[key: string]: string | ProvidedTypeArray | ProvidedTypeStruct;
interface RequestCallback
interface RequestCallback<T> {}
call signature
(err: Error | null, response?: T | null): void;
interface ResourceCallback
interface ResourceCallback<T, R> {}
call signature
(err: Error | null, resource?: T | null, response?: R | null): void;
interface TableOptions
interface TableOptions {}
property location
location?: string;
Type Aliases
type CancelCallback
type CancelCallback = RequestCallback<bigquery.IJobCancelResponse>;
type CancelResponse
type CancelResponse = [bigquery.IJobCancelResponse];
type CopyTableMetadata
type CopyTableMetadata = JobRequest<bigquery.IJobConfigurationTableCopy>;
type CreateCopyJobMetadata
type CreateCopyJobMetadata = CopyTableMetadata;
type CreateDatasetOptions
type CreateDatasetOptions = bigquery.IDataset;
type CreateExtractJobOptions
type CreateExtractJobOptions = JobRequest<bigquery.IJobConfigurationExtract> & { format?: 'ML_TF_SAVED_MODEL' | 'ML_XGBOOST_BOOSTER';};
type DatasetCallback
type DatasetCallback = ResourceCallback<Dataset, bigquery.IDataset>;
type DatasetResource
type DatasetResource = bigquery.IDataset;
type DatasetResponse
type DatasetResponse = [Dataset, bigquery.IDataset];
type DatasetsCallback
type DatasetsCallback = PagedCallback< Dataset, GetDatasetsOptions, bigquery.IDatasetList>;
type DatasetsResponse
type DatasetsResponse = PagedResponse< Dataset, GetDatasetsOptions, bigquery.IDatasetList>;
type FormattedMetadata
type FormattedMetadata = bigquery.ITable;
type GetDatasetsOptions
type GetDatasetsOptions = PagedRequest<bigquery.datasets.IListParams>;
type GetJobsCallback
type GetJobsCallback = PagedCallback<Job, GetJobsOptions, bigquery.IJobList>;
type GetJobsOptions
type GetJobsOptions = PagedRequest<bigquery.jobs.IListParams>;
type GetJobsResponse
type GetJobsResponse = PagedResponse<Job, GetJobsOptions, bigquery.IJobList>;
type GetModelsCallback
type GetModelsCallback = PagedCallback< Model, GetModelsOptions, bigquery.IListModelsResponse>;
type GetModelsOptions
type GetModelsOptions = PagedRequest<bigquery.models.IListParams>;
type GetModelsResponse
type GetModelsResponse = PagedResponse< Model, GetModelsOptions, bigquery.IListModelsResponse>;
type GetPolicyOptions
type GetPolicyOptions = bigquery.IGetPolicyOptions;
type GetRoutinesCallback
type GetRoutinesCallback = PagedCallback< Routine, GetRoutinesOptions, bigquery.IListRoutinesResponse>;
type GetRoutinesOptions
type GetRoutinesOptions = PagedRequest<bigquery.routines.IListParams>;
type GetRoutinesResponse
type GetRoutinesResponse = PagedResponse< Routine, GetRoutinesOptions, bigquery.IListRoutinesResponse>;
type GetRowsOptions
type GetRowsOptions = PagedRequest<bigquery.tabledata.IListParams> & { wrapIntegers?: boolean | IntegerTypeCastOptions;};
type GetTablesCallback
type GetTablesCallback = PagedCallback<Table, GetTablesOptions, bigquery.ITableList>;
type GetTablesOptions
type GetTablesOptions = PagedRequest<bigquery.tables.IListParams>;
type GetTablesResponse
type GetTablesResponse = PagedResponse<Table, GetTablesOptions, bigquery.ITableList>;
type InsertRowsCallback
type InsertRowsCallback = RequestCallback< bigquery.ITableDataInsertAllResponse | bigquery.ITable>;
type InsertRowsOptions
type InsertRowsOptions = bigquery.ITableDataInsertAllRequest & { createInsertId?: boolean; partialRetries?: number; raw?: boolean; schema?: string | {};};
type InsertRowsResponse
type InsertRowsResponse = [bigquery.ITableDataInsertAllResponse | bigquery.ITable];
type InsertRowsStreamResponse
type InsertRowsStreamResponse = bigquery.ITableDataInsertAllResponse;
type IntegerTypeCastValue
type IntegerTypeCastValue = { integerValue: string | number; schemaFieldName?: string;};
type JobCallback
type JobCallback = ResourceCallback<Job, bigquery.IJob>;
type JobLoadMetadata
type JobLoadMetadata = JobRequest<bigquery.IJobConfigurationLoad> & { format?: string;};
type JobMetadata
type JobMetadata = bigquery.IJob;
type JobMetadataCallback
type JobMetadataCallback = RequestCallback<JobMetadata>;
type JobMetadataResponse
type JobMetadataResponse = [JobMetadata];
type JobOptions
type JobOptions = JobRequest<JobMetadata>;
type JobRequest
type JobRequest<J> = J & { jobId?: string; jobPrefix?: string; location?: string; projectId?: string;};
type JobResponse
type JobResponse = [Job, bigquery.IJob];
type PagedRequest
type PagedRequest<P> = P & { autoPaginate?: boolean; maxApiCalls?: number;};
type PagedResponse
type PagedResponse<T, Q, R> = [T[]] | [T[], Q | null, R];
type PermissionsCallback
type PermissionsCallback = RequestCallback<PermissionsResponse>;
type PermissionsResponse
type PermissionsResponse = [bigquery.ITestIamPermissionsResponse];
type Policy
type Policy = bigquery.IPolicy;
type PolicyCallback
type PolicyCallback = RequestCallback<PolicyResponse>;
type PolicyRequest
type PolicyRequest = bigquery.IGetIamPolicyRequest;
type PolicyResponse
type PolicyResponse = [Policy];
type ProvidedTypeArray
type ProvidedTypeArray = Array<ProvidedTypeStruct | string | []>;
type Query
type Query = JobRequest<bigquery.IJobConfigurationQuery> & { destination?: Table; params?: | any[] | { [param: string]: any; }; dryRun?: boolean; labels?: { [label: string]: string; }; types?: QueryParamTypes; job?: Job; maxResults?: number; jobTimeoutMs?: number; pageToken?: string; wrapIntegers?: boolean | IntegerTypeCastOptions;};
type QueryOptions
type QueryOptions = QueryResultsOptions;
type QueryParameter
type QueryParameter = bigquery.IQueryParameter;
type QueryResultsOptions
type QueryResultsOptions = { job?: Job; wrapIntegers?: boolean | IntegerTypeCastOptions;} & PagedRequest<bigquery.jobs.IGetQueryResultsParams>;
type QueryRowsCallback
type QueryRowsCallback = PagedCallback< RowMetadata, Query, bigquery.IGetQueryResultsResponse>;
type QueryRowsResponse
type QueryRowsResponse = PagedResponse< RowMetadata, Query, bigquery.IGetQueryResultsResponse>;
type QueryStreamOptions
type QueryStreamOptions = { wrapIntegers?: boolean | IntegerTypeCastOptions;};
type RoutineCallback
type RoutineCallback = ResourceCallback<Routine, bigquery.IRoutine>;
type RoutineMetadata
type RoutineMetadata = bigquery.IRoutine;
type RoutineResponse
type RoutineResponse = [Routine, bigquery.IRoutine];
type RowMetadata
type RowMetadata = any;
type RowsCallback
type RowsCallback = PagedCallback< RowMetadata, GetRowsOptions, bigquery.ITableDataList | bigquery.ITable>;
type RowsResponse
type RowsResponse = PagedResponse< RowMetadata, GetRowsOptions, bigquery.ITableDataList | bigquery.ITable>;
type SetPolicyOptions
type SetPolicyOptions = Omit<bigquery.ISetIamPolicyRequest, 'policy'>;
type SetTableMetadataOptions
type SetTableMetadataOptions = TableMetadata;
type SimpleQueryRowsCallback
type SimpleQueryRowsCallback = ResourceCallback<RowMetadata[], bigquery.IJob>;
type SimpleQueryRowsResponse
type SimpleQueryRowsResponse = [RowMetadata[], bigquery.IJob];
type TableCallback
type TableCallback = ResourceCallback<Table, bigquery.ITable>;
type TableField
type TableField = bigquery.ITableFieldSchema;
type TableMetadata
type TableMetadata = bigquery.ITable & { name?: string; schema?: string | TableField[] | TableSchema; partitioning?: string; view?: string | ViewDefinition;};
type TableResponse
type TableResponse = [Table, bigquery.ITable];
type TableRow
type TableRow = bigquery.ITableRow;
type TableRowField
type TableRowField = bigquery.ITableCell;
type TableRowValue
type TableRowValue = string | TableRow;
type TableSchema
type TableSchema = bigquery.ITableSchema;
type ValueType
type ValueType = bigquery.IQueryParameterType;
type ViewDefinition
type ViewDefinition = bigquery.IViewDefinition;
Package Files (9)
Dependencies (13)
Dev Dependencies (29)
Peer Dependencies (0)
No peer dependencies.