
Google BigQuery


Luden supports Google BigQuery as a destination. For more information about BigQuery, see the docs. The BigQuery destination can work in stream and batch modes. In stream mode, Luden uses the BigQuery Streaming API. In batch mode, Luden writes incoming events into formatted files on Google Cloud Storage and creates a loading job to load data from the GCS files into BigQuery.
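The mode is typically selected per destination. As a sketch (assuming a `mode` field alongside the destination config, as in similar destination setups; the field name is an assumption, not confirmed by this page):

```yaml
destinations:
  my_bigquery_stream:
    type: bigquery
    mode: stream   # events are sent via the BigQuery Streaming API as they arrive
  my_bigquery_batch:
    type: bigquery
    mode: batch    # events are buffered to GCS files, then loaded via a loading job
```

Batch mode trades latency for cost: loading jobs from GCS are free in BigQuery, while streaming inserts are billed per row.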

Configuration

The BigQuery destination config has the following schema:

destinations:
  my_bigquery:
    type: bigquery
    google:
      gcs_bucket: google_cloud_storage_bucket
      bq_project: big_query_project
      bq_dataset: big_query_dataset
      key_file: path_to_bqkey.json # or an inline JSON key string, e.g. '{"service_account":...}'
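Since `key_file` accepts either a path to a key file or the key's JSON content inline, a quick sanity check of the value can catch misconfiguration before the server starts. This is a hypothetical helper, not part of Luden; the required field names follow the standard GCP service-account key format:

```python
import json
import os

def looks_like_service_account_key(value: str) -> bool:
    """Return True if `value` is a path to, or the inline JSON of, a
    GCP service-account key with the fields BigQuery auth needs."""
    if os.path.isfile(value):
        with open(value, encoding="utf-8") as f:
            value = f.read()
    try:
        key = json.loads(value)
    except json.JSONDecodeError:
        return False
    required = {"type", "project_id", "private_key", "client_email"}
    return key.get("type") == "service_account" and required.issubset(key)

inline_key = json.dumps({
    "type": "service_account",
    "project_id": "big_query_project",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...",
    "client_email": "luden@big_query_project.iam.gserviceaccount.com",
})
```

`looks_like_service_account_key("path_to_bqkey.json")` and `looks_like_service_account_key(inline_key)` both take the same code path once the file is read, mirroring how the `key_file` option treats the two forms interchangeably.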
