K Knowledge Base

ByteHouse (via Collector method) - v3.0.0

About Collectors


Pre-requisites

Collector Server Minimum Requirements

ByteHouse Requirements

  • Access to the following tables

    1. system.databases

    2. system.tables

    3. system.columns
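As a quick pre-flight, access to these tables can be probed with one lightweight query each. A minimal sketch (the `build_access_checks` helper is illustrative, not part of the collector; run each query with your ByteHouse client of choice):

```python
# Illustrative helper: build one lightweight probe query per required
# system table so access can be confirmed before running the collector.
REQUIRED_TABLES = ["system.databases", "system.tables", "system.columns"]

def build_access_checks(tables):
    """Return one minimal SELECT per required table."""
    return ["SELECT 1 FROM {} LIMIT 1".format(t) for t in tables]

for query in build_access_checks(REQUIRED_TABLES):
    print(query)
```

If any of these queries fails with a permissions error, grant read access to that table before proceeding.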


Step 1: Enabling logging

TBC


Step 2: Create the Source in K

Create a ByteHouse source in K

  • Go to Settings, select Sources and click Add Source

  • Select the "Load from File system" option

  • Give the source a Name - e.g. ByteHouse Production

  • Add the Host name for the ByteHouse server

  • Click Finish Setup


Step 3: Getting Access to the Source Landing Directory


Step 4: Install the Collector

You can download the latest Core Library and whl via Platform Settings → Sources → Download Collectors

Run the following command to install the collector:

pip install kada_collectors_extractors_<version>-none-any.whl

You will also need to install the common library kada_collectors_lib for this collector to function properly:

pip install kada_collectors_lib-<version>-none-any.whl

Step 5: Configure the Collector

The ByteHouse collector only extracts metadata and does not extract or process query usage on the database.

| FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
|---|---|---|---|
| api_key | string | The API key for ByteHouse; you can generate one via the Console | "xasdaxcv" |
| server | string | ByteHouse gateway; these are regionally specific - see https://docs.byteplus.com/en/docs/bytehouse/docs-supported-regions-and-providers | "gateway.aws-ap-southeast-1.bytehouse.cloud" |
| port | integer | The port to connect to the ByteHouse instance; generally this is 19000 | 19000 |
| host | string | The onboarded host in K for the ByteHouse source | "gateway.aws-ap-southeast-1.bytehouse.cloud" |
| tenant_account_id | string | This value can be found in the ByteHouse console under the Tenant Management tab, Basic Information | "123456778" |
| meta_only | boolean | Currently only meta_only set to true is supported | true |
| output_path | string | Absolute path to the output location | "/tmp/output" |
| mask | boolean | Whether to enable masking | true |
| compress | boolean | Whether to compress the output files to .csv.gz | true |
| timeout | integer | Timeout setting in seconds | 80000 |

kada_bytehouse_extractor_config.json

{
    "api_key": "",
    "server": "",
    "port": 19000,
    "tenant_account_id": "",
    "host": "",
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true,
    "meta_only": true,
    "timeout": 80000
}
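Before running the collector it can help to sanity-check the config file. A minimal sketch (the `validate_config` helper and `REQUIRED_FIELDS` map are illustrative, not part of kada_collectors):

```python
import json

# Illustrative pre-flight check: confirm the config has every required
# field with the expected type before handing it to the collector.
REQUIRED_FIELDS = {
    "api_key": str,
    "server": str,
    "port": int,
    "tenant_account_id": str,
    "host": str,
    "output_path": str,
    "mask": bool,
    "compress": bool,
    "meta_only": bool,
    "timeout": int,
}

def validate_config(config):
    """Return a list of problems; an empty list means the config looks usable."""
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in config:
            problems.append("missing field: {}".format(field))
        elif not isinstance(config[field], expected):
            problems.append("{} should be {}".format(field, expected.__name__))
    return problems

config = json.loads("""{
    "api_key": "xasdaxcv",
    "server": "gateway.aws-ap-southeast-1.bytehouse.cloud",
    "port": 19000,
    "tenant_account_id": "123456778",
    "host": "gateway.aws-ap-southeast-1.bytehouse.cloud",
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true,
    "meta_only": true,
    "timeout": 80000
}""")
print(validate_config(config))  # an empty list means all fields check out
```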

Step 6: Run the Collector

This is the wrapper script: kada_bytehouse_extractor.py

import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.bytehouse import Extractor

get_generic_logger('root') # Set up logging

_type = 'bytehouse'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

# Allow the config path and collector name to be overridden on the command line
parser = argparse.ArgumentParser(description='KADA Bytehouse Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename)
parser.add_argument('--name', '-n', dest='name', default=_type)
args = parser.parse_args()

# Fetch the high water mark window for this collector instance
start_hwm, end_hwm = get_hwm(args.name)

ext = Extractor(**load_config(args.config))
ext.test_connection() # Validate connectivity before extracting
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

# Record the new high water mark after a successful run
publish_hwm(args.name, end_hwm)

Step 7: Check the Collector Outputs

K Extracts

A set of files (e.g. metadata, databaselog, linkages, events) will be generated in the output_path directory.
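When compress is set to true, the extracts are written as .csv.gz files. A quick illustration of reading such a file back to validate its contents (the file name and rows here are hypothetical, not actual collector output):

```python
import csv
import gzip
import os
import tempfile

# Illustrative only: write and read back a .csv.gz the same way a
# compressed extract can be inspected. The file name is hypothetical.
out_dir = tempfile.mkdtemp()
path = os.path.join(out_dir, "metadata_sample.csv.gz")

with gzip.open(path, "wt", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["database", "table", "column"])
    writer.writerow(["demo_db", "orders", "order_id"])

with gzip.open(path, "rt", newline="") as f:
    rows = list(csv.reader(f))

print(rows)
```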

High Water Mark File

A high water mark file called bytehouse_hwm.txt is created.
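Conceptually, the high water mark file stores the end timestamp of the last successful run so the next run resumes from it. An illustrative sketch of that idea (this is not the kada_collectors implementation; the helper names and default value are assumptions):

```python
import os
import tempfile

# Illustrative sketch of a high water mark file: the collector records
# the end of each successful run so the next run resumes from there.
hwm_path = os.path.join(tempfile.mkdtemp(), "bytehouse_hwm.txt")

def read_hwm(path, default="1970-01-01 00:00:00"):
    """Return the stored high water mark, or a default on the first run."""
    if not os.path.exists(path):
        return default
    with open(path) as f:
        return f.read().strip()

def write_hwm(path, value):
    """Persist the new high water mark after a successful run."""
    with open(path, "w") as f:
        f.write(value)

print(read_hwm(hwm_path))   # falls back to the default on the first run
write_hwm(hwm_path, "2024-01-31 12:00:00")
print(read_hwm(hwm_path))   # subsequent runs resume from the stored value
```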


Step 8: Push the Extracts to K

Once the files have been validated, you can push the files to the K landing directory.
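The push itself can be as simple as copying the validated files from output_path into the landing directory. A minimal sketch (the paths and the `push_extracts` helper are hypothetical; use whatever transfer mechanism your landing directory requires):

```python
import os
import shutil
import tempfile

# Illustrative push step (paths are hypothetical): copy validated extract
# files from the collector's output_path into the K landing directory.
output_path = tempfile.mkdtemp()
landing_dir = tempfile.mkdtemp()

# Stand-in for a generated extract file
with open(os.path.join(output_path, "metadata.csv"), "w") as f:
    f.write("database,table\n")

def push_extracts(src, dst):
    """Copy every file in src into dst and return the copied names."""
    copied = []
    for name in sorted(os.listdir(src)):
        shutil.copy2(os.path.join(src, name), os.path.join(dst, name))
        copied.append(name)
    return copied

print(push_extracts(output_path, landing_dir))
```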


Example: Using Airflow to orchestrate the Extract and Push to K