Snowflake (via Collector method) - v3.4.0

About Collectors

Collectors are extractors that are developed and managed by you (a customer of K).

KADA provides python libraries that customers can use to quickly deploy a Collector.

Why you should use a Collector

There are several reasons why you may use a collector instead of the direct connect extractor:

  1. You are using the KADA SaaS offering and it cannot connect to your sources due to firewall restrictions

  2. You want to push metadata to KADA rather than allow it to pull data for security reasons

  3. You want to inspect the metadata before pushing it to K

Using a collector requires you to manage the following (an illustrative outline is sketched after this list):

  1. Deploying and orchestrating the extract code

  2. Managing a high water mark so the extract only pulls the latest metadata

  3. Storing and pushing the extracts to your K instance
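
The outline below is purely illustrative and not part of the KADA libraries; the extract and push steps are stubs, and the high water mark file name is hypothetical. It simply sketches how the three responsibilities above fit together; Steps 4 to 7 cover the real implementation.

Python
# Purely illustrative outline of a collector run: read the previous high water
# mark, extract only newer metadata, push the extracts to K, then save the mark.
from datetime import datetime, timezone

HWM_FILE = 'last_run_hwm.txt'   # hypothetical location for the stored mark

def read_hwm() -> str:
    try:
        with open(HWM_FILE) as f:
            return f.read().strip()
    except FileNotFoundError:
        return '1970-01-01 00:00:00'   # first run: extract everything

def extract(start_hwm: str, end_hwm: str) -> None:
    print(f'1. run the extract code for {start_hwm} -> {end_hwm} (Step 5)')

def push_to_k() -> None:
    print('3. push the extract files to the K landing directory (Step 7)')

if __name__ == '__main__':
    start_hwm = read_hwm()
    end_hwm = datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')
    extract(start_hwm, end_hwm)
    push_to_k()
    with open(HWM_FILE, 'w') as f:   # 2. save the mark so the next run is incremental
        f.write(end_hwm)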


Pre-requisites

Collector Server Minimum Requirements

For the collector to operate effectively, it will need to be deployed on a server that meets the following minimum specifications:

  • CPU: 2 vCPU

  • Memory: 8GB

  • Storage: 30GB (depends on historical data extracted)

  • OS: a Unix distro (e.g. RHEL) is preferred, but Windows Server is also supported

  • Python 3.10.x or later

  • Access to K landing directory

Snowflake Requirements

  • Access to Snowflake (see section below)

Snowflake Access

Create a Snowflake user with read access to the following views in the Snowflake database.

  • account_usage.query_history

  • account_usage.views

  • account_usage.tables

  • account_usage.columns

  • account_usage.copy_history

  • account_usage.grants_to_roles

  • account_usage.grants_to_users

  • account_usage.schemata

  • account_usage.databases

  • account_usage.policy_references

  • account_usage.access_history (If you have Enterprise Edition)

The user must also have the ability to run:

  • SHOW STREAMS IN ACCOUNT

  • SHOW PRIMARY KEYS IN ACCOUNT

To create a user with general access to the metadata available in the Snowflake Account Usage schema, run the following:

--Log in with a user that has the permissions to create a role/user

--Create a new role for the Catalog user
Create role CATALOG_READ_ONLY;

--Grant the role access to the Account usage schema
grant imported privileges on database SNOWFLAKE to role CATALOG_READ_ONLY;
grant select on all tables in schema SNOWFLAKE.ACCOUNT_USAGE to role CATALOG_READ_ONLY;
grant monitor on account to role CATALOG_READ_ONLY;

--Create a new user for K and grant it the role (remove the [])
create user [kada_user] password=['abc123!@#'] default_role = CATALOG_READ_ONLY default_warehouse = [warehouse];
grant role CATALOG_READ_ONLY to user [kada_user];

To create a user with access restricted to specific metadata in Snowflake Account Usage, you will need to create a new Snowflake database with views that select from the shared SNOWFLAKE database. This is a known Snowflake limitation.

-- create a new database
create database CATALOG_METADATA;

-- create a new schema
create schema CATALOG_METADATA.ACCOUNT_USAGE;

-- account_usage.access_history
create view CATALOG_METADATA.ACCOUNT_USAGE.ACCESS_HISTORY
    as select * from SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY;

-- account_usage.views
create view CATALOG_METADATA.ACCOUNT_USAGE.VIEWS
    as select * from SNOWFLAKE.ACCOUNT_USAGE.VIEWS;

-- account_usage.tables
create view CATALOG_METADATA.ACCOUNT_USAGE.TABLES
    as select * from SNOWFLAKE.ACCOUNT_USAGE.TABLES;

-- account_usage.columns
create view CATALOG_METADATA.ACCOUNT_USAGE.COLUMNS
    as select * from SNOWFLAKE.ACCOUNT_USAGE.COLUMNS;

-- account_usage.copy_history
create view CATALOG_METADATA.ACCOUNT_USAGE.COPY_HISTORY
    as select * from SNOWFLAKE.ACCOUNT_USAGE.COPY_HISTORY;

-- account_usage.grants_to_roles
create view CATALOG_METADATA.ACCOUNT_USAGE.GRANTS_TO_ROLES
    as select * from SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_ROLES;

-- account_usage.grants_to_users
create view CATALOG_METADATA.ACCOUNT_USAGE.GRANTS_TO_USERS
    as select * from SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_USERS;

-- account_usage.schemata
create view CATALOG_METADATA.ACCOUNT_USAGE.SCHEMATA
    as select * from SNOWFLAKE.ACCOUNT_USAGE.SCHEMATA;

-- account_usage.databases
create view CATALOG_METADATA.ACCOUNT_USAGE.DATABASES
    as select * from SNOWFLAKE.ACCOUNT_USAGE.DATABASES;

-- account_usage.policy_references
create view CATALOG_METADATA.ACCOUNT_USAGE.POLICY_REFERENCES
    as select * from SNOWFLAKE.ACCOUNT_USAGE.POLICY_REFERENCES;

-- create a new role
create role CATALOG_READ_ONLY;

-- grant access
grant usage on warehouse [MY_WAREHOUSE] to role CATALOG_READ_ONLY;
grant usage, monitor on database CATALOG_METADATA to role CATALOG_READ_ONLY;
grant usage, monitor on schema CATALOG_METADATA.ACCOUNT_USAGE to role CATALOG_READ_ONLY;
grant select on all views in schema CATALOG_METADATA.ACCOUNT_USAGE to role CATALOG_READ_ONLY;
grant select on future views in schema CATALOG_METADATA.ACCOUNT_USAGE to role CATALOG_READ_ONLY;

-- create a new KADA user and grant it the role (remove the [])
create user [kada_user] password=['<add password>'] default_role = CATALOG_READ_ONLY default_warehouse = [warehouse];
grant role CATALOG_READ_ONLY to user [kada_user];
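
Before continuing, you may want to confirm that the new user can see the metadata it needs. The snippet below is a minimal verification sketch (it is not part of the collector) and assumes the snowflake-connector-python package is installed; the account, password and warehouse values are placeholders for the details created above.

Python
# Minimal connectivity check for the new KADA user (placeholders in angle brackets).
import snowflake.connector

conn = snowflake.connector.connect(
    account='<account>',              # e.g. abc123.australia-east.azure
    user='kada_user',
    password='<add password>',
    role='CATALOG_READ_ONLY',
    warehouse='<warehouse>',
)

try:
    cur = conn.cursor()
    # Confirm the role can read the Account Usage views it needs
    # (use SNOWFLAKE.ACCOUNT_USAGE.DATABASES instead if you used the general-access setup)
    cur.execute('select count(*) from CATALOG_METADATA.ACCOUNT_USAGE.DATABASES')
    print('account_usage.databases rows:', cur.fetchone()[0])
    # Confirm the SHOW commands used by the collector are permitted
    cur.execute('SHOW STREAMS IN ACCOUNT')
    cur.execute('SHOW PRIMARY KEYS IN ACCOUNT')
    print('SHOW commands succeeded')
finally:
    conn.close()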

From the above, record the following details to be used during the setup:

  1. User name / Password

  2. Role

  3. Warehouse

  4. (If creating a new database for metadata) Database name

  5. Snowflake account (found in the URL of your Snowflake instance - between https:// and .snowflakecomputing.com/…)


Step 1: Create the Source in K

Create a Snowflake source in K

  • Go to Settings, Select Sources and click Add Source

  • Select "Load from File" option

  • Give the source a Name - e.g. Snowflake Production

  • Add the Host name for the Snowflake Server

  • Click Finish Setup


Step 2: Getting Access to the Source Landing Directory

When using a Collector you will push metadata to a K landing directory.

To find your landing directory you will need to:

  1. Go to Platform Settings - Settings. Note down the value of this setting:

    • If using Azure: storage_azure_storage_account

    • If using AWS:

      • storage_root_folder - the AWS s3 bucket

      • storage_aws_region - the region where the AWS s3 bucket is hosted

  2. Go to Sources - Edit the Source you have configured. Note down the landing directory in the About this Source section.

To connect to the landing directory you will need the following (a minimal Azure upload sketch follows this list):

  • If using Azure: a SAS token to push data to the landing directory. Request this from KADA Support (support@kada.ai)

  • If using AWS:

    • An Access key and Secret. Request this from KADA Support (support@kada.ai)

    • OR provide your IAM role to KADA Support to provision access.
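
If your K instance is hosted on Azure, the push described in Step 7 can be done with the azure-storage-blob package and the SAS token provided by KADA Support. The snippet below is a minimal sketch only; the storage account, container, landing path and file name are placeholders for the values noted above.

Python
# Minimal sketch: upload one extract file to the K landing directory using a SAS token.
from azure.storage.blob import BlobClient

sas_token = '<SAS token from KADA Support>'
storage_account = '<storage_azure_storage_account value>'
container = '<container>'
landing_path = '<landing directory from the About this Source section>'
local_file = '/tmp/output/metadata.csv'

blob = BlobClient(
    account_url=f'https://{storage_account}.blob.core.windows.net',
    container_name=container,
    blob_name=f'{landing_path}/metadata.csv',
    credential=sas_token,
)

with open(local_file, 'rb') as data:
    blob.upload_blob(data, overwrite=True)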


Step 3: Install the Collector

You can download the latest Core Library and Snowflake whl via Platform Settings → Sources → Download Collectors

pip install kada_collectors_extractors_<version>-none-any.whl
pip install kada_collectors_lib-<version>-none-any.whl

Depending on your OS, the following packages may also need to be installed:

| OS     | Packages                   |
|--------|----------------------------|
| CentOS | libffi-devel openssl-devel |
| Ubuntu | libssl-dev libffi-dev      |


Step 4: Configure the Collector

| FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
|---|---|---|---|
| account | string | Snowflake account | "abc123.australia-east.azure" |
| username | string | Username to log into the Snowflake account | |
| password | string | Password to log into the Snowflake account | |
| information_database | string | Database where all the required tables are located | "snowflake" |
| role | string | The role to access the required account_usage tables | "accountadmin" |
| warehouse | string | The warehouse to execute the queries against | "xs_analytics" |
| databases | list<string> | A list of databases to extract from Snowflake | ["dwh", "adw"] |
| login_timeout | integer | The maximum amount of time in seconds to wait for the connection | 5 |
| output_path | string | Absolute path to the output location | "/tmp/output" |
| mask | boolean | To enable masking or not | true |
| compress | boolean | To gzip the output or not | true |
| use_private_key | boolean | To use a private key or not | false |
| private_key | string | The private key value as text | "-----BEGIN ENCRYPTED PRIVATE KEY-----\nblah\n-----END ENCRYPTED PRIVATE KEY-----" |
| host | string | The host value for Snowflake that was onboarded in K | "abc123.australia-east.azure.snowflakecomputing.com" |
| enterprise | boolean | Do you have Snowflake Enterprise Edition? | false |

kada_snowflake_extractor_config.json

JSON
{
    "account": "",
    "username": "",
    "password": "",
    "information_database": "",
    "role": "",
    "warehouse": "",
    "databases": [],
    "login_timeout": 5,
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true,
    "use_private_key": false,
    "private_key": "",
    "host": "",
    "enterprise": false
}
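
If you prefer not to store the password (or the private key) in the config file as plain text, one option is to populate those fields at runtime before handing the dictionary to the Extractor. The sketch below is illustrative only; the environment variable name and the key file path are assumptions, not part of the KADA library.

Python
# Illustrative only: fill in secrets at runtime instead of keeping them in the file.
import json
import os

with open('kada_snowflake_extractor_config.json') as f:
    config = json.load(f)

# Take the password from an environment variable (hypothetical name)
config['password'] = os.environ['KADA_SNOWFLAKE_PASSWORD']

# If using key pair authentication, read the private key text from a file (hypothetical path)
if config.get('use_private_key'):
    with open('/path/to/rsa_key.p8') as key_file:
        config['private_key'] = key_file.read()

# The resulting dictionary can then be passed to the Extractor, e.g. Extractor(**config)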

Step 5: Run the Collector

This is the wrapper script: kada_snowflake_extractor.py

Python
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.snowflake import Extractor

get_generic_logger('root')

_type = 'snowflake'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA Snowflake Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename)
parser.add_argument('--name', '-n', dest='name', default=_type)
args = parser.parse_args()

start_hwm, end_hwm = get_hwm(args.name)

ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

publish_hwm(args.name, end_hwm)

class Extractor(account: str = None,
    username: str = None,
    password: str = None,
    databases: list = [],
    information_database: str = 'snowflake',
    role: str = 'accountadmin',
    output_path: str = './output',
    warehouse: str = None,
    login_timeout: int = 5,
    mask: bool = False,
    compress: bool = False,
    host: str = None,
    use_private_key: bool = False,
    private_key: str = None,
    enterprise: bool = False,
    ) -> None

enterprise: set to true if you have Snowflake Enterprise Edition (required to extract account_usage.access_history)


Step 6: Check the Collector Outputs

K Extracts

A set of files (e.g. metadata, databaselog, linkages, events, etc.) will be generated in the output_path directory.

High Water Mark File

A high water mark file called snowflake_hwm.txt is created.

Refer to Collector Integration General Notes for more information.
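
As a quick sanity check before pushing, you can list the generated extracts and inspect the recorded high water mark. The snippet below is illustrative only; it assumes the output_path from the config and that the high water mark file sits in the directory the wrapper script was run from.

Python
# Illustrative sanity check of the collector outputs before pushing to K.
import os

output_path = '/tmp/output'   # as set in the collector config

# List the generated extract files (.csv, or .csv.gz if compress = true)
for filename in sorted(os.listdir(output_path)):
    if filename.endswith(('.csv', '.csv.gz')):
        size = os.path.getsize(os.path.join(output_path, filename))
        print(f'{filename}: {size} bytes')

# Show the recorded high water mark (assumes it is in the current working directory)
with open('snowflake_hwm.txt') as f:
    print('high water mark:', f.read().strip())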


Step 7: Push the Extracts to K

Once the files have been validated, you can push the files to the K landing directory.
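
If your K instance is hosted on AWS, a minimal push can be done with boto3 using the access key and secret provided by KADA Support. The snippet below is a sketch only; the bucket, region, landing path and file name are placeholders for the values noted in Step 2.

Python
# Illustrative only: push one extract file to the K landing directory on S3.
import boto3

s3 = boto3.client(
    's3',
    aws_access_key_id='<access key from KADA Support>',
    aws_secret_access_key='<secret from KADA Support>',
    region_name='<storage_aws_region value>',
)

bucket = '<storage_root_folder value>'   # the S3 bucket noted in Step 2
landing_path = '<landing directory from the About this Source section>'
local_file = '/tmp/output/metadata.csv'

s3.upload_file(local_file, bucket, f'{landing_path}/metadata.csv')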


Example: Using Airflow to orchestrate the Extract and Push to K

The following example shows how you can orchestrate the Tableau collector using Airflow and push the files to K hosted on Azure. The code is not intended to be used as-is, but as a template for your own DAG.

Python
# built-in
import os

# Installed
from airflow.operators.python_operator import PythonOperator
from airflow.models.dag import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
from airflow.utils.task_group import TaskGroup

from plugins.utils.azure_blob_storage import AzureBlobStorage

from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

# To be configured by the customer.
# Note variables may change if using a different object store.
KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""
KADA_STORAGE_ACCOUNT = ""
KADA_LANDING_PATH = "lz/tableau/landing"
KADA_EXTRACTOR_CONFIG = {
    "server_address": "http://tabserver",
    "username": "user",
    "password": "password",
    "sites": [],
    "db_host": "tabserver",
    "db_username": "repo_user",
    "db_password": "repo_password",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": False,
    "retries": 5,
    "dry_run": False,
    "output_path": "/set/to/output/path",
    "mask": True,
    "mapping": {}
}

# To be implemented by the customer.
# Upload to your landing zone storage.
# Change '.csv' to '.csv.gz' if you set compress = true in the config
def upload():
  output = KADA_EXTRACTOR_CONFIG['output_path']
  for filename in os.listdir(output):
      if filename.endswith('.csv'):
        file_to_upload_path = os.path.join(output, filename)

        AzureBlobStorage.upload_file_sas_token(
            client=KADA_SAS_TOKEN,
            storage_account=KADA_STORAGE_ACCOUNT,
            container=KADA_CONTAINER,
            blob=f'{KADA_LANDING_PATH}/{filename}',
            local_path=file_to_upload_path
        )

with DAG(dag_id="taskgroup_example", start_date=days_ago(1)) as dag:

    # To be implemented by the customer.
    # Retrieve the timestamp from the prior run
    start_hwm = 'YYYY-MM-DD HH:mm:SS'
    end_hwm = 'YYYY-MM-DD HH:mm:SS' # timestamp now

    ext = Extractor(**KADA_EXTRACTOR_CONFIG)

    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="extract tableau and upload") as extract_upload:
        task_1 = PythonOperator(
            task_id="extract_tableau",
            python_callable=ext.run,
            op_kwargs={"start_hwm": start_hwm, "end_hwm": end_hwm},
            provide_context=True,
        )

        task_2 = PythonOperator(
            task_id="upload_extracts",
            python_callable=upload,
            op_kwargs={},
            provide_context=True,
        )

        # To be implemented by the customer.
        # Timestamp needs to be saved for next run
        task_3 = DummyOperator(task_id='save_hwm')

    end = DummyOperator(task_id='end')

    start >> extract_upload >> end