Azure SQL (via Collector method) - v3.0.0

About Collectors

Collectors are extractors that are developed and managed by you (a customer of K).

KADA provides python libraries that customers can use to quickly deploy a Collector.

Why you should use a Collector

There are several reasons why you may use a collector vs the direct connect extractor:

  1. You are using the KADA SaaS offering and K cannot connect to your sources due to firewall restrictions

  2. You want to push metadata to KADA rather than allow it to pull data for security reasons

  3. You want to inspect the metadata before pushing it to K

Using a collector requires you to manage:

  1. Deploying and orchestrating the extract code

  2. Managing a high water mark so the extract only pulls the latest metadata

  3. Storing and pushing the extracts to your K instance


Pre-requisites

Collector server minimum requirements

For the collector to operate effectively, it will need to be deployed on a server with the below minimum specifications:

  • CPU: 2 vCPU

  • Memory: 8GB

  • Storage: 30GB (depends on historical data extracted)

  • OS: a Unix distribution (e.g. RHEL) is preferred, but Windows Server is also supported

  • Python 3.10.x or later

  • Access to K landing directory

SQL Server Requirements

Setting up SQL Server for metadata extraction is a two-step process.

Step 1: Establish SQLServer Access

Apply in MASTER using an Azure SQL Admin user

CREATE LOGIN kadauser WITH password='PASSWORD';
CREATE USER kadauser FROM LOGIN kadauser;

Apply per database in scope for metadata collection.

CREATE USER kadauser FROM LOGIN kadauser;
GRANT VIEW DEFINITION TO kadauser;
GRANT VIEW DATABASE STATE to kadauser;
GRANT CONTROL to kadauser;  -- required for extended events sys.fn_xe_file_target_read_file

The created user must also be able to SELECT from the following system tables and views in each database (a quick access check sketch follows this list):

  • INFORMATION_SCHEMA.ROUTINES

  • INFORMATION_SCHEMA.VIEWS

  • INFORMATION_SCHEMA.TABLE_CONSTRAINTS

  • INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE

  • INFORMATION_SCHEMA.TABLES

  • INFORMATION_SCHEMA.COLUMNS

  • sys.foreign_key_columns

  • sys.objects

  • sys.tables

  • sys.schemas

  • sys.columns

  • sys.databases
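
To confirm these grants took effect, you can run a quick check per database. The snippet below is a minimal sketch only: it assumes the pyodbc/ODBC setup described in Step 3 (Install the Collector), and the server, database and credential values are placeholders.

Python
import pyodbc

# System tables and views the kadauser user must be able to SELECT from
views = [
    "INFORMATION_SCHEMA.ROUTINES", "INFORMATION_SCHEMA.VIEWS",
    "INFORMATION_SCHEMA.TABLE_CONSTRAINTS", "INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE",
    "INFORMATION_SCHEMA.TABLES", "INFORMATION_SCHEMA.COLUMNS",
    "sys.foreign_key_columns", "sys.objects", "sys.tables",
    "sys.schemas", "sys.columns", "sys.databases",
]

# Placeholder connection values -- substitute your own
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydatabase.database.windows.net;"
    "DATABASE=dwh;UID=kadauser;PWD=<password>"
)
cursor = conn.cursor()
for view in views:
    cursor.execute(f"SELECT TOP 1 * FROM {view}")
    cursor.fetchall()
    print(f"OK: {view}")
conn.close()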

Step 2: Setup Extended Event Logging

Extended Events Setup is in pilot for Azure SQL

An Azure SQL Admin will need to set up an extended events process to capture Query Execution in SQLServer.

First, create a new Azure Storage Account or reuse an existing one. Then create a blob container; in this example the container is called extended-events.

Run the following script to set up Extended Events logging (a verification sketch follows the script).

Apply per database in scope for metadata collection.

SQL
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<REPLACE with your key: abc1234>';

CREATE DATABASE SCOPED CREDENTIAL [https://your.blob.core.windows.net/extended-events]
WITH IDENTITY='SHARED ACCESS SIGNATURE',
SECRET = '< REPLACE WITH YOUR SAS TOKEN: sp=racwdl ...>';


-- Make sure this file name is unique per database
CREATE EVENT SESSION [KADA] ON DATABASE
	ADD EVENT sqlserver.sp_statement_completed (
		ACTION(package0.event_sequence, sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.query_hash, sqlserver.session_id, sqlserver.transaction_id, sqlserver.username) WHERE (
			(
				[statement] LIKE '%CREATE %'
				OR [statement] LIKE '%DROP %'
				OR [statement] LIKE '%MERGE %'
				OR [statement] LIKE '%FROM %'
				)
			AND [sqlserver].[is_system] = (0)
			AND NOT [statement] LIKE 'Insert into % Values %'
			AND [sqlserver].[Query_hash] <> (0)
			)
		), 
	ADD EVENT sqlserver.sql_statement_completed (
	SET collect_statement = (1) ACTION(package0.event_sequence, sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.query_hash, sqlserver.session_id, sqlserver.transaction_id, sqlserver.username) WHERE (
		(
			[statement] LIKE '%CREATE %'
			OR [statement] LIKE '%DROP %'
			OR [statement] LIKE '%MERGE %'
			OR [statement] LIKE '%FROM %'
			)
		AND [sqlserver].[is_system] = (0)
		AND NOT [statement] LIKE 'Insert into % Values %'
		AND [sqlserver].[Query_hash] <> (0)
		)
	) ADD TARGET package0.event_file (SET filename = N'https://your.blob.core.windows.net/extended-events/<REPLACE with your db name: database1>.xel')
	WITH (MAX_MEMORY = 4096 KB, EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 30 SECONDS, MAX_EVENT_SIZE = 0 KB, MEMORY_PARTITION_MODE = NONE, TRACK_CAUSALITY = ON, STARTUP_STATE = ON)
GO
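
Once the session has been created, you can optionally confirm that it exists and is running before relying on it. The snippet below is a minimal sketch only (not part of the official setup): it assumes the pyodbc/ODBC setup from Step 3 (Install the Collector), uses the database-scoped catalog views sys.database_event_sessions and sys.dm_xe_database_sessions, and its connection values are placeholders. If the session exists but is not running, an admin can start it with ALTER EVENT SESSION [KADA] ON DATABASE STATE = START.

Python
import pyodbc

# Placeholder connection values -- substitute your own
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydatabase.database.windows.net;"
    "DATABASE=dwh;UID=kadauser;PWD=<password>"
)
cursor = conn.cursor()

# Database-scoped event sessions defined on this Azure SQL database
cursor.execute("SELECT name FROM sys.database_event_sessions WHERE name = 'KADA'")
print("KADA session defined:", cursor.fetchone() is not None)

# Event sessions that are currently started
cursor.execute("SELECT name FROM sys.dm_xe_database_sessions WHERE name = 'KADA'")
print("KADA session running:", cursor.fetchone() is not None)
conn.close()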

Step 1: Create the Source in K

Create a source in K

  • Go to Settings, select Sources and click Add Source

  • Select "Load from File" option

  • Give the source a Name - e.g. SQLServer Azure Production

  • Add the Host name for the SQLServer Azure Instance

  • Click Next & Finish Setup


Step 2: Getting Access to the Source Landing Directory

When using a Collector you will push metadata to a K landing directory.

To find your landing directory you will need to:

  1. Go to Platform Settings → Settings. Note down the value of the following settings:

    • If using Azure: storage_azure_storage_account

    • If using AWS:

      • storage_root_folder - the AWS s3 bucket

      • storage_aws_region - the region where the AWS s3 bucket is hosted

  2. Go to Sources - Edit the Source you have configured. Note down the landing directory in the About this Source section.

To connect to the landing directory you will need:

  • If using Azure: a SAS token to push data to the landing directory. Request this from KADA Support (support@kada.ai)

  • If using AWS:

    • An Access key and Secret. Request this from KADA Support (support@kada.ai)

    • OR provide your IAM role to KADA Support to provision access.


Step 3: Install the Collector

You can download the latest Core Library and Azure SQL whl via Platform Settings → Sources → Download Collectors.

Run the following command to install the collector

pip install kada_collectors_extractors_sqlserver_azure-x.x.x-py3-none-any.whl

You will also need to install the corresponding common library kada_collectors_lib-x.x.x for this collector to function properly.

pip install kada_collectors_lib-x.x.x-py3-none-any.whl

Note that pyodbc requires an ODBC package installed at the OS level, as well as a SQLServer ODBC driver. Refer to https://docs.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver15
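
A quick way to sanity check the ODBC setup is to list the drivers pyodbc can see. This is a minimal sketch; the installed driver name should match the driver value used in the configuration below.

Python
import pyodbc

# Should include e.g. 'ODBC Driver 17 for SQL Server' once the driver is installed
print(pyodbc.drivers())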


Step 4: Configure the Collector

FIELD         FIELD TYPE    DESCRIPTION                                           EXAMPLE
server        string        SQLServer Azure server                                "mydatabase.database.windows.net"
host          string        The onboarded host value in K                         "mydatabase.database.windows.net"
username      string        Username to log into the SQLServer Azure account      "myuser"
password      string        Password to log into the SQLServer Azure account
databases     list<string>  A list of databases to extract from SQLServer Azure   ["dwh", "adw"]
driver        string        This is the ODBC driver                               "ODBC Driver 17 for SQL Server"
meta_only     boolean       Extract metadata only without extended events         true
output_path   string        Absolute path to the output location                  "/tmp/output"
mask          boolean       To enable masking or not                              true
compress      boolean       To gzip the output or not                             true
events_name   string        The created extended event session name               KADA

kada_sqlserver_azure_extractor_config.json

JSON
{
    "server": "",
    "username": "",
    "password": "",
    "databases": [""],
    "driver": "ODBC Driver 17 for SQL Server",
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true,
    "meta_only": true,
    "host": "",
    "events_name": "KADA"
}

Step 5: Run the Collector

The following code sample (kada_sqlserver_azure_extractor.py) handles loading the configuration details and running the extractor; an example run command follows the sample.

Python
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.sqlserver_azure import Extractor

get_generic_logger('root')

_type = 'sqlserver_azure'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA SqlServer Azure Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename)
parser.add_argument('--name', '-n', dest='name', default=_type)
args = parser.parse_args()

start_hwm, end_hwm = get_hwm(args.name)

ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

publish_hwm(args.name, end_hwm)
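
Assuming the configuration file from Step 4 is saved alongside the script, a run would typically look like the following (the --config and --name arguments simply override the defaults shown in the sample):

python kada_sqlserver_azure_extractor.py --config kada_sqlserver_azure_extractor_config.json --name sqlserver_azure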

In some scenarios, you may receive an error message about SSL settings. This can be resolved by adjusting the OpenSSL settings. Refer to: https://github.com/mkleehammer/pyodbc/issues/610#issuecomment-534920201

Edit /etc/ssl/openssl.cnf:

# Change or add
MinProtocol = TLSv1.0
CipherString = DEFAULT@SECLEVEL=1

Step 6: Check the Collector Outputs

K Extracts

A set of files (e.g. metadata, databaselog, linkages, events, etc.) will be generated in the output_path directory.
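
Before pushing, you can quickly list what a run produced. This is a minimal sketch; the path should match the output_path in your configuration.

Python
import os

output_path = "/tmp/output"  # match the output_path value in your config
for name in sorted(os.listdir(output_path)):
    print(name, os.path.getsize(os.path.join(output_path, name)), "bytes")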

High Water Mark File

A high water mark file called sqlserver_azure_hwm.txt is created.

Refer to Collector Integration General Notes for more information.


Step 7: Push the Extracts to K

Once the files have been validated, you can push the files to the K landing directory.
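
If your K instance is hosted on Azure, a minimal push can be done with the azure-storage-blob package and the SAS token from Step 2. The sketch below is illustrative only: the storage account, container, landing path and token are placeholders to be replaced with the values gathered in Step 2 (see the Airflow example below for a fuller template).

Python
import os
from azure.storage.blob import ContainerClient

# Placeholders -- use the values gathered in Step 2
SAS_TOKEN = "<SAS token from KADA Support>"
ACCOUNT_URL = "https://<storage_azure_storage_account>.blob.core.windows.net"
CONTAINER = "<container>"
LANDING_PATH = "<landing directory from the About this Source section>"
OUTPUT_PATH = "/tmp/output"  # output_path from the collector config

client = ContainerClient(ACCOUNT_URL, CONTAINER, credential=SAS_TOKEN)
for filename in os.listdir(OUTPUT_PATH):
    if filename.endswith(".csv") or filename.endswith(".csv.gz"):
        with open(os.path.join(OUTPUT_PATH, filename), "rb") as f:
            client.upload_blob(name=f"{LANDING_PATH}/{filename}", data=f, overwrite=True)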


Example: Using Airflow to orchestrate the Extract and Push to K

The following example shows how you can orchestrate a collector (here, the Tableau collector) using Airflow and push the files to K hosted on Azure. The code is not expected to be used as-is, but as a template for your own DAG.

Python
# built-in
import os

# Installed
from airflow.operators.python_operator import PythonOperator
from airflow.models.dag import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
from airflow.utils.task_group import TaskGroup

from plugins.utils.azure_blob_storage import AzureBlobStorage

from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

# To be configured by the customer.
# Note variables may change if using a different object store.
KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""
KADA_STORAGE_ACCOUNT = ""
KADA_LANDING_PATH = "lz/tableau/landing"
KADA_EXTRACTOR_CONFIG = {
    "server_address": "http://tabserver",
    "username": "user",
    "password": "password",
    "sites": [],
    "db_host": "tabserver",
    "db_username": "repo_user",
    "db_password": "repo_password",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": False,
    "retries": 5,
    "dry_run": False,
    "output_path": "/set/to/output/path",
    "mask": True,
    "mapping": {}
}

# To be implemented by the customer.
# Upload to your landing zone storage.
# Change '.csv' to '.csv.gz' if you set compress = true in the config
def upload():
  output = KADA_EXTRACTOR_CONFIG['output_path']
  for filename in os.listdir(output):
      if filename.endswith('.csv'):
        file_to_upload_path = os.path.join(output, filename)

        AzureBlobStorage.upload_file_sas_token(
            client=KADA_SAS_TOKEN,
            storage_account=KADA_STORAGE_ACCOUNT,
            container=KADA_CONTAINER,
            blob=f'{KADA_LANDING_PATH}/{filename}',
            local_path=file_to_upload_path
        )

with DAG(dag_id="taskgroup_example", start_date=days_ago(1)) as dag:

    # To be implemented by the customer.
    # Retrieve the timestamp from the prior run
    start_hwm = 'YYYY-MM-DD HH:mm:SS'
    end_hwm = 'YYYY-MM-DD HH:mm:SS' # timestamp now

    ext = Extractor(**KADA_EXTRACTOR_CONFIG)

    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="extract tableau and upload") as extract_upload:
        task_1 = PythonOperator(
            task_id="extract_tableau",
            python_callable=ext.run,
            op_kwargs={"start_hwm": start_hwm, "end_hwm": end_hwm},
            provide_context=True,
        )

        task_2 = PythonOperator(
            task_id="upload_extracts",
            python_callable=upload,
            op_kwargs={},
            provide_context=True,
        )

        # To be implemented by the customer.
        # Timestamp needs to be saved for next run
        task_3 = DummyOperator(task_id='save_hwm')

    end = DummyOperator(task_id='end')

    start >> extract_upload >> end