
SQL Server (via Collector method) - v3.0.0

About Collectors

Collector Method

Collectors are extractors that are developed and managed by you (a customer of K).

KADA provides python libraries that customers can use to quickly deploy a Collector.

Why you should use a Collector

There are several reasons why you may use a collector vs the direct connect extractor:

  1. You are using the KADA SaaS offering and it cannot connect to your sources due to firewall restrictions

  2. You want to push metadata to KADA rather than allow it to pull data, for security reasons

  3. You want to inspect the metadata before pushing it to K

Using a collector requires you to manage

  1. Deploying and orchestrating the extract code

  2. Managing a high water mark so the extract only pulls the latest metadata

  3. Storing and pushing the extracts to your K instance.


Pre-requisites

  • Python 3.6 - 3.10

  • Access to K landing directory

  • Access to SQL Server (see section below)

  • Check the SQLServer instance port

    • Run the following query and note the local TCP port.

      CODE
      SELECT local_tcp_port
      FROM   sys.dm_exec_connections
      WHERE  session_id = @@SPID
      GO

SQL Server Access

Setting up SQL Server for metadata extraction is a two-step process.

Step 1: Establish SQLServer Access

Create a SQLServer user with read access to the following objects in each SQLServer database. A minimal sketch of the required grants is shown after this list.

  • INFORMATION_SCHEMA.ROUTINES

  • INFORMATION_SCHEMA.VIEWS

  • INFORMATION_SCHEMA.TABLE_CONSTRAINTS

  • INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE

  • INFORMATION_SCHEMA.TABLES

  • INFORMATION_SCHEMA.COLUMNS

  • sys.foreign_key_columns

  • sys.objects

  • sys.tables

  • sys.schemas

  • sys.columns

  • VIEW SERVER STATE permission on the server

    • Required for the Extended Events log

  • VIEW DEFINITION

    • All databases

      CODE
      USE master 
      GO 
      GRANT VIEW ANY DEFINITION TO Kadauser
    • Selected databases. Repeat for each database

      CODE
      USE <REPLACE WITH A DATABASE> 
      GO 
      GRANT VIEW ANY DEFINITION TO Kadauser
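
How you create the account depends on your authentication setup. The following is a minimal sketch only, assuming a SQL login named Kadauser; adapt the names, password policy and per-database grants to your own security standards. Visibility of rows in the INFORMATION_SCHEMA and sys catalog views is governed by the VIEW DEFINITION / VIEW ANY DEFINITION permissions.

CODE
-- Minimal sketch only. Assumes a SQL login named Kadauser; adjust to your environment.
USE master
GO
CREATE LOGIN Kadauser WITH PASSWORD = '<STRONG PASSWORD>';
GRANT VIEW SERVER STATE TO Kadauser;      -- required for the Extended Events log
GRANT VIEW ANY DEFINITION TO Kadauser;    -- or grant VIEW DEFINITION per database instead
GO

-- Repeat for each database to be extracted
USE <REPLACE WITH A DATABASE>
GO
CREATE USER Kadauser FOR LOGIN Kadauser;
GO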

Step 2: Setup Extended Event Logging

A SQLServer admin will need to set up an Extended Events session to capture query execution in SQLServer.

Some tuning of the logging parameters may be needed depending on the processing volume on your SQLServer instance.

Example script to set up Extended Events logging.

CODE
--Query To Create Extended Events Session
CREATE EVENT SESSION [KADA] ON SERVER ADD EVENT sqlserver.sp_statement_completed (
	ACTION(package0.collect_system_time, package0.event_sequence, sqlos.task_time, sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.nt_username, sqlserver.query_hash, sqlserver.server_instance_name, sqlserver.server_principal_name, sqlserver.server_principal_sid, sqlserver.session_id, sqlserver.session_nt_username, sqlserver.transaction_id, sqlserver.username) WHERE (
		(
			[statement] LIKE '%CREATE %'
			OR [statement] LIKE '%DROP %'
			OR [statement] LIKE '%MERGE %'
			OR [statement] LIKE '%FROM %'
			)
		AND [sqlserver].[server_principal_name] <> N'USERS_TO_EXCLUDE'
		AND [sqlserver].[is_system] = (0)
		AND NOT [statement] LIKE 'Insert into % Values %'
		AND [sqlserver].[Query_hash] <> (0)
		)
	), ADD EVENT sqlserver.sql_statement_completed (
	SET collect_statement = (1) ACTION(package0.collect_system_time, package0.event_sequence, sqlos.task_time, sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.nt_username, sqlserver.query_hash, sqlserver.server_instance_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.session_nt_username, sqlserver.transaction_id, sqlserver.username) WHERE (
		(
			[statement] LIKE '%CREATE %'
			OR [statement] LIKE '%DROP %'
			OR [statement] LIKE '%MERGE %'
			OR [statement] LIKE '%FROM %'
			)
		AND [sqlserver].[server_principal_name] <> N'USERS_TO_EXCLUDE'
		AND [sqlserver].[is_system] = (0)
		AND NOT [statement] LIKE 'Insert into % Values %'
		AND [sqlserver].[Query_hash] <> (0)
		)
	) ADD TARGET package0.event_file (SET filename = N'G:\extended events\Extendedevents.xel', max_file_size = (20), max_rollover_files = (100))
	WITH (MAX_MEMORY = 4096 KB, EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 30 SECONDS, MAX_EVENT_SIZE = 0 KB, MEMORY_PARTITION_MODE = NONE, TRACK_CAUSALITY = ON, STARTUP_STATE = ON)
GO


-- Check if the session is dropping events and see other data about the session
-- https://sqlperformance.com/2019/10/extended-events/understanding-event-loss-with-extended-events
SELECT
   s.name, 
   s.total_regular_buffers,
   s.regular_buffer_size,
   s.total_large_buffers,
   s.large_buffer_size,
   s.dropped_event_count,
   s.dropped_buffer_count,
   s.largest_event_dropped_size
FROM sys.dm_xe_sessions AS s;


-- Also check log growth rate. Apply filters to remove noise.
-- some filters:
-- [sqlserver].[server_principal_name] = N'name of principal'
-- [sqlserver].[is_system] = (0)
-- [sqlserver].[client_app_name] = N'name of app'
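
Note the script above sets STARTUP_STATE = ON, which only starts the session automatically after a server restart; it does not start it immediately. The following is a minimal sketch for starting the session and confirming it is running (the session only appears in sys.dm_xe_sessions while it is active).

CODE
-- Start the KADA Extended Events session now
ALTER EVENT SESSION [KADA] ON SERVER STATE = START;
GO

-- Confirm the session is running
SELECT name, create_time
FROM sys.dm_xe_sessions
WHERE name = N'KADA';
GO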


Step 1: Create the Source in K

Create a source in K

  • Go to Settings, select Sources and click Add Source

  • Select the “Load from File” option

  • Give the source a Name - e.g. SQL Server Production

  • Add the Host name for the SQL Server instance

  • Click Next & Finish Setup


Step 2: Getting Access to the Source Landing Directory

Collector Method

When using a Collector you will push metadata to a K landing directory.

To find your landing directory you will need to

  1. Go to Platform Settings - Settings. Note down the value of the following settings

    1. If using Azure: storage_azure_storage_account

    2. If using AWS:

      1. storage_root_folder - the AWS s3 bucket

      2. storage_aws_region - the region where the AWS s3 bucket is hosted

  2. Go to Sources - Edit the Source you have configured. Note down the landing directory in the About this Source section

To connect to the landing directory you will need

  • If using Azure: a SAS token to push data to the landing directory. Request this from KADA Support (support@kada.ai)

  • If using AWS (a connectivity check is sketched after this list):

    • an Access key and Secret. Request this from KADA Support (support@kada.ai)

    • OR provide your IAM role to KADA Support to provision access.
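
If your K instance is hosted on AWS, the following is a minimal sketch only (assuming the boto3 package and the access key and secret provided by KADA Support) for confirming you can reach the landing directory. The bucket and region are the storage_root_folder and storage_aws_region values noted above; the landing prefix is illustrative.

PY
# Minimal sketch only: verify access to the K landing directory on AWS S3.
# The key/secret come from KADA Support; bucket, region and landing prefix come from Step 2.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="<ACCESS KEY>",
    aws_secret_access_key="<SECRET>",
    region_name="<storage_aws_region>",
)

# List a few objects under the landing directory for your source
response = s3.list_objects_v2(
    Bucket="<storage_root_folder>",
    Prefix="lz/sqlserver/landing",  # hypothetical; use the landing directory shown in About this Source
    MaxKeys=5,
)
print(response.get("Contents", []))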


Step 3: Install the Collector

It is recommended to use a Python environment tool such as pyenv or pipenv if you are not intending to install this package at the system level.

Some Python packages also have dependencies on OS-level packages, so you may be required to install additional OS packages if the install below fails.

You can download the latest Core Library and whl via Platform Settings → Sources → Download Collectors

Run the following command to install the collector

CODE
pip install kada_collectors_extractors_<version>-none-any.whl

You will also need to install the common library kada_collectors_lib for this collector to function properly.

CODE
pip install kada_collectors_lib-<version>-none-any.whl

Note that you will also need an ODBC package installed at the OS level for pyodbc, as well as a SQLServer ODBC driver. Refer to https://docs.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver15


Step 4: Configure the Collector

The collector requires a set of parameters to connect to and extract metadata from SQL Server.

  • server (string): SQLServer server host. Note: if the default port is not used, append the port to the server name, e.g. 10.123.123.123\\<SERVICE NAME>,<INSTANCE PORT>. Example: “10.1.18.19”

  • username (string): Username to log into the SQLServer account. Example: “myuser”

  • password (string): Password to log into the SQLServer account.

  • databases (list<string>): A list of databases to extract from SQLServer. Example: [“dwh”, “adw”]

  • sqlserver_version (string): SQLServer release name; supported versions are 2012, 2016, 2017 and 2019. Example: 2016

  • driver (string): The ODBC driver to use, generally ODBC Driver 17 for SQL Server. If you have another driver installed, use that instead. Example: “ODBC Driver 17 for SQL Server”

  • events_path (string): The Extended Events file pattern configuration for SQLServer. Example: “/tmp/eevents*.xel”

  • output_path (string): Absolute path to the output location where files are to be written. Example: “/tmp/output”

  • mask (boolean): Whether to enable masking or not. Example: true

  • compress (boolean): Whether to gzip the output or not. Example: true

These parameters can be added directly into the run method, or you can pass the parameters in via a JSON file. The following is an example that is used in the example run code below.

kada_sqlserver_extractor_config.json

JSON
{
    "server": "",
    "username": "",
    "password": "",
    "databases": [""],
    "sqlserver_version": "2016",
    "driver": "ODBC Driver 17 for SQL Server",
    "events_path": "/tmp/Extendedevents*.xel",
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true
}

Step 5: Run the Collector

The following code is an example of how to run the extractor. You may need to uplift this code to meet any code standards at your organisation.

This can be executed in any python environment where the whl has been installed.

This code sample (kada_sqlserver_extractor.py) loads the configuration details from kada_sqlserver_extractor_config.json in the same directory by default

PY
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.sqlserver import Extractor

get_generic_logger('root') # Set to use the root logger, you can change the context accordingly or define your own logger

_type = 'sqlserver'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA SqlServer Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename, help='Location of the configuration json, default is the config json in the same directory as the script.')
args = parser.parse_args()

start_hwm, end_hwm = get_hwm(_type)

ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

publish_hwm(_type, end_hwm)

Advanced options:

If you wish to maintain your own high water mark files elsewhere, you can use the above section’s script as a guide on how to call the extractor; the configuration file is simply the keyword arguments in JSON format. Refer to Collector Integration General Notes | Storing-HWM-in-another-location for more information. A minimal sketch is shown below.
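
This sketch assumes you keep the high water mark in a local file of your choosing rather than using get_hwm/publish_hwm; the file location and the initial timestamp are illustrative only.

PY
# Minimal sketch: manage the high water mark yourself instead of using get_hwm/publish_hwm.
import os
from datetime import datetime, timezone

from kada_collectors.extractors.utils import load_config
from kada_collectors.extractors.sqlserver import Extractor

HWM_FILE = '/path/to/sqlserver_hwm.txt'  # hypothetical location of your own HWM store

# Read the previous watermark, or fall back to an initial load window
if os.path.exists(HWM_FILE):
    with open(HWM_FILE) as f:
        start_hwm = f.read().strip()
else:
    start_hwm = '2020-01-01 00:00:00'

end_hwm = datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S')  # timestamp now

ext = Extractor(**load_config('kada_sqlserver_extractor_config.json'))
ext.run(start_hwm=start_hwm, end_hwm=end_hwm)

# Persist the new watermark for the next run
with open(HWM_FILE, 'w') as f:
    f.write(end_hwm)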

If you are handling external arguments of the runner yourself, you will need to consider additional items for the run method. Refer to Collector Integration General Notes | The-run-method for more information.


PY
class Extractor(username: str = None, password: str = None, server: str = None, \
    driver: str = None, events_path: str = None, databases: list = [], \
    sqlserver_version: str = None, output_path: str = './output', mask: bool = False, \
    compress: bool = False) -> None

username: username to sign into sqlserver
password: password to sign into sqlserver
server: sqlserver host
driver: sqlserver driver name
events_path: regex location of the events files on the server
databases: list of databases to extract
sqlserver_version: SQLServer release name; supported versions are 2012, 2016, 2017, 2019
output_path: full or relative path to where the outputs should go
mask: To mask the META/DATABASE_LOG files or not
compress: To gzip output files or not


Step 6: Check the Collector Outputs

K Extracts

A set of files (e.g. metadata, databaselog, linkages, events) will be generated. These files will appear in the output_path directory you set in the configuration details.

High Water Mark File

A high water mark file called sqlserver_hwm.txt is created in the same directory the extractor is executed from. This file is only produced if you call the publish_hwm method.


Step 7: Push the Extracts to K

Once the files have been validated, you can push the files to the K landing directory.

You can use Azure Storage Explorer if you want to do this manually at first. You can also push the files using Python (see the sketch below and the Airflow example).
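
The following is a minimal sketch only of a plain-Python push, assuming your K instance is hosted on Azure, the azure-storage-blob package is installed, and you have the SAS token, storage account, container and landing directory from Step 2.

PY
# Minimal sketch: push collector extracts to the K landing directory on Azure.
# SAS token, storage account, container and landing path come from Step 2; adjust to your values.
import os

from azure.storage.blob import ContainerClient

SAS_TOKEN = os.getenv('KADA_SAS_TOKEN')          # issued by KADA Support
STORAGE_ACCOUNT = '<storage_azure_storage_account>'
CONTAINER = '<container>'
LANDING_PATH = 'lz/sqlserver/landing'            # hypothetical; use the landing directory for your Source
OUTPUT_PATH = '/tmp/output'                      # output_path from the collector configuration

container = ContainerClient(
    account_url=f'https://{STORAGE_ACCOUNT}.blob.core.windows.net',
    container_name=CONTAINER,
    credential=SAS_TOKEN,
)

# Upload every extract produced by the collector (.csv, or .csv.gz if compress = true)
for filename in os.listdir(OUTPUT_PATH):
    if filename.endswith(('.csv', '.csv.gz')):
        with open(os.path.join(OUTPUT_PATH, filename), 'rb') as data:
            container.upload_blob(name=f'{LANDING_PATH}/{filename}', data=data, overwrite=True)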


Example: Using Airflow to orchestrate the Extract and Push to K

Collector Method

The following example shows how you can orchestrate the Tableau collector using Airflow and push the files to K hosted on Azure. The code is not expected to be used as-is, but as a template for your own DAG; swap in the SQL Server extractor and configuration from the steps above.

PY
# built-in
import os

# Installed
from airflow.operators.python_operator import PythonOperator
from airflow.models.dag import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
from airflow.utils.task_group import TaskGroup

from plugins.utils.azure_blob_storage import AzureBlobStorage

from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

# To be configured by the customer.
# Note variables may change if using a different object store.
KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""
KADA_STORAGE_ACCOUNT = ""
KADA_LANDING_PATH = "lz/tableau/landing"
KADA_EXTRACTOR_CONFIG = {
    "server_address": "http://tabserver",
    "username": "user",
    "password": "password",
    "sites": [],
    "db_host": "tabserver",
    "db_username": "repo_user",
    "db_password": "repo_password",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": False,
    "retries": 5,
    "dry_run": False,
    "output_path": "/set/to/output/path",
    "mask": True,
    "mapping": {}
}

# To be implemented by the customer. 
# Upload to your landing zone storage.
# Change '.csv' to '.csv.gz' if you set compress = true in the config
def upload():
  output = KADA_EXTRACTOR_CONFIG['output_path']
  for filename in os.listdir(output):
      if filename.endswith('.csv'):
        file_to_upload_path = os.path.join(output, filename)

        AzureBlobStorage.upload_file_sas_token(
            client=KADA_SAS_TOKEN,
            storage_account=KADA_STORAGE_ACCOUNT,
            container=KADA_CONTAINER, 
            blob=f'{KADA_LANDING_PATH}/{filename}', 
            local_path=file_to_upload_path
        )

with DAG(dag_id="taskgroup_example", start_date=days_ago(1)) as dag:
  
    # To be implemented by the customer.
    # Retrieve the timestamp from the prior run
    start_hwm = 'YYYY-MM-DD HH:mm:SS'
    end_hwm = 'YYYY-MM-DD HH:mm:SS' # timestamp now
    
    ext = Extractor(**KADA_EXTRACTOR_CONFIG)
    
    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="extract tableau and upload") as extract_upload:
        task_1 = PythonOperator(
            task_id="extract_tableau",
            python_callable=ext.run, 
            op_kwargs={"start_hwm": start_hwm, "end_hwm": end_hwm},
            provide_context=True,
        )
        
        task_2 = PythonOperator(
            task_id="upload_extracts",
            python_callable=upload, 
            op_kwargs={},
            provide_context=True,
        )

        # To be implemented by the customer. 
        # Timestamp needs to be saved for next run
        task_3 = DummyOperator(task_id='save_hwm') 

    end = DummyOperator(task_id='end')

    start >> extract_upload >> end
