About Collectors
Collectors are extractors that are developed and managed by you (a customer of K).
KADA provides Python libraries that customers can use to quickly deploy a Collector.
Why you should use a Collector
There are several reasons why you may use a Collector instead of the direct connect extractor:
- You are using the KADA SaaS offering and it cannot connect to your sources due to firewall restrictions
- You want to push metadata to KADA rather than allow it to pull data, for security reasons
- You want to inspect the metadata before pushing it to K
Using a Collector requires you to manage:
- Deploying and orchestrating the extract code
- Managing a high water mark so the extract only pulls the latest metadata
- Storing and pushing the extracts to your K instance.
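The high water mark bookkeeping in the second point can be sketched as follows. This is illustrative only: the file name, location, and timestamp format here are assumptions, and the KADA library ships its own `get_hwm`/`publish_hwm` helpers (used in Step 7), so treat this as a sketch of the concept rather than the library's behaviour.

```python
# Illustrative high water mark management: read the last successful extract
# timestamp, run the extract for the window since then, then persist the new
# timestamp so the next run only pulls newer metadata.
from datetime import datetime, timezone
from pathlib import Path

HWM_FILE = Path("cognos_hwm.txt")  # hypothetical location

def get_hwm(default: str = "2000-01-01 00:00:00") -> str:
    """Return the last successful extract timestamp, or a default on first run."""
    if HWM_FILE.exists():
        return HWM_FILE.read_text().strip()
    return default

def publish_hwm(end_hwm: str) -> None:
    """Persist the end timestamp for the next run."""
    HWM_FILE.write_text(end_hwm)

start_hwm = get_hwm()
end_hwm = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
# ... run the extract over the window (start_hwm, end_hwm] here ...
publish_hwm(end_hwm)
```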
Pre-requisites
Collector Server Minimum Requirements
Integration with Cognos uses the Cognos Analytics APIs, which are available from version 11.1.7 onwards.
Earlier versions are currently not supported.
Cognos Requirements
- Cognos access
  - A Cognos Analytics user that has the ability to read all objects in Cognos
  - A SQL-authenticated database user for the underlying Audit Database configured for Cognos
- Cognos auditing must be enabled (log level: Basic)
- The Collector currently only supports an Audit Database on SQL Server 2016 or higher. If you use another database type, please contact KADA Support.
Step 1: Set up the KADA user configuration in Cognos
This step is performed by a Cognos Admin.
- Log into your Cognos instance.
  - Note down the URL you use, e.g. https://kada-cognos.cloudapp.net/, to be used in Step 3.
- Create a new KADA user.
  - Follow the steps here: https://www.ibm.com/docs/en/cognos-analytics/11.2.0?topic=namespace-creating-managing-users
- Add the user to a role that has read access to the objects to be profiled/monitored.
  - To enable K to monitor ALL objects, the user will need read access to ALL Cognos objects.
- Note down the Namespace ID for the namespace where the user was created.
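As a quick check that the new user can authenticate, you can log in through the Cognos Analytics REST session API. The endpoint path and CAM* parameter names below follow the Cognos 11.x REST API; verify them against the API documentation for your Cognos version before relying on this sketch, and treat the credential values as placeholders.

```python
# Sketch: authenticating the KADA user against the Cognos Analytics REST API.
import json
import urllib.request

def build_session_payload(namespace_id: str, username: str, password: str) -> dict:
    """Request body for the Cognos session login (CAM* parameter names)."""
    return {
        "parameters": [
            {"name": "CAMNamespace", "value": namespace_id},
            {"name": "CAMUsername", "value": username},
            {"name": "CAMPassword", "value": password},
        ]
    }

def login(server_url: str, namespace_id: str, username: str, password: str):
    """PUT /api/v1/session; raises urllib.error.HTTPError on bad credentials."""
    req = urllib.request.Request(
        f"{server_url.rstrip('/')}/api/v1/session",
        data=json.dumps(build_session_payload(namespace_id, username, password)).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)

# Build the payload only; no network call is made here.
payload = build_session_payload("CognosEx", "kada_user", "secret")
```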
Step 2: Set up the KADA user in the Cognos Audit Database
- Log into your Cognos Audit Database, e.g. SQL Server.
- Create a new KADA database user.
- Give the KADA database user READ ONLY access to the following tables in the Audit Database:
  - COGIPF_VIEWREPORT
  - COGIPF_USERLOGON
  - COGIPF_RUNREPORT
  - COGIPF_RUNJOB
Step 3: Create the Source in K
Create a Cognos source in K
- Log into your K instance
- Go to Platform Settings, select Sources and click Add Source
- Select Cognos
- Select the "Load from File" option
- Give the source a Name, e.g. Cognos Production
- Add the Host name: use the Cognos URL from Step 1
- Click Finish Setup
Step 4: Getting Access to the Source Landing Directory
When using a Collector you will push metadata to a K landing directory.
To find your landing directory you will need to:
- Go to Platform Settings - Settings. Note down the value of the following settings:
  - If using Azure: storage_azure_storage_account
  - If using AWS:
    - storage_root_folder - the AWS S3 bucket
    - storage_aws_region - the region where the AWS S3 bucket is hosted
- Go to Sources - Edit the Source you have configured. Note down the landing directory in the About this Source section.
To connect to the landing directory you will need:
- If using Azure: a SAS token to push data to the landing directory. Request this from KADA Support (support@kada.ai).
- If using AWS:
  - An Access Key and Secret. Request this from KADA Support (support@kada.ai), OR
  - Provide your IAM role to KADA Support to provision access.
Step 5: Install the Collector
You can download the latest Core Library and whl via Platform Settings → Sources → Download Collectors
Run the following command to install the collector
pip install kada_collectors_extractors_<version>-none-any.whl
You will also need to install the common library kada_collectors_lib for this collector to function properly.
pip install kada_collectors_lib-<version>-none-any.whl
Note that you will also need an ODBC package installed at the OS level for pyodbc to use, as well as a SQL Server ODBC driver. Refer to https://docs.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver15
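The db_driver, db_host, db_port and related settings configured in Step 6 map onto a standard ODBC connection string. A minimal sketch of how those pieces fit together; the values are the example placeholders used elsewhere in this guide, and the commented-out part requires pyodbc plus the SQL Server driver above to actually run:

```python
# Sketch: assembling an ODBC connection string from the collector's db_* settings.
def build_conn_str(driver: str, host: str, port: int, database: str,
                   username: str, password: str) -> str:
    """SQL Server ODBC connection string; driver name must match the installed driver."""
    return (
        f"DRIVER={{{driver}}};SERVER={host},{port};"
        f"DATABASE={database};UID={username};PWD={password}"
    )

conn_str = build_conn_str(
    "ODBC Driver 17 for SQL Server", "10.1.19.15", 1433, "Audit", "kada", "secret"
)

# Uncomment to test connectivity to the Audit Database (requires pyodbc):
# import pyodbc
# with pyodbc.connect(conn_str, timeout=5) as conn:
#     conn.cursor().execute("SELECT TOP 1 * FROM dbo.COGIPF_USERLOGON")
```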
Step 6: Configure the Collector
| FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
|---|---|---|---|
| server_url | string | Cognos server address including the protocol | |
| username | string | Username to log into the Cognos server, created in Step 1 | "cognos" |
| password | string | Password to log into the Cognos server | |
| namespace | string | The user namespace which the user will log into | "CognosEx" |
| timeout | integer | API timeout for Cognos APIs in seconds | 20 |
| db_host | string | IP address or hostname of the Audit database | "10.1.19.15" |
| db_username | string | Username for the Audit database created in Step 2 | "kada" |
| db_password | string | Password for the database user created in Step 2 | |
| db_port | integer | Database port; the default is usually 1433 for SQL Server | 1433 |
| db_name | string | Database name where the audit tables are stored | "Audit" |
| db_schema | string | Schema name where the audit tables are stored | "dbo" |
| db_driver | string | Driver name; must match the one installed on the collector machine | "ODBC Driver 17 for SQL Server" |
| db_use_kerberos | boolean | Whether the database requires impersonation, e.g. Kerberos | false |
| meta_only | boolean | To extract metadata only, set this to true; otherwise leave it as false | false |
| output_path | string | Absolute path to the output location | "/tmp/output" |
| mask | boolean | To enable masking or not | true |
| mapping | json | Mapping of data source names to onboarded K hosts | {"somehost.adw": "analytics.adw"} |
| compress | boolean | To gzip the output or not | true |
kada_cognos_extractor_config.json
{
    "server_url": "http://xxx:9300",
    "username": "",
    "password": "",
    "namespace": "",
    "timeout": 20,
    "db_host": "",
    "db_username": "",
    "db_password": "",
    "db_port": 1433,
    "db_name": "",
    "db_schema": "",
    "db_driver": "",
    "db_use_kerberos": false,
    "meta_only": false,
    "output_path": "/tmp/output",
    "mask": false,
    "mapping": {},
    "compress": false
}
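Before running the collector it can be worth failing fast on an incomplete config file rather than mid-extract. A small sketch; the required key set mirrors the configuration table above, and the validation itself is not part of the KADA library:

```python
# Sketch: sanity-check a collector config file against the documented keys.
import json

REQUIRED_KEYS = {
    "server_url", "username", "password", "namespace", "timeout",
    "db_host", "db_username", "db_password", "db_port", "db_name",
    "db_schema", "db_driver", "db_use_kerberos", "meta_only",
    "output_path", "mask", "mapping", "compress",
}

def validate_config(path: str) -> dict:
    """Load a JSON config and raise if any documented key is missing."""
    with open(path) as f:
        config = json.load(f)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config missing keys: {sorted(missing)}")
    return config
```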
Step 7: Run the Collector
This is the wrapper script: kada_cognos_extractor.py
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.cognos import Extractor
get_generic_logger('root')
_type = 'cognos'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))
parser = argparse.ArgumentParser(description='KADA Cognos Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename)
parser.add_argument('--name', '-n', dest='name', default=_type)
args = parser.parse_args()
start_hwm, end_hwm = get_hwm(args.name)
ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})
publish_hwm(args.name, end_hwm)
Step 8: Check the Collector Outputs
K Extracts
A set of files (e.g. metadata, databaselog, linkages, events) will be generated in the output_path directory.
High Water Mark File
A high water mark file is created called cognos_hwm.txt.
Refer to Collector Integration General Notes for more information.
Step 9: Push the Extracts to K
Once the files have been validated, you can push the files to the K landing directory.
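For the AWS path, the push is a plain S3 upload of each extract file under the landing prefix from Step 4. A minimal sketch; the bucket, prefix, and region names are placeholders to be replaced with the values noted in Step 4, and the commented boto3 calls require the credentials provisioned by KADA Support:

```python
# Sketch: mapping local extract files to S3 keys under the K landing prefix,
# then uploading them with boto3.
import os

def landing_keys(output_path: str, landing_prefix: str, compressed: bool = False):
    """Yield (local_path, s3_key) pairs for each extract file to push."""
    suffix = ".csv.gz" if compressed else ".csv"
    for name in sorted(os.listdir(output_path)):
        if name.endswith(suffix):
            yield os.path.join(output_path, name), f"{landing_prefix}/{name}"

# Requires boto3 and the access key/secret from KADA Support:
# import boto3
# s3 = boto3.client("s3", region_name="<storage_aws_region>")
# for local, key in landing_keys("/tmp/output", "lz/cognos/landing"):
#     s3.upload_file(local, "<storage_root_folder>", key)
```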
Example: Using Airflow to orchestrate the Extract and Push to K
The following example shows how you can orchestrate the Tableau collector using Airflow and push the files to K hosted on Azure. The code is not expected to be used as-is, but serves as a template for your own DAG.
# built-in
import os
# Installed
from airflow.operators.python_operator import PythonOperator
from airflow.models.dag import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
from airflow.utils.task_group import TaskGroup
from plugins.utils.azure_blob_storage import AzureBlobStorage
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor
# To be configured by the customer.
# Note variables may change if using a different object store.
KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""
KADA_STORAGE_ACCOUNT = ""
KADA_LANDING_PATH = "lz/tableau/landing"
KADA_EXTRACTOR_CONFIG = {
"server_address": "http://tabserver",
"username": "user",
"password": "password",
"sites": [],
"db_host": "tabserver",
"db_username": "repo_user",
"db_password": "repo_password",
"db_port": 8060,
"db_name": "workgroup",
"meta_only": False,
"retries": 5,
"dry_run": False,
"output_path": "/set/to/output/path",
"mask": True,
"mapping": {}
}
# To be implemented by the customer.
# Upload to your landing zone storage.
# Change '.csv' to '.csv.gz' if you set compress = true in the config
def upload():
    output = KADA_EXTRACTOR_CONFIG['output_path']
    for filename in os.listdir(output):
        if filename.endswith('.csv'):
            file_to_upload_path = os.path.join(output, filename)

            AzureBlobStorage.upload_file_sas_token(
                client=KADA_SAS_TOKEN,
                storage_account=KADA_STORAGE_ACCOUNT,
                container=KADA_CONTAINER,
                blob=f'{KADA_LANDING_PATH}/{filename}',
                local_path=file_to_upload_path
            )
with DAG(dag_id="taskgroup_example", start_date=days_ago(1)) as dag:

    # To be implemented by the customer.
    # Retrieve the timestamp from the prior run
    start_hwm = 'YYYY-MM-DD HH:mm:SS'
    end_hwm = 'YYYY-MM-DD HH:mm:SS'  # timestamp now

    ext = Extractor(**KADA_EXTRACTOR_CONFIG)

    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="extract tableau and upload") as extract_upload:
        task_1 = PythonOperator(
            task_id="extract_tableau",
            python_callable=ext.run,
            op_kwargs={"start_hwm": start_hwm, "end_hwm": end_hwm},
            provide_context=True,
        )

        task_2 = PythonOperator(
            task_id="upload_extracts",
            python_callable=upload,
            op_kwargs={},
            provide_context=True,
        )

        # Upload only after the extract has completed
        task_1 >> task_2

    # To be implemented by the customer.
    # Timestamp needs to be saved for next run
    task_3 = DummyOperator(task_id='save_hwm')

    end = DummyOperator(task_id='end')

    start >> extract_upload >> task_3 >> end