
Tableau Cloud (Collector method) - v3.4.0

About Collectors

Collectors are extractors that are developed and managed by you (a customer of K).

KADA provides Python libraries that customers can use to quickly deploy a Collector.

Why you should use a Collector

There are several reasons why you may use a collector instead of the direct connect extractor:

  1. You are using the KADA SaaS offering and it cannot connect to your sources due to firewall restrictions

  2. You want to push metadata to KADA rather than allow it to pull data, for security reasons

  3. You want to inspect the metadata before pushing it to K

Using a collector requires you to manage:

  1. Deploying and orchestrating the extract code

  2. Managing a high water mark so the extract pulls only the latest metadata

  3. Storing and pushing the extracts to your K instance.


Pre-requisites

Collector Server Minimum Requirements

For the collector to operate effectively, it will need to be deployed on a server with the below minimum specifications:

  • CPU: 2 vCPU

  • Memory: 8GB

  • Storage: 30GB (depends on historical data extracted)

  • OS: a Unix distro (e.g. RHEL) is preferred, but Windows Server also works.

  • Python 3.10.x or later

  • Access to K landing directory

Tableau Cloud Requirements

  • Tableau API access

    • An API user with a PAT (Personal Access Token). See Personal Access Tokens

    • The user needs the Site Administrator Creator or Server/Site Administrator role.

As of 3.2.0 the collector supports PAT (Personal Access Token) authentication and Tableau Cloud.
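
If you want to verify the PAT before configuring the collector, the following optional sketch uses the open-source tableauserverclient package (not part of the KADA collector); the server URL, site name and token secret are placeholders:

PY
# Optional PAT sanity check using the open-source tableauserverclient package
# (pip install tableauserverclient). This is not part of the KADA collector;
# the server URL, site name and secret below are placeholders.
import tableauserverclient as TSC

tableau_auth = TSC.PersonalAccessTokenAuth(
    "Kada",                  # the PAT name created in Step 1
    "<token-secret>",        # the PAT secret
    site_id="<site-name>",   # the site's content URL name
)
server = TSC.Server("https://<pod>.online.tableau.com", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    # If sign-in succeeds, the PAT is usable by the collector.
    print("Signed in OK; site id:", server.site_id)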


Step 1) Setup KADA user configuration in Tableau Cloud

This step is performed by the Tableau Cloud Admin with Site Administrator Creator role.

  • Login to Tableau Cloud

  • In the top right click on your user icon and click My Account Settings

  • Scroll to Personal Access Tokens and in the Token Name field enter 'Kada'

  • Click Create New Token

  • Scroll to the bottom and set the Language of the KADA User to 'English (United Kingdom)'

  • Click Save Changes


Step 2) Setup K workbook to extract event data from Tableau Cloud

  • Clone the Admin Insights > Admin Insights Starter Workbook

  • Save the newly cloned workbook as KADA in a new project called KADA.

  • Create a new sheet in the workbook called kada_ts_events

    • Add the following fields to the Rows shelf of the kada_ts_events sheet

      • Event Date

      • Event Name

      • User Name

      • Item Id

      • Item Type

  • Create another new sheet in the workbook called kada_site_content

    • Add the following fields to the Rows shelf of the kada_site_content sheet

      • Item Id

      • Item Type

      • Item LUID

The names of the fields need to match exactly. Remember to check for any accidental spaces.

Refer to the below example of a kada_ts_events sheet


Step 3: Create the Source in K

Create a Tableau source in K

  • Go to Settings, select Sources and click Add Source

  • Select “Load from File” option

  • Give the source a Name - e.g. Tableau Production

  • Add the Host name for your Tableau Cloud instance

  • Click Finish Setup


Step 4: Getting Access to the Source Landing Directory

Collector Method

When using a Collector you will push metadata to a K landing directory.

To find your landing directory you will need to:

  1. Go to Platform Settings → Settings and note down the value of the following setting:

    1. If using Azure: storage_azure_storage_account

    2. If using AWS:

      1. storage_root_folder - the AWS s3 bucket

      2. storage_aws_region - the region where the AWS s3 bucket is hosted

  2. Go to Sources - Edit the Source you have configured. Note down the landing directory in the About this Source section

To connect to the landing directory you will need:

  • If using Azure: a SAS token to push data to the landing directory. Request this from KADA Support (support@kada.ai)

  • If using AWS (a quick connectivity check is sketched below this list):

    • an Access key and Secret. Request this from KADA Support (support@kada.ai)

    • OR provide your IAM role to KADA Support to provision access.
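
Once you have AWS credentials, a minimal connectivity check (a hedged sketch using boto3; the bucket and region values come from the platform settings noted above, and the landing path is a placeholder) might look like:

PY
# Minimal AWS landing-directory connectivity check using boto3 (pip install boto3).
# The bucket and region come from storage_root_folder and storage_aws_region above;
# the access key/secret come from KADA Support. The landing prefix is a placeholder.
import boto3

s3 = boto3.client(
    "s3",
    region_name="<storage_aws_region>",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret>",
)

# Write a small test object under your source's landing directory.
s3.put_object(
    Bucket="<storage_root_folder>",
    Key="<landing-directory>/connectivity_check.txt",
    Body=b"kada collector connectivity check",
)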


Step 5: Install the Collector

It is recommended to use a Python environment manager such as pyenv or pipenv if you are not intending to install this package at the system level.

Some Python packages also have dependencies on OS-level packages, so you may be required to install additional OS packages if the install below fails.

You can download the latest Core Library and whl via Platform Settings → Sources → Download Collectors

Run the following command to install the collector

CODE
pip install kada_collectors_extractors_<version>-none-any.whl

You will also need to install the common library kada_collectors_lib for this collector to function properly.

CODE
pip install kada_collectors_lib-<version>-none-any.whl
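
To confirm both packages installed correctly, a quick import check can be run; these are the same imports used by the wrapper script in Step 7:

PY
# Quick post-install sanity check; these imports are used by the wrapper script below.
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

print("KADA Tableau collector libraries imported successfully")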

Step 6: Configure the Collector

The collector requires a set of parameters to connect to and extract metadata from Tableau.

Each field is listed below with its type, a description, and an example value.

server_address (string)
Tableau server address, including the protocol (http:// or https://).
Example: https://10.1.19.15

username (string)
Username to log into the Tableau API. Can be null if use_cloud or use_token is True.
Example: "tabadmin"

password (string)
Password to log into the Tableau API. Can be null if use_cloud or use_token is True.

sites (list<string>)
List of specific sites that you wish to extract; if left as [] it will extract all sites. This should be a single-value list if use_cloud is True, as Tableau Cloud only supports a single site.
Example: []

ssl_verification (boolean)
Whether SSL verification should be used for API requests.
Example: true

db_host (string)
Generally the same as the server address, without the http/https prefix. Can be null if use_cloud or meta_only is True.
Example: "10.1.19.15"

db_username (string)
By default the Tableau database user is readonly; you should not need to change this unless you actively manage the database. Can be null if use_cloud or meta_only is True.
Example: "readonly"

db_password (string)
Password for the database user. Can be null if use_cloud or meta_only is True.

db_port (integer)
Default is 8060 unless your Tableau is configured differently. Can be null if use_cloud or meta_only is True.
Example: 8060

db_name (string)
The default database to use is workgroup. Can be null if use_cloud or meta_only is True.
Example: "workgroup"

meta_only (boolean)
Set this to true if for some reason you want to extract metadata only; otherwise leave it as false.
Example: false

retries (integer)
Number of times the extractor should retry the API in case of intermittent failures; the default is 5.
Example: 5

dry_run (boolean)
A dry run produces the mapping.json file, which is used to populate the mapping field below. It is recommended you do a dry run first to see which databases are available to map.
Example: true

output_path (string)
Absolute path to the output location where files are to be written.
Example: "/tmp/output"

mask (boolean)
Whether to enable masking.
Example: true

mapping (json)
This should be populated with the mapping.json output, where each data source name mentioned is mapped to an onboarded K host. In the example below, analytics.adw is the onboarded database in K.

CODE
{
"somehost.adw": "analytics.adw"
}

compress (boolean)
Whether to gzip the output.
Example: true

use_token (boolean)
Use a PAT for authentication. This is forced to True if use_cloud is True, as Tableau Cloud only supports PAT authentication.
Example: false

use_cloud (boolean)
Set to true if connecting to Tableau Cloud; set to false if connecting to Tableau Server.
Example: false

token_name (string)
The PAT name; must be specified if use_token or use_cloud is True.
Example: My_token

token_secret (string)
The PAT secret; must be specified if use_token or use_cloud is True.
Example: somehashgarble_123123

site_content_view_name (string)
The view name for the Site Content data tab in the Kada Workbook, used for extracting event data. Used only when meta_only is False and use_cloud is True.
Example: kada_site_content

ts_events_view_name (string)
The view name for the TS Events data tab in the Kada Workbook, used for extracting event data. Used only when meta_only is False and use_cloud is True.
Example: kada_ts_events

timeout (integer)
The timeout value in seconds for Tableau API calls; the recommended default is 120. If you are using cloud with meta_only as false, tune this timeout to the amount of activity information you have in Tableau.
Example: 120

timestamp_format (string)
The timestamp format used by the TS Events view; applicable only when use_cloud is True. The default value is %d/%m/%Y %H:%M:%S.
Example: %d/%m/%Y %H:%M:%S

fields_per_page (integer)
Number of field objects to be returned per page via the Tableau Metadata API; the default is 1000. If you hit a 20k limit error, reduce this value.
Example: 1000

sheets_per_page (integer)
As above, but for sheets and dashboards.
Example: 100

These parameters can be passed directly into the run, or you can pass them in via a JSON file. The following example is referenced in the example run code below.

kada_tableau_extractor_config.json

JSON
{
    "server_address": "",
    "username": "",
    "password": "",
    "sites": [],
    "ssl_verification": true,
    "db_host": "",
    "db_username": "readonly",
    "db_password": "",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": false,
    "retries": 5,
    "dry_run": false,
    "output_path": "/tmp/output",
    "mask": true,
    "mapping": {},
    "compress": true,
    "use_token": false,
    "use_cloud": false,
    "token_name": "",
    "token_secret": "",
    "site_content_view_name": "kada_site_content",
    "ts_events_view_name": "kada_ts_events",
    "timeout": 120,
    "timestamp_format": "%d/%m/%Y %H:%M:%S",
    "fields_per_page": "1000",
    "sheets_per_page": "100"
}
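
Because dry_run produces the mapping.json template used to populate the mapping field, a possible first-pass workflow (a sketch only; the mapping.json location depends on your output_path, and the file paths below are assumptions) is to run once with dry_run set to true, then merge the template into the config:

PY
# Sketch: after a dry run, merge the generated mapping.json template into the
# config for the real extract. File locations below are assumptions.
import json

with open("/tmp/output/mapping.json") as f:   # produced by the dry run in output_path
    mapping = json.load(f)

with open("kada_tableau_extractor_config.json") as f:
    config = json.load(f)

config["mapping"] = mapping   # edit the values to match your onboarded K hosts
config["dry_run"] = False     # ready for a full extract

with open("kada_tableau_extractor_config.json", "w") as f:
    json.dump(config, f, indent=4)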

Step 7: Run the Collector

The following code is an example of how to run the extractor. You may need to uplift this code to meet any code standards at your organisation.

This can be executed in any python environment where the whl has been installed.

This is the wrapper script: kada_tableau_extractor.py

PY
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

get_generic_logger('root') # Set to use the root logger, you can change the context accordingly or define your own logger

_type = 'tableau'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA Tableau Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename, help='Location of the configuration json, default is the config json in the same directory as the script.')
parser.add_argument('--name', '-n', dest='name', default=_type, help='Name of the collector instance.')
args = parser.parse_args()

start_hwm, end_hwm = get_hwm(args.name)

ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

publish_hwm(_type, end_hwm)
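
You can then run the wrapper from the command line; the --config and --name flags come from the argparse definition above, and both have defaults:

CODE
python kada_tableau_extractor.py --config /path/to/kada_tableau_extractor_config.json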

Advanced options:

If you wish to maintain your own high water mark files elsewhere, you can use the above section's script as a guide on how to call the extractor. The configuration file is simply the keyword arguments in JSON format. Refer to this document for more information: Collector Integration General Notes | Storing-HWM-in-another-location

If you are handling external arguments of the runner yourself, you'll need to consider additional items for the run method. Refer to this document for more information: Collector Integration General Notes | The-run-method

CODE
from kada_collectors.extractors.tableau import Extractor

kwargs = {my args} # However you choose to construct your args
hwm_kwargs = {"start_hwm": start_hwm, "end_hwm": end_hwm} # The hwm values, however you store them

ext = Extractor(**kwargs)
ext.run(**hwm_kwargs)
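
For example, a minimal sketch of maintaining the high water mark yourself (the HWM file location and seed timestamp are placeholders; the timestamp format matches the Airflow example below):

PY
# Sketch: manage the high water mark outside kada_collectors_lib.
# The HWM file location and the seed timestamp are placeholders.
import datetime
import pathlib

from kada_collectors.extractors.utils import load_config
from kada_collectors.extractors.tableau import Extractor

HWM_FILE = pathlib.Path("/secure/location/tableau_hwm.txt")

# Read the previous run's HWM, or seed one for the first run.
start_hwm = HWM_FILE.read_text().strip() if HWM_FILE.exists() else "2020-01-01 00:00:00"
end_hwm = datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")  # timestamp now

ext = Extractor(**load_config("kada_tableau_extractor_config.json"))
ext.run(start_hwm=start_hwm, end_hwm=end_hwm)

HWM_FILE.write_text(end_hwm)  # persist for the next run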

CODE
class Extractor(server_address: str = None, username: str = None, password: str = None, \
    sites: list = [], ssl_verification: bool = True, db_host: str = None, db_password: str = None, \
    db_port: int = 8060, db_name: str = 'workgroup', db_username: str = 'readonly', \
    meta_only: bool = False, events_only: bool = False, retries: int = 5, \
    dry_run: bool = False, output_path: str = './output', \
    mask: bool = False, mapping: dict = {}, compress: bool = False, \
    use_cloud: bool = False, use_token: bool = False, token_name: str = None, \
    token_secret: str = None, site_content_view_name: str = 'kada_site_content', \
    ts_events_view_name: str = 'kada_ts_events', timeout: int = 120, timestamp_format: str = '%d/%m/%Y %H:%M:%S', \
    fields_per_page: int = 1000, sheets_per_page: int = 100) -> None

server_address: server address
username: username to sign into server
password: password to sign into server
sites: list of sites to extract.
ssl_verification: Should ssl verification be enabled for API requests.
db_host: Tableau database address
db_password: Tableau database password
db_port: Tableau database port
db_name: Tableau database name
db_username: Tableau database username
meta_only: extract metadata only
events_only: extract events only
retries: Number of attempts if an API fails on NonXMLResponse Error, default is 5
dry_run: If specified the extractor will do a dry run to produce a template mapping.
output_path: full or relative path to where the outputs should go
mask: To mask the META/DATABASE_LOG files or not
compress: To gzip output files or not
use_cloud: Are you using Tableau Cloud? Note cloud will force token-based authentication
use_token: Are you using a token for authentication? Token authentication is also available for Tableau Server
token_name: The name of the PAT used for token-based authentication
token_secret: The secret of the PAT used for token-based authentication
site_content_view_name: The view name for Content View tab in the Kada workbook for events, defaults to kada_site_content
ts_events_view_name: The view name for TS Events View tab in the Kada workbook for events, defaults to kada_ts_events
timeout: The timeout value in seconds for Tableau API calls
timestamp_format: The format of the event timestamp for the TS Events View, defaults to %d/%m/%Y %H:%M:%S; refer to Python datetime formatting
fields_per_page: Number of field objects per page to be returned via the Tableau metadata api
sheets_per_page: Number of sheet and dashboard objects per page to be returned via the Tableau metadata api


Step 8: Check the Collector Outputs

K Extracts

A set of files (e.g. metadata, databaselog, linkages, events, etc.) will be generated. These files will appear in the output_path directory you set in the configuration.

High Water Mark File

A high water mark file called tableau_hwm.txt is created in the same directory that the collector is executed from. This file is only produced if you call the publish_hwm method.


Step 9: Push the Extracts to K

Once the files have been validated, you can push the files to the K landing directory.

You can use Azure Storage Explorer if you want to do this manually at first. You can also push the files using Python, as shown in the sketch below and the Airflow example that follows.
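
If you prefer to script the Azure push without Airflow, a minimal sketch using the public azure-storage-blob SDK follows; the storage account, container and landing path are placeholders, and the SAS token comes from KADA Support:

PY
# Minimal Azure upload sketch using the public azure-storage-blob package
# (pip install azure-storage-blob). All values below are placeholders.
import os

from azure.storage.blob import ContainerClient

SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")          # provided by KADA Support
ACCOUNT_URL = "https://<storage_account>.blob.core.windows.net"
CONTAINER = "<container>"
LANDING_PATH = "lz/tableau/landing"              # your source's landing directory
OUTPUT_PATH = "/tmp/output"                      # the collector's output_path

container = ContainerClient(ACCOUNT_URL, CONTAINER, credential=SAS_TOKEN)

# Upload every extract file; filter by extension here if you only want a subset.
for filename in os.listdir(OUTPUT_PATH):
    with open(os.path.join(OUTPUT_PATH, filename), "rb") as data:
        container.upload_blob(f"{LANDING_PATH}/{filename}", data, overwrite=True)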


Example: Using Airflow to orchestrate the Extract and Push to K

PY
# built-in
import os

# Installed
from airflow.operators.python_operator import PythonOperator
from airflow.models.dag import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
from airflow.utils.task_group import TaskGroup

from plugins.utils.azure_blob_storage import AzureBlobStorage

from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

# To be configured by the customer.
# Note variables may change if using a different object store.
KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""
KADA_STORAGE_ACCOUNT = ""
KADA_LANDING_PATH = "lz/tableau/landing"
KADA_EXTRACTOR_CONFIG = {
    "server_address": "http://tabserver",
    "username": "user",
    "password": "password",
    "sites": [],
    "db_host": "tabserver",
    "db_username": "repo_user",
    "db_password": "repo_password",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": False,
    "retries": 5,
    "dry_run": False,
    "output_path": "/set/to/output/path",
    "mask": True,
    "mapping": {}
}

# To be implemented by the customer. 
# Upload to your landing zone storage.
def upload():
  output = KADA_EXTRACTOR_CONFIG['output_path']
  for filename in os.listdir(output):
      if filename.endswith('.csv'):
        file_to_upload_path = os.path.join(output, filename)

        AzureBlobStorage.upload_file_sas_token(
            client=KADA_SAS_TOKEN,
            storage_account=KADA_STORAGE_ACCOUNT,
            container=KADA_CONTAINER, 
            blob=f'{KADA_LANDING_PATH}/{filename}', 
            local_path=file_to_upload_path
        )

with DAG(dag_id="taskgroup_example", start_date=days_ago(1)) as dag:
  
    # To be implemented by the customer.
    # Retrieve the timestamp from the prior run
    start_hwm = 'YYYY-MM-DD HH:mm:SS'
    end_hwm = 'YYYY-MM-DD HH:mm:SS' # timestamp now
    
    ext = Extractor(**KADA_EXTRACTOR_CONFIG)
    
    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="extract tableau and upload") as extract_upload:
        task_1 = PythonOperator(
            task_id="extract_tableau",
            python_callable=ext.run, 
            op_kwargs={"start_hwm": start_hwm, "end_hwm": end_hwm},
            provide_context=True,
        )
        
        task_2 = PythonOperator(
            task_id="upload_extracts",
            python_callable=upload, 
            op_kwargs={},
            provide_context=True,
        )

        # To be implemented by the customer. 
        # Timestamp needs to be saved for next run
        task_3 = DummyOperator(task_id='save_hwm')

        # Chain the tasks so the upload runs after the extract, and the hwm save runs last
        task_1 >> task_2 >> task_3

    end = DummyOperator(task_id='end')

    start >> extract_upload >> end
