KDQ is KADA's data quality module — purpose-built to help data teams define, run, and monitor data quality checks across their entire data estate, with results surfaced directly inside K.
KDQ is available as a separate module for eligible customers. It is built around a simple, repeatable workflow: connect to a data source, define checks, run them on a schedule, and review results — all within a Workspace your team controls.
Define data quality rules across your sources
The outcome: Teams can connect to any supported data source and define structured, reusable data quality checks — without writing custom scripts or managing fragile infrastructure.
How KDQ delivers this:
| Feature | Role in this outcome |
|---|---|
| Workspaces | Organise your checks by domain, team, or purpose — each Workspace has its own connections, datasets, access settings, and results |
| Connections | Overview of how KDQ connects to your data sources and what sources are supported |
| Connection setup | Step-by-step guide to adding and securing a data source connection within a Workspace |
| Datasets | Define the scope of data to test — including full-table scans or custom SQL queries — and link it to a K asset |
Who this is for: Data engineers and KDQ Workspace Admins setting up a new data quality environment.
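The "structured, reusable checks" described above can be sketched in plain Python. This is an illustrative model only: the `Check` dataclass, the `run_check` helper, and the example rows are assumptions for the sake of the sketch, not KDQ's actual configuration format.

```python
# Illustrative sketch of a reusable data quality check.
# The Check/run_check shapes are assumptions, not the KDQ API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    column: str
    predicate: Callable[[object], bool]  # returns True when a value passes

def run_check(rows: list[dict], check: Check) -> dict:
    """Apply one check to every row and summarise pass/fail counts."""
    failures = [r for r in rows if not check.predicate(r[check.column])]
    return {
        "check": check.name,
        "rows": len(rows),
        "failed": len(failures),
        "passed": len(failures) == 0,
    }

# Example rule: no customer_id may be null (hypothetical data)
rows = [{"customer_id": 1}, {"customer_id": None}, {"customer_id": 3}]
not_null = Check("customer_id_not_null", "customer_id", lambda v: v is not None)
print(run_check(rows, not_null))
```

Because each check is a named, self-contained rule, the same definition can be reapplied across datasets — which is the reuse the outcome above describes.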
Run and schedule data quality checks
The outcome: Checks run automatically on a schedule or on demand — giving teams consistent, up-to-date insight into data health without manual intervention.
How KDQ delivers this:
| Feature | Role in this outcome |
|---|---|
| Jobs | Group datasets and associated tests into runnable jobs, ready for scheduling |
| Tests | Define the specific validation rules to apply to each field or table within a dataset |
| Deploy and run | Apply workspace changes and trigger test runs manually or on a set schedule |
| External scheduling | Connect KDQ to external schedulers (e.g. Airflow, CRON) for fully automated execution |
Who this is for: Data engineers and data quality managers maintaining ongoing DQ routines.
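An external-scheduler hookup might look like the following sketch, in which a cron or Airflow task triggers a job run over HTTP. The base URL, route, job id, and token below are hypothetical placeholders — consult KDQ's own documentation for the real API before wiring anything up.

```python
# Sketch: triggering a KDQ job run from an external scheduler.
# Every URL, route, id, and token here is a hypothetical placeholder.

import json
import urllib.request

KDQ_BASE_URL = "https://kdq.example.com"    # hypothetical host
JOB_ID = "nightly-warehouse-checks"         # hypothetical job id

def build_trigger_request(base_url: str, job_id: str, token: str):
    """Assemble the HTTP request an external scheduler would send."""
    return urllib.request.Request(
        url=f"{base_url}/api/jobs/{job_id}/run",  # hypothetical route
        data=json.dumps({"triggered_by": "external-scheduler"}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_trigger_request(KDQ_BASE_URL, JOB_ID, "***")
print(req.full_url, req.method)

# A cron entry would then run a script like this nightly, e.g.:
#   0 2 * * *  /usr/bin/python3 trigger_kdq_job.py
```

The same request could equally be issued from an Airflow task, which is what makes "fully automated execution" possible without KDQ's own scheduler.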
Monitor results and surface quality in K
The outcome: Results from every check run are stored, visible, and actionable — inside KDQ and on the relevant data assets in K — so quality issues are never hidden from the people who need to know.
How KDQ delivers this:
| Feature | Role in this outcome |
|---|---|
| Results | View pass/fail outcomes, download failing records, and track quality trends over time within KDQ |
| K integration | Push KDQ scores and issues into K so data consumers see quality signals on every asset's Data Profile Page |
| Access controls | Control who can view, run, and manage checks — keeping sensitive configurations protected |
Who this is for: All KDQ users — data quality managers for configuration and investigation, data consumers for quality visibility in K.
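Tracking quality trends over stored results can be illustrated with a small sketch. The `passed` field and the per-run records below are illustrative assumptions, not KDQ's actual result schema.

```python
# Sketch: computing a pass-rate trend across stored check runs.
# The result records and field names are illustrative assumptions.

def pass_rate(results: list[dict]) -> float:
    """Share of checks that passed in one run."""
    return sum(r["passed"] for r in results) / len(results)

runs = {  # run date -> per-check outcomes (made-up data)
    "2024-06-01": [{"passed": True}, {"passed": True}, {"passed": False}],
    "2024-06-02": [{"passed": True}, {"passed": True}, {"passed": True}],
}
trend = {day: round(pass_rate(results), 2) for day, results in runs.items()}
print(trend)  # {'2024-06-01': 0.67, '2024-06-02': 1.0}
```

A falling pass rate across runs is exactly the kind of signal that, per the table above, KDQ surfaces on an asset's Data Profile Page in K.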
💡 Tip: To get started with KDQ, set up a Workspace, configure a Connection, then define your first Dataset and Job. Results sync automatically to K after each run.