DBT Import
Import your DBT project artifacts (manifest.json, catalog.json, run_results.json) to automatically populate your Data Catalog with tables, columns, and lineage information.
A Python SDK with a DBT plugin is available to simplify your workflow: https://github.com/Aylesbury/aylesbury-python-sdk
Overview
The DBT Import API accepts DBT artifacts and processes them asynchronously to:
- Create or update Tables in your Data Catalog
- Create or update Columns with data types and descriptions
- Create Lineage relationships between tables based on DBT's dependency graph
Authentication
All API requests require a valid API key with the `write:dbt_imports` permission.
Authorization: Bearer YOUR_API_KEY
Or use the `X-API-Key` header:
X-API-Key: YOUR_API_KEY
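Both header styles can be built the same way in Python. A minimal sketch (the key value is a placeholder; pass either dict to any HTTP client):

```python
API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real API key

# Either header authenticates a request; use one or the other, not both.
bearer_headers = {"Authorization": f"Bearer {API_KEY}"}
api_key_headers = {"X-API-Key": API_KEY}
```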
Endpoints
Create Import
POST `/api/v1/dbt_imports`
Upload DBT artifacts and start an import job.
Parameters
- schema_id - The slug of the parent schema. You can find it in the "Schema ID" column of the Schema table in your dashboard
- manifest - path to your generated `manifest.json`
- catalog - path to your generated `catalog.json` (optional, but provides more context)
- run_results - path to your generated `run_results.json` (optional, but provides more context)
Example Request
curl -X POST https://aylesbury.io/api/v1/dbt_imports \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "schema_id=SCH-101" \
  -F "manifest=@target/manifest.json" \
  -F "catalog=@target/catalog.json" \
  -F "run_results=@target/run_results.json"
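The same request can be made from Python. A sketch using the third-party requests library; `create_dbt_import` and `artifact_paths` are hypothetical helpers, not part of the SDK:

```python
import requests  # third-party: pip install requests

BASE_URL = "https://aylesbury.io/api/v1"

def artifact_paths(manifest_path, catalog_path=None, run_results_path=None):
    """Map multipart field names to artifact paths, skipping absent optionals."""
    paths = {"manifest": manifest_path}
    if catalog_path:
        paths["catalog"] = catalog_path
    if run_results_path:
        paths["run_results"] = run_results_path
    return paths

def create_dbt_import(api_key, schema_id, manifest_path,
                      catalog_path=None, run_results_path=None):
    """Upload the artifacts and return the job id (e.g. "ABC-123")."""
    paths = artifact_paths(manifest_path, catalog_path, run_results_path)
    files = {name: open(path, "rb") for name, path in paths.items()}
    try:
        resp = requests.post(
            f"{BASE_URL}/dbt_imports",
            headers={"Authorization": f"Bearer {api_key}"},
            data={"schema_id": schema_id},
            files=files,
        )
        resp.raise_for_status()
        return resp.json()["meta"]["job_id"]
    finally:
        for f in files.values():
            f.close()
```

Passing file objects in `files` makes requests encode the request as multipart/form-data, matching the `-F` flags in the curl example.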
Example Result
{
"data": {
"id": "ABC-123",
"type": "dbt_import",
"attributes": {
"status": "pending",
"has_manifest": true,
"has_catalog": true,
"has_run_results": true,
"tables_created": 0,
"tables_updated": 0,
"columns_created": 0,
"columns_updated": 0,
"lineages_created": 0,
"lineages_updated": 0,
"started_at": null,
"completed_at": null,
"error_message": null,
"created_at": "2026-01-10T18:00:00Z",
"updated_at": "2026-01-10T18:00:00Z"
},
"relationships": {
"schema": {
"id": "SCH-101",
"name": "Analytics Warehouse"
}
}
},
"meta": {
"job_id": "ABC-123"
}
}
Get Import Status
GET `/api/v1/dbt_imports/:id`
Check the status of an import job.
Example Request
curl https://aylesbury.io/api/v1/dbt_imports/ABC-123 \
  -H "Authorization: Bearer YOUR_API_KEY"
Example Response (Completed)
{
"data": {
"id": "ABC-123",
"type": "dbt_import",
"attributes": {
"status": "completed",
"has_manifest": true,
"has_catalog": true,
"has_run_results": true,
"tables_created": 15,
"tables_updated": 3,
"columns_created": 120,
"columns_updated": 25,
"lineages_created": 42,
"lineages_updated": 0,
"started_at": "2026-01-10T18:00:01Z",
"completed_at": "2026-01-10T18:00:05Z",
"error_message": null,
"created_at": "2026-01-10T18:00:00Z",
"updated_at": "2026-01-10T18:00:05Z"
},
"relationships": {
"schema": {
"id": "SCH-101",
"name": "Analytics Warehouse"
}
}
}
}
Import Status Values
- pending - Import created, waiting to be processed
- processing - Import is being processed
- completed - Import finished successfully
- failed - Import failed (check `error_message` for details)
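Since imports are processed asynchronously, a client typically polls until the job reaches a terminal status. A sketch using the third-party requests library; `wait_for_import` and `is_terminal` are hypothetical helper names:

```python
import time
import requests  # third-party: pip install requests

# "pending" and "processing" can still change; these two cannot.
TERMINAL_STATUSES = {"completed", "failed"}

def is_terminal(status):
    """True once the import can no longer change state."""
    return status in TERMINAL_STATUSES

def wait_for_import(api_key, job_id, interval=2.0,
                    base_url="https://aylesbury.io/api/v1"):
    """Poll the import until it completes; raise if it failed."""
    while True:
        resp = requests.get(
            f"{base_url}/dbt_imports/{job_id}",
            headers={"Authorization": f"Bearer {api_key}"},
        )
        resp.raise_for_status()
        attrs = resp.json()["data"]["attributes"]
        if is_terminal(attrs["status"]):
            if attrs["status"] == "failed":
                raise RuntimeError(attrs["error_message"])
            return attrs  # includes tables_created, lineages_created, etc.
        time.sleep(interval)
```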
DBT Artifacts
manifest.json (Required)
The manifest file is the primary source of information. It contains:
- Models: Transformed tables created by DBT
- Seeds: Static data loaded from CSV files
- Snapshots: Historical snapshots of data
- Sources: External tables referenced by your models
- Parent Map: Dependencies between models (used for lineage)
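The parent map in `manifest.json` maps each node's unique_id to the unique_ids it depends on, and that dependency graph is what becomes lineage. A sketch of flattening it into edges; `lineage_edges` is a hypothetical helper:

```python
import json

def lineage_edges(parent_map):
    """Flatten DBT's parent_map ({child: [parents, ...]}) into (parent, child) pairs."""
    return [(parent, child)
            for child, parents in parent_map.items()
            for parent in parents]

# Hypothetical usage against a generated manifest:
# with open("target/manifest.json") as f:
#     manifest = json.load(f)
# for parent, child in lineage_edges(manifest["parent_map"]):
#     print(parent, "->", child)
```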
catalog.json (Optional)
The catalog file provides detailed column information:
- Column names and data types
- Column descriptions from schema.yml
If not provided, column information is extracted from the manifest (less detailed).
run_results.json (Optional)
The run results file provides execution metadata:
- Run status (success/error)
- Execution timing
- Last run timestamp
This information is stored in table metadata for reference.
What Gets Created
Tables
For each model, seed, snapshot, and source in your DBT project:
- Creates a new table if it doesn't exist in the schema
- Updates the existing table if it already exists
- Preserves existing descriptions (only updates if DBT provides one)
- Stores DBT metadata (unique_id, resource_type, package_name, path)
Columns
For each column in each table:
- Creates a new column if it doesn't exist
- Updates data type and description
- Normalizes DBT data types to catalog data types
Lineage
Based on DBT's `parent_map`:
- Creates lineage relationships between tables
- Marks lineage as `derived` type with `medium` confidence
- Tags lineage as sourced from `dbt_import`
Permissions
To use the DBT Import API, your API key needs:
- `write:dbt_imports` - Creating imports
- `read:dbt_imports` - Listing and viewing imports