{"id":819,"date":"2025-09-01T14:39:49","date_gmt":"2025-09-01T14:39:49","guid":{"rendered":"https:\/\/dataopsschool.com\/blog\/?p=819"},"modified":"2025-09-01T14:53:33","modified_gmt":"2025-09-01T14:53:33","slug":"databricks-dlt-introduction","status":"publish","type":"post","link":"https:\/\/dataopsschool.com\/blog\/databricks-dlt-introduction\/","title":{"rendered":"Databricks: DLT Introduction"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"457\" src=\"https:\/\/dataopsschool.com\/blog\/wp-content\/uploads\/2025\/09\/Screenshot-2025-08-31-at-9.07.50-PM-1024x457.png\" alt=\"\" class=\"wp-image-821\" srcset=\"https:\/\/dataopsschool.com\/blog\/wp-content\/uploads\/2025\/09\/Screenshot-2025-08-31-at-9.07.50-PM-1024x457.png 1024w, https:\/\/dataopsschool.com\/blog\/wp-content\/uploads\/2025\/09\/Screenshot-2025-08-31-at-9.07.50-PM-300x134.png 300w, https:\/\/dataopsschool.com\/blog\/wp-content\/uploads\/2025\/09\/Screenshot-2025-08-31-at-9.07.50-PM-768x343.png 768w, https:\/\/dataopsschool.com\/blog\/wp-content\/uploads\/2025\/09\/Screenshot-2025-08-31-at-9.07.50-PM-1536x686.png 1536w, https:\/\/dataopsschool.com\/blog\/wp-content\/uploads\/2025\/09\/Screenshot-2025-08-31-at-9.07.50-PM-2048x914.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Introduction<\/h1>\n\n\n\n<p><strong>Goal:<\/strong> Build a Delta Live Tables (DLT) pipeline that:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Reads raw \u201corders\u201d (as streaming) and \u201ccustomer\u201d (as batch).<\/li>\n\n\n\n<li>Joins them via a view.<\/li>\n\n\n\n<li>Writes a refined Silver table.<\/li>\n\n\n\n<li>Aggregates into a Gold table by market segment.<\/li>\n<\/ul>\n\n\n\n<p><strong>What DLT gives you (why declarative matters):<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You write transformations; DLT handles orchestration, dependency graph (DAG), cluster mgmt, retries, data quality (if enabled), and error handling.<\/li>\n\n\n\n<li>Optionally runs continuously (streaming) or on a schedule (triggered).<\/li>\n\n\n\n<li>Requires the <strong>Premium<\/strong> (or higher) tier.<\/li>\n<\/ul>\n\n\n\n<p><strong>What we\u2019ll build:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Bronze:<\/strong> two inputs (orders, customers)<\/li>\n\n\n\n<li><strong>Silver:<\/strong> a joined\/cleaned table + audit column<\/li>\n\n\n\n<li><strong>Gold:<\/strong> an aggregate by market segment<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">What is Delta Live Tables (DLT)?<\/h1>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>DLT<\/strong> is a <strong>declarative<\/strong> way to define ETL\/ELT steps as tables and views.<\/li>\n\n\n\n<li>You declare datasets with Python decorators (<code>@dlt.table<\/code>, <code>@dlt.view<\/code>) or SQL (<code>CREATE LIVE TABLE<\/code>, <code>CREATE LIVE VIEW<\/code>).<\/li>\n\n\n\n<li>DLT builds the DAG, provisions a job compute cluster, runs in <strong>Triggered<\/strong> or <strong>Continuous<\/strong> mode, and stores lineage\/metrics.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">How to create a DLT Pipeline (prereqs + setup)<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">1) Create a working schema (Unity Catalog)<\/h2>\n\n\n\n<p>We\u2019ll use <code>dev.etl<\/code> as the target schema for DLT artifacts.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>-- in a SQL cell\nCREATE SCHEMA IF NOT EXISTS dev.etl;\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">2) Prepare source data (deep clone the samples)<\/h2>\n\n\n\n<p>The transcript uses TPCH samples. We\u2019ll deep clone them into <code>dev.bronze<\/code> as raw inputs.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>-- Orders source table\nCREATE TABLE IF NOT EXISTS dev.bronze.orders_raw\nDEEP CLONE samples.tpch.orders;\n\n-- Customers source table\nCREATE TABLE IF NOT EXISTS dev.bronze.customer_raw\nDEEP CLONE samples.tpch.customer;\n<\/code><\/pre>\n\n\n\n<p>Sanity-check:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>SELECT * FROM dev.bronze.orders_raw   LIMIT 5;\nSELECT * FROM dev.bronze.customer_raw LIMIT 5;\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Streaming Tables, Materialized Views, Views in DLT<\/h1>\n\n\n\n<p>DLT has two dataset types in code:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Table<\/strong> (<code>@dlt.table<\/code> \/ <code>CREATE LIVE TABLE<\/code>): persisted in target schema.\n<ul class=\"wp-block-list\">\n<li>If its <strong>input is streaming<\/strong> (e.g., <code>read_stream<\/code> or <code>dlt.read_stream<\/code>), the resulting table behaves as a streaming table (it ingests incrementally).<\/li>\n\n\n\n<li>If its <strong>input is batch<\/strong>, it behaves like a batch-built table (recomputes deterministically when refreshed).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>View<\/strong> (<code>@dlt.view<\/code> \/ <code>CREATE LIVE VIEW<\/code>): ephemeral within the pipeline; not materialized into the target schema. Great for intermediate joins\/cleanup.<\/li>\n<\/ul>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Create a DLT <strong>Streaming<\/strong> Table (Bronze: orders)<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">Python (recommended)<\/h2>\n\n\n\n<p>Create a new <strong>DLT Notebook<\/strong> (Python). Put this at the top:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import dlt\nfrom pyspark.sql import functions as F\n<\/code><\/pre>\n\n\n\n<p>Now declare the streaming bronze table for orders:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>@dlt.table(\n    name=\"orders_bronze\",  # optional; defaults to function name\n    table_properties={\"quality\": \"bronze\"},\n    comment=\"Orders (raw) as a streaming table\"\n)\ndef orders_bronze():\n    # Delta supports streaming reads; use read_stream.table for a streaming source\n    return spark.readStream.table(\"dev.bronze.orders_raw\")\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">SQL (alternative)<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>CREATE LIVE TABLE orders_bronze\nTBLPROPERTIES (quality = 'bronze')\nCOMMENT 'Orders (raw) as a streaming table'\nAS SELECT * FROM dev.bronze.orders_raw;\n<\/code><\/pre>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>In SQL, DLT understands lineage; this is still treated as streaming because the engine recognizes an upstream streaming source (Delta supports incremental reads). In Python you\u2019re explicit via <code>readStream<\/code>.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Create a DLT <strong>Batch<\/strong> Table (Bronze: customers)<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">Python<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>@dlt.table(\n    name=\"customer_bronze\",\n    table_properties={\"quality\": \"bronze\"},\n    comment=\"Customers (raw) as batch table\"\n)\ndef customer_bronze():\n    return spark.read.table(\"dev.bronze.customer_raw\")  # batch read\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">SQL<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>CREATE LIVE TABLE customer_bronze\nTBLPROPERTIES (quality = 'bronze')\nCOMMENT 'Customers (raw) as batch table'\nAS SELECT * FROM dev.bronze.customer_raw;\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Create a DLT <strong>View<\/strong> (to join)<\/h1>\n\n\n\n<p><strong>Python best practice:<\/strong> reference pipeline tables with <code>dlt.read()<\/code> (batch semantic) or <code>dlt.read_stream()<\/code> (stream semantic).<\/p>\n\n\n\n<p>We\u2019ll read customers as batch and orders as stream, then join.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>@dlt.view(\n    name=\"join_view\",\n    comment=\"Join customers to orders\"\n)\ndef join_view():\n    df_c = dlt.read(\"customer_bronze\")      # batch read from within pipeline\n    df_o = dlt.read_stream(\"orders_bronze\") # stream read from within pipeline\n    \n    # Join on customer key (c_custkey == o_custkey)\n    return (df_o.join(\n                df_c,\n                on=&#91;df_o&#91;\"o_custkey\"] == df_c&#91;\"c_custkey\"]],\n                how=\"left\"\n            ))\n<\/code><\/pre>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>SQL equivalent<\/strong> uses the <code>LIVE.<\/code> keyword:<\/p>\n<\/blockquote>\n\n\n\n<pre class=\"wp-block-code\"><code>CREATE LIVE VIEW join_view AS\nSELECT *\nFROM LIVE.orders_bronze o\nLEFT JOIN LIVE.customer_bronze c\n  ON o.o_custkey = c.c_custkey;\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">The <strong>LIVE<\/strong> keyword (SQL) vs <code>dlt.read*<\/code> (Python)<\/h1>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>SQL:<\/strong> use <code>LIVE.&lt;table_or_view_name><\/code> to reference another pipeline dataset.<\/li>\n\n\n\n<li><strong>Python:<\/strong> use <code>dlt.read(\"name\")<\/code> (batch) or <code>dlt.read_stream(\"name\")<\/code> (stream).<\/li>\n<\/ul>\n\n\n\n<p>Using <code>LIVE<\/code> in Python isn\u2019t valid\u2014use the <code>dlt<\/code> helpers instead.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">(Silver) Build a refined table with audit column<\/h1>\n\n\n\n<p>Turn the view into a persisted <strong>Silver<\/strong> table and add an <code>insert_ts<\/code> audit column:<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Python<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>@dlt.table(\n    name=\"joined_silver\",\n    table_properties={\"quality\": \"silver\"},\n    comment=\"Joined orders+customers with audit column\"\n)\ndef joined_silver():\n    return (dlt.read(\"join_view\")\n            .withColumn(\"insert_ts\", F.current_timestamp()))\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">SQL<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>CREATE LIVE TABLE joined_silver\nTBLPROPERTIES (quality = 'silver')\nCOMMENT 'Joined orders+customers with audit column'\nAS\nSELECT\n  j.*,\n  current_timestamp() AS insert_ts\nFROM LIVE.join_view j;\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">(Gold) Aggregate by market segment<\/h1>\n\n\n\n<p>Group by <code>c_mktsegment<\/code> and count orders.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>In TPCH, the order key is <code>o_orderkey<\/code>.<\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">Python<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>@dlt.table(\n    name=\"orders_aggregated_gold\",\n    table_properties={\"quality\": \"gold\"},\n    comment=\"Orders aggregated by market segment\"\n)\ndef orders_aggregated_gold():\n    return (dlt.read(\"joined_silver\")\n            .groupBy(\"c_mktsegment\")\n            .agg(F.count(\"o_orderkey\").alias(\"order_count\"))\n            .withColumn(\"insert_ts\", F.current_timestamp()))\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">SQL<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>CREATE LIVE TABLE orders_aggregated_gold\nTBLPROPERTIES (quality = 'gold')\nCOMMENT 'Orders aggregated by market segment'\nAS\nSELECT\n  c_mktsegment,\n  COUNT(o_orderkey) AS order_count,\n  current_timestamp() AS insert_ts\nFROM LIVE.joined_silver\nGROUP BY c_mktsegment;\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Orchestrate the DLT Pipeline<\/h1>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Workspace > New > Delta Live Tables pipeline<\/strong> (sometimes shown as \u201cETL Pipelines\u201d).<\/li>\n\n\n\n<li><strong>Name<\/strong>: <code>00_dlt_introduction<\/code> (anything you like).<\/li>\n\n\n\n<li><strong>Product edition<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>Core<\/strong>: basics (tables\/views\/lineage).<\/li>\n\n\n\n<li><strong>Pro<\/strong>: adds Change Data Capture (CDC).<\/li>\n\n\n\n<li><strong>Advanced<\/strong>: adds expectations\/data quality and more controls.<br>For this tutorial, <strong>Core<\/strong> is fine.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Mode<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>Triggered<\/strong>: runs when started or on schedule.<\/li>\n\n\n\n<li><strong>Continuous<\/strong>: runs like a streaming job, never-ending.<br>Choose <strong>Triggered<\/strong> for now.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Notebook<\/strong>: select the DLT notebook you just created.<\/li>\n\n\n\n<li><strong>Target<\/strong>: set <strong>Catalog<\/strong> = <code>dev<\/code>, <strong>Schema<\/strong> = <code>etl<\/code>.<\/li>\n\n\n\n<li><strong>Compute<\/strong>: Fixed size, 1\u20132 workers is fine for the demo. Driver same as worker.<\/li>\n\n\n\n<li><strong>Channel<\/strong>: <code>Current<\/code> (Preview adds newer features; stick to Current unless you need preview features).<\/li>\n\n\n\n<li>Click <strong>Create<\/strong>, then <strong>Start<\/strong>.<\/li>\n<\/ol>\n\n\n\n<p>You\u2019ll see:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Waiting for resources \u2192 Initializing \u2192 Setting up tables \u2192 Rendering graph \u2192 Running<\/strong><\/li>\n\n\n\n<li>A <strong>DAG graph<\/strong> of your pipeline, with per-dataset metrics (rows read\/written).<\/li>\n\n\n\n<li><strong>Event log<\/strong> at the bottom (great for debugging failed resolutions\/imports).<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Development vs Production mode<\/h1>\n\n\n\n<p>At the top of the pipeline page:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Development<\/strong> (default): keeps the job cluster <strong>running<\/strong> after success\/failure, which is handy for quick re-runs and debugging (faster iteration).<\/li>\n\n\n\n<li><strong>Production<\/strong>: cluster is <strong>terminated<\/strong> when a run finishes (success or failure), reducing idle costs and behaving more like scheduled jobs.<\/li>\n<\/ul>\n\n\n\n<p>Switch to <strong>Production<\/strong> when you automate\/schedule.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Verifying results<\/h2>\n\n\n\n<p>In a notebook (attached to any cluster), query the outputs:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>SELECT * FROM dev.etl.joined_silver       LIMIT 5;\nSELECT * FROM dev.etl.orders_aggregated_gold ORDER BY order_count DESC;\n<\/code><\/pre>\n\n\n\n<p>You should see a handful of market segments with counts and <code>insert_ts<\/code>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Full Python DLT notebook (copy\/paste)<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>import dlt\nfrom pyspark.sql import functions as F\n\n# BRONZE: orders as streaming table\n@dlt.table(\n    name=\"orders_bronze\",\n    table_properties={\"quality\": \"bronze\"},\n    comment=\"Orders (raw) as a streaming table\"\n)\ndef orders_bronze():\n    return spark.readStream.table(\"dev.bronze.orders_raw\")\n\n# BRONZE: customers as batch table\n@dlt.table(\n    name=\"customer_bronze\",\n    table_properties={\"quality\": \"bronze\"},\n    comment=\"Customers (raw) as batch table\"\n)\ndef customer_bronze():\n    return spark.read.table(\"dev.bronze.customer_raw\")\n\n# VIEW: join within the pipeline (not materialized)\n@dlt.view(\n    name=\"join_view\",\n    comment=\"Join customers to orders\"\n)\ndef join_view():\n    df_c = dlt.read(\"customer_bronze\")        # batch semantics\n    df_o = dlt.read_stream(\"orders_bronze\")   # streaming semantics\n    return (df_o.join(\n                df_c,\n                on=&#91;df_o&#91;\"o_custkey\"] == df_c&#91;\"c_custkey\"]],\n                how=\"left\"\n            ))\n\n# SILVER: persisted joined table w\/ audit column\n@dlt.table(\n    name=\"joined_silver\",\n    table_properties={\"quality\": \"silver\"},\n    comment=\"Joined orders+customers with audit column\"\n)\ndef joined_silver():\n    return dlt.read(\"join_view\").withColumn(\"insert_ts\", F.current_timestamp())\n\n# GOLD: aggregated table by market segment\n@dlt.table(\n    name=\"orders_aggregated_gold\",\n    table_properties={\"quality\": \"gold\"},\n    comment=\"Orders aggregated by market segment\"\n)\ndef orders_aggregated_gold():\n    return (dlt.read(\"joined_silver\")\n            .groupBy(\"c_mktsegment\")\n            .agg(F.count(\"o_orderkey\").alias(\"order_count\"))\n            .withColumn(\"insert_ts\", F.current_timestamp()))\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Example Github Repo<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>https:\/\/github.com\/databricks\/delta-live-tables-notebooks<\/li>\n\n\n\n<li>https:\/\/github.com\/a0x8o\/delta-live-tables-hands-on-workshop<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Goal: Build a Delta Live Tables (DLT) pipeline that: What DLT gives you (why declarative matters): What we\u2019ll build: What is Delta Live Tables (DLT)? How&#8230; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-819","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/posts\/819","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/comments?post=819"}],"version-history":[{"count":3,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/posts\/819\/revisions"}],"predecessor-version":[{"id":825,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/posts\/819\/revisions\/825"}],"wp:attachment":[{"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/media?parent=819"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/categories?post=819"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/tags?post=819"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}