{"id":456,"date":"2025-08-14T07:06:52","date_gmt":"2025-08-14T07:06:52","guid":{"rendered":"https:\/\/dataopsschool.com\/blog\/?p=456"},"modified":"2025-08-18T13:03:41","modified_gmt":"2025-08-18T13:03:41","slug":"comprehensive-tutorial-azure-data-factory-in-the-context-of-dataops","status":"publish","type":"post","link":"https:\/\/dataopsschool.com\/blog\/comprehensive-tutorial-azure-data-factory-in-the-context-of-dataops\/","title":{"rendered":"Comprehensive Tutorial: Azure Data Factory in the Context of DataOps"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Introduction &amp; Overview<\/h2>\n\n\n\n<p>Azure Data Factory (ADF) is a cloud-based data integration service that enables organizations to create, schedule, and orchestrate data pipelines for moving and transforming data at scale. In the context of DataOps, ADF plays a pivotal role in streamlining data workflows, fostering collaboration, and enabling automation across the data lifecycle. This tutorial provides an in-depth exploration of ADF, tailored for technical readers, with practical examples and best practices to leverage its capabilities within a DataOps framework.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What is Azure Data Factory?<\/h3>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/cms.cdata.com\/media\/2rabxtck\/azure-data-factory.png?format=webp&amp;v=1da844b25b95ed0\" alt=\"\" \/><\/figure>\n\n\n\n<p>Azure Data Factory is a fully managed, serverless data integration service within Microsoft Azure. It allows users to build data pipelines that ingest, prepare, transform, and publish data from various sources to destinations, both on-premises and in the cloud. 
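Under the hood, every pipeline, dataset, and connection in ADF is authored as a JSON document, which is what makes them easy to version-control and deploy. As a minimal illustrative sketch (the pipeline, dataset, and activity names here are hypothetical), a pipeline with a single Copy activity looks like this:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"name\": \"CopyBlobToSqlPipeline\",\n  \"properties\": {\n    \"activities\": [\n      {\n        \"name\": \"CopyFromBlob\",\n        \"type\": \"Copy\",\n        \"inputs\": [ { \"referenceName\": \"SourceBlobDataset\", \"type\": \"DatasetReference\" } ],\n        \"outputs\": [ { \"referenceName\": \"SinkSqlDataset\", \"type\": \"DatasetReference\" } ],\n        \"typeProperties\": {\n          \"source\": { \"type\": \"BlobSource\" },\n          \"sink\": { \"type\": \"SqlSink\" }\n        }\n      }\n    ]\n  }\n}<\/code><\/pre>\n\n\n\n<p>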
ADF supports a wide range of data stores, including Azure SQL Database, Azure Data Lake, and third-party services like Amazon S3 or Salesforce.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">History or Background<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Launched<\/strong>: Introduced by Microsoft in 2015 as part of the Azure ecosystem.<\/li>\n\n\n\n<li><strong>Evolution<\/strong>: Initially focused on ETL (Extract, Transform, Load) processes, ADF has evolved into a robust platform supporting ELT (Extract, Load, Transform), data orchestration, and integration with modern DataOps practices.<\/li>\n\n\n\n<li><strong>Version 2<\/strong>: Released in 2018, ADF v2 introduced advanced features like mapping data flows, CI\/CD integration, and enhanced monitoring, making it a cornerstone for DataOps.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Why is it Relevant in DataOps?<\/h3>\n\n\n\n<p>DataOps emphasizes automation, collaboration, and agility in data management. ADF aligns with these principles by:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Automating Data Pipelines<\/strong>: Enables repeatable, scalable workflows for data ingestion and transformation.<\/li>\n\n\n\n<li><strong>Facilitating Collaboration<\/strong>: Integrates with Git for version control, allowing data engineers and analysts to collaborate.<\/li>\n\n\n\n<li><strong>Supporting CI\/CD<\/strong>: Provides native integration with Azure DevOps for continuous integration and delivery.<\/li>\n\n\n\n<li><strong>Real-Time Insights<\/strong>: Supports near-real-time data processing, critical for agile decision-making in DataOps.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Core Concepts &amp; Terminology<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Key Terms and Definitions<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Pipeline<\/strong>: A logical grouping of activities that perform a unit of work, such as copying or transforming 
data.<\/li>\n\n\n\n<li><strong>Activity<\/strong>: A processing step within a pipeline, e.g., Copy Activity, Data Flow Activity.<\/li>\n\n\n\n<li><strong>Dataset<\/strong>: A named view of data that defines the structure and source\/destination of data used in activities.<\/li>\n\n\n\n<li><strong>Linked Service<\/strong>: Connection information to external data sources or sinks, such as database credentials or API endpoints.<\/li>\n\n\n\n<li><strong>Data Flow<\/strong>: A visual, code-free transformation tool for complex data transformations (ELT processes).<\/li>\n\n\n\n<li><strong>Trigger<\/strong>: A mechanism that starts pipeline execution on a schedule or in response to an event.<\/li>\n\n\n\n<li><strong>Integration Runtime (IR)<\/strong>: The compute infrastructure used by ADF to execute activities, supporting cloud, on-premises, or hybrid scenarios.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Term<\/th><th>Definition<\/th><\/tr><\/thead><tbody><tr><td><strong>Pipeline<\/strong><\/td><td>Logical container of data movement and transformation activities<\/td><\/tr><tr><td><strong>Activity<\/strong><\/td><td>A single step (e.g., copy, transformation, data movement) inside a pipeline<\/td><\/tr><tr><td><strong>Dataset<\/strong><\/td><td>Representation of data (input\/output) within linked services<\/td><\/tr><tr><td><strong>Linked Service<\/strong><\/td><td>Connection information to external data stores\/services (like SQL DB, Blob storage)<\/td><\/tr><tr><td><strong>Trigger<\/strong><\/td><td>Defines when\/how a pipeline runs (scheduled, event-based, tumbling window)<\/td><\/tr><tr><td><strong>Integration Runtime (IR)<\/strong><\/td><td>Compute infrastructure used by ADF for data movement and transformations<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">How It Fits into the DataOps Lifecycle<\/h3>\n\n\n\n<p>The DataOps lifecycle includes stages like data ingestion, transformation, orchestration, 
monitoring, and governance. ADF contributes as follows:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Ingestion<\/strong>: Connects to diverse data sources (e.g., SQL, NoSQL, APIs) for seamless data collection.<\/li>\n\n\n\n<li><strong>Transformation<\/strong>: Uses mapping data flows or external compute services (e.g., Databricks, Synapse) for data processing.<\/li>\n\n\n\n<li><strong>Orchestration<\/strong>: Coordinates complex workflows with dependencies and triggers.<\/li>\n\n\n\n<li><strong>Monitoring<\/strong>: Provides built-in monitoring tools to track pipeline performance and errors.<\/li>\n\n\n\n<li><strong>Governance<\/strong>: Supports integration with Azure Purview for data lineage and compliance.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Architecture &amp; How It Works<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Components and Internal Workflow<\/h3>\n\n\n\n<p>ADF operates as a serverless orchestration engine with the following components:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Pipelines<\/strong>: Define the workflow logic.<\/li>\n\n\n\n<li><strong>Activities<\/strong>: Execute tasks like copying data, running scripts, or invoking external services.<\/li>\n\n\n\n<li><strong>Datasets and Linked Services<\/strong>: Define data sources and destinations.<\/li>\n\n\n\n<li><strong>Integration Runtime<\/strong>: Facilitates data movement and activity execution.<\/li>\n\n\n\n<li><strong>Triggers<\/strong>: Automate pipeline execution based on schedules or events.<\/li>\n<\/ul>\n\n\n\n<p><strong>Workflow<\/strong>:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>A pipeline is triggered (manually, scheduled, or event-based).<\/li>\n\n\n\n<li>Activities within the pipeline execute in sequence or parallel, using the Integration Runtime.<\/li>\n\n\n\n<li>Data is moved or transformed based on dataset definitions and linked services.<\/li>\n\n\n\n<li>Monitoring tools log execution details and errors.<\/li>\n<\/ol>\n\n\n\n<h3 
class=\"wp-block-heading\">Architecture Diagram (Description)<\/h3>\n\n\n\n<p>Imagine a diagram with:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Central Node<\/strong>: ADF pipeline orchestrating the workflow.<\/li>\n\n\n\n<li><strong>Left Side<\/strong>: Data sources (e.g., SQL Server, Blob Storage, APIs) connected via Linked Services.<\/li>\n\n\n\n<li><strong>Right Side<\/strong>: Data destinations (e.g., Azure Data Lake, Synapse Analytics).<\/li>\n\n\n\n<li><strong>Middle<\/strong>: Integration Runtime facilitating data movement and transformation via Activities (Copy, Data Flow).<\/li>\n\n\n\n<li><strong>Top<\/strong>: Triggers (e.g., schedule, event) initiating the pipeline.<\/li>\n\n\n\n<li><strong>Bottom<\/strong>: Monitoring dashboard for logs and alerts.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Integration Points with CI\/CD or Cloud Tools<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Azure DevOps\/Git<\/strong>: ADF supports Git integration for version control, enabling collaborative development and CI\/CD pipelines.<\/li>\n\n\n\n<li><strong>Azure Synapse Analytics<\/strong>: Integrates for advanced analytics and ELT processes.<\/li>\n\n\n\n<li><strong>Azure Databricks<\/strong>: Executes complex transformations using Spark.<\/li>\n\n\n\n<li><strong>Azure Monitor<\/strong>: Tracks pipeline performance and alerts on failures.<\/li>\n\n\n\n<li><strong>Azure Purview<\/strong>: Ensures data governance and lineage tracking.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Installation &amp; Getting Started<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Basic Setup or Prerequisites<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Azure Subscription<\/strong>: Active Azure account (free tier available for testing).<\/li>\n\n\n\n<li><strong>Permissions<\/strong>: Contributor or Owner role for creating ADF resources.<\/li>\n\n\n\n<li><strong>Tools<\/strong>: Azure Portal, Azure CLI, or PowerShell for setup; Git for version 
control (optional).<\/li>\n\n\n\n<li><strong>Supported Browser<\/strong>: Chrome, Edge, or Firefox for ADF Studio.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Hands-On: Step-by-Step Beginner-Friendly Setup Guide<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Create an ADF Instance<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Log in to the Azure Portal.<\/li>\n\n\n\n<li>Search for \u201cData Factory\u201d and select \u201cCreate.\u201d<\/li>\n\n\n\n<li>Enter a unique name, select a subscription, resource group, and region.<\/li>\n\n\n\n<li>Click \u201cReview + Create\u201d and deploy.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Access ADF Studio<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Navigate to the created ADF instance in the Azure Portal.<\/li>\n\n\n\n<li>Click \u201cLaunch Studio\u201d to open the web-based ADF interface.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Configure a Simple Pipeline<\/strong>:\n<ul class=\"wp-block-list\">\n<li>In ADF Studio, go to the \u201cAuthor\u201d tab.<\/li>\n\n\n\n<li>Create a new pipeline: Click \u201c+\u201d &gt; \u201cPipeline.\u201d<\/li>\n\n\n\n<li>Add a <strong>Copy Activity<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Drag \u201cCopy Data\u201d to the pipeline canvas.<\/li>\n\n\n\n<li>Configure a <strong>Source<\/strong> (e.g., Azure Blob Storage); the underlying activity JSON fragment looks like this:<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<pre class=\"wp-block-code\"><code>{\n  \"name\": \"CopyFromBlob\",\n  \"type\": \"Copy\",\n  \"typeProperties\": {\n    \"source\": {\n      \"type\": \"BlobSource\",\n      \"recursive\": true\n    }\n  }\n}<\/code><\/pre>\n\n\n\n<p>Configure a <strong>Sink<\/strong> (e.g., Azure SQL Database).<\/p>\n\n\n\n<p>Validate the pipeline and save.<\/p>\n\n\n\n<p>4. 
<strong>Set Up a Trigger<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Go to the \u201cManage\u201d tab, select \u201cTriggers,\u201d and create a new schedule trigger.<\/li>\n\n\n\n<li>Link the trigger to the pipeline and set a recurrence (e.g., daily).<\/li>\n<\/ul>\n\n\n\n<p>5. <strong>Test the Pipeline<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Click \u201cDebug\u201d to test the pipeline execution.<\/li>\n\n\n\n<li>Monitor the run in the \u201cMonitor\u201d tab.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Real-World Use Cases<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Scenario 1: Retail Data Integration<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Use Case<\/strong>: A retail company ingests sales data from multiple sources (POS systems, e-commerce platforms) into Azure Data Lake for analytics.<\/li>\n\n\n\n<li><strong>Implementation<\/strong>: ADF pipelines extract data from APIs and SQL databases, transform it using Data Flows, and load it into a data lake for reporting.<\/li>\n\n\n\n<li><strong>Industry Relevance<\/strong>: Retail benefits from real-time insights for inventory and sales trends.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scenario 2: Financial Data Processing<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Use Case<\/strong>: A bank processes transactional data for fraud detection, integrating on-premises SQL Server with cloud-based analytics.<\/li>\n\n\n\n<li><strong>Implementation<\/strong>: ADF uses a Self-hosted Integration Runtime to connect to on-premises data, applies transformations in Azure Synapse, and triggers alerts via Logic Apps.<\/li>\n\n\n\n<li><strong>Industry Relevance<\/strong>: Finance requires secure, compliant data pipelines.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scenario 3: IoT Data Streaming<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Use Case<\/strong>: A manufacturing firm 
collects IoT sensor data for predictive maintenance.<\/li>\n\n\n\n<li><strong>Implementation<\/strong>: ADF ingests streaming data from Azure Event Hubs, processes it with Data Flows, and stores it in Cosmos DB for real-time analytics.<\/li>\n\n\n\n<li><strong>Industry Relevance<\/strong>: Manufacturing leverages real-time data for operational efficiency.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scenario 4: Healthcare Data Aggregation<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Use Case<\/strong>: A hospital aggregates patient data from EHR systems for research.<\/li>\n\n\n\n<li><strong>Implementation<\/strong>: ADF pipelines connect to EHR APIs, anonymize data using Data Flows, and load it into Azure SQL for analysis.<\/li>\n\n\n\n<li><strong>Industry Relevance<\/strong>: Healthcare requires secure, compliant data handling.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Benefits &amp; Limitations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Key Advantages<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Scalability<\/strong>: Serverless architecture handles large-scale data workflows.<\/li>\n\n\n\n<li><strong>Ease of Use<\/strong>: Visual interface (ADF Studio) simplifies pipeline creation.<\/li>\n\n\n\n<li><strong>Hybrid Support<\/strong>: Connects on-premises and cloud data sources seamlessly.<\/li>\n\n\n\n<li><strong>Integration<\/strong>: Natively integrates with Azure services and supports CI\/CD.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Common Challenges or Limitations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Learning Curve<\/strong>: Complex transformations may require familiarity with Data Flows or external compute services.<\/li>\n\n\n\n<li><strong>Cost<\/strong>: Pay-as-you-go pricing can escalate with high data volumes or frequent pipeline runs.<\/li>\n\n\n\n<li><strong>Limited Real-Time Processing<\/strong>: Better suited for batch processing than ultra-low-latency 
streaming.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Best Practices &amp; Recommendations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Security Tips<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use Azure Key Vault to store sensitive credentials for Linked Services.<\/li>\n\n\n\n<li>Enable Managed Identity for secure access to Azure resources.<\/li>\n\n\n\n<li>Implement network security with Virtual Network integration or private endpoints.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Performance<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Optimize pipelines by minimizing data movement and using parallel processing.<\/li>\n\n\n\n<li>Use caching in Data Flows for repetitive transformations.<\/li>\n\n\n\n<li>Monitor pipeline performance with Azure Monitor to identify bottlenecks.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Maintenance<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Regularly review pipeline logs to detect and resolve errors.<\/li>\n\n\n\n<li>Use Git integration for version control and rollback capabilities.<\/li>\n\n\n\n<li>Automate pipeline deployments with Azure DevOps.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Compliance Alignment<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Integrate with Azure Purview for data lineage and GDPR\/CCPA compliance.<\/li>\n\n\n\n<li>Use role-based access control (RBAC) to restrict access to sensitive data.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Automation Ideas<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use event-based triggers for real-time data processing (e.g., Blob storage events).<\/li>\n\n\n\n<li>Automate pipeline testing with Azure DevOps CI\/CD pipelines.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Comparison with Alternatives<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th><strong>Feature<\/strong><\/th><th><strong>Azure Data Factory<\/strong><\/th><th><strong>Apache 
NiFi<\/strong><\/th><th><strong>AWS Glue<\/strong><\/th><\/tr><\/thead><tbody><tr><td><strong>Ease of Use<\/strong><\/td><td>Visual interface, beginner-friendly<\/td><td>Visual but steeper learning curve<\/td><td>Code-heavy, less intuitive<\/td><\/tr><tr><td><strong>Cloud Integration<\/strong><\/td><td>Native Azure integration<\/td><td>Limited cloud-native support<\/td><td>Strong AWS integration<\/td><\/tr><tr><td><strong>Hybrid Support<\/strong><\/td><td>Strong (Self-hosted IR)<\/td><td>Strong (on-premises focus)<\/td><td>Limited hybrid capabilities<\/td><\/tr><tr><td><strong>Pricing<\/strong><\/td><td>Pay-as-you-go, can be costly<\/td><td>Open-source, free<\/td><td>Pay-as-you-go, moderate cost<\/td><\/tr><tr><td><strong>Real-Time Processing<\/strong><\/td><td>Moderate (better for batch)<\/td><td>Strong real-time support<\/td><td>Moderate (batch-focused)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">When to Choose Azure Data Factory<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Choose ADF for Azure-centric environments with strong integration needs.<\/li>\n\n\n\n<li>Ideal for organizations requiring hybrid data integration or visual pipeline design.<\/li>\n\n\n\n<li>Avoid ADF if ultra-low-latency streaming or open-source solutions are priorities.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>Azure Data Factory is a powerful tool for implementing DataOps, enabling organizations to automate and orchestrate data pipelines with ease. Its scalability, hybrid support, and integration with Azure services make it a go-to choice for modern data workflows. However, users must consider its cost and limitations for real-time processing. 
As DataOps evolves, ADF is likely to incorporate more AI-driven automation and real-time capabilities.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Next Steps<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Explore the Azure Data Factory Documentation.<\/li>\n\n\n\n<li>Join the Azure Data Factory community on Microsoft Q&amp;A.<\/li>\n\n\n\n<li>Experiment with hands-on labs in the Azure Portal or try the free tier.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Introduction &amp; Overview Azure Data Factory (ADF) is a cloud-based data integration service that enables organizations to create, schedule, and orchestrate data pipelines for moving and transforming&#8230; <\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-456","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/posts\/456","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/comments?post=456"}],"version-history":[{"count":2,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/posts\/456\/revisions"}],"predecessor-version":[{"id":635,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/posts\/456\/revisions\/635"}],"wp:attachment":[{"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/media?parent=456"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/categories?post=456"},{"taxonomy":"post_ta
g","embeddable":true,"href":"https:\/\/dataopsschool.com\/blog\/wp-json\/wp\/v2\/tags?post=456"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}