How to create Azure Container Apps Jobs with Bicep and Azure CLI


This article shows how to create Azure Container Apps Jobs via Bicep or Azure CLI, start jobs using Azure CLI commands, and monitor jobs using Azure Monitor Log Analytics. You can find the code and Visio diagrams in the companion GitHub repository.

 

Prerequisites

 

Architecture

The following diagram shows the architecture and network topology of the sample:

 

architecture.png

 

This sample provides two sets of Bicep modules, one to deploy the infrastructure and one to deploy the jobs. You can use the Bicep modules in the bicep/infra folder to deploy the infrastructure used by the sample, including the Azure Container Apps Environment and Azure Container Registry (ACR), but not the Azure Container Apps Jobs.

 

 

You can use the Bicep modules in the bicep/jobs folder to deploy the Azure Container Apps Jobs using the Docker container images stored in the Azure Container Registry deployed in the previous step.

 

  • Microsoft.App/jobs: this sample deploys the following jobs:

    • Sender job: this is a manually triggered job that sends a configurable number of messages to the parameters queue in the Azure Service Bus namespace. The payload of each message contains a random positive integer within a configurable range.
    • Processor job: this is a scheduled job that reads the messages from the parameters queue in the Azure Service Bus namespace, calculates the Fibonacci number for each received parameter, and writes the result to the results queue in the same namespace.
    • Receiver job: this is an event-driven job whose execution is triggered by each message arriving in the results queue. The job reads the result and logs it.

 

The following diagram shows the message flow of the sample:

 

messageflow.png

Here are the steps of the message flow:

 

  1. A job administrator starts the sender job using the az containerapp job start Azure CLI command.
  2. The sender job writes a list of messages to the parameters queue, each containing a random positive integer within a pre-defined range.
  3. The processor job reads the messages from the parameters queue.
  4. The processor job calculates the Fibonacci number for each received integer and writes the result to the results queue in the same namespace.
  5. The receiver job reads the messages from the results queue.
  6. The receiver job writes the results to the standard output that is logged to the Azure Log Analytics workspace.

 

Azure Container Apps Jobs

Azure Container Apps Jobs allow you to run containerized tasks that execute for a finite duration and then complete. You can use jobs to run tasks such as data processing, machine learning, or any scenario where on-demand processing is required. For more information, see the following tutorials:

 

 

Azure Container apps and jobs run in the same environment, allowing them to share capabilities such as networking and logging.

Compared to Kubernetes Jobs, Azure Container Apps Jobs provide a more straightforward and serverless experience. While both Azure Container Apps Jobs and Kubernetes Jobs are used to run containerized tasks for a finite duration, there are some critical differences between them:

 

  • Environment: Azure Container Apps Jobs and Kubernetes Jobs run in different environments. Azure Container Apps Jobs run within the Azure Container Apps service, which is powered by Kubernetes and other open-source technologies like Dapr (Distributed Application Runtime) and KEDA (Kubernetes-based Event Driven Autoscaling). On the other hand, Kubernetes Jobs run within a Kubernetes cluster managed by Azure Kubernetes Service (AKS).
  • Trigger Types: Azure Container Apps Jobs support three trigger types: manual, schedule, and event-driven. Manual jobs are triggered on-demand, scheduled jobs are triggered at specific times and can run repeatedly, and event-driven jobs are triggered by events such as a message arriving in a queue. In comparison, Kubernetes Jobs are primarily triggered manually or based on time schedules, while Kubernetes CronJobs are meant for performing regularly scheduled actions such as backups, report generation, and so on. They run a job periodically on a given schedule, written in Cron format.
  • Resource Management: Azure Container Apps Jobs are managed within the Azure Container Apps service, providing a fully managed experience by Azure. In contrast, Kubernetes Jobs within Azure Kubernetes Service (AKS) provide more control and flexibility, as they allow direct access to the Kubernetes API and support running any Kubernetes workload. The cluster configurations and operations are within the user’s control and responsibility with AKS.
  • Scaling and Load Balancing: Azure Container Apps Jobs support event-driven scaling and automatic scaling of HTTP traffic, including the ability to scale to zero when there is no traffic. On the other hand, Kubernetes Jobs in AKS require manual scaling or metric-based scaling and do not support scaling to zero natively. However, Kubernetes-based Event Driven Autoscaling (KEDA) can be used with AKS to achieve similar scaling capabilities.

In summary, Azure Container Apps Jobs provide a more straightforward and serverless experience for running containerized tasks, with built-in support for event-driven scaling and the ability to scale to zero. On the other hand, Kubernetes Jobs within AKS offer more control and flexibility, allowing direct access to the Kubernetes API and supporting any Kubernetes workload.

 

Job trigger types

A job’s trigger type determines how the job is started. The following trigger types are available:

 

  • Manual: Manual jobs are triggered on demand.
  • Schedule: Scheduled jobs are triggered at specific times and can run repeatedly.
  • Event: Event-driven jobs are triggered by events such as a message arriving in a queue.

 

Manual jobs

Manual jobs are triggered on-demand using the Azure CLI or a request to the Azure Resource Manager API.

Examples of manual jobs include:

 

  • One-time processing tasks such as migrating data from one system to another.
  • An e-commerce site running as a container app starts a job execution to process inventory when an order is placed.

 

To create a manual job, use the trigger type Manual. To create a manual job using the Azure CLI, use the az containerapp job create command. The following example creates a manual job named my-job in a resource group named my-resource-group and a Container Apps environment named my-environment:
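The command below is a minimal sketch of such an invocation; the container image and the sizing values are placeholders rather than values taken from this sample:

```shell
# Create a manually triggered job (image and sizing values are placeholders).
az containerapp job create \
    --name "my-job" \
    --resource-group "my-resource-group" \
    --environment "my-environment" \
    --trigger-type "Manual" \
    --replica-timeout 1800 \
    --replica-retry-limit 1 \
    --replica-completion-count 1 \
    --parallelism 1 \
    --image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \
    --cpu "0.25" \
    --memory "0.5Gi"
```

Once created, the job does nothing until you start an execution on demand, for example with az containerapp job start --name my-job --resource-group my-resource-group.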

 

 

Scheduled jobs

Azure Container Apps Jobs use cron expressions to define schedules. The service supports the standard cron expression format with five fields for minute, hour, day of month, month, and day of week. The following are examples of cron expressions:

 

Expression     Description
0 */2 * * *    Runs every two hours.
0 0 * * *      Runs every day at midnight.
0 0 * * 0      Runs every Sunday at midnight.
0 0 1 * *      Runs on the first day of every month at midnight.

 

Cron expressions in scheduled jobs are evaluated in Coordinated Universal Time (UTC). To create a scheduled job, use the trigger type Schedule. To create a scheduled job using the Azure CLI, use the az containerapp job create command. The following example creates a scheduled job named my-job in a resource group named my-resource-group and a Container Apps environment named my-environment:
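A minimal sketch of the command follows; the cron expression schedules the job every day at midnight UTC, and the image and sizing values are placeholders:

```shell
# Create a scheduled job that runs every day at midnight UTC
# (image and sizing values are placeholders).
az containerapp job create \
    --name "my-job" \
    --resource-group "my-resource-group" \
    --environment "my-environment" \
    --trigger-type "Schedule" \
    --cron-expression "0 0 * * *" \
    --replica-timeout 1800 \
    --replica-retry-limit 1 \
    --replica-completion-count 1 \
    --parallelism 1 \
    --image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \
    --cpu "0.25" \
    --memory "0.5Gi"
```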

 

 

Event-driven jobs

Event-driven jobs are triggered by events from supported custom scalers. Examples of event-driven jobs include:

 

  • A job that runs when a new message is added to a queue, such as Azure Service Bus, Azure Event Hubs, Apache Kafka, or RabbitMQ.
  • A self-hosted GitHub Actions runner or Azure DevOps agent that runs when a new job is queued in a workflow or pipeline.

 

Container apps and event-driven jobs use KEDA scalers. They both evaluate scaling rules on a polling interval to measure the volume of events for an event source, but the way they use the results is different.

In an app, each replica continuously processes events and a scaling rule determines the number of replicas to run to meet demand. In event-driven jobs, each job typically processes a single event, and a scaling rule determines the number of jobs to run.

Use jobs when each event requires a new instance of the container with dedicated resources or needs to run for a long time. Event-driven jobs are conceptually similar to KEDA scaled jobs.

To create an event-driven job, use the trigger type Event. To create an event-driven job using the Azure CLI, use the az containerapp job create command. The following example creates an event-driven job named my-job in a resource group named my-resource-group and a Container Apps environment named my-environment:
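The sketch below shows an event-driven job scaled by an Azure Service Bus queue; the scale rule metadata follows the KEDA azure-servicebus scaler, and all names, the image, and the connection-string secret are placeholders, not values from this sample:

```shell
# Create an event-driven job scaled by messages in an Azure Service Bus queue.
# Names, image, and the connection-string secret below are placeholders.
az containerapp job create \
    --name "my-job" \
    --resource-group "my-resource-group" \
    --environment "my-environment" \
    --trigger-type "Event" \
    --replica-timeout 1800 \
    --replica-retry-limit 1 \
    --replica-completion-count 1 \
    --parallelism 1 \
    --image "myregistry.azurecr.io/my-event-driven-job:latest" \
    --cpu "0.25" \
    --memory "0.5Gi" \
    --min-executions 0 \
    --max-executions 10 \
    --polling-interval 60 \
    --secrets "connection-string-secret=<SERVICE_BUS_CONNECTION_STRING>" \
    --scale-rule-name "servicebus" \
    --scale-rule-type "azure-servicebus" \
    --scale-rule-metadata "queueName=results" "messageCount=1" \
    --scale-rule-auth "connection=connection-string-secret"
```

With min-executions set to 0, no job executions run while the queue is empty; the polling interval controls how often the scaler checks the queue length.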

 

 

Deploy the Infrastructure

You can deploy the infrastructure and network topology with the Bicep modules in the bicep/infra folder by running the deploy.sh Bash script in the same folder. Before deploying the Bicep modules, specify a value for the following parameters in the deploy.sh Bash script and the main.parameters.json parameters file.

 

  • prefix: specifies a prefix for all the Azure resources.
  • location: specifies the location for the resource group and Azure resources.

 

The following table contains the code from the container-apps-environment.bicep Bicep module used to deploy the Azure Container Apps Environment. For more information, see the documentation of the Microsoft.App/managedEnvironments resource type:

 

 

The following table contains the code from the managed-identity.bicep Bicep module used to deploy the user-assigned managed identity used by the Azure Container Apps Jobs to pull container images from the Azure Container Registry and to connect to the Azure Service Bus namespace. You can use a system-assigned or user-assigned managed identity from Azure Active Directory (Azure AD) to let Azure Container Apps Jobs access any Azure AD-protected resource. For more information, see Managed identities in Azure Container Apps. You can pull container images from private repositories in an Azure Container Registry using a system-assigned or user-assigned managed identity for authentication to avoid the use of administrative credentials. For more information, see Azure Container Apps image pull with managed identity. This user-assigned managed identity is assigned the Azure Service Bus Data Owner role on the Azure Service Bus namespace and the AcrPull role on the Azure Container Registry (ACR).

 

 

Jobs

You can find the Python code for the sender, processor, and receiver jobs under the src folder. The three jobs use the following libraries:

 

  • azure-servicebus: you can use this Service Bus client library to let Python applications communicate with Azure Service Bus and implement asynchronous messaging patterns.
  • azure-identity: The Azure Identity library provides Azure Active Directory (Azure AD) token authentication support across the Azure SDK. It provides a set of TokenCredential implementations, which can be used to construct Azure SDK clients that support Azure AD token authentication.
  • python-dotenv: Python-dotenv reads key-value pairs from a .env file and can set them as environment variables. It helps in the development of applications following the 12-factor principles.

The requirements.txt file under the scripts folder contains the list of packages used by the jobs, which you can restore using the following command:
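For example, from the folder containing the requirements.txt file:

```shell
# Restore the Python packages listed in requirements.txt.
pip install -r requirements.txt
```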

 

 

Each job makes use of a DefaultAzureCredential object to acquire a security token from Azure Active Directory and access the queues in the Azure Service Bus namespace using the credentials of the user-assigned managed identity associated with the Azure Container Apps Job.

You can use a managed identity in a running container app or job to authenticate to any service that supports Azure AD authentication.

With managed identities:

 

 

If you want to debug the sender, processor, and receiver jobs locally, you can define a value for the environment variables in the src/.env file, and the credentials used by the DefaultAzureCredential object in the src/.local file.

In the Azure Identity client libraries, you can choose one of the following approaches:

 

  • Use DefaultAzureCredential, which will attempt to use the WorkloadIdentityCredential.
  • Create a ChainedTokenCredential instance that includes WorkloadIdentityCredential.
  • Use WorkloadIdentityCredential directly.

 

The following table provides the minimum package version required for each language’s client library.

 

Language     Library            Minimum Version
.NET         Azure.Identity     1.9.0
Go           azidentity         1.3.0
Java         azure-identity     1.9.0
JavaScript   @azure/identity    3.2.0
Python       azure-identity     1.13.0

 

When using the Azure Identity client library with Azure Container Apps and Azure Container Apps Jobs, you must specify the client ID of the user-assigned managed identity, for example via the AZURE_CLIENT_ID environment variable.

 

Sender Job

The sender task is a manually triggered job that sends a configurable number of messages to the parameters queue in the Azure Service Bus namespace. The payload of each message contains a random positive integer within a configurable range. The Python code of the sender job is contained in a single file called sbsender.py under the src folder. You can configure the job using the following environment variables:

 

  • FULLY_QUALIFIED_NAMESPACE: the fully qualified name of the Azure Service Bus namespace.
  • INPUT_QUEUE_NAME: the name of the parameters queue to which the job sends request messages.
  • MIN_NUMBER: the minimum value of the parameter.
  • MAX_NUMBER: the maximum value of the parameter.
  • MESSAGE_COUNT: the count of messages that the job sends to the parameters queue.
  • SEND_TYPE: the sender job can send messages using the following approaches:

    • list: sends all the messages with a single call.
    • batch: sends all the messages as a batch.

 

 

Processor Job

The processor task is a scheduled job that reads the messages from the parameters queue in the Azure Service Bus namespace, calculates the Fibonacci number for each received parameter, and writes the result to the results queue in the same namespace. The Python code of the processor job is contained in a single file called sbprocessor.py under the src folder. You can configure the job using the following environment variables:

 

  • FULLY_QUALIFIED_NAMESPACE: the fully qualified name of the Azure Service Bus namespace.
  • INPUT_QUEUE_NAME: the name of the parameters queue from which the job receives request messages.
  • OUTPUT_QUEUE_NAME: the name of the results queue to which the job sends result messages.
  • MAX_MESSAGE_COUNT: the maximum number of messages that each job run receives and processes from the parameters queue.
  • MAX_WAIT_TIME: the maximum wait time in seconds for the message receive method.

 

 

Receiver Job

The receiver task is an event-driven job whose execution is triggered by each message arriving in the results queue. The job reads the result and logs it. The Python code of the receiver job is contained in a single file called sbreceiver.py under the src folder. You can configure the job using the following environment variables:

 

  • FULLY_QUALIFIED_NAMESPACE: the fully qualified name of the Azure Service Bus namespace.
  • OUTPUT_QUEUE_NAME: the name of the results queue from which the job receives result messages.
  • MAX_MESSAGE_COUNT: the maximum number of messages that each job run receives and processes from the results queue.
  • MAX_WAIT_TIME: the maximum wait time in seconds for the message receive method.

 

 

Build Docker Images

You can use the 01-build-docker-images.sh Bash script in the src folder to build the Docker container image for each job.

 

 

Before running any script in the src folder, make sure to customize the value of the variables inside the 00-variables.sh file located in the same folder. This file is sourced by all the scripts and contains the following variables:

The Dockerfile under the src folder is parametric and can be used to build the container images for the three jobs.
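A parametric build of this kind can be sketched as follows; the FILENAME build argument and the image tags are assumptions for illustration, not values taken from the sample's Dockerfile:

```shell
# Hypothetical sketch: build one image per job from a parametric Dockerfile.
# The FILENAME build argument and the image tags are assumptions.
for job in sender processor receiver; do
  docker build -t "sb${job}:latest" --build-arg "FILENAME=sb${job}.py" .
done
```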

 

 

Test jobs locally

You can use the 02-run-docker-container.sh Bash script in the src folder to test the containers for the sender, processor, and receiver jobs.

 

 

Push Docker containers to the Azure Container Registry

You can use the 03-push-docker-image.sh Bash script in the src folder to push the Docker container images for the sender, processor, and receiver jobs to the Azure Container Registry (ACR).
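The core of such a script typically logs in to the registry and then tags and pushes each image; in this sketch, the ACR_NAME variable and the image tags are assumptions rather than the actual contents of 00-variables.sh:

```shell
# Log in to ACR, then tag and push each job image.
# ACR_NAME and the image tags below are assumptions.
az acr login --name "$ACR_NAME"
for job in sender processor receiver; do
  docker tag "sb${job}:latest" "${ACR_NAME}.azurecr.io/sb${job}:latest"
  docker push "${ACR_NAME}.azurecr.io/sb${job}:latest"
done
```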

 

 

Deploy the Jobs using Bicep

Now you can deploy the sender, processor, and receiver jobs with the Bicep modules in the bicep/jobs folder by running the deploy.sh Bash script in the same folder. Before deploying the Bicep modules, specify a value for the following parameters in the deploy.sh Bash script and the main.parameters.json parameters file.

 

  • prefix: specifies a prefix for all the Azure resources.
  • location: specifies the location for the resource group and Azure resources.

 

The following table contains the code from the container-apps-job.bicep Bicep module used to deploy manually triggered, scheduled, or event-driven Azure Container Apps Jobs. For more information, see the documentation of the Microsoft.App/jobs resource type.

 

 

Deploy the Jobs using Azure CLI

Alternatively, you can deploy the sender, processor, and receiver jobs using the 01-create-job.sh Bash script under the scripts folder. This script uses the az containerapp job create command to create the jobs.

 

 

Before running any script in the scripts folder, make sure to customize the value of the variables inside the 00-variables.sh file located in the same folder. This file is sourced by all the scripts and contains the following variables:

 

 

Test the Sample

You can use the 02-start-job.sh Bash script in the scripts folder to manually run any of the jobs.

 

 

To test the demo, run the 02-start-job.sh Bash script and select the Bicep Sender Job or Azure CLI Sender Job option, depending on whether you created the jobs via Bicep or Azure CLI. The sender job sends a list or a batch of messages to the parameters queue. The processor job runs periodically on the schedule defined by the cron expression you specified when you created the job. The processor job calculates the Fibonacci number for the integer contained in each request message and writes the result to the results queue in the same namespace. Finally, the receiver job reads the messages from the results queue and logs their content to the standard output, which is saved to the Azure Monitor Log Analytics workspace.
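Under the hood, the script wraps the az containerapp job start Azure CLI command; a direct invocation looks like the following sketch, assuming a sender job named gundamsender in a resource group named GundamRG:

```shell
# Start an execution of the sender job on demand.
az containerapp job start \
    --name "gundamsender" \
    --resource-group "GundamRG"
```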

 

Get job execution history

Each Azure Container Apps Job maintains a history of recent job executions. To get the statuses of job executions using the Azure CLI, use the az containerapp job execution list command. The following example returns the status of the most recent executions of a job named gundamsender in a resource group named GundamRG:
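As a sketch, the command can be combined with a JMESPath query to show only the execution name, status, and start time in a tabular format:

```shell
# List recent executions of the gundamsender job with name, status, and start time.
az containerapp job execution list \
    --name "gundamsender" \
    --resource-group "GundamRG" \
    --output table \
    --query '[].{Name: name, Status: properties.status, StartTime: properties.startTime}'
```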

 

 

This command returns a JSON array containing the list of executions of the gundamsender job.

 

 

The execution history for scheduled and event-based jobs is limited to the most recent 100 successful and failed job executions. To list all the executions of a job or to get detailed output from a job, you can query the logs provider configured for your Container Apps environment, for example, Azure Monitor Log Analytics.

 

You can also use the Azure Portal to see the logs generated by a job, as shown in the following screenshot. You can optionally customize the Kusto Query Language (KQL) query to filter, project, and retrieve only the desired data.

 

joblogs.png

 

You can use the Azure Portal to see the execution history for a given job, as shown in the following picture.

 

executionhistory.png

 

You can click the View logs link to see the logs for a specific job execution, as shown in the following picture.

 

executionlogs.png

 

Review deployed resources

You can use the Azure portal to list the deployed resources in the resource group, as shown in the following picture:

 

 

resources.png

 

You can also use Azure CLI to list the deployed resources in the resource group:
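For example, substituting the name of your resource group:

```shell
# List the resources in the resource group in a tabular format.
az resource list --resource-group "<resource-group-name>" --output table
```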

 

 

You can also use the following PowerShell cmdlet to list the deployed resources in the resource group:

 

 

Clean up resources

You can delete the resource group using the following Azure CLI command when you no longer need the resources you created. This will remove all the Azure resources.
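For example, substituting the name of your resource group:

```shell
# Delete the resource group and all the resources it contains.
# --no-wait returns immediately without waiting for the operation to complete.
az group delete --name "<resource-group-name>" --yes --no-wait
```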

 

 

Alternatively, you can use the following PowerShell cmdlet to delete the resource group and all the Azure resources.

 

 

 


via Microsoft Tech Community

July 3, 2023 at 08:54AM
paolosalvatori