
Creating a New Worker

Follow these steps to create and deploy a new worker:

Step 1: Create project

Create the project using the Backstage template.

Naming convention

To improve worker discoverability, please follow the naming convention: the worker's name should end with -pluggable-worker.
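As a quick sanity check, the convention can be expressed in code (the helper name is hypothetical, purely for illustration):

```typescript
// Hypothetical helper: verify a worker name follows the naming convention.
const followsConvention = (name: string): boolean =>
  name.endsWith('-pluggable-worker');

console.log(followsConvention('payment-refund-pluggable-worker')); // true
console.log(followsConvention('payment-refund'));                  // false
```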

Step 2: Develop new task

Implement your new task by following the example here. You can implement multiple task executors within a single worker deployment. Be sure to register the executors in CoreModule as providers.

See the Complete Guide for Implementing Tasks
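As a rough sketch, a task executor might look like the following. The interface and type names here are assumptions for illustration only; use the actual base types from the linked example project:

```typescript
// Sketch only: TaskInput, TaskResult, and TaskExecutor are illustrative
// names, not the worker framework's real API.
interface TaskInput {
  [key: string]: unknown;
}

interface TaskResult {
  status: 'COMPLETED' | 'FAILED';
  output?: Record<string, unknown>;
}

interface TaskExecutor {
  readonly taskName: string;
  execute(input: TaskInput): Promise<TaskResult>;
}

// A minimal executor that completes immediately, echoing its input.
class EchoTaskExecutor implements TaskExecutor {
  readonly taskName = 'echo-task';

  async execute(input: TaskInput): Promise<TaskResult> {
    return { status: 'COMPLETED', output: { echoed: input } };
  }
}
```

In a NestJS-style worker, a class like this would then be listed in the CoreModule providers array so the framework can discover and invoke it.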

Step 3: Test your task

To build and run the project locally, check the instructions in the project README. After running the project, you can access the autogenerated Swagger UI and test your executor through the REST API.
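For example, a synchronous task command posted from the Swagger UI might carry a body like this (the field names are illustrative; the real schema is shown in the Swagger UI for your executor):

```json
{
  "taskName": "echo-task",
  "input": {
    "orderId": "123"
  }
}
```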

Step 4: Register Task

Register your task by following the instructions in the Task Registration Guide.

Step 5: Deploy The Worker

By default, the worker runs REST and Kafka controllers to handle task commands via sync and async APIs.

Quick Start without Kafka

As configuring Kafka might be challenging at first, you can disable Kafka and deploy the worker using only the REST API. This allows you to get started quickly with the synchronous API.

To disable Kafka:

  1. Update your base/deployment.yaml and add the following environment variable:

```yaml
- name: APP_NAME
  value: 'REST'
```

  2. Comment out all environment variables whose names begin with KAFKA.
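Putting both changes together, the env section of base/deployment.yaml might look like this (the KAFKA_* variable name below is illustrative; comment out whichever KAFKA variables your deployment actually defines):

```yaml
env:
  - name: APP_NAME
    value: 'REST'
  # Kafka disabled for REST-only quick start; all KAFKA_* vars commented out.
  # - name: KAFKA_BROKERS
  #   value: 'kafka:9092'
```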

Kafka Topic Creation (Automated)

Kafka topic and user creation is now automated via Backstage. When you create a new worker through Backstage, it will open an MR in the Terraform repo that provisions all the necessary Kafka resources for you. No manual Terraform edits are required.

Need to add a topic manually or edit the config?

Step 6: Accessing Deployed Worker

Congratulations! The worker is created and deployed; you can now access it using the Workflow APIs.

Find detailed information about API usage in the API Usage Guide.

Step 7: Monitor the worker

A dedicated Kibana index and a Datadog APM dashboard will be created for each deployed worker.

In addition, you can monitor your worker's API stats using this dashboard.

Step 8: Scale the worker

You can configure the number of running instances of your worker in the k8s deployment.yml file.

Please choose a number that is a divisor of the Kafka topic partition count, so the load is distributed evenly across worker instances. See Kafka Topic Configuration for partition defaults and how to override them.
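For example, if the task's topic has 6 partitions, replica counts of 1, 2, 3, or 6 give every instance the same number of partitions (the partition count here is illustrative; check your topic's actual configuration):

```yaml
# deployment.yml sketch: with a 6-partition topic, 3 replicas
# means each worker instance consumes exactly 2 partitions.
spec:
  replicas: 3
```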