
Creating a New Worker

Follow these steps to create and deploy a new worker:

Step 1: Create project

Create the project using the Backstage template

naming convention

To improve worker discoverability, please follow the naming convention: the worker's name should end with -pluggable-worker

Step 2: Develop new task

Implement your new task by following the example here. You can implement multiple task executors within a single worker deployment. Make sure to register each executor in CoreModule as a provider

See the Complete Guide for Implementing Tasks
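The executor pattern can be sketched in plain TypeScript. This is a minimal, framework-free illustration: the `TaskExecutor` interface, the task name, and the `EchoTaskExecutor` class are all hypothetical placeholders, not the actual worker SDK types; in a real worker each executor would also be listed in CoreModule's providers array.

```typescript
// Hypothetical executor contract; the real interface comes from the worker SDK.
interface TaskResult {
  status: "COMPLETED" | "FAILED";
  output: Record<string, unknown>;
}

interface TaskExecutor {
  // Task name used for registration and routing (illustrative).
  readonly taskName: string;
  execute(input: Record<string, unknown>): Promise<TaskResult>;
}

// Example executor: echoes its input back as output.
class EchoTaskExecutor implements TaskExecutor {
  readonly taskName = "echo-task";

  async execute(input: Record<string, unknown>): Promise<TaskResult> {
    return { status: "COMPLETED", output: { echoed: input } };
  }
}

// A worker deployment may register several executors, keyed by task name.
const executors = new Map<string, TaskExecutor>(
  [new EchoTaskExecutor()].map((e) => [e.taskName, e]),
);
```

Keeping executors behind one small interface is what lets a single worker deployment host several tasks side by side.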

Step 3: Test your task

To build and run the project locally, follow the instructions in the project README. Once the project is running, you can open the autogenerated Swagger UI and test your executor through the REST API
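If you prefer scripting over the Swagger UI, the same call can be made programmatically. This is only a sketch: the endpoint path, port, and payload fields below are hypothetical placeholders; take the real ones from your worker's Swagger UI.

```typescript
// Hypothetical payload shape for a task execution command; check the
// actual schema in your worker's Swagger UI.
interface TaskCommand {
  taskName: string;
  input: Record<string, unknown>;
}

function buildTaskCommand(
  taskName: string,
  input: Record<string, unknown>,
): TaskCommand {
  if (!taskName) throw new Error("taskName is required");
  return { taskName, input };
}

// Sending the command (URL and path are placeholders):
async function runTask(command: TaskCommand): Promise<unknown> {
  const res = await fetch("http://localhost:3000/tasks/execute", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(command),
  });
  return res.json();
}
```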

Step 4: Register Task

Register your task by following the instructions in the Task Registration Guide

Step 5: Deploy The Worker

By default, the worker runs REST and Kafka controllers to handle task commands via sync and async APIs.
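The dual entry points can be pictured as two thin transports over one shared dispatcher. A minimal sketch under assumed names (`dispatch`, `handlers`, and the command shape are illustrative, not the worker framework's actual API):

```typescript
// One dispatcher serves both the sync (REST) and async (Kafka) paths.
type Handler = (
  input: Record<string, unknown>,
) => Promise<Record<string, unknown>>;

const handlers = new Map<string, Handler>();
handlers.set("echo-task", async (input) => ({ echoed: input }));

// Shared by both transports: look up the executor and run it.
async function dispatch(
  taskName: string,
  input: Record<string, unknown>,
): Promise<Record<string, unknown>> {
  const handler = handlers.get(taskName);
  if (!handler) throw new Error(`No executor registered for ${taskName}`);
  return handler(input);
}

// The REST controller would call dispatch() and return the result in
// the HTTP response (synchronous API); the Kafka controller would call
// dispatch() for each consumed command message and publish the result
// asynchronously.
```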

tip

Since configuring Kafka can be challenging at first, you can disable Kafka and deploy the worker using only the REST API. This lets you get started quickly with the synchronous API.

To disable Kafka:

  1. Update your base/deployment.yaml and add the following environment variable:

         - name: APP_NAME
           value: 'REST'

  2. Comment out all environment variables whose names begin with KAFKA.

When you're ready to enable Kafka, configure Kafka topics for your executors in the Terraform repo. Here is an Example MR which creates a new topic and a new user for your worker, and sets up the corresponding permissions.

Please note that the number of partitions of your Kafka topic should be at least the number of instances you plan to run for the corresponding executor.
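The reason for this constraint: Kafka assigns each partition to at most one consumer in a consumer group, so any instance beyond the partition count would sit idle. It can be stated as a one-line check (a hypothetical helper, not part of any tooling):

```typescript
// Kafka assigns each partition to at most one consumer in a group,
// so instances beyond the partition count would receive no work.
function partitionCountIsSufficient(
  partitions: number,
  plannedInstances: number,
): boolean {
  return partitions >= plannedInstances;
}

// e.g. 6 partitions can feed up to 6 instances:
// partitionCountIsSufficient(6, 6) → true
// partitionCountIsSufficient(4, 6) → false (2 instances would be idle)
```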

After the topic is created, make sure it is configured correctly in the worker config file.

Step 6: Access the Deployed Worker

Congrats! The worker is created and deployed; you can now access it using the Workflow APIs.

Find detailed information about API usage in the API usage guide

Step 7: Monitor the worker

A dedicated Kibana index and a Datadog APM dashboard will be created for each deployed worker.

In addition, you can monitor your worker's API stats using this dashboard

Step 8: Scale the worker

You can configure the number of running instances of your worker in the k8s deployment.yml file.

Please choose a number that evenly divides the number of Kafka topic partitions, so the load is distributed evenly across the worker instances.
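To see why a divisor gives even load, here is an illustrative round-robin assignment (a hypothetical helper; Kafka's own partition assignor does the real work, but the per-instance counts come out the same when the instance count divides the partition count):

```typescript
// Distribute partitions across instances round-robin and return the
// number of partitions each instance ends up with (illustrative only).
function partitionsPerInstance(
  partitions: number,
  instances: number,
): number[] {
  const counts = new Array<number>(instances).fill(0);
  for (let p = 0; p < partitions; p++) counts[p % instances]++;
  return counts;
}

// 12 partitions, 3 instances → [4, 4, 4]      (even load)
// 12 partitions, 5 instances → [3, 3, 2, 2, 2] (uneven load)
```

With a non-divisor instance count, some instances permanently carry more partitions than others, so they become the throughput bottleneck for the executor.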