
    4. CloverDX Worker

    Worker is a standalone JVM running separately from the Server Core. This isolates the Server Core from executed jobs (e.g. graphs, jobflows, etc.), so an issue caused by a job running in Worker does not affect the Server Core.

    Worker does not require any additional installation - it is started and managed by the Server. Worker runs on the same host as the Server Core, i.e. it is not used for parallel or distributed processing. In a Cluster, each node has its own Worker.

    Worker is a relatively light-weight and simple executor of jobs. It handles job execution requests from the Server Core, but does not perform any high-level job management or scheduling. It communicates with the Server Core via an API for more complex activities, e.g. to request execution of other jobs, check file permissions, etc.

    Configuration

    General configuration

    Worker is started by the Server Core as a standalone JVM process. The default configuration of Worker can be changed in the Setup:

    • Heap memory limits

    • Port ranges

    • Additional command line arguments (e.g. to tweak garbage collector settings)

    The settings are stored in the standard Server configuration file; Worker is configured via dedicated configuration properties.
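
    For illustration, a minimal sketch of Worker-related entries in the Server configuration file is shown below. The property names and values (worker.enabled, worker.initHeapSize, worker.maxHeapSize, worker.portRange, worker.jvmOptions) are assumptions for this example; verify the exact names against Properties on Worker’s command line.

      # Illustrative Worker entries in the Server configuration file
      # (property names and values are assumptions - verify against the documented list)
      worker.enabled=true
      # Heap memory limits of the Worker JVM (values assumed to be in MB)
      worker.initHeapSize=512
      worker.maxHeapSize=4096
      # Range of ports Worker may bind to
      worker.portRange=10500-10600
      # Additional command line arguments, e.g. garbage collector tuning
      worker.jvmOptions=-XX:+UseG1GC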

    The full command line of Worker is available in the Monitoring section.

    There are several properties that are propagated or otherwise placed on the Worker’s command line. For a full list, see Properties on Worker’s command line.

    Cluster specific configuration

    A Cluster should use a single portRange, i.e. all nodes should have an identical value of the portRange property. This is the preferred configuration, although different ranges for individual nodes are possible.
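
    As a sketch, assuming the worker.portRange property name from the example above, two Cluster nodes using the preferred identical range would be configured like this:

      # node01 Server configuration file
      worker.portRange=10500-10600

      # node02 Server configuration file
      worker.portRange=10500-10600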

    Management

    The Server manages the runtime of Worker, i.e. it can start, stop, and restart Worker. Users do not need to install or start Worker manually.

    The status of Worker and the available management actions can be found in the Monitoring Worker section.

    In the case of problems with Worker, see Troubleshooting Worker.

    Job Execution

    By default, all jobs are executed in Worker, but the Server Core still retains the ability to execute jobs. Specific jobs or whole sandboxes can be set to run in the Server Core via the worker_execution property on the job or sandbox. It is also possible to disable Worker completely, in which case all jobs are executed in the Server Core.
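
    For example, to force a particular job or a whole sandbox to run in the Server Core, set worker_execution to false on that job or sandbox. The value shown below is an assumption; the property is set in the job or sandbox configuration, not in the Server configuration file.

      # Illustrative job/sandbox configuration property
      worker_execution=false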

    Executing jobs in the Server Core should be an exception. To see where a job was executed, open its run details in Execution History and check the Executor field. Jobs started in Worker also log a message, e.g. Job is executed on Worker:[worker0@node01:10500].

    Job Configuration

    The following areas of Worker configuration affect job execution:

    • JNDI

      Graphs running in Worker cannot use JNDI as defined in the application container of the Server Core, because Worker is a separate JVM process. Worker provides its own JNDI configuration.

    • Classpath

      The classpath is not shared between the Server Core and Worker. If you need to add a library to the Worker classpath, e.g. a JDBC driver, follow the instructions in Adding Libraries to the Worker’s Classpath.