
Job (Job Entry)

The Job job entry executes a previously defined job, which allows you to perform "functional decomposition": breaking jobs out into smaller, more manageable units. For example, instead of writing a data warehouse load as a single job containing 500 entries, you could create several smaller jobs and aggregate them.

Although it is possible to create a recursive, never-ending job that points to itself, be aware that such a job will eventually fail with an out-of-memory or stack overflow error.
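
Although the Job entry is normally configured in the PDI client, the execution it performs can be sketched with the Kettle Java API. The following is a minimal sketch, assuming the classic org.pentaho.di API; the class name and file path are illustrative placeholders:

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.job.Job;
    import org.pentaho.di.job.JobMeta;

    public class RunSubJob {
        public static void main(String[] args) throws Exception {
            // Initialize the Kettle environment (plugins, variables, logging).
            KettleEnvironment.init();

            // Load a previously defined job from its .kjb file (illustrative path).
            JobMeta jobMeta = new JobMeta("/home/admin/path/sub.kjb", null);

            // Run it, much as a parent job does when it reaches a Job entry.
            Job job = new Job(null, jobMeta);
            job.start();
            job.waitUntilFinished();

            if (job.getResult().getNrErrors() > 0) {
                System.err.println("Sub-job finished with errors.");
            }
        }
    }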

General

Enter the following information in the job entry fields.

Entry Name: The unique name of the job entry on the canvas. A job entry can be placed on the canvas several times; however, every copy refers to the same job entry.
Job: Specify the job by entering its path or clicking Browse.

If you select a job that has the same root path as the current job, the variable ${Internal.Entry.Current.Directory} is automatically inserted in place of the common root path. For example, if the current job's path is /home/admin/transformation.kjb and you select the job /home/admin/path/sub.kjb, then the path is automatically converted to ${Internal.Entry.Current.Directory}/path/sub.kjb (see the sketch below).

If you are working with a repository, specify the name of the job. If you are not working with a repository, specify the XML file name of the job. 

Jobs previously specified by reference are automatically converted to be specified by the job name within the Pentaho Repository.
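
For illustration, the path substitution above can be reproduced with Kettle's variable space. A minimal sketch; the directory value mirrors the example in this section and is not special in any way:

    import org.pentaho.di.core.variables.Variables;

    public class ResolvePath {
        public static void main(String[] args) {
            Variables vars = new Variables();
            vars.setVariable("Internal.Entry.Current.Directory", "/home/admin");

            // Resolves to "/home/admin/path/sub.kjb" at run time.
            String path = vars.environmentSubstitute(
                "${Internal.Entry.Current.Directory}/path/sub.kjb");
            System.out.println(path);
        }
    }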

Options

The Job job entry features several tabs with fields. Each tab is described below.

Options Tab

This tab includes the following fields:

Environment Type: Select whether to run the job locally or on a server:

  • Local: Select this option to run the job on the machine you are currently using.
  • Server: Select this option to send the job to a server. In the Server field, enter or select the name of the server.
Wait for remote job to finish: If you selected Server as the environment type, select this check box to make the calling job wait until the remote job finishes on the server.
Pass the sub jobs and transformations to the server: If you selected Server as the environment type, select this check box to pass the complete job, including referenced sub-jobs and sub-transformations, to the remote server.
Enable monitoring for sub jobs and transformations: If you selected Server as the environment type, select this check box to monitor the child jobs and transformations while the job runs.
Follow local abort to remote job: If you selected Server as the environment type, select this check box to send a local abort signal to the remote job.
Execute every input row: Runs the job once for every row in the set of result rows, also known as looping. For example, you can run the job once for each file found in a directory.
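
Conceptually, Execute every input row behaves like the loop below: one run of the job per result row. This is a sketch under the classic Kettle API; the row layout, file names, and paths are illustrative, and the entry performs this looping internally rather than through user code:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.core.RowMetaAndData;
    import org.pentaho.di.core.row.RowMeta;
    import org.pentaho.di.core.row.value.ValueMetaString;
    import org.pentaho.di.job.Job;
    import org.pentaho.di.job.JobMeta;

    public class RunPerRow {
        public static void main(String[] args) throws Exception {
            KettleEnvironment.init();

            // Rows as a "Copy rows to result" step might produce them.
            RowMeta meta = new RowMeta();
            meta.addValueMeta(new ValueMetaString("filename"));
            List<RowMetaAndData> rows = Arrays.asList(
                new RowMetaAndData(meta, "/data/in/a.csv"),
                new RowMetaAndData(meta, "/data/in/b.csv"));

            JobMeta jobMeta = new JobMeta("/home/admin/path/sub.kjb", null);

            // One execution of the job per result row.
            for (RowMetaAndData row : rows) {
                Job job = new Job(null, jobMeta);
                job.setSourceRows(Collections.singletonList(row));
                job.start();
                job.waitUntilFinished();
            }
        }
    }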

Logging Settings Tab

By default, if you do not set up logging, PDI takes the generated log entries and creates a log record inside the job. For example, suppose a job has three transformations to run and you have not set up logging. The transformations will not log information to other files, locations, or special configurations. In this instance, the job executes and logs information into its master job log.

In most instances, it is acceptable for logging information to be available in the job log. For example, if you have load dimensions, you would want the logs for your load dimension runs to display in the job logs. If there are errors in the transformations, they will display in the job logs as well. However, if you want all your log information kept in one place, you must set up logging.

This tab includes the following fields:

Specify logfile: Specifies a separate log file for running this job.
Name: The directory and base name of the log file, for example, 'C:\logs'.
Extension: The file name extension, for example, '.log' or '.txt'.
Log level: Specifies the logging level for running the job. See Enable Logging for more details.
Append logfile?: Appends to an existing log file instead of creating a new one.
Create parent folder: Creates the parent folder for the log file if it does not already exist.
Include date in logfile: Adds the system date to the file name in the format 'YYYYMMDD', for example, '_20051231'.
Include time in logfile: Adds the system time to the file name in the format 'HHMMSS', for example, '_235959'.
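
As a rough equivalent of the Specify logfile and Log level settings, the Kettle API can register a file listener and raise the level. A minimal sketch, assuming the classic API; the file and job paths are illustrative, and the listener captures the shared log stream rather than this job's lines alone:

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.core.logging.FileLoggingEventListener;
    import org.pentaho.di.core.logging.KettleLogStore;
    import org.pentaho.di.core.logging.LogLevel;
    import org.pentaho.di.job.Job;
    import org.pentaho.di.job.JobMeta;

    public class RunWithLogfile {
        public static void main(String[] args) throws Exception {
            KettleEnvironment.init();

            // Write log lines to a separate file, appending if it exists.
            FileLoggingEventListener fileLog =
                new FileLoggingEventListener("C:\\logs\\load_20051231.log", true);
            KettleLogStore.getAppender().addLoggingEventListener(fileLog);

            JobMeta jobMeta = new JobMeta("/home/admin/load.kjb", null);
            Job job = new Job(null, jobMeta);
            job.setLogLevel(LogLevel.DETAILED);  // comparable to the Log level field
            job.start();
            job.waitUntilFinished();

            fileLog.close();
        }
    }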

Argument Tab

Enter the following information to pass arguments to the job:

Copy results to arguments: Copies the result rows from a previous transformation (created with the Copy rows to result step) to be the command-line arguments of the job. If the Execute every input row option is selected, each row becomes one set of command-line arguments passed to the job; otherwise, only the first row is used to generate the command-line arguments (see the sketch below).

Arguments: Specify which command-line arguments will be passed to the job.
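
The interaction of these two options can be sketched as follows. This is purely conceptual: runJobWithArguments() is a hypothetical stand-in for the entry's internal execution, not part of the Kettle API, and the argument values are illustrative:

    import java.util.Arrays;
    import java.util.List;

    public class ArgumentRows {
        // Hypothetical stand-in for the Job entry running the job once.
        static void runJobWithArguments(String[] jobArgs) {
            System.out.println("Run with: " + Arrays.toString(jobArgs));
        }

        public static void main(String[] args) {
            // Each row copied by "Copy rows to result" becomes one set of
            // command-line arguments.
            List<String[]> argumentSets = Arrays.asList(
                new String[] { "2005-12-31", "sales" },      // row 1
                new String[] { "2005-12-31", "inventory" }); // row 2

            // With "Execute every input row" selected, every row is used:
            for (String[] jobArgs : argumentSets) {
                runJobWithArguments(jobArgs);
            }

            // Otherwise, only the first row supplies the arguments:
            runJobWithArguments(argumentSets.get(0));
        }
    }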

Parameters Tab

Enter the following information to pass parameters to the job:

Copy results to parameters: Copies the result rows from a previous transformation (created with the Copy rows to result step) as parameters of the job.
Pass parameter values to sub-job: Passes all parameters of the current job down to the sub-job.
Parameter: Specify the name of the parameter passed to the job.
Stream Column Name: Specify the field of an incoming result row to use as the parameter value.
Value: Specify the value for each of the job's parameters by using one of the following methods:
  • Manually type a value, for example, 'ETL Job'.
  • Use a parameter to set the value, for example, '${Internal.Job.Name}'.
  • Use a combination of manually specified values and parameter values, for example, '${FILE_PREFIX}_${FILE_DATE}.txt'.
Get Parameters: Gets the existing parameters associated with the job.
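
The same settings can be sketched with Kettle's named-parameter API: listing the parameters a job declares (comparable to Get Parameters) and supplying values before the run. A minimal sketch; the parameter names, values, and file path are illustrative:

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.job.Job;
    import org.pentaho.di.job.JobMeta;

    public class RunWithParameters {
        public static void main(String[] args) throws Exception {
            KettleEnvironment.init();

            JobMeta jobMeta = new JobMeta("/home/admin/load.kjb", null);

            // Comparable to Get Parameters: list what the job declares.
            for (String name : jobMeta.listParameters()) {
                System.out.println(name + " (default: "
                    + jobMeta.getParameterDefault(name) + ")");
            }

            // Comparable to filling the Parameter and Value columns.
            jobMeta.setParameterValue("FILE_PREFIX", "sales");
            jobMeta.setParameterValue("FILE_DATE", "20051231");

            Job job = new Job(null, jobMeta);
            job.copyParametersFrom(jobMeta);
            job.activateParameters();  // make the values visible to the run
            job.start();
            job.waitUntilFinished();
        }
    }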