Hitachi Vantara Lumada and Pentaho Documentation

Using Spark with PDI

You can run a Spark job with the Spark Submit job entry or execute a PDI transformation in Spark through a run configuration.

These instructions explain how to use the Spark Submit job entry.

Install the Spark Client

Before you start, you must install and configure the Spark client as described in the Spark Submit job entry documentation.

Modify the Spark Sample

The following example demonstrates how to use PDI to submit a Spark job.

Open and Rename the Job

To copy files in these instructions, use either the Hadoop Copy Files job entry or Hadoop command line tools. For an example of how to do this using PDI, check out our tutorial at http://wiki.pentaho.com/display/BAD/Loading+Data+into+HDFS.

Procedure

  1. Copy a text file that contains words that you would like to count to the HDFS on your cluster.

  2. Start the PDI client (also known as Spoon).

  3. Open the Spark Submit.kjb job, which is in design-tools/data-integration/samples/jobs.

  4. Select File > Save As, then save the file as Spark Submit Sample.kjb.
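Step 1 above can also be done with the Hadoop command-line tools. The sketch below only assembles the copy command so you can inspect it before running; the file name `words.txt` and the HDFS target path are assumed examples, and the command itself should be run on a host with the Hadoop client installed:

```shell
# Assemble the HDFS copy command for the Word Count input file.
# NOTE: words.txt and /user/pentaho/wordcount are example names only.
INPUT_LOCAL="words.txt"
INPUT_HDFS="/user/pentaho/wordcount/words.txt"
COPY_CMD="hdfs dfs -put -f $INPUT_LOCAL $INPUT_HDFS"
echo "$COPY_CMD"
```

On a cluster node you would run the echoed `hdfs dfs -put` command directly (create the target directory first with `hdfs dfs -mkdir -p` if it does not exist).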

Results

(Image: the Spark Submit Sample job)

Submit the Spark Job

To submit the Spark job, complete the following steps.

Procedure

  1. Open the Spark PI job entry.

    Spark PI is the name given to the Spark Submit entry in the sample.
  2. Indicate the path to the spark-submit utility in the Spark Submit Utility field.

    It is located in the directory where you installed the Spark client.
  3. Indicate the path to your Spark examples JAR (either the local version or the one on the cluster in HDFS) in the Application Jar field.

    The Word Count example is in this jar.
  4. In the Class Name field, add the following: org.apache.spark.examples.JavaWordCount.

  5. We recommend that you set the Master URL to yarn-client.

    To read more about other execution modes, see https://spark.apache.org/docs/2.2.0/submitting-applications.html
  6. In the Arguments field, indicate the path to the file you want to run Word Count on.

  7. Click the OK button.

  8. Save the job.

  9. Run the job.
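Behind the scenes, the Spark Submit entry assembles a `spark-submit` command from the fields you filled in above. A hedged sketch of the equivalent command line follows; every path and the examples JAR version are assumptions to adjust for your environment, and note that on Spark 2.x the yarn-client master URL corresponds to `--master yarn --deploy-mode client`:

```shell
# Build the spark-submit command equivalent to the sample job entry.
# All paths below are examples; adjust SPARK_HOME, the examples JAR
# version, and the HDFS input path for your environment.
SPARK_HOME="/opt/spark"
APP_JAR="$SPARK_HOME/examples/jars/spark-examples_2.11-2.2.0.jar"
CLASS="org.apache.spark.examples.JavaWordCount"
INPUT="/user/pentaho/wordcount/words.txt"
SUBMIT_CMD="$SPARK_HOME/bin/spark-submit --class $CLASS --master yarn --deploy-mode client $APP_JAR $INPUT"
echo "$SUBMIT_CMD"
```

Running the echoed command on a cluster node should produce the same word counts as the PDI job.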

Results

As the job runs, the output of the Word Count program appears in the Execution pane.