Advanced settings for connecting to a Cloudera cluster

This article explains advanced settings for configuring the Pentaho Server to connect to a working Cloudera Distribution for Hadoop (CDH) cluster.

Before you begin

Before you begin to set up Pentaho to connect to a Cloudera cluster, you must perform the following tasks:

Procedure

  1. Check the Components Reference to verify that your Pentaho version supports your version of the CDH cluster.

  2. Prepare your Cloudera cluster by performing the following tasks:

    1. Configure a CDH cluster.

      See Cloudera's documentation if you need help.
    2. Install any required services and service client tools.

    3. Test the cluster.

  3. From your Hadoop administrator, get the connection information for the cluster and the services that you intend to use. Some of this information may be available from Cloudera Manager or other cluster management tools. You will also need to supply some of this information to your users when you are finished.

  4. Add the YARN user on the cluster to the group defined by the dfs.permissions.superusergroup property. You can find the dfs.permissions.superusergroup property in the hdfs-site.xml file on your cluster or in Cloudera Manager. (An example of this property appears after this procedure.)

  5. Set up the Pentaho Server to connect to a Hadoop cluster. You need to install the driver for your version of CDH.
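
For reference on step 4, the snippet below shows what the dfs.permissions.superusergroup entry typically looks like in hdfs-site.xml. The group name supergroup is only the Hadoop default and is used here as an assumption; your cluster may define a different group.

    <!-- hdfs-site.xml: group that HDFS treats as the superuser group.
         "supergroup" is the Hadoop default; your cluster may use another group. -->
    <property>
       <name>dfs.permissions.superusergroup</name>
       <value>supergroup</value>
    </property>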

Set up a secured cluster

If you are connecting to a CDH cluster that is secured with Kerberos, you must also perform the following tasks.

Procedure

  1. Configure Kerberos security on the cluster, including the Kerberos Realm, Kerberos KDC, and Kerberos Administrative Server. (A sketch of the related core-site.xml settings appears after this procedure.)

  2. Configure the name, data, secondary name, job tracker, and task tracker nodes to accept remote connection requests.

  3. If you have deployed CDH using an enterprise-level program, set up Kerberos for name, data, secondary name, job tracker, and task tracker nodes.

  4. Add user account credentials to the Kerberos database for each Pentaho user that needs access to the Hadoop cluster.

  5. Verify that an operating system user account exists on each node in your Hadoop cluster for each user you want to add to the Kerberos database. Add operating system user accounts if necessary.

    Note: The user account UIDs should be greater than the minimum user ID value (min.user.id). Usually, the minimum user ID value is set to 1000.
  6. Set up Kerberos on your Pentaho machines. For instructions, see Set Up Kerberos for Pentaho.
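
As context for step 1, Kerberos authentication on a Hadoop cluster is typically enabled through core-site.xml entries such as the ones below. This is a minimal sketch of the standard Hadoop properties; on CDH, Cloudera Manager normally manages these settings for you.

    <!-- core-site.xml: standard Hadoop properties that enable Kerberos authentication.
         On CDH these are usually managed by Cloudera Manager rather than edited by hand. -->
    <property>
       <name>hadoop.security.authentication</name>
       <value>kerberos</value>
    </property>
    <property>
       <name>hadoop.security.authorization</name>
       <value>true</value>
    </property>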

Edit configuration files for users

Your cluster administrators must download the configuration files from the cluster for the applications your teams are using, and then edit these files to include Pentaho-specific and user-specific parameters. These files must be copied to the user's directory: <username>/.pentaho/metastore/pentaho/NamedCluster/Configs/<user-defined connection name>. This directory and the config.properties file are created when you create a named connection.

The following files must be modified and provided to your users:

  • config.properties
  • hive-site.xml
  • mapred-site.xml
  • yarn-site.xml

Edit Hive site XML file

If you are using Hive, follow these instructions to set the location of the Hive metastore in the hive-site.xml file:

Procedure

  1. Navigate to the <username>/.pentaho/metastore/pentaho/NamedCluster/Configs/<user-defined connection name> directory and open the hive-site.xml file.

  2. Add the following value:

    Parameter: hive.metastore.uris
    Value: Set this to the location of your Hive metastore if it differs from what is on the cluster. (An example entry appears after this procedure.)
  3. Save and close the file.
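
For illustration, a minimal hive-site.xml entry might look like the following. The hostname metastore-host and port 9083 (a common Hive metastore default) are placeholders; replace them with the values for your environment.

    <!-- hive-site.xml: location of the Hive metastore service.
         metastore-host and port 9083 are placeholders; use your cluster's values. -->
    <property>
       <name>hive.metastore.uris</name>
       <value>thrift://metastore-host:9083</value>
    </property>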

Edit Mapred site XML file

If you are using MapReduce, you must edit the mapred-site.xml file to indicate where the job history logs are stored and to allow MapReduce jobs to run across platforms.

Perform the following steps to edit the mapred-site.xml file:

Procedure

  1. Navigate to the <username>/.pentaho/metastore/pentaho/NamedCluster/Configs/<user-defined connection name> directory and open the mapred-site.xml file.

  2. Verify that the mapreduce.jobhistory.address and mapreduce.app-submission.cross-platform properties are in the mapred-site.xml file. If they are not, add them as follows.

    Parameter: mapreduce.jobhistory.address
    Value: Set this to the location where job history logs are stored. (An example entry appears after this procedure.)

    Parameter: mapreduce.app-submission.cross-platform
    Value: Add this property to allow MapReduce jobs to run on either Windows client or Linux server platforms.

    <property>
       <name>mapreduce.app-submission.cross-platform</name>
       <value>true</value>
    </property>
  3. Save and close the file.
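
As an illustration, the mapreduce.jobhistory.address entry takes a host:port value. The hostname jobhistory-host and port 10020 (a common default) below are assumptions; adapt them to your cluster.

    <!-- mapred-site.xml: address of the MapReduce JobHistory server.
         jobhistory-host and port 10020 are placeholders; use your cluster's values. -->
    <property>
       <name>mapreduce.jobhistory.address</name>
       <value>jobhistory-host:10020</value>
    </property>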

Edit YARN site XML file

If you are using YARN, you must verify that the following parameters are set in the yarn-site.xml file.

Perform the following steps to edit the yarn-site.xml file:

Procedure

  1. Navigate to the <username>/.pentaho/metastore/pentaho/NamedCluster/Configs/<user-defined connection name> directory and open the yarn-site.xml file.

  2. Add the following values (sample entries appear after this procedure):

    Parameter: yarn.application.classpath
    Value: Add the classpaths you need to run YARN applications. Use commas to separate multiple paths.

    Parameter: yarn.resourcemanager.hostname
    Value: Change to the hostname of the resource manager in your environment.

    Parameter: yarn.resourcemanager.address
    Value: Change to the hostname and port for your environment.

    Parameter: yarn.resourcemanager.admin.address
    Value: Change to the hostname and port for your environment.
  3. Save and close the file.
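
The sketch below shows sample yarn-site.xml entries for these parameters. The hostname resourcemanager-host, the ports 8032 and 8033 (common YARN defaults), and the classpath entries are assumptions; replace them with the values from your cluster.

    <!-- yarn-site.xml: sample values only; replace hostnames, ports, and paths with your own. -->
    <property>
       <name>yarn.application.classpath</name>
       <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_YARN_HOME/*</value>
    </property>
    <property>
       <name>yarn.resourcemanager.hostname</name>
       <value>resourcemanager-host</value>
    </property>
    <property>
       <name>yarn.resourcemanager.address</name>
       <value>resourcemanager-host:8032</value>
    </property>
    <property>
       <name>yarn.resourcemanager.admin.address</name>
       <value>resourcemanager-host:8033</value>
    </property>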

Oozie configuration

If you are using Oozie on a cluster, you must configure the cluster and the server. For instructions, see Using Oozie.

Windows configuration for a secured cluster

If you are on a Windows machine, perform the following steps to edit the configuration properties:

Procedure

  1. Navigate to the server/pentaho-server directory and open the start-pentaho.bat file with any text editor.

  2. Set the CATALINA_OPTS environment variable to the location of the krb5.conf or krb5.ini file on your system, as shown in the following example:

    set "CATALINA_OPTS=%"-Djava.security.krb5.conf=C:\kerberos\krb5.conf
    
  3. Save and close the file.

Connect to a Hadoop cluster with the PDI client

After you have set up the Pentaho Server to connect to a cluster, you must configure and test the connection to the cluster. For more information about setting up the connection, see Connecting to a Hadoop cluster with the PDI client.

Connect other Pentaho components to the Cloudera cluster

The following sections explain how to create and test a connection to the cluster in the Pentaho Server, PRD, and PME. Creating and testing a connection to the cluster in the PDI client is described in Connecting to a Hadoop cluster with the PDI client.

Create and test connections

For each Pentaho component, create and test the connection as described in the following list.

  • Pentaho Server for DI

    Create a transformation in the PDI client and run it remotely.

  • Pentaho Server for BA

    Create a connection to the cluster in the Data Source Wizard.

  • PME

    Create a connection to the cluster in PME.

  • PRD

    Create a connection to the cluster in PRD.

After you have connected to the cluster and its services properly, provide connection information to users who need access to the cluster and its services. Those users can only obtain access from machines that are properly configured to connect to the cluster.

Here is what users need to connect:

  • Hadoop distribution and version of the cluster
  • Hostnames, IP addresses, and port numbers for HDFS, JobTracker, ZooKeeper, and Hive2/Impala
  • Oozie URL (if used)

Users also require the permissions to access the directories they need on HDFS, such as their home directory and any other required directories.

They might also need more information depending on the job entries, transformation steps, and services they use; provide any additional connection details required for their specific tasks.