Set Up Pentaho to Connect to a Cloudera Cluster

Overview

These instructions explain how to configure Pentaho's Cloudera shim so that Pentaho can connect to a working Cloudera Distribution for Hadoop (CDH) cluster.

Before You Begin

Before you begin, complete the following tasks.

  1. Verify Support
    Check the Components Reference to verify that your Pentaho version supports your version of the CDH cluster.
     
  2. Set Up a CDH cluster
    1. Configure a CDH cluster.  See Cloudera's documentation if you need help.

    2. Install any required services and service client tools.

    3. Test the cluster.
       

  3. Get Connection Information
    Get the connection information for the cluster and services that you will use from your Hadoop Administrator, Cloudera Manager, or other cluster management tool.  You'll also need to supply some of this information to users once you are finished. 
     
  4. Add a YARN User to the Superuser Group
    Add the YARN user on the cluster to the group defined by the dfs.permissions.superusergroup property.  This property can be found in the hdfs-site.xml file on your cluster or in Cloudera Manager (see the example excerpt after this list).
     
  5. Review the Notes Section
    Read the Notes section to review special configuration instructions for your version of CDH.
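
As a reference for step 4, here is a minimal sketch of how the superuser group setting typically appears in hdfs-site.xml. The group name supergroup is only an assumption here (it is the Hadoop default); your cluster may define a different group:

<property>
  <name>dfs.permissions.superusergroup</name>
  <value>supergroup</value>
</property>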

If you are connecting to a secured CDH cluster, there are a few additional things you need to do.

  1. Secure the Cloudera Cluster with Kerberos
    To secure a cluster with Kerberos authentication, you will need to:
    1. Configure Kerberos security on the cluster, including the Kerberos Realm, Kerberos KDC, and Kerberos Administrative Server. 

    2. Configure the name, data, secondary name, job tracker, and task tracker nodes to accept remote connection requests.

    3. Set up Kerberos for the name, data, secondary name, job tracker, and task tracker nodes if you have deployed CDH using an enterprise-level program.

    4. Add user account credentials to the Kerberos database for each Pentaho user that needs access to the Hadoop cluster.  Also, make sure there is an operating system user account on each node in the Hadoop cluster for each user you want to add to the Kerberos database; add operating system user accounts if necessary.  Note that the user account UIDs must be greater than the minimum user ID value (min.user.id), which is usually set to 1000.

  2. Set up Kerberos on your Pentaho computers
    Instructions for how to do this appear in the article Set up Kerberos on Your Pentaho Computer.

Edit Configuration Files on Clusters

This section describes Pentaho-specific edits to configuration files on the cluster.

Oozie

By default, Oozie jobs are run by the Oozie user.  If you use PDI to start an Oozie job, however, you must add the PDI user to the oozie-site.xml file on the cluster so that the PDI user can execute Oozie jobs by proxy.  If you plan to use the Oozie service, complete these instructions:

  1. Open the oozie-site.xml file on the cluster.
  2. Add the following lines of code to the oozie-site.xml file on the cluster, substituting <your_pdi_user_name> with the PDI user's username, such as jdoe.  A filled-in example appears after these steps.
<property>
  <name>oozie.service.ProxyUserService.proxyuser.<your_pdi_user_name>.groups</name>
  <value>*</value>
</property>
<property>
  <name>oozie.service.ProxyUserService.proxyuser.<your_pdi_user_name>.hosts</name>
  <value>*</value>
</property>
  3. Save and close the file.
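
For example, with a hypothetical PDI user named jdoe, the added properties would read:

<property>
  <name>oozie.service.ProxyUserService.proxyuser.jdoe.groups</name>
  <value>*</value>
</property>
<property>
  <name>oozie.service.ProxyUserService.proxyuser.jdoe.hosts</name>
  <value>*</value>
</property>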

Configure Pentaho Component Shims

You must configure the shim in each of the following Pentaho components, on each computer from which Pentaho will be used to connect to the cluster:

  • PDI client (Spoon)
  • Pentaho Server, including Analyzer and Pentaho Interactive Reporting
  • Pentaho Report Designer (PRD)
  • Pentaho Metadata Editor (PME)

As a best practice, configure the shim in the PDI client first.  The PDI client has features that will help you test your configuration.  Then copy the tested PDI client configuration files to the other components, making changes if necessary.  Alternatively, you can go through these instructions for each Pentaho component instead of copying the shim files from the PDI client.

If you do not plan to connect to the cluster from the PDI client, you can configure the shim in another component first instead.

Perform the following steps to configure the shim on each component:

  1. Locate the Pentaho Big Data Plugin and Shim Directories.
  2. Select the Correct Shim.
  3. Copy the Configuration Files from Cluster to Shim.
  4. Edit the Shim Configuration Files.

Step 1: Locate the Pentaho Big Data Plugin and Shim Directories

Shims and other parts of the Pentaho Adaptive Big Data Layer are in the Pentaho Big Data Plugin directory.  The path to this directory differs by component.  You need to know the location of this directory in each component to complete shim configuration and testing tasks.

In the following locations, <pentaho home> is the directory where Pentaho is installed:

  • PDI client: <pentaho home>/design-tools/data-integration/plugins/pentaho-big-data-plugin
  • Pentaho Server: <pentaho home>/server/pentaho-server/pentaho-solutions/system/kettle/plugins/pentaho-big-data-plugin
  • Pentaho Report Designer: <pentaho home>/design-tools/report-designer/plugins/pentaho-big-data-plugin
  • Pentaho Metadata Editor: <pentaho home>/design-tools/metadata-editor/plugins/pentaho-big-data-plugin

Shims are located in the pentaho-big-data-plugin/hadoop-configurations directory.  Shim directory names consist of a three- or four-letter Hadoop distribution abbreviation followed by the distribution's version number, with the decimal point removed.  For example, the shim directory named cdh512 is the shim for CDH (Cloudera Distribution for Hadoop) version 5.12.  The following list gives the shim directory abbreviations for Hadoop distributions:

  • cdh: Cloudera's Distribution of Apache Hadoop
  • emr: Amazon Elastic MapReduce
  • hdi: Microsoft Azure HDInsight
  • hdp: Hortonworks Data Platform
  • mapr: MapR

Step 2: Select the Correct Shim

Although Pentaho often supports one or more versions of a Hadoop distribution, the Pentaho Suite download contains only the latest supported, Pentaho-certified version of the shim.  Other supported versions of the shim can be downloaded from the Pentaho Customer Support Portal.

Before you begin, verify that the shim you want is supported by your version of Pentaho shown in the Components Reference.

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations directory to view the shim directories.  If the shim you want to use is already there, you can skip to Step 3: Copy the Configuration Files from Cluster to Shim.
  2. On the Customer Portal home page, sign in using the Pentaho support user name and password provided to you in your Pentaho Welcome Packet. 
  3. In the search box, enter the name of the shim you want. Select the shim from the search results. Optionally, you can browse the shims by version on the Downloads page. 
  4. Read all prerequisites, warnings, and instructions. On the bottom of the page in the Box widget, click the shim zip file to download it. 
  5. Unzip the downloaded shim package to the pentaho-big-data-plugin/hadoop-configurations directory.

Step 3: Copy the Configuration Files from Cluster to Shim

Copying configuration files from the cluster to the shim helps keep key configuration settings in sync with the cluster and reduces configuration errors.  Perform the following steps to copy these configuration files from the cluster to the shim:

  1. Back up the CDH shim files in the pentaho-big-data-plugin/hadoop-configurations/cdhxx directory.
  2. Copy the following configuration files from the cluster to the Pentaho shim directory, overwriting the existing Pentaho shim files.  On many CDH clusters, these files can be found under directories such as /etc/hadoop/conf and /etc/hive/conf, or retrieved with the Cloudera Manager Download Client Configuration feature.
  • core-site.xml
  • hbase-site.xml
  • hdfs-site.xml
  • hive-site.xml
  • mapred-site.xml
  • yarn-site.xml

Step 4: Edit the Shim Configuration Files

You need to verify or change authentication, Oozie, Hive, MapReduce, and YARN settings in these shim configuration files:

  • core-site.xml
  • config.properties
  • hive-site.xml
  • mapred-site.xml
  • yarn-site.xml

Edit config.properties (Unsecured Clusters)

If you are connecting to an unsecured cluster, verify that the following values are properly set, and set the Oozie proxy user if needed.

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/cdhxx directory and open the config.properties file.
  2. Add the following values (a filled-in sketch appears after these steps):
  • authentication.superuser.provider: Set to NO_AUTH.
  • pentaho.oozie.proxy.user: Set to the proxy user's name if you plan to access the Oozie service through a proxy; otherwise, leave it set to oozie.
  3. Save and close the file.
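
As a quick sketch, the resulting entries in config.properties would look like the following, assuming you keep the default oozie proxy user:

authentication.superuser.provider=NO_AUTH
pentaho.oozie.proxy.user=oozie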

Edit config.properties (Secured Clusters)

If you are connecting to a secured cluster, add Kerberos information to the config.properties file.  If you plan to use secure impersonation to access your cluster, see Use Secure Impersonation to Access a Cloudera Cluster before editing the config.properties file.

Perform the following steps to add Kerberos information to the config.properties file: 

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/cdhxx directory and open the config.properties file.
  2. Add these values (a filled-in sketch appears after these steps):
  • authentication.superuser.provider: Set to cdh-kerberos.  This value should match the authentication.kerberos.id value.
  • authentication.kerberos.principal: Set to the Kerberos principal.
  • authentication.kerberos.password: Set to the Kerberos password.  Set either the password or the keytab location, not both.
  • authentication.kerberos.keytabLocation: Set to the Kerberos keytab location.  Set either the password or the keytab location, not both.
  • pentaho.oozie.proxy.user: Set to the proxy user's name if you plan to access the Oozie service through a proxy; otherwise, leave it set to oozie.
  3. Save and close the file.
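
Here is a minimal sketch of the resulting entries, assuming a keytab-based login.  The ID, principal, and keytab path shown (cdh-kerberos, jdoe@EXAMPLE.COM, and /opt/pentaho/keytabs/jdoe.keytab) are placeholder assumptions; substitute the values for your environment:

authentication.kerberos.id=cdh-kerberos
authentication.superuser.provider=cdh-kerberos
authentication.kerberos.principal=jdoe@EXAMPLE.COM
authentication.kerberos.keytabLocation=/opt/pentaho/keytabs/jdoe.keytab
pentaho.oozie.proxy.user=oozie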

Edit hive-site.xml

Follow these instructions to set the location of the Hive metastore in the hive-site.xml file:

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/cdhxx directory and open the hive-site.xml file.
  2. Add the following value (an example appears after these steps):
  • hive.metastore.uris: Set to the location of your Hive metastore if it differs from what is on the cluster.
  3. Save and close the file.
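
For illustration, here is a sketch of this property with a hypothetical metastore host (metastore.example.com) listening on the common default Thrift port 9083:

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore.example.com:9083</value>
</property>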

Edit mapred-site.xml

Edit the mapred-site.xml file to indicate where the job history logs are stored and to allow MapReduce jobs to run across platforms. 

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/cdhxx directory and open the mapred-site.xml file.
  2. Verify that the mapreduce.jobhistory.address and mapreduce.app-submission.cross-platform properties are in the mapred-site.xml file.  If they are not in the file, add them as follows (an example of the job history address appears after these steps).
  • mapreduce.jobhistory.address: Set to the location where job history logs are stored.
  • mapreduce.app-submission.cross-platform: Add this property and set it to true to allow MapReduce jobs to run on either Windows client or Linux server platforms:

<property>
   <name>mapreduce.app-submission.cross-platform</name>
   <value>true</value>
</property>
  3. Save and close the file.
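
As an example of the first property, here is a sketch assuming a hypothetical job history server host (jobhistory.example.com) on the default port 10020:

<property>
   <name>mapreduce.jobhistory.address</name>
   <value>jobhistory.example.com:10020</value>
</property>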

Edit yarn-site.xml

Verify that the following parameters are set in the yarn-site.xml file:

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/cdhxx directory and open the yarn-site.xml file.
  2. Add the following values (a sketch of example entries appears after these steps):
  • yarn.application.classpath: Add the classpaths you need to run YARN applications.  Use commas to separate multiple paths.
  • yarn.resourcemanager.hostname: Change to the hostname of the resource manager in your environment.
  • yarn.resourcemanager.address: Change to the hostname and port for your environment.
  • yarn.resourcemanager.admin.address: Change to the hostname and port for your environment.
  3. Save and close the file.
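
For reference, here is a sketch of the resource manager entries, assuming a hypothetical host (rm.example.com) and the standard YARN default ports 8032 and 8033:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm.example.com:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>rm.example.com:8033</value>
</property>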

Connect to a Hadoop Cluster with the PDI Client

Once you have set up your shim, you must make it active, then configure and test the connection to the cluster. For details on setting up the connection, see the article Connect to a Hadoop Cluster with the PDI Client.

Connect Other Pentaho Components to the Cloudera Cluster

These instructions explain how to create and test a connection to the cluster in the Pentaho Server, PRD, and PME.  Creating and testing a connection to the cluster in these components involves two tasks:

  • Set the active shim on PRD, PME, and the Pentaho Server
  • Create and test the cluster connections

Set the Active Shim on PRD, PME, and the Pentaho Server

Modify the plugin.properties file to set the active shim for the Pentaho Server, PRD, and PME.

  1. Stop the component.
  2. Locate the pentaho-big-data-plugin directory for your component.
  3. Open the plugin.properties file in the pentaho-big-data-plugin directory.
  4. Set the active.hadoop.configuration property to the directory name of the shim you want to make active.  Here is an example:
active.hadoop.configuration=cdh512
  5. Save and close the plugin.properties file.
  6. Restart the component.

Create and Test Connections

Connection tests for each component appear in the following list:

  • Pentaho Server for DI: Create a transformation in the PDI client and run it remotely.
  • Pentaho Server for BA: Create a connection to the cluster in the Data Source Wizard.
  • PME: Create a connection to the cluster in PME.
  • PRD: Create a connection to the cluster in PRD.

Once you've connected to the cluster and its services properly, provide connection information to users who need access to the cluster and its services.  Those users can only obtain access from computers that have been properly configured to connect to the cluster.

Here is what they need to connect:

  • Hadoop distribution and version of the cluster
  • HDFS, JobTracker, ZooKeeper, and Hive2/Impala hostnames, IP addresses, and port numbers
  • Oozie URL (if used)
  • Users also require the appropriate permissions to access the directories they need on HDFS.  This typically includes their home directory and any other required directories.

They might also need more information depending on the job entries, transformation steps, and services they use.  Here's a more detailed list of information that your users might need from you.

Notes

The following are special topics for CDH.

CDH 5.4 Notes

The following notes address issues with CDH 5.4.

Simba Driver Support Note

If you are using Pentaho 6.0 or later, the CDH 5.4 shim supports the Cloudera JDBC Simba driver: Impala JDBC Connector 2.5.28 for Cloudera Enterprise.  This driver replaces the Apache Hive JDBC driver supported in previous versions of the CDH 5.4 shim.

In the Database connection window, you will need to select the Cloudera Impala option.  If Impala is secured on your cluster, you also need to supply KrbHostFQDN, KrbServiceName, and KrbRealm in the Options tab.  For more information on how to set up a database connection, see the database connection articles at help.pentaho.com.
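
For illustration, hypothetical values for these options might look like the following; the host and realm are placeholders, and impala is the typical default service name:

KrbHostFQDN=impalad01.example.com
KrbServiceName=impala
KrbRealm=EXAMPLE.COM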

You will need to install the driver in the shim directory for each Pentaho component (for example, the PDI client, Pentaho Server, or PRD) that you want to use.

  1. Download the Impala JDBC Connector 2.5.28 for Cloudera Enterprise driver.
  2. Copy the ImpalaJDBC41.jar to the pentaho-big-data-plugin/hadoop-configurations/cdhxx/lib directory.
  3. Stop and restart the component.

CDH 5.3 Notes

The following notes address issues with CDH 5.3.

Configuring High Availability for CDH 5.3

If you are configuring CDH 5.3 to be used in High Availability mode, we recommend that you use the Cloudera Manager "Download Client Configuration" feature.  The Download Client Configuration feature provides a convenient way to get configuration files from the cluster for a service (such as HBase, HDFS, or YARN).  Use this feature to download and unzip the configuration zip files to the pentaho-big-data-plugin/hadoop-configurations/cdh53 directory.

For more information on how to do this, see Cloudera documentation: http://www.cloudera.com/content/cloudera/en/documentation/core/v5-3-x/topics/cm_mc_client_config.html

For troubleshooting cluster and service configuration issues, refer to Big Data Issues.