
Connect to Azure HDInsight

This article explains how to connect the Pentaho Server to a Microsoft Azure HDInsight cluster. Pentaho supports both the HDFS file system and the WASB (Windows Azure Storage Blob) extension for Azure HDInsight. WASB is the default file system for Azure HDInsight.

The following tasks make up the process for connecting an Azure HDInsight Hadoop cluster to the Pentaho Server:

  1. Edit Cluster Configuration Files
  2. Configure Pentaho Component Shims
  3. Create a Connection to the HDInsight Cluster

After creating a connection, we suggest that you test it. If you are not able to connect, refer to the Troubleshooting section. To ensure a good connection, we recommend performing a few tasks before you begin the connection process.

Before You Begin 

Before you begin, you will need to perform the following tasks:

  1. Verify Support
    Check the Components Reference to verify that your Pentaho version supports your version of the Azure HDInsight cluster.

  2. Set Up an Azure HDInsight cluster
    Pentaho can connect to secured and unsecured Azure HDInsight clusters:
    1. Configure an Azure HDInsight cluster. See the Microsoft Azure documentation if you need help.
    2. Install any required services and service client tools.
    3. Test the cluster.

  3. Get Connection Information
    Ask your Hadoop Administrator for connection information to the cluster and related services. This information may also be available in your cluster management tools or Ambari.

  4. Add a YARN User to Superuser Group
    Add the YARN user on the cluster to the group defined by the dfs.permissions.superusergroup property. This property is located in the hdfs-site.xml file on your cluster or in the cluster management application.
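For example, suppose dfs.permissions.superusergroup resolves to hdfs on your cluster (the group name here is illustrative; the stock Hadoop default is supergroup). You might verify the property and then add the yarn user to that group on the cluster node, roughly as sketched below. Adjust the group name and host for your environment.

<!-- hdfs-site.xml on the cluster: the superuser group in effect -->
<property>
    <name>dfs.permissions.superusergroup</name>
    <value>hdfs</value>
</property>

# Run as root on the cluster node; "hdfs" is the example group from above
usermod -a -G hdfs yarn
# Confirm the group membership that HDFS reports for the yarn user
hdfs groups yarn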

Although Pentaho often supports one or more versions of a Hadoop distribution, the shim for Azure HDInsight is not included in the Pentaho Suite download, and must be downloaded separately from the Pentaho Customer Support Portal. You will need to sign in to view the available downloads.

Edit Cluster Configuration Files

The Oozie user runs Oozie jobs by default. However, if you use PDI to start an Oozie job, you must add the PDI user to the oozie-site.xml file on the cluster so that the PDI user can execute the program by proxy. To use the Oozie service, complete the following steps:

  1. Open the oozie-site.xml file on the cluster.
  2. Add the following lines of code to the oozie-site.xml file on the cluster, substituting <your_pdi_user_name> with the PDI user name, such as jdoe.
<property>
    <name>oozie.service.ProxyUserService.proxyuser.<your_pdi_user_name>.groups</name>
    <value>*</value>
</property>
<property>
    <name>oozie.service.ProxyUserService.proxyuser.<your_pdi_user_name>.hosts</name>
    <value>*</value>
</property>
  3. Save and close the file.
  4. Create or edit an existing job.properties file to point to your hostnames and folders in the cluster. Here is an example:
nameNode=wasb://<Your server name>@eastorageacct2.blob.core.windows.net/
jobTracker=hn1-pentah.trhf3tzg3kne3osozhcc4hsv1h.cx.internal.cloudapp.net:8050
queueName=default
examplesRoot=examples
oozie.wf.application.path=<Cluster folder name>/oozie/examples/apps/map-reduce
outputDir=<Your working directory>/oozie/output
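With the job.properties file in place, you can typically submit the job with the Oozie command-line client. The host below is a placeholder and 11000 is the usual Oozie default port; substitute the values for your cluster.

oozie job -oozie http://<your_oozie_host>:11000/oozie -config job.properties -run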

Configure Pentaho Component Shims 

You must configure the shim in each of the following Pentaho components and on each computer from which Pentaho will be used to connect to the cluster:

  • PDI client (Spoon)
  • Pentaho Server

As a best practice, configure the shim in the PDI client first. The PDI client has features that will help you test your configuration. Then, copy the tested PDI client configuration files to other components, making changes if necessary. 

You can also opt to go through these instructions for each Pentaho component instead of copying the shim files from the PDI client. If you do not plan to connect to the cluster from the PDI client, you can configure the shim in another component first.

Perform the following steps to configure the shim on each component:

  1. Locate the Pentaho Big Data Plugin and Shim Directories.
  2. Select the Correct Shim.
  3. Copy the Configuration Files from Cluster to Shim.
  4. Edit the Shim Configuration Files.

Step 1: Locate the Pentaho Big Data Plugin and Shim Directories

Shims and other parts of the Pentaho Adaptive Big Data Layer are in the Pentaho Big Data Plugin directory. The path to this directory differs by component. You need to know the locations of this directory, in each component, to complete shim configuration and testing tasks.

In the following table, <pentaho home> in the shim locations for each component is the directory where Pentaho is installed:

Components Location of Pentaho Big Data Plugin Directory
PDI client <pentaho home>/design-tools/data-integration/plugins/pentaho-big-data-plugin
Pentaho Server <pentaho home>/server/pentaho-server/pentaho-solutions/system/kettle/plugins/pentaho-big-data-plugin

Shims are in the pentaho-big-data-plugin/hadoop-configurations directory. Shim directory names consist of a three or four letter Hadoop Distribution abbreviation followed by the Hadoop Distribution's version number. The version number does not contain a decimal point. For example, the shim directory named cdh59 is the shim for the CDH (Cloudera Distribution for Hadoop), version 5.9. The following table lists the shim directory abbreviations for Hadoop distributions:

Abbreviation Shim
cdh Cloudera's Distribution of Apache Hadoop
emr Amazon Elastic Map Reduce
hdi Microsoft Azure HDInsight
hdp Hortonworks Data Platform
mapr MapR

You will not see the hdi directory until you have unpacked the download as outlined in Step 2.

Step 2: Select the Correct Shim 

Although Pentaho often supports one or more versions of a Hadoop distribution, the shim for Azure HDInsight is not included in the Pentaho Suite download. It, like other supported shim versions, can be downloaded from the Pentaho Customer Support Portal.

Before you begin, verify that the shim you want is supported by your version of Pentaho shown in the Components Reference.

  1. If you have not downloaded the Azure HDInsight shim, go to the Customer Portal. Sign in using the Pentaho support user name and password provided to you in your Pentaho Welcome Packet. 
  2. In the search box, enter the name of the shim you want. Select the shim from the search results. Optionally, you can browse the shims by version on the Downloads page. 
  3. Read all prerequisites, warnings, and instructions. On the bottom of the page in the Box widget, click the shim zip file to download it.
  4. Navigate to the pentaho-big-data-plugin/hadoop-configurations directory to view the shim directories. 
  5. Unzip the downloaded shim package into the pentaho-big-data-plugin/hadoop-configurations directory.
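For example, on a Linux machine with the PDI client, unpacking the download might look like the following. The archive name is a placeholder; use the actual file name you downloaded.

cd <pentaho home>/design-tools/data-integration/plugins/pentaho-big-data-plugin/hadoop-configurations
unzip ~/Downloads/<hdi-shim-package>.zip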

Step 3: Copy the Configuration Files from Cluster to Shim 

Copying configuration files from the cluster to the shim helps keep key configuration settings in sync with the cluster and reduces configuration errors.

Perform the following steps to copy these configuration files from the cluster to the shim:

  1. Back up the existing HDI shim files in the pentaho-big-data-plugin/hadoop-configurations/hdixx directory. 
  2. Copy the following configuration files from the HDI cluster to pentaho-big-data-plugin/hadoop-configurations/hdixx (overwriting the existing files):
  • core-site.xml
  • hbase-site.xml
  • hdfs-site.xml
  • hive-site.xml
  • mapred-site.xml
  • yarn-site.xml
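You can copy these files any way you like. For example, from a machine with SSH access to the cluster's head node, something like the following might work; the user, host, and source paths are placeholders based on typical HDP-style configuration locations, so adjust them for your cluster.

# Run from the machine hosting the Pentaho component
scp <ssh_user>@<hdi_headnode>:/etc/hadoop/conf/{core-site.xml,hdfs-site.xml,mapred-site.xml,yarn-site.xml} pentaho-big-data-plugin/hadoop-configurations/hdixx/
scp <ssh_user>@<hdi_headnode>:/etc/hive/conf/hive-site.xml pentaho-big-data-plugin/hadoop-configurations/hdixx/
scp <ssh_user>@<hdi_headnode>:/etc/hbase/conf/hbase-site.xml pentaho-big-data-plugin/hadoop-configurations/hdixx/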

Step 4: Edit the Shim Configuration Files 

You need to verify or change authentication, Oozie, Hive, MapReduce, and YARN settings in the following files:

  • core-site.xml
  • config.properties
  • hbase-site.xml
  • hive-site.xml
  • mapred-site.xml
  • yarn-site.xml

Verify or Edit config.properties

To connect to a cluster, perform the following steps to verify that the proxy user values are properly set.

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/hdixx directory and open the config.properties file.
  2. Add the following values:
Parameter Values
authentication.superuser.provider NO_AUTH
pentaho.oozie.proxy.user Add a proxy user's name to access the Oozie service through a proxy; otherwise, leave it set to oozie.
java.system.hdp.version The HDP version used by your HDInsight cluster. For HDP 2.2, this is 2.2.0.0-2041.
  3. Save and close the file.
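After these edits, the relevant entries in config.properties might look like the following. The HDP version string is the example value from the table above; use the version that matches your cluster.

authentication.superuser.provider=NO_AUTH
pentaho.oozie.proxy.user=oozie
java.system.hdp.version=2.2.0.0-2041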

Edit core-site.xml

To use WASB storage, perform the following steps for updating the core-site.xml file:

  1. Obtain an unencrypted key from the Azure HDInsight cluster.
  2. Set the following properties in the core-site.xml file:
Parameter Values
fs.AbstractFileSystem.wasb.impl
<property>
    <name>fs.AbstractFileSystem.wasb.impl</name>
    <value>org.apache.hadoop.fs.azure.Wasb</value>
</property>
fs.azure.account.key.eastorageacct2.blob.core.windows.net
<property>
    <name>fs.azure.account.key.eastorageacct2.blob.core.windows.net</name>
    <value>VR9p2ca4enpOrS2/CVOuwN/5+4eFS7nLjudXwFD21T5wA9yrtAuAJrnmoSbjRYPUSwh8d8HKEGPCuKzv4so99A==</value>
</property>
fs.azure.account.keyprovider.eastorageacct2.blob.core.windows.net
<property>
    <name>fs.azure.account.keyprovider.eastorageacct2.blob.core.windows.net</name>
    <value>org.apache.hadoop.fs.azure.ShellDecryptionKeyProvider</value>
</property>
  3. Save and close the file.
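Once these properties are in place, PDI steps and jobs can typically address cluster storage with WASB URIs of the following form; the container, storage account, and path are placeholders for your own values.

wasb://<container>@<storage_account>.blob.core.windows.net/<path/to/file>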

Edit hbase-site.xml

Edit the location of the temporary directory in the hbase-site.xml file to create an HBase local storage directory as follows:

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/hdixx directory and open the hbase-site.xml file.
  2. Add the following value:
Parameter Value
hbase.tmp.dir /tmp/hadoop/hbase
  3. Save and close the file.
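The resulting entry in hbase-site.xml might look like this, using the value from the table above:

<property>
    <name>hbase.tmp.dir</name>
    <value>/tmp/hadoop/hbase</value>
</property>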

Edit hive-site.xml 

Verify that the hive.metastore.uris parameter is set in the hive-site.xml file through the following steps:

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/hdixx directory and open the hive-site.xml file.
  2. Add the following value:
Parameter Value
hive.metastore.uris Set this to the location of your hive metastore. 
  3. Save and close the file.
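A sample entry is shown below. The hostname is a placeholder and 9083 is the usual Hive metastore Thrift port; substitute the values reported by your cluster.

<property>
    <name>hive.metastore.uris</name>
    <value>thrift://<your_metastore_host>:9083</value>
</property>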

Edit hdfs-site.xml 

Set the dfs.internal.nameservices parameter value in the hdfs-site.xml file through the following steps:

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/hdixx directory and open the hdfs-site.xml file.
  2. Add this value:
Parameter Value
dfs.internal.nameservices Set the value to your alias name for the HDFS NameNodes.
  3. Save and close the file.
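A sample entry is shown below; the nameservice alias is a placeholder for the alias used by your cluster.

<property>
    <name>dfs.internal.nameservices</name>
    <value><your_nameservice_alias></value>
</property>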

Edit mapred-site.xml 

Edit the mapred-site.xml file to indicate where the job history logs are stored and to allow MapReduce jobs to run across platforms as follows:

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/hdixx directory and open the mapred-site.xml file.
  2. Add the following values:
Parameter Value
mapreduce.jobhistory.address Set this to the host and port where the MapReduce JobHistory Server runs.
mapreduce.application.classpath

Add classpath information. Here is an example:

<property>
 <name>mapreduce.application.classpath</name>
 <value>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*
   :$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*
   :$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*
   :$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*
   :$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*
   :/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure
 </value>
</property>
mapreduce.application.framework.path

Set the framework path. Here is an example:

<property>
    <name>mapreduce.application.framework.path</name>
    <value>/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework</value>
</property>
mapreduce.app-submission.cross-platform Add this property to allow MapReduce jobs to run on either Windows client or Linux server platforms:
<property>
    <name>mapreduce.app-submission.cross-platform</name>
    <value>true</value>
</property>
mapreduce.jobhistory.webapp.address
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>headnodehost:19888</value>
</property>
  3. Save and close the file.
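For reference, a mapreduce.jobhistory.address entry might look like the following. The hostname matches the webapp example above, and 10020 is the usual JobHistory Server IPC port; adjust both for your cluster.

<property>
    <name>mapreduce.jobhistory.address</name>
    <value>headnodehost:10020</value>
</property>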

Edit yarn-site.xml 

Verify that the following parameters are set in the yarn-site.xml file:

  1. Navigate to the pentaho-big-data-plugin/hadoop-configurations/hdixx directory and open the yarn-site.xml file.
  2. Add these values:
Parameter Values
yarn.application.classpath

Add the classpaths needed to run YARN applications. Use commas to separate multiple paths. Here is an example:

<property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*,
        /usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,
        /usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,
        /usr/hdp/current/hadoop-yarn-client/lib/*</value>
</property>
yarn.resourcemanager.hostname Update the hostname in your environment or use the default: sandbox.hortonworks.com
yarn.resourcemanager.address Update the hostname and port for your environment.
yarn.resourcemanager.admin.address Update the hostname and port for your environment.
  3. Save and close the file.
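Sample entries for the Resource Manager parameters are shown below. The hostname and ports are illustrative (8050 matches the jobTracker port in the earlier job.properties example, and 8141 is a common default for the admin address); use the values from your cluster.

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>headnodehost</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>headnodehost:8050</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>headnodehost:8141</value>
</property>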

Create a Connection to the HDInsight Cluster 

Creating a connection to the cluster involves setting an active shim, then configuring and testing the connection to the cluster. Making a shim active means it is used by default when you access a cluster. When you initially install Pentaho, no shim is active by default. You must choose a shim to make active before you can connect to a cluster. Only one shim can be active at a time. The way you make a shim active, as well as the way you configure and test the cluster connection, differs by Pentaho component as follows:

  • Create and Test a Connection to the Cluster in the PDI Client
  • Connect Other Pentaho Components to the Azure HDInsight Cluster

Create and Test a Connection to the Cluster in the PDI Client

Creating and testing a connection to the Azure HDInsight cluster from the PDI client involves two tasks:

  • Setting the active shim in the PDI client
  • Configuring and testing the cluster connection

Set the Active Shim in the PDI Client

Set the active shim when you want to connect to a Hadoop cluster the first time, or when you want to switch clusters. To set a shim as active, complete the following steps:

  1. Start the PDI client.
  2. Select Hadoop Distribution... from the Tools menu.

  3. In the Hadoop Distribution window, select Microsoft Azure HDI.
  4. Click OK.
  5. Stop, then restart the PDI client.

Configure and Test the Cluster Connection 

You must provide connection details for the cluster and services you will use, such as the hostname for HDFS or the URL for Oozie. Then you can use a built-in tool to test your configuration to find and troubleshoot common configuration issues, such as wrong hostnames and user permission errors.

Connection settings are set in the Hadoop cluster window. You can get to the settings from several places, but in these instructions, you will get to the Hadoop cluster window from the View tab in a transformation or job. Complete the following steps to configure and test a connection:

  1. In the PDI client, create a new job or transformation or open an existing one.
  2. Click the View tab.

  3. Right-click the Hadoop clusters folder, then click New Cluster. The Hadoop cluster window appears.

  4. Enter the connection information in the Hadoop cluster window. You can get this information from your Hadoop administrator.

As a best practice, use Kettle variables for each connection parameter value to mitigate risks associated with running jobs and transformations in environments that are disconnected from the repository.
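For example, you might define the connection values once in kettle.properties (in the .kettle directory of the user running PDI) and reference them in the Hadoop cluster window. The variable names below are illustrative, not required names.

# kettle.properties
hdi.hdfs.hostname=<your_storage_hostname>
hdi.hdfs.port=<your_storage_port>
hdi.jobtracker.hostname=<your_resource_manager_hostname>
hdi.jobtracker.port=8050

In the fields of the Hadoop cluster window that accept variables, enter ${hdi.hdfs.hostname}, ${hdi.hdfs.port}, and so on instead of literal values.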

Option Definition
Cluster Name Name that you assign the cluster connection.
Storage

Specifies the type of storage you want to use for this connection. Use the drop-down box to select one of the following:

  • HDFS: Hadoop Distributed File System, which is typically used for connecting to a Hadoop cluster. This is the default storage selection.
  • MapR: MapR Converged Data Platform. When selected, the fields in the storage and JobTracker sections are disabled because these parameters are not needed to configure MapR.
  • WASB: Windows Azure Storage Blob, which is only available for connecting to Azure HDInsight.
Hostname (in selected storage section) Hostname for the HDFS or WASB node in your Hadoop cluster.
Port (in selected storage section)

Port for the HDFS or WASB node in your Hadoop cluster.  

If your cluster has been enabled for high availability (HA), then you do not need a port number. Clear the port number.

Username (in selected storage section) Username for the HDFS or WASB node.
Password (in selected storage section) Password for the HDFS or WASB node.
Hostname (in JobTracker section) Hostname for the JobTracker node in your Hadoop cluster.  If you have a separate job tracker node, type in the hostname here.
Port (in JobTracker section) Port for the JobTracker in your Hadoop cluster.
Hostname (in ZooKeeper section) Hostname for the ZooKeeper node in your Hadoop cluster. Supply this only if you want to connect to a ZooKeeper service.
Port (in ZooKeeper section) Port for the ZooKeeper node in your Hadoop cluster. Supply this only if you want to connect to a ZooKeeper service.
URL (in Oozie section) Oozie client address. Supply this only if you want to connect to the Oozie service.
  5. Click the Test button. Test results appear in the Hadoop Cluster Test window. If you have errors, see the Troubleshoot Cluster and Service Configuration Issues section to resolve the issues, then test again.

  6. Click Close on the Hadoop Cluster Test window, then click OK to close the Hadoop cluster window.

Copy the PDI Client Shim Files to Other Pentaho Components 

Once your connection has been properly configured on the PDI client, copy the configuration files to the shim directories in other Pentaho components. Copy the following configuration files from the pentaho-big-data-plugin/hadoop-configurations/hdixx directory on the PDI client to the pentaho-big-data-plugin/hadoop-configurations/hdixx directory on the Pentaho Server:

  • hbase-site.xml
  • core-site.xml
  • hdfs-site.xml
  • hive-site.xml
  • mapred-site.xml
  • yarn-site.xml

Connect Other Pentaho Components to the Azure HDInsight Cluster 

These instructions explain how to create and test a connection to the cluster in the Pentaho Server. Creating and testing a connection to the other components involves two tasks:

  • Setting the active shim on the Pentaho Server
  • Configuring and testing the cluster connections

Set the Active Shim on the Pentaho Server

Modify the plugin.properties file to set the active shim for the Pentaho Server.

  1. Stop the component.
  2. Navigate to the pentaho-big-data-plugin directory for your component.
  3. Open the plugin.properties file.
  4. Set the active.hadoop.configuration property to the directory name of the shim you want to make active. Here is an example:
active.hadoop.configuration=hdi35
  5. Save and close the plugin.properties file.
  6. Restart the component.

Create and Test Connections 

Connection tests appear in the following table:

Component Test
Pentaho Server for DI

Create a transformation in the PDI client and run it remotely.

Pentaho Server for BA Create a connection to the cluster in the Data Source Wizard.

Once you have connected to the cluster and its services properly, provide connection information to users who need access to the cluster and its services. Those users can only obtain access from computers that have been properly configured to connect to the cluster.

These users need the following information to connect:

  • Hadoop distribution and version of the cluster
  • HDFS, JobTracker, ZooKeeper, and Hive2/Impala hostnames, IP addresses, and port numbers
  • Oozie URL (if used)
  • Users also require the appropriate permissions to access the directories they need on HDFS. This typically includes their home directory and any other required directories.

They might also need more information depending on the job entries, transformation steps, and services they use. See Hadoop Connection and Access Information List for a more detailed list of information that your users might need from you.

Troubleshoot Cluster and Service Configuration Issues 

The issues in the following tables explain how to resolve common configuration problems. 

Shim and Configuration Issues 

Symptoms Common Causes Common Resolutions

No shim

  • Active shim was not selected.
  • Shim was installed in the wrong place.
  • Shim name was not entered correctly in the plugin.properties file.
  • Verify that the plugin name that is in the plugin.properties file matches the directory name in the pentaho-big-data-plugin/hadoop-configurations directory. 
  • Make sure the shim is installed in the correct place.
  • Check the instructions for your Hadoop distribution in the Set Up Pentaho to Connect to an Apache Hadoop Cluster article for more details on how to verify the plugin name and shim installation directory.
Shim does not load
  • Required licenses are not installed.
  • You tried to load a shim that is not supported by your version of Pentaho.
  • If you are using MapR, the client might not have been installed correctly. 
  • Configuration file changes were made incorrectly.
  • Verify the required licenses are installed and have not expired.
  • Verify that the shim is supported by your version of Pentaho. Find your version of Pentaho, then look for the corresponding Components Reference for more details.
  • Verify that configuration file changes were made correctly. Contact your Hadoop Administrator or see the Set Up Pentaho to Connect to an Apache Hadoop Cluster article.
  • If you are connecting to MapR, verify that the client was properly installed. See MapR documentation for details.
  • Restart the PDI client, then test again.
  • If this error continues to occur, files might be corrupted. Download a new copy of the shim from the Pentaho Customer Support Portal.
The file system's URL does not match the URL in the configuration file. Configuration files (*-site.xml files) were not configured properly. Verify that the configuration files, especially core-site.xml, are configured correctly. See the instructions for your Hadoop distribution in the Set Up Pentaho to Connect to an Apache Hadoop Cluster article for details.

Connection Problems 

Symptoms Common Causes Common Resolutions
Hostname incorrect or not resolving properly.
  • No hostname has been specified.
  • Hostname/IP Address is incorrect.
  • Hostname is not resolving properly in the DNS.
  • Verify that the Hostname/IP address is correct.
  • Check the DNS to make sure the Hostname is resolving properly. 
Port number is incorrect or missing.
  • Port number is incorrect.
  • Port number is not numeric.
  • A port number is not needed for HA clusters.
  • No port number has been specified.
  • Verify that the port number is correct.
  • Determine whether your cluster has been enabled for high availability (HA). If it has, then you do not need a port number. Clear the port number and retest the connection.
Cannot connect.
  • Firewall is a barrier to connecting.
  • Other networking issues are occurring.
  • Verify that a firewall is not impeding the connection and that there aren't other network issues. 

Directory Access or Permissions Issues 

Symptoms Common Causes Common Resolutions

Cannot access directory.

  • Authorization and/or authentication issues.
  • Directory is not on the cluster.
  • Make sure the user has been granted read, write, and execute access to the directory. 
  • Ensure security settings for the cluster and shim allow access.
  • Verify the hostname and port number are correct for the Hadoop File System's namenode. 

Cannot create, read, update, or delete files or directories

Authorization and/or authentication issues.

  • Make sure the user has been granted execute access to the directory. 
  • Ensure security settings for the cluster and shim allow access.
  • Verify that the hostname and port number are correct for the Hadoop File System's namenode. 
Test file cannot be overwritten. Pentaho test file is already in the directory.
  • A file with the same name as the Pentaho test file is already in the directory. The test file is used to make sure that the user can create, write, and delete in the user's home directory.
  • The test was run, but the file was not deleted. You will need to manually delete the test file. Check the log for the test file name.

Oozie Issues 

Symptoms Common Causes Common Resolutions

Cannot connect to Oozie.

  • Firewall issue.
  • Other networking issues.
  • Oozie URL is incorrect.
  • Verify that the Oozie URL was correctly entered.
  • Verify that a firewall is not impeding the connection. 

ZooKeeper Problems 

Symptoms Common Causes Common Resolutions

Cannot connect to ZooKeeper.

  • Firewall is hindering connection with the ZooKeeper service.
  • Other networking issues.
  • Verify that a firewall is not impeding the connection. 

ZooKeeper hostname or port not found or doesn't resolve properly.

  • Hostname/IP address or port number is missing or incorrect.
  • Try to connect to the ZooKeeper nodes using ping or another method.
  • Verify that the Hostname/IP address and port numbers are correct.