Hitachi Vantara Lumada and Pentaho Documentation

Use Hadoop with Pentaho


Pentaho provides a complete big data analytics solution that supports the entire big data analytics process. From big data aggregation, preparation, and integration, to interactive visualization, analysis, and prediction, Pentaho allows you to harvest the meaningful patterns buried in big data stores. Analyzing your big data sets gives you the ability to identify new revenue sources, develop loyal and profitable customer relationships, and run your organization more efficiently and cost effectively.

Pentaho, big data, and Hadoop

The term big data applies to very large, complex, or dynamic datasets that need to be stored and managed over a long time. To derive benefits from big data, you need the ability to access, process, and analyze data as it is being created. However, the size and structure of big data make it very inefficient to maintain and process using traditional relational databases.

Big data solutions re-engineer the components of traditional databases (data storage, retrieval, query, and processing) and massively scale them.

Pentaho big data overview

Pentaho delivers speed-of-thought analysis against even the largest big data stores by focusing on the features that drive performance.

  • Instant access: Pentaho provides visual tools to make it easy to define the sets of data that are important to you for interactive analysis. These data sets and associated analytics can be easily shared with others, and as new business questions arise, new views of data can be defined for interactive analysis.

  • High performance platform: Pentaho is built on a modern, lightweight, high performance platform that fully leverages 64-bit, multi-core processors and large memory spaces to make efficient use of the power of contemporary hardware.

  • Extreme-scale, in-memory caching: Pentaho is unique in leveraging external data grid technologies, such as Infinispan and Memcached, to load vast amounts of data into memory so that it is instantly available for speed-of-thought analysis.

  • Federated data integration: Data can be extracted from multiple sources, including big data and traditional data stores, integrated together, and then flowed directly into reports, without needing an enterprise data warehouse or data mart.

About Hadoop

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.

A Hadoop platform consists of a Hadoop kernel, a MapReduce model, a distributed file system, and often a number of related projects, such as Apache Hive, Apache HBase, and others.

The Hadoop Distributed File System, commonly referred to as HDFS, is a Java-based, distributed, scalable, and portable file system for the Hadoop framework.
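
Because the MapReduce model is central to how Hadoop processes data, the following minimal word-count example illustrates it; input and output would normally live in HDFS. This is a generic Hadoop sketch for illustration only, not a Pentaho component, and the class names and command-line arguments are placeholder assumptions.

    // Minimal sketch of the MapReduce programming model: count word occurrences.
    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: emit (word, 1) for every token in each input line.
        public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer tokens = new StringTokenizer(value.toString());
                while (tokens.hasMoreTokens()) {
                    word.set(tokens.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce phase: sum the counts emitted for each word.
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable count : values) {
                    sum += count.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        // Driver: wires the mapper and reducer into a job; HDFS paths come from the command line.
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // e.g. an HDFS output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }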

Get started with Hadoop and PDI

Pentaho Data Integration (PDI) can operate in two distinct modes: job orchestration and data transformation. Within PDI, they are referred to as jobs and transformations.

PDI jobs sequence a set of entries that encapsulate actions. An example of a PDI big data job would be to check for the existence of new log files, copy the new files to HDFS, execute a MapReduce task to aggregate the web logs into a clickstream, and stage that clickstream data in an analytic database.
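
As a hedged illustration of the "copy the new files to HDFS" portion of that example, the following sketch performs the same copy with the plain Hadoop Java client; inside PDI, this step would normally be handled by a job entry such as Hadoop Copy Files. The NameNode URI and the paths are placeholder assumptions.

    // Hedged sketch: copy a local web log into HDFS with the Hadoop Java client.
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CopyLogsToHdfs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // "hdfs://namenode:8020" is an assumed NameNode address; use your cluster's.
            try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf)) {
                Path localLog = new Path("/var/log/web/access.log");  // assumed local source file
                Path hdfsDir = new Path("/data/raw/weblogs/");        // assumed HDFS target directory
                fs.copyFromLocalFile(false, true, localLog, hdfsDir); // keep the source, overwrite the target
            }
        }
    }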

PDI transformations consist of a set of steps that execute in parallel and operate on a stream of data columns. With the default Pentaho engine, columns usually flow from one system through the PDI engine, where new columns can be calculated or values can be looked up and added to the stream. The data stream is then sent to a receiving system like a Hadoop cluster, a database, or even the Pentaho Reporting engine.
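
For readers who prefer to see this flow in code, the following is a hedged sketch of running an already-designed transformation with the default Pentaho engine through the PDI (Kettle) Java API. The .ktr file name is a placeholder assumption; in practice, transformations are usually designed and run from the PDI client or the command-line tools.

    // Hedged sketch: execute a saved transformation (.ktr) with the default Pentaho engine.
    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.trans.Trans;
    import org.pentaho.di.trans.TransMeta;

    public class RunTransformation {
        public static void main(String[] args) throws Exception {
            KettleEnvironment.init();                                       // load PDI core services and plugins

            TransMeta transMeta = new TransMeta("weblog_to_warehouse.ktr"); // placeholder transformation file
            Trans trans = new Trans(transMeta);

            trans.execute(null);        // start all steps; they run in parallel on the row stream
            trans.waitUntilFinished();  // block until every step has finished processing rows

            if (trans.getErrors() > 0) {
                throw new IllegalStateException("Transformation finished with errors");
            }
        }
    }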

You can also run transformations using the Spark engine. Pentaho uses the Adaptive Execution Layer (AEL) to run transformations in different engines. AEL builds a transformation definition for Spark, which moves execution directly to the cluster, leveraging Spark’s ability to coordinate large amounts of data over multiple nodes. See Adaptive Execution Layer for details.

PDI job entries and transformation steps are described in Transformation step reference and Job entry reference.

PDI's big data plugin

The Pentaho Big Data plugin contains all of the job entries and transformation steps required for working with Hadoop, Cassandra, and MongoDB. PDI can be configured to communicate with most popular Hadoop distributions. See the Set up Pentaho to Connect to Hadoop Cluster section for more information. For a list of supported big data technologies, including which configurations of Hadoop are currently supported, see the Components Reference.

Connect to a Hadoop cluster in PDI client

You connect to a Hadoop cluster through a shim, which acts as an adapter between Pentaho and the cluster.

Configure PDI for Hadoop connections

You can edit a PDI properties file to manage a Hadoop configuration.
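
As a hedged illustration only: in some PDI releases the relevant file is plugin.properties under the Pentaho Big Data plugin folder, and the property that selects the active shim is active.hadoop.configuration. Both the path and the property name in the sketch below are assumptions; check the layout of your own installation. The sketch simply loads the file and prints that property.

    // Hedged sketch: read the assumed plugin.properties file and print the active shim.
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.Properties;

    public class ShowActiveHadoopConfig {
        public static void main(String[] args) throws Exception {
            // Assumed location; adjust to your PDI installation.
            String path = "data-integration/plugins/pentaho-big-data-plugin/plugin.properties";
            Properties props = new Properties();
            try (InputStream in = new FileInputStream(path)) {
                props.load(in);
            }
            // "active.hadoop.configuration" is assumed to name the Hadoop distribution (shim) PDI uses.
            System.out.println("Active Hadoop configuration: "
                    + props.getProperty("active.hadoop.configuration", "<not set>"));
        }
    }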

Hadoop connection and access information list

After you connect to and configure your Hadoop cluster, your users will need certain information and permissions to access the cluster and its services.

Use PDI outside and inside the Hadoop cluster

You and your users can run PDI both outside of a Hadoop cluster and within the nodes of a Hadoop cluster.

Learn more

Advanced topics

The following topics help to extend your knowledge of big data and Hadoop beyond basic setup and use:

Troubleshoot the Pentaho system

See our list of common problems and resolutions.
