Apache Beam

Apache Beam is a unified model for defining both batch and streaming data-parallel processing pipelines, as well as a set of language-specific SDKs for constructing pipelines and Runners for executing them on distributed processing backends, including Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow.

Overview

Beam provides a general approach to expressing embarrassingly parallel data processing pipelines and supports three categories of users, each of which has relatively disparate backgrounds and needs.

  1. End Users: Writing pipelines with an existing SDK and running them on an existing runner. These users want to focus on writing their application logic and have everything else just work.
  2. SDK Writers: Developing a Beam SDK targeted at a specific user community (Java, Python, Scala, Go, R, graphical, etc.). These users are language geeks and would prefer to be shielded from the details of the various runners and their implementations.
  3. Runner Writers: Supporting programs written against the Beam Model on an existing execution environment for distributed processing. These users would prefer to be shielded from the details of multiple SDKs.

The Beam Model

The model behind Beam evolved from a number of internal Google data processing projects, including MapReduce, FlumeJava, and MillWheel. This model was originally known as the “Dataflow Model”.

To learn more about the Beam Model (though still under the original name of Dataflow), see the World Beyond Batch: Streaming 101 and Streaming 102 posts on O’Reilly’s Radar site, and the VLDB 2015 paper.

The key concepts in the Beam programming model are (see the sketch after this list):

  • PCollection: represents a collection of data, which could be bounded or unbounded in size.
  • PTransform: represents a computation that transforms input PCollections into output PCollections.
  • Pipeline: manages a directed acyclic graph of PTransforms and PCollections that is ready for execution.
  • PipelineRunner: specifies where and how the pipeline should execute.
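
To make these concepts concrete, here is a minimal sketch using the Java SDK. The class name MinimalPipeline is hypothetical and not part of this repository; Create, Count, and the pipeline lifecycle shown are standard Beam Java APIs:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

// Hypothetical example class, not taken from this repository.
public class MinimalPipeline {
  public static void main(String[] args) {
    // PipelineOptions configure the pipeline, including which
    // PipelineRunner executes it (the DirectRunner by default).
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    // Create.of produces a bounded PCollection from in-memory data.
    PCollection<String> words = p.apply(Create.of("hello", "beam"));

    // Count.globally is a PTransform from one PCollection to another.
    PCollection<Long> total = words.apply(Count.globally());

    // run() hands the DAG of PTransforms and PCollections to the runner.
    p.run().waitUntilFinish();
  }
}
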
SDKs

Beam supports multiple language-specific SDKs for writing pipelines against the Beam Model.

Currently, this repository contains SDKs for both Java and Python.

Have ideas for new SDKs or DSLs? See the JIRA.

Runners

Beam supports executing programs on multiple distributed processing backends through PipelineRunners. Currently, the following PipelineRunners are available (see the runner-selection sketch after this list):

  • The DirectRunner runs the pipeline on your local machine.
  • The ApexRunner runs the pipeline on an Apache Hadoop YARN cluster (or in embedded mode).
  • The DataflowRunner submits the pipeline to the Google Cloud Dataflow service.
  • The FlinkRunner runs the pipeline on an Apache Flink cluster. The code was donated from dataArtisans/flink-dataflow and is now part of Beam.
  • The SparkRunner runs the pipeline on an Apache Spark cluster. The code was donated from cloudera/spark-dataflow and is now part of Beam.
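
As an illustration of how a PipelineRunner is chosen, here is a small sketch using the Java SDK. The class name RunnerSelection is hypothetical; setRunner and the --runner command-line flag are standard Beam Java mechanisms, with DirectRunner standing in for any of the runners above:

import org.apache.beam.runners.direct.DirectRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Hypothetical example class, not taken from this repository.
public class RunnerSelection {
  public static void main(String[] args) {
    // Runners are usually chosen on the command line, e.g. --runner=FlinkRunner.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();

    // They can also be set programmatically; the DirectRunner runs locally.
    options.setRunner(DirectRunner.class);

    Pipeline p = Pipeline.create(options);
    // ... apply PTransforms here, then execute on the chosen runner.
    p.run().waitUntilFinish();
  }
}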

Have ideas for new Runners? See the JIRA.

Getting Started

Please refer to the Quickstarts for Java and Python available on our website.

If you'd like to build and install the whole project from the source distribution, you may need some additional tools installed on your system. On a Debian-based distribution:

sudo apt-get install \
    openjdk-8-jdk \
    maven \
    python-setuptools \
    python-pip

Then please use the standard mvn clean install command.

Spark Runner

See the Spark Runner README.

Contact Us

To get involved in Apache Beam, join the user and developer mailing lists or file an issue in JIRA.

We also have a contributor's guide.

More Information

For more information, see the Apache Beam website.

Directories

Path Synopsis
runners
  gcp/gcemd: gcemd is a metadata-configured provisioning server for GCE.
  gcp/gcsproxy: gcsproxy is an artifact server backed by GCS and can run in either retrieval (read) or staging (write) mode.
sdks
  go/cmd/beamctl: beamctl is a command line client for the Apache Beam portability services.
  go/cmd/beamctl/cmd: Package cmd contains the commands for beamctl.
  go/pkg/beam/artifact: Package artifact contains utilities for staging and retrieving artifacts.
  go/pkg/beam/artifact/gcsproxy: Package gcsproxy contains artifact staging and retrieval servers backed by GCS.
  go/pkg/beam/model/org_apache_beam_fn_v1: Package org_apache_beam_fn_v1 is a generated protocol buffer package.
  go/pkg/beam/model/org_apache_beam_runner_v1: Package org_apache_beam_runner_api_v1 is a generated protocol buffer package.
  go/pkg/beam/provision: Package provision contains utilities for obtaining runtime provision information, such as pipeline options.
  go/pkg/beam/util/errorx: Package errorx contains utilities for handling errors.
  go/pkg/beam/util/execx: Package execx contains wrappers and utilities for the exec package.
  go/pkg/beam/util/gcsx: Package gcsx contains utilities for working with Google Cloud Storage (GCS).
  go/pkg/beam/util/grpcx: Package grpcx contains utilities for working with gRPC.
  java/container: boot is the boot code for the Java SDK harness container.
  python/container: boot is the boot code for the Python SDK harness container.
