From Spark in Action by Petar Zečević and Marko Bonaći.

When talking about Spark runtime architecture, we can distinguish the specifics of various cluster types from the typical Spark components shared by all. Here we describe typical Spark components that are the same regardless of the runtime mode you choose.

 


Spark runtime components

A basic familiarity with Spark runtime components helps you understand how your jobs work. Figure 1 shows the main Spark components running inside a cluster: client, driver, and executors.



Figure 1: Spark runtime components in cluster deploy mode. Elements of a Spark application are in blue boxes and an application’s tasks running inside task slots are labeled with a “T”. Unoccupied task slots are in white boxes.


The physical placement of executor and driver processes depends on the cluster type and its configuration. For example, some of these processes could share a single physical machine, or they could run on different ones. Figure 1 shows only the logical components in cluster deploy mode.

Responsibilities of the client process component

The client process starts the driver program. For example, the client process can be a spark-submit script for running applications, a spark-shell script, or a custom application using the Spark API. The client process prepares the classpath and all configuration options for the Spark application. It also passes application arguments, if any, to the application running inside the driver.
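
If your client process is a program of your own rather than the spark-submit script, Spark's spark-launcher module plays the same role programmatically. The sketch below is a minimal illustration, not code from the book; the jar path, class name, master URL, and application argument are made up, and it assumes SPARK_HOME points at a Spark installation so the launcher can find spark-submit.

import org.apache.spark.launcher.SparkLauncher

// A client process that prepares configuration and arguments for a Spark
// application and starts its driver. All paths and names are hypothetical.
object ClientExample {
  def main(args: Array[String]): Unit = {
    val submission = new SparkLauncher()
      .setAppResource("/path/to/my-spark-app.jar") // jar containing the driver program
      .setMainClass("com.example.MySparkApp")      // class whose main() the driver runs
      .setMaster("spark://master:7077")            // cluster to submit to
      .setDeployMode("cluster")                    // run the driver inside the cluster
      .setConf(SparkLauncher.DRIVER_MEMORY, "2g")  // configuration handed to the driver
      .addAppArgs("hdfs:///input/data.txt")        // forwarded to the application's main()
      .launch()                                    // spawns a spark-submit child process

    submission.waitFor()                           // block until the submission finishes
  }
}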

Responsibilities of the driver component

The driver orchestrates and monitors execution of a Spark application. There’s always one driver per Spark application. You can think of the driver as a wrapper around the application. The driver and its subcomponents – the Spark context and scheduler – are responsible for:

  • requesting memory and CPU resources from cluster managers
  • breaking application logic into stages and tasks
  • sending tasks to executors
  • collecting the results (a short code sketch after this list illustrates these steps)
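
To make those responsibilities concrete, here is a small word-count job annotated with what the driver does when the action at the end runs. This is a sketch, not code from the book; local[2] simply gives the example two task slots.

import org.apache.spark.{SparkConf, SparkContext}

// A tiny job, annotated with the driver's view of it.
object DriverViewExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("Driver view"))

    // Transformations are lazy: nothing is scheduled yet.
    val words = sc.parallelize(Seq("spark runs tasks", "tasks run in slots"))
      .flatMap(_.split(" "))
      .map(word => (word, 1))

    // reduceByKey requires a shuffle, so the scheduler splits the job into
    // two stages here; each stage becomes one task per partition.
    val counts = words.reduceByKey(_ + _)

    // collect() is an action: the driver submits the stages, sends their
    // tasks to the executors' task slots, and gathers the results back.
    counts.collect().foreach(println)

    sc.stop()
  }
}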


Figure 2: Spark runtime components in client deploy mode. The driver is running inside the client’s JVM process.


Two basic ways the driver program can be run are:

  • Cluster deploy mode is depicted in figure 1. In this mode, the driver process runs as a separate JVM process inside a cluster, and the cluster manages its resources (mostly JVM heap memory).
  • Client deploy mode is depicted in figure 2. In this mode, the driver runs inside the client’s JVM process and communicates with the executors managed by the cluster.

The deploy mode you choose affects how you configure Spark and the resource requirements of the client JVM.

Responsibilities of the executors

The executors, which are JVM processes, accept tasks from the driver, execute those tasks, and return the results to the driver.

The example drivers in figures 1 and 2 use only two executors, but you can use a much larger number (some companies run Spark clusters with thousands of executors).

Each executor has several task slots (or CPU cores) for running tasks in parallel. The executors in the figures have six task slots each. Those slots in white boxes are vacant. You can set the number of task slots to a value two or three times the number of CPU cores. Although these task slots are often referred to as CPU cores in Spark, they’re implemented as threads and don’t need to correspond to the number of physical CPU cores on the machine.
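
Executor sizing is driven by configuration. The following sketch shows the relevant settings for a standalone cluster; the master URL and the numbers are made up for illustration, not recommendations from the book.

import org.apache.spark.SparkConf

// Hypothetical executor sizing for a standalone cluster; set before creating the context.
val conf = new SparkConf()
  .setMaster("spark://master:7077")     // placeholder master URL
  .setAppName("Executor sizing example")
  .set("spark.executor.memory", "4g")   // heap of each executor JVM
  .set("spark.executor.cores", "6")     // task slots per executor (threads, not physical cores)
  .set("spark.cores.max", "12")         // total task slots the application may occupy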

Creation of the Spark context

Once the driver’s started, it configures an instance of SparkContext. When running a Spark REPL shell, the shell is the driver program, and your Spark context is already preconfigured and available as the sc variable. When running a standalone Spark application by submitting a jar file, or by using the Spark API from another program, your Spark application starts and configures the Spark context.

There can be only one Spark context per JVM.

NOTE Although the configuration option spark.driver.allowMultipleContexts exists, it’s misleading because usage of multiple Spark contexts is discouraged. This option is used only for Spark internal tests, and we recommend you don’t use it in your user programs. If you do, you may get unexpected results while running more than one Spark context in a single JVM.

A Spark context comes with many useful methods for creating RDDs and loading data, and it’s the main interface for accessing the Spark runtime.
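
In a standalone application, the context you get for free as sc in the shell has to be created explicitly. Here is a minimal sketch; the app name is illustrative, and local[*] is used as the master only so the example can run directly (when an application is started with spark-submit, the master usually comes from there instead).

import org.apache.spark.{SparkConf, SparkContext}

// In spark-shell this boilerplate is unnecessary: the shell is the driver
// and sc already exists. A standalone application builds the context itself.
object ContextExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("Context example")
      .setMaster("local[*]")          // for a quick local run; spark-submit usually supplies this

    val sc = new SparkContext(conf)   // only one SparkContext per JVM

    // The context is the entry point to the runtime: creating RDDs, loading data, and so on.
    val numbers = sc.parallelize(1 to 100)   // RDD built from a local collection
    println(numbers.sum())                   // prints 5050.0
    // sc.textFile("hdfs:///some/path") would load data from storage instead.

    sc.stop()                         // release the context when you're done
  }
}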


Spark cluster types

Spark can run in local mode and inside Spark standalone, YARN, and Mesos clusters. Although Spark runs on all of them, one might be more applicable for your environment and use cases. In this section, you’ll find the pros and cons of each cluster type.
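
Which cluster type you end up on is decided by the master URL you pass to spark-submit or to SparkConf.setMaster. The host names and ports below are placeholders.

// Master URL forms for the different runtime modes (hosts and ports are placeholders).
val localMode      = "local[*]"                         // local mode: one task slot per CPU core
val standaloneMode = "spark://master.example.com:7077"  // Spark standalone cluster
val yarnMode       = "yarn"                             // YARN on Spark 2.x (older versions use yarn-client/yarn-cluster)
val mesosMode      = "mesos://master.example.com:5050"  // Mesos cluster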

Spark standalone cluster

A Spark standalone cluster is a Spark-specific cluster. Because a standalone cluster is built specifically for Spark applications, it doesn’t support communication with an HDFS secured with the Kerberos authentication protocol. If you need that kind of security, use YARN to run Spark. A Spark standalone cluster does, however, provide faster job startup than jobs running on YARN.

YARN cluster

YARN is Hadoop’s resource manager and execution system. It’s also known as MapReduce 2 because it superseded the MapReduce engine in Hadoop 1 that supported only MapReduce jobs.

Running Spark on YARN has several advantages:

  • Many organizations already have YARN clusters of a significant size, along with the technical know-how, tools, and procedures for managing and monitoring them.
  • YARN lets you run different types of Java applications, not only Spark, and you can mix legacy Hadoop and Spark applications with ease.
  • YARN also provides methods for isolating and prioritizing applications among users and organizations, a functionality the standalone cluster doesn’t have (a configuration sketch follows this list).
  • It’s the only cluster type that supports Kerberos-secured HDFS.
  • Another advantage of YARN over the standalone cluster is that you don’t have to install Spark on every node in the cluster.
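
As a sketch of what that isolation and prioritization can look like, the configuration below submits an application to a dedicated YARN queue with a fixed number of executors. The queue name and sizes are made up, and the settings shown are a typical subset rather than an exhaustive list.

import org.apache.spark.SparkConf

// Hypothetical YARN-oriented settings; queue name and sizes are illustrative.
val conf = new SparkConf()
  .setMaster("yarn")                          // "yarn" on Spark 2.x ("yarn-cluster" on older versions)
  .set("spark.submit.deployMode", "cluster")  // run the driver inside the YARN cluster
  .set("spark.yarn.queue", "analytics")       // YARN queue used for isolation and prioritization
  .set("spark.executor.instances", "20")      // number of executors to request from YARN
  .set("spark.executor.memory", "4g")         // memory requested for each executor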

Mesos cluster

Mesos is a scalable and fault-tolerant “distributed systems kernel” written in C++. Running Spark in a Mesos cluster also has its advantages. Unlike YARN, Mesos also supports C++ and Python applications, and unlike YARN and a standalone Spark cluster, which only schedule memory, Mesos provides scheduling of other types of resources (for example, CPU, disk space, and ports), although Spark doesn’t currently use these additional resources. Mesos also has some options for job scheduling that other cluster types don’t have (for example, fine-grained mode).

Mesos is also a “scheduler of scheduler frameworks” because of its two-level scheduling architecture. The jury’s still out on which is better, YARN or Mesos, but now, with the Myriad project (https://github.com/mesos/myriad), you can run YARN on top of Mesos to solve the dilemma.

Spark local modes

Spark local mode and Spark local cluster mode are special cases of a Spark standalone cluster running on a single machine. Because these cluster types are easy to set up and use, they’re convenient for quick tests, but they shouldn’t be used in a production environment.

Furthermore, in these local modes the workload isn’t distributed, which imposes the resource restrictions of a single machine and results in suboptimal performance. True high availability isn’t possible on a single machine, either.

