BigData SQL: Spark Architecture

A Spark application is composed of a driver, workers, executors, and tasks. The driver process runs the application's main program and requests resources from the cluster manager; worker processes on the cluster nodes then launch executors on its behalf. Each executor spawns tasks, which are threads running the same code on different partitions of the data. The driver coordinates the execution of the tasks and the details around them, such as scheduling and fault tolerance, while each executor runs its assigned tasks and reports their status back to the driver.
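This division of labor can be sketched in plain Python: a "driver" partitions the data and hands one task per partition to a pool of "executor" threads, each running the same task function on its own slice, then combines the partial results. This is an analogy only, assuming made-up names like `run_task`; in real Spark, executors are separate processes on worker nodes, not threads in the driver.

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(partition):
    # Each task runs the same code on its own slice of the data.
    return sum(x * x for x in partition)

data = list(range(100))
num_partitions = 4
partitions = [data[i::num_partitions] for i in range(num_partitions)]

# The "driver" schedules one task per partition on the "executor" pool,
# then aggregates the partial results into a final answer.
with ThreadPoolExecutor(max_workers=num_partitions) as pool:
    partial_results = list(pool.map(run_task, partitions))

total = sum(partial_results)
print(total)
```

The key idea this mirrors is that the same task code runs everywhere; only the data partition differs, which is what lets Spark scale the same job from a laptop to a cluster.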