How to set Apache Spark Executor memory


Memory Problem Overview


How can I increase the memory available for Apache Spark executor nodes?

I have a 2 GB file that is suitable for loading into Apache Spark. At the moment I am running Apache Spark on a single machine, so the driver and executor are on the same machine. The machine has 8 GB of memory.

When I try to count the lines of the file after setting the file to be cached in memory, I get these errors:

2014-10-25 22:25:12 WARN  CacheManager:71 - Not enough space to cache partition rdd_1_1 in memory! Free memory is 278099801 bytes.

I looked at the documentation here and set spark.executor.memory to 4g in $SPARK_HOME/conf/spark-defaults.conf.

The UI shows that this variable is set in the Spark Environment.

However, when I go to the Executor tab, the memory limit for my single executor is still set to 265.4 MB. I also still get the same error.

I tried various things mentioned here, but I still get the error and don't have a clear idea where I should change the setting.

I am running my code interactively from the spark-shell.
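
For reference, a minimal sketch of the kind of session that triggers the warning above (shown in PySpark for concreteness; the spark-shell equivalent is analogous, and the file path is a placeholder):

# `sc` is the context provided by the shell
lines = sc.textFile("/path/to/2gb-file.txt")   # placeholder path
lines.cache()                                  # mark the RDD for in-memory caching
print(lines.count())                           # the first action tries to cache the partitions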

Memory Solutions


Solution 1 - Memory

Since you are running Spark in local mode, setting spark.executor.memory won't have any effect, as you have noticed. The reason for this is that the Worker "lives" within the driver JVM process that is started when you start spark-shell, and the default memory used for that is 512 MB. You can increase that by setting spark.driver.memory to something higher, for example 5g. You can do that by either:

  • setting it in the properties file (default is $SPARK_HOME/conf/spark-defaults.conf),

    spark.driver.memory              5g
    
  • or by supplying a configuration setting at runtime:

    $ ./bin/spark-shell --driver-memory 5g
    

Note that this cannot be achieved by setting it in the application, because by then it is already too late: the process has already started with some amount of memory.

The reason for 265.4 MB is that Spark dedicates spark.storage.memoryFraction * spark.storage.safetyFraction of the heap to storage memory, and by default those fractions are 0.6 and 0.9. The JVM reports a usable heap somewhat below the 512 MB -Xmx value (roughly 491 MB), so:

491 MB * 0.6 * 0.9 ≈ 265.4 MB

So be aware that not the whole amount of driver memory will be available for RDD storage.
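
For a quick sanity check, here is the same arithmetic written out as a small Python sketch (the 491 MB usable-heap figure is an assumption for what the JVM typically reports with -Xmx512m, and the fractions are the legacy spark.storage.* defaults):

# Why a 512 MB driver yields roughly 265 MB of storage memory (legacy memory manager).
jvm_usable_heap_mb = 491.0   # assumed: Runtime.maxMemory() reports a bit less than -Xmx512m
memory_fraction = 0.6        # spark.storage.memoryFraction default
safety_fraction = 0.9        # spark.storage.safetyFraction default

storage_mb = jvm_usable_heap_mb * memory_fraction * safety_fraction
print(round(storage_mb, 1))  # ~265.1 MB, in line with the 265.4 MB shown in the UI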

But when you start running this on a cluster, the spark.executor.memory setting will take over when calculating the amount to dedicate to Spark's memory cache.

Solution 2 - Memory

Also note that for local mode you have to set the amount of driver memory before starting the JVM:

bin/spark-submit --driver-memory 2g --class your.class.here app.jar

This will start the JVM with 2 GB instead of the default 512 MB.

Details here:

> For local mode you only have one executor, and this executor is your driver, so you need to set the driver's memory instead. That said, in local mode, by the time you run spark-submit, a JVM has already been launched with the default memory settings, so setting "spark.driver.memory" in your conf won't actually do anything for you. Instead, you need to run spark-submit as shown above.
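
As a minimal sketch (the file name app.py and the app name are hypothetical), an application submitted this way can read the setting back to confirm it was applied before the JVM started:

# Submit with: bin/spark-submit --driver-memory 2g app.py
from pyspark import SparkContext

sc = SparkContext(appName="driver-memory-check")
# The value passed on the command line shows up in the context's configuration.
print("spark.driver.memory =", sc.getConf().get("spark.driver.memory"))
sc.stop()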

Solution 3 - Memory

The answer submitted by Grega helped me solve my issue. I am running Spark locally from a Python script inside a Docker container. Initially I was getting a Java out-of-memory error when processing some data in Spark, but I was able to assign more memory by adding the following lines to my script:

conf=SparkConf()
conf.set("spark.driver.memory", "4g") 

Here is a full example of the Python script that I use to start Spark:

import os
import sys
import glob

spark_home = '<DIRECTORY WHERE SPARK FILES EXIST>/spark-2.0.0-bin-hadoop2.7/'
driver_home = '<DIRECTORY WHERE DRIVERS EXIST>'

if 'SPARK_HOME' not in os.environ:
    os.environ['SPARK_HOME'] = spark_home

SPARK_HOME = os.environ['SPARK_HOME']

# Make the PySpark modules bundled with the Spark distribution importable.
sys.path.insert(0, os.path.join(SPARK_HOME, "python"))
for lib in glob.glob(os.path.join(SPARK_HOME, "python", "lib", "*.zip")):
    sys.path.insert(0, lib)

from pyspark import SparkContext
from pyspark import SparkConf
from pyspark.sql import SQLContext

# Memory must be configured before the SparkContext (and its JVM) is created.
conf = SparkConf()
conf.set("spark.executor.memory", "4g")
conf.set("spark.driver.memory", "4g")
conf.set("spark.cores.max", "2")
conf.set("spark.driver.extraClassPath",
    driver_home + '/jdbc/postgresql-9.4-1201-jdbc41.jar:'
    + driver_home + '/jdbc/clickhouse-jdbc-0.1.52.jar:'
    + driver_home + '/mongo/mongo-spark-connector_2.11-2.2.3.jar:'
    + driver_home + '/mongo/mongo-java-driver-3.8.0.jar')

sc = SparkContext.getOrCreate(conf)

spark = SQLContext(sc)

Solution 4 - Memory

Apparently, the question never says to run in local mode rather than on YARN. Somehow I couldn't get the spark-defaults.conf change to work. Instead I tried this and it worked for me:

bin/spark-shell --master yarn --num-executors 6 --driver-memory 5g --executor-memory 7g

(I couldn't bump executor-memory to 8g; there is some restriction from the YARN configuration.)
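
The cap usually comes from the YARN container size: the requested container is the executor memory plus an overhead, and that total must fit within yarn.scheduler.maximum-allocation-mb. A back-of-the-envelope sketch (the 8192 MB maximum is an assumed value for illustration; the max(384 MB, 10%) overhead rule is the usual spark.executor.memoryOverhead default):

# Rough check of whether an executor-memory request fits in a YARN container.
def container_request_mb(executor_memory_mb):
    overhead_mb = max(384, int(0.10 * executor_memory_mb))  # default memoryOverhead rule
    return executor_memory_mb + overhead_mb

yarn_max_allocation_mb = 8192  # assumed yarn.scheduler.maximum-allocation-mb

for gb in (7, 8):
    request = container_request_mb(gb * 1024)
    print(gb, "g ->", request, "MB requested, fits:", request <= yarn_max_allocation_mb)
# 7g fits (7884 MB); 8g does not (9011 MB), matching the restriction seen above.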

Solution 5 - Memory

You need to increase the driver memory. On a Mac (i.e., when running against a local master), the default driver memory is 1024 MB, and with that default only about 380 MB ends up allotted to the executor.

Upon increasing the driver memory with --driver-memory 2G, the executor memory increased to roughly 950 MB.
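
For what it's worth, here is a rough sketch of where figures in that ballpark come from under the unified memory manager (Spark 1.6+); the usable-heap factor is an approximation, since the JVM reports somewhat less than the -Xmx value:

# Storage/execution pool under the unified memory manager:
# (usable heap - 300 MB reserved) * spark.memory.fraction (default 0.6).
def unified_pool_mb(xmx_mb):
    usable_heap_mb = xmx_mb * 0.93   # assumption: JVM reports roughly 93% of -Xmx as usable
    return (usable_heap_mb - 300) * 0.6

print(round(unified_pool_mb(1024)))  # roughly 390 MB with the default 1 GB driver
print(round(unified_pool_mb(2048)))  # roughly 960 MB with --driver-memory 2G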

Solution 6 - Memory

As far as I know, it isn't possible to change spark.executor.memory at run time. If you are running a standalone version, with pyspark and GraphFrames, you can launch the pyspark REPL by executing the following command:

pyspark --driver-memory 2g --executor-memory 6g --packages graphframes:graphframes:0.7.0-spark2.4-s_2.11

Be sure to change the graphframes package coordinate (the spark2.4 part above) appropriately for your installed version of Spark.

Solution 7 - Memory

Create a file called spark-env.sh in the spark/conf directory and add this line:

SPARK_EXECUTOR_MEMORY=2000m   # memory size you want to allocate for each executor

Solution 8 - Memory

# --deploy-mode can be client for client mode
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 2G \
  --num-executors 5 \
  /path/to/examples.jar \
  1000

Solution 9 - Memory

You can build the command using the following example (note that spark-submit options must come before the application jar):

spark-submit \
  --jars /usr/share/java/postgresql-jdbc.jar \
  --class com.examples.WordCount3 \
  --num-executors 3 \
  --driver-memory 10g \
  --executor-memory 10g \
  --executor-cores 1 \
  --master local \
  --deploy-mode client \
  --name wordcount3 \
  --conf "spark.app.id=wordcount" \
  /home/vaquarkhan/spark-scala-maven-project-0.0.1-SNAPSHOT.jar

Solution 10 - Memory

Spark executor memory is what executors use to run the tasks scheduled by your driver program; how much is required depends on the job you submit.

Executor memory includes the memory required for executing the tasks plus overhead memory, and the total should not be greater than the size of the JVM or the YARN maximum container size.

Add the following parameters in spark-defaults.conf:

spark.executor.cores=1
spark.executor.memory=2g

If you are using a cluster management tool like Cloudera Manager or Ambari, refresh the cluster configuration so the latest settings are propagated to all nodes in the cluster.

Alternatively, we can pass the executor cores and memory values as arguments when running the spark-submit command, along with the class and application path.

Example:

# --deploy-mode can be client for client mode
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 2G \
  --num-executors 5 \
  /path/to/examples.jar \
  1000

Solution 11 - Memory

You mentioned that you are running your code interactively in spark-shell. If no proper value is set for the driver memory or executor memory, Spark assigns a default value, which is taken from its properties file (where the defaults are specified).

Be aware that there is one driver (the master node) and there are worker nodes (where executors get created and run), so a Spark program needs both kinds of memory. To set them when starting spark-shell:

spark-shell --driver-memory "your value" to set the driver memory, and spark-shell --executor-memory "your value" to set the executor memory.

Then I think you are good to go with the desired amount of memory for your spark-shell session.

Solution 12 - Memory

In Windows or Linux, you can use this command:

spark-shell --driver-memory 2G


Solution 13 - Memory

For configuring cores and memory for executors:

spark-shell --help

--master MASTER_URL           spark://host:port, mesos://host:port, yarn,
--executor-memory MEM         Memory per executor (e.g. 1000M, 2G) (Default: 1G).
--total-executor-cores NUM    Total cores for all executors.
--executor-cores NUM          Number of cores used by each executor. (Default: 1 in YARN and K8S modes, or all available cores on the worker in standalone mode.)

Choose one of the following commands if your system has 6 cores and 6 GB of RAM:

  1. Creates 6 executors, each with 1 core and 1 GB RAM.
spark-shell --master spark://sparkmaster:7077 --executor-cores 1 --executor-memory 1g
  2. Creates 3 executors, each with 1 core and 2 GB RAM. The maximum memory is 6 GB, so 3 of the cores stay idle.
spark-shell --master spark://sparkmaster:7077 --executor-cores 1 --executor-memory 2g
  3. Creates 2 executors, each with 3 cores and 3 GB RAM, using all the RAM and cores.
spark-shell --master spark://sparkmaster:7077 --executor-cores 3 --executor-memory 3g
  4. Creates 2 executors, each with 3 cores but only 1 GB RAM.
spark-shell --master spark://sparkmaster:7077 --executor-cores 3 --executor-memory 1g
  5. If we want to use only one executor with 1 core and 1 GB RAM:
spark-shell --master spark://sparkmaster:7077 --total-executor-cores 1 --executor-cores 1 --executor-memory 1g
  6. If we want to use only two executors, each with 1 core and 1 GB RAM:
spark-shell --master spark://sparkmaster:7077 --total-executor-cores 2 --executor-cores 1 --executor-memory 1g
  7. If we want to use only two executors, each with 2 cores and 2 GB RAM (4 cores and 4 GB RAM in total):
spark-shell --master spark://sparkmaster:7077 --total-executor-cores 4 --executor-cores 2 --executor-memory 2g
  8. If we apply --total-executor-cores 2 instead, then only one executor will be created:
spark-shell --master spark://sparkmaster:7077 --total-executor-cores 2 --executor-cores 2 --executor-memory 2g
  9. A total of 3 executor cores is not divisible by 2 cores per executor, so the leftover core will not be allocated; one executor with 2 cores will be created:
spark-shell --master spark://sparkmaster:7077 --total-executor-cores 3 --executor-cores 2 --executor-memory 2g

So --total-executor-cores / --executor-cores = the number of executors that will be created.
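
A tiny sketch of that rule of thumb (integer division; leftover cores are simply not allocated):

# Number of executors created in standalone mode, per the rule above.
def num_executors(total_executor_cores, executor_cores):
    return total_executor_cores // executor_cores

print(num_executors(4, 2))  # 2 executors
print(num_executors(2, 2))  # 1 executor
print(num_executors(3, 2))  # 1 executor; the leftover core is not allocated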

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | WillamS | View Question on Stackoverflow
Solution 1 - Memory | Grega Kešpret | View Answer on Stackoverflow
Solution 2 - Memory | Dmitriy Selivanov | View Answer on Stackoverflow
Solution 3 - Memory | Sarah | View Answer on Stackoverflow
Solution 4 - Memory | Somum | View Answer on Stackoverflow
Solution 5 - Memory | Sanchay | View Answer on Stackoverflow
Solution 6 - Memory | Taie | View Answer on Stackoverflow
Solution 7 - Memory | Mohamed Thasin ah | View Answer on Stackoverflow
Solution 8 - Memory | keven | View Answer on Stackoverflow
Solution 9 - Memory | vaquar khan | View Answer on Stackoverflow
Solution 10 - Memory | Radhakrishnan Rk | View Answer on Stackoverflow
Solution 11 - Memory | A.Mishra | View Answer on Stackoverflow
Solution 12 - Memory | Robert David Ramírez Garcia | View Answer on Stackoverflow
Solution 13 - Memory | Marreddy | View Answer on Stackoverflow