How do I detect if a Spark DataFrame has a column?

Scala | Apache Spark | DataFrame | Apache Spark SQL

Scala Problem Overview


When I create a DataFrame from a JSON file in Spark SQL, how can I tell if a given column exists before calling .select?

Example JSON schema:

{
  "a": {
    "b": 1,
    "c": 2
  }
}

This is what I want to do:

val potential_columns = Seq("b", "c", "d")
val df = sqlContext.read.json(filename)
potential_columns.map(column => if (df.hasColumn(column)) df.select(s"a.$column"))

but I can't find a good function for hasColumn. The closest I've gotten is to test if the column is in this somewhat awkward array:

scala> df.select("a.*").columns
res17: Array[String] = Array(b, c)

Scala Solutions


Solution 1 - Scala

Just assume it exists and let it fail with Try. Plain and simple, and it supports arbitrary nesting:

import scala.util.Try
import org.apache.spark.sql.DataFrame

def hasColumn(df: DataFrame, path: String) = Try(df(path)).isSuccess

val df = sqlContext.read.json(sc.parallelize(
  """{"foo": [{"bar": {"foobar": 3}}]}""" :: Nil))

hasColumn(df, "foobar")
// Boolean = false

hasColumn(df, "foo")
// Boolean = true

hasColumn(df, "foo.bar")
// Boolean = true

hasColumn(df, "foo.bar.foobar")
// Boolean = true

hasColumn(df, "foo.bar.foobaz")
// Boolean = false

Or even simpler:

val columns = Seq(
  "foobar", "foo", "foo.bar", "foo.bar.foobar", "foo.bar.foobaz")

columns.flatMap(c => Try(df(c)).toOption)
// Seq[org.apache.spark.sql.Column] = List(
//   foo, foo.bar AS bar#12, foo.bar.foobar AS foobar#13)

Python equivalent:

from pyspark.sql.utils import AnalysisException
from pyspark.sql import Row


def has_column(df, col):
    try:
        df[col]
        return True
    except AnalysisException:
        return False

df = sc.parallelize([Row(foo=[Row(bar=Row(foobar=3))])]).toDF()

has_column(df, "foobar")
## False

has_column(df, "foo")
## True

has_column(df, "foo.bar")
## True

has_column(df, "foo.bar.foobar")
## True

has_column(df, "foo.bar.foobaz")
## False

Solution 2 - Scala

Another option which I normally use is

df.columns.contains("column-name-to-check")

This returns a Boolean.
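Note that columns only lists top-level column names, so this check won't see nested fields. A quick sketch against the question's schema:

val df = sqlContext.read.json(sc.parallelize(
  """{"a": {"b": 1, "c": 2}}""" :: Nil))

df.columns.contains("a")   // true: "a" is a top-level column
df.columns.contains("a.b") // false: nested fields do not appear in columns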

Solution 3 - Scala

Actually you don't even need to call select in order to use columns; you can call it on the DataFrame itself:

// define test data
case class Test(a: Int, b: Int)
val testList = List(Test(1,2), Test(3,4))
val testDF = sqlContext.createDataFrame(testList)

// define the hasColumn function
def hasColumn(df: org.apache.spark.sql.DataFrame, colName: String) = df.columns.contains(colName)

// then you can just use it on the DF with a given column name
hasColumn(testDF, "a")  // <-- true
hasColumn(testDF, "c")  // <-- false

Alternatively you can define an implicit class using the "pimp my library" pattern, so that the hasColumn method is available on your DataFrames directly:

implicit class DataFrameImprovements(df: org.apache.spark.sql.DataFrame) {
    def hasColumn(colName: String) = df.columns.contains(colName)
}

Then you can use it as:

testDF.hasColumn("a") // <-- true
testDF.hasColumn("c") // <-- false

Solution 4 - Scala

Try is not optimal, as it will evaluate the expression inside Try before making the decision.

For large data sets, use the below in Scala:

df.schema.fieldNames.contains("column_name")
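If you also want to avoid exception-driven checks for nested fields, one level of nesting can be checked by walking the schema directly. A rough sketch for the question's "a.b" (the field names here come from the question, not a general API):

import org.apache.spark.sql.types.StructType

val hasNested = df.schema.fields
  .find(_.name == "a")                                   // locate the struct column
  .map(_.dataType)
  .collect { case s: StructType => s.fieldNames.contains("b") } // check its fields
  .getOrElse(false)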

Solution 5 - Scala

For those who stumble across this looking for a Python solution, I use:

if 'column_name_to_check' in df.columns:
    # do something

When I tried @Jai Prakash's answer of df.columns.contains('column-name-to-check') using Python, I got AttributeError: 'list' object has no attribute 'contains'.

Solution 6 - Scala

Your other option for this would be to do some array manipulation (in this case an intersect) on df.columns and your potential_columns.

// Loading some data (so you can just copy & paste right into spark-shell)
case class Document( a: String, b: String, c: String)
val df = sc.parallelize(Seq(Document("a", "b", "c")), 2).toDF

// The columns we want to extract
val potential_columns = Seq("b", "c", "d")

// Get the intersect of the potential columns and the actual columns, 
// we turn the array of strings into column objects
// Finally turn the result into a vararg (: _*)
df.select(potential_columns.intersect(df.columns).map(df(_)): _*).show

Alas, this will not work for your inner-object scenario above. You will need to look at the schema for that.

I'm going to change your potential_columns to fully qualified column names:

val potential_columns = Seq("a.b", "a.c", "a.d")

// Our object model
case class Document( a: String, b: String, c: String)
case class Document2( a: Document, b: String, c: String)

// And some data...
val df = sc.parallelize(Seq(Document2(Document("a", "b", "c"), "b2", "c2")), 2).toDF

// We go through each of the fields in the schema.
// For StructTypes we return an array of parentName.fieldName
// For everything else we return an array containing just the field name
// We then flatten the complete list of field names
// Then we intersect that with our potential_columns leaving us just a list of column we want
// we turn the array of strings into column objects
// Finally turn the result into a vararg (: _*)
df.select(
  df.schema.flatMap { a =>
    a.dataType match {
      case s: org.apache.spark.sql.types.StructType =>
        s.fieldNames.map(x => a.name + "." + x)
      case _ => Array(a.name)
    }
  }.intersect(potential_columns).map(df(_)): _*
).show

This only goes one level deep, so to make it generic you would have to do more work.
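A rough sketch of a recursive version that collects the dotted path of every nested struct field (arrays and maps are not handled here):

import org.apache.spark.sql.types.{StructField, StructType}

def fieldPaths(schema: StructType, prefix: String = ""): Seq[String] =
  schema.fields.toSeq.flatMap { f =>
    val path = if (prefix.isEmpty) f.name else s"$prefix.${f.name}"
    f.dataType match {
      case s: StructType => path +: fieldPaths(s, path) // recurse into nested structs
      case _             => Seq(path)
    }
  }

// the select then works for any nesting depth
df.select(fieldPaths(df.schema).intersect(potential_columns).map(df(_)): _*).show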

Solution 7 - Scala

In PySpark you can simply run:

'field' in df.columns

Solution 8 - Scala

If you shred your JSON using a schema definition when you load it, then you don't need to check for the column. If it's not in the JSON source, it will appear as a null column.

import org.apache.spark.sql.types.{DataType, StructType}

val schemaJson = """
{
  "type": "struct",
  "fields": [
    {
      "name": "field1",
      "type": "string",
      "nullable": true,
      "metadata": {}
    },
    {
      "name": "field2",
      "type": "string",
      "nullable": true,
      "metadata": {}
    }
  ]
}
"""

val schema = DataType.fromJson(schemaJson).asInstanceOf[StructType]

val djson = sqlContext.read
  .schema(schema)
  .option("badRecordsPath", readExceptionPath)
  .json(dataPath)
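If hand-writing schema JSON feels error-prone (a missing quote only fails at runtime), the equivalent schema can be built with the StructType API. A sketch matching the JSON above:

import org.apache.spark.sql.types.{StringType, StructField, StructType}

val schema = StructType(Seq(
  StructField("field1", StringType, nullable = true),
  StructField("field2", StringType, nullable = true)
))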

Solution 9 - Scala

For nested columns you can search the flattened schema string, e.g. in Scala:

df.schema.simpleString.contains("column_name")

Bear in mind this is a plain substring match, so a name like "bar" will also match a field named "foobar".
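For the question's schema, the string being searched would look roughly like this (exact types depend on what Spark infers from the data):

df.schema.simpleString
// e.g. struct<a:struct<b:bigint,c:bigint>>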

Solution 10 - Scala

import scala.util.Try

def hasColumn(df: org.apache.spark.sql.DataFrame, colName: String) =
  Try(df.select(colName)).isSuccess

Use this function to check for the existence of a column, including nested column names.

Solution 11 - Scala

In PySpark, df.columns gives you a list of the columns in the DataFrame, so "colName" in df.columns returns True or False. Give it a try. Good luck!

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type        | Original Author       | Original Content on Stackoverflow
Question            | ben                   | View Question on Stackoverflow
Solution 1 - Scala  | zero323               | View Answer on Stackoverflow
Solution 2 - Scala  | Jai Prakash           | View Answer on Stackoverflow
Solution 3 - Scala  | Daniel B.             | View Answer on Stackoverflow
Solution 4 - Scala  | Nitin Mathur          | View Answer on Stackoverflow
Solution 5 - Scala  | mefryar               | View Answer on Stackoverflow
Solution 6 - Scala  | Michael Lloyd Lee mlk | View Answer on Stackoverflow
Solution 7 - Scala  | Domenico Di Nicola    | View Answer on Stackoverflow
Solution 8 - Scala  | Shaun Ryan            | View Answer on Stackoverflow
Solution 9 - Scala  | Ricardo Moraes        | View Answer on Stackoverflow
Solution 10 - Scala | user11349757          | View Answer on Stackoverflow
Solution 11 - Scala | Jie                   | View Answer on Stackoverflow