A comparison between fastparquet and pyarrow?

Tags: Python, Parquet, Dask, Pyarrow, Fastparquet

Python Problem Overview


After some searching I failed to find a thorough comparison of fastparquet and pyarrow.

I found this blog post (a basic comparison of speeds).

and a GitHub discussion that claims that files created with fastparquet do not support AWS Athena (by the way, is that still the case?).

When/why would I use one over the other? What are the major advantages and disadvantages?


My specific use case is processing data with dask, writing it to S3, and then reading/analyzing it with AWS Athena.

Python Solutions


Solution 1 - Python

I used both fastparquet and pyarrow for converting protobuf data to Parquet and for querying it in S3 using Athena. Both worked; however, my use case was a Lambda function, where the package zip file has to be lightweight, so I went ahead with fastparquet. (The fastparquet library was only about 1.1 MB, while the pyarrow library was 176 MB, and the Lambda package limit is 250 MB.)

I used the following to store a dataframe as parquet file:

from os import path

from fastparquet import write

# df_data is a pandas DataFrame; filename is the output name without the extension
parquet_file = path.join(filename + '.parq')
write(parquet_file, df_data)

Solution 2 - Python

However, since the question lacks concrete criteria, and I came here for a good "default choice", I want to state that pandas' default Parquet engine for DataFrame objects is pyarrow: with the default engine='auto', pandas tries pyarrow first and falls back to fastparquet (see the pandas docs).
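
For illustration, a minimal sketch of picking an engine in pandas (the file names here are placeholders):

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

# engine='auto' (the default) prefers pyarrow and falls back to fastparquet
df.to_parquet("data.parquet")

# or pin the engine explicitly
df.to_parquet("data_pyarrow.parquet", engine="pyarrow")
df.to_parquet("data_fastparquet.parquet", engine="fastparquet")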

Solution 3 - Python

I would point out that the author of the speed comparison is also the author of pyarrow :) I can speak about the fastparquet case.

From your point of view, the most important thing to know is compatibility. Athena is not one of the test targets for fastparquet (or pyarrow), so you should test thoroughly before making your choice. There are a number of options that you may want to invoke (docs) for datetime representation, nulls, and types that may be important to you.
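
As an illustration of the kind of options meant here, a minimal sketch using fastparquet's write options (whether these particular values are what Athena needs is exactly what you should test; times='int96' is the older Hive/Impala-style timestamp encoding, and has_nulls controls nullability in the schema):

import pandas as pd
from fastparquet import write

df = pd.DataFrame({"ts": pd.to_datetime(["2021-01-01", "2021-01-02"]),
                   "value": [1.0, None]})

# int96 timestamps are the legacy encoding many Hive-based readers expect;
# has_nulls=True marks every column as nullable in the Parquet schema
write("compat.parq", df, times="int96", has_nulls=True)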

Writing to S3 using dask is certainly a test case for fastparquet, and I believe pyarrow should have no problem with that either.
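
A minimal sketch of that workflow (the bucket name is a placeholder; dask relies on s3fs for S3 access and credentials):

import dask.dataframe as dd
import pandas as pd

# toy dask dataframe; in practice this comes from your processing pipeline
ddf = dd.from_pandas(pd.DataFrame({"a": range(10)}), npartitions=2)

# write partitioned Parquet to S3, choosing the engine explicitly
ddf.to_parquet("s3://my-bucket/my-dataset/", engine="fastparquet")
# ... or engine="pyarrow"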

Solution 4 - Python

I just used fastparquet in a case where I had to get data out of Elasticsearch, store it in S3, and query it with Athena, and I had no issues at all.

I used the following to store a dataframe in S3 as parquet file:

import s3fs
import fastparquet as fp
import pandas as pd
import numpy as np

s3 = s3fs.S3FileSystem()
myopen = s3.open
s3bucket = 'mydata-aws-bucket/'

# random dataframe for demo
df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))

parqKey = s3bucket + "datafile" + ".parq.snappy"
fp.write(parqKey, df, compression='SNAPPY', open_with=myopen)

My table looks similar to this in Athena:

CREATE EXTERNAL TABLE IF NOT EXISTS myanalytics_parquet (
  `column1` string,
  `column2` int,
  `column3` DOUBLE,
  `column4` int,
  `column5` string
 )
STORED AS PARQUET
LOCATION 's3://mydata-aws-bucket/'
tblproperties ("parquet.compress"="SNAPPY")
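
To sanity-check the file independently of Athena, you can read it back with fastparquet through the same s3fs handle (a sketch, reusing the bucket and key from the snippet above):

import s3fs
import fastparquet as fp

s3 = s3fs.S3FileSystem()
pf = fp.ParquetFile('mydata-aws-bucket/datafile.parq.snappy', open_with=s3.open)
print(pf.to_pandas().head())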

Solution 5 - Python

This question may be a bit old, but I happened to be working on the same issue and found this benchmark: https://wesmckinney.com/blog/python-parquet-update/. According to it, pyarrow is faster than fastparquet, so it is little wonder that it is the default engine used in dask.

Update:

An update to my earlier answer: I have had more luck writing with pyarrow and reading with fastparquet in Google Cloud Storage.
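
A minimal sketch of that mixed-engine pattern with pandas (the gs:// path is a placeholder; pandas uses gcsfs under the hood for Google Cloud Storage URLs):

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# write with pyarrow ...
df.to_parquet("gs://my-bucket/data.parquet", engine="pyarrow")

# ... and read it back with fastparquet
df2 = pd.read_parquet("gs://my-bucket/data.parquet", engine="fastparquet")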

Attributions

All content on this page is sourced from the original question and answers on Stack Overflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: moshevi (View Question on Stackoverflow)
Solution 1 - Python: Daenerys (View Answer on Stackoverflow)
Solution 2 - Python: d4tm4x (View Answer on Stackoverflow)
Solution 3 - Python: mdurant (View Answer on Stackoverflow)
Solution 4 - Python: Klaus Seiler (View Answer on Stackoverflow)
Solution 5 - Python: Aladejubelo Oluwashina (View Answer on Stackoverflow)