Is there any way in Elasticsearch to get results as CSV file in curl API?

Tags: Csv, Elasticsearch

Csv Problem Overview


I am using Elasticsearch and need its search results as a CSV file. Is there any curl URL or plugin to achieve this?

Csv Solutions


Solution 1 - Csv

I've done just this using cURL and jq ("like sed, but for JSON"). For example, you can do the following to get CSV output for the top 20 values of a given facet:

$ curl -X GET 'http://localhost:9200/myindex/item/_search?from=0&size=0' -d '
    {"from": 0,
    "size": 0,
    "facets": {
      "sourceResource.subject.name": {
        "global": true,
        "terms": {
          "order": "count",
          "size": 20,
          "all_terms": true,
          "field": "sourceResource.subject.name.not_analyzed"
        }
      }
    },
    "sort": [
      {
        "_score": "desc"
      }
    ],
    "query": {
      "filtered": {
        "query": {
          "match_all": {}
        }
      }
    }
  }' | jq -r '.facets["sourceResource.subject.name"].terms[] | [.term, .count] | @csv'

"United States",33755
"Charities--Massachusetts",8304
"Almshouses--Massachusetts--Tewksbury",8304
"Shields",4232
"Coat of arms",4214
"Springfield College",3422
"Men",3136
"Trees",3086
"Session Laws--Massachusetts",2668
"Baseball players",2543
"Animals",2527
"Books",2119
"Women",2004
"Landscape",1940
"Floral",1821
"Architecture, Domestic--Lowell (Mass)--History",1785
"Parks",1745
"Buildings",1730
"Houses",1611
"Snow",1579

Solution 2 - Csv

I've used Python successfully, and the scripting approach is intuitive and concise. The ES client for Python makes life easy. First, grab the latest Elasticsearch client for Python here:
http://www.elasticsearch.org/blog/unleash-the-clients-ruby-python-php-perl/#python

Then your Python script can include calls like:

import csv

import elasticsearch

es = elasticsearch.Elasticsearch(["10.1.1.1:9200"])
# this returns up to 500 rows, adjust to your needs
res = es.search(index="YourIndexName",
                body={"query": {"match": {"title": "elasticsearch"}}},
                size=500)
sample = res['hits']['hits']

# then open a csv file, and loop through the results, writing to the csv
with open('outputfile.tsv', 'w', newline='') as csvfile:
    # TAB delimited, to handle cases where freeform text may contain a comma
    filewriter = csv.writer(csvfile, delimiter='\t',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    # create the column header row
    filewriter.writerow(["column1", "column2", "column3"])  # change the column labels here
    for hit in sample:
        # fill columns 1, 2, 3 with your data; document fields live under _source
        col1 = hit["_source"]["some"]["deeply"]["nested"]["field"]  # replace these nested key names with your own
        col1 = col1.replace('\n', ' ')
        col2 = ""  # col2, col3, etc. work the same way
        col3 = ""
        filewriter.writerow([col1, col2, col3])

You may want to wrap the nested field lookups in try/except error handling, since documents are unstructured and may not always contain a given field (it depends on your index).
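For example, here is a minimal sketch of a defensive lookup helper (the key names are placeholders, as above):

def get_nested(source, *keys, default=""):
    # walk nested dicts, returning default when any key is missing
    for key in keys:
        if not isinstance(source, dict) or key not in source:
            return default
        source = source[key]
    return source

# usage inside the loop above:
# col1 = get_nested(hit["_source"], "some", "deeply", "nested", "field")

This avoids a KeyError when a document lacks one of the nested fields.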

I have a complete Python sample script using the latest ES python client available here:

https://github.com/jeffsteinmetz/pyes2csv

Solution 3 - Csv

You can use the elasticsearch-head plugin, available from https://github.com/mobz/elasticsearch-head

Once the plugin is installed, open http://localhost:9200/_plugin/head/, navigate to the Structured Query tab, provide the query details, and select 'csv' from the 'Output Results' dropdown.

Solution 4 - Csv

I don't think there is a plugin that will give you CSV results directly from the search engine, so you will have to query ElasticSearch to retrieve results and then write them to a CSV file.

Command line

If you're on a Unix-like OS, then you might be able to make some headway with es2unix, which gives you search results back as raw text on the command line and so should be scriptable.

You could then dump those results to a text file, or pipe them to awk or similar to format them as CSV. There is a -o flag available, but it only gives 'raw' format at the moment.

Java

I found an example using Java, but I haven't tested it.

Python

You could query ElasticSearch with something like pyes and write the result set to a file with the standard csv writer library.

Perl

With Perl, you could use Clinton Gormley's gist linked by Rakesh: https://gist.github.com/clintongormley/2049562

Solution 5 - Csv

Shameless plug: I wrote estab, a command-line program that exports Elasticsearch documents as tab-separated values.

Example:

$ export MYINDEX=localhost:9200/test/default/
$ curl -XPOST $MYINDEX -d '{"name": "Tim", "color": {"fav": "red"}}'
$ curl -XPOST $MYINDEX -d '{"name": "Alice", "color": {"fav": "yellow"}}'
$ curl -XPOST $MYINDEX -d '{"name": "Brian", "color": {"fav": "green"}}'

$ estab -indices "test" -f "name color.fav"
Brian   green
Tim     red
Alice   yellow

estab can handle exports from multiple indices, custom queries, missing values, lists of values, and nested fields, and it's reasonably fast.

Solution 6 - Csv

I have been using stash-query (https://github.com/robbydyer/stash-query) for this.

I find it quite convenient and it works well, though I struggle with the install every time I redo it (this is due to me not being very fluent with gems and Ruby).

On Ubuntu 16.04 though, what seemed to work was:

apt install ruby
sudo apt-get install libcurl3 libcurl3-gnutls libcurl4-openssl-dev
gem install stash-query

and then you should be good to go. The three commands:

  1. Install Ruby
  2. Install the curl dependencies for Ruby, because stash-query works via the Elasticsearch REST API
  3. Install stash-query

This blog post describes how to build it as well:

https://robbydyer.wordpress.com/2014/08/25/exporting-from-kibana/

Solution 7 - Csv

If you are using Kibana (the Discover app in general), you can build your query in the UI, save it, and then use Share -> CSV Reports. This creates a CSV file with one line per record and comma-separated columns.

Solution 8 - Csv

You can use elasticsearch2csv, a small and effective Python 3 script that uses the Elasticsearch scroll API and handles big query responses.
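If you would rather roll your own, the official Python client ships a scan helper that wraps the same scroll API and pages through arbitrarily large result sets. A minimal sketch (the index, query, and field names here are assumptions for illustration):

import csv

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])

with open("export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "title"])
    # helpers.scan wraps the scroll API and yields every matching hit
    for hit in helpers.scan(es, index="myindex",
                            query={"query": {"match_all": {}}}):
        writer.writerow([hit["_id"], hit["_source"].get("title", "")])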

Solution 9 - Csv

You can use Clinton Gormley's gist, also linked in Solution 4. It's simple, it's in Perl, and you can get some help from it. Please download it and see the usage on GitHub: https://gist.github.com/clintongormley/2049562

Or if you want Java, then go for elasticsearch-river-csv.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type           Original Author          Original Content on Stackoverflow
Question               Ramaraj Karuppusamy      View Question on Stackoverflow
Solution 1 - Csv       anarchivist              View Answer on Stackoverflow
Solution 2 - Csv       Jeff Steinmetz           View Answer on Stackoverflow
Solution 3 - Csv       Tikki                    View Answer on Stackoverflow
Solution 4 - Csv       jamesc                   View Answer on Stackoverflow
Solution 5 - Csv       miku                     View Answer on Stackoverflow
Solution 6 - Csv       pandaadb                 View Answer on Stackoverflow
Solution 7 - Csv       stelios.anastasakis      View Answer on Stackoverflow
Solution 8 - Csv       Maoz Zadok               View Answer on Stackoverflow
Solution 9 - Csv       user2431227              View Answer on Stackoverflow