JPA: what is the proper pattern for iterating over large result sets?

Java, Hibernate, JPA

Java Problem Overview


Let's say I have a table with millions of rows. Using JPA, what's the proper way to iterate over a query against that table, such that I don't end up with an in-memory List containing millions of objects?

For example, I suspect that the following will blow up if the table is large:

List<Model> models = entityManager().createQuery("from Model m", Model.class).getResultList();

for (Model model : models)
{
     System.out.println(model.getId());
}

Is pagination (looping and manually updating setFirstResult()/setMaxResults()) really the best solution?

Edit: the primary use-case I'm targeting is a kind of batch job. It's fine if it takes a long time to run. There is no web client involved; I just need to "do something" for each row, one (or some small N) at a time. I'm just trying to avoid having them all in memory at the same time.

Java Solutions


Solution 1 - Java

Page 537 of Java Persistence with Hibernate gives a solution using ScrollableResults, but alas it's only for Hibernate.

So it seems that using setFirstResult/setMaxResults and manual iteration really is necessary. Here's my solution using JPA:

private List<Model> getAllModelsIterable(int offset, int max)
{
    return entityManager.createQuery("from Model m", Model.class)
            .setFirstResult(offset)
            .setMaxResults(max)
            .getResultList();
}

Then use it like this:

private void iterateAll()
{
    int offset = 0;

    List<Model> models;
    while ((models = getAllModelsIterable(offset, 100)).size() > 0)
    {
        entityManager.getTransaction().begin();
        for (Model model : models)
        {
            log.info("do something with model: " + model.getId());
        }

        entityManager.flush();
        entityManager.clear();
        entityManager.getTransaction().commit();
        offset += models.size();
    }
}

Solution 2 - Java

I tried the answers presented here, but JBoss 5.1 + MySQL Connector/J 5.1.15 + Hibernate 3.3.2 didn't work with them. We've just migrated from JBoss 4.x to JBoss 5.1, so we're stuck with it for now, and thus the latest Hibernate we can use is 3.3.2.

Adding a couple of extra parameters did the job, and code like this runs without OOMEs:

StatelessSession session = ((Session) entityManager.getDelegate())
        .getSessionFactory().openStatelessSession();

Query query = session
        .createQuery("SELECT a FROM Address a WHERE .... ORDER BY a.id");
query.setFetchSize(Integer.valueOf(1000));
query.setReadOnly(true);
query.setLockMode("a", LockMode.NONE);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
while (results.next()) {
    Address addr = (Address) results.get(0);
    // Do stuff
}
results.close();
session.close();

The crucial lines are the query parameters set between createQuery and scroll. Without them, the scroll() call tries to load everything into memory and either never finishes or runs into OutOfMemoryError.

Solution 3 - Java

You can't really do this in straight JPA; however, Hibernate has support for stateless sessions and scrollable result sets.

We routinely process billions of rows with its help.

Here is a link to documentation: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/batch.html#batch-statelesssession
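The linked chapter boils down to something like the following sketch, using the question's Model entity. This is a hedged illustration, not the book's exact code; method names like processModel are placeholders, and some API details (e.g. results.get(0)) vary across Hibernate versions:

```java
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

public class StatelessScrollExample {

    public static void processAll(SessionFactory sessionFactory) {
        // A StatelessSession has no first-level cache, so nothing accumulates
        StatelessSession session = sessionFactory.openStatelessSession();
        Transaction tx = session.beginTransaction();
        ScrollableResults results = session
                .createQuery("from Model m")
                .setFetchSize(1000)               // hint to the JDBC driver
                .scroll(ScrollMode.FORWARD_ONLY); // one-way cursor is enough
        try {
            while (results.next()) {
                Model model = (Model) results.get(0);
                // per-row work goes here; memory use stays flat
            }
        } finally {
            results.close();
            tx.commit();
            session.close();
        }
    }
}
```

Note that a stateless session performs no dirty checking and no cascading, so any updates must be issued explicitly.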

Solution 4 - Java

To be honest, I would suggest leaving JPA and sticking with JDBC (but certainly using a JdbcTemplate support class or the like). JPA (and other ORM providers/specifications) is not designed to operate on many objects within one transaction, as they assume everything loaded should stay in the first-level cache (hence the need for clear() in JPA).

I also recommend a more low-level solution because the overhead of ORM (reflection is only the tip of the iceberg) might be so significant that iterating over a plain ResultSet, even with some lightweight support like the mentioned JdbcTemplate, will be much faster.

JPA is simply not designed to perform operations on a large number of entities. You might play with flush()/clear() to avoid OutOfMemoryError, but consider this once again: you gain very little while paying the price of huge resource consumption.
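A plain-JDBC sketch of that approach (no Spring needed) might look like the following. Table and column names are invented for illustration; also note, as an assumption worth verifying for your driver, that MySQL Connector/J only truly streams when the fetch size is Integer.MIN_VALUE, whereas most other drivers treat a positive fetch size as a batching hint:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcStreaming {

    public static void processAll(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM model",   // hypothetical table/columns
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY)) {
            ps.setFetchSize(1000); // fetch in batches instead of all at once
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    long id = rs.getLong("id");
                    // per-row work here; only one batch is in memory
                }
            }
        }
    }
}
```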

Solution 5 - Java

If you use EclipseLink, you can use this method to get the result as an Iterable:

private static <T> Iterable<T> getResult(TypedQuery<T> query)
{
  // EclipseLink-specific: stream results through a forward-only cursor
  if (query instanceof JpaQuery) {
    JpaQuery<T> jQuery = (JpaQuery<T>) query;
    jQuery.setHint(QueryHints.RESULT_SET_TYPE, ResultSetType.ForwardOnly)
          .setHint(QueryHints.SCROLLABLE_CURSOR, true);

    final Cursor cursor = jQuery.getResultCursor();
    return new Iterable<T>()
    {
      @SuppressWarnings("unchecked")
      @Override
      public Iterator<T> iterator()
      {
        return cursor;
      }
    };
  }
  return query.getResultList();
}
  

close Method

static void closeCursor(Iterable<?> list)
{
  // Grab the iterator once; for the cursor-backed Iterable above,
  // iterator() always returns the same underlying Cursor.
  Iterator<?> it = list.iterator();
  if (it instanceof Cursor)
  {
    ((Cursor) it).close();
  }
}

Solution 6 - Java

It depends upon the kind of operation you have to do. Why are you looping over millions of rows? Are you updating something in batch mode? Are you going to display all records to a client? Are you computing some statistics upon the retrieved entities?

If you are going to display a million records to the client, please reconsider your user interface. In this case, the appropriate solution is paginating your results and using setFirstResult() and setMaxResults().

If you have to launch an update of a large number of records, you'd better keep the update simple and use Query.executeUpdate(). Optionally, you can execute the update in asynchronous mode using a Message-Driven Bean or a Work Manager.

If you are computing some statistics upon the retrieved entities, you can take advantage of the grouping functions defined by the JPA specification.
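For the bulk-update and statistics cases, minimal JPQL sketches might look like this. The Model fields (archived, createdAt, status, amount) and the cutoff variable are invented for illustration, and entityManager is assumed to be in scope:

```java
// Bulk update executed entirely in the database; no entities are loaded:
int updated = entityManager.createQuery(
        "update Model m set m.archived = true where m.createdAt < :cutoff")
        .setParameter("cutoff", cutoff)
        .executeUpdate();

// Statistics via JPA grouping/aggregate functions, again computed in the DB:
List<Object[]> stats = entityManager.createQuery(
        "select m.status, count(m), avg(m.amount) from Model m group by m.status",
        Object[].class)
        .getResultList();
```

Keep in mind that executeUpdate() bypasses the persistence context, so any entities already loaded may be stale afterwards.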

For any other case, please be more specific :)

Solution 7 - Java

There is no "proper" way to do this; this isn't what JPA or JDO or any other ORM is intended to do. Straight JDBC will be your best alternative, as you can configure it to bring back a small number of rows at a time and discard them as they are used; that is why server-side cursors exist.

ORM tools are not designed for bulk processing. They are designed to let you manipulate objects and attempt to make the RDBMS the data is stored in as transparent as possible, and most fail at the transparency part at least to some degree. At this scale, there is no way to process hundreds of thousands of rows (objects), much less millions, with any ORM and have it execute in any reasonable amount of time, because of the object-instantiation overhead, plain and simple.

Use the appropriate tool. Straight JDBC and Stored Procedures definitely have a place in 2011, especially at what they are better at doing versus these ORM frameworks.

Pulling a million of anything, even into a simple List<Integer>, is not going to be very efficient regardless of how you do it. The correct way to do what you are asking is a simple SELECT id FROM table, with the cursor set to SERVER SIDE (vendor dependent), FORWARD_ONLY, and READ_ONLY, and then iterate over that.

If you really are pulling millions of ids to process by calling some web server for each one, you are also going to have to do some concurrent processing for this to run in any reasonable amount of time. Pulling with a JDBC cursor, placing a few ids at a time in a ConcurrentLinkedQueue, and having a small pool of threads (# CPUs/cores + 1) pull and process them is the only way to complete your task on a machine with any "normal" amount of RAM, given that you are already running out of memory.
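The queue-and-thread-pool idea can be sketched in plain Java. In this simplified version a pre-built id list stands in for the JDBC cursor feed (in a real run a producer thread would drain the cursor into the queue while workers consume), and the per-id web-service call is left as a comment:

```java
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrentIdProcessing {

    public static int processAll(List<Long> ids, int threads) throws InterruptedException {
        ConcurrentLinkedQueue<Long> queue = new ConcurrentLinkedQueue<>(ids);
        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                Long id;
                // each worker polls until the queue is drained
                while ((id = queue.poll()) != null) {
                    // call the web service / do the per-row work with "id" here
                    processed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return processed.get();
    }
}
```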

See this answer as well.

Solution 8 - Java

You can use another "trick": load only a collection of identifiers of the entities you're interested in. Say the identifier is of type long (8 bytes); then a list of 10^6 such identifiers takes around 8 MB. If it is a batch process (one instance at a time), that's bearable. Then just iterate and do the job.

Another remark: you should do this in chunks anyway, especially if you modify records; otherwise the rollback segment in the database will grow.

As for the setFirstResult()/setMaxResults() strategy: it will be VERY, VERY slow for results far from the top.

Also take into consideration that the database is probably operating in read committed isolation, so to avoid phantom reads, load the identifiers first and then load the entities one by one (or 10 by 10, or whatever).
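The chunking part of this approach is plain list arithmetic; here is a sketch. The subsequent per-chunk fetch, e.g. a "from Model m where m.id in :ids" JPA query, is left as a comment since it needs a live EntityManager:

```java
import java.util.ArrayList;
import java.util.List;

public class IdChunking {

    // Split the id list into fixed-size chunks; each chunk then drives one
    // "where id in (:ids)" query, followed by entityManager.clear().
    public static <T> List<List<T>> partition(List<T> ids, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += chunkSize) {
            // copy the subList view so chunks stay valid independently
            chunks.add(new ArrayList<>(ids.subList(i, Math.min(i + chunkSize, ids.size()))));
        }
        return chunks;
    }
}
```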

Solution 9 - Java

I was surprised to see that the use of stored procedures was not more prominent in the answers here. In the past, when I've had to do something like this, I've created a stored procedure that processes data in small chunks, then sleeps for a bit, then continues. The reason for the sleeping is to avoid overwhelming the database, which is presumably also being used for more real-time types of queries, such as serving a web site. If no one else is using the database, then you can leave out the sleep. If you need to ensure that you process each record once and only once, then you will need to create an additional table (or field) to store which records you have processed, in order to be resilient across restarts.

The performance savings here can be significant, possibly orders of magnitude faster than anything you could do in JPA/Hibernate/app-server land, and your database server will most likely have its own server-side cursor mechanism for processing large result sets efficiently. The savings come from not having to ship the data from the database server to the application server, process it there, and then ship results back.

There are some significant downsides to using stored procedures which may completely rule this out for you, but if you've got that skill in your personal toolbox and can use it in this kind of situation, you can knock out these kinds of things fairly quickly.

Solution 10 - Java

To expand on @Tomasz Nurkiewicz's answer: you have access to the DataSource, which in turn can provide you with a connection.

@Resource(name = "myDataSource",
    lookup = "java:comp/DefaultDataSource")
private DataSource myDataSource;

In your code you have

try (Connection connection = myDataSource.getConnection()) {
    // raw jdbc operations
}

This will allow you to bypass JPA for specific large batch operations like import/export, while you still have access to the entity manager for other JPA operations if you need it.

Solution 11 - Java

Here's a simple, straight JPA example (in Kotlin) that shows how you can paginate over an arbitrarily large result set, reading chunks of 100 items at a time, without using a cursor (each cursor consumes resources on the database). It uses keyset pagination.

See https://use-the-index-luke.com/no-offset for the concept of keyset pagination, and https://www.citusdata.com/blog/2016/03/30/five-ways-to-paginate/ for a comparison of different ways to paginate along with their drawbacks.

/*
create table my_table(
  id int primary key, -- index will be created
  my_column varchar
)
*/

fun keysetPaginationExample() {
    var lastId = Integer.MIN_VALUE
    do {
        val someItems =
            myRepository.findTop100ByMyTableIdAfterOrderByMyTableId(lastId)

        if (someItems.isEmpty()) break

        lastId = someItems.last().myTableId

        for (item in someItems) {
            process(item)
        }
    } while (true)
}
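The same keyset loop can be written in plain Java without a Spring Data repository. In this sketch the page fetch is abstracted behind an interface so the loop logic can be shown (and tested) independently of the database; the JPQL in the comment is an assumed equivalent of the repository method above:

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

public class KeysetLoop {

    // Stand-in for a JPA query such as:
    //   em.createQuery("from Model m where m.id > :lastId order by m.id", Model.class)
    //     .setParameter("lastId", lastId).setMaxResults(100).getResultList()
    public interface PageFetcher<T> {
        List<T> fetchAfter(long lastId);
    }

    public static <T> long forEachByKeyset(PageFetcher<T> fetcher,
                                           Function<T, Long> idOf,
                                           Consumer<T> action) {
        long lastId = Long.MIN_VALUE;
        long total = 0;
        while (true) {
            List<T> page = fetcher.fetchAfter(lastId);
            if (page.isEmpty()) break;
            for (T item : page) {
                action.accept(item);
            }
            // remember the highest id seen; the next page starts after it
            lastId = idOf.apply(page.get(page.size() - 1));
            total += page.size();
        }
        return total;
    }
}
```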

Solution 12 - Java

An example with JPA and a native query, fetching fetchSize elements at a time using offsets:

public List<X> getXByFetching(int fetchSize) {
    int totalX = getTotalRows(Entity);
    List<X> result = new ArrayList<>();
    for (int offset = 0; offset < totalX; offset += fetchSize) {
        EntityManager entityManager = getEntityManager();
        String sql = getSqlSelect(Entity) + " OFFSET " + offset + " ROWS";
        Query query = entityManager.createNativeQuery(sql, X.class);
        query.setMaxResults(fetchSize);
        result.addAll(query.getResultList());
        entityManager.flush();
        entityManager.clear();
    }
    return result;
}

Solution 13 - Java

I have wondered this myself. It seems to matter:

  • how big your dataset is (rows)
  • what JPA implementation you are using
  • what kind of processing you are doing for each row.

I have written an Iterator to make it easy to swap between the two approaches (findAll vs. findEntries).

I recommend you try both.

Long count = entityManager().createQuery("select count(o) from Model o", Long.class).getSingleResult();
ChunkIterator<Model> it1 = new ChunkIterator<Model>(count, 2) {
	
	@Override
	public Iterator<Model> getChunk(long index, long chunkSize) {
		//Do your setFirst and setMax here and return an iterator.
	}
	
};

Iterator<Model> it2 = entityManager().createQuery("from Model m", Model.class).getResultList().iterator();


public static abstract class ChunkIterator<T> 
	extends AbstractIterator<T> implements Iterable<T>{
	private Iterator<T> chunk;
	private Long count;
	private long index = 0;
	private long chunkSize = 100;
	
	public ChunkIterator(Long count, long chunkSize) {
		super();
		this.count = count;
		this.chunkSize = chunkSize;
	}

	public abstract Iterator<T> getChunk(long index, long chunkSize);
	
	@Override
	public Iterator<T> iterator() {
		return this;
	}

	@Override
	protected T computeNext() {
		if (count == 0) return endOfData();
		if (chunk != null && chunk.hasNext() == false && index >= count) 
			return endOfData();
		if (chunk == null || chunk.hasNext() == false) {
			chunk = getChunk(index, chunkSize);
			index += chunkSize;
		}
		if (chunk == null || chunk.hasNext() == false) 
			return endOfData();
		return chunk.next();
	}
	
}

I ended up not using my chunk iterator (so it might not be that well tested). By the way, you will need Google Collections (Guava's AbstractIterator) if you want to use it.

Solution 14 - Java

Use the pagination concept for retrieving the result.

Solution 15 - Java

With hibernate there are 4 different ways to achieve what you want. Each has design tradeoffs, limitations, and consequences. I suggest exploring each and deciding which is right for your situation.

  1. Use stateless session with scroll()
  2. Use session.clear() after every iteration. When other entities need to be attached, load them in a separate session. Effectively the first session emulates the stateless session, but retains all the features of a stateful session until the objects are detached.
  3. Use iterate() or list() but get only ids in the first query, then in a separate session in each iteration, do session.load and close the session at the end of the iteration.
  4. Use Query.iterate() with EntityManager.detach(), a.k.a. Session.evict().
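As a rough sketch of option 2, assuming an open Hibernate Session named session and the question's Model entity (the batch size of 100 is an arbitrary choice):

```java
ScrollableResults results = session.createQuery("from Model m")
        .setFetchSize(100)
        .scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
while (results.next()) {
    Model model = (Model) results.get(0);
    // ... do the per-row work here ...
    if (++count % 100 == 0) {
        session.flush(); // push pending changes to the database
        session.clear(); // evict everything from the first-level cache
    }
}
results.close();
```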

Solution 16 - Java

Finally, the answer to what you want arrived in JPA 2.2's getResultStream(); in Hibernate (at least as of v5.4.30) it uses the ScrollableResults implementation mentioned above.

Your code can now look like this:

entityManager().createQuery("from Model m", Model.class)
        .getResultStream()
        .forEach(model -> System.out.println(model.getId()));

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Original authors (all content originally published on Stack Overflow):

  • Question: George Armhold
  • Solution 1 - Java: George Armhold
  • Solution 2 - Java: Zds
  • Solution 3 - Java: Cyberax
  • Solution 4 - Java: Tomasz Nurkiewicz
  • Solution 5 - Java: Filippo Rossoni
  • Solution 6 - Java: frm
  • Solution 7 - Java: user177800
  • Solution 8 - Java: Marcin Cinik
  • Solution 9 - Java: Danger
  • Solution 10 - Java: Archimedes Trajano
  • Solution 11 - Java: Elifarley
  • Solution 12 - Java: harryssuperman
  • Solution 13 - Java: Adam Gent
  • Solution 14 - Java: Dead Programmer
  • Solution 15 - Java: Larry Chu
  • Solution 16 - Java: mjaggard