What does Statement.setFetchSize(nSize) method really do in SQL Server JDBC driver?

Java, SQL Server, JDBC

Java Problem Overview


I have a really big table that gains several million records every day, and at the end of every day I extract all the records of the previous day. I am doing this like:

String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
ResultSet results = statement.executeQuery(SQL);

The problem is that this program takes about 2 GB of memory, because it loads all the results into memory before processing them.

I tried setting Statement.setFetchSize(10), but it takes exactly the same amount of memory from the OS; it makes no difference. I am using the Microsoft SQL Server 2005 JDBC Driver.

Is there any way to read the results in small chunks, the way the Oracle driver does, where the query initially returns only a few rows and more results are fetched as you scroll through them?

Java Solutions


Solution 1 - Java

In JDBC, the setFetchSize(int) method is very important to performance and memory-management within the JVM as it controls the number of network calls from the JVM to the database and correspondingly the amount of RAM used for ResultSet processing.

If setFetchSize(10) is being called and the driver is ignoring it, there are really only two options:

  1. Try a different JDBC driver that will honor the fetch-size hint.
  2. Look at driver-specific properties on the Connection (URL and/or property map when creating the Connection instance).

The RESULT-SET is the full set of rows marshalled on the DB in response to the query. The ROW-SET is the chunk of rows that is fetched out of the RESULT-SET per call from the JVM to the DB. The number of these calls, and the RAM required for processing, depend on the fetch-size setting.

So if the RESULT-SET has 100 rows and the fetch-size is 10, there will be 10 network calls to retrieve all of the data, using roughly 10*{row-content-size} RAM at any given time.

The default fetch-size is 10, which is rather small. In the case posted, it would appear the driver is ignoring the fetch-size setting and retrieving all data in one call (a large RAM requirement, but the minimum number of network calls).

What happens underneath ResultSet.next() is that it doesn't actually fetch one row at a time from the RESULT-SET. It reads the next row from the (local) ROW-SET, and (invisibly) fetches the next ROW-SET from the server once the current one is exhausted on the local client.

All of this depends on the driver as the setting is just a 'hint' but in practice I have found this is how it works for many drivers and databases (verified in many versions of Oracle, DB2 and MySQL).
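As a rough sketch of how the hint is typically applied, assuming the question's placeholder query and made-up connection details (URL, credentials and table are not from the original post), the only moving parts are the forward-only statement and the fetch size:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeSketch {
    public static void main(String[] args) throws Exception {
        // Connection URL and credentials are placeholders.
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost:1433;databaseName=mydb", "user", "password");
             Statement stmt = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {

            // Hint: pull the ROW-SET from the server roughly 1000 rows per network call.
            // Whether this is honored is driver-dependent.
            stmt.setFetchSize(1000);

            try (ResultSet rs = stmt.executeQuery(
                    "select col1, col2, coln from mytable where timecol = yesterday")) {
                while (rs.next()) {
                    // only the current ROW-SET is held in client memory while iterating
                }
            }
        }
    }
}

With the Microsoft driver this alone may not be enough; see the cursor and response-buffering options discussed in Solutions 5 and 7 below.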

Solution 2 - Java

The fetchSize parameter is a hint to the JDBC driver as to how many rows to fetch in one go from the database. But the driver is free to ignore this and do what it sees fit. Some drivers, like the Oracle one, fetch rows in chunks, so you can read very large result sets without needing lots of memory. Other drivers just read in the whole result set in one go, and I'm guessing that's what your driver is doing.

You can try upgrading to the SQL Server 2008 version of the driver (which might behave better), or switching to the open-source jTDS driver.
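For reference, a hedged sketch of what switching to jTDS looks like; only the driver class and the URL scheme change (host, port, database and credentials below are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class JtdsConnectSketch {
    public static Connection open() throws Exception {
        // Optional on modern JVMs (the driver self-registers via the service loader).
        Class.forName("net.sourceforge.jtds.jdbc.Driver");
        return DriverManager.getConnection(
                "jdbc:jtds:sqlserver://dbhost:1433/mydb", "user", "password");
    }
}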

Solution 3 - Java

You need to ensure that auto-commit on the Connection is turned off, or setFetchSize will have no effect.

dbConnection.setAutoCommit(false);

Edit: I remembered that when I used this fix it was Postgres-specific, but hopefully it will still work for SQL Server.
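For reference, the PostgreSQL driver documents that it only streams rows when auto-commit is off, the statement is forward-only and a positive fetch size is set; a minimal sketch of that combination (connection details are placeholders, the query is the one from the question):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PostgresCursorFetch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            con.setAutoCommit(false);  // without this the driver materializes the whole result
            try (Statement stmt = con.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                stmt.setFetchSize(50); // rows now arrive from the server 50 at a time
                try (ResultSet rs = stmt.executeQuery(
                        "select col1, col2, coln from mytable where timecol = yesterday")) {
                    while (rs.next()) {
                        // process a row; memory stays bounded by the fetch size
                    }
                }
            }
        }
    }
}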

Solution 4 - Java

From the Statement interface documentation:

> void setFetchSize(int rows)
> Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.

Also read the ebook J2EE and Beyond by Art Taylor.

Solution 5 - Java

Sounds like mssql jdbc is buffering the entire result set for you. You can add a connection string parameter saying selectMethod=cursor or responseBuffering=adaptive. If you are on version 2.0+ of the 2005 mssql jdbc driver, then response buffering should default to adaptive.

http://msdn.microsoft.com/en-us/library/bb879937.aspx
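A hedged sketch of what those connection-string settings look like (server, database and credentials are placeholders); with adaptive buffering the driver discards rows it has already handed to you instead of caching the entire result:

import java.sql.Connection;
import java.sql.DriverManager;

public class AdaptiveBufferingSketch {
    public static Connection open() throws Exception {
        // Either property on its own changes the buffering behaviour; they can also
        // be set as data-source properties instead of in the URL.
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=mydb;"
                + "responseBuffering=adaptive;selectMethod=cursor";
        return DriverManager.getConnection(url, "user", "password");
    }
}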

Solution 6 - Java

It sounds to me like you really want to limit the rows being returned by your query and page through the results. If so, you can do something like:

select * from (select rownum myrow, a.* from TEST1 a )
where myrow between 5 and 10;

You just have to determine your boundaries.
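Note that rownum is Oracle syntax; on SQL Server 2005 the same idea is usually expressed with ROW_NUMBER() in a derived table. A hedged sketch of paging from Java under that assumption (table, columns and the date parameter are placeholders, not from the original answer):

import java.sql.Connection;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PagingSketch {
    // Reads one page of rows; 'from' and 'to' are 1-based, inclusive row numbers.
    static void readPage(Connection con, Date day, int from, int to) throws Exception {
        String sql =
              "select col1, col2, coln from ("
            + "  select col1, col2, coln,"
            + "         row_number() over (order by col1) as myrow"
            + "  from mytable where timecol = ?"
            + ") paged where myrow between ? and ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setDate(1, day);
            ps.setInt(2, from);
            ps.setInt(3, to);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process one page of rows at a time
                }
            }
        }
    }
}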

Solution 7 - Java

Try this:

String SQL = "select col1, col2, coln from mytable where timecol = yesterday";

connection.setAutoCommit(false);
PreparedStatement stmt = connection.prepareStatement(SQL,
        SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY,
        SQLServerResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(2000);

stmt.set....

stmt.execute();
ResultSet rset = stmt.getResultSet();

while (rset.next()) {
    // ......
}

Solution 8 - Java

I had the exact same problem in a project. The issue is that even though the fetch size might be small enough, JdbcTemplate reads the entire result of your query and maps it into one huge list, which can blow up your memory. I ended up extending NamedParameterJdbcTemplate to create a function that returns a Stream of objects. That Stream is backed by the ResultSet normally returned by JDBC, but pulls data from the ResultSet only as the Stream requires it. This works as long as you don't keep references to all the objects the Stream emits.

I took a lot of inspiration from the implementation of org.springframework.jdbc.core.JdbcTemplate#execute(org.springframework.jdbc.core.ConnectionCallback). The only real difference is in what is done with the ResultSet. I ended up writing this function to wrap up the ResultSet:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Consumer;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

import org.springframework.jdbc.core.RowMapper;

private <T> Stream<T> wrapIntoStream(ResultSet rs, RowMapper<T> mapper) {
    CustomSpliterator<T> spliterator = new CustomSpliterator<>(rs, mapper, Long.MAX_VALUE,
            Spliterator.NONNULL | Spliterator.IMMUTABLE | Spliterator.ORDERED);
    Stream<T> stream = StreamSupport.stream(spliterator, false);
    return stream;
}

private static class CustomSpliterator<T> extends Spliterators.AbstractSpliterator<T> {
    // won't put code for the constructor or the rs/mapper/rowNumber fields here
    // the idea is to pull rows from the ResultSet and feed them into the Stream
    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        try {
            // you can add some logic here to close the Stream/ResultSet automatically
            if (rs.next()) {
                T mapped = mapper.mapRow(rs, rowNumber++);
                action.accept(mapped);
                return true;
            } else {
                return false;
            }
        } catch (SQLException e) {
            // do something with this exception; rethrowing keeps tryAdvance simple
            throw new RuntimeException(e);
        }
    }
}

You can add some logic to make that Stream "auto-closeable"; otherwise don't forget to close it (and the underlying ResultSet) when you are done.
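A hedged usage sketch, assuming the wrapIntoStream helper above plus illustrative MyRow and rowMapper names; the try-with-resources closes the Stream, but the underlying ResultSet still has to be closed by whoever created it:

// MyRow, rowMapper and resultSet are illustrative names, not part of the original answer.
try (Stream<MyRow> rows = wrapIntoStream(resultSet, rowMapper)) {
    rows.forEach(row -> {
        // handle one mapped row at a time; nothing accumulates in memory
    });
}
// the ResultSet (and its Statement/Connection) still needs to be closed separately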

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Cofi | View Question on Stackoverflow
Solution 1 - Java | Darrell Teague | View Answer on Stackoverflow
Solution 2 - Java | skaffman | View Answer on Stackoverflow
Solution 3 - Java | jwaddell | View Answer on Stackoverflow
Solution 4 - Java | KV Prajapati | View Answer on Stackoverflow
Solution 5 - Java | Ron | View Answer on Stackoverflow
Solution 6 - Java | Brian Agnew | View Answer on Stackoverflow
Solution 7 - Java | Sergio Sierra | View Answer on Stackoverflow
Solution 8 - Java | ChaudPain | View Answer on Stackoverflow