Change limit for "Mysql Row size too large"
Mysql Problem Overview
How can I change the limit that triggers this error?

```
Row size too large (> 8126). Changing some columns to TEXT or BLOB or using
ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format,
BLOB prefix of 768 bytes is stored inline.
```
Table:

```
id           int(11)      No
name         text         No
date         date         No
time         time         No
schedule     int(11)      No
category     int(11)      No
top_a        varchar(255) No
top_b        varchar(255) No
top_c        varchar(255) No
top_d        varchar(255) No
top_e        varchar(255) No
top_f        varchar(255) No
top_g        varchar(255) No
top_h        varchar(255) No
top_i        varchar(255) No
top_j        varchar(255) No
top_title_a  varchar(255) No
top_title_b  varchar(255) No
top_title_c  varchar(255) No
top_title_d  varchar(255) No
top_title_e  varchar(255) No
top_title_f  varchar(255) No
top_title_g  varchar(255) No
top_title_h  varchar(255) No
top_title_i  varchar(255) No
top_title_j  varchar(255) No
top_desc_a   text         No
top_desc_b   text         No
top_desc_c   text         No
top_desc_d   text         No
top_desc_e   text         No
top_desc_f   text         No
top_desc_g   text         No
top_desc_h   text         No
top_desc_i   text         No
top_desc_j   text         No
status       int(11)      No
admin_id     int(11)      No
```
Mysql Solutions
Solution 1 - Mysql
The question has been asked on serverfault too.
> You may want to take a look at this article which explains a lot
> about MySQL row sizes. It's important to note that even if you use
> TEXT or BLOB fields, your row size could still be over 8K (limit for
> InnoDB) because it stores the first 768 bytes for each field inline in
> the page.
>
> The simplest way to fix this is to use the Barracuda file format
> with InnoDB. This basically gets rid of the problem altogether by
> only storing the 20 byte pointer to the text data instead of storing
> the first 768 bytes.
The method that worked for the OP there was:

- Add the following to the my.cnf file under the [mysqld] section:

```
innodb_file_per_table=1
innodb_file_format=Barracuda
```

- ALTER the table to use ROW_FORMAT=COMPRESSED:

```sql
ALTER TABLE nombre_tabla ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```
There is a possibility that the above still does not resolve your issues. It is a known (and verified) bug with the InnoDB engine, and a temporary fix for now is to fall back to the MyISAM engine for internal temporary tables. So, in your my.cnf file:

```
internal_tmp_disk_storage_engine=MyISAM
```
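To confirm that the new row format actually took effect after the ALTER, you can query information_schema (the schema and table names below are placeholders):

```sql
-- Check the engine and row format applied to the rebuilt table:
SELECT TABLE_NAME, ENGINE, ROW_FORMAT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database'
  AND TABLE_NAME = 'nombre_tabla';
-- ROW_FORMAT should now read "Compressed".
```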
Solution 2 - Mysql
I ran into this problem recently and solved it a different way. If you are running MySQL version 5.6.20, there is a known bug. See the MySQL docs:

> Important
> Due to Bug #69477, redo log writes for large, externally stored BLOB fields could overwrite the most
> recent checkpoint. To address this bug, a patch introduced in MySQL 5.6.20 limits the size of redo
> log BLOB writes to 10% of the redo log file size. As a result of this limit, innodb_log_file_size
> should be set to a value greater than 10 times the largest BLOB data size found in the rows of your
> tables plus the length of other variable length fields (VARCHAR, VARBINARY, and TEXT type fields).
In my situation the offending blob table was around 16MB. Thus, the way I solved it was by adding a line to my.cnf that ensured I had at least 10x that amount and then some:
```
innodb_log_file_size = 256M
```
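If you want to size the redo log against your actual data rather than guess, you can check the current setting and measure your largest BLOB value; `my_table` and `blob_col` below are placeholder names:

```sql
-- Current redo log file size, in bytes:
SHOW VARIABLES LIKE 'innodb_log_file_size';

-- Largest stored value in the offending BLOB/TEXT column, in bytes,
-- to compare against one tenth of the redo log size:
SELECT MAX(LENGTH(blob_col)) AS largest_blob_bytes
FROM my_table;
```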
Solution 3 - Mysql
Set the following in your my.cnf file and restart the MySQL server:

```
innodb_strict_mode=0
```

Note that this only downgrades the error to a warning at table-creation time; a row that actually exceeds the limit can still fail when you insert it.
Solution 4 - Mysql
If you can switch the ENGINE and use MyISAM instead of InnoDB, that should help:
```
ENGINE=MyISAM
```
There are two caveats with MyISAM (arguably more):
- You can't use transactions.
- You can't use foreign key constraints.
Solution 5 - Mysql
I had the same issue, this solved it for me:
```sql
ALTER TABLE `my_table` ROW_FORMAT=DYNAMIC;
```
From the MySQL documentation:

> The DYNAMIC row format maintains the efficiency of storing the entire
> row in the index node if it fits (as do the COMPACT and REDUNDANT
> formats), but this new format avoids the problem of filling B-tree
> nodes with a large number of data bytes of long columns. The DYNAMIC
> format is based on the idea that if a portion of a long data value is
> stored off-page, it is usually most efficient to store all of the
> value off-page. With DYNAMIC format, shorter columns are likely to
> remain in the B-tree node, minimizing the number of overflow pages
> needed for any given row.
Solution 6 - Mysql
I would like to share an awesome answer; it might be helpful. Credit to Bill Karwin: https://dba.stackexchange.com/questions/6598/innodb-create-table-error-row-size-too-large

The limits vary by InnoDB file format. At present there are two formats, called Antelope and Barracuda.
The central tablespace file (ibdata1) is always in Antelope format. If you use file-per-table, you can make the individual files use Barracuda format by setting innodb_file_format=Barracuda in my.cnf.
Basic points:

- One 16KB page of InnoDB data must hold at least two rows of data. Plus each page has a header and a footer containing page checksums, the log sequence number, and so on. That's where you get your limit of a bit less than 8KB per row.
- Fixed-size data types like INTEGER, DATE, FLOAT, CHAR are stored on this primary data page and count toward the row size limit.
- Variable-sized data types like VARCHAR, TEXT, BLOB are stored on overflow pages, so they don't count fully toward the row size limit. In Antelope, up to 768 bytes of such columns are stored on the primary data page in addition to being stored on the overflow page. Barracuda supports a dynamic row format, so it may store only a 20-byte pointer on the primary data page.
- Variable-size data types are also prefixed with 1 or more bytes to encode the length. And the InnoDB row format also has an array of field offsets. So there's an internal structure more or less documented in their wiki.
Barracuda also supports a ROW_FORMAT=COMPRESSED to gain further storage efficiency for overflow data.
I also have to comment that I've never seen a well-designed table exceed the row size limit. It's a strong "code smell" that you're violating the repeating groups condition of First Normal Form.
Solution 7 - Mysql
After spending hours I found the solution: just run the following SQL in your MySQL admin to convert the table to MyISAM:

```sql
USE db_name;
ALTER TABLE table_name ENGINE=MYISAM;
```
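To confirm the engine switch took effect (the table name is a placeholder):

```sql
-- The Engine column of the result should now read "MyISAM":
SHOW TABLE STATUS WHERE Name = 'table_name';
```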
Solution 8 - Mysql
The maximum row size for an InnoDB table, which applies to data stored locally within a database page, is slightly less than half a page for 4KB, 8KB, 16KB, and 32KB page sizes.
For 16KB pages (the default), we can calculate:

```
8126 bytes (slightly less than half a page) / 767 bytes (overflow threshold)
  ≈ 10.59 fields of 767 bytes maximum
```
Basically, you could max out a row with:
- 11 varchar fields > 767 characters (latin1 = 1 byte per char) or
- 11 varchar fields > 255 characters (utf-8 on mysql = 3 bytes per char).
Remember, a column only moves to an overflow page if the value is larger than 767 bytes. If there are too many columns holding close to 767 bytes each, the row busts the maximum row size. That's unusual with latin1, but very possible with utf-8 if the developers aren't careful.
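One way to sanity-check which columns eat into the in-page limit is to list each variable-length column's maximum byte length from information_schema. This sketch assumes the Antelope-style 768-byte inline prefix; the schema and table names are placeholders:

```sql
-- Worst-case in-page bytes per variable-length column: a value longer
-- than the prefix still leaves up to 768 bytes stored inline.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_OCTET_LENGTH,
       LEAST(CHARACTER_OCTET_LENGTH, 768) AS in_page_bytes
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'your_database'
  AND TABLE_NAME = 'your_table'
  AND CHARACTER_OCTET_LENGTH IS NOT NULL
ORDER BY in_page_bytes DESC;
```

Summing `in_page_bytes` plus the fixed-length columns gives a rough estimate to compare against the ~8126-byte limit.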
For this case, I think you could possibly bump innodb_page_size to 32KB. Note that the page size can only be set when the MySQL data directory is initialized; it cannot be changed on an existing instance.

In my.cnf:

```
innodb_page_size=32K
```
References:
- https://dev.mysql.com/doc/refman/8.0/en/column-count-limit.html#row-size-limits
- https://medium.com/@adamhooper/in-mysql-never-use-utf8-use-utf8mb4-11761243e434
- https://mariadb.com/kb/en/library/innodb-compact-row-format/#overflow-pages-with-the-compact-row-format
- https://stackoverflow.com/questions/55843820/innodb-page-size-setting
Solution 9 - Mysql
I ran into this issue when I was trying to restore a backed-up MySQL database on a different server. What solved it for me was adding certain settings to my.cnf (as in the answers above) and additionally changing the SQL backup file:

Step 1: add or edit the following lines in my.cnf:

```
innodb_page_size=32K
innodb_file_format=Barracuda
innodb_file_per_table=1
```
Step 2: add ROW_FORMAT=DYNAMIC to the CREATE TABLE statement in the SQL backup file for the table that is causing this error:

```sql
DROP TABLE IF EXISTS `problematic_table`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `problematic_table` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  ...
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 ROW_FORMAT=DYNAMIC;
```

The important change above is ROW_FORMAT=DYNAMIC (it was not included in the original SQL backup file).
source that helped me to resolve this issue: MariaDB and InnoDB MySQL Row size too large
Solution 10 - Mysql
You need to make some changes to the my.ini file.

Add this under [mysqld]:

```
innodb_strict_mode=0
```

Update these two lines:

```
innodb_log_file_size=256M
innodb_log_buffer_size=256M
```

innodb_strict_mode: when it is enabled, certain InnoDB warnings become errors instead.

Reference: https://mariadb.com/kb/en/innodb-strict-mode/

innodb_log_file_size and innodb_log_buffer_size need to be increased in size.
Solution 11 - Mysql
The other answers address the question asked. I will address the underlying cause: poor schema design.
Do not splay an array across columns. Here you have 3*10 columns that should be turned into 10 rows of 3 columns in a new table (plus id, etc.).
Your Main table would have only:

```
id        int(11) No
name      text    No
date      date    No
time      time    No
schedule  int(11) No
category  int(11) No
status    int(11) No
admin_id  int(11) No
```
Your extra table (Top) would have:

```
id     int(11)          No  -- for joining to Main
seq    TINYINT UNSIGNED     -- containing 1..10
img    varchar(255)     No
title  varchar(255)     No
desc   text             No
PRIMARY KEY(id, seq)         -- so you can easily find the 10 top_titles
```
There would be 10 (or fewer? or more?) rows in Top for each id.
This eliminates your original problem, and cleans up the schema. (This is not "normalization", as debated in some of the Comments.)
Do not switch to MyISAM; it is going away.
Don't worry about ROW_FORMAT.
You will need to change your code to do the JOIN and to handle multiple rows instead of multiple columns.
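A minimal sketch of the split schema described above (the names follow the answer; `desc` needs backticks because it is a reserved word in MySQL):

```sql
CREATE TABLE Main (
  id        INT NOT NULL AUTO_INCREMENT,
  name      TEXT,
  date      DATE,
  time      TIME,
  schedule  INT,
  category  INT,
  status    INT,
  admin_id  INT,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE Top (
  id     INT NOT NULL,              -- joins to Main.id
  seq    TINYINT UNSIGNED NOT NULL, -- containing 1..10
  img    VARCHAR(255),
  title  VARCHAR(255),
  `desc` TEXT,
  PRIMARY KEY (id, seq)
) ENGINE=InnoDB;

-- Fetch one record together with its "top" entries:
SELECT m.name, t.seq, t.title, t.`desc`
FROM Main AS m
JOIN Top  AS t ON t.id = m.id
WHERE m.id = 1
ORDER BY t.seq;
```

Each row now holds at most one img/title/desc triple, so the per-row size stays far below the InnoDB limit.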
Solution 12 - Mysql
I am using MySQL 5.6 on AWS RDS. I updated following in parameter group.
```
innodb_file_per_table=1
innodb_file_format=Barracuda
```
I had to reboot DB instance for parameter group changes to be in effect.
Also, ROW_FORMAT=COMPRESSED was not supported. I used DYNAMIC as below and it worked fine.
```sql
ALTER TABLE nombre_tabla ENGINE=InnoDB ROW_FORMAT=DYNAMIC KEY_BLOCK_SIZE=8;
```
Solution 13 - Mysql
I keep running into this issue when I destroy my Laravel Homestead (Vagrant) box and start fresh.
- SSH into the box from the command line:

```
homestead ssh
```

- Open the my.cnf file:

```
sudo vi /etc/mysql/my.cnf
```

- Add the lines below to the bottom of the file (below the !includedir):

```
[mysqld]
innodb_log_file_size=512M
innodb_strict_mode=0
```

- Save the changes to my.cnf, then restart MySQL:

```
sudo service mysql restart
```
Solution 14 - Mysql
To solve this problem, change the table's storage engine from InnoDB to MyISAM in phpMyAdmin.
Solution 15 - Mysql
I also encountered the same problem. I solved it by executing the following SQL (substitute your table name):

```sql
ALTER TABLE ${table} ROW_FORMAT=COMPRESSED;
```
But I think you should know about row storage.

There are two kinds of columns: variable-length columns (such as VARCHAR, VARBINARY, and the BLOB and TEXT types) and fixed-length columns. They are stored in different types of pages.
> Variable-length columns are an exception to this rule. Columns such as
> BLOB and VARCHAR that are too long to fit on a B-tree page are stored on
> separately allocated disk pages called overflow pages. We call such
> columns off-page columns. The values of these columns are stored in
> singly-linked lists of overflow pages, and each such column has its own
> list of one or more overflow pages. In some cases, all or a prefix of the
> long column value is stored in the B-tree, to avoid wasting storage and
> eliminating the need to read a separate page.
And the purpose of setting ROW_FORMAT is:

> When a table is created with ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED,
> InnoDB can store long variable-length column values (for VARCHAR,
> VARBINARY, and BLOB and TEXT types) fully off-page, with the clustered
> index record containing only a 20-byte pointer to the overflow page.
See the MySQL documentation if you want to know more about the DYNAMIC and COMPRESSED row formats.
Solution 16 - Mysql
If this occurs on a SELECT with many columns, the cause can be that MySQL is creating a temporary table. If this table is too large to fit in memory, it will use its default on-disk temporary table format, which is InnoDB, to store it on disk. In this case the InnoDB size limits apply.

You then have four options:

- Change the InnoDB row size limit as stated in another post, which requires reinitialization of the server.
- Change your query to include fewer columns, or avoid causing it to create a temporary table (e.g. by removing ORDER BY and LIMIT clauses).
- Change max_heap_table_size to be large enough so the result fits in memory and does not need to be written to disk.
- Change the default on-disk temporary table format to MyISAM; this is what I did. Change in my.cnf:

```
internal_tmp_disk_storage_engine=MYISAM
```

Restart MySQL, and the query works.
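If you instead keep the temporary table in memory (the max_heap_table_size option above), note that the in-memory limit is the smaller of tmp_table_size and max_heap_table_size, so both usually need raising; 256M here is just an illustrative value:

```sql
-- Raise both limits for the current session so the implicit temporary
-- table stays in memory instead of spilling to disk:
SET SESSION tmp_table_size       = 256 * 1024 * 1024;
SET SESSION max_heap_table_size  = 256 * 1024 * 1024;
```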
Solution 17 - Mysql
Here is a simple tip for anyone interested.

After upgrading from Debian 9 to Debian 10 with 10.3.17-MariaDB, I got errors like this from Joomla databases:

```
[Warning] InnoDB: Cannot add field `field` in table `database`.`table`
because after adding it, the row size is 8742 which is greater than
maximum allowed size (8126) for a record on index leaf page.
```

Just in case, I set innodb_default_row_format = DYNAMIC in /etc/mysql/mariadb.conf.d/50-server.cnf (it was the default anyway).

Then I used phpMyAdmin to run "Optimize table" for all the tables in the Joomla database. I think the table recreation done by phpMyAdmin in the process helped. If you happen to have phpMyAdmin installed, it is just a few clicks.
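If you prefer plain SQL to clicking through phpMyAdmin, the same rebuild can be done per table ('table_name' is a placeholder):

```sql
-- For InnoDB, OPTIMIZE TABLE is mapped to a table rebuild, which
-- re-creates the table and applies the current innodb_default_row_format:
OPTIMIZE TABLE table_name;
```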
Solution 18 - Mysql
On MySQL 5.6, execute the following SQL commands:

```sql
SET GLOBAL innodb_file_format=Barracuda;
ALTER TABLE `name_of_my_table_here` ENGINE=InnoDB ROW_FORMAT=DYNAMIC KEY_BLOCK_SIZE=8;
```