Error Code: 1118

MariaDB Error 1118: Row Size Exceeds Limit

📦 MariaDB
📋

Description

This error indicates that the combined size of non-BLOB/TEXT columns in a table row has exceeded the maximum limit allowed by the storage engine. It typically occurs during `CREATE TABLE`, `ALTER TABLE`, or `INSERT` operations when the data being stored for a single row is too large.
💬

Error Message

Row size too large. The maximum row size for the used table type, not counting BLOBs, is %ld. You have to change some columns to TEXT or BLOBs
🔍

Known Causes

3 known causes
⚠️
Excessive Column Data
The sum of the lengths of all fixed-size and variable-length columns (excluding BLOB/TEXT) in a row exceeds the storage engine's maximum row size limit.
⚠️
Multi-byte Character Sets
Using multi-byte character sets (e.g., `utf8mb4`) increases the storage requirement per character, which can cause a row to exceed the size limit faster than with single-byte character sets.
⚠️
Storage Engine Constraints
Different MariaDB storage engines (e.g., InnoDB, MyISAM) have varying maximum row size limits, and exceeding the limit for the chosen engine will trigger this error.
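The first two causes can be illustrated with a hypothetical table (all names here are invented for the example): under `utf8mb4`, each character can require up to 4 bytes, so five `VARCHAR(4000)` columns declare roughly 80,000 bytes per row and exceed the 65,535-byte row-size ceiling, triggering error 1118. InnoDB may enforce an even lower in-page limit of roughly half a page.

```sql
-- Hypothetical example that fails with error 1118:
-- 5 columns x VARCHAR(4000) x up to 4 bytes/char (utf8mb4)
-- declares ~80,000 bytes per row, past the 65,535-byte limit.
CREATE TABLE wide_table (
    id INT PRIMARY KEY,
    c1 VARCHAR(4000),
    c2 VARCHAR(4000),
    c3 VARCHAR(4000),
    c4 VARCHAR(4000),
    c5 VARCHAR(4000)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- Changing one or more columns to TEXT (stored largely off-row)
-- brings the declared row size back under the limit.
```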
🛠️

Solutions

3 solutions available

1. Convert Wide Columns to TEXT/BLOB (difficulty: medium)

Reduces row size by moving large data types out of the main row.

1
Identify columns that are not frequently accessed or are large in nature. These are prime candidates for conversion.
SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'your_database_name' AND TABLE_NAME = 'your_table_name' AND (CHARACTER_MAXIMUM_LENGTH > 255 OR DATA_TYPE IN ('VARCHAR', 'VARBINARY'));
2
For each identified column, alter the table to change its data type to `TEXT` or `BLOB` (or a variant such as `MEDIUMTEXT` or `LONGBLOB`). In InnoDB, `TEXT` and `BLOB` values can be stored off-page, so only a short pointer counts toward the row-size limit, significantly reducing the main row size.
ALTER TABLE your_table_name MODIFY COLUMN your_column_name TEXT;
-- Or use other TEXT/BLOB types like MEDIUMTEXT, LONGTEXT, BLOB, MEDIUMBLOB, LONGBLOB depending on your data size requirements.
3
After altering, sanity-check the table's row size by looking at `AVG_ROW_LENGTH` (an estimate, particularly for InnoDB) and `DATA_LENGTH` in `information_schema.TABLES` for your table. The row size should now be within the limit.
SELECT TABLE_NAME, AVG_ROW_LENGTH, DATA_LENGTH FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'your_database_name' AND TABLE_NAME = 'your_table_name';
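To see which columns contribute most before and after a conversion, you can query `INFORMATION_SCHEMA.COLUMNS`; this is a rough sketch, since `CHARACTER_OCTET_LENGTH` gives only the declared maximum in bytes for string columns (already scaled for the character set) and ignores per-column storage overhead.

```sql
-- Worst-case byte contribution of each string column, largest first.
-- CHARACTER_OCTET_LENGTH accounts for the character set
-- (e.g. utf8mb4 multiplies the declared length by up to 4).
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_OCTET_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'your_database_name'
  AND TABLE_NAME = 'your_table_name'
ORDER BY CHARACTER_OCTET_LENGTH DESC;
```

Columns at the top of this list are the best candidates for conversion to `TEXT`/`BLOB`.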

2. Normalize Your Table Structure (difficulty: advanced)

Splits wide tables into smaller, related tables to reduce row size.

1
Analyze your table for columns that represent repeating groups of data or attributes that can be logically separated; this is a normalization exercise. Look in particular for groups of columns that are usually inserted or updated together, or that represent a one-to-many relationship with the main entity.
2
Create new tables for these separated attributes. The new table(s) will have a foreign key referencing the original table's primary key.
CREATE TABLE your_new_table_name (
    id INT AUTO_INCREMENT PRIMARY KEY,
    your_table_id INT,
    column_to_move1 VARCHAR(255),
    column_to_move2 INT,
    -- ... other columns
    FOREIGN KEY (your_table_id) REFERENCES your_table_name(id)
);
3
Populate the new table(s) with data from the original table, copying the relevant columns and setting the foreign key.
INSERT INTO your_new_table_name (your_table_id, column_to_move1, column_to_move2)
SELECT id, column_to_move1, column_to_move2 FROM your_table_name WHERE column_to_move1 IS NOT NULL OR column_to_move2 IS NOT NULL;
4
Remove the moved columns from the original table.
ALTER TABLE your_table_name DROP COLUMN column_to_move1;
ALTER TABLE your_table_name DROP COLUMN column_to_move2;
5
Update your application code to query and manipulate data across these newly related tables using JOINs.
SELECT t1.*, t2.column_to_move1 FROM your_table_name t1 JOIN your_new_table_name t2 ON t1.id = t2.your_table_id WHERE t1.id = 1;

3. Review and Optimize Column Definitions (difficulty: easy)

Reduces row size by using the most appropriate and smallest data types.

1
Examine each column in the problematic table. Look for columns that are defined with excessively large `VARCHAR` or `VARBINARY` lengths, or use `INT` when a smaller integer type would suffice.
SELECT COLUMN_NAME, DATA_TYPE, COLUMN_TYPE, CHARACTER_MAXIMUM_LENGTH, NUMERIC_PRECISION FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'your_database_name' AND TABLE_NAME = 'your_table_name';
2
For `VARCHAR` columns, reduce the declared length to the actual maximum expected for the data. For example, if a column will never store more than 50 characters, change it from `VARCHAR(255)` to `VARCHAR(50)`. Although `VARCHAR` stores only the bytes actually used, the declared maximum is what counts toward the row-size limit, so shrinking it directly reduces the row's counted size.
ALTER TABLE your_table_name MODIFY COLUMN your_column_name VARCHAR(50);
-- First verify no existing values exceed the new length, e.g.:
-- SELECT MAX(CHAR_LENGTH(your_column_name)) FROM your_table_name;
3
For numeric columns, use the smallest appropriate integer type. For instance, if a column only stores values between 0 and 255, use `TINYINT UNSIGNED` (1 byte) instead of `INT` (4 bytes).
ALTER TABLE your_table_name MODIFY COLUMN your_numeric_column TINYINT UNSIGNED;
-- Verify data range before changing.
4
For `ENUM` or `SET` columns, note that storage depends on the number of members, not the length of the value strings: an `ENUM` takes 1 byte for up to 255 members and 2 bytes beyond that. Keeping the member list at or below 255 entries keeps the column at a single byte.
ALTER TABLE your_table_name MODIFY COLUMN your_enum_column ENUM('option1', 'option2');
-- For long or frequently changing option lists, a separate lookup table is often more appropriate.
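Where a lookup table is the better fit, a minimal sketch looks like the following (table and column names are illustrative, not part of the original schema); the main row then carries only a 1-byte `TINYINT` key.

```sql
-- Hypothetical lookup table replacing a long ENUM member list.
CREATE TABLE option_lookup (
    option_id    TINYINT UNSIGNED PRIMARY KEY,
    option_label VARCHAR(50) NOT NULL UNIQUE
);

-- The main table stores only the 1-byte key.
ALTER TABLE your_table_name
    ADD COLUMN option_id TINYINT UNSIGNED,
    ADD FOREIGN KEY (option_id) REFERENCES option_lookup(option_id);
```

Unlike an `ENUM`, new options can be added with a plain `INSERT` into `option_lookup` instead of an `ALTER TABLE`.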
🔗

Related Errors

5 related errors