MongoDB Error 153: Chunk Too Big
Description
MongoDB Error 153, 'Chunk Too Big', indicates that a data chunk in a sharded cluster has exceeded its configured maximum size. This typically occurs during chunk migrations or balancing operations when the balancer attempts to move a chunk that is larger than the `chunkSize` limit.
Error Message
Chunk Too Big
Known Causes
Ineffective Shard Key Design
A poorly chosen shard key can lead to "jumbo chunks" where data for a specific key range accumulates excessively on one shard, making it too large to migrate.
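As an illustration (with made-up chunk boundaries), the sketch below shows how a monotonically increasing shard key, such as a timestamp, funnels every insert into the chunk at the top of the key range:

```javascript
// Sketch (hypothetical data): why a monotonically increasing shard key
// funnels writes into one chunk. Chunk boundaries are illustrative.
const chunkBounds = [0, 250, 500, 750, Infinity]; // 4 chunks over the key space

function chunkFor(key) {
  // Find the chunk whose [lower, upper) range contains the key.
  for (let i = 0; i < chunkBounds.length - 1; i++) {
    if (key >= chunkBounds[i] && key < chunkBounds[i + 1]) return i;
  }
  return -1;
}

// Simulate 1,000 inserts with an ever-increasing key (e.g. a timestamp).
const counts = [0, 0, 0, 0];
for (let key = 1000; key < 2000; key++) {
  counts[chunkFor(key)]++;
}
// Every insert lands in the last chunk, [750, Infinity), which keeps growing.
console.log(counts); // → [0, 0, 0, 1000]
```

A hashed or higher-cardinality key would instead spread these inserts across all four ranges.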
High Write Volume to a Single Range
Intense write operations targeting a narrow range of a shard key can rapidly grow a chunk beyond its maximum size, preventing successful balancing.
Large Documents in a Chunk
If many very large documents are inserted into a chunk, their combined size can exceed the chunk size limit, especially if the chunk is already near its capacity.
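A rough back-of-the-envelope sketch (using JSON length as a stand-in for BSON size, which `Object.bsonsize()` reports in `mongosh`) shows how quickly large documents overshoot a chunk limit:

```javascript
// Sketch: estimating whether a batch of documents would exceed a chunk-size
// limit. JSON length is only an approximation of the real BSON size.
const CHUNK_LIMIT_BYTES = 64 * 1024 * 1024; // 64 MB, a common default

function approxSize(doc) {
  return Buffer.byteLength(JSON.stringify(doc), 'utf8');
}

// 5,000 documents of ~16 KB each already overshoot a 64 MB chunk.
const doc = { _id: 0, payload: 'x'.repeat(16 * 1024) };
const total = approxSize(doc) * 5000;
console.log(total > CHUNK_LIMIT_BYTES); // → true
```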
Solutions
1. Manually Split Oversized Chunks (difficulty: medium)
Manually split large chunks to distribute data more evenly.
1
Identify the sharded collection experiencing the 'Chunk Too Big' error. You can usually infer this from the logs or by observing unbalanced shard distribution.
2
Connect to the `mongos` instance for your sharded cluster.
mongosh
3
Use the `sh.status()` command to examine the chunk distribution for the problematic collection. Look for chunks with a very large number of documents or a large size.
sh.status(true)
4
If a specific chunk is identified as too large, you can split it manually. `sh.splitFind()` locates the chunk containing a document that matches the query and splits it into two roughly equal halves at its median point. Replace `your_database`, `your_collection`, and `split_point` with your actual values; alternatively, `sh.splitAt()` splits the chunk at the exact shard key value you supply.
sh.splitFind("your_database.your_collection", { shard_key_field: split_point });
5
After splitting, monitor `sh.status()` again to ensure the chunks are being redistributed and the error is resolved.
sh.status(true)
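The split point in step 4 is typically chosen near the median of the shard key values inside the jumbo chunk. A minimal sketch, assuming you have sampled `shard_key_field` values from the chunk's range:

```javascript
// Sketch (hypothetical field and values): choosing a split point for a
// jumbo chunk. In practice you would sample shard-key values from the
// chunk's range and split at the median, e.g. via sh.splitAt().
function medianSplitPoint(keys) {
  const sorted = [...keys].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

const sampledKeys = [3, 41, 8, 12, 99, 27, 55]; // sampled shard_key_field values
console.log(medianSplitPoint(sampledKeys)); // → 27
```

Splitting near the median keeps the two resulting chunks roughly the same size, which helps the balancer migrate them.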
2. Optimize Shard Key for Even Distribution (difficulty: advanced)
Re-evaluate and potentially change the shard key to ensure better data distribution.
1
Understand your data access patterns and the cardinality of your potential shard keys. A good shard key distributes data and read/write operations evenly across shards.
2
If the current shard key is not performing well, consider changing it. On MongoDB 5.0 and later, the `reshardCollection` command can change a collection's shard key in place; on older versions this is a complex manual operation that involves creating a new sharded collection with the desired shard key, migrating the data, and then dropping the old collection.
3
To change the shard key, you'll typically need to:
1. Create a new sharded collection with the preferred shard key.
2. Use `db.collection.insertMany()` or a similar method to copy data from the old to the new collection.
3. Update your application to use the new collection.
4. Once confident, drop the old collection.
4
Alternatively, for very large datasets, you might consider using tools like `mongodump` and `mongorestore` to transfer data to a new cluster with the correct sharding configuration.
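The cardinality check from step 1 can be sketched as follows, with hypothetical field names `region` and `userId`:

```javascript
// Sketch: comparing the cardinality of candidate shard-key fields over a
// sample of documents (field names hypothetical). Higher cardinality
// generally allows finer chunk splits and more even distribution.
const sample = [
  { region: 'us', userId: 101 },
  { region: 'us', userId: 102 },
  { region: 'eu', userId: 103 },
  { region: 'us', userId: 104 },
];

function cardinality(docs, field) {
  return new Set(docs.map((d) => d[field])).size;
}

console.log(cardinality(sample, 'region')); // → 2
console.log(cardinality(sample, 'userId')); // → 4
```

A low-cardinality field like `region` caps the number of possible chunks, so data for each value piles up on one shard; a high-cardinality field like `userId` (or a compound key including it) splits more evenly.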
3. Increase the Chunk Size Threshold (difficulty: medium)
Adjust the balancer configuration to tolerate larger chunks before triggering a split.
1
Connect to the `mongos` instance.
mongosh
2
Inspect the `config.settings` collection to view the current cluster settings, including any existing chunk size override.
db.getSiblingDB('config').settings.find()
3
Modify the chunk size, which is stored in the `config.settings` collection in a document with `_id: "chunksize"`; the `value` field is in MB. Increasing it raises the size at which chunks are split and the size above which the balancer refuses to migrate them, so larger chunks are tolerated. Be cautious with this setting: excessively large chunks can lead to performance issues and unbalanced shards. The default is 64 MB (raised to 128 MB in MongoDB 6.0).
db.getSiblingDB('config').settings.updateOne({ _id: "chunksize" }, { $set: { value: 128 } }, { upsert: true });
4
The balancer will automatically pick up the new setting. Monitor `sh.status()` to observe the impact. This is often a temporary workaround or a tuning step, not a fundamental fix for a poorly chosen shard key.
sh.status(true)