Error Code: 286

MongoDB Error 286: Change Stream History Lost

📦 MongoDB
📋

Description

Error 286, 'Change Stream History Lost', indicates that a change stream cursor can no longer find its position in the MongoDB oplog. This typically occurs when the requested starting point for the stream (e.g., a resume token or operation time) is no longer available, having been removed from the oplog due to retention policies or server state changes.
💬

Error Message

Change Stream History Lost
🔍

Known Causes

3 known causes
⚠️
Oplog Truncation
The change stream's requested resume point or start time has been removed from the oplog due to its configured size or time-based retention policy.
⚠️
Server Restart or Failover
The MongoDB server instance or replica set member serving the change stream was restarted or failed over; while consumers were disconnected, new writes continued to roll the oplog forward, pushing the resume point out of the retention window before the consumers could reconnect.
⚠️
Long Idle Change Stream
A change stream cursor remained open and idle for an extended period, causing its position to fall behind the current oplog retention window.
🛠️

Solutions

3 solutions available

1. Restart Change Stream Consumer with Latest Resume Token (easy)

If the error is transient, restarting the consumer with the last known resume token can resolve the issue.

1
Identify the last successfully processed resume token from your application's logs or state. This token represents the point in the oplog up to which your consumer has successfully processed changes.
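If the consumer does not already persist tokens, a minimal file-based sketch might look like the following (the file path and helper names are illustrative; any durable store, such as a database table or key-value store, works equally well):

const fs = require('fs');

const TOKEN_FILE = './resume-token.json'; // illustrative location

// Persist the token after each successfully processed event.
function saveResumeToken(token) {
  fs.writeFileSync(TOKEN_FILE, JSON.stringify(token));
}

// Load the token on startup; returns null if nothing has been stored yet.
function loadResumeToken() {
  try {
    return JSON.parse(fs.readFileSync(TOKEN_FILE, 'utf8'));
  } catch {
    return null;
  }
}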
2
Modify your change stream consumer application to reconnect to MongoDB and start the change stream using the `resumeAfter` option, providing the identified resume token.
const { MongoClient } = require('mongodb');

const uri = "mongodb://localhost:27017";
const client = new MongoClient(uri);

// The last successfully processed resume token, loaded from durable storage
// (see step 1). Replace the placeholder with your own retrieval logic.
let lastResumeToken = /* retrieve from persistent storage or logs */ null;

async function run() {
  await client.connect();
  const database = client.db('yourDatabaseName');
  const collection = database.collection('yourCollectionName');

  const pipeline = [
    { $match: { operationType: { $in: ["insert", "update", "replace", "delete"] } } }
  ];

  // Resume from the stored token so no events are skipped or reprocessed.
  const changeStream = collection.watch(pipeline, { resumeAfter: lastResumeToken });

  console.log('Change stream started. Waiting for changes...');
  changeStream.on('change', (change) => {
    console.log('Change received:', change);
    // Process the change, then persist the new resume token so the
    // stream can resume again after the next interruption.
    lastResumeToken = change._id;
  });

  // Keep the client open while the stream runs; call client.close()
  // only during graceful shutdown.
}

run().catch(console.error);
3
Deploy the updated application. The change stream should resume from the last successfully processed event. If the same error recurs immediately, the stored token itself has aged out of the oplog and Solution 2 applies.
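To detect this condition programmatically rather than from logs, a minimal sketch (assuming the Node.js driver, where server errors expose a numeric `code`) is to listen for error code 286 on the stream and escalate to a resync instead of retrying the same token:

changeStream.on('error', (err) => {
  if (err.code === 286) {
    // The stored resume token has aged out of the oplog; retrying with the
    // same token cannot succeed. Escalate to a full resync (see Solution 2).
    console.error('ChangeStreamHistoryLost: resume point no longer available.');
  } else {
    console.error('Change stream error:', err);
  }
});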

2. Recreate Change Stream Consumer from Scratch (medium)

If the resume token is lost or no longer covered by the oplog, you must perform a full resync of the data and then start a new change stream.

1
Stop all running instances of your change stream consumer application.
2
Perform a full data dump of the collection(s) being monitored by the change stream. This can be done using `mongodump`.
mongodump --db yourDatabaseName --collection yourCollectionName --out /path/to/backup
3
Restore the data to a temporary or new collection. This ensures you have a complete snapshot of the data at a specific point in time.
mongorestore --db yourDatabaseName --collection yourCollectionName_temp /path/to/backup/yourDatabaseName/yourCollectionName.bson
4
Modify your change stream consumer application to start a new change stream without any `resumeAfter` token. The stream will implicitly start from the present, reporting only changes that occur after it is opened.
const { MongoClient } = require('mongodb');

const uri = "mongodb://localhost:27017";
const client = new MongoClient(uri);

async function run() {
  await client.connect();
  const database = client.db('yourDatabaseName');
  const collection = database.collection('yourCollectionName');

  const pipeline = [
    { $match: { operationType: { $in: ["insert", "update", "replace", "delete"] } } }
  ];

  // No resumeAfter token here: the stream starts fresh from the present.
  const changeStream = collection.watch(pipeline);

  console.log('Change stream started. Waiting for changes...');
  changeStream.on('change', (change) => {
    console.log('Change received:', change);
    // Process the change and persist the new resume token (change._id).
    // You will need a mechanism to compare changes with your restored data
    // (see step 5).
  });

  // Keep the client open while the stream runs; close only on shutdown.
}

run().catch(console.error);
5
Update your application logic to reconcile the newly streamed changes with the restored data. This might involve comparing timestamps or using unique identifiers to avoid processing duplicate changes.
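One common way to make the reconciliation safe is to apply every event idempotently by its `documentKey`, so that events overlapping the restored snapshot are harmless. A minimal sketch, assuming changes are mirrored into a target collection (`mirror` is hypothetical) and the stream was opened with `{ fullDocument: 'updateLookup' }` so update events carry the full post-image:

async function applyChange(mirror, change) {
  const _id = change.documentKey._id;
  switch (change.operationType) {
    case 'insert':
    case 'update':
    case 'replace':
      // Upsert the post-image: applying the same event twice is a no-op.
      await mirror.replaceOne({ _id }, change.fullDocument, { upsert: true });
      break;
    case 'delete':
      // Deleting an already-deleted document is also a no-op.
      await mirror.deleteOne({ _id });
      break;
  }
}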
6
Once reconciled, you can switch your application to monitor the primary collection and discard the temporary one.

3. Increase Oplog Size and Retention (medium)

Insufficient oplog size or retention can lead to change stream history loss, especially with high write volumes.

1
Determine the current oplog size and retention period for your MongoDB deployment. This can be checked via the MongoDB shell.
rs.printReplicationInfo()  // prints the configured oplog size and the time window it currently covers
db.getSiblingDB('local').oplog.rs.stats().maxSize  // configured oplog size in bytes
2
If your oplog is too small for your write volume and the longest period your change stream consumers might be offline, you need to increase it.

Note that the oplog only exists on replica set members (including single-node replica sets used in development); a plain standalone `mongod` has no oplog and cannot serve change streams at all.

To set the size at startup, add or modify `oplogSizeMB` under the `replication` section of `mongod.conf` (not the `storage` section):

Example `mongod.conf` snippet:

storage:
  dbPath: /var/lib/mongodb
replication:
  oplogSizeMB: 2048 # desired size in MB (e.g., 2 GB)

Then restart the `mongod` service. Oplog size is configured per member, so apply the same change to every member of the replica set.

**Important:** Changing the configuration file requires restarting the `mongod` instances; use a rolling restart to minimize downtime. The online alternative below avoids a restart entirely.
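On a running replica set member (MongoDB 3.6+), the oplog can instead be resized online with the `replSetResizeOplog` admin command. Run it against each member; no restart is needed:

db.adminCommand({ replSetResizeOplog: 1, size: 2048 }) // size in MB; the minimum is 990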
3
Ensure the oplog retains entries long enough for your change stream consumers to recover after their worst-case downtime. MongoDB removes the oldest entries as the oplog reaches its configured size, so retention is implicitly a function of oplog size and write throughput; on MongoDB 4.4+ you can additionally enforce a minimum time-based retention with `storage.oplogMinRetentionHours`.
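On MongoDB 4.4+, the same `replSetResizeOplog` command can set this minimum time-based retention on a running member, so entries are kept at least that long regardless of size pressure (24 hours here is only an example; size it to your consumers' worst-case downtime):

db.adminCommand({ replSetResizeOplog: 1, minRetentionHours: 24 })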
4
Monitor your oplog usage and write throughput to proactively adjust the oplog size as your application's write patterns change.
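To track this from a script or scheduled job (in mongosh, on a replica set member), the `db.getReplicationInfo()` shell helper reports both the configured size and the time window the oplog currently covers:

const info = db.getReplicationInfo();
printjson({
  logSizeMB: info.logSizeMB,       // configured oplog size
  usedMB: info.usedMB,             // space currently used
  windowHours: info.timeDiffHours  // time span between oldest and newest entries
});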