
Result incomplete:Insert failed due to LRU cache being full.;The database will be put in read-only mode #20380

Open
xg-github-00 opened this issue Dec 26, 2023 · 4 comments

Comments

@xg-github-00

VERSION
ArangoDB 3.10.1
The server had been running well for a long time until it recently hit an error.
[log attachment]

We also found the related issue #16127.
We set enforce-block-cache-size-limit = false in the startup config, but it has no effect.
We can't insert any record into any table even after restarting the ArangoDB server.
Please help check this problem, thanks!

@jsteemann
Contributor

The error happens when there is no more capacity in the RocksDB block cache and no other data in the block cache shard can be freed.
Can you try setting the startup option --rocksdb.enforce-block-cache-size-limit to false? That should help avoid this particular error.
Btw, version 3.10.1 is more than a year old, and 3.10.2 changed the default of that startup option to false. So even better than changing the option and staying on 3.10.1 would be to upgrade to the newest version of 3.10, or even to 3.11.
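For reference, the option can be set on the command line or in the server's config file. A minimal sketch, assuming the standard arangod.conf layout where RocksDB options live under a [rocksdb] section (the file path is illustrative and may differ per installation):

    # command-line form
    arangod --rocksdb.enforce-block-cache-size-limit false

    # config-file form, e.g. in /etc/arangodb3/arangod.conf
    [rocksdb]
    enforce-block-cache-size-limit = false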

@xg-github-00
Author

xg-github-00 commented Dec 26, 2023

We modified arangod.conf and added the line as shown below:
[screenshot of the config change]
After this we restarted the server, but the DB-Server still throws the error:
[log attachment]
It seems this particular error is not resolved.

@jsteemann
Contributor

Are you sure that the config file is being used, i.e. that the right config file was modified?
The option is supposed to work just fine; if it doesn't, it has likely been put into a config file that is not being used.
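One quick way to rule that out is to check how the running server was actually started. A rough sketch, assuming a Linux host and the usual /etc/arangodb3/arangod.conf location (the process name and path are assumptions and may differ on your system):

    # show the full command line of the running arangod process,
    # including any explicit --configuration argument
    ps -o args= -C arangod

    # or start the server with the config file passed explicitly
    arangod --configuration /etc/arangodb3/arangod.conf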

@xg-github-00
Author

OK, this time we set enforce-block-cache-size-limit to false, also dropped the table that caused the error, and restarted again; the error disappeared. Thanks!
