
Cancel copy operation if one chunk failed #38

Open
ptoews opened this issue Jun 29, 2020 · 2 comments

Comments

@ptoews
Contributor

ptoews commented Jun 29, 2020

Hi,

To ensure continuity in the destination database, i.e. no missing data points in between, it would be nice if syncflux could be told to stop after a chunk could not be copied.
Currently the number of retries and the wait delay for each chunk copy can be configured, but it seems that even if all of this fails, the next chunk is still attempted.

Would something like this be possible?
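The requested behavior can be sketched in Go (syncflux's language). This is a minimal illustration of "retry each chunk, abort the whole run on exhaustion" rather than syncflux's actual code; `copyChunk`, `copyAll`, and the failure on chunk 2 are all hypothetical stand-ins.

```go
package main

import (
	"errors"
	"fmt"
)

// copyChunk stands in for syncflux's per-chunk copy step; the name and
// signature are hypothetical, not the real syncflux API. Here chunk 2
// always fails, simulating a pulled cable.
func copyChunk(id int) error {
	if id == 2 {
		return errors.New("connection lost")
	}
	return nil
}

// copyAll copies chunks in order, retrying each up to maxRetries extra
// times, and aborts the whole run as soon as one chunk exhausts its
// retries, so no later chunk is written past a gap.
func copyAll(numChunks, maxRetries int) (copied int, err error) {
	for id := 0; id < numChunks; id++ {
		var lastErr error
		for attempt := 0; attempt <= maxRetries; attempt++ {
			if lastErr = copyChunk(id); lastErr == nil {
				break
			}
		}
		if lastErr != nil {
			// Stop here instead of moving on: copying later chunks
			// would leave a hidden gap at chunk `id`.
			return copied, fmt.Errorf("chunk %d failed after %d retries: %w",
				id, maxRetries, lastErr)
		}
		copied++
	}
	return copied, nil
}

func main() {
	copied, err := copyAll(5, 2)
	fmt.Println(copied, err)
}
```

With this shape, the destination ends at the last successfully copied chunk (chunks 0 and 1 above), so downstream consumers never see data beyond an unnoticed gap.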

@toni-moreno
Owner

Hello @ptoews, sorry for the long-delayed answer.

I cannot figure out why this option would be useful. When I have used syncflux to replicate data manually and something bad happened in the meantime, I always had to restart the process; with the existing retry options, the syncflux process does that for you.

Could you please give me more context about this feature request? How are you working with syncflux and InfluxDB? What did you mean by 'to ensure continuity in the destination database'?

@ptoews
Contributor Author

ptoews commented Oct 6, 2020

Hi @toni-moreno, thanks for your answer.

In our use case, a device is connected via an Ethernet cable to a server, which then syncs the data. It is possible that this cable is suddenly pulled out and the device does not reconnect to the server for some time. In the meantime we want to process the copied data, which we can only do if there are no gaps. It is okay, or at least better, if many recent data points are missing than if we process the data without knowing that there is a gap of missing data in some time interval.

I noticed that failed chunks are retried at the end, which is a great feature, but in our case, if the cable is pulled out, that sadly can't help much.

I hope this makes it a bit clearer. I also wish I could do this myself in a PR, but it seems a bit more complicated. I'm also trying to implement #39, although I have zero Go experience yet and currently not that much time, so any help would be great!
