mpv: Runners seem to be stuck #11958

Open
kasper93 opened this issue May 15, 2024 · 4 comments

@kasper93 (Contributor) commented May 15, 2024

Hi,

Initially I thought it was due to excessive timeouts, but those have been fixed now. Some of the test cases are stuck; all I see is a Pending status and a progression task that started but never ends.

oss-fuzz-linux-zone8-host-scn6-11: Progression task started.

Sure enough, after searching similar issues, I found #11490, which was related to disk space issues on the runners. And now it is my fault, because we were leaking files in /tmp... oops, sorry, I thought it would be one file per process, not that much data. It has now been fixed and rewritten to use memfd_create: mpv-player/mpv@6ede789
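
For context, the gist of the memfd_create approach is roughly the following; this is a minimal illustrative sketch, not the actual mpv patch. The scratch data lives in an anonymous in-memory file, so nothing is left behind in /tmp even if the process is killed before cleanup.

```
// Illustrative sketch only (not the actual mpv change): create an anonymous,
// memory-backed file instead of a regular file in /tmp, so nothing can leak
// on disk if the process dies before cleanup.
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = memfd_create("scratch", MFD_CLOEXEC);
    if (fd < 0) {
        perror("memfd_create");
        return 1;
    }

    const char data[] = "data that previously went to a temp file in /tmp";
    if (write(fd, data, strlen(data)) < 0) {
        perror("write");
        close(fd);
        return 1;
    }

    // The descriptor behaves like a regular file (seek, mmap, pass via
    // /proc/self/fd/<fd>), but the backing memory is released automatically
    // when the last reference is closed.
    printf("scratch file available as /proc/self/fd/%d\n", fd);
    close(fd);
    return 0;
}
```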

I'm creating this issue because there is not much visibility into the runners. Currently I don't see many of the fuzz binaries running, stats and logs are missing, and the coverage build is failing. So I presume /tmp is persistent and that is why it keeps failing?

Could you take a look and see if a runner rebuild is needed, similar to #11490?

EDIT:

One more general question: what are the limits on concurrent jobs? The FAQ says

Fuzzing machines only have a single core and fuzz targets should not use more than 2.5GB of RAM.

Say we have N fuzz targets multiplied by the number of sanitizers and fuzzing engines; is each target allowed one fuzzing runner, or are the jobs queued, and what is the limit?

EDIT2: I think I found the root cause: #11965 (I will close this issue if it helps after the merge).

EDIT3: Nothing changed, there is still no progression.

EDIT4: Example of a completely stuck testcase: https://oss-fuzz.com/testcase-detail/4875501058457600

Thanks,
Kacper

oliverchang pushed a commit that referenced this issue May 17, 2024
Should fix arbitrary DNS resolutions.

I think this is the root cause of #11958, so let's fix it, although I'm
only guessing. Everything is stuck; even the sanitizer that cannot
trigger DNS doesn't run, so there might be more to it.

It wasn't clear that this error causes so much trouble. There is
https://oss-fuzz.com/testcase-detail/6494370936193024, where the crash
statistics say
```
Time to crash: 5916.00s
Total count: 1
```
but if we dig into the statistics table on the actual testcase-detail
page, I can see a lot of crashes, which of course makes sense.

What is a little bit puzzling is that in the one log that is there, I
can see it went all the way to
```
INFO: fuzzed for 5916 seconds, wrapping up soon
```
and apparently reported the error after doing the whole 6000 seconds.
There are no details and no more logs saved. My current understanding
is that we got stuck in this case.

Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
@kasper93 (Contributor, Author) commented May 21, 2024

Sorry to bother you again. Is there anything I can do to help resolve this situation? Currently there seem to be no jobs running at all. So far the only clue I have is that the disk quota was exceeded and this somehow makes the runners stuck. Is /tmp storage persistent? In libFuzzer fork mode (which seems to be used) it would indeed have leaked some files there previously, but I have no way to validate that this is the problem. I don't think the fuzzers themselves are big enough to cause the problem.

Everything is working fine locally and with the cifuzz workflow; only ClusterFuzz (OSS-Fuzz) seems to be stuck completely.

@oliverchang (Collaborator) commented

Sorry for the delay. It doesn't appear to be a disk space issue, and I'm not sure why they're stuck. I'll kick off a restart of all the machines to see if that resolves it.

@kasper93 (Contributor, Author) commented May 24, 2024

Thank you. Unfortunately nothing has moved. On the fuzzer statistics page I get "Got error with status: 404"; on the testcase(s) I see "[2024-05-24 13:08:05 UTC] oss-fuzz-linux-zone8-host-lt79-0: Progression task started." and a Pending status.

In fairness, it never fully worked. Since the initial integration we got some crash reports and some of them were detected as fixed. So far so good, but we never got a corpus saved, and the coverage build has been failing since the beginning with

Step #5: Failed to unpack the corpus for fuzzer_load_config_file. This usually means that corpus backup for a particular fuzz target does not exist. If a fuzz target was added in the last 24 hours, please wait one more day. Otherwise, something is wrong with the fuzz target or the infrastructure, and corpus pruning task does not finish successfully.

I thought it needed time to stabilize, but now it doesn't give any sign of life: no logs, no reports.

I've tested the full infra/helper.py pipeline locally and I can generate a coverage report without issue, so the build and fuzzers seem to be OK. I'd appreciate any help on this matter. I have plans to improve things and add an initial corpus, but first we need to stabilize things. There is no rush, but if you need anything changed or updated on my side, let me know. For reference, what I ran locally is roughly the standard helper.py flow, as sketched below.
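
The commands below are the documented OSS-Fuzz helper.py steps; the exact flags are approximate and the corpus path is just an example.

```
# Roughly the local pipeline (standard OSS-Fuzz helper.py commands;
# flags approximate, corpus path is only an example)
python3 infra/helper.py build_image mpv
python3 infra/helper.py build_fuzzers --sanitizer coverage mpv
python3 infra/helper.py coverage mpv --fuzz-target fuzzer_load_config_file \
    --corpus-dir ./corpus/fuzzer_load_config_file
```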

@kasper93 (Contributor, Author) commented

It is probably caused by this typo: 9ad0f4d. It would be nice if we had some feedback from ClusterFuzz about possible errors during operation; otherwise things just don't work and there is really no way to see why.
