Fix dynamic embedding bugs that may lead to segmentation faults #1947
Conversation
Hi, I don't have expertise on dynamic embeddings, so I'll probably just cc @colin2328. It would also be worth opening an issue, since it's easier to track there.
Sorry for the late response. I have created an issue here: #2006. I would appreciate any further suggestions regarding the pull request.
fetch_notifications_.back().second;
c10::intrusive_ptr<Notification> notification; | ||
{ | ||
std::unique_lock<std::mutex> lock_fetch(fetch_notifications_mutex_); |
Why is this lock needed? Line 20 grabs the mu_ lock, so this function should be thread-safe?
Although Fetch grabs the mu_ lock, the notification is accessed in SyncFetch without mu_ held, so access to the notification is not thread-safe. Here, a finer-grained lock is used to protect the notification independently, instead of taking the mu_ lock for the whole SyncFetch call; the reason for this is explained below.
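For concreteness, here is a minimal sketch of the pattern (types and member names are simplified; the real code uses c10::intrusive_ptr<Notification> and lives in ps.cpp):

```cpp
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <memory>
#include <mutex>
#include <utility>

// Simplified stand-in for the PR's Notification (the real type is held in a
// c10::intrusive_ptr and is signalled when the asynchronous fetch completes).
struct Notification {
  void Done() {
    std::lock_guard<std::mutex> lock(mu);
    done = true;
    cv.notify_all();
  }
  void Wait() {
    std::unique_lock<std::mutex> lock(mu);
    cv.wait(lock, [this] { return done; });
  }
  std::mutex mu;
  std::condition_variable cv;
  bool done = false;
};

class PS {
 public:
  // Fetch takes mu_ for the shared fetch/evict state, but the notification
  // queue gets its own finer-grained mutex so that SyncFetch, which does not
  // take mu_, can still access it safely.
  void Fetch(int64_t time) {
    std::lock_guard<std::mutex> lock(mu_);  // protects shards_, cache_ids_to_fetch_or_evict_, ...
    auto notification = std::make_shared<Notification>();
    {
      std::lock_guard<std::mutex> fetch_lock(fetch_notifications_mutex_);
      fetch_notifications_.emplace_back(time, notification);
    }
    // ... launch the asynchronous fetch; its callback eventually calls
    // notification->Done().
  }

 private:
  std::mutex mu_;
  std::mutex fetch_notifications_mutex_;
  std::deque<std::pair<int64_t, std::shared_ptr<Notification>>>
      fetch_notifications_;
};
```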
Do we even need the mu_ lock then? Is that something we can get rid of? Could be a follow-up PR.
Based on my understanding, the primary purpose of the mu_ lock is to ensure that calls to Fetch and Evict are mutually exclusive, since variables such as cache_ids_to_fetch_or_evict_ and shards_ are shared between them. Although the current implementation on the Python side guarantees that they won't be called simultaneously, the way they are called from Python may change in the future, so the mu_ lock might still be necessary.
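Sketching what that contract looks like (continuing the simplified PS above, with an Evict added purely for illustration; parameter lists omitted):

```cpp
// Both entry points take mu_, so even if the Python side later starts calling
// them from different threads, the shared members stay consistent.
void PS::Evict(/* cache ids to evict, destination tensors, ... */) {
  std::lock_guard<std::mutex> lock(mu_);
  // read/write cache_ids_to_fetch_or_evict_, shards_, ... here,
  // knowing no Fetch can be touching them at the same time.
}
```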
Additionally, we've also encountered performance issues. The main problem is that both fetch and evict access data by reading or writing individual tensors one by one, which results in a large number of small GPU operators (on the same order of magnitude as the number of keys); see, for example, https://github.com/pytorch/torchrec/blob/main/contrib/dynamic_embedding/src/tde/ps.cpp#L58
By merging these into a larger batched operator, we can achieve a significant performance improvement. We'll clean up the code and submit it as a separate PR later.
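A hypothetical before/after of the kind of batching meant here (PullPerKey and PullBatched are illustrative names, not the actual ps.cpp API):

```cpp
#include <torch/torch.h>
#include <vector>

// Per-key access: one tiny GPU copy per cache id, so kernel-launch overhead
// grows with the number of keys (roughly the pattern in ps.cpp today).
void PullPerKey(torch::Tensor& cache, const torch::Tensor& table,
                const std::vector<int64_t>& ids) {
  for (size_t i = 0; i < ids.size(); ++i) {
    cache[static_cast<int64_t>(i)].copy_(table[ids[i]]);
  }
}

// Batched access: gather all requested rows with a single index_select and
// copy them in one shot, so the kernel count no longer scales with key count.
void PullBatched(torch::Tensor& cache, const torch::Tensor& table,
                 const std::vector<int64_t>& ids) {
  auto idx = torch::tensor(ids, torch::dtype(torch::kLong)).to(table.device());
  cache.narrow(0, 0, static_cast<int64_t>(ids.size()))
      .copy_(table.index_select(0, idx));
}
```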
auto& [t, notification] = fetch_notifications_.front();
if (t != time && time >= 0) { | ||
std::unique_lock<std::mutex> lock( | ||
fetch_notifications_mutex_, std::defer_lock); |
I assume SyncFetch and Fetch are not meant to be called concurrently in general? Let's use the mu_ lock from Fetch here as well, which means no need to create a new lock. Unless we want more granularity on threading here, where we want to unlock between waiting for each notification. Curious to hear your thoughts.
Normally, SyncFetch and Fetch would be called concurrently, because Fetch returns an asynchronous handle and then continues to the next fetch; that handle is waited on by other threads calling SyncFetch. Using the mu_ lock here would make SyncFetch and Fetch completely mutually exclusive, which means Fetch could not run while SyncFetch is waiting, leading to a decline in performance. Therefore, a finer-grained lock is created here to protect only access to the notification.
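Continuing the simplified PS sketch from the earlier thread (assuming it also declares SyncFetch), the granularity being described looks roughly like this; the `t != time` check mirrors the diff above, while the surrounding control flow is simplified (the real code uses std::defer_lock and explicit unlocking around the wait):

```cpp
// SyncFetch only holds fetch_notifications_mutex_ while touching the queue;
// the potentially long Wait() runs with no lock held, so a concurrent Fetch()
// is never blocked behind a SyncFetch() that is waiting.
void PS::SyncFetch(int64_t time) {
  while (true) {
    std::shared_ptr<Notification> notification;
    {
      std::lock_guard<std::mutex> lock(fetch_notifications_mutex_);
      if (fetch_notifications_.empty()) {
        return;
      }
      auto& [t, n] = fetch_notifications_.front();
      if (t != time && time >= 0) {
        return;  // only wait for fetches issued at `time` (or all, if time < 0)
      }
      notification = n;
      fetch_notifications_.pop_front();
    }
    notification->Wait();  // lock released while waiting
  }
}
```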
This makes sense, I can see why we want the granularity
Oops, approved sorry
@PaulZhang12 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Please address the CI failures and then you are good.
1) Ensure thread safety for fetch_notifications_. 2) Prevent premature recycling of tensor data during push operations.
Thanks! Sorry for the failures; I've fixed them and pushed the changes to the pull request. However, it seems that the workflow requires approval again.
CI looks good; the failure was a previously flaky test. Feel free to merge!
This PR was reopened (likely due to being reverted), so your approval was removed. Please request another review.
Hi TorchRec Team,
We've recently been trying to use the dynamic embedding feature in torchrec contrib, but we have encountered a few challenges: the process may result in a segmentation fault. After debugging, we've identified two potential problems.
1) fetch_notifications_ is not thread-safe.
2) Tensor data might be recycled during push operations.
For the first issue, the fetch_notifications_ variable is accessed in both the pull and sync fetch functions. As these functions are called from different threads, there is a potential thread-safety issue.
For the second issue, the tensor generated by concat in the push function is a temporary variable. The subsequent call to io.push is asynchronous and does not copy the data, so the data may already have been recycled by the time the push actually executes, leading to access to invalid memory.
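As a hypothetical illustration of the lifetime issue (io_push here is a stand-in for the asynchronous push in ps.cpp, not its real signature), one way to keep the buffer alive is to have the concatenated tensor owned by the completion callback:

```cpp
#include <torch/torch.h>

#include <cstdint>
#include <functional>
#include <vector>

// The async layer records the data pointer but copies nothing, so the caller
// must keep the tensor's storage alive until the completion callback runs.
void PushAsync(const std::vector<torch::Tensor>& chunks,
               const std::function<void(const void* data, int64_t num_bytes,
                                        std::function<void()> on_complete)>&
                   io_push) {
  torch::Tensor data = torch::cat(chunks).contiguous();
  io_push(
      data.data_ptr(),
      data.numel() * data.element_size(),
      // Capture `data` by value: the lambda (and therefore the tensor's
      // storage) is only released after the asynchronous push calls it.
      [data]() mutable { data.reset(); });
}
```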
In light of these findings, we have undertaken some corrective measures to address these issues. We appreciate your attention to this Pull Request and eagerly await your valuable feedback.
Thank you for your time and consideration.