GenAI LLMInference: Failed to load GPU model with the error "Failed to build program executable - Out of host memoryPass" #5406
Comments
Hi @KosuriSireesha, could you confirm whether you are running the code on a physical device or an emulator? If it is a physical device, please provide the complete configuration and device name so we can reproduce and better understand the issue. Thank you!!
Hi @kuaashish,
Can you please look into this issue too? Thank you!!
What phone is this running on? We currently only support higher-end Android hardware.
Hi @schmidt-sebastian,
Hi @schmidt-sebastian
Hi @KosuriSireesha, could you please provide the name of your device and its RAM details? We believe there have been slight changes to GPU support in newer versions, which might be preventing your device from running the model. Thank you!!
This issue has been marked stale because it has had no recent activity for 7 days. It will be closed if no further activity occurs. Thank you.
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
OS Platform and Distribution
Android 14
Mobile device if the issue happens on mobile device
Android Mobile device
Browser and version if the issue happens on browser
No response
Programming Language and version
Kotlin
MediaPipe version
0.10.14
Bazel version
No response
Solution
LLMInference
Android Studio, NDK, SDK versions (if issue is related to building in Android environment)
No response
Xcode & Tulsi version (if issue is related to building for iOS)
No response
Describe the actual behavior
Initialization of LlmInference fails when loading GPU models with the latest Maven package (0.10.14), with the error "Failed to build program executable - Out of host memoryPass".
Describe the expected behaviour
The LLMInference app should run using the GPU model, and information retrieval should work successfully.
Standalone code/steps you may have used to try to get what you need
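No standalone snippet was attached to the report. Below is a minimal sketch of the initialization path that fails, assuming the com.google.mediapipe:tasks-genai:0.10.14 Maven artifact and a GPU model already pushed to the device; the model filename and path are placeholders, not taken from the report:

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal repro sketch. The model path below is a placeholder;
// any GPU model variant exercises the same initialization path.
fun createLlmInference(context: Context): LlmInference {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model-gpu.bin") // hypothetical location
        .setMaxTokens(512)
        .build()

    // On 0.10.14, GPU model loading reportedly fails here with
    // "Failed to build program executable - Out of host memoryPass".
    return LlmInference.createFromOptions(context, options)
}
```

After creation, a simple llmInference.generateResponse(prompt) call would confirm the GPU path works; the error reported here surfaces before that point, during createFromOptions.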
Other info / Complete Logs