Replies: 1 comment
That is indeed intriguing! Thanks!
Hello everyone,

I've been working on a script for forensic analysis of messages, and I've noticed some intriguing discrepancies in the model's behavior depending on whether it runs on CPU or GPU. Specifically, the model tends to generate more accurate and reliable responses when executed on a GPU rather than a CPU.

Has anyone else experienced similar issues? I'm curious about the technical reasons behind these differences: whether they stem from the LLM architecture, data handling, or from numerical differences in how certain operations are computed on GPUs versus CPUs.
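One plausible contributor (among others, such as different default dtypes or quantized weights on one backend): floating-point addition is not associative, so CPU and GPU kernels that reduce in different orders can produce slightly different results, and in a deep network those tiny differences compound layer by layer until they occasionally flip a greedy token choice. This is only a sketch of that numerical effect, not a diagnosis of your specific setup; the array sizes and reduction shapes here are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

# Sequential left-to-right accumulation in float32: one possible
# reduction order, loosely analogous to a simple CPU loop.
seq = np.float32(0.0)
for v in x:
    seq = np.float32(seq + v)

# Blocked/tree-style reduction: partial sums first, then a final sum,
# loosely analogous to how parallel GPU kernels reduce.
tree = x.reshape(100, 100).sum(axis=1).sum()

# Both are valid float32 sums of the same data, yet the rounding
# introduced at each step depends on the order of the additions,
# so the two results typically disagree in the low-order bits.
print(float(seq), float(tree))
```

If you see quality (not just bit-level) differences, it's worth also checking whether the two code paths really use the same precision, e.g. whether the GPU path loads weights in fp16/bf16 or a quantized format while the CPU path uses fp32, or vice versa.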