Can a Coral TPU be used to help accelerate processing?
Something like this:
https://coral.ai/products/m2-accelerator-dual-edgetpu/

Replies: 1 comment 1 reply

I think the biggest barrier here is that Coral TPUs only support TensorFlow Lite models, whereas our inference is currently built on llama.cpp, which has its own model format.
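One way to see the mismatch the reply describes is at the file-container level: llama.cpp loads GGUF files, which begin with the ASCII magic bytes `GGUF`, while TensorFlow Lite models (the only format the Coral Edge TPU toolchain accepts) are FlatBuffers carrying the identifier `TFL3` at byte offset 4. A minimal sketch (the function name `sniff_model_format` is hypothetical, not from either project):

```python
def sniff_model_format(header: bytes) -> str:
    """Guess a model file's container format from its first 8 bytes."""
    if header[:4] == b"GGUF":
        # llama.cpp's GGUF container starts with the literal bytes "GGUF"
        return "gguf"
    if header[4:8] == b"TFL3":
        # TensorFlow Lite FlatBuffers carry the file identifier "TFL3"
        # at offset 4, per FlatBuffers convention
        return "tflite"
    return "unknown"

# Example with synthetic headers (not real model files):
print(sniff_model_format(b"GGUF" + b"\x00" * 4))    # gguf
print(sniff_model_format(b"\x1c\x00\x00\x00TFL3"))  # tflite
```

Neither runtime can parse the other's container, so bridging them would mean exporting or converting weights, not just pointing the Edge TPU at a GGUF file.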