
Support image-only #25

Open · fire opened this issue Jun 27, 2023 · 8 comments


@fire commented Jun 27, 2023

Use image-only mode to scan an image and find its classes.

@monatis (Owner) commented Jun 28, 2023

The zsl (zero-shot labeling) example does that. You pass candidate class names with the --text argument, and zsl scores the image against those classes.
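
A minimal invocation might look like the following sketch. Only the --text flag is confirmed above; the model/image flag names and the repeated --text usage are assumptions, so check the example's --help for the exact interface.

```sh
# Hypothetical zsl invocation: --text is mentioned above; the -m and
# --image flag names and repeating --text per label are assumptions.
./zsl -m CLIP-ViT-B-32.bin --image photo.jpg \
    --text "a photo of a cat" \
    --text "a photo of a dog" \
    --text "a photo of a car"
```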

@fire (Author) commented Jun 28, 2023

I do not have a general list of candidates to provide.

@monatis (Owner) commented Jun 28, 2023

If you don't have candidates, the CLIP model won't work for image classification. In that case, I guess your best bet would be to hardcode class names from a common dataset such as OpenImages by modifying the zsl example.
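
A rough sketch of what that modification could look like; the class list is an illustrative subset of OpenImages, and score_label() is a hypothetical stand-in for the per-label similarity computation the zsl example already performs:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical stand-in for the CLIP image-text similarity the zsl
// example computes per label: embed the label with the text encoder
// and take its cosine similarity with the image embedding.
float score_label(const std::vector<float> & img_embedding,
                  const std::string & label);

// Illustrative subset; the real OpenImages list has ~600 boxable
// classes, all of which would be hardcoded here.
static const std::vector<std::string> k_openimages_classes = {
    "Cat", "Dog", "Car", "Tree", "Person", "Building",
};

// Replaces the --text arguments: score the image against every
// hardcoded candidate class and print the results.
void classify(const std::vector<float> & img_embedding) {
    for (const std::string & label : k_openimages_classes) {
        std::printf("%s: %.4f\n", label.c_str(),
                    score_label(img_embedding, label));
    }
}
```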

P.S.: I'm planning to experiment with an on-device transfer learning method that trains a single head layer on top of the CLIP backbone for image classification, but I don't know when yet.

@fire (Author) commented Jun 29, 2023

Can you go into more detail?

> common dataset such as OpenImages

Like, go to OpenImages and generate a text prompt with all of its classes fed into the command-line prompt?

@monatis (Owner) commented Jun 29, 2023

Yes, or only the classes that you are actually expecting to appear in the image. This is how zero-shot labeling is supposed to work.

If you could describe your exact use case, I'd try to make a more detailed comment.

@fire (Author) commented Jun 29, 2023

Here's an example: I want to create a Discord bot that watches for a reaction on an image and posts a message explaining what the image looks like, for people who have trouble seeing but know what the words mean.

@Green-Sky (Collaborator) commented

Sounds like you want to feed the image embedding to an LLM (like LLaVA).

@monatis (Owner) commented Jun 29, 2023

Oh, got it. This is called image captioning. There are numerous models for it, but state-of-the-art results come from models like LLaVA, which is basically CLIP + LLaMA bridged with a linear layer. I have an idea of creating another project that combines clip.cpp with llama.cpp for efficient LLaVA inference, but it might be delayed for a week or so because there are some other features I'd like to implement in this repo first.
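
For context, here is a minimal sketch of the "bridged with a linear layer" idea, not LLaVA's actual code: a learned projection maps each CLIP image feature into the LLM's token-embedding space, and the projected vectors are fed to the LLM alongside the prompt's token embeddings.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the LLaVA-style bridge: project a CLIP image feature
// (dim d_clip) into the LLM's token-embedding space (dim d_llm)
// with a learned weight matrix W (d_llm x d_clip, row-major) and
// bias b. The result is consumed by the LLM like a token embedding.
std::vector<float> project_image_feature(
        const std::vector<float> & clip_feat, // size d_clip
        const std::vector<float> & W,         // size d_llm * d_clip
        const std::vector<float> & b) {       // size d_llm
    const std::size_t d_llm  = b.size();
    const std::size_t d_clip = clip_feat.size();
    std::vector<float> out(d_llm);
    for (std::size_t i = 0; i < d_llm; ++i) {
        float acc = b[i];
        for (std::size_t j = 0; j < d_clip; ++j) {
            acc += W[i * d_clip + j] * clip_feat[j];
        }
        out[i] = acc;
    }
    return out;
}
```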
