Oh, what I meant is whether the language model was pre-trained in the share training recipe; it's clear that the vision encoder was trained during the pretraining stage of the share recipe.
---Original---
Date: Wed, Mar 20, 2024, 19:10
Subject: Re: [DLCV-BUAA/TinyLLaVABench] Did you train the whole LLM in the pretraining stage of share recipe? (Issue #34)
No, vision tower was not trained.