
Should bring details in error message #102

Open
linonetwo opened this issue Jul 19, 2023 · 0 comments
@linonetwo

  try {
    runnerInstance = new LLM(RwkvCpp);
    const loadConfig: LoadConfig = {
      enableLogging: true,
      nThreads: 4,
      ...loadConfigOverwrite,
    };
    subscriber?.next({ message: 'prepared to load instance', ...loggerCommonMeta, meta: { ...loggerCommonMeta.meta, loadConfigOverwrite } });
    await runnerInstance.load(loadConfig);
    subscriber?.next({ message: 'instance loaded', ...loggerCommonMeta });
    return runnerInstance;
  } catch (error) {
    // The error here is only: Error: Failed to initialize LLama context from file: /Users/linonetwo/Desktop/repo/TiddlyGit-Desktop/language-model-dev/llama.bin
    throw error;
  }

while the log in the console is:

error loading model: unrecognized tensor type 13

llama_init_from_file: failed to load model

I think this detailed message should be included in the Error, so I can show it to the user.
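As one possible shape for this, the thrown error could carry the model path and any captured native log lines alongside the terse message. This is only a minimal sketch of the idea, not llama-node's actual API: `ModelLoadError`, `wrapLoadError`, and their parameters are hypothetical names invented for illustration.

```typescript
// Hypothetical sketch: a richer error type that keeps the native log lines
// (e.g. "error loading model: unrecognized tensor type 13") so callers can
// surface them to the user instead of only the generic "failed to load" text.
class ModelLoadError extends Error {
  constructor(
    message: string,
    // The file that failed to load and the raw native log lines are kept as
    // structured fields in addition to being folded into the message.
    public readonly modelPath: string,
    public readonly nativeLog: string[],
  ) {
    super(`${message} (model: ${modelPath})\n${nativeLog.join('\n')}`);
    this.name = 'ModelLoadError';
  }
}

// Wrap whatever the native layer threw, attaching path and log details.
function wrapLoadError(error: unknown, modelPath: string, logLines: string[]): ModelLoadError {
  const base = error instanceof Error ? error.message : String(error);
  return new ModelLoadError(base, modelPath, logLines);
}
```

With something like this, the `catch` block above could rethrow `wrapLoadError(error, modelPath, capturedLogLines)` instead of the bare error, assuming the library exposed a way to capture its native logging output.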
