
OPENAI embeddings API Integration #177

Closed
arhaang13 opened this issue May 15, 2024 · 4 comments

Comments

@arhaang13

Can you please help me navigate how to change the application to use the OpenAI embeddings API and experiment with it?

@andrewnguonly
Owner

You'll have to update the scripts/background.ts file.

  1. Install @langchain/openai: npm install @langchain/openai
  2. Import OpenAIEmbeddings. See docs.
import { OpenAIEmbeddings } from "@langchain/openai";
  3. Replace the OllamaEmbeddings instance with an OpenAIEmbeddings instance.
// load documents into vector store
vectorStore = new EnhancedMemoryVectorStore(
  new OpenAIEmbeddings({
    apiKey: "YOUR-API-KEY", // in Node.js, defaults to process.env.OPENAI_API_KEY
    batchSize: 512, // default is 512; max is 2048
    model: "text-embedding-3-large",
  }),
);
  4. Rebuild the application: npm run build.
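The batchSize option caps how many texts go into a single embeddings request (OpenAI accepts at most 2048 inputs per call). A minimal sketch of that batching logic, independent of LangChain and Lumos; the batchTexts helper below is hypothetical, written only to illustrate how inputs are split before each API call:

```typescript
// Split an array of texts into batches no larger than batchSize,
// mirroring the kind of chunking OpenAIEmbeddings performs internally.
// Standalone illustration; not part of Lumos or LangChain.
function batchTexts(texts: string[], batchSize: number = 512): string[][] {
  if (batchSize < 1 || batchSize > 2048) {
    throw new Error("batchSize must be between 1 and 2048");
  }
  const batches: string[][] = [];
  for (let i = 0; i < texts.length; i += batchSize) {
    batches.push(texts.slice(i, i + batchSize));
  }
  return batches;
}

// Example: 1,100 document chunks with the default batch size of 512
const texts = Array.from({ length: 1100 }, (_, i) => `chunk ${i}`);
const batches = batchTexts(texts);
console.log(batches.length); // 3 batches: 512 + 512 + 76
```

Raising batchSize means fewer HTTP round trips per indexing run, at the cost of larger individual requests.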

That's it! Let me know if that helps.

@arhaang13
Author

Thank you so much, this helped me a lot!

In addition, I would like to ask whether there is a way to integrate and call the OpenAI API instead of using Ollama models with Lumos itself. Running the Ollama models on my local system makes it much slower, so I would like the complete RAG setup to use OpenAI.

Kindly help me out with the same.

Thank you,

@andrewnguonly
Owner

if there is a way that I could integrate and call the openAI API

Again, you'll have to update the scripts/background.ts file.

  1. Install @langchain/openai: npm install @langchain/openai
  2. Import ChatOpenAI and OpenAI.
import { ChatOpenAI, OpenAI } from "@langchain/openai";
  3. Replace the ChatOllama instance with a ChatOpenAI instance.
const getChatModel = (options: LumosOptions): Runnable => {
  return new ChatOpenAI({
    apiKey: "YOUR-API-KEY",
    callbacks: [new ConsoleCallbackHandler()],
  }).bind({
    signal: controller.signal,
  });
};
  4. Replace the Ollama instance with an OpenAI instance.
const classifyPrompt = async (
  options: LumosOptions,
  type: string,
  originalPrompt: string,
  classifcationPrompt: string,
  prefixTrigger?: string,
): Promise<boolean> => {
  ...

  // otherwise, attempt to classify prompt
  const openai = new OpenAI({
    apiKey: "YOUR-API-KEY",
    temperature: 0,
    stop: [".", ","],
  }).bind({
    signal: controller.signal,
  });

  ...
};
  5. Rebuild the application: npm run build.
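Both snippets bind controller.signal so an in-flight model call can be cancelled. That wiring is the web-standard AbortController (available in browsers and Node 18+); a minimal sketch with the model call stubbed out, since the real call site is project-specific. The fakeModelCall function is a hypothetical stand-in; in Lumos the signal is passed to the bound ChatOpenAI / OpenAI instance instead:

```typescript
// Sketch of the AbortController pattern used above (Node 18+ / browsers).
// fakeModelCall is a stand-in for a long-running LLM request.
function fakeModelCall(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve("done"), 1000);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("aborted"));
    });
  });
}

const controller = new AbortController();
const pending = fakeModelCall(controller.signal);
controller.abort(); // e.g. the user dismisses the extension popup
pending.catch((err: Error) => console.log(err.message)); // "aborted"
```

Binding the signal once via .bind({ signal: controller.signal }) means every subsequent invocation of that model instance honors the same cancellation source.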

@andrewnguonly
Owner

Closing this issue for now. @arhaang13, let me know if you need more help.
