
[BUG] Vertex AI Gemini models cause a "ManagedChannel allocation site" warning because the channel is not shut down #1090

Open
kappamax opened this issue May 11, 2024 · 3 comments
Labels: bug (Something isn't working), P2 (High priority), P3 (Medium priority)

Comments

@kappamax

Describe the bug
When I repeatedly call the Gemini model from my application code, I get the message below. The call still executes, and the underlying channel is eventually garbage collected, but perhaps shutdown also needs to be called from langchain4j.

Log and Stack trace

i.g.i.ManagedChannelOrphanWrapper        : *~*~*~ Previous channel ManagedChannelImpl{logId=25, target=us-east4-aiplatform.googleapis.com:443} was garbage collected without being shut down! ~*~*~*
    Make sure to call shutdown()/shutdownNow()

java.lang.RuntimeException: ManagedChannel allocation site
	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:102) ~[grpc-core-1.62.2.jar:1.62.2]
	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:60) ~[grpc-core-1.62.2.jar:1.62.2]
	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:51) ~[grpc-core-1.62.2.jar:1.62.2]
	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:672) ~[grpc-core-1.62.2.jar:1.62.2]
	at io.grpc.ForwardingChannelBuilder2.build(ForwardingChannelBuilder2.java:260) ~[grpc-api-1.62.2.jar:1.62.2]
	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:442) ~[gax-grpc-2.45.0.jar:2.45.0]
	at com.google.api.gax.grpc.ChannelPool.<init>(ChannelPool.java:107) ~[gax-grpc-2.45.0.jar:2.45.0]
	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:85) ~[gax-grpc-2.45.0.jar:2.45.0]
	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:243) ~[gax-grpc-2.45.0.jar:2.45.0]
	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:237) ~[gax-grpc-2.45.0.jar:2.45.0]
	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:226) ~[gax-2.45.0.jar:2.45.0]
	at com.google.cloud.vertexai.api.stub.GrpcPredictionServiceStub.create(GrpcPredictionServiceStub.java:295) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
	at com.google.cloud.vertexai.api.stub.PredictionServiceStubSettings.createStub(PredictionServiceStubSettings.java:319) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
	at com.google.cloud.vertexai.api.PredictionServiceClient.<init>(PredictionServiceClient.java:427) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
	at com.google.cloud.vertexai.api.PredictionServiceClient.create(PredictionServiceClient.java:409) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
	at com.google.cloud.vertexai.VertexAI.getPredictionServiceClient(VertexAI.java:289) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
	at com.google.cloud.vertexai.generativeai.GenerativeModel.generateContent(GenerativeModel.java:629) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
	at com.google.cloud.vertexai.generativeai.GenerativeModel.generateContent(GenerativeModel.java:610) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
	at com.google.cloud.vertexai.generativeai.GenerativeModel.generateContent(GenerativeModel.java:499) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
	at dev.langchain4j.model.vertexai.VertexAiGeminiChatModel.lambda$generate$0(VertexAiGeminiChatModel.java:109) ~[langchain4j-vertex-ai-gemini-0.30.0.jar:na]
	at dev.langchain4j.internal.RetryUtils$RetryPolicy.withRetry(RetryUtils.java:192) ~[langchain4j-core-0.30.0.jar:na]
	at dev.langchain4j.internal.RetryUtils.withRetry(RetryUtils.java:229) ~[langchain4j-core-0.30.0.jar:na]
	at dev.langchain4j.model.vertexai.VertexAiGeminiChatModel.generate(VertexAiGeminiChatModel.java:108) ~[langchain4j-vertex-ai-gemini-0.30.0.jar:na]

To Reproduce

In my case, I've just called model.generate(...) a few times with a complex prompt (i.e. one that takes longer than the configured timeout).
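A minimal reproduction sketch, assuming the standard langchain4j-vertex-ai-gemini builder; the project, location, and prompt values are placeholders:

import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;

public class GeminiChannelLeakRepro {

    public static void main(String[] args) {
        // Placeholder project/location values; modelName matches the model from this report.
        VertexAiGeminiChatModel model = VertexAiGeminiChatModel.builder()
                .project("my-gcp-project")
                .location("us-east4")
                .modelName("gemini-1.5-pro-preview-0409")
                .build();

        // After several calls, the ManagedChannelOrphanWrapper warning shown above
        // appears once earlier channels are garbage collected without being shut down.
        for (int i = 0; i < 5; i++) {
            System.out.println(model.generate("Summarize this long, complex prompt: ..."));
        }
    }
}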

Expected behavior

This message should not appear when using Gemini.

Please complete the following information:

  • LangChain4j version: 0.30.0
  • LLM(s) used: gemini-1.5-pro-preview-0409
  • Java version: OpenJDK Runtime Environment Corretto-11.0.22.7.1
  • Spring Boot version (if applicable): 3.2.5 (also happened on 3.2.4)

Additional context

@kappamax added the bug label on May 11, 2024

dosubot bot commented May 11, 2024

The warning about the ManagedChannel not being properly shut down is due to the lifecycle management of the VertexAI instance within your VertexAiGeminiChatModel. To resolve this, you should manage the VertexAI instance's lifecycle alongside the VertexAiGeminiChatModel object, ensuring it's properly closed when no longer needed.

Here's a concise approach to fix the issue:

  1. Remove the VertexAI instance from the try-with-resources statement in the constructor of VertexAiGeminiChatModel. This change prevents the instance from being automatically closed at the end of the constructor's scope, which is not suitable since the instance is used beyond the constructor.

  2. Store a reference to the VertexAI instance within the VertexAiGeminiChatModel class. This allows you to access the instance throughout the lifecycle of the VertexAiGeminiChatModel object.

  3. Implement a close method in the VertexAiGeminiChatModel class that explicitly closes the VertexAI instance. This method should be called when the VertexAiGeminiChatModel object is no longer needed to properly release resources.

By implementing these steps, you ensure that the VertexAI instance is properly closed, addressing the warning message and managing resources effectively.
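A minimal sketch of that pattern, written as a standalone wrapper around the google-cloud-vertexai client rather than the actual langchain4j class (the class name, field layout, and thrown exceptions here are illustrative assumptions):

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.IOException;

// Illustrative only: a chat-model wrapper that keeps the VertexAI client it created
// and exposes close() so the underlying gRPC ManagedChannel is shut down.
public class ClosableGeminiChatModel implements AutoCloseable {

    private final VertexAI vertexAI;              // kept as a field instead of try-with-resources
    private final GenerativeModel generativeModel;

    public ClosableGeminiChatModel(String project, String location, String modelName) throws IOException {
        this.vertexAI = new VertexAI(project, location);
        this.generativeModel = new GenerativeModel(modelName, vertexAI);
    }

    public String generate(String prompt) throws IOException {
        return ResponseHandler.getText(generativeModel.generateContent(prompt));
    }

    @Override
    public void close() throws Exception {
        // Closes the underlying prediction service client and its channels,
        // avoiding the "garbage collected without being shut down" warning.
        vertexAI.close();
    }
}

Callers would then hold one instance per application (or use try-with-resources) and call close() once the model is no longer needed.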


@langchain4j (Owner)

@kappamax thank you a lot for reporting!

cc @glaforge

@langchain4j added the P2 (High priority) and P3 (Medium priority) labels on May 13, 2024
@glaforge (Contributor)

@kappamax if you upgrade to the new 0.31 version released today, this problem should be gone.
Could you please double check that you're not seeing it anymore? (and then we can close this ticket)
