Describe the bug
When I repeatedly call the Gemini model from my application code, I get this message. The call itself still executes, and the underlying channel is eventually garbage collected, but perhaps shutdown() also needs to be called explicitly, either by langchain4j or by the application.
Log and Stack trace
i.g.i.ManagedChannelOrphanWrapper : *~*~*~ Previous channel ManagedChannelImpl{logId=25, target=us-east4-aiplatform.googleapis.com:443} was garbage collected without being shut down! ~*~*~*
Make sure to call shutdown()/shutdownNow()
java.lang.RuntimeException: ManagedChannel allocation site
at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:102) ~[grpc-core-1.62.2.jar:1.62.2]
at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:60) ~[grpc-core-1.62.2.jar:1.62.2]
at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:51) ~[grpc-core-1.62.2.jar:1.62.2]
at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:672) ~[grpc-core-1.62.2.jar:1.62.2]
at io.grpc.ForwardingChannelBuilder2.build(ForwardingChannelBuilder2.java:260) ~[grpc-api-1.62.2.jar:1.62.2]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:442) ~[gax-grpc-2.45.0.jar:2.45.0]
at com.google.api.gax.grpc.ChannelPool.<init>(ChannelPool.java:107) ~[gax-grpc-2.45.0.jar:2.45.0]
at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:85) ~[gax-grpc-2.45.0.jar:2.45.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:243) ~[gax-grpc-2.45.0.jar:2.45.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:237) ~[gax-grpc-2.45.0.jar:2.45.0]
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:226) ~[gax-2.45.0.jar:2.45.0]
at com.google.cloud.vertexai.api.stub.GrpcPredictionServiceStub.create(GrpcPredictionServiceStub.java:295) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
at com.google.cloud.vertexai.api.stub.PredictionServiceStubSettings.createStub(PredictionServiceStubSettings.java:319) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
at com.google.cloud.vertexai.api.PredictionServiceClient.<init>(PredictionServiceClient.java:427) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
at com.google.cloud.vertexai.api.PredictionServiceClient.create(PredictionServiceClient.java:409) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
at com.google.cloud.vertexai.VertexAI.getPredictionServiceClient(VertexAI.java:289) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
at com.google.cloud.vertexai.generativeai.GenerativeModel.generateContent(GenerativeModel.java:629) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
at com.google.cloud.vertexai.generativeai.GenerativeModel.generateContent(GenerativeModel.java:610) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
at com.google.cloud.vertexai.generativeai.GenerativeModel.generateContent(GenerativeModel.java:499) ~[google-cloud-vertexai-0.6.0.jar:0.6.0]
at dev.langchain4j.model.vertexai.VertexAiGeminiChatModel.lambda$generate$0(VertexAiGeminiChatModel.java:109) ~[langchain4j-vertex-ai-gemini-0.30.0.jar:na]
at dev.langchain4j.internal.RetryUtils$RetryPolicy.withRetry(RetryUtils.java:192) ~[langchain4j-core-0.30.0.jar:na]
at dev.langchain4j.internal.RetryUtils.withRetry(RetryUtils.java:229) ~[langchain4j-core-0.30.0.jar:na]
at dev.langchain4j.model.vertexai.VertexAiGeminiChatModel.generate(VertexAiGeminiChatModel.java:108) ~[langchain4j-vertex-ai-gemini-0.30.0.jar:na]
To Reproduce
In my case, I've just called model.generate(...) a few times with a complex prompt (i.e. one that takes longer than the timeout).
The warning about the ManagedChannel not being properly shut down is due to the lifecycle management of the VertexAI instance within your VertexAiGeminiChatModel. To resolve this, you should manage the VertexAI instance's lifecycle alongside the VertexAiGeminiChatModel object, ensuring it's properly closed when no longer needed.
Here's a concise approach to fix the issue:

1. Remove the VertexAI instance from the try-with-resources statement in the constructor of VertexAiGeminiChatModel. This prevents the instance from being automatically closed at the end of the constructor's scope, which is not appropriate since the instance is used beyond the constructor.
2. Store a reference to the VertexAI instance in a field of the VertexAiGeminiChatModel class, so it can be accessed throughout the lifetime of the VertexAiGeminiChatModel object.
3. Implement a close method in the VertexAiGeminiChatModel class that explicitly closes the VertexAI instance. Call this method when the VertexAiGeminiChatModel object is no longer needed to properly release resources.
By implementing these steps, you ensure that the VertexAI instance is properly closed, addressing the warning message and managing resources effectively.
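The three steps above can be sketched as follows. This is a minimal illustration of the lifecycle pattern, not the actual langchain4j implementation: `Channel` here is a stand-in for the real `com.google.cloud.vertexai.VertexAI` client, `ChatModel` stands in for `VertexAiGeminiChatModel`, and all names are placeholders.

```java
// Stand-in for the VertexAI client, which wraps a gRPC ManagedChannel.
class Channel implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() {
        closed = true; // the real client would call shutdown() on its channel here
    }
}

// Stand-in for VertexAiGeminiChatModel: the client is kept as a field
// (step 2) instead of being opened in try-with-resources inside the
// constructor (step 1), and an explicit close() releases it (step 3).
class ChatModel implements AutoCloseable {
    private final Channel vertexAi;

    ChatModel() {
        // NOT try-with-resources: the channel must outlive the constructor.
        this.vertexAi = new Channel();
    }

    String generate(String prompt) {
        return "response to: " + prompt; // placeholder for the real model call
    }

    boolean isClosed() {
        return vertexAi.closed;
    }

    @Override
    public void close() {
        // Explicit shutdown instead of relying on garbage collection,
        // which is what triggers the ManagedChannelOrphanWrapper warning.
        vertexAi.close();
    }
}
```

At the call site, try-with-resources (or an explicit `close()` when the model is no longer needed) then shuts the channel down deterministically: `try (ChatModel model = new ChatModel()) { model.generate("..."); }`.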
About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
@kappamax if you upgrade to the new 0.31 version released today, this problem should be gone.
Could you please double check that you're not seeing it anymore? (and then we can close this ticket)
Expected behavior
This message should not pop up using gemini.