Kubernetes metadata overwhelms memory limits in the Agent process #4729
Comments
Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)
Possibly related: an increase starting in 8.14.0 was detected by the ECK integration tests in #4730
FWIW, the diagnostics described in this issue were from 8.13.3.
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
After chatting with @cmacknz and @pierrehilbert, assigning this to you @faec and making it a high priority for the next sprint.
cc @gizas
Agent's variable provider API is very opaque, which is probably a big part of this. @bturquet / @gizas, if we add hooks to the variable provider API for the Coordinator to give a list of possible variables, what work would be needed to restrict Kubernetes queries to those variables?
@faec, trying to understand here how we can combine those pieces. So let's say the parsing changes and there is a list of variables that the provider will need to populate. The other metadata enrichment we do with enrichers is unrelated to the flow you describe here. Maybe we can sync offline so I can understand more about this?
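To make the proposal above concrete, here is a minimal sketch of what such a hook could look like: the Coordinator parses the policy, collects the referenced variable paths, and hands them to the provider, which then skips fetching or caching any Kubernetes metadata fields outside that set. All names here (`SetRequestedVars`, `kubernetesProvider`, `wants`) are hypothetical, not the real elastic-agent API.

```go
package main

import "fmt"

// Provider is a hypothetical version of the variable provider interface
// extended with the proposed hook.
type Provider interface {
	// SetRequestedVars is the proposed hook: the Coordinator passes the
	// variable paths (e.g. "kubernetes.pod.name") it found while parsing
	// the policy, so the provider can restrict its queries to them.
	SetRequestedVars(paths []string)
}

// kubernetesProvider is an illustrative provider that filters metadata
// fields against the requested set instead of storing everything.
type kubernetesProvider struct {
	requested map[string]bool
}

func (p *kubernetesProvider) SetRequestedVars(paths []string) {
	p.requested = make(map[string]bool, len(paths))
	for _, path := range paths {
		p.requested[path] = true
	}
}

// wants reports whether a metadata field should be fetched and cached at all.
func (p *kubernetesProvider) wants(path string) bool {
	if p.requested == nil {
		// No restriction known yet: fall back to today's behavior of
		// fetching everything.
		return true
	}
	return p.requested[path]
}

func main() {
	p := &kubernetesProvider{}
	p.SetRequestedVars([]string{"kubernetes.pod.name", "kubernetes.namespace"})
	fmt.Println(p.wants("kubernetes.pod.name"))   // requested: fetch it
	fmt.Println(p.wants("kubernetes.pod.labels")) // not requested: skip it
}
```

The point of the sketch is the `requested == nil` fallback: Agents whose policies cannot be parsed into a variable list would keep the current fetch-everything behavior, so the restriction is purely an optimization.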
Diagnostics from production Agents running on Kubernetes show that roughly 80% of the memory usage comes from elastic-agent-autodiscover, and the other 20% is from helpers internal to elastic-agent. We need to understand why the Kubernetes helpers are using so much memory, and find a way to mitigate it.
Definition of done