
Get error in local environment #6850

Open
s5364733 opened this issue Sep 19, 2023 · 13 comments

@s5364733

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
If applicable, add screenshots to help explain your problem.
[screenshots attached]

Desktop (please complete the following information):

  • Version [e.g. 2.4.4]
  • Kubernetes version [e.g. 1.21.10]
  • Package version [e.g. Helm 3.2, carvel-imgpkg 0.28.0]

Additional context
Add any other context about the problem here.

@s5364733 added the kind/bug label on Sep 19, 2023
@s5364733
Author

It's essentially unusable. I have to refresh constantly, and then it suddenly times out.

@s5364733
Author

[screenshot]

@absoludity
Contributor

Hi @s5364733. Can you watch the output of kubectl -n kubeapps get pods while you are experiencing those issues? A 502 Bad Gateway usually means that the frontend (nginx) is unable to forward requests because the service it's trying to reach is unavailable. Most likely the kubeapps-apis service is being constantly restarted due to a lack of resources (not enough memory being the most obvious cause).

Note that if you are running this locally, you need a decent amount of grunt on your local machine (32GB RAM), and even then, use the site/content/docs/latest/reference/manifests/kubeapps-local-dev-values.yaml to ensure you only have one of each service running (in prod you'd want multiple, but in a local development environment it's too much for one machine).
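
Concretely, something along these lines (a rough sketch, assuming Kubeapps was installed from the Bitnami chart as release "kubeapps" in the "kubeapps" namespace, and that you have a checkout of this repo for the values file — adjust to your setup):

# Watch the pods while you reproduce the 502s; look for restarts and 0/1 READY.
kubectl -n kubeapps get pods --watch

# Re-apply the chart with the local development values (one replica per service).
helm upgrade kubeapps bitnami/kubeapps --namespace kubeapps \
  -f site/content/docs/latest/reference/manifests/kubeapps-local-dev-values.yaml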

@s5364733
Author

[screenshot]


[screenshot]

@s5364733
Author

[screenshot]

@s5364733
Author

in prod you'd want multiple, but in a local development environment it's too much for one machine

All services have started normally

@absoludity
Contributor

absoludity commented Sep 20, 2023

Thanks for the extra info.

in prod you'd want multiple, but in a local development environment it's too much for one machine

All services have started normally

Not according to both your screenshots? You've got two kubeapps-internal-kubeappsapis-* pods and one postgresql pod, all showing the "Running" state, but none of them is ready (all show 0/1 ready).

This means that none of those pods is in a ready state (they are failing their readiness checks). It would be worth taking a look at the k8s documentation for how to get more info about the readiness checks. My guess, as before (and strengthened by the number of restarts you are seeing), is that there just aren't enough resources (memory, EDIT: or CPU) available, so k8s keeps killing pods to start others, etc. (e.g. your postgresql pod has 81 restarts in the last 23 hours, over 3 per hour).

Note: free -mh shows you how much free memory you've got on your machine, not necessarily how much you've allocated for use with Docker (this varies based on your system).

As mentioned earlier, I would use the local values that I pointed to so that you're not running two pods when you only need one.

You might find that the output of kubectl --namespace kubeapps describe pod kubeapps-postgresql-0 gives you more info about why the pod is not ready (see the checks). Or paste it here, with the logs for that pod, out of interest. Cheers.
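
Concretely, something like this (a rough sketch, using the namespace and pod name from your screenshots):

# Probe configuration and recent events for the pod that is not ready.
kubectl --namespace kubeapps describe pod kubeapps-postgresql-0

# Current logs, plus the logs of the previous (restarted) container instance.
kubectl --namespace kubeapps logs kubeapps-postgresql-0
kubectl --namespace kubeapps logs kubeapps-postgresql-0 --previous

# Free memory on the host vs. what is actually requested/limited on the node.
free -mh
kubectl describe nodes | grep -A 8 'Allocated resources'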

@s5364733
Author


Name:             kubeapps-postgresql-0
Namespace:        kubeapps
Priority:         0
Service Account:  default
Node:             minikube/192.168.39.33
Start Time:       Tue, 19 Sep 2023 11:03:12 +0800
Labels:           app.kubernetes.io/component=primary
                  app.kubernetes.io/instance=kubeapps
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=postgresql
                  controller-revision-hash=kubeapps-postgresql-76c6bbd8c9
                  helm.sh/chart=postgresql-12.10.0
                  statefulset.kubernetes.io/pod-name=kubeapps-postgresql-0
Annotations:
Status:           Running
IP:               10.244.0.235
IPs:
  IP:             10.244.0.235
Controlled By:    StatefulSet/kubeapps-postgresql
Containers:
  postgresql:
    Container ID:    docker://191c5ac02958972248103e3fdd1ad3a3cde3a8ca74acc08873ae0e21b7f76b76
    Image:           docker.io/bitnami/postgresql:15.4.0-debian-11-r10
    Image ID:        docker-pullable://bitnami/postgresql@sha256:86c140fd5df7eeb3d8ca78ce4503fcaaf0ff7d2e10af17aa424db7e8a5ae8734
    Port:            5432/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Wed, 20 Sep 2023 12:16:45 +0800
    Last State:      Terminated
      Reason:        Error
      Exit Code:     137
      Started:       Wed, 20 Sep 2023 12:14:22 +0800
      Finished:      Wed, 20 Sep 2023 12:16:35 +0800
    Ready:           True
    Restart Count:   97
    Requests:
      cpu:     250m
      memory:  256Mi
    Liveness:   exec [/bin/sh -c exec pg_isready -U "postgres" -d "dbname=assets" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/sh -c -e exec pg_isready -U "postgres" -d "dbname=assets" -h 127.0.0.1 -p 5432
                [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
                ] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                        false
      POSTGRESQL_PORT_NUMBER:               5432
      POSTGRESQL_VOLUME_DIR:                /bitnami/postgresql
      PGDATA:                               /bitnami/postgresql/data
      POSTGRES_PASSWORD:                    <set to the key 'postgres-password' in secret 'kubeapps-postgresql'>  Optional: false
      POSTGRES_DATABASE:                    assets
      POSTGRESQL_ENABLE_LDAP:               no
      POSTGRESQL_ENABLE_TLS:                no
      POSTGRESQL_LOG_HOSTNAME:              false
      POSTGRESQL_LOG_CONNECTIONS:           false
      POSTGRESQL_LOG_DISCONNECTIONS:        false
      POSTGRESQL_PGAUDIT_LOG_CATALOG:       off
      POSTGRESQL_CLIENT_MIN_MESSAGES:       error
      POSTGRESQL_SHARED_PRELOAD_LIBRARIES:  pgaudit
    Mounts:
      /dev/shm from dshm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vfpw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  dshm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:
  data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  kube-api-access-9vfpw:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:              true
QoS Class:        Burstable
Node-Selectors:
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From     Message
  Warning  BackOff    33m (x104 over 77m)   kubelet  Back-off restarting failed container postgresql in pod kubeapps-postgresql-0_kubeapps(fa0182c9-e300-4311-88c0-f5ceccfce91b)
  Warning  Unhealthy  23m (x435 over 168m)  kubelet  Readiness probe failed: command "/bin/sh -c -e exec pg_isready -U "postgres" -d "dbname=assets" -h 127.0.0.1 -p 5432\n[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]\n" timed out
  Warning  Unhealthy  18m (x156 over 164m)  kubelet  Liveness probe failed: command "/bin/sh -c exec pg_isready -U "postgres" -d "dbname=assets" -h 127.0.0.1 -p 5432" timed out
  Warning  Unhealthy  13m (x5 over 58m)     kubelet  Readiness probe failed: cannot exec in a stopped state: unknown
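
(That Exit Code 137 is 128 + 9, i.e. the container was killed with SIGKILL — typically the OOM killer, or the kubelet after failed liveness probes.) If it helps, I can also run the following (just a sketch):

# Last termination state of the postgresql container (reason, exit code, timestamps).
kubectl -n kubeapps get pod kubeapps-postgresql-0 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

# Recent events in the namespace, oldest first.
kubectl -n kubeapps get events --sort-by=.metadata.creationTimestamp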

@s5364733
Author

How can I allocate more memory to Docker? Or do you have a better way to solve this problem?

@s5364733
Author

docker info:

Client:
 Version: 24.0.5
 Context: default
 Debug Mode: false

Server:
 Containers: 9
  Running: 2
  Paused: 0
  Stopped: 7
 Images: 13
 Server Version: 24.0.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: syslog
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 5.15.90.1-microsoft-standard-WSL2
 Operating System: Ubuntu 20.04.6 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 10
 Total Memory: 15.28GiB
 Name: Jackliang
 ID: 26d9706e-e3b4-437d-b407-86cbc2782ff9
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://ung2thfc.mirror.aliyuncs.com/
 Live Restore Enabled: false
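
Re: allocating more memory — as far as I can tell, since this Docker daemon runs on WSL2 (kernel 5.15.90.1-microsoft-standard-WSL2, 15.28GiB visible), the memory ceiling comes from the WSL2 VM, which is configured in %UserProfile%\.wslconfig on the Windows side; and since the minikube node IP (192.168.39.33) looks like a VM driver network, the minikube VM's own allocation matters too. A rough, unverified sketch (sizes are just examples):

# On the Windows host, in %UserProfile%\.wslconfig:
#   [wsl2]
#   memory=12GB
#   processors=8
# then restart WSL from Windows:  wsl --shutdown

# If minikube runs its own VM (e.g. the kvm2 driver), resize it at start time
# (an existing profile may need 'minikube delete' first for the new size to apply):
minikube stop
minikube start --memory=8192 --cpus=4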

@s5364733
Author


Given those probe timeouts, does this look like a network problem?

@antgamdia
Contributor

I can't reproduce it locally. Did you manage to get it sorted out in the end?

@antgamdia added the stale and kind/question labels and removed the kind/bug label on Jan 8, 2024
@stale (bot) removed the stale label on Jan 8, 2024
@antgamdia added the stale and awaiting-more-evidence labels on Jan 8, 2024
@stale (bot) removed the stale label on Jan 8, 2024
@antgamdia added this to the Community requests milestone on Jan 8, 2024

stale bot commented Mar 17, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale (bot) added the stale label on Mar 17, 2024