
Buggy behavior after failed health check recovers #12694

Open

mhkarimi1383 opened this issue Mar 6, 2024 · 16 comments
Labels
area/kubernetes (Issues where Kong is running on top of Kubernetes), bug

Comments

@mhkarimi1383

mhkarimi1383 commented Mar 6, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Kong version ($ kong version)

3.5.0 (With KIC 2.12)

Current Behavior

Sometimes, after a health check fails and then recovers, Kong still responds with 503 for that service.

Expected Behavior

Responses recover as soon as the health check recovers.

Steps To Reproduce

  1. In a K8s environment
  2. Bring up a project and create an Ingress and an UpstreamPolicy with a TCP or HTTP health check (TCP preferred); see the sketch after this list
  3. Configure the health check to fail for some time (you will get 503 errors)
  4. Make the health check pass again (you may still get 503 errors)
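
A minimal sketch of such a resource, assuming KIC 2.x's KongIngress CRD (the resource name is a placeholder, not taken from this report):

    apiVersion: configuration.konghq.com/v1
    kind: KongIngress
    metadata:
      name: demo-kongingress   # hypothetical name
    upstream:
      healthchecks:
        active:
          type: tcp
          healthy:
            interval: 5
            successes: 3
          unhealthy:
            interval: 5
            tcp_failures: 1

In KIC 2.x this is attached to the backend Service via the konghq.com/override: demo-kongingress annotation.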

Anything else?

No response

@chronolaw added the area/kubernetes label Mar 6, 2024
@StarlightIbuki
Contributor

The behavior sounds expected to me. The health check status does not update immediately, and the passive health checker cannot predict if the next request will succeed. Could you elaborate?

@StarlightIbuki added the pending author feedback label Mar 19, 2024
@mhkarimi1383
Author

@StarlightIbuki
Hi,
after the interval passes it should recover, but it doesn't.
Also, clearing the Kong cache via the Admin API fixes the issue.
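
For reference, the workaround looks roughly like this (assuming the Admin API listens on localhost:8001; <upstream> is a placeholder):

    # show the health status Kong currently reports for the upstream's targets
    curl -s http://localhost:8001/upstreams/<upstream>/health

    # purge this node's entity cache
    curl -i -X DELETE http://localhost:8001/cache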

@mhkarimi1383
Author

It happens when we have a rolling update on our K8s Deployment
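
For example (the Deployment name is a placeholder):

    kubectl rollout restart deployment/my-app
    kubectl rollout status deployment/my-app   # wait until the new pods are ready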

@StarlightIbuki
Contributor

@mhkarimi1383 Is the upstream failing in a predictable or controllable manner, so that you can be sure the reported status does not reflect the actual state?

@mhkarimi1383
Author

@StarlightIbuki
Yes.
I sent requests to that pod and monitored its health check endpoint using a blackbox exporter pointing at its K8s Service.
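
Roughly like this, as a blackbox_exporter module (a sketch; the probe target is a placeholder):

    # blackbox.yml (sketch); probed via
    #   http://<exporter>:9115/probe?module=tcp_connect&target=my-svc.my-ns.svc:8080
    modules:
      tcp_connect:
        prober: tcp
        timeout: 5s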

@StarlightIbuki
Contributor

@mhkarimi1383 Could you share the config that you are using?

@mhkarimi1383
Author

@StarlightIbuki

        upstream:
          healthchecks:
            active:
              healthy:
                interval: 5
                successes: 3
              type: tcp
              unhealthy:
                tcp_failures: 1
                interval: 5

Here is my KongIngress spec

@StarlightIbuki
Contributor

5s seems a short interval. How long do you wait before inspecting the status?
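
(For reference: with the interval: 5 / successes: 3 settings above, a recovered target should be marked healthy again after roughly 3 × 5 = 15 seconds of consecutive successful probes.)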

@mhkarimi1383
Author

mhkarimi1383 commented Mar 19, 2024

@StarlightIbuki
About 5 minutes

@StarlightIbuki
Contributor

I still do not really understand the reproduction steps. When the health checker reports green and you get 503, what real status are you expecting?

@mhkarimi1383
Author

> I still do not really understand the reproduction steps. When the health checker reports green and you get 503, what real status are you expecting?

Yes.
Kong says the project is unhealthy but it is actually healthy; clearing the Kong cache fixes the problem.

@StarlightIbuki
Contributor

> I still do not really understand the reproduction steps. When the health checker reports green and you get 503, what real status are you expecting?
>
> Yes. Kong says the project is unhealthy but it is actually healthy; clearing the Kong cache fixes the problem.

Sorry, but let me confirm my understanding is correct: for step 4, we configure the upstream to work again, and we then observe the health checker still reporting an unhealthy condition?

@mhkarimi1383
Author

@StarlightIbuki Yes

Contributor

github-actions bot commented Apr 4, 2024

This issue is marked as stale because it has been open for 14 days with no activity.

@github-actions bot added the stale label Apr 4, 2024
@StarlightIbuki removed the pending author feedback and stale labels Apr 7, 2024
@ADD-SP added the bug label May 27, 2024
@ADD-SP
Contributor

ADD-SP commented May 27, 2024

I have reproduced this issue locally using the master branch. @mhkarimi1383, thanks for your report.

Internal ticket for tracking: KAG-4588

_format_version: "3.0"
_transform: true

services:
- name: service_1
  host: upstream_1
  routes:
  - name: route_1
    paths:
    - /1

upstreams:
- name: upstream_1
  targets:
  - target: localhost:80
  healthchecks:
    active:
      timeout: 10
      healthy:
        interval: 5
      unhealthy:
        http_statuses: [500]
        http_failures: 1
        interval: 5
@mhkarimi1383
Author

Thanks

Sometimes clearing the cache does not work, and we have to wait (for example, 20 minutes) or restart Kong to fix the problem.
