For the readiness probe a simple endpoint that returns 200 is enough. This tests your service’s ability to respond to requests without relying on any external dependencies (sessions, which might use Redis, or a user auth service, which might use a database).
For the liveness probe I guess you could check whether your service is accepting TCP connections? I don’t think there should ever be a reason for your service to outright refuse connections unless the main service process has crashed (in which case it’s best to let Kubernetes restart the container rather than having a recovery mechanism inside the container itself, like supervisord or daemontools).
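Roughly what I have in mind, as a minimal sketch (the `/ready` path and port 8080 are made up for the example; a `tcpSocket` liveness probe pointed at the same port needs no handler at all, since the kubelet only checks that the connection opens):

```go
// Readiness endpoint that only proves the process can accept and answer HTTP
// requests, with no calls to Redis, the database, or any other dependency.
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Readiness: if this handler runs at all, the process is up and serving.
	mux.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Liveness via a tcpSocket probe: Kubernetes just dials :8080, so the fact
	// that ListenAndServe is still accepting connections is the whole check.
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```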
> For the readiness probe a simple endpoint that returns 200 is enough. This tests your service’s ability to respond to requests without relying on any external dependencies (sessions, which might use Redis, or a user auth service, which might use a database).
If the underlying dependencies aren't working, can a pod actually be considered ready and able to serve traffic? For example, if database calls are essential to a pod being functional and the pod can't communicate with the database, should the pod actually be eligible for traffic?
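Something like this sketch is what I'd expect instead, assuming the database really is essential (the `*sql.DB` comes from wherever the service opens its connection pool; the names and timeout are purely illustrative). Failing readiness only takes the pod out of the Service's endpoints; it doesn't restart it the way a failed liveness probe would:

```go
package health

import (
	"context"
	"database/sql"
	"net/http"
	"time"
)

// ReadyHandler fails readiness (503) while the database is unreachable, so the
// pod is pulled out of rotation instead of receiving traffic it cannot serve.
func ReadyHandler(db *sql.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
		defer cancel()

		if err := db.PingContext(ctx); err != nil {
			http.Error(w, "database unreachable", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}
```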
> Do not fail either of the probes if any of your shared dependencies is down; it would cause a cascading failure of all the pods.
The idea is that the downstream dependencies have their own probes, so if they fail they get restarted in isolation without touching the services that depend on them. Those services are only temporarily degraded by the dependency failure and will recover as soon as the dependency is fixed.
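As a rough sketch of that approach (the `DependencyCheck` type and everything else here is made up for illustration), the probe endpoint keeps answering 200 so the pod stays in rotation and is never restarted over a shared-dependency outage, and only reports the degraded dependency in the body so dashboards and alerts can pick it up:

```go
package health

import (
	"context"
	"encoding/json"
	"net/http"
	"time"
)

// DependencyCheck reports whether one downstream dependency is reachable;
// how it checks (SQL ping, Redis PING, HTTP call) is up to the service.
type DependencyCheck func(ctx context.Context) error

// Handler never fails because of a shared dependency: it always answers 200
// and only surfaces per-dependency status for observability.
func Handler(checks map[string]DependencyCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
		defer cancel()

		status := make(map[string]string, len(checks))
		for name, check := range checks {
			if err := check(ctx); err != nil {
				status[name] = "degraded: " + err.Error()
			} else {
				status[name] = "ok"
			}
		}

		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		_ = json.NewEncoder(w).Encode(status)
	}
}
```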