- Release Signoff Checklist
- Summary
- Motivation
- Proposal
- Design Details
- Production Readiness Review Questionnaire
- Implementation History
- Drawbacks
- Alternatives
- Infrastructure Needed (Optional)
Items marked with (R) are required prior to targeting to a milestone / release.
- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in kubernetes/enhancements (not the initial KEP PR)
- [ ] (R) KEP approvers have approved the KEP status as `implementable`
- [ ] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input
- [ ] (R) Graduation criteria is in place
- [ ] (R) Production readiness review completed
- [ ] Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in kubernetes/website, for publication to kubernetes.io
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
Kubelet graceful shutdown should take the pod priority values into account to determine the order in which the pods are stopped.
The node graceful shutdown KEP added support to the kubelet to detect that a node is shutting down and to ensure that pods are gracefully stopped before the shutdown is allowed to proceed.
The feature added flags to specify the total time for shutdown and the time to reserve for shutting down critical pods.
However, there is a need for finer-grained control over the pod shutdown order beyond critical and regular pods.
Also, Kubernetes API design generally discourages hard-coding behavior against specific names.
Instead of looking at pod priority class names, we can look at pod priority class values, which allows more control over the pod shutdown order.
- Make the kubelet use shutdown configuration based on pod priority values for graceful shutdown.
- Non-Linux hosts aren't supported
- Let users modify or change the existing pod lifecycle or introduce new inter-pod dependencies / shutdown ordering
- Provide a guarantee that all cases of node shutdown are handled gracefully; for example, an abrupt shutdown or a sudden power cable pull cannot result in a graceful shutdown
- As a cluster administrator, I can configure the nodes in my cluster to allocate different graceful shutdown durations for different pod priority value ranges to terminate them gracefully during node shutdown
This implementation builds on top of the node graceful shutdown feature by introducing additional configuration. A new feature gate called `GracefulNodeShutdownBasedOnPodPriority` will be added to control the behavior of the kubelet.
We will describe the configuration by using an example. Say, the following custom pod priority classes are created in a cluster:
Pod priority class name | Pod priority class value |
---|---|
custom-class-a | 100000 |
custom-class-b | 10000 |
custom-class-c | 1000 |
regular/unset | 0 |
We could then set the kubelet configuration to stop the pods as follows:
Pod priority class value | Shutdown period |
---|---|
100000 | 10 seconds |
10000 | 180 seconds |
1000 | 120 seconds |
0 | 60 seconds |
The above table implies that any pod with priority value >= 100000 will get just 10 seconds to stop; any pod with value >= 10000 and < 100000 will get 180 seconds to stop; and any pod with value >= 1000 and < 10000 will get 120 seconds to stop. Finally, all other pods will get 60 seconds to stop.
Note: We use priority values instead of names because the Kubernetes API design discourages relying on names, and the values are more portable as well.
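To make the bucketing concrete, here is a minimal sketch of how a shutdown period could be selected for a pod. The type and function names are illustrative assumptions, not the actual kubelet implementation:

```go
// Sketch only: picking a shutdown period from (priority, period) pairs like
// the table above. Not the kubelet's actual code.
package main

import (
	"fmt"
	"sort"
)

type shutdownPeriodByPriority struct {
	priority      int32
	periodSeconds int64
}

// periodForPriority returns the period of the highest configured priority
// threshold that does not exceed the pod's priority value.
func periodForPriority(config []shutdownPeriodByPriority, podPriority int32) int64 {
	// Sort thresholds in ascending priority order.
	sort.Slice(config, func(i, j int) bool { return config[i].priority < config[j].priority })
	period := int64(0) // no matching bucket configured
	for _, entry := range config {
		if podPriority >= entry.priority {
			period = entry.periodSeconds
		}
	}
	return period
}

func main() {
	config := []shutdownPeriodByPriority{
		{priority: 100000, periodSeconds: 10},
		{priority: 10000, periodSeconds: 180},
		{priority: 1000, periodSeconds: 120},
		{priority: 0, periodSeconds: 60},
	}
	fmt.Println(periodForPriority(config, 10500)) // >= 10000 bucket: 180
	fmt.Println(periodForPriority(config, 500))   // >= 0 bucket: 60
}
```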
One doesn't have to specify values corresponding to all of the classes. For example, the configuration could also be:
Pod priority class value | Shutdown period |
---|---|
100000 | 300 seconds |
1000 | 120 seconds |
0 | 60 seconds |
In the above case, the pods with custom-class-b will go into the same bucket as custom-class-c for shutdown.
If there are no pods in a particular range, then the kubelet does not wait for pods in that priority range. Instead, the kubelet immediately skips to the next priority class value range.
If this feature is enabled and no configuration is provided, then no ordering action will be taken. The rationale is to allow some users to opt out of this if they are on a non-systemd distribution or have an older version of systemd with which this feature won't work.
The feature relies on systemd inhibitor locks that were introduced in systemd version 183.
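As background on that mechanism, the sketch below shows how a node agent might take a systemd "delay" inhibitor lock over D-Bus. It assumes the github.com/godbus/dbus/v5 client and is an illustration of the mechanism only, not the kubelet's actual code:

```go
// Sketch only: taking a systemd-logind "delay" shutdown inhibitor lock.
package main

import (
	"fmt"
	"log"

	"github.com/godbus/dbus/v5"
)

func main() {
	conn, err := dbus.SystemBus()
	if err != nil {
		log.Fatalf("failed to connect to system bus: %v", err)
	}
	logind := conn.Object("org.freedesktop.login1", "/org/freedesktop/login1")

	// Ask logind to delay shutdown; it returns a file descriptor that holds
	// the lock for as long as it stays open.
	var fd dbus.UnixFD
	err = logind.Call("org.freedesktop.login1.Manager.Inhibit", 0,
		"shutdown", "kubelet", "Graceful pod termination in progress", "delay").Store(&fd)
	if err != nil {
		log.Fatalf("failed to take inhibitor lock: %v", err)
	}
	fmt.Printf("holding shutdown inhibitor lock, fd=%d\n", fd)
	// Closing the fd releases the lock and lets the shutdown proceed.
}
```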
If a user configures `ShutdownGracePeriod` to 300 seconds and `ShutdownGracePeriodCriticalPods` to 120 seconds, then it could be migrated to the following (note that the non-critical pods will get the difference of the total time and the critical pods' time):
Pod priority class value | Shutdown period |
---|---|
2000000000 | 120 seconds |
0 | 180 seconds |
Kubelet will be modified to work only with either the config proposed in this KEP or the config from the node graceful shutdown KEP. If both are specified, it will be treated as a configuration error. If neither is specified, the Graceful Node Shutdown feature is disabled.
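As an illustration of this mutual-exclusion rule, the check could look roughly like the sketch below; the struct and function names here are hypothetical, not the kubelet's actual validation code:

```go
// Sketch only: treating a mix of the two shutdown mechanisms as a config error.
package main

import (
	"errors"
	"fmt"
	"time"
)

type gracePeriodByPriority struct {
	Priority                   int32
	ShutdownGracePeriodSeconds int64
}

type shutdownSettings struct {
	// Existing duration-based graceful node shutdown settings.
	ShutdownGracePeriod             time.Duration
	ShutdownGracePeriodCriticalPods time.Duration
	// Proposed priority-based setting.
	ShutdownGracePeriodByPodPriority []gracePeriodByPriority
}

// validateShutdownSettings rejects configurations that set both mechanisms;
// if neither is set, graceful node shutdown simply stays disabled.
func validateShutdownSettings(s shutdownSettings) error {
	durationBased := s.ShutdownGracePeriod > 0 || s.ShutdownGracePeriodCriticalPods > 0
	priorityBased := len(s.ShutdownGracePeriodByPodPriority) > 0
	if durationBased && priorityBased {
		return errors.New("ShutdownGracePeriod(CriticalPods) and ShutdownGracePeriodByPodPriority cannot both be specified")
	}
	return nil
}

func main() {
	err := validateShutdownSettings(shutdownSettings{
		ShutdownGracePeriod:              300 * time.Second,
		ShutdownGracePeriodByPodPriority: []gracePeriodByPriority{{Priority: 0, ShutdownGracePeriodSeconds: 180}},
	})
	fmt.Println(err) // both mechanisms set: configuration error
}
```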
Same as the graceful shutdown KEP.
The configuration will be controlled by a new kubelet config setting, `kubeletConfig.ShutdownGracePeriodByPodPriority`:
```go
type ShutdownGracePeriodByPodPriority struct {
	// Priority is the pod priority value this entry applies to (and above).
	Priority int32
	// ShutdownGracePeriodSeconds is the shutdown grace period in seconds.
	ShutdownGracePeriodSeconds int64
}

type KubeletConfiguration struct {
	// Shutdown grace periods, bucketed by pod priority value.
	ShutdownGracePeriodByPodPriority []ShutdownGracePeriodByPodPriority
}
```
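For illustration, the migration example from the table above could be expressed with these fields roughly as follows. This is a sketch fragment using the example values (not defaults) and is not runnable on its own:

```go
// Sketch: the migrated configuration from the table above.
kubeletConfig.ShutdownGracePeriodByPodPriority = []ShutdownGracePeriodByPodPriority{
	{Priority: 2000000000, ShutdownGracePeriodSeconds: 120}, // formerly ShutdownGracePeriodCriticalPods
	{Priority: 0, ShutdownGracePeriodSeconds: 180},          // formerly ShutdownGracePeriod minus the critical period
}
```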
- Unit tests for kubelet of handling shutdown event in pod priority order.
- New E2E tests to validate node graceful shutdown in pod priority order.
- Implemented the feature for Linux (systemd) only
- Unit tests
- Unit tests will mock out system components (i.e. systemd, inhibitors) for alpha
- Addresses feedback from alpha testers
- Sufficient E2E and unit testing
- Addresses feedback from beta
- Sufficient number of users using the feature
- Confident that no further API / kubelet config option changes are needed
- Close on any remaining open issues & bugs
n/a
n/a
This section must be completed when targeting alpha to a release.
- How can this feature be enabled / disabled in a live cluster?
  - Feature gate (also fill in values in `kep.yaml`)
    - Feature gate name: GracefulNodeShutdownBasedOnPodPriority
    - Components depending on the feature gate: kubelet
  - Other
    - Describe the mechanism:
    - Will enabling / disabling the feature require downtime of the control plane?
      - no
    - Will enabling / disabling the feature require downtime or reprovisioning of a node?
      - yes (will require restart of kubelet)
- Does enabling the feature change any default behavior? Any change of default behavior may be surprising to users or break existing automations, so be extremely careful here.
- The main behavior change is that during a node shutdown, pods running on the node will be terminated gracefully. Note that the pod authors won't be able to control the graceful shutdown time of the node as it will be bounded by the config proposed in the KEP.
- Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)? Also set `disable-supported` to `true` or `false` in `kep.yaml`. Describe the consequences on existing workloads (e.g., if this is a runtime feature, can it break the existing applications?).
- Yes, the feature can be disabled by disabling the feature gate. The kubelet could be restarted with the feature gate disabled without having to evict the running pods.
- What happens if we reenable the feature if it was previously rolled back?
- Kubelet will attempt to perform graceful termination of pods during a node shutdown using pod priority configuration.
- Are there any tests for feature enablement/disablement? The e2e framework does not currently support enabling or disabling feature gates. However, unit tests in each component dealing with managing data, created with and without the feature, are necessary. At the very least, think about conversion tests if API types are being modified.
- N/A
This section must be completed when targeting beta graduation to a release.
- How can a rollout fail? Can it impact already running workloads? Try to be as paranoid as possible - e.g., what if some components will restart mid-rollout?
This feature should not impact rollouts.
- What specific metrics should inform a rollback?
N/A.
- Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested? Describe manual testing that was done and the outcomes. Longer term, we may want to require automated upgrade/rollback tests, but we are missing a bunch of machinery and tooling and can't do that now.
The feature is part of kubelet config so updating kubelet config should enable/disable the feature; upgrade/downgrade is N/A.
- Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.? Even if applying deprecation policies, they may still surprise some users.
No.
This section must be completed when targeting beta graduation to a release.
- How can an operator determine if the feature is in use by workloads? Ideally, this should be a metric. Operations against the Kubernetes API (e.g., checking if there are objects with field X set) may be a last resort. Avoid logs or events for this purpose.
Check if the feature gate and kubelet config settings are enabled on a node. Kubelet will be exposing metrics described below.
- What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
  - Metrics
    - Metric name: GracefulShutdownStartTime, GracefulShutdownEndTime
    - [Optional] Aggregation method:
    - Components exposing the metric: Kubelet
  - Other (treat as last resort)
    - Details:
Kubelet can write down the start and end time for the graceful shutdown to local storage and expose those metrics for scraping. In some cases, the metrics could be missed based on scraping interval, but they can be served back up after the kubelet comes online on a reboot. Operators could then look at these metrics to troubleshoot issues with the feature across their nodes in a cluster.
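As a rough illustration of recording those timestamps, the sketch below assumes a Prometheus-style client library; the metric and function names are placeholders, not the final kubelet metric names:

```go
// Sketch only: exposing graceful shutdown start/end timestamps as gauges.
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	gracefulShutdownStartTime = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "graceful_shutdown_start_time_seconds",
		Help: "Unix timestamp at which the kubelet last started a graceful node shutdown.",
	})
	gracefulShutdownEndTime = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "graceful_shutdown_end_time_seconds",
		Help: "Unix timestamp at which the kubelet last completed a graceful node shutdown.",
	})
)

func init() {
	prometheus.MustRegister(gracefulShutdownStartTime, gracefulShutdownEndTime)
}

// Called when shutdown handling begins and ends; if the values are also
// persisted locally, they can be re-exposed after the kubelet restarts.
func RecordShutdownStart() { gracefulShutdownStartTime.SetToCurrentTime() }
func RecordShutdownEnd()   { gracefulShutdownEndTime.SetToCurrentTime() }
```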
- What are the reasonable SLOs (Service Level Objectives) for the above SLIs?
At a high level, this usually will be in the form of "high percentile of SLI
per day <= X". It's impossible to provide comprehensive guidance, but at the very
high level (needs more precise definitions) those may be things like:
- per-day percentage of API calls finishing with 5XX errors <= 1%
- 99% percentile over day of absolute value from (job creation time minus expected job creation time) for cron job <= 10%
- 99.9% of /health requests per day finish with 200 code
The graceful shutdown feature should function on every node shutdown, excluding power failures or hardware failures on the nodes.
- Are there any missing metrics that would be useful to have to improve observability of this feature? Describe the metrics themselves and the reasons why they weren't added (e.g., cost, implementation difficulties, etc.).
N/A.
This section must be completed when targeting beta graduation to a release.
- Does this feature depend on any specific services running in the cluster? Think about both cluster-level services (e.g. metrics-server) as well as node-level agents (e.g. specific version of CRI). Focus on external or optional services that are needed. For example, if this feature depends on a cloud provider API, or upon an external software-defined storage or network control plane.
For each of these, fill in the following—thinking about running existing user workloads and creating new ones, as well as about cluster-level services (e.g. DNS):
  - [Dependency name]
    - Usage description:
      - Impact of its outage on the feature:
      - Impact of its degraded performance or high-error rates on the feature:

No, this feature doesn't depend on any specific services running in the cluster. It only depends on systemd running on the node itself.
For alpha, this section is encouraged: reviewers should consider these questions and attempt to answer them.
For beta, this section is required: reviewers must answer these questions.
For GA, this section is required: approvers should be able to confirm the previous answers based on experience in the field.
- Will enabling / using this feature result in any new API calls?
Describe them, providing:
- API call type (e.g. PATCH pods)
- estimated throughput
- originating component(s) (e.g. Kubelet, Feature-X-controller) focusing mostly on:
- components listing and/or watching resources they didn't before
- API calls that may be triggered by changes of some Kubernetes resources (e.g. update of object X triggers new updates of object Y)
- periodic API calls to reconcile state (e.g. periodic fetching state, heartbeats, leader election, etc.)
No.
- Will enabling / using this feature result in introducing new API types?
Describe them, providing:
- API type
- Supported number of objects per cluster
- Supported number of objects per namespace (for namespace-scoped objects)
No.
- Will enabling / using this feature result in any new calls to the cloud provider?
No.
- Will enabling / using this feature result in increasing size or count of the existing API objects?
Describe them, providing:
- API type(s):
- Estimated increase in size: (e.g., new annotation of size 32B)
- Estimated amount of new objects: (e.g., new Object X for every existing Pod)
No.
- Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? Think about adding additional work or introducing new steps in between (e.g. need to do X to start a container), etc. Please describe the details.
No.
- Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? Things to keep in mind include: additional in-memory state, additional non-trivial computations, excessive access to disks (including increased log volume), significant amount of data sent and/or received over network, etc. Think through this both in small and large cases, again with respect to the supported limits.
No.
The Troubleshooting section currently serves the Playbook role. We may consider splitting it into a dedicated Playbook document (potentially with some monitoring details). For now, we leave it here.
This section must be completed when targeting beta graduation to a release.
- How does this feature react if the API server and/or etcd is unavailable?
The feature does not depend on the API server / etcd.
- What are other known failure modes? For each of them, fill in the following information by copying the below template:
- Kubelet does not detect the shutdown, e.g. due to the systemd inhibitor not registering.
- Detection: Kubelet logs
- Mitigations: Workloads will not be affected, graceful node shutdown will not be enabled
- Diagnostics: At default (v2) logging verbosity, kubelet will log if inhibitor was registered
- Testing: Existing OSS SIG-Node E2E tests check for graceful node shutdown including priority based shutdown
- Priority-based graceful node shutdown setting is not respected
- Detection: Kubelet logs contain time allocated to each pod shutdown
- Mitigations: Change priority based graceful node shutdown config or revert to existing beta graceful node shutdown config
- Diagnostics: Kubelet logging at v2 level
- Testing: Existing OSS SIG-Node E2E tests check that pod shutdown time is respected depending on the config
- What steps should be taken if SLOs are not being met to determine the problem?
N/A.