{"id":3823,"date":"2022-04-22T14:42:44","date_gmt":"2022-04-22T20:42:44","guid":{"rendered":"http:\/\/benincosa.com\/?p=3823"},"modified":"2022-04-22T16:32:06","modified_gmt":"2022-04-22T22:32:06","slug":"kubernetes-pod-anti-affinity","status":"publish","type":"post","link":"https:\/\/benincosa.com\/?p=3823","title":{"rendered":"Kubernetes Pod Anti-Affinity"},"content":{"rendered":"\n<p>We have a couple of pods with fairly intense memory usage.  It&#8217;s possible there is a memory leak, since a spike in usage OOM&#8217;d a node.  One of our temporary solutions while we investigate is to add a pod anti-affinity rule to the deployment.  This tells the scheduler that we prefer pod instances not to run on the same node, so they get spread out. <\/p>\n\n\n\n<p>Since we may have more pods than nodes, we don&#8217;t want to make this a blocking requirement. Therefore we use the <code>preferredDuringSchedulingIgnoredDuringExecution<\/code> rule: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>affinity:\n  # schedule pods on different nodes \n  podAntiAffinity:\n    preferredDuringSchedulingIgnoredDuringExecution:\n    - weight: 100\n      podAffinityTerm:\n        labelSelector:\n          matchExpressions:\n          - key: app.kubernetes.io\/name\n            operator: In\n            values:\n            - pod-name\n        topologyKey: kubernetes.io\/hostname<\/code><\/pre>\n\n\n\n<p>Here, the label on the pods is <code>app.kubernetes.io\/name: pod-name<\/code>.  With this rule in place, the scheduler avoids putting the pods in the same place whenever it can. <\/p>\n\n\n\n<p>You can see this with: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>kubectl get pods -o wide<\/code><\/pre>\n\n\n\n<p>The output will show that pods with the same name are scheduled on different nodes.  
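<\/p>\n\n\n\n<p>For context, here is a minimal sketch of where the affinity block sits in a full Deployment, under <code>spec.template.spec<\/code>.  The deployment name and container image below are placeholders, not from our actual manifest: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: pod-name                  # placeholder name\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app.kubernetes.io\/name: pod-name\n  template:\n    metadata:\n      labels:\n        # the label the anti-affinity rule matches on\n        app.kubernetes.io\/name: pod-name\n    spec:\n      affinity:\n        podAntiAffinity:\n          preferredDuringSchedulingIgnoredDuringExecution:\n          - weight: 100\n            podAffinityTerm:\n              labelSelector:\n                matchExpressions:\n                - key: app.kubernetes.io\/name\n                  operator: In\n                  values:\n                  - pod-name\n              topologyKey: kubernetes.io\/hostname\n      containers:\n      - name: pod-name\n        image: nginx              # placeholder image<\/code><\/pre>\n\n\n\n<p>Keep in mind that <code>preferred<\/code> is only a scheduling hint: if there are more replicas than schedulable nodes, some pods will still land on the same node.  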
<\/p>\n\n\n\n<p>For more information, see: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#inter-pod-affinity-and-anti-affinity\">https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#inter-pod-affinity-and-anti-affinity<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>We have a couple of pods with fairly intense memory usage. It&#8217;s possible there is a memory leak, since a spike in usage OOM&#8217;d a node. One of our temporary solutions while we investigate is to add a pod anti-affinity rule to the deployment. This tells the scheduler that we&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[797],"tags":[],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/benincosa.com\/index.php?rest_route=\/wp\/v2\/posts\/3823"}],"collection":[{"href":"https:\/\/benincosa.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/benincosa.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/benincosa.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/benincosa.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3823"}],"version-history":[{"count":3,"href":"https:\/\/benincosa.com\/index.php?rest_route=\/wp\/v2\/posts\/3823\/revisions"}],"predecessor-version":[{"id":3826,"href":"https:\/\/benincosa.com\/index.php?rest_route=\/wp\/v2\/posts\/3823\/revisions\/3826"}],"wp:attachment":[{"href":"https:\/\/benincosa.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3823"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/benincosa.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3823"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/benincosa.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=382
3"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}