Filed under: DevOps, Kubernetes — Tags: --service-node-port-range, kube-apiserver, kubernetes, service-node-port-range — Thomas Sundberg — 2019-02-20
The default node port range for Kubernetes is 30000-32767. As it is a default, a reasonable assumption would be that it can be changed. So far so good. However, it turns out that the documentation on how to change the default is hard to find.
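To see the limit in action, you can ask for a node port outside the default range; the API server will refuse it. A minimal sketch, with a made-up service name:

# With the default range still in effect, request node port 20080.
$ kubectl create service nodeport demo --tcp=80:80 --node-port=20080
# The API server rejects this, telling you that the port is outside
# the valid range 30000-32767.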
kube-apiserver
The port range is controlled by the kube-apiserver, which is a pod running inside your Kubernetes cluster. It is created from a pod definition located in /etc/kubernetes/manifests/kube-apiserver.yaml.
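If you want to see which service-related flags the API server currently runs with, you can grep that manifest on the control plane node. A minimal sketch, assuming the kubeadm-style path used in this post:

# On the control plane node, list the service-related flags.
$ sudo grep -- '--service' /etc/kubernetes/manifests/kube-apiserver.yaml
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12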
According to bits and pieces I have found when searching for details on setting the port range, the directory /etc/kubernetes/manifests is monitored by kubelet, and any pods defined in it will be created as static pods in your cluster.
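On a kubeadm-provisioned node this can be verified; the kubelet configuration has a staticPodPath setting that points at the directory. A small sketch, assuming the default kubeadm location of the kubelet config file:

# Where does kubelet look for static pod manifests?
$ sudo grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests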
--service-node-port-range
Update the file /etc/kubernetes/manifests/kube-apiserver.yaml and add the line --service-node-port-range=20000-22767. I added it just below --service-cluster-ip-range and I got the result:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=192.168.252.65
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --service-node-port-range=20000-22767
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.13.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.252.65
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
As the directory is monitored by kubelet for any changes, there is no need to do anything more. The kube-apiserver will be recreated with the new settings.
I checked the age of kube-apiserver and got:

$ kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-apiserver-xxxxxxxxxxxx   1/1     Running   0          1m
The rest of my pods were older.
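To be extra sure that the flag made it into the recreated pod, the live pod spec can be inspected. A sketch that selects the pod by the component label from the manifest above:

# Confirm the recreated pod runs with the new flag.
$ kubectl -n kube-system describe pod -l component=kube-apiserver | grep node-port-range
# Should show --service-node-port-range=20000-22767.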
I took a backup of the file /etc/kubernetes/manifests/kube-apiserver.yaml. The mistake I made was that I left it next to the original file. Thus, the kube-apiserver was never recreated and the config change was never applied. This mistake took me on a long detour.

Make sure that you move any backup from the directory /etc/kubernetes/manifests/ so the pod kube-apiserver is recreated.
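For example, put the copy somewhere outside the watched directory before editing; the backup location below is just an example:

# Keep the backup outside /etc/kubernetes/manifests/ so kubelet does
# not pick it up as another static pod definition.
$ sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /root/kube-apiserver.yaml.bak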
Updating /etc/kubernetes/manifests/kube-apiserver.yaml with --service-node-port-range=20000-22767 changed the node port range from its default, 30000-32767, to the new range 20000-22767.
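A quick way to confirm that the new range is active is to create a throwaway NodePort service inside it; the service name below is made up for the test:

# Request a node port inside the new range 20000-22767 but outside
# the old default 30000-32767.
$ kubectl create service nodeport port-range-check --tcp=80:80 --node-port=20080
# The service is accepted; with the default range this request would
# have been rejected.
$ kubectl delete service port-range-check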
I would like to thank Malin Ekholm and Mika Kytöläinen for feedback.