The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
If you want a simplified getting started experience and GUI for managing clusters, please consider trying Google Kubernetes Engine for hosted cluster installation and management.
For an easy way to experiment with the Kubernetes development environment, click the button below to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo.
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
Before starting, set up your workstation:

1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the Google Cloud SDK.
2. Make sure `gcloud` is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
3. Make sure you have credentials for `gcloud` by running `gcloud auth login`.
4. Make sure application default credentials are configured by running `gcloud auth application-default login`.

You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
```shell
curl -sS https://get.k8s.io | bash
```

or

```shell
wget -q -O - https://get.k8s.io | bash
```
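If you prefer not to pipe a remote script straight into `bash`, one cautious variant (our suggestion, not part of the official instructions) is to download the installer first so you can review it before running it:

```shell
# Fetch the installer, inspect it, then run it explicitly
curl -sS https://get.k8s.io -o get-k8s.sh
less get-k8s.sh
bash get-k8s.sh
```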
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
By default, some containers will already be running on your cluster. Containers like `fluentd` provide logging, while `heapster` provides monitoring services.
The script run by the commands above creates a cluster with the name/prefix “kubernetes”. It defines one specific cluster config, so you can’t run it more than once.
Alternatively, you can download and install the latest Kubernetes release from this page, then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:

```shell
cd kubernetes
cluster/kube-up.sh
```
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
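As one illustration, the GCE scripts pick up overrides from the environment, so a smaller, custom-named cluster might be started like this. The variable names below are assumptions that have varied across releases; confirm them in `config-default.sh` before relying on them:

```shell
# Hypothetical overrides -- check cluster/gce/config-default.sh
# for the exact variable names your release reads
NUM_NODES=2 KUBE_GCE_INSTANCE_PREFIX=my-cluster cluster/kube-up.sh
```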
If you run into trouble, please see the section on troubleshooting, post to the kubernetes-users group, or come ask questions on Slack.
The next few steps will show you:

1. How to set up the command line client on your workstation to manage the cluster
2. Examples of how to use the cluster
3. How to delete the cluster
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
The `kubectl` tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more. You will use it to look at your new cluster and bring up example apps.

You can use `gcloud` to install the `kubectl` command-line tool on your workstation:

```shell
gcloud components install kubectl
```
Note: The `kubectl` version bundled with `gcloud` may be older than the one downloaded by the get.k8s.io install script. See the Installing kubectl document to see how you can set up the latest `kubectl` on your workstation.
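A quick way to see which build you ended up with, and whether it matches the cluster, is to compare client and server versions:

```shell
# Prints the client version (your workstation binary)
# and the server version (the cluster's API server)
kubectl version
```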
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:

```shell
kubectl get --all-namespaces services
```
should show a set of services that look something like this:
```
NAMESPACE     NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)         AGE
default       kubernetes   10.0.0.1     <none>        443/TCP         1d
kube-system   kube-dns     10.0.0.2     <none>        53/TCP,53/UDP   1d
kube-system   kube-ui      10.0.0.3     <none>        80/TCP          1d
...
```
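If you want more detail on any one of these, `kubectl describe` will show its endpoints, labels, and recent events; for example:

```shell
# Inspect the DNS service listed above
kubectl describe service kube-dns --namespace=kube-system
```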
Similarly, you can take a look at the set of pods that were created during cluster startup:

```shell
kubectl get --all-namespaces pods
```
You’ll see a list of pods that looks something like this (the name specifics will be different):
```
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   fluentd-cloud-logging-kubernetes-minion-63uo   1/1     Running   0          14m
kube-system   fluentd-cloud-logging-kubernetes-minion-c1n9   1/1     Running   0          14m
kube-system   fluentd-cloud-logging-kubernetes-minion-c4og   1/1     Running   0          14m
kube-system   fluentd-cloud-logging-kubernetes-minion-ngua   1/1     Running   0          14m
kube-system   kube-dns-v5-7ztia                              3/3     Running   0          15m
kube-system   kube-ui-v1-curt1                               1/1     Running   0          15m
kube-system   monitoring-heapster-v5-ex4u3                   1/1     Running   1          15m
kube-system   monitoring-influx-grafana-v1-piled             2/2     Running   0          15m
```
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
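Rather than re-running the command by hand, you can ask `kubectl` to stream status changes until everything settles:

```shell
# Watch pod status updates as they happen (Ctrl-C to stop)
kubectl get --all-namespaces pods --watch
```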
Then, see a simple nginx example to try out your new cluster.
For more complete applications, please look in the examples directory. The guestbook example is a good “getting started” walkthrough.
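If you just want a quick end-to-end check before working through those examples, a minimal smoke test might look like this. The names are placeholders, and note that `kubectl run` behavior has varied across releases (older versions created a replication controller, newer ones a deployment):

```shell
# Start two nginx pods; 'kubectl run' labels them run=my-nginx automatically
kubectl run my-nginx --image=nginx --replicas=2 --port=80
# Watch them come up
kubectl get pods -l run=my-nginx
```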
To remove/delete/teardown the cluster, use the `kube-down.sh` script.

```shell
cd kubernetes
cluster/kube-down.sh
```
Likewise, the `kube-up.sh` script in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to set up the Kubernetes cluster is now on your workstation.
The script above relies on Google Storage to stage the Kubernetes release. It will then start (by default) a single master VM along with 4 worker VMs. You can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`. You can view a transcript of a successful cluster creation here.
You need to have the Google Cloud Storage API and the Google Cloud Storage JSON API enabled. Both are activated by default for new projects; otherwise, you can enable them in the Google Cloud Console. See the Google Cloud Storage JSON API Overview for more details.
Also ensure that, as listed in the Prerequisites section, you've enabled the Compute Engine Instance Group Manager API, and can start up a GCE VM from the command line as in the GCE Quickstart instructions.
If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
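For instance, assuming the default `kubernetes` name prefix (the exact instance name and zone below are assumptions; substitute the values from your own project):

```shell
# SSH to the master via gcloud
gcloud compute ssh kubernetes-master --zone=us-central1-b
# ...then, once on the master:
sudo less /var/log/startupscript.log
```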
Once you fix the issue, you should run `kube-down.sh` to clean up after the partial cluster creation, before running `kube-up.sh` to try again.
If you're having trouble SSHing into your instances, ensure the GCE firewall isn't blocking port 22 to your VMs. By default, this should work, but if you have edited firewall rules or created a new non-default network, you'll need to expose it:

```shell
gcloud compute firewall-rules create default-ssh --network=<network-name> \
  --description "SSH allowed from anywhere" --allow tcp:22
```
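You can then confirm the rule exists before retrying the connection:

```shell
# List firewall rules to verify that default-ssh is now present
gcloud compute firewall-rules list
```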
Additionally, your GCE SSH key must either have no passphrase, or you need to be using `ssh-agent`.
The instances must be able to connect to each other using their private IPs. The script uses the `default` network, which should have a firewall rule called `default-allow-internal` that allows traffic on any port on the private IPs. If this rule is missing from the default network, or if you change the network being used in `cluster/config-default.sh`, create a new rule with the following field values:

- Source Ranges: `10.0.0.0/8`
- Allowed Protocols and Ports: `tcp:1-65535;udp:1-65535;icmp`
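A sketch of doing the same from the command line follows; the rule and network names are placeholders, and note that `gcloud` separates protocols with commas rather than semicolons:

```shell
# Recreate the internal-traffic rule on a custom network
gcloud compute firewall-rules create allow-internal \
  --network=<network-name> \
  --source-ranges=10.0.0.0/8 \
  --allow=tcp:1-65535,udp:1-65535,icmp
```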
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
--------------|--------------|--------|------------|------|----------|--------------
GCE | Saltstack | Debian | GCE | docs | | Project
For support level information on all solutions, see the Table of solutions chart.
Please see the Kubernetes docs for more details on administering and using a Kubernetes cluster.