INSTALLATION
Installing with SKAS Helm Chart
The most straightforward and recommended method for installing the SKAS server is by using the provided Helm chart.
Before you begin, make sure you meet the following prerequisites:
- Certificate Manager: Ensure that the Certificate Manager is deployed in your target Kubernetes cluster, and a `ClusterIssuer` is defined for certificate management.
- Ingress Controller: An NGINX ingress controller should be deployed in your target Kubernetes cluster.
- Kubectl Configuration: You should have a local client Kubernetes configuration with full administrative rights on the target cluster.
- Helm: Helm must be installed locally on your system.
Follow these steps to install SKAS using Helm:
- Add the SKAS Helm repository by running the following command:
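For example (the repository URL below is an assumption; refer to the SKAS project for the canonical one):

```shell
helm repo add skas https://skasproject.github.io/skas-charts
helm repo update
```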
- Create a dedicated namespace for SKAS:
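The `skas-system` namespace is used throughout this guide:

```shell
kubectl create namespace skas-system
```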
- Deploy the SKAS Helm chart using the following command:
```shell
helm -n skas-system install skas skas/skas \
  --set clusterIssuer=your-cluster-issuer \
  --set skAuth.exposure.external.ingress.host=skas.ingress.mycluster.internal \
  --set skAuth.kubeconfig.context.name=skas@mycluster.internal \
  --set skAuth.kubeconfig.cluster.apiServerUrl=https://kubernetes.ingress.mycluster.internal
```
Replace the values with your specific configuration:

- `clusterIssuer`: The ClusterIssuer from your Certificate Manager for certificate management.
- `skAuth.exposure.external.ingress.host`: The hostname used for accessing the SKAS service from outside the cluster. Make sure to define this hostname in your DNS.
- `skAuth.kubeconfig.context.name`: A unique context name for this cluster in your local configuration.
- `skAuth.kubeconfig.cluster.apiServerUrl`: The API server URL from outside the cluster. You can find this information in an existing Kubernetes config file under `clusters[X].cluster.server`.
Alternatively, you can create a local YAML values file as follows:
values.init.yaml
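The `--set` flags above map mechanically to the following YAML structure (adjust the values to your context):

```yaml
clusterIssuer: your-cluster-issuer
skAuth:
  exposure:
    external:
      ingress:
        host: skas.ingress.mycluster.internal
  kubeconfig:
    context:
      name: skas@mycluster.internal
    cluster:
      apiServerUrl: https://kubernetes.ingress.mycluster.internal
```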
And then install SKAS using this values file:
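Using the `values.init.yaml` file created above:

```shell
helm -n skas-system install skas skas/skas -f values.init.yaml
```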
After a successful installation, verify the SKAS server pod is running:
```
$ kubectl -n skas-system get pods
> NAME                    READY   STATUS    RESTARTS   AGE
> skas-746c54dc75-v8v2f   3/3     Running   0          25s
```
Use another ingress controller instead of nginx
If you are using an ingress controller other than NGINX, you can specify the ingress class by adding the `--set ingressClass=xxxx` flag when launching the Helm chart. In this case, the Helm chart won't create an ingress resource, and you will need to set up your own ingress. (Here is the nginx definition, as a starting point.)
Please note that the ingress is configured with `ssl-passthrough`. The underlying service will handle SSL.
No Certificate Manager
If you are not using a Certificate Manager, you can still install SKAS. Follow these steps:
- Launch the helm chart without the `ClusterIssuer` definition. The secret hosting the certificate for the services will then be missing, so the `skas` pod will fail.
- Prepare a PEM-encoded self-signed certificate and key files. The certificate should be valid for the following hostnames:
    - skas-auth
    - skas-auth.skas-system.svc
    - localhost
    - skas.ingress.mycluster.internal (adjust this to your actual hostname)
- Base64-encode the CA certificate (in PEM format) and its key.
- Create a secret in the skas-system namespace:
- The skas pod should start successfully.
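The secret from the steps above might look like the following sketch. The secret name `skas-auth-cert` matches the one referenced later in this guide; the data keys are assumptions, so check them against what the chart actually mounts:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: skas-auth-cert
  namespace: skas-system
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded key>
  ca.crt: <base64-encoded CA certificate>
```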
API Server configuration
The API server's Authentication Webhook must be configured to communicate with our authentication module.
Manual configuration
Depending on your specific installation, the directory mentioned below may vary. For reference, the clusters used for testing and documentation purposes were built using kubespray.
Additionally, this procedure assumes that the API Server is managed by the Kubelet as a static Pod. If your API Server is managed by another system, such as systemd, you should make the necessary adjustments accordingly.
Please note that the following operations must be executed on all nodes hosting an instance of the Kubernetes API server, typically encompassing all nodes within the control plane.
These operations require 'root' access on these nodes.
To initiate the process, start by creating a dedicated folder for 'skas':
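The folder path is the one referenced throughout the rest of this section:

```shell
mkdir -p /etc/kubernetes/skas
```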
Next, create the Authentication Webhook configuration file within this directory. You can conveniently copy and paste the following configuration:
/etc/kubernetes/skas/hookconfig.yaml
```yaml
apiVersion: v1
kind: Config
# clusters refers to the remote service.
clusters:
  - name: sk-auth
    cluster:
      certificate-authority: /etc/kubernetes/skas/skas_auth_ca.crt # CA for verifying the remote service.
      server: https://sk-auth.skas-system.svc:7014/v1/tokenReview  # URL of remote service to query. Must use 'https'.
# users refers to the API server's webhook configuration.
users:
  - name: skasapisrv
# kubeconfig files require a context. Provide one for the API server.
current-context: authwebhook
contexts:
  - context:
      cluster: sk-auth
      user: skasapisrv
    name: authwebhook
```
As indicated within this file, there is a reference to the certificate authority of the authentication webhook service. Consequently, you should retrieve it and place it in this location:
```shell
kubectl -n skas-system get secret skas-auth-cert \
  -o=jsonpath='{.data.ca\.crt}' | base64 -d >/etc/kubernetes/skas/skas_auth_ca.crt
```
Please ensure that the kubectl command is installed on this node with administrator configuration.
Inspect the folder's contents:
```
$ ls -l /etc/kubernetes/skas
> total 8
> -rw-r--r--. 1 root root  620 May 11 12:36 hookconfig.yaml
> -rw-r--r--. 1 root root 1220 May 11 12:58 skas_auth_ca.crt
```
Now, you need to modify the API Server manifest file located at `/etc/kubernetes/manifests/kube-apiserver.yaml` to include the `hookconfig.yaml` file.
The initial step involves adding two flags to the kube-apiserver command line:

- `--authentication-token-webhook-cache-ttl`: This determines the duration for caching authentication decisions.
- `--authentication-token-webhook-config-file`: This refers to the path of the configuration file we've just set up.
This is how it should appear:
```yaml
...
spec:
  containers:
    - command:
        - kube-apiserver
        - --authentication-token-webhook-cache-ttl=30s
        - --authentication-token-webhook-config-file=/etc/kubernetes/skas/hookconfig.yaml
        - --advertise-address=192.168.33.16
        - --allow-privileged=true
        - --anonymous-auth=True
...
```
The second step involves mapping the node folder `/etc/kubernetes/skas` inside the API server pod, using the same path. This mapping is necessary because these files are accessed within the context of the API Server container.
To achieve this, you should add a new `volumeMounts` entry as follows:
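A sketch of such an entry, under the `kube-apiserver` container (the volume name `skas-config` is an assumption; any name works as long as it matches the corresponding `volumes` entry):

```yaml
      volumeMounts:
        - mountPath: /etc/kubernetes/skas
          name: skas-config
          readOnly: true
```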
Additionally, you need to include a corresponding new volumes entry:
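A sketch, at the pod `spec` level (the volume name `skas-config` is an assumption; it must match the name used in the `volumeMounts` entry):

```yaml
  volumes:
    - hostPath:
        path: /etc/kubernetes/skas
        type: Directory
      name: skas-config
```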
Furthermore, you should define another configuration parameter. Specifically, you must set the `dnsPolicy` to `ClusterFirstWithHostNet`. Please verify that this key doesn't already exist and add or modify it accordingly:
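At the pod `spec` level, this is a single line:

```yaml
spec:
  dnsPolicy: ClusterFirstWithHostNet
```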
With these adjustments, you have completed the configuration for the API Server. Saving the edited file will trigger a restart of the API Server to take the changes into account.
For additional information, refer to the Kubernetes documentation on this topic, available here
Please remember to carry out this procedure on all nodes that host an instance of the API Server.
Using an Ansible role
If Ansible is one of your preferred tools, you can automate these laborious tasks by utilizing an Ansible role.
You can obtain such a role here.
Similar to manual installation, you might need to customize it to suit your local context.
To utilize this role, we assume that you have an Ansible configuration in place, along with an inventory that defines the target cluster.
Additionally, this role utilizes the `kubernetes.core.k8s_info` module. Please review the requirements for this module.
Then, follow these steps:
- Download and extract the role archive provided above into a folder that is part of the role path.
- Create a playbook file, for example:
skas.yaml
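A minimal sketch of such a playbook, assuming the role was extracted under the name `skas` and targeting the kubespray `kube_control_plane` host group (both names are assumptions; the `skas_state` variable is the one used later for removal):

```yaml
- hosts: kube_control_plane
  vars:
    skas_state: present
  roles:
    - skas
```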
- Launch this playbook:
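From the directory containing the playbook:

```shell
ansible-playbook skas.yaml
```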
The playbook will execute all the steps outlined in the manual installation process detailed above. Consequently, this will trigger a restart of the API server.
Troubleshooting
If there is a minor typo or a configuration inconsistency, it could potentially prevent the API Server from restarting. In such cases, it's advisable to examine the logs of the Kubelet. (Remember that, as a static pod, the API Server is managed by the Kubelet). These logs can provide insights into what might be causing the issue.
If you've made any modifications to the `hookconfig.yaml` file or updated the CA file, it's necessary to restart the API Server to apply the new configuration. However, since the API Server is a 'static pod' managed by the Kubelet, it can't be restarted like a standard pod.
The simplest method to effectively trigger a reload of the API Server is to make a modification to the `/etc/kubernetes/manifests/kube-apiserver.yaml` file. It's essential that this modification is a substantive change, as simply using the touch command may not suffice. A common approach is to make a slight modification to the `--authentication-token-webhook-cache-ttl` flag value. This will prompt the API Server to reload its configuration and apply the changes.
Installation of SKAS CLI
SKAS offers a command-line interface (CLI) as an extension of kubectl.
The installation process is straightforward:
- Download the executable that corresponds to your operating system and architecture from this location.
- Rename the downloaded executable to `kubectl-sk` to adhere to the naming convention of kubectl extensions.
- Make the file executable.
- Move the `kubectl-sk` executable to a directory that is included in your system's PATH environment variable.
For instance, on a Mac with an Intel processor, you can use the following commands:
```shell
cd /tmp
curl -L https://github.com/skasproject/skas/releases/download/0.2.1/kubectl-sk_0.2.1_darwin_amd64 -o ./kubectl-sk
chmod 755 kubectl-sk
sudo mv kubectl-sk /usr/local/bin
```
Now, you can verify whether the extension is working as intended.
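Assuming kubectl picks up the plugin from your PATH:

```shell
kubectl sk --help
```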
It should display:
```
A kubectl plugin for Kubernetes authentication

Usage:
  kubectl-sk [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  hash        Provided password hash, for use in config file
  help        Help about any command
  init        Add a new context in Kubeconfig file for skas access
  login       Logout and get a new token
  logout      Clear local token
  password    Change current password
  user        Skas user management
  version     display skas client version
  whoami      Display current logged user, if any

Flags:
  -h, --help              help for kubectl-sk
      --kubeconfig string kubeconfig file path. Override default configuration.
      --logLevel string   Log level (default "INFO")
      --logMode string    Log mode: 'dev' or 'json' (default "dev")

Use "kubectl-sk [command] --help" for more information about a command.
```
SKAS is now successfully installed. You can proceed with the User guide for further instructions.
Depending on your cluster architecture, you may need to adjust your configuration for a safer and more resilient installation. Please refer to the Configuration: Kubernetes Integration section for more information.
SKAS Removal
When it comes to uninstalling SKAS, the initial step involves reconfiguring the API server. The approach depends on how you initially configured it:
If you configured it manually, remove the two entries, `--authentication-token-webhook-cache-ttl` and `--authentication-token-webhook-config-file`, from the API server manifest file located at `/etc/kubernetes/manifests/kube-apiserver.yaml`.
If you used the Ansible role for configuration, simply modify the playbook by setting `skas_state` to `absent`:
skas.yaml
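A sketch, mirroring the installation playbook (the `kube_control_plane` host group and the role name `skas` are assumptions to be adapted to your inventory):

```yaml
- hosts: kube_control_plane
  vars:
    skas_state: absent
  roles:
    - skas
```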
After making these changes, execute the playbook:
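As during installation:

```shell
ansible-playbook skas.yaml
```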
Once you have successfully reconfigured the Kubernetes API server, you can proceed to uninstall the Helm chart.
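Using the release name and namespace from the installation above:

```shell
helm -n skas-system uninstall skas
```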
And then delete the namespace:
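```shell
kubectl delete namespace skas-system
```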