Configuration

Principle

Installation is performed using a Helm chart, and configuration is done by providing a 'values' file that overrides the defaults defined in the chart's values.yaml.

This is what was done during the initial configuration, using a file such as this:

values.init.yaml
clusterIssuer: your-cluster-issuer

skAuth:
  exposure:
    external:
      ingress:
        host: skas.ingress.mycluster.internal
  kubeconfig:
    context:
      name: skas@mycluster.internal
    cluster:
      apiServerUrl: https://kubernetes.ingress.mycluster.internal

SKAS is a highly flexible product, and consequently, there are numerous variables in the default values.yaml of the Helm chart. Fortunately, the default values are suitable for most use cases.

In this chapter, we won't describe all the variables (you can refer to the comments in the file for details), but we will explain some typical configuration variations.
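
To browse the full list of variables and their accompanying comments, you can display the chart's default values.yaml (assuming the skas Helm repository has already been added, as in the installation chapter):

helm show values skas/skas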

To apply a modified file, you should use the helm upgrade command:

helm -n skas-system upgrade skas skas/skas --values ./values.init.yaml

Pod restart

To ensure that the new configuration is taken into account, you need to restart the SKAS pod(s). The most straightforward way to do this is to perform a rollout restart of the skas deployment:

$ kubectl -n skas-system rollout restart deployment skas
> deployment.apps/skas restarted
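
You can then wait for the rollout to complete before testing the new configuration:

$ kubectl -n skas-system rollout status deployment skas
> deployment "skas" successfully rolled out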

There are also solutions to perform this restart automatically. See Reloader.
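
For instance, assuming Stakater's Reloader is deployed in the cluster, annotating the skas deployment with Reloader's standard 'auto' annotation will restart it whenever the ConfigMaps or Secrets it mounts are modified:

$ kubectl -n skas-system annotate deployment skas reloader.stakater.com/auto="true"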

SKAS behavior

Here is a values file that redefines the most common variables related to SKAS behavior:

values.behavior.yaml
# Default value. May be overridden by component
log: 
  mode: json # 'json' or 'dev'
  level: info

skAuth:
  # Define password requirement
  passwordStrength:
    forbidCommon: true    # Test against lists of common passwords
    minimumScore: 3       # From 0 (Accept anything) to 4

  tokenConfig:
    # After this period without token validation, the session expires
    inactivityTimeout: "30m"
    # After this period, the session expires, in all cases.
    sessionMaxTTL: "12h"
    # This is intended for the client CLI, for token caching
    clientTokenTTL: "30s"

skCrd:
  initialUser:
    login: admin
    # passwordHash: $2a$10$ijE4zPB2nf49KhVzVJRJE.GPYBiSgnsAHM04YkBluNaB3Vy8Cwv.G  # admin
    commonNames: ["SKAS administrator"]
    groups:
      - skas-admin
  • The log section allows you to adjust the log level and set the log mode. By default, log.mode is set to json, which is intended for injection into an external log management system. To have a more human-readable log format, you can set log.mode to dev.
  • skAuth.passwordStrength lets you modify the criteria for a valid password.
  • The skAuth.tokenConfig section configures the token lifecycle.
  • skCrd.initialUser defines the default admin user. Note that passwordHash has been commented out; otherwise, the password would be reset on each application of these values. A sketch of one way to generate such a hash follows this list.
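
The commented-out passwordHash is a bcrypt hash (the one above corresponds to the password 'admin'). As a purely illustrative sketch, assuming Apache's htpasswd utility is available, a bcrypt hash can be generated as follows. Note that htpasswd emits the $2y$ bcrypt prefix while the example above uses $2a$; whether your SKAS version accepts both variants is an assumption to verify:

htpasswd -bnBC 10 "" 'my-new-admin-password' | tr -d ':\n'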

The meaning of the skAuth and skCrd subsections is described in the Architecture chapter.

To apply a modified configuration, enter the following command:

helm -n skas-system upgrade skas skas/skas --values ./values.init.yaml \
--values ./values.behavior.yaml

We still need to provide values.init.yaml as well; otherwise, the corresponding values would fall back to their default or empty values.
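
If in doubt, you can ask Helm to display the user-supplied values currently applied to the release:

helm -n skas-system get values skas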

Don't forget to restart the pod(s). See above

Kubernetes integration

Here is a values file that redefines the most common variables related to SKAS integration with Kubernetes:

values.k8s.yaml
replicaCount: 1

# -- Annotations to be added to the pod
podAnnotations: {}

# -- Annotations to be added to all other resources
commonAnnotations: {}

image:
  pullSecrets: []
  repository: ghcr.io/skasproject/skas
  # -- Overrides the image tag whose default is the chart appVersion.
  tag:
  pullPolicy: IfNotPresent

# Node placement of SKAS pod(s) 
nodeSelector: {}
tolerations: []
affinity: {}
  • replicaCount allows you to define the number of pod replicas for the SKAS deployment. Note that we are in an active-active configuration with no need for a leader election mechanism.
  • podAnnotations and commonAnnotations allow you to annotate pods and other SKAS resources if required.
  • The image subsection allows you to define an alternate image version or location. This is useful in an air-gapped deployment where the SKAS image is stored in a private registry (see the sketch after this list).
  • nodeSelector, tolerations, and affinity are standard Kubernetes properties related to the node placement of SKAS pod(s). See below.
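
As a sketch of the air-gapped case, assuming the SKAS image has been copied to a hypothetical private registry (registry.internal.example.com) and that a pull secret named my-registry-secret exists in the skas-system namespace:

values.airgap.yaml
image:
  # Assumption: entries are referenced by name; check the chart's default
  # values.yaml for the exact expected format of pullSecrets.
  pullSecrets:
    - name: my-registry-secret
  repository: registry.internal.example.com/skasproject/skas
  # Leave empty to keep the chart appVersion, or pin the tag you mirrored.
  tag:
  pullPolicy: IfNotPresent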

To apply a modified configuration, enter the following command:

helm -n skas-system upgrade skas skas/skas \
--values ./values.init.yaml --values ./values.behavior.yaml --values ./values.k8s.yaml

Remember to restart the pod(s) after making these configuration changes. See above

SKAS pods node placement

With the default configuration, SKAS pods will be scheduled on worker nodes, just like any other workload.

To place them on nodes carrying the control plane, the following configuration can be used:

values.k8s.yaml
replicaCount: 2

# -- Annotations to be added to the pod
podAnnotations: {}

# -- Annotations to be added to all other resources
commonAnnotations: {}

image:
  pullSecrets: []
  repository: ghcr.io/skasproject/skas
  # -- Overrides the image tag whose default is the chart appVersion.
  tag:
  pullPolicy: IfNotPresent

# Node placement of SKAS pod(s) 
nodeSelector: {}
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: In
              values:
                - ""
        weight: 100
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/instance
                operator: In
                values:
                  - skas
          topologyKey: kubernetes.io/hostname

By default, Kubernetes prevents standard workloads from running on the control-plane nodes by using the taint mechanism. To work around this limitation, a toleration section is defined.

Then there is the affinity.nodeAffinity part, which makes the scheduler prefer nodes carrying the node-role.kubernetes.io/control-plane label for the SKAS pod(s).

Additionally, there is the affinity.podAntiAffinity part, which discourages the scheduler from placing two SKAS pods on the same node.
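
Once this configuration has been applied and the pod(s) restarted, you can check on which nodes the SKAS pods have actually been scheduled:

$ kubectl -n skas-system get pods -o wide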