The Context
One of the key features of KuboCD is its ability to generate Helm deployment values files from a small set of high-level input parameters, using a templating mechanism.
This mechanism combines a template with a data model.
Our first example uses only the .Parameters element of the data model:
podinfo-p01.yaml
In fact, the data model includes the following top-level elements:

- `.Parameters`: The parameters provided in the `Release` custom resource.
- `.Release`: The release object itself.
- `.Context`: The deployment context.
The context is a YAML object with a flexible structure, designed to hold shared configuration data relevant to all deployments.
For example, the podinfo package includes a parameter ingressClassName with a default value (nginx). If a cluster uses a different ingress controller, this value would need to be overridden for all relevant Release objects.
This type of shared configuration is best defined in a global cluster-level context.
Similarly, if all application ingress URLs share a common root domain, that too should be centralized.
Here's an initial example of how this logic can be implemented.
Context Creation
A Context is a KuboCD resource:
cluster.yaml
```yaml
apiVersion: kubocd.kubotal.io/v1alpha1
kind: Context
metadata:
  namespace: contexts
  name: cluster
spec:
  description: Global context for the kubodoc cluster
  protected: true
  context:
    ingress:
      className: nginx
      domain: ingress.kubodoc.local
    storageClass:
      data: standard
      workspace: standard
    certificateIssuer:
      public: cluster-self
      internal: cluster-self
```
Key attributes:
- `description`: A short description.
- `protected`: Prevents deletion of this object. Requires KuboCD's webhook feature.
- `context`: A tree of values that is injected into the data model for the templating of the `values` section. This section:
    - Must be valid YAML.
    - Has a flexible structure, but should align with what the `Package` templates expect.
In this example, the context includes:
- `ingress.className`: The ingress controller type.
- `ingress.domain`: The suffix used for building ingress URLs.
- `storageClass`: Two Kubernetes `StorageClass` definitions for different application profiles. For our `kind`-based cluster, there is only one available option: `standard`.
- `certificateIssuer`: Two certificate issuers, one for internal use and one intended for endpoints exposed to the outside world. As configuring a CA (Certificate Authority) is out of scope for this documentation, we will set up only a self-signed CA. This will be done later, in the chapter on cert-manager deployment.
Cluster-wide contexts should be placed in a dedicated namespace:
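For example, assuming the `Context` manifest above is saved as `cluster.yaml`:

```shell
# Create the namespace holding cluster-wide contexts, then apply the manifest
kubectl create namespace contexts
kubectl apply -f cluster.yaml
```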
Note
Since the context is shared by most, if not all, applications, its structure must be carefully designed and well documented.
Package modification
Our initial podinfo package did not account for the context concept. Here is an updated version:
podinfo-p02.yaml
```yaml
apiVersion: v1alpha1
type: Package
name: podinfo
tag: 6.7.1-p02
schema:
  parameters:
    $schema: http://json-schema.org/schema#
    type: object
    additionalProperties: false
    properties:
      host: { type: string }
    required:
      - host
  context:
    $schema: http://json-schema.org/schema#
    additionalProperties: true
    type: object
    properties:
      ingress:
        type: object
        additionalProperties: true
        properties:
          className: { type: string }
          domain: { type: string }
        required:
          - domain
          - className
    required:
      - ingress
modules:
  - name: main
    source:
      helmRepository:
        url: https://stefanprodan.github.io/podinfo
        chart: podinfo
        version: 6.7.1
    values: |
      ingress:
        enabled: true
        className: {{ .Context.ingress.className }}
        hosts:
          - host: {{ .Parameters.host }}.{{ .Context.ingress.domain }}
            paths:
              - path: /
                pathType: ImplementationSpecific
```
Key points:
- The `tag` was updated to generate a new version.
- The `fqdn` parameter was replaced with `host`, to represent only the hostname (excluding the domain).
- The `modules[X].values` section now uses the context.
- A `schema.context` section has been added to define and validate the expected context structure.
This new version must be packaged:
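A sketch of the packaging step (the exact KuboCD CLI name and subcommand are assumptions here; adjust to your installation):

```shell
# Build the package from its manifest and push it to the OCI registry
kubocd pack podinfo-p02.yaml
```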
```text
====================================== Packaging package 'podinfo-p02.yaml'
--- Handling module 'main':
Fetching chart podinfo:6.7.1...
Chart: podinfo:6.7.1
--- Packaging
Generating index file
Wrap all in assembly.tgz
--- push OCI image: quay.io/kubodoc/packages/podinfo:6.7.1-p02
Successfully pushed
```
Deployment
Here is the corresponding Release manifest:
podinfo2-ctx.yaml
```yaml
---
apiVersion: kubocd.kubotal.io/v1alpha1
kind: Release
metadata:
  name: podinfo2
  namespace: default
spec:
  description: A first sample release of podinfo
  package:
    repository: quay.io/kubodoc/packages/podinfo
    tag: 6.7.1-p02
    interval: 30m
  parameters:
    host: podinfo2
  contexts:
    - namespace: contexts
      name: cluster
```
Key points:
- The `fqdn` parameter was replaced with `host`.
- A new `spec.contexts` section lists the contexts to merge into a single object passed to the template engine.
Warning
Referencing a non-existent context results in an error.
Once `spec.package.repository` is set according to your repository, apply the deployment:
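For instance, assuming the manifest is saved as `podinfo2-ctx.yaml`:

```shell
kubectl apply -f podinfo2-ctx.yaml
```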
Check that the new Release reaches the READY state:
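Assuming the `Release` CRD exposes the plural resource name `releases` (an assumption; adjust to your installation):

```shell
kubectl -n default get releases
```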
```text
NAME       REPOSITORY                         TAG         CONTEXTS           STATUS   READY   WAIT   PRT   AGE   DESCRIPTION
podinfo2   quay.io/kubodoc/packages/podinfo   6.7.1-p02   contexts:cluster   READY    1/1     -            17m   A first sample release of podinfo
```
And check that the corresponding ingress has been configured properly:
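For instance:

```shell
kubectl -n default get ingresses
```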
```text
NAME            CLASS   HOSTS                            ADDRESS      PORTS   AGE
podinfo2-main   nginx   podinfo2.ingress.kubodoc.local   10.96.59.9   80      120m
```
Notes
If you want to test access through this ingress, don't forget to update your `/etc/hosts` file or your DNS.
Context Aggregation
An application's effective context may result from the aggregation of multiple context objects.
For instance, a project-level context can be created to share variables across all applications within a project. This will be merged with the global cluster context.
In the following examples, each deployed project has its own namespace and context.
Example 1: Context merge
Create the namespace:
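For example:

```shell
kubectl create namespace project01
```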
Then create the project context:
project01.yaml
Note that the namespace is not specified in the manifest. It will be set via the command line:
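For instance, assuming the manifest is saved as `project01.yaml`:

```shell
# The target namespace is provided on the command line, not in the manifest
kubectl -n project01 apply -f project01.yaml
```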
List all defined contexts:
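Assuming the `Context` CRD exposes the plural resource name `contexts` (adjust to your installation):

```shell
kubectl get contexts --all-namespaces
```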
```text
NAMESPACE   NAME        DESCRIPTION                              PARENTS   STATUS   AGE
contexts    cluster     Global context for the kubodoc cluster             READY    2d2h
project01   project01   Context for project 1                              READY    2m35s
```
This example requires modifying the package to include the new variable project.subdomain in the values template and in the schema.context section:
podinfo-p03.yaml
```yaml
apiVersion: v1alpha1
type: Package
name: podinfo
tag: 6.7.1-p03
schema:
  parameters:
    $schema: http://json-schema.org/schema#
    type: object
    additionalProperties: false
    properties:
      host: { type: string }
    required:
      - host
  context:
    $schema: http://json-schema.org/schema#
    additionalProperties: true
    type: object
    properties:
      ingress:
        type: object
        additionalProperties: true
        properties:
          className: { type: string }
          domain: { type: string }
        required:
          - domain
          - className
      project:
        type: object
        additionalProperties: true
        properties:
          subdomain: { type: string }
        required:
          - subdomain
    required:
      - ingress
      - project
modules:
  - name: main
    source:
      helmRepository:
        url: https://stefanprodan.github.io/podinfo
        chart: podinfo
        version: 6.7.1
    values: |
      ingress:
        enabled: true
        className: {{ .Context.ingress.className }}
        hosts:
          - host: {{ .Parameters.host }}.{{ .Context.project.subdomain }}.{{ .Context.ingress.domain }}
            paths:
              - path: /
                pathType: ImplementationSpecific
```
Package it:
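As before, a sketch of the packaging command (the KuboCD CLI subcommand is an assumption here):

```shell
kubocd pack podinfo-p03.yaml
```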
Create a new Release for deployment:
podinfo-prj01.yaml
```yaml
---
apiVersion: kubocd.kubotal.io/v1alpha1
kind: Release
metadata:
  name: podinfo
spec:
  description: A release of podinfo on project01
  package:
    repository: quay.io/kubodoc/packages/podinfo
    tag: 6.7.1-p03
    interval: 30m
  parameters:
    host: podinfo
  contexts:
    - namespace: contexts
      name: cluster
    - name: project01
  debug:
    dumpContext: true
    dumpParameters: true
```
Notes:
- `metadata.namespace` is not defined; it will be set via the command line.
- `metadata.name` is simply `podinfo`, assuming only one instance per namespace.
- `spec.contexts` now includes two entries, the second referencing the project context. As its namespace is not defined, it defaults to the namespace of the `Release`.
- A `debug` section is added to include the resulting `context` and `parameters` in the `Release` status.
Deploy the release:
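For instance, assuming the manifest is saved as `podinfo-prj01.yaml`:

```shell
kubectl -n project01 apply -f podinfo-prj01.yaml
```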
Verify both contexts are listed:
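Assuming the plural resource name `releases` (adjust to your installation):

```shell
kubectl -n project01 get releases
```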
```text
NAME      REPOSITORY                         TAG         CONTEXTS                               STATUS   READY   WAIT   PRT   AGE     DESCRIPTION
podinfo   quay.io/kubodoc/packages/podinfo   6.7.1-p03   contexts:cluster,project01:project01   READY    1/1     -            8m31s   A release of podinfo on project01
```
Check the resulting ingress object:
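For instance:

```shell
kubectl get ingresses --all-namespaces
```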
```text
NAMESPACE   NAME            CLASS   HOSTS                                 ADDRESS        PORTS   AGE
default     podinfo1-main   nginx   podinfo1.ingress.kubodoc.local        10.96.207.51   80      4h46m
default     podinfo2-main   nginx   podinfo2.ingress.kubodoc.local        10.96.207.51   80      4h6m
project01   podinfo-main    nginx   podinfo.prj01.ingress.kubodoc.local   10.96.207.51   80      4h
```
Inspect the Release status:
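For instance:

```shell
kubectl -n project01 get release podinfo -o yaml
```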
```yaml
apiVersion: kubocd.kubotal.io/v1alpha1
kind: Release
metadata:
  ....
spec:
  ....
status:
  context:
    ingress:
      className: nginx
      domain: ingress.kubodoc.local
    project:
      id: p01
      subdomain: prj01
    storageClass:
      data: standard
      workspace: standard
  ....
  parameters:
    host: podinfo
  ....
```
The merged context includes values from both the cluster and project contexts.
Warning
In real-world scenarios, the context may become quite large. Use this debug mode sparingly.
Example 2: Context override
In this second example, the objective remains the same (adding a subdomain to the ingress), but we use the initial version of the `Package`, which does not handle the `project.subdomain` context value.
Create a dedicated namespace:
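For example:

```shell
kubectl create namespace project02
```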
Create a project context in that namespace:
project02.yaml
Note that the same spec.context.ingress.domain path exists in both the project and cluster contexts.
When contexts are merged in a Release, later contexts in the list override earlier ones. Thus, the project’s value takes precedence.
Create and deploy a new Release object:
podinfo-prj02.yaml
```yaml
---
apiVersion: kubocd.kubotal.io/v1alpha1
kind: Release
metadata:
  name: podinfo
spec:
  description: A release of podinfo on project02
  package:
    repository: quay.io/kubodoc/packages/podinfo
    tag: 6.7.1-p02
    interval: 30m
  parameters:
    host: podinfo
  contexts:
    - namespace: contexts
      name: cluster
    - name: project02
  debug:
    dumpContext: true
    dumpParameters: true
```
Check the resulting context in the Release object:
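For instance:

```shell
kubectl -n project02 get release podinfo -o yaml
```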
```yaml
apiVersion: kubocd.kubotal.io/v1alpha1
kind: Release
metadata:
  ....
spec:
  ....
status:
  context:
    ingress:
      className: nginx
      domain: prj02.ingress.kubodoc.local
    project:
      id: p02
    storageClass:
      data: standard
      workspace: standard
  ....
```
Ensure the correct ingress host is used:
```text
NAMESPACE   NAME            CLASS   HOSTS                                 ADDRESS        PORTS   AGE
default     podinfo1-main   nginx   podinfo1.ingress.kubodoc.local        10.96.218.98   80      2d20h
default     podinfo2-main   nginx   podinfo2.ingress.kubodoc.local        10.96.218.98   80      110m
project01   podinfo-main    nginx   podinfo.prj01.ingress.kubodoc.local   10.96.218.98   80      26m
project02   podinfo-main    nginx   podinfo.prj02.ingress.kubodoc.local   10.96.218.98   80      2m52s
```
Context Change
Any change to a context is automatically applied to all associated Release objects. However, only the deployments that are actually affected will be updated.
Notes
Technically, KuboCD patches the corresponding Flux `HelmRelease` objects, which triggers a `helm upgrade`. This should only update the resources that are actually impacted.
For example, modify the context for project01:
```shell
kubectl -n project01 patch context.kubocd.kubotal.io project01 --type='json' \
  -p='[{"op": "replace", "path": "/spec/context/project/subdomain", "value": "project01" }]'
```
Observe that the ingress is quickly updated accordingly:
```text
NAMESPACE   NAME            CLASS   HOSTS                                     ADDRESS        PORTS   AGE
default     podinfo1-main   nginx   podinfo1.ingress.kubodoc.local            10.96.218.98   80      3d3h
default     podinfo2-main   nginx   podinfo2.ingress.kubodoc.local            10.96.218.98   80      8h
project01   podinfo-main    nginx   podinfo.project01.ingress.kubodoc.local   10.96.218.98   80      7h13m
project02   podinfo-main    nginx   podinfo.prj02.ingress.kubodoc.local       10.96.218.98   80      6h49m
```
To restore the original value:
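Mirroring the patch above, with the value the `project01` context held before the change (`prj01`):

```shell
kubectl -n project01 patch context.kubocd.kubotal.io project01 --type='json' \
  -p='[{"op": "replace", "path": "/spec/context/project/subdomain", "value": "prj01" }]'
```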