The Problem: Two Worlds, One Application
Modern enterprise applications deployed on Oracle Cloud Infrastructure (OCI) routinely span two distinct management planes:
- Kubernetes control plane — Deployments, Services, ConfigMaps, Secrets, and custom workloads.
- OCI infrastructure plane — Oracle Autonomous Databases, Vault secrets, VCNs, and Load Balancers.
Historically, teams maintained separate runbooks, separate CI/CD pipelines, and separate IAM configurations for each layer — a source of drift, operational toil, and incident response friction that every SRE knows intimately.
Two open-source Oracle projects dissolve this boundary entirely:
- OCI Service Operator for Kubernetes (OSOK) lets you declare OCI resources — Oracle Autonomous Databases, Vault secrets — as native Kubernetes CRDs.
A kubectl apply provisions real OCI infrastructure.
- kro (Kube Resource Orchestrator) lets you compose those CRDs, standard Kubernetes resources, and cross-resource dependencies into a single custom API your team deploys with one kubectl apply.
The Simplest Possible Example
Before diving deep, here is the core concept in three commands.
Step 1 — Declare an Oracle Autonomous Database as a Kubernetes object:
# adb.yaml
apiVersion: database.oracle.com/v1alpha1
kind: AutonomousDatabase
metadata:
name: prod-adb
namespace: production
spec:
compartmentId: ocid1.compartment.oc1..aaaa...xxxx
displayName: prod-adb
dbName: PRODDB
dbWorkload: OLTP # OLTP or DW
cpuCoreCount: 2
dataStorageSizeInTBs: 1
isAutoScalingEnabled: true
licenseModel: LICENSE_INCLUDED
adminPassword:
secretRef:
name: adb-admin-secret # Kubernetes Secret holding the password
wallet:
walletPasswordSecret: adb-wallet-password
Step 2 — Apply it:
kubectl apply -f adb.yaml
Step 3 — Check status and retrieve the connection string OSOK created automatically:
# Watch provisioning (takes 2–3 minutes)
kubectl get autonomousdatabase -n production -w
# Retrieve the JDBC connection string from the OSOK-managed Secret
kubectl get secret prod-adb -n production \
-o jsonpath='{.data.connectionString}' | base64 -d; echo
That is OSOK in a nutshell. No OCI Console clicks, no CLI scripts — a Kubernetes controller manages the OCI API on your behalf.
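Kubernetes stores Secret values base64-encoded, which is why the decode step is needed. A quick local illustration of the round trip — the connection string here is a made-up example, not one OSOK would generate:

```shell
# Hypothetical connection string, for illustration only
conn='jdbc:oracle:thin:@proddb_high?TNS_ADMIN=/wallet'

# What the API server stores in .data.connectionString
encoded=$(printf '%s' "$conn" | base64)

# What `| base64 -d` gives you back
printf '%s' "$encoded" | base64 -d; echo
```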
Component 1 — OCI Service Operator for Kubernetes (OSOK)
OSOK is built on the Operator Framework and the controller-runtime library. It ships one Kubernetes controller per supported OCI service. Each controller watches for its CRD, reconciles it against the OCI REST API, and exposes status — including connection endpoints — back as Kubernetes resource fields and Secrets.
OCI services supported by OSOK include: Oracle Autonomous Database (Shared & Dedicated), MySQL HeatWave, Vault & Vault Secrets, Streaming, OCI Service Mesh, and more.
What OSOK Does on kubectl apply
When you apply an AutonomousDatabase resource, the OSOK controller:
- Reads credentials from the referenced Kubernetes Secret and calls the OCI Database API to provision the Autonomous Database.
- Reconciles status by polling the OCI API and updating .status.lifecycleState through PROVISIONING → AVAILABLE.
- Creates a Kubernetes Secret named after the DB resource, containing connectionString, connectionStrings, dbName, and other connection details that pods consume via secretKeyRef.
- Handles permitted spec updates — for example, increases to cpuCoreCount or dataStorageSizeInTBs — by calling the OCI update API automatically.
- Deletes the OCI resource when the Kubernetes resource is deleted, preventing orphaned cloud spend.
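The Secret OSOK creates is consumed like any other Kubernetes Secret. A minimal sketch of a pod wiring the OSOK-managed connection details into environment variables — the pod name and image are placeholders, and the Secret name follows the prod-adb example above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-client                          # hypothetical consumer pod
  namespace: production
spec:
  containers:
    - name: app
      image: example.com/orders-app:latest   # placeholder image
      env:
        - name: DB_URL
          valueFrom:
            secretKeyRef:
              name: prod-adb                 # Secret OSOK created, named after the ADB resource
              key: connectionString
        - name: DB_NAME
          valueFrom:
            secretKeyRef:
              name: prod-adb
              key: dbName
```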
DBA Angle: What OSOK Replaces
| Before OSOK | With OSOK |
|---|---|
| OCI Console UI clicks | kubectl apply -f adb.yaml |
| OCI CLI provisioning scripts | Version-controlled YAML in Git |
| Manual JDBC URL management | OSOK auto-creates a Kubernetes Secret with connectionString |
| Ad-hoc change management | GitOps PR review for every infrastructure change |
| Manual drift remediation | Controller detects and corrects drift within 30 seconds |
Component 2 — kro (Kube Resource Orchestrator)
kro introduces a new Kubernetes primitive called ResourceGraphDefinition (RGD). An RGD describes a new custom API schema and the graph of underlying Kubernetes resources — including OSOK CRDs — that implement it. kro automatically generates the CRD, and when you create an instance, it provisions all resources in the correct dependency order, enforcing readiness gates at each step.
| Capability | Helm | kro |
|---|---|---|
| Templating | Go templates | CEL expressions (type-safe, validated at apply time) |
| Dependency ordering | Hooks only | Automatic — derived from expression references |
| Readiness gates | Not supported | readyWhen conditions per resource |
| Custom API generation | Not supported | Auto-generates CRD from schema definition |
| Reconciliation model | One-shot install | Continuous controller loop |
| OCI resource support | Via OSOK CRDs indirectly | Via OSOK CRDs natively |
| Status propagation | Not supported | Status fields derived from child resource fields |
System Design: Spring Boot + Oracle Autonomous Database
The reference deployment provisions a Spring Boot REST API backed by an Oracle Autonomous Database (OLTP). kro enforces the following provisioning sequence:
kubectl apply OracleKroDemoApp
↓
kro Controller — reads RGD, builds dependency graph
↓
Step 1: Secret (ADB admin credentials)
↓
Step 2: OSOK AutonomousDatabase — blocks until lifecycleState == AVAILABLE
↓
Step 3: Job (DDL schema init via SQLcl) — blocks until status.succeeded > 0
↓
Step 4: Spring Boot Deployment — blocks until availableReplicas > 0
↓
Step 5: OCI Load Balancer Service — blocks until loadBalancer.ingress assigned
SRE Insight: In traditional deployments, race conditions between "database available" and "application starting" are a primary cause of CrashLoopBackOff at initial deployment and after maintenance windows. kro's readyWhen gates enforce ordering at the infrastructure level — the application Deployment is never created until the schema init Job has status.succeeded > 0, which itself is blocked until the ADB lifecycleState == AVAILABLE. This eliminates an entire class of startup incidents without any application-level retry logic.
Prerequisites
| Requirement | Details | Verify |
|---|---|---|
| OCI Tenancy | Access with an Identity Domain | oci iam compartment list |
| IAM Permissions | Rights to create Dynamic Groups and Policies | OCI Console → Identity |
| OKE Cluster | Oracle Kubernetes Engine cluster | kubectl get nodes |
| kubectl | v1.26+ configured against your OKE cluster | kubectl version --client |
| Helm | v3.10+ | helm version |
| Operator SDK | Required to install OLM for OSOK | operator-sdk version |
| OCI Values | Compartment OCID, ADB configuration | oci db autonomous-database list |
Installation
Step 1 — Access Your OKE Cluster
# Configure kubectl for your OKE cluster
oci ce cluster create-kubeconfig \
--cluster-id ocid1.cluster.oc1.iad.aaaa...xxxx \
--file $HOME/.kube/config \
--region us-ashburn-1 \
--token-version 2.0.0 \
--kube-endpoint PUBLIC_ENDPOINT
# Verify — all nodes should be Ready
kubectl get nodes -o wide
Step 2 — Install OSOK via Operator Lifecycle Manager
# 1. Install Operator SDK
export ARCH=$(case $(uname -m) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; esac)
export OS=$(uname | awk '{print tolower($0)}')
export SDK_VER=v1.38.0
curl -LO https://github.com/operator-framework/operator-sdk/releases/download/${SDK_VER}/operator-sdk_${OS}_${ARCH}
chmod +x operator-sdk_${OS}_${ARCH}
sudo mv operator-sdk_${OS}_${ARCH} /usr/local/bin/operator-sdk
operator-sdk version
# 2. Install Operator Lifecycle Manager
operator-sdk olm install
# Verify OLM pods are running
kubectl get pods -n olm
# Expected: catalog-operator, olm-operator Running
# 3. Deploy OSOK (includes Autonomous Database + MySQL controllers)
kubectl apply -f https://operatorhub.io/install/oci-service-operator.yaml
# Watch until the OSOK controller pod is Running
kubectl get pods -n oci-service-operator-system -w
# Confirm Autonomous Database CRD is registered
kubectl get crd | grep database.oracle.com
# Expected: autonomousdatabases.database.oracle.com
Step 3 — Install kro via Helm
helm repo add kro https://kro.run/charts
helm repo update
helm install kro kro/kro \
--namespace kro-system \
--create-namespace \
--version 0.3.0
# Verify kro controller is running
kubectl get pods -n kro-system
# Expected: kro-controller-manager-xxxx Running
# Verify ResourceGraphDefinition CRD is registered
kubectl get crd | grep kro
# Expected: resourcegraphdefinitions.kro.run
Configuring OCI IAM for OSOK
OSOK uses OKE Instance Principal authentication — it inherits the identity of the OKE worker nodes. Create a Dynamic Group for your worker nodes, then attach IAM policies granting Autonomous Database and networking permissions.
export CLUSTER_COMPARTMENT_OCID="ocid1.compartment.oc1..aaaa...xxxx"
export DG_NAME="osok-k8s-dg"
# Create the Dynamic Group
oci iam dynamic-group create \
--name "${DG_NAME}" \
--description "Dynamic Group for OSOK on OKE" \
--matching-rule "Any {instance.compartment.id = '${CLUSTER_COMPARTMENT_OCID}'}"
Policy Set A — Cluster in root compartment:
oci iam policy create \
--compartment-id "${TENANCY_OCID}" \
--name "osok-oracle-db-policy" \
--description "OSOK Oracle Database provisioning policy" \
--statements "[
\"Allow dynamic-group ${DG_NAME} to manage autonomous-databases in tenancy\",
\"Allow dynamic-group ${DG_NAME} to manage autonomous-backups in tenancy\",
\"Allow dynamic-group ${DG_NAME} to use subnets in tenancy\",
\"Allow dynamic-group ${DG_NAME} to {VCN_READ, COMPARTMENT_INSPECT} in tenancy\",
\"Allow dynamic-group ${DG_NAME} to use tag-namespaces in tenancy\",
\"Allow dynamic-group ${DG_NAME} to manage secret-family in tenancy\"
]"Policy Set B — Cluster in a child compartment:
export COMPARTMENT_NAME="my-k8s-compartment"
oci iam policy create \
--compartment-id "${CLUSTER_COMPARTMENT_OCID}" \
--name "osok-oracle-db-policy-child" \
--description "OSOK Oracle Database provisioning policy (child compartment)" \
--statements "[
\"Allow dynamic-group ${DG_NAME} to manage autonomous-databases in compartment ${COMPARTMENT_NAME}\",
\"Allow dynamic-group ${DG_NAME} to manage autonomous-backups in compartment ${COMPARTMENT_NAME}\",
\"Allow dynamic-group ${DG_NAME} to use subnets in compartment ${COMPARTMENT_NAME}\",
\"Allow dynamic-group ${DG_NAME} to {VCN_READ, COMPARTMENT_INSPECT} in compartment ${COMPARTMENT_NAME}\",
\"Allow dynamic-group ${DG_NAME} to use tag-namespaces in compartment ${COMPARTMENT_NAME}\",
\"Allow dynamic-group ${DG_NAME} to manage secret-family in compartment ${COMPARTMENT_NAME}\"
]"DBA Security Note: The
manage autonomous-databasespermission grants full CRUD on Oracle Autonomous Databases in the target compartment. Follow the principle of least privilege: create a dedicated compartment for OSOK-managed databases and scope all policies to that compartment only.
The ResourceGraphDefinition — Spring Boot + Oracle ADB
Save this as oracle_kro_demo_rgd.yaml. This single object encodes the full application stack.
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: oracle-kro-demo
spec:
  schema:
    apiVersion: v1alpha1
    kind: OracleKroDemoApp   # kro registers this as a new CRD
    spec:
      compartmentId: string | required=true
      adbAdminPassword: string | required=true
      adbWalletPassword: string | required=true
      cpuCoreCount: integer | default=4
      storageSizeTBs: integer | default=1
      appImageTag: string | default="latest"
    status:
      appEndpoint: ${appSvc.status.?loadBalancer.?ingress.orValue([])[0].ip.orValue('')}
      adbLifecycleState: ${adb.status.?lifecycleState.orValue('PROVISIONING')}
  resources:
    # ── 1. ADB Admin Credentials Secret ─────────────────────────
    - id: adbAdminSecret
      template:
        apiVersion: v1
        kind: Secret
        metadata:
          name: ${schema.metadata.name + "-adb-admin"}
          namespace: ${schema.metadata.namespace}
        type: Opaque
        stringData:
          password: ${schema.spec.adbAdminPassword}
    # ── 2. ADB Wallet Password Secret ───────────────────────────
    - id: adbWalletSecret
      template:
        apiVersion: v1
        kind: Secret
        metadata:
          name: ${schema.metadata.name + "-adb-wallet"}
          namespace: ${schema.metadata.namespace}
        type: Opaque
        stringData:
          password: ${schema.spec.adbWalletPassword}
    # ── 3. Oracle Autonomous Database (provisioned by OSOK) ─────
    # Blocks all downstream resources until lifecycleState == AVAILABLE
    - id: adb
      readyWhen:
        - ${adb.status.?lifecycleState.orValue('') == 'AVAILABLE'}
      template:
        apiVersion: database.oracle.com/v1alpha1
        kind: AutonomousDatabase
        metadata:
          name: ${schema.metadata.name + "-adb"}
          namespace: ${schema.metadata.namespace}
        spec:
          compartmentId: ${schema.spec.compartmentId}
          displayName: ${schema.metadata.name + "-adb"}
          dbName: ${schema.metadata.name}
          dbWorkload: OLTP
          cpuCoreCount: ${schema.spec.cpuCoreCount}
          dataStorageSizeInTBs: ${schema.spec.storageSizeTBs}
          isAutoScalingEnabled: true
          licenseModel: LICENSE_INCLUDED
          adminPassword:
            secretRef:
              name: ${adbAdminSecret.metadata.name}
          wallet:
            walletPasswordSecret: ${adbWalletSecret.metadata.name}
    # ── 4. Schema Init Job (DDL via SQLcl) ──────────────────────
    # Reads connection string from the OSOK-managed ADB Secret
    - id: schemaInitJob
      readyWhen:
        - ${schemaInitJob.status.succeeded > 0}
      template:
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: ${schema.metadata.name + "-schema-init"}
          namespace: ${schema.metadata.namespace}
        spec:
          template:
            spec:
              containers:
                - name: sqlcl
                  image: container-registry.oracle.com/database/sqlcl:latest
                  env:
                    - name: DB_CONN
                      valueFrom:
                        secretKeyRef:
                          name: ${adb.metadata.name}   # OSOK-created Secret
                          key: connectionString
                    - name: DB_PASSWORD
                      valueFrom:
                        secretKeyRef:
                          name: ${adbAdminSecret.metadata.name}
                          key: password
                  command:
                    - "/bin/sh"
                    - "-c"
                    - |
                      sql -s ADMIN/${DB_PASSWORD}@${DB_CONN} << 'EOF'
                      CREATE TABLE IF NOT EXISTS orders (
                        id           NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
                        order_ref    VARCHAR2(64) NOT NULL,
                        customer_id  NUMBER NOT NULL,
                        total_amount NUMBER(12,2) NOT NULL,
                        created_at   TIMESTAMP DEFAULT SYSTIMESTAMP
                      );
                      CREATE INDEX idx_orders_customer ON orders(customer_id);
                      COMMIT;
                      EOF
              restartPolicy: OnFailure
    # ── 5. Spring Boot Deployment ───────────────────────────────
    # Annotation forces kro to wait for schemaInitJob before creating
    - id: appDeployment
      readyWhen:
        - ${appDeployment.status.availableReplicas > 0}
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.metadata.name + "-api"}
          namespace: ${schema.metadata.namespace}
          annotations:
            kro.run/after-schema-init: ${string(schemaInitJob.status.succeeded)}
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: ${schema.metadata.name + "-api"}
          template:
            metadata:
              labels:
                app: ${schema.metadata.name + "-api"}
            spec:
              containers:
                - name: api
                  image: ${"iad.ocir.io/mytenancy/orders-api:" + schema.spec.appImageTag}
                  ports:
                    - containerPort: 8080
                  env:
                    - name: SPRING_DATASOURCE_URL
                      valueFrom:
                        secretKeyRef:
                          name: ${adb.metadata.name}
                          key: connectionString
                    - name: SPRING_DATASOURCE_USERNAME
                      value: ADMIN
                    - name: SPRING_DATASOURCE_PASSWORD
                      valueFrom:
                        secretKeyRef:
                          name: ${adbAdminSecret.metadata.name}
                          key: password
                  livenessProbe:
                    httpGet:
                      path: /actuator/health/liveness
                      port: 8080
                    initialDelaySeconds: 30
                    periodSeconds: 10
                  readinessProbe:
                    httpGet:
                      path: /actuator/health/readiness
                      port: 8080
                    initialDelaySeconds: 30
                    periodSeconds: 10
    # ── 6. OCI Load Balancer Service ────────────────────────────
    - id: appSvc
      readyWhen:
        - ${appSvc.status.?loadBalancer.?ingress.orValue([]).size() > 0}
      template:
        apiVersion: v1
        kind: Service
        metadata:
          name: ${schema.metadata.name + "-svc"}
          namespace: ${schema.metadata.namespace}
          annotations:
            kro.run/after-deployment-ready: ${string(appDeployment.status.availableReplicas)}
        spec:
          type: LoadBalancer
          ports:
            - port: 80
              targetPort: 8080
          selector:
            app: ${schema.metadata.name + "-api"}
# Apply the RGD — kro registers OracleKroDemoApp as a new CRD
kubectl apply -f oracle_kro_demo_rgd.yaml
# Verify RGD is Active
kubectl get rgd oracle-kro-demo
# NAME STATE READY
# oracle-kro-demo Active True
# Confirm the new CRD exists
kubectl get crd | grep oraclekrodemoapp
Deploy and Verify:
# Create the target namespace
kubectl create namespace demo
# Create the application instance
cat > oracle_kro_demo.yaml <<EOF
apiVersion: kro.run/v1alpha1
kind: OracleKroDemoApp
metadata:
name: orders-api
namespace: demo
spec:
compartmentId: ocid1.compartment.oc1..aaaa...xxxx
adbAdminPassword: "SecureOracle#2026"
adbWalletPassword: "WalletPass#2026"
cpuCoreCount: 4
storageSizeTBs: 1
appImageTag: "1.0.0"
EOF
kubectl apply -f oracle_kro_demo.yaml
# Watch all resources being created in sequence (ADB takes 2–3 minutes)
watch -n 15 "
echo '=== OracleKroDemoApp ===' &&
kubectl get OracleKroDemoApp -n demo &&
echo '' &&
echo '=== Autonomous Database ===' &&
kubectl get AutonomousDatabase -n demo \
-o custom-columns='NAME:.metadata.name,STATE:.status.lifecycleState' &&
echo '' &&
echo '=== Schema Init Job ===' &&
kubectl get job -n demo &&
echo '' &&
echo '=== Deployment ===' &&
kubectl get deployment -n demo &&
echo '' &&
echo '=== Service ===' &&
kubectl get svc -n demo
"
# Check ADB lifecycle state
kubectl get AutonomousDatabase orders-api-adb -n demo \
-o jsonpath='{.status.lifecycleState}{"\n"}'
# Expected: PROVISIONING --> AVAILABLE
# Decode the JDBC connection string from the OSOK-managed Secret
kubectl get secret orders-api-adb -n demo \
-o jsonpath='{.data.connectionString}' | base64 -d; echo
# Get the application external IP
kubectl get OracleKroDemoApp orders-api -n demo \
-o jsonpath='{.status.appEndpoint}{"\n"}'
# Test the API
APP_IP=$(kubectl get OracleKroDemoApp orders-api -n demo \
-o jsonpath='{.status.appEndpoint}')
curl -s http://${APP_IP}/actuator/health | python3 -m json.tool
curl -s http://${APP_IP}/api/orders | python3 -m json.tool
Debugging:
# kro events and conditions
kubectl describe OracleKroDemoApp orders-api -n demo
# OSOK controller logs
kubectl logs -n oci-service-operator-system \
deploy/oci-service-operator-controller-manager --tail=50
# kro controller logs
kubectl logs -n kro-system deploy/kro-controller-manager --tail=50
# Schema init job logs
kubectl logs -n demo job/orders-api-schema-init
# All Kubernetes events
kubectl get events -n demo --sort-by='.lastTimestamp' | tail -30
OSOK on OKE vs OSOK on OpenShift
This is the most important operational difference for Cloud Engineers choosing between the two platforms.
| Consideration | OKE (Standard) | OpenShift on OCI |
|---|---|---|
| OLM installation | Manual — operator-sdk olm install | Pre-installed — skip this step |
| OSOK installation | kubectl apply from OperatorHub URL | OperatorHub UI or oc apply Subscription YAML |
| Authentication | Instance Principals (default) | Workload Identity recommended — set authType: workloadIdentity in OSOK ConfigMap |
| Security constraints | Standard Kubernetes PodSecurityAdmission | SecurityContextConstraints (SCCs) — OSOK and kro controllers may need anyuid SCC |
| Load Balancer services | Native OCI LB via type: LoadBalancer | Prefer OpenShift Route over LoadBalancer for HTTP traffic |
| Image registry | Docker Hub / OCIR | Oracle images may need an image pull secret for container-registry.oracle.com |
| kro networking | Standard | Modify appSvc in RGD to ClusterIP and add an OpenShift Route |
OpenShift Installation Commands
# ── Install OSOK via OpenShift OperatorHub ───────────────────
# OLM is already present in OpenShift — skip operator-sdk olm install
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: oci-service-operator
namespace: openshift-operators
spec:
channel: alpha
name: oci-service-operator
source: community-operators
sourceNamespace: openshift-marketplace
EOF
# Grant OSOK controller the anyuid SCC if needed
oc adm policy add-scc-to-user anyuid \
-z oci-service-operator-controller-manager \
-n oci-service-operator-system
# Switch OSOK to Workload Identity authentication (recommended on OpenShift)
kubectl patch configmap osok-config \
-n oci-service-operator-system \
--patch '{"data":{"AUTH_TYPE":"workloadIdentity"}}'
# Create Oracle Container Registry pull secret for SQLcl init container
oc create secret docker-registry oracle-registry \
--docker-server=container-registry.oracle.com \
--docker-username=<oracle-sso-email> \
--docker-password=<oracle-sso-password> \
-n demo
# Create kro with OpenShift-compatible SCC
oc apply -f - <<EOF
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
name: kro-scc
allowHostPorts: false
allowPrivilegedContainer: false
runAsUser:
type: MustRunAsNonRoot
seLinuxContext:
type: MustRunAs
users:
- system:serviceaccount:kro-system:kro-controller-manager
EOF
# Expose the Spring Boot API via OpenShift Route (instead of LoadBalancer)
# First change appSvc type to ClusterIP in the RGD, then:
oc expose svc/orders-api-svc -n demo
oc get route orders-api-svc -n demo
OpenShift Route vs LoadBalancer — RGD Modification
On OpenShift, replace the appSvc resource in the RGD with a ClusterIP service and add a Route:
# Replace appSvc LoadBalancer with ClusterIP on OpenShift
- id: appSvc
template:
apiVersion: v1
kind: Service
metadata:
name: ${schema.metadata.name + "-svc"}
namespace: ${schema.metadata.namespace}
spec:
type: ClusterIP # Changed from LoadBalancer
ports:
- port: 8080
selector:
app: ${schema.metadata.name + "-api"}
# Add an OpenShift Route
- id: appRoute
readyWhen:
- ${appRoute.status.?ingress.orValue([]).size() > 0}
template:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: ${schema.metadata.name + "-route"}
namespace: ${schema.metadata.namespace}
spec:
to:
kind: Service
name: ${appSvc.metadata.name}
port:
targetPort: 8080
tls:
termination: edge
Oracle DBA Perspective
What Changes
| Traditional DBA Workflow | OSOK + kro Workflow |
|---|---|
| OCI Console or CLI to provision ADB | kubectl apply -f adb.yaml — controller calls OCI API |
| Manual JDBC URL configuration | OSOK auto-creates a Kubernetes Secret with connectionString |
| OCI CLI scripts for storage scaling | Edit storageSizeTBs in YAML — reconciler calls OCI resize API |
| Manual HA configuration | Set isAutoScalingEnabled: true — OCI handles elasticity |
| Ad-hoc change tracking | Git history is the change record — every modification has a PR |
| Drift remediation: manual | Drift remediation: controller detects and corrects within 30 seconds |
DBA Day-2 Operations
# Check ADB status, endpoints, version, and lifecycle state
kubectl get AutonomousDatabase -n demo -o yaml
# Decode the JDBC connection string from the OSOK-managed Secret
kubectl get secret orders-api-adb -n demo \
-o jsonpath='{.data.connectionString}' | base64 -d; echo
# Scale CPU cores — OSOK reconciles the change to OCI automatically
kubectl patch OracleKroDemoApp orders-api -n demo \
--type merge \
-p '{"spec":{"cpuCoreCount":8}}'
# Scale storage
kubectl patch OracleKroDemoApp orders-api -n demo \
--type merge \
-p '{"spec":{"storageSizeTBs":2}}'
# Connect to ADB using SQLcl from a debug pod
kubectl run sqlcl-debug \
--image=container-registry.oracle.com/database/sqlcl:latest \
--restart=Never --rm -it -n demo \
--env="CONN=$(kubectl get secret orders-api-adb -n demo \
-o jsonpath='{.data.connectionString}' | base64 -d)" \
-- sh -c 'sql -s ADMIN@"$CONN"'  # quote so CONN expands inside the pod, not the local shell
# Create a manual pre-release backup via OCI CLI
oci db autonomous-database-backup create \
--autonomous-database-id $(kubectl get AutonomousDatabase orders-api-adb -n demo \
-o jsonpath='{.status.autonomousDatabaseOCID}') \
--display-name "pre-release-backup-$(date +%Y%m%d)"
DBA Pro Tip: The configurationId field in an OSOK spec maps to an Oracle Database configuration profile. Create a named Configuration in OCI with your organization's preferred settings — connection pool limits, SQL timeout, optimizer statistics preferences — and reference it in your RGD. Every ADB provisioned via OSOK inherits those settings automatically, replacing manual post-provisioning tuning scripts.
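Assuming the configurationId field named in the tip above, the ADB template in the RGD could reference such a profile like this (the OCID is a placeholder, and the field name should be checked against your installed OSOK CRD version):

```yaml
# Fragment of the RGD's AutonomousDatabase template spec — a sketch
spec:
  compartmentId: ${schema.spec.compartmentId}
  dbWorkload: OLTP
  # Hypothetical: pre-created OCI Database configuration profile
  configurationId: ocid1.dbconfiguration.oc1..aaaa...xxxx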
SRE & Cloud Engineering Perspective
GitOps with ArgoCD or Flux
The entire stack — RGD, application instance spec, and IAM policies-as-code — can be committed to Git and managed by ArgoCD or Flux. Benefits:
- Full audit trail for every database and application change via Git history.
- PR-based review workflow for infrastructure changes — no emergency OCI Console access during incidents.
- Automated drift detection — ArgoCD flags any divergence between Git desired state and live Kubernetes state.
- Rollback via git revert + kubectl apply — kro reconciles and OSOK calls the OCI API to restore previous state.
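A minimal ArgoCD Application wiring the RGD and instance manifests from Git into the cluster might look like the sketch below; the repository URL and path are placeholders for your own GitOps repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: oracle-kro-demo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/oracle-kro-demo.git  # placeholder repo
    targetRevision: main
    path: manifests/prod          # holds the RGD and OracleKroDemoApp YAML
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert out-of-band cluster edits
```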
SRE Operational Commands
# Monitor kro reconciliation events
kubectl get events -n demo --sort-by='.lastTimestamp' | tail -20
# Verify readyWhen gate conditions
kubectl get OracleKroDemoApp orders-api -n demo \
-o jsonpath='{.status.conditions}' | python3 -m json.tool
# List all resources kro is managing for this instance
kubectl get all -n demo -l kro.run/instance=orders-api
# Force immediate re-reconciliation after an OCI incident
kubectl annotate OracleKroDemoApp orders-api -n demo \
reconcile.kro.run/last-attempt=$(date -u +%Y-%m-%dT%H:%M:%SZ) \
--overwrite
# Export OSOK Prometheus metrics
kubectl port-forward -n oci-service-operator-system \
deploy/oci-service-operator-controller-manager 8080:8080 &
curl -s http://localhost:8080/metrics | grep osok_
# Multi-environment promotion via Kustomize
# base/oracle_kro_demo.yaml (cpuCoreCount: 4, storageSizeTBs: 1)
# overlays/dev/ (cpuCoreCount: 2, appImageTag: dev)
# overlays/prod/ (cpuCoreCount: 8, storageSizeTBs: 2)
kubectl apply -k overlays/prod/
# Chaos test: delete the ADB directly and watch OSOK self-heal
oci db autonomous-database delete \
--autonomous-database-id $(kubectl get AutonomousDatabase orders-api-adb -n demo \
-o jsonpath='{.status.autonomousDatabaseOCID}') \
--force
# Watch OSOK detect drift and re-provision the ADB
kubectl get AutonomousDatabase orders-api-adb -n demo -wSRE Patterns Comparison
| SRE Concern | Traditional Approach | OSOK + kro Approach |
|---|---|---|
| Startup ordering | Init containers, retry loops in app code | kro readyWhen gates — ADB must be AVAILABLE before schema Job runs |
| DB connection secrets | Manual Secret creation and rotation scripts | OSOK auto-creates connection Secret on ADB provisioning |
| Drift detection | Scheduled Terraform plan runs or manual audit | Continuous controller reconciliation (30-second loops) |
| Rollback | Manual OCI API calls or Terraform state manipulation | git revert + kubectl apply |
| Multi-env promotion | Different Terraform workspaces | Kustomize overlays over a shared RGD |
| Incident remediation | Manual OCI Console intervention | Update YAML, merge PR, reconciler applies change |
| Scaling | OCI CLI commands during change window | kubectl patch on instance spec — OSOK calls OCI resize API |
Cleanup
DBA Warning: Oracle Autonomous Database deletion removes all data permanently. Before running any cleanup command in production, ensure you have a recent backup or manual snapshot. Consider adding an OCI resource lock on production ADB instances and using a Kubernetes Admission Webhook that requires manual approval before deleting
AutonomousDatabase resources in production namespaces.
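One way to implement the deletion guard suggested above, without writing a custom webhook, is a Kubernetes ValidatingAdmissionPolicy (GA in Kubernetes 1.30) that denies DELETE on AutonomousDatabase resources in namespaces labeled environment=production. This is a sketch under those assumptions, not a tested OSOK integration:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: protect-prod-adb
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["database.oracle.com"]
        apiVersions: ["v1alpha1"]
        operations: ["DELETE"]
        resources: ["autonomousdatabases"]
  validations:
    - expression: "false"   # always deny matched deletes
      message: "Deleting AutonomousDatabase in production requires removing this policy binding first."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: protect-prod-adb-binding
spec:
  policyName: protect-prod-adb
  validationActions: ["Deny"]
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: production   # assumes prod namespaces carry this label
```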
# Step 1: Delete the application instance
# kro cascades deletion to all child resources including the ADB
kubectl delete OracleKroDemoApp orders-api -n demo
# Watch the cascade deletion (~5 minutes for ADB deletion)
watch kubectl get all,AutonomousDatabase,secret -n demo
# Step 2: Delete the ResourceGraphDefinition
kubectl delete rgd oracle-kro-demo
# Step 3: Delete the namespace
kubectl delete namespace demo
# Step 4: Verify OCI ADB is deleted
oci db autonomous-database list \
--compartment-id "${CLUSTER_COMPARTMENT_OCID}" \
--query 'data[?"display-name"==`orders-api-adb`].{id:id,state:"lifecycle-state"}'
# Expected: empty or TERMINATING / TERMINATED
# Optional: Uninstall OSOK and kro
helm uninstall kro -n kro-system
operator-sdk olm uninstall
Key Takeaways
| Takeaway | Details |
|---|---|
| Single Control Plane | OSOK + kro eliminate the OCI/Kubernetes boundary. One kubectl apply provisions the Oracle ADB and all Kubernetes resources with correct ordering enforced automatically. |
| Reliable Startup Ordering | readyWhen gates enforce strict provisioning order. The app Deployment is never created until the ADB is AVAILABLE and the schema init Job has succeeded. |
| Continuous Reconciliation | Both OSOK and kro reconcile desired vs actual state in 30-second loops. Infrastructure drift — including manual OCI Console changes — is detected and corrected automatically. |
| DBA Expertise Preserved | Your Oracle DBA knowledge of CPU shapes, storage sizing, auto-scaling, and backup policies is now encoded as version-controlled YAML — auditable, reviewable, and repeatable. |
| GitOps-Ready | The full stack can be owned by ArgoCD or Flux. Every database change has a Git commit and a PR reviewer. Rollback is a git revert. |
| OpenShift Compatible | With SCC adjustments, Workload Identity configuration, and OpenShift Routes in place of LoadBalancer services, the full OSOK + kro workflow runs on Red Hat OpenShift on OCI. |