Deploying ArangoDB on Kubernetes and customizing settings

Estimated reading time: 6 minutes

Introduction

   Deploying an ArangoDB cluster on Kubernetes is a straightforward process thanks to the ArangoDB Operator, a powerful tool that simplifies the deployment, management, and scaling of ArangoDB clusters in a Kubernetes environment. 

  In this post, we’ll walk you through the steps to quickly deploy an ArangoDB cluster on Kubernetes, covering what the main components are and how to customize settings to ensure the cluster runs optimally for your needs.

  The deployment steps assume that a Kubernetes cluster is already set up and that the user executing the commands has administrator privileges.

For this demonstration we will be using:

  • ArangoDB 3.12.4 image
  • Kubernetes server version 1.27.16
  • ArangoDB Operator 1.2.46

Executing the installation steps  

  An ArangoDB cluster has three main components: Agents, Coordinators, and DB Servers. In a Kubernetes environment these components are created as pods, along with the operator pods. The installation therefore consists of configuring YAML files and creating resources with the "kubectl" utility, as described below:

Installing the operator

export URLPREFIX=https://raw.githubusercontent.com/arangodb/kube-arangodb/1.2.46/manifests
kubectl apply -f $URLPREFIX/arango-crd.yaml


kubectl apply -f $URLPREFIX/arango-deployment.yaml

To use ArangoLocalStorage resources to provision Persistent Volumes on local storage, also run:

kubectl apply -f $URLPREFIX/arango-storage.yaml


Confirm that the pods were created successfully

kubectl get pods


Storage Configuration

Create the file storage.yaml with the content below
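As a sketch, an ArangoLocalStorage resource along the lines of the kube-arangodb examples can be used; the storage class name and local path below are illustrative and should match the disks available on your nodes:

```yaml
apiVersion: "storage.arangodb.com/v1alpha"
kind: "ArangoLocalStorage"
metadata:
  name: "arangodb-local-storage"
spec:
  storageClass:
    # name of the StorageClass the operator will create and use
    name: my-local-ssd
    isDefault: true
  localPath:
    # directories on the nodes where local volumes will be provisioned
    - /mnt/big-ssd-disk
```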


kubectl apply -f storage.yaml


Confirm that the pods were created successfully

kubectl get pods


ArangoDB deployment creation

  After deploying the latest ArangoDB Kubernetes operator and configuring storage resources, we will create the ArangoDB database deployment itself by creating an ArangoDeployment custom resource and deploying it into our Kubernetes cluster.

We will create a basic YAML file, cluster-deployment.yaml, with the content below:
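A minimal ArangoDeployment manifest, in the spirit of the kube-arangodb examples, looks like the sketch below. The resource name "deployment" is chosen here to match the "kubectl edit arango/deployment" command used later; in Cluster mode the operator creates 3 Agents, 3 Coordinators, and 3 DB Servers by default:

```yaml
apiVersion: "database.arangodb.com/v1"
kind: "ArangoDeployment"
metadata:
  name: "deployment"
spec:
  # Cluster mode deploys Agents, Coordinators and DB Servers (3 each by default)
  mode: Cluster
  image: "arangodb/arangodb:3.12.4"
```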


kubectl apply -f cluster-deployment.yaml

 
Wait a few minutes and we will find all the component pods running (3 of each) when confirming that the pods were created successfully. We now have a working ArangoDB cluster running on Kubernetes, ready to be used.

kubectl get pods


Customizing settings

   As shown in the previous steps, we used a basic YAML file to create the ArangoDB database deployment, which means most settings are left at their defaults. This is fine and may work for many users, but we may need to customize how resources are allocated, or even some database parameters, to support specific needs that are particular to each environment, user, or application using ArangoDB.

   In our “cluster-deployment.yaml” we did not specify any resource limits for any of the components (Agents, Coordinators, DBServers). That means, for example, that the coordinators could use all of the memory available on the Kubernetes machine (in our case 32GB), which is something we don’t want. The coordinator log highlights this: it contains a message confirming that the whole machine's memory is available for the coordinators' use.


  To limit the amount of resources available to the coordinators, we modify the ArangoDeployment using the command “kubectl edit” to include the lines below right after the start of the coordinators section in the YAML configuration.

kubectl edit arango/deployment
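The exact lines are not reproduced here; as a sketch, a standard Kubernetes resources block under the coordinators section, with the 512Mi limit described in this walkthrough, would look like the following (the request value is an assumption to adjust for your environment):

```yaml
  coordinators:
    resources:
      requests:
        memory: 256Mi
      limits:
        # caps the memory the coordinators may use
        memory: 512Mi
```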


Each of the coordinator pods will then be terminated and restarted, one after another.


After all the coordinators have restarted, we look at the coordinator log again and can see that the amount of memory available to the coordinators is now only 512MB, as we specified in the deployment YAML. The same approach can be followed for the Agents and DBServers.


  Now let’s change an ArangoDB server option. The idea is quite similar to the resource limit change we made previously: we edit the deployment YAML using the command “kubectl edit” and include the option we want to modify under the corresponding section.

  For our example, we are going to enable the experimental vector index feature, available as of version 3.12.4, by setting “--experimental-vector-index” to true. This needs to be placed under both the coordinators and dbservers sections, using the reserved word “args”, as shown below:
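As a sketch, the "args" entries under the coordinators and dbservers sections would look like the following (the option name is the ArangoDB 3.12.4 startup option; the "=true" form is one way to pass a boolean option):

```yaml
  coordinators:
    args:
      # extra arangod startup option passed to each coordinator
      - --experimental-vector-index=true
  dbservers:
    args:
      - --experimental-vector-index=true
```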


After saving the changes, the coordinator and dbserver pods will restart again. We can then connect to any of the coordinators to confirm that “--experimental-vector-index” is really set to true.


There we go! The experimental vector index feature was successfully enabled. By the way, you can find more information about this feature in this blog post.

Conclusion

  Deploying an ArangoDB cluster on Kubernetes is a quick and efficient process, offering significant flexibility to meet the specific requirements of multiple applications. The ability to adjust resources and fine-tune ArangoDB server options ensures that your setup can be tailored to fit various business needs, performance goals, and infrastructure capabilities. Customization is key to maintaining optimal performance and scalability, allowing your system to evolve in line with changing demands. By combining Kubernetes and ArangoDB, you're not only simplifying deployment but also gaining a powerful solution that adapts seamlessly to your needs.


Integrating ArangoDB with Kubernetes for Seamless Deployment


Are you a database architect or DevOps architect tasked with deploying modern databases like ArangoDB on Kubernetes? Kubernetes, with its robust orchestration capabilities, provides a solid foundation for managing containerized workloads, ensuring reliability and adaptability for database deployments.

In this post, we’ll guide you through the process of deploying ArangoDB on Kubernetes, addressing common DevOps challenges like scalability, high availability, and efficient resource utilization. By the end, you'll have a practical understanding of integrating ArangoDB with Kubernetes in a way that’s both robust and future-proof.

Why ArangoDB and Kubernetes?

ArangoDB, as a multi-model database, excels at handling diverse workloads—be it document, graph, or key-value data. When paired with Kubernetes, you gain:

  • Scalability: Automatically adjust resources to meet demand.
  • Resilience: Ensure high availability through self-healing capabilities.
  • Simplicity: Streamline deployment and updates with Infrastructure as Code (IaC).
  • Automation: Minimize manual intervention with Kubernetes' built-in orchestration.

Prerequisites

Before diving into deployment, ensure you have the following ready:

  1. A Kubernetes Cluster: Local (e.g., Minikube) or cloud-based (e.g., AWS EKS, GKE).
  2. kubectl: Installed and configured to interact with your cluster.
  3. Helm: Installed for managing Kubernetes charts.

Step 1: Installing the ArangoDB Kubernetes Operator

The ArangoDB Kubernetes Operator simplifies the deployment and management of ArangoDB clusters. It automates tasks like scaling, failover, and configuration management.

Add the Helm Repository

Start by adding the ArangoDB Helm repository:

bash

helm repo add arangodb https://arangodb.github.io/kube-arangodb
helm repo update

Deploy the Operator

Install the ArangoDB operator in a dedicated namespace:

bash

helm install arango-operator arangodb/kube-arangodb --namespace arangodb --create-namespace

This deploys the operator, which manages the lifecycle of your ArangoDB cluster.

Step 2: Configuring and Deploying an ArangoDB Cluster

Create the Cluster Configuration

Write a configuration file (e.g., arangodb-cluster.yaml) to define your cluster. This configuration outlines the desired topology, resource allocation, and environment settings.

yaml

apiVersion: database.arangodb.com/v1
kind: ArangoDeployment
metadata:
  name: arango-cluster
  namespace: arangodb
spec:
  mode: Cluster
  environment: Production
  image:
    repository: arangodb/arangodb
    tag: latest
  tls:
    mode: None
  authentication:
    jwtSecretName: arango-cluster-jwt
  agents:
    count: 3
    resources:
      requests:
        memory: 1Gi
        cpu: 500m
  dbservers:
    count: 3
    resources:
      requests:
        memory: 2Gi
        cpu: 500m
  coordinators:
    count: 2
    resources:
      requests:
        memory: 1Gi
        cpu: 500m

Apply the Configuration

Deploy your cluster by applying the YAML file:

bash

kubectl apply -f arangodb-cluster.yaml

Verify the deployment status:

bash

kubectl get pods -n arangodb

Step 3: Addressing DevOps Concerns

Scalability

Because the database pods are managed by the ArangoDB operator rather than by a standard Deployment, you scale the cluster to handle fluctuating workloads by changing the member count in the ArangoDeployment spec; the operator then adds or removes pods to match:

bash

kubectl patch arangodeployment arango-cluster -n arangodb --type=merge -p '{"spec":{"dbservers":{"count":5}}}'

High Availability

With ArangoDB's fault-tolerant architecture and Kubernetes' self-healing, you minimize downtime. For example, Kubernetes automatically restarts failed pods:

bash

kubectl describe pod <pod-name> -n arangodb

Backup and Recovery

Set up a backup strategy using Kubernetes CronJobs:

yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: arango-backup
  namespace: arangodb
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: arango-backup
            image: arangodb/arangodb
            command: ["arangodump"]
            args:
              # point arangodump at the cluster's Kubernetes Service
              - "--server.endpoint=tcp://arango-cluster:8529"
              # in a real setup, mount a PersistentVolume at /backups
              - "--output-directory=/backups"
              - "--server.database=mydb"
          restartPolicy: OnFailure

Step 4: Monitoring and Maintenance

Use Kubernetes-native tools like Prometheus and Grafana to monitor your ArangoDB deployment. Enable metrics collection by annotating your pods:

yaml

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8529"

Summary

Integrating ArangoDB with Kubernetes offers an elegant solution for managing complex database workloads. By leveraging Kubernetes' orchestration capabilities, you can ensure your database is scalable, resilient, and easy to manage. Whether you're a seasoned DevOps professional or new to container orchestration, this setup will provide a reliable foundation for your ArangoDB deployment.

Happy deploying!


Celebrating Kube-ArangoDB’s 1.0 Release!


Kube-ArangoDB, ArangoDB’s Kubernetes Operator, was first released two years ago and as of today is operating many ArangoDB production clusters (including ArangoDB’s managed service, ArangoGraph). With many exciting features, we felt kube-arangodb really deserved to be released as 1.0.



ArangoDB and the Cloud-Native Ecosystem: Integration Insights

ArangoDB is joining CNCF to continue its focus on providing a scalable native multi-model database, supporting Graph, Document, and Key-Value data models in the Cloud Native ecosystem.

ArangoDB

ArangoDB is a scalable multi-model database. What does that mean?

You might have already encountered different NoSQL databases specialized for particular data models, e.g., graph or document databases. However, most real-life use cases actually require a combination of different data models, as in Single View of Everything, Machine Learning, or Case Management projects, to name but a few.

In such scenarios, single data model databases typically require merging data from different databases and often even reimplementing some database logic in the application layer, as well as the effort to operate multiple databases in a production environment.



Building Our Managed Service on Kubernetes: ArangoDB Insights

Running distributed databases on-prem or in the cloud is always a challenge. Over the past years, we have invested a lot to make cluster deployments as simple as possible, both on traditional (virtual) machines (using the ArangoDB Starter) as well as on modern orchestration systems such as Kubernetes (using Kube-ArangoDB).

However, as long as teams have to run databases themselves, the burden of deploying, securing, monitoring, maintaining & upgrading can only be reduced to a certain extent but not avoided.

For this reason, we built ArangoDB ArangoGraph.


Deploying ArangoDB 3.4 on Kubernetes

It has been a few months since we first released the Kubernetes operator for ArangoDB and started to brag about it. Since then, quite a few things have happened.

For example, we have done a lot of testing, fixed bugs, and by now the operator is declared to be production ready for three popular public Kubernetes offerings, namely Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Pivotal Kubernetes Service (PKS) (see here for the current state of affairs).


The ArangoDB Operator for Kubernetes – Stateful Cluster Deployments in 5min

At ArangoDB we’ve got many requests for running our database on Kubernetes. This makes complete sense since Kubernetes is a highly popular system for deploying, scaling and managing containerized applications.

Running any stateful application on Kubernetes is a bit more involved than running a stateless application, because of the storage requirements and potentially other requirements such as static network addresses. Running a database on Kubernetes combines all the challenges of running a stateful application, combined with a quest for optimal performance.

This article explains what is needed to run ArangoDB on Kubernetes and what we’re doing to make it a lot easier.

Interested in trying out ArangoDB? Fire up your database in just a few clicks with ArangoDB ArangoGraph: the Cloud Service for ArangoDB. Start your free 14-day trial here.

