Update: https://arangodb.com/2023/10/evolving-arangodbs-licensing-model-for-a-sustainable-future/
Last October, the first iteration of this blog post explained an update to ArangoDB’s 10-year-old license model. Thank you for providing feedback and suggestions. As mentioned, we will always remain committed to our community, and so today we are happy to announce another update that integrates your feedback.
Your ArangoDB Team
ArangoDB as a company is firmly grounded in Open Source. The first commit was made in October 2011, and today we're very proud of having over 13,000 stargazers on GitHub. The ArangoDB community should be able to enjoy all of the benefits of using ArangoDB, and we have always offered a completely free community edition in addition to our paid enterprise offering.
With the evolving landscape of database technologies and the imperative to ensure ArangoDB remains sustainable, innovative, and competitive, we’re introducing some changes to our licensing model. These alterations will help us continue our commitment to the community, fuel further cutting-edge innovations and development, and assist businesses in obtaining the best from our platform. These alterations are based on changes in the broader database market.
Upcoming Changes
The changes to the licensing are in two primary areas:
- Distribution and Managed Services
- Commercial Use of Community Edition
Distribution and Managed Services
Effective with version 3.12 of ArangoDB, the source code will move from its existing Apache 2.0 license to the BSL 1.1, which will also apply to future versions.
BSL 1.1 is a source-available license that has three core tenets, some of which are customizable and specified by each licensor:
- BSL v.1.1 will always allow copying, modification, redistribution, non-commercial use, and commercial use in a non-production context.
- By default, BSL does not allow for production use unless the licensor provides a limited right as an “Additional Use Grant”; this piece is customizable and explained below.
- BSL specifies a Change Date, usually one to four years out, at which the BSL license converts to an open-source Change License, such as the GNU General Public License (GPL), GNU Affero General Public License (AGPL), or Apache License.
ArangoDB has defined our Additional Use Grant to allow BSL-licensed ArangoDB source code to be deployed for any purpose (e.g. production) as long as you are not (i) creating a commercial derivative work or (ii) offering or including it in a commercial product, application, or service (e.g. commercial DBaaS, SaaS, Embedded or Packaged Distribution/OEM). We have set the Change Date to four (4) years, and the Change License to Apache 2.0.
These changes will not impact the majority of those currently using the ArangoDB source code, but they will protect ArangoDB against larger companies providing a competing service using our source code or monetizing ArangoDB by embedding/distributing the ArangoDB software.
As an example, if you use the ArangoDB source code to create derivative works of software based on ArangoDB and build/package the binaries yourself, you are free to use the software for commercial purposes as long as it is not a SaaS, DBaaS, or OEM distribution. You cannot, however, use the Community Edition prepackaged binaries for any of the purposes mentioned above.
Commercial Use of Community Edition
We are also making changes to our Community Edition, the prepackaged ArangoDB binaries available for free on our website. Where before this edition was governed by the same Apache 2.0 license as the source code, it will now be governed by a new ArangoDB Community License, which limits commercial use of the Community Edition in production to a dataset size of at most 100 GB within a single cluster and a maximum of three clusters.
Commercial use describes any activity in which you use a product or service for financial gain. This includes whenever you use software to support your customers or products, since that software is used for business purposes with the intent of increasing sales or supporting customers. This explicitly does not apply to non-profit organizations.
As an example, suppose you deploy software in production that uses ArangoDB as a database, the database size is under 100 GB per cluster, and your organization runs at most three clusters. Even though the software is used commercially, you have no commercial obligation to ArangoDB because the deployment falls under the allowed limits. Similarly, non-production deployments such as QA, Test, and Dev using the Community Edition create no commercial obligations to ArangoDB.
Our Enterprise Edition will continue to be governed by the existing ArangoDB Enterprise License.
What should Community users do?
The license changes will roll out and be effective with the release of 3.12 slated for the end of Q1 2024, and there will be no immediate impact to any releases prior to 3.12. Once the license changes are fully applied, there will be a few impacts:
- If you are using Community Edition or Source Code for your managed service (DBaaS, SaaS), you will be unable to do so for future versions of ArangoDB starting with version 3.12.
- If you are using Community Edition or Source Code and distributing it to your customers along with your software, you will be unable to do so for future versions of ArangoDB starting with version 3.12.
- If you are using the Community Edition for commercial purposes in any production deployment that either stores more than 100 GB of data per cluster or has more than three clusters (or both), you are required to have a commercial agreement with ArangoDB starting with version 3.12.
If any of these apply to you and you want to avoid future disruption, we encourage you to contact us so that we can work with you to find a commercially acceptable solution for your business.
How is ArangoDB easing the transition for community users with this change?
ArangoDB is willing to make concessions for community users to help them with the transition and the license change. Our shared goal is to enable ArangoDB to continue commercially as the primary developer of the Community Edition while still allowing our CE users to have successful deployments that meet their business and commercial goals. Support from ArangoDB and ongoing help with your deployments (via our Customer Success Team) allows us to maintain the quality of deployments and, ultimately, a more satisfying experience for users.
We do not intend to create hardship for the community users and are willing to discuss reasonable terms and conditions for commercial use.
ArangoDB can offer two solutions to meet your commercial use needs:
- Enterprise License: A full-fledged enterprise license for your commercial use, with all the enterprise features along with an Enterprise SLA and Support.
- Community Transition: We have created a 'CE Transition Fund', which can be allocated by mutual discussion to ease the transition. This allows us to balance the value that CE brings to an organization against the Support/Features available.
Summary
Our commitment to open-source ideals remains unshaken. Adjusting our model is essential to ensure ArangoDB’s longevity and to provide you with the cutting-edge features you expect from us. We continue to uphold our vision of an inclusive, collaborative, and innovative community. This change ensures we can keep investing in our products and you, our valued community.
Frequently Asked Questions
1. Does this affect the commercially packaged editions of your software, such as the ArangoDB Enterprise Edition and the ArangoGraph Insights Platform?
No, this only affects ArangoDB source code and ArangoDB Community Edition.
2. Whom does this change primarily impact?
This has no effect on most paying customers, as they already license ArangoDB under a commercial license. This change also has no effect on users who use ArangoDB for non-commercial purposes. This change affects open-source users who are using ArangoDB for commercial purposes and/or distributing and monetizing ArangoDB with their software.
3. Why change now?
ArangoDB 3.12 is a breakthrough release that includes improved performance, resilience, and memory management. These highly appealing design changes may motivate third parties to fork ArangoDB source code in order to create their own commercial derivative works without giving back to the developer community. We feel it is in the best interest of the community and our customers to avoid that outcome.
4. In four years, after the Change Date, can I make my own commercial product from ArangoDB 3.12 source code under Apache 2.0?
Yes, if you desire.
5. Is ArangoDB still an Open Source company?
Yes. While the BSL 1.1 is not an official open source license approved by the Open Source Initiative (OSI), we still license a large amount of source code under an open source license such as our Drivers, Kube-Arango Operator, Tools/Utilities, and we continue to host ArangoDB-related open source projects. Furthermore, the BSL only restricts the use of our source code if you are trying to commercialize it. Finally, after four years, the source code automatically converts to an OSI-approved license (Apache 2.0).
6. How does the license change impact other products, specifically the kube-arango operator?
There are two versions of the kube-arango operator: the Community and the Enterprise versions. At this time there are no plans to change licensing terms for the operator. The operator will, however, automatically enforce the licensing depending upon the ArangoDB version under management (enterprise or community).
ArangoBNB Data Preparation Case Study: Optimizing for Efficiency
This case study covers a data exploration and analysis scenario about modeling data when migrating to ArangoDB. The topics covered in this case study include:
- Importing data into ArangoDB
- Developing Application Requirements before modeling
- Data Analysis and Exploration with AQL
Hopefully, this case study can serve as a guide: it provides step-by-step instructions and discusses the motivations behind exploring and transforming data in preparation for a real-world application.
The information contained in this case study is derived from the development of the ArangoBnB project, a community project developed in JavaScript that is always open to new contributors. The project is an Airbnb clone with a Vue frontend and a React frontend being developed in parallel by the community. It is not necessary to download the project or be familiar with JavaScript for this guide. To see how we are using the data in a real-world project, check out the repository.
Data Modeling Example
Data modeling is a broad topic, and there are different scenarios in practice. Sometimes, your team may start from scratch and define the application’s requirements before any data exists. In that case, you can design a model from scratch and might be interested in defining strict rules about the data using schema validation features; for that topic, we have an interactive notebook, and be sure to see the docs as well. This guide will focus on the situation where there is already some data to work with, and the task involves moving it into a new database, specifically ArangoDB, as well as cleaning up and preparing the data for use in a project.
Preparing to migrate data is a great time to consider new features and ways to store the data. For instance, it might be possible to consolidate the number of collections being used or store the data as a graph for analytics purposes when coming from a relational database. It is crucial to outline the requirements and some nice-to-haves and then compare those to the available data. Once it is clear what features the data contains and what the application requires, it is time to evaluate the database system features and determine how the data will be modeled and stored.
So, the initial steps we take when modeling data include:
- Outline application requirements and optional features
- Explore the data with those requirements in mind
- Evaluate the database system against the dataset features and application requirements
As you will see, steps 2 and 3 can easily overlap; being aware of database system features can give you ideas while exploring the data and vice versa. This overlap is especially common when using the database system to explore, as we do in this example.
For this example, we are using the Airbnb dataset initially found here. The dataset contains listing information scraped from Airbnb, and the dataset maintainer provides it in CSV and GeoJSON format.
The files provided, their descriptions, and download links are:
NOTE: The following links are outdated and interested parties should use the recent links available at InsideAirBnB
- Listings.csv.gz
- Detailed Listings data for Berlin
- Download Link
- Calendar.csv.gz
- Detailed Calendar Data for listings in Berlin
- Download Link
- Reviews.csv.gz
- Detailed Review Data for listings in Berlin
- Download Link
- Listings.csv
- Summary information and metrics for listings in Berlin (good for visualisations).
- Download Link
- Reviews.csv
- Summary Review data and Listing ID (to facilitate time based analytics and visualisations linked to a listing).
- Download Link
- Neighborhoods.csv
- Neighbourhood list for geo filter. Sourced from city or open source GIS files.
- Download Link
- Neighborhoods.geojson
- GeoJSON file of neighbourhoods of the city.
- Download Link
The download links listed here are for 12-21-2020, which we used just before insideairbnb published the 02-22-2021 links. If they don’t work for some reason, you can always get the updated ones from insideairbnb, but there is no guarantee that they will be compatible with this guide.
Application Requirements
Looking back at the initial steps we typically take, the first step is to outline the application requirements and nice-to-haves. One could argue that doing data exploration might be necessary before determining the application requirements. However, knowing what our application requires can inform decisions when deciding how to store the data, such as extracting or aggregating data from other fields to fulfill an application requirement.
For this step, we had multiple meetings where we outlined our goals for the application. We have the added benefit of already knowing the database system we will be using and being familiar with its capabilities.
There are a couple of different motivations involved in this project. For us, ArangoDB, we wanted to do this project to:
- Showcase the upcoming ArangoSearch GeoJSON features
- Provide a real-world full stack JavaScript application with a modern client-side frontend framework that uses the ArangoJS driver to access ArangoDB on the backend.
With those in mind, we continued to drill down into the actual application requirements. Since this is an Airbnb clone, we started by looking on their website and determining what was likely reproducible in a reasonable amount of time.
Here is what we started with:
- Search an AirBnB dataset to find rentals near a specified location
- A draggable map that shows results based on position
- Use ArangoSearch to keep everything fast
- Search the dataset using geographic coordinates
- Filter results based on keywords, price, number of guests, etc
- Use AQL for all queries
- Multi-lingual support
We set up the GitHub repository and created issues for the tasks associated with our application goals to define further the required dataset features. Creating these issues helps in thinking through the high-level tasks for both the frontend and backend and keeps us on track throughout.
Data Exploration
With our application requirements ready to go, it is time to explore the dataset and match the available data with our design vision.
One approach is to reach for your favorite data analysis tools and visualization libraries such as the Python packages Pandas, Plotly, Seaborn, or many others. You can look here for an example of performing some basic data exploration with Pandas. In the notebook, we discover the available fields, data types, consistency issues and even generate some visualizations.
For the rest of this section, we will look at how you can explore the data by just using ArangoDB’s tools, the Web UI, and the AQL query language. It is worth noting that there are many third-party tools available for analyzing data, and using a combination of tools will almost always be necessary. The purpose of this guide is to show you how much you can accomplish and how quickly you can accomplish it just using the tools built into ArangoDB.
Importing CSV files
First things first, we need to import our data. When dealing with CSV files, the best option is to use arangoimport. The arangoimport tool imports either JSON, CSV, or TSV files. There are different options available to adjust the data during import to fit the ArangoDB document model better. It is possible to specify things such as:
- Fields that it should skip during import
- Whether or not to convert values to non-string types (numbers, booleans and null values)
- Options for changing field names
System Attributes
Aside from the required options, such as server information and collection name, we will use the `--translate` option. We are cheating a little here for the sake of keeping this guide as brief as possible. We already know that there is a field in the listings files named id that is unique and perfectly suited for the _key system attribute. This attribute is automatically generated if we don’t supply anything, but can also be user-defined. This attribute is automatically indexed by ArangoDB, so having a meaningful value provided here means that we can perform quick and useful lookups against the _key attribute right away, for free.
In ArangoDB, system attributes cannot be changed. The system attributes include:
- _key
- _id (collectionName/_key)
- _rev
- _from (edge collection only)
- _to (edge collection only)
For more information on system attributes and ArangoDB’s data model, see the guide available in the documentation. To set a new _key attribute later, once we have a better understanding of the available data, we would need to create a new collection and specify the value to use; we get to skip that step.
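As a quick illustration (plain JavaScript, with a made-up collection name and key), the derived `_id` system attribute is simply the collection name and the `_key` joined by a slash:

```javascript
// Sketch: how the _id system attribute relates to the collection
// name and the document _key ("collectionName/_key").
function documentId(collectionName, key) {
  return `${collectionName}/${key}`;
}

// A listing imported with --translate "id=_key" would end up with
// something like (hypothetical key value):
const doc = { _key: "2015", _id: documentId("listings", "2015") };
console.log(doc._id); // "listings/2015"
```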
Importing Listings
For our example, we import the listings.csv.gz file, which, per the website description, contains detailed listings data for Berlin.
The following is the command to run from the terminal once you have ArangoDB installed and the listings file unzipped.
arangoimport --file .\listings.csv --collection "listings" --type csv --translate "id=_key" --create-collection true --server.database arangobnb
Once the import is complete, you can navigate to the WebUI and start exploring this collection. If you are following along locally, the default URL for the WebUI is 127.0.0.1:8529.
Once you open the listings collection, you should see documents that look like this:

Analyzing the Data Structure
The following AQL query aggregates over the collection and, for each attribute, counts the number of documents containing that field, along with the field's name and data type. This query provides insight into how consistent the data is and can point out any outliers in our data. When running these types of queries, it may be a good idea to supply a LIMIT to avoid aggregating over the entire collection; it depends on how important it is to check every single document in the collection.
FOR doc IN listings
  FOR a IN ATTRIBUTES(doc, true)
    COLLECT attr = a, type = TYPENAME(doc[a]) WITH COUNT INTO count
    RETURN {attr, type, count}
Query Overview:
This query starts with searching the collection and then evaluates each document attribute using the ATTRIBUTES function. System attributes are deliberately ignored by setting the second argument to true. The COLLECT keyword signals that we will be performing an aggregation over the attributes of each document. We define two variables that we want to use in our return statement: the attribute name assigned to the `attr` variable and the type variable for the data types. Using the TYPENAME() function, we capture the data type for each attribute. With an ArangoDB aggregation, you can specify that you want to count the number of items by adding `WITH COUNT INTO` to your COLLECT statement followed by the variable to save the value into; in our case, we defined a `count` variable.
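For comparison, the same attribute/type counting can be sketched in plain JavaScript, e.g. when post-processing documents in the application layer. This is a rough approximation; AQL's TYPENAME() distinguishes a few cases that `typeof` does not, so the mapping below is an assumption:

```javascript
// Approximate AQL's TYPENAME() for plain JSON values.
function typeName(value) {
  if (value === null) return "null";
  if (Array.isArray(value)) return "array";
  if (typeof value === "boolean") return "bool";
  return typeof value; // "number", "string", "object"
}

// Count how many documents contain each (attribute, type) pair,
// skipping system attributes, like the AQL query above.
function attributeStats(docs) {
  const counts = new Map();
  for (const doc of docs) {
    for (const [attr, value] of Object.entries(doc)) {
      if (attr.startsWith("_")) continue; // skip _key, _id, _rev
      const key = `${attr}:${typeName(value)}`;
      counts.set(key, (counts.get(key) || 0) + 1);
    }
  }
  return counts;
}
```

Running this over a handful of documents quickly reveals the same kind of inconsistencies the AQL query surfaces, such as the same field holding a string in one document and a number in another.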

The results show that about half of the fields have a count of 20,224 (the collection size), while the rest have varying numbers. A schema-free database’s flexibility means understanding that specific values may or may not exist and planning around that. In our case, we can see that a good number of fields don’t have values. Since we are thinking about this data from a developer’s perspective, this will be invaluable when deciding which features to incorporate.
Data Transformations
The results contain 75 elements that we could potentially consider at this point, and a good place to start is with the essential attributes for our application.
Some good fields to begin with include:
- Accommodates: For the number of Guests feature
- Amenities: For filtering options such as wi-fi, hot tub, etc.
- Description: To potentially pull keywords from or for the user to read
- Review fields: For a review related feature
- Longitude, Latitude: Can we use this with our GeoJSON Analyzer?
- Name: What type of name? Why are two of the names a number?
- Price: For filtering by price
We have a lot to start with, and some of our questions will be answered easiest by taking a look at a few documents in the listings collection. Let’s move down the list of attributes we have to see how they could fit the application.
Accommodates
This attribute is pretty straightforward as it is simply a number, and based on our type checking, all documents contain a number for this field. The first one is always easy!
Amenities
The amenities appear to be arrays of strings, but they are encoded as JSON strings. This is either a result of the scraping method used by insideAirbnb or done for formatting purposes. Either way, it would be more convenient to store them as actual arrays in ArangoDB. The JSON_PARSE() AQL function to the rescue! Using this function, we can quickly decode and store the amenities as arrays all at once.
FOR listing IN listings
  LET amenities = JSON_PARSE(listing.amenities)
  UPDATE listing WITH { amenities } IN listings
Query Overview:
This query iterates over the listings collection and declares a new `amenities` variable with the LET keyword. We finish the FOR loop by updating the document with the JSON_PARSE’d amenities array. The UPDATE operation replaces pre-existing values, which is what we want in this situation.
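The decoding itself is the same operation JSON.parse performs in JavaScript, which is handy to keep in mind if you ever post-process these values in the application layer (the sample value below is hypothetical):

```javascript
// The amenities field arrives as a JSON-encoded string; decoding it
// yields a real array, which is how we want to store it in ArangoDB.
const raw = '["Wifi", "Kitchen", "Heating"]'; // hypothetical sample value

const amenities = JSON.parse(raw);
console.log(Array.isArray(amenities)); // true
console.log(amenities.length);         // 3
```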
Description
Here is an example of a description of a rental location:
As you can see, this string contains some HTML tags, primarily for formatting, but depending on the application, it might be necessary to remove these characters to avoid undesired behavior. For this sort of text processing, we can use the AQL REGEX_REPLACE() function. We will be able to use this HTML formatting in our Vue application thanks to the v-html Vue directive, so we won’t remove the tags. However, for completeness, here is an example of what that function could look like:
FOR listing IN listings
RETURN REGEX_REPLACE(listing.description, "<[^>]+>\s+(?=<)|<[^>]+>", " ")
Query Overview:
This query iterates through the listings and uses REGEX_REPLACE() to match HTML tags and replaces them with spaces. This query does not update the documents as we want to make use of the HTML tags. However, you could UPDATE the documents instead of just returning the transformed text.
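The same pattern works in JavaScript's regex flavor, so you could apply an equivalent cleanup client-side. A sketch, with a made-up description snippet:

```javascript
// Mirror of the REGEX_REPLACE() call above: strip HTML tags, matching
// a tag plus trailing whitespace when another tag follows, else the
// tag alone, and replace each match with a single space.
function stripTags(text) {
  return text.replace(/<[^>]+>\s+(?=<)|<[^>]+>/g, " ");
}

// Hypothetical description snippet:
const description = "<b>Bright flat</b> <br/>near the park";
console.log(stripTags(description).replace(/\s+/g, " ").trim());
// "Bright flat near the park"
```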
Reviews
For the fields related to reviews, it makes sense that they would have different numbers compared to the rest of the data. Some listings may have never had a review, and some will have more than others. The review data types are consistent, but not every listing has one. Handling reviews is not a part of our initial application requirements, but in a real-world setting, they likely would be. We had not discussed reviews during planning as this site likely won’t allow actual users to sign up for it.
Knowing that our data contains review information gives us options:
- Do we consider removing all review information from the dataset as it is unnecessary?
- Or, leave it and consider adding review components to the application?
This type of question is common when considering how to model data. It is important to consider these sorts of questions for performance, scalability, and data organization.
Eventually, we decided to use reviews as a way to sort the results. As of this writing, we have not implemented a review component that shows the reviews, but if any aspiring JavaScript developer is keen to make it happen, we would love to have another contributor on the project.
Location
When we started the project, we knew that this dataset contained location information; it is a dataset about renting properties in various locations, after all. The location data is stored as two attributes: longitude and latitude. However, we want to use the GeoJSON Analyzer, which requires a GeoJSON object. We prefer GeoJSON because it can be easier to work with: for example, the order of coordinate pairs isn't always consistent in datasets, and the GeoJSON Analyzer supports more than just points, should our application need that. Fortunately, since these values represent a single point, converting this data to a valid GeoJSON object is a cinch.
FOR listing IN listings
UPDATE listing._key
WITH {"location": GEO_POINT(listing.longitude, listing.latitude)}
IN listings
Query Overview:
This query UPDATEs each listing with a new location attribute. The location attribute contains the result of the GEO_POINT() AQL function, which constructs a GeoJSON object from longitude and latitude values.
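What GEO_POINT() produces is just an ordinary GeoJSON Point object; a minimal JavaScript sketch of the same construction (the coordinates below are hypothetical):

```javascript
// Build a GeoJSON Point the way GEO_POINT(longitude, latitude) does.
// Note the GeoJSON coordinate order: [longitude, latitude].
function geoPoint(longitude, latitude) {
  return { type: "Point", coordinates: [longitude, latitude] };
}

// Hypothetical Berlin listing coordinates:
console.log(geoPoint(13.405, 52.52));
```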
Note: Sometimes, it is helpful to see the output of a function before making changes to a document. To just see the result of an AQL function such as the GEO_POINT() function we used above, you could simply RETURN the result, like so:
FOR listing IN listings LIMIT 1 RETURN GEO_POINT(listing.longitude, listing.latitude)
Query Overview:
This query makes no changes to the original document. It simply selects the first available document and RETURNs the result of the GEO_POINT() function. This can be helpful for testing before making any changes.

Name
The name value spurred a couple of questions after the data type query that we will attempt to answer in this section.
- What is the purpose of the name field?
- Why are there numeric values for only 2 of them?
The first one is straightforward to figure out by opening a document and seeing what the name field contains. Here is an example of a listing name:

The name is the title or a tagline for the rental; you would expect to see it when searching for a property. We will want to use this for our rental titles, so it makes sense to dig a little deeper to find any inconsistencies. Let’s figure out why some have numeric values and if they should be adjusted. With AQL queries, sorting in ascending order starts with symbols and numbers; this gives us an easy option to look at the listings with numeric values for the name field. We will evaluate the documents more robustly in a moment but first, let’s just have a look.
FOR listing IN listings
SORT listing.name ASC
RETURN listing.name
Query Overview:
This query simply returns the listings sorted in ascending order. We explicitly declare ASC for ascending, but it is also the default SORT order.

We see the results containing the numbers we were expecting, but we also see some unexpected results; some empty strings for name values. Depending on how important this is to the application, it may be necessary to update these empty fields with something indicating a name was not supplied and perhaps also make it a required field for future listings.
If we return the entire listing, instead of just the name, they all seem normal and thus might be worth leaving in as they are still potentially valid options for renters.
From the previous results, we know that we have 34 listings with invalid name attributes. But what if we were unsure of the count because they didn’t all show up in those results?
FOR listing IN listings
FILTER HAS(listing, "name")
AND
TYPENAME(listing.name) == "string"
AND
listing.name != ""
COLLECT WITH COUNT INTO c
RETURN {
"Collection Size": LENGTH(listings),
"Valid": c,
"Invalid": SUM([LENGTH(listings), -c])
}
Query Overview:
This query starts with checking that the document HAS() the name attribute. If it does have the name attribute, we check that the data type of the name value has a TYPENAME() of "string". Additionally, we check that the name value is not an empty string. Finally, we count the number of valid names and subtract them from the number of documents in the collection. This provides us with the number of valid and invalid listing names in our collection.
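If you ever need the same validity check in the application layer, it is easy to mirror in JavaScript. A sketch of the three checks (attribute present, string-typed, non-empty):

```javascript
// Mirror of the AQL checks: HAS(), TYPENAME() == "string", and != "".
function hasValidName(doc) {
  return Object.prototype.hasOwnProperty.call(doc, "name")
    && typeof doc.name === "string"
    && doc.name !== "";
}

// Summarize valid vs. invalid names, like the COLLECT/RETURN above.
function countValid(docs) {
  const valid = docs.filter(hasValidName).length;
  return { collectionSize: docs.length, valid, invalid: docs.length - valid };
}
```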

A developer could update this type of query with other checks to evaluate data validity. You could use the results of the above query to potentially motivate a decision for multiple things, such as:
- Is this enough of an issue to investigate further?
- Is there potentially a problem with my data?
- Do I need to cast these values TO_STRING() or leave them as is?
The answers to these questions depend on the data size and complexity, as well as the application.
Price
The final value we will take a look at is the price. Our data type results informed us that the price is a string, and while looking at the listings, we saw that they contain the dollar sign symbol.

Luckily, ArangoDB has an AQL function that can cast values to numbers, TO_NUMBER().
FOR listing IN listings
UPDATE listing WITH
{
price: TO_NUMBER(
SUBSTRING(SUBSTITUTE(listing.price, ",",""), 1)
)
}
IN listings
Query Overview:
There is a lot going on in this query, so let’s evaluate it from the inside out.
We begin with the SUBSTITUTE() function, removing commas from the price (they are used as thousands separators). This step is necessary because the TO_NUMBER() function considers a value containing a comma an invalid number and would set the price to 0.
Next, we need to get rid of the $, as it would also make the value an invalid number. This is where SUBSTRING() comes into play: it accepts an offset indicating how many characters to skip at the beginning of the string. In our case, we only want to remove the first character, so we provide the number 1.
Finally, we pass in our now comma-less and symbol-less value to the TO_NUMBER() function and UPDATE the listing price with the numeric representation of the price.
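The same three-step transformation, sketched in JavaScript (the sample price string is hypothetical):

```javascript
// Mirrors SUBSTITUTE -> SUBSTRING -> TO_NUMBER from the AQL query:
// strip thousands separators, drop the leading "$", cast to a number.
function parsePrice(price) {
  const noCommas = price.replace(/,/g, ""); // SUBSTITUTE(price, ",", "")
  const noSymbol = noCommas.substring(1);   // SUBSTRING(..., 1)
  return Number(noSymbol);                  // TO_NUMBER(...)
}

console.log(parsePrice("$1,250.00")); // 1250
```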
As mentioned previously, it is sometimes helpful to RETURN values to get a better idea of what these transformations might look like before making changes. The following query shows each intermediate step of the transformation:
FOR listing IN listings
LIMIT 1
RETURN {
Price: listing.price,
Substitute: SUBSTITUTE(listing.price, ",",""),
Substring: SUBSTRING(SUBSTITUTE(listing.price, ",",""), 1),
To_Number: TO_NUMBER(SUBSTRING(SUBSTITUTE(listing.price, ",",""), 1))
}

Conclusion
Other fields could potentially be updated, changed, or removed, but those are all we will cover in this guide. As the application is developed, there will likely be even more changes that need to occur with the data, but we now have a good starting point.
Hopefully, this guide has also given you a good idea of the data exploration capabilities of AQL. We certainly didn’t cover all of the AQL functions that could be useful for data analysis and exploration, but we covered enough to get started. To continue exploring these, be sure to review the type check and cast functions and AQL in general.
Next Steps
With the data modeling and transformations complete, some next steps would be to:
- Explore the remaining collections
- Configure a GeoJSON Analyzer
- Create an ArangoSearch View
- Consider indexing requirements
- Start building the app!