ArangoDB 1.1 Feature Preview: Batch Request API | ArangoDB 2012
Clients normally send individual operations to ArangoDB in individual HTTP requests. This is straightforward and simple, but has the disadvantage that the network overhead can be significant if many small requests are issued in a row.
To mitigate this problem, ArangoDB 1.1 offers a batch request API that clients can use to send multiple operations in one batch to ArangoDB. This method is especially useful when the client has to send many HTTP requests with a small body/payload and the individual request results do not depend on each other.
(more…)
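To make the multipart format concrete, here is a rough sketch of what such a batch body could look like, assembled as a Python string. The boundary value and the `test` collection are illustrative placeholders of our choosing; the part content type follows the batch API as documented for ArangoDB 1.1.

```python
# A sketch of the wire format of a batch request. The boundary value and
# the "test" collection are illustrative placeholders; each part wraps a
# complete inner HTTP request.
BOUNDARY = "XXXsubpartXXX"

batch_body = (
    f"--{BOUNDARY}\r\n"
    "Content-Type: application/x-arango-batchpart\r\n"
    "\r\n"
    "POST /_api/document?collection=test HTTP/1.1\r\n"
    "\r\n"
    '{"value": 1}\r\n'
    f"--{BOUNDARY}\r\n"
    "Content-Type: application/x-arango-batchpart\r\n"
    "\r\n"
    "POST /_api/document?collection=test HTTP/1.1\r\n"
    "\r\n"
    '{"value": 2}\r\n'
    f"--{BOUNDARY}--\r\n"
)

# The whole body goes out as a single request:
#   POST /_api/batch HTTP/1.1
#   Content-Type: multipart/form-data; boundary=XXXsubpartXXX
print(batch_body)
```

The server answers with one multipart response whose parts correspond, in order, to the individual requests.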
ArangoDB 2012: Gain Factor of 5 with Batch Updates
ArangoDB 1.1 will come with a new API for batch requests. This batch request API allows clients to send multiple requests to the ArangoDB server inside one multipart HTTP request. The server will then decompose the multipart request into the individual parts and process them as if they were sent individually. The communication layer can sustain up to 800,000 requests/second, but absolute numbers strongly depend on the number of cores, the type of the requests, network connections, and other factors. More important are the relative numbers: depending on your use case, you can reduce insert/update times by 80%.
(more…)
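To get a feel for where the speedup comes from, here is a hedged sketch that times n individual inserts (n network round trips) against one batch request (a single round trip). Host, port, and the `test` collection are assumptions, and absolute timings will of course differ from the figures quoted above.

```python
# Compare n individual inserts (n round trips) to one batch request
# (a single round trip). Host, port, and collection are assumptions.
import http.client
import json
import time

HOST, PORT, COLLECTION = "localhost", 8529, "test"
BOUNDARY = "XXXsubpartXXX"

def insert_individually(docs):
    conn = http.client.HTTPConnection(HOST, PORT)
    for doc in docs:  # one round trip per document
        conn.request("POST", f"/_api/document?collection={COLLECTION}",
                     json.dumps(doc), {"Content-Type": "application/json"})
        conn.getresponse().read()
    conn.close()

def insert_batched(docs):
    parts = "".join(
        f"--{BOUNDARY}\r\n"
        "Content-Type: application/x-arango-batchpart\r\n\r\n"
        f"POST /_api/document?collection={COLLECTION} HTTP/1.1\r\n\r\n"
        f"{json.dumps(doc)}\r\n"
        for doc in docs)
    conn = http.client.HTTPConnection(HOST, PORT)
    conn.request("POST", "/_api/batch", parts + f"--{BOUNDARY}--\r\n",
                 {"Content-Type": f"multipart/form-data; boundary={BOUNDARY}"})
    conn.getresponse().read()  # a single round trip for all documents
    conn.close()

docs = [{"value": i} for i in range(1000)]
for label, fn in (("individual", insert_individually),
                  ("batched", insert_batched)):
    start = time.time()
    fn(docs)
    print(f"{label}: {time.time() - start:.3f}s")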
ArangoDB 1.01 Released: What’s New? | ArangoDB 2012
Quick note: ArangoDB 1.01 is available. This is a bugfix release. Check the “ArangoDB Google group” for the changelog. By the way, a lot of interesting discussions about ArangoDB, its feature roadmap, and how it works in detail are taking place there. Binaries are always available in the download section.
ArangoDB 2012: Performance Across Different Journal Sizes
As promised in one of the previous posts, here are some performance results that show the effect of different journal sizes for insert, update, delete, and get operations in ArangoDB. (more…)
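For context, the journal size can be set per collection at creation time. Here is a minimal sketch, assuming a local server on the default port and using the journalSize attribute (in bytes) of the collection creation API; the collection names are our own.

```python
# Create collections with different journal sizes via the journalSize
# attribute (value in bytes). Host, port, and names are assumptions.
import http.client
import json

conn = http.client.HTTPConnection("localhost", 8529)
for size_mb in (4, 8, 16, 32):
    body = json.dumps({
        "name": f"journaltest_{size_mb}mb",
        "journalSize": size_mb * 1024 * 1024,
    })
    conn.request("POST", "/_api/collection", body,
                 {"Content-Type": "application/json"})
    resp = conn.getresponse()
    resp.read()  # drain the response so the connection can be reused
    print(f"{size_mb} MB journal: HTTP {resp.status}")
conn.close()
```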
Get 20% Off: NoSQL Matters Barcelona | ArangoDB 2012
We are on the road again and have been invited to give a talk at “NoSQL matters” in Barcelona, a one-day conference in an amazing-looking venue (a UNESCO World Heritage site).
The conference team has offered us a couple of promo codes for “NoSQL matters” on October 6th. Katja, one of the organizers, writes:
“there might be some friends, colleagues, contacts or even your followers on twitter who are interested in hearing your talk at NoSQL matters Barcelona. Therefore we would like to give them the opportunity to buy price reduced tickets. With the promotion code BCNSchoenert_7959 you can give 5 of them the chance to buy a ticket with 20% discount.”
So, here we are. Grab the code and get your ticket. We are looking forward to meeting you in Spain.
Bulk Inserts Comparison: MongoDB, CouchDB, ArangoDB ’12
In the last couple of posts, we looked at ArangoDB's performance for individual document insert, delete, and update operations. This time we'll look at batched inserts. For reference, we'll compare ArangoDB's results to what can be achieved with CouchDB and MongoDB.
(more…)
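As a reference point, here is a minimal sketch of a batched insert via ArangoDB's import API, which accepts one JSON document per line. The query parameters and the `test` collection are assumptions on our part.

```python
# Bulk upload 10,000 documents in one request via the import API.
import http.client
import json

docs = [{"value": i} for i in range(10000)]
payload = "\n".join(json.dumps(d) for d in docs)  # one document per line

conn = http.client.HTTPConnection("localhost", 8529)
conn.request("POST", "/_api/import?type=documents&collection=test",
             payload, {"Content-Type": "application/json"})
print(conn.getresponse().read())  # e.g. {"error":false,"created":10000,...}
conn.close()
```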
Bulk Insert Benchmark Tool | ArangoDB 2012
To easily conduct bulk insert benchmarks with different NoSQL databases, we put together a small benchmark tool in PHP. The tool can be used to measure the time it takes to bulk upload data into MongoDB, CouchDB, and ArangoDB using the databases' bulk documents APIs.
(more…)
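The PHP tool itself is linked in the full post; purely to illustrate the measurement it performs, here is a hedged Python sketch that builds one bulk payload per database and times the upload over HTTP. MongoDB is left out of this sketch because its bulk inserts go through a driver rather than a plain HTTP endpoint; hosts, ports, and the database/collection names (`test`) are assumptions.

```python
# Build one bulk payload per database, then time the upload.
import http.client
import json
import time

docs = [{"value": i} for i in range(10000)]

targets = {
    # ArangoDB import API: one JSON document per line
    "arangodb": ("localhost", 8529,
                 "/_api/import?type=documents&collection=test",
                 "\n".join(json.dumps(d) for d in docs)),
    # CouchDB bulk docs API: a JSON object with a "docs" array
    "couchdb": ("localhost", 5984, "/test/_bulk_docs",
                json.dumps({"docs": docs})),
}

for name, (host, port, path, payload) in targets.items():
    conn = http.client.HTTPConnection(host, port)
    start = time.time()
    conn.request("POST", path, payload,
                 {"Content-Type": "application/json"})
    conn.getresponse().read()
    print(f"{name}: {time.time() - start:.3f}s")
    conn.close()
```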
ArangoDB 2012: Additional Results for Mixed Workload
In a comment on the last post, there was a request to conduct some benchmarks with a mixed workload that does not test insert/delete/update/get operations in isolation, but exercises them together.
To do this, I put together a quick benchmark that inserts 10,000 documents, and after each insert either
- directly fetches the inserted document (i.e. insert / get),
- updates the inserted document and retrieves it (i.e. insert / update / get), or
- deletes it (i.e. insert / delete).
The three cases are alternated deterministically, meaning each case occurs with the same frequency and in the same order; a sketch of the loop follows below. It is probably still not the perfect test case, but at least it reflects a mixed read and write workload.
The document ids used in the test were monotonically increasing integers, starting from some base value. That means no random values were used.
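Here is the sketch referred to above: a hedged reconstruction of the benchmark loop against ArangoDB's document API. For brevity it reuses the server-assigned document handle instead of the monotonically increasing integer ids of the real test; host, port, and the `test` collection are assumptions, and error handling is omitted.

```python
# Insert a document, then deterministically alternate between
# get, update+get, and delete, as described in the post.
import http.client
import json

conn = http.client.HTTPConnection("localhost", 8529)

def call(method, path, body=None):
    conn.request(method, path,
                 json.dumps(body) if body is not None else None,
                 {"Content-Type": "application/json"})
    return json.loads(conn.getresponse().read())

N = 10000
for i in range(N):
    handle = call("POST", "/_api/document?collection=test",
                  {"value": i})["_id"]
    if i % 3 == 0:    # case 1: insert / get
        call("GET", f"/_api/document/{handle}")
    elif i % 3 == 1:  # case 2: insert / update / get
        call("PATCH", f"/_api/document/{handle}", {"value": i + 1})
        call("GET", f"/_api/document/{handle}")
    else:             # case 3: insert / delete
        call("DELETE", f"/_api/document/{handle}")
conn.close()
```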
The test was repeated for 100,000 documents as well; that dataset still fully fits in RAM. The tests were run in the same environment as the previous tests, so the results are directly comparable.
The results are in line with the results shown in the previous post. Here’s the chart with the results of the 10,000 documents benchmark:
And here are the test results for the 100,000 documents benchmark:
Data Modeling in a Schema-Free Environment | ArangoDB 2012
We just came back from FrOSCon, a large, international open source conference near Bonn, Germany. Jan Steemann, one of the core developers of ArangoDB, gave a talk on modelling data in a schema-free world. Jan was given the largest room of the conference for this talk; fortunately, a lot of people showed up and even stayed ;-).
You can find Jan’s presentation below.
ArangoDB vs. CouchDB Benchmarking | ArangoDB 2012
A side-effect of measuring the impact of different journal sizes was that we generated some performance test results for CouchDB, too. They weren’t included in the previous post because it was about journal sizes in ArangoDB, but now we think it’s time to share them.
(more…)