
NextGen Design Collaboration by IC Manage utilizing ArangoDB


IC Manage, founded in 2002, has over 50 employees worldwide and more than 70 customers. It provides tools for silicon designers and their globally distributed teams to manage design data (store, track, secure, create, analyze, distribute).

"IC Manage provides next generation design management solutions for SoC, IP, IC and software design, enabling companies to efficiently and reliably manage single and multi-site development efforts."

by Gary Gendel, Chief Software Architect @ IC Manage, Campbell, CA

Our Challenge

We are currently working on building our next generation products to meet customer requirements for silicon designs at advanced design nodes.

The original Global Design Platform (GDP) product was created back in 2002, in close collaboration with potential customers and vendors to address their data management problems. It married Source Control Management (SCM) with a relational database to add design, tracking, and workflow meta-data, and it has been in very active development ever since.
The meta-data contained tree-like structural information to maintain the consistency of the design and allowed alternative classifications and properties to be assigned to this structure. The structure was hard-coded into the database schema, and data retrieval could require many table joins. User-defined properties could serve as hooks to bug trackers, allow alternative classification models, and so on. As the product developed and our customer base grew, some customers found our main structure a poor fit for their current design methodologies. To fix this, we introduced user-defined categories (sub-trees) that could be inserted anywhere in the main structure. However, these were second-class objects, as they had no function within our business logic.

Collecting sub-tree information along with the main tree was expensive in the relational database. We also had issues with workspace templates, which we call configurations. User and tool workspaces are created from a template. Each configuration can contain other configurations as well as building blocks which we call libraries. In a relational database, it was resource intensive to expand the top configuration level by level in order to create a workspace. We had to write highly optimized stored procedures just to achieve acceptable performance. The last issue was user-definable properties that applied hierarchically along the main structure. To maintain performance, we assigned the property to every descendant of a node; otherwise we would have had to walk the tree to collect any property, which would be unrealistic for the several thousand libraries in a configuration. The tradeoff was additional complexity in property operations.
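Conceptually, the level-by-level expansion described above is a breadth-first walk over the configuration tree. A minimal JavaScript sketch (with hypothetical data, not our actual schema) of what the stored procedure below does with temporary tables:

```javascript
// Hypothetical parent -> children map standing in for the pmConfigs /
// pmConfProperty tables; the ids are made up for illustration.
const children = {
  92838: [101, 102],      // top configuration contains two sub-configurations
  101: [201, 202],        // which in turn contain libraries / configurations
  102: [203],
  201: [], 202: [], 203: [],
};

// Expand a top configuration level by level, as the stored procedure does:
// each pass of the while loop corresponds to one round trip through the
// temporary tables in SQL.
function expandConf(topId) {
  const seen = new Set([topId]);
  let frontier = [topId];
  while (frontier.length > 0) {
    const next = [];
    for (const cid of frontier) {
      for (const child of children[cid] || []) {
        if (!seen.has(child)) { seen.add(child); next.push(child); }
      }
    }
    frontier = next;        // in SQL: rename tmpNewConfs to tmpLoopConfs
  }
  return [...seen];
}

console.log(expandConf(92838)); // all configuration ids reachable from the top
```

Each level costs a separate query round trip in the relational model, which is why deep configurations were so expensive to expand.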

Example of GDP using the GUI client with a configuration expanded (2 seconds to expand using a highly optimized stored procedure in MySQL):

2016-07-25 09:32:27.329: icmsite1:7001;CALL initExpandConf()
2016-07-25 09:32:27.634: icmsite1:7001;INSERT pmWork.tmpConfs VALUES(92838,0,0)
2016-07-25 09:32:27.694: icmsite1:7001;LOCK TABLES pmConfigs READ,pmConfProperty READ
2016-07-25 09:32:27.695: icmsite1:7001;CALL expandConfs()
2016-07-25 09:32:29.279: icmsite1:7001;UNLOCK TABLES
2016-07-25 09:32:29.279: icmsite1:7001;CALL getLibs(0)
2016-07-25 09:32:29.747: icmsite1:7001;

And the actual expansion code for expandConfs():

BEGIN
    DECLARE count INT;
    DECLARE newCount INT;
    DECLARE depth INT;
    SELECT COUNT(*) INTO newCount FROM pmWork.tmpConfs;
    CREATE TEMPORARY TABLE pmWork.tmpLoopConfs(cid INT UNIQUE KEY NOT NULL,
        parent INT NOT NULL, level INT NOT NULL) ENGINE=INNODB;
    INSERT pmWork.tmpLoopConfs SELECT * FROM pmWork.tmpConfs;
    SET count=0;
    SET depth=0;
    WHILE(newCount!=count) DO
        SET count=newCount;
        SET depth=depth+1;
        CREATE TEMPORARY TABLE pmWork.tmpNewConfs(
            cid INT UNIQUE KEY NOT NULL,
            parent INT NOT NULL, level INT NOT NULL) ENGINE=INNODB;
        INSERT IGNORE pmWork.tmpNewConfs
            SELECT pmConfProperty.refId, confId, depth
            FROM pmConfigs, pmConfProperty, pmWork.tmpLoopConfs
            WHERE confId=pmConfigs.id
                && (ptype='foreign' || type IN('composite','privateComp'))
                && confId=cid;
        INSERT IGNORE pmWork.tmpConfs SELECT * FROM pmWork.tmpNewConfs;
        DROP TABLE pmWork.tmpLoopConfs;
        ALTER TABLE pmWork.tmpNewConfs RENAME pmWork.tmpLoopConfs;
        SELECT COUNT(*) INTO newCount FROM pmWork.tmpConfs;
    END WHILE;
    DROP TABLE pmWork.tmpLoopConfs;
END

Our Solution

When we embarked on the next generation GDP product, nGDP, we focused on performance, flexibility, security, and synergy with upstream and downstream tools.

For collecting data, a graph database seemed to fit the bill nicely. Its traversal capabilities made it easy to accommodate different customers' required structures with the same business logic. For user-defined, on-the-fly properties, however, a document database would be a good fit. We looked at several database types: relational, triple-store, key-value, graph, document, and mixed capabilities. We decided that a mixed graph-document database would give us the flexibility we needed, allowing us to actively develop the product far into the future without restriction. We also decided on a client-server model rather than our existing model, which relied on database and SCM replication to maintain performance for globally scattered teams.


For our prototype we had not used ArangoDB but a competing product with extremely good Sails.js support. The prototype confirmed that a mixed graph-document database was the best choice. However, we found that the database we had chosen was not ready for production. Performance was good, but it was too unstable, and each new release changed SQL semantics enough to break our application. The breaking point was that support was virtually non-existent; questions would go unanswered. Because of this we moved to ArangoDB, and the decision turned out to be the right one. ArangoDB is very stable, fast, and development support is phenomenal. Every question was answered quickly and concisely, allowing me to convert our application to ArangoDB in a few days. After switching we could make use of ArangoDB's document and graph capabilities, and we measured tremendous performance improvements alongside a vast simplification of our code.

Example of nGDP using the nGDP Inspector debugging tool with the same configuration expanded (0.001 seconds to expand using ArangoDB and a single AQL statement).

This tool is used to validate our API interface and the database contents during development.

Note: the customer client is browser-based.

The former highly optimized stored procedure became a single line of AQL:

FOR v, e, p IN 1..50 INBOUND 'pmconfig/176489032' pm_content RETURN p.vertices

This simplification enabled us to add new features, sometimes in as little as 15 minutes, and to react very quickly to customer needs in general.

We use both the graph and document capabilities of ArangoDB. The graph provides a natural way to collect the documents: we can easily traverse a customer-designed schema while still providing the features the product needs to do its job.

It also allows alternative classification strategies to be mapped onto this schema, providing views into the data geared specifically toward different requirements. The document store allows easy addition of the data needed to interface with tools like bug tracking and check-lists, and lets client-side requirements (e.g. labels, icon information) be included as needed.
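The schemaless document side can be illustrated with a plain-JavaScript sketch (the field names are hypothetical, not our actual schema): a library vertex carries only a structural core, and tool- or client-specific data is merged in as needed, with no schema migration.

```javascript
// Hypothetical structural core of a library vertex.
const library = { _key: "lib42", name: "cpu_core", type: "library" };

// User-defined, on-the-fly property sets: a bug-tracker hook and client hints.
const bugTracker = { bugTrackerUrl: "https://bugs.example.com/lib42" };
const clientHints = { label: "CPU Core", icon: "chip" };

// In a document store this is just object merging -- new kinds of data can
// be attached to an existing vertex without touching any schema.
const enriched = { ...library, ...bugTracker, ...clientHints };

console.log(enriched.name, enriched.icon); // prints: cpu_core chip
```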


Our Benefits from using ArangoDB


We have committed our product to ArangoDB and ported our Sails code to ArangoDB's JavaScript framework Foxx, which has additionally led to improved performance. Performance testing has shown real-world nGDP tests running up to 2000x faster than in GDP. In addition, we have gained significant flexibility in the product. For example, the tree hierarchy defined in GDP can be replaced entirely to meet specific customer requirements without changing business logic. Tree node types can be optional and can be recursive.

The application is bootstrapped by a customer requirement file that generates the tree specification. In GDP we implemented some of this, but performance really suffers since tree walking requires iterated queries in a relational database.
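A sketch of what such a bootstrap might look like (the specification format here is hypothetical, not the actual nGDP one): a requirement object declares the node types, which of them are optional, and which may nest recursively, and the application derives the allowed parent/child relationships from it.

```javascript
// Hypothetical customer requirement: an ordered list of tree node types.
// "optional" levels may be skipped; "recursive" levels may contain themselves.
const requirement = [
  { type: "project" },
  { type: "variant", optional: true },
  { type: "library", recursive: true },
];

// Derive, for each node type, which child types are allowed beneath it.
function buildTreeSpec(levels) {
  const spec = {};
  levels.forEach((level, i) => {
    const allowed = [];
    if (level.recursive) allowed.push(level.type);
    // Because optional levels may be skipped, a node may also parent the
    // level after an optional one.
    for (let j = i + 1; j < levels.length; j++) {
      allowed.push(levels[j].type);
      if (!levels[j].optional) break;
    }
    spec[level.type] = allowed;
  });
  return spec;
}

console.log(buildTreeSpec(requirement));
// project may contain variant or (since variant is optional) library;
// library may contain library, because it is recursive.
```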

We use Foxx and expect to incorporate some Foxx microservices for authorization as we move toward nGDP. Moving from Sails to Foxx has shown significant benefits: deployment of a Foxx application is much simpler, performance is significantly better, and unexpected errors are handled better. For example, unhandled JavaScript exceptions return an error response to the client, whereas Sails aborts, leaving clients waiting for a response that never comes.
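The error-handling difference can be sketched with a plain-JavaScript wrapper (illustrative only, not the actual Foxx API): a framework that catches handler exceptions can still answer the client, instead of leaving the request hanging.

```javascript
// Hypothetical request handler that throws, as buggy application code might.
function brokenHandler() {
  throw new Error("unhandled exception in business logic");
}

// Foxx-style behavior (illustrative): catch the exception and turn it into
// an error response, so the client always receives an answer.
function dispatch(handler) {
  try {
    return { status: 200, body: handler() };
  } catch (err) {
    return { status: 500, body: { error: true, message: err.message } };
  }
}

const res = dispatch(brokenHandler);
console.log(res.status, res.body.message);
// prints: 500 unhandled exception in business logic
```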

It wasn’t easy to move from our old smart-client architecture using MySQL to the client-server, ArangoDB-based one. ArangoDB’s AQL language is quite different from SQL, and it took some time to understand its semantics. Support from ArangoDB on Stack Overflow was instrumental in helping us understand AQL. Once I hit the “Eureka!” moment where AQL semantics became clear, I learned to appreciate its power and elegance, especially for our graph traversal needs. Moving from Sails to Foxx was relatively easy; it took less than a day to rewrite the sails-arangojs usage for Foxx.

The simplification, significant performance gains, and stability of ArangoDB led us to the decision to migrate three of our bread-and-butter products to ArangoDB.

Importance of key characteristics of ArangoDB

Factor              not important   important   very important
Performance                                           x
Cluster                   x
Documentation                                         x
Active community                                      x
Price                                                 x

Feature set         not important   important   very important
Multi-model                                           x
AQL / JOINs                                           x
Foxx Microservices                                    x

A very big thanks to Gary Gendel, Chief Software Architect, IC Manage Inc., Campbell, CA, for investing the time to write this article!

Also using ArangoDB? Write a few lines – post it to your blog or send it to us and we’ll publish it here.