Building an RPC framework for the Rust programming language

Once I realized how much work it would take to make systems programming in Rust a little easier, I decided to take the chance to implement some basic distributed-system features, such as the Raft consensus algorithm. The foundation of a distributed system is a mechanism for servers to communicate with each other. RPC comes with a predefined protocol for each command, like a URL with parameters, which makes it ideal in large systems.

I have built two kinds of systems that rely on RPC. Although both are designed to let a client invoke functions on another server, the actual procedures are totally different.

The first one was a Clojure project. Because Clojure code can be compiled at runtime, and a function can be looked up by its namespace and symbol, developing on this framework was dead simple: delivering the full name of the function and its parameters in a map is all an invocation needs. This way, the functions to invoke are all ordinary ones, and no definition files are required. It looks convenient, but it is inefficient due to function-name encoding/decoding and parameter maps carrying full key names, and this RPC is only available to Clojure applications, which means programs in other languages cannot understand the data easily.

In the second project, I used Protocol Buffers from Google instead. Protocol Buffers requires developers to define every command, with its parameters and return values, as messages in a definition file. Google built tools to compile those files into source code in the languages we want. It is far more efficient than my previous home-brew implementation, and it can also deliver messages between applications built in different programming languages. But maintaining a protobuf definition file is cumbersome and not agile enough, and things may break after recompilation.

When searching for an RPC framework for my Rust projects, I wanted it to be as flexible as what I built for Clojure, but also efficient. I tried protobuf and the even faster Cap'n Proto, but was not satisfied. I also could not simply copy the approach I used in Clojure, because Rust is statically typed for performance, and there is no way to link to a function at runtime from a string. Then I found tarpc (yet another project from Google); I was inspired by its simplicity and decided to build my own.

The most impressive part I took from tarpc is the service macro, which translates a simple series of statements into a trait and a bunch of helper code for encoding, decoding, and the server/client interfaces. tarpc is more complex because it also supports async calls, but the basics are the same. We still need to define what a client can call, but the protocol definition exists only in the code. Developers define the interface with the service macro, implement the server trait, and generate server/client objects. For example, a simple RPC server protocol can look like this:

service! {
    rpc hello(name: String) -> String;
}

This generates the code required for both server and client. Next we need to implement the actual server behavior for hello, like this:

struct HelloServer;

impl Server for HelloServer {
    fn hello(&self, name: String) -> Result<String, ()> {
        Ok(format!("Hello, {}!", name))
    }
}

To start the server, we first need an address to listen on (left empty in this excerpt):

let addr = String::from("");

To create a client and say hello to the server:

let mut client = SyncClient::new(&addr);
let response = client.hello(String::from("Jack"));

The response should already contain the string returned from the server.

let greeting_str = response.unwrap();
assert_eq!(greeting_str, String::from("Hello, Jack!"));

It looks simple and idiomatic, just like defining a function in Rust itself. There is no need to create another file and compile it every time we modify anything; this is because the service macro does all of the job for us under the hood.

Most of the work is done at compile time: function names are hashed into integers for network transport, and the parameters of each function are encoded into a struct for serialization. This requires developers to configure a compiler plugin, but the performance gain is worth the effort.
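As a sketch of the name-hashing idea: each method name maps to a fixed-width integer, so the wire format carries a few bytes instead of a string. FNV-1a is my illustrative stand-in here, not necessarily the hash tarpc or my framework uses, and `method_id` is a made-up name.

```rust
// Map an RPC method name to a fixed-width integer id, so the wire
// format carries 8 bytes instead of a full string. In the real
// framework, the service macro could compute this at compile time.
fn method_id(name: &str) -> u64 {
    // FNV-1a 64-bit constants
    const FNV_OFFSET: u64 = 0xcbf29ce484222325;
    const FNV_PRIME: u64 = 0x100000001b3;
    let mut hash = FNV_OFFSET;
    for byte in name.as_bytes() {
        hash ^= *byte as u64;
        hash = hash.wrapping_mul(FNV_PRIME);
    }
    hash
}

fn main() {
    // Both peers derive the same id from the same signature string.
    println!("hello -> {:x}", method_id("hello"));
}
```

Because both sides run the same hash over the same declaration, no id table ever needs to be shipped or kept in sync by hand.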

There is still more work to do to improve the RPC framework, such as adding promise pipelining, async calls, version control, etc. My next task is to understand and implement the Raft algorithm on top of this RPC framework. I have put my code on GitHub if you are interested in taking a look.

Language choice for my next pivot

I have worked on distributed systems and systems programming for a while, implementing a key-value store based on the RAMCloud design and some other smaller middleware. After drilling into high-performance programming, I found it hard to take control from the JVM to do the kind of significant optimization that really matters for a database system. I tried to overcome some of the problems by keeping my data off-heap, to gain memory efficiency without wasting space on object headers, but this is not what the JVM intends for developers to do, and it backfires (the JVM crashes). The most severe problem is the GC found in every higher-level programming language that tries to free developers from memory management. The JVM in OpenJDK does a great job in most circumstances, but it always falls short in performance-critical tasks, due to stop-the-world GC pauses and lag. The JVM plus the Clojure runtime is also burdensome for this kind of project: Morpheus spends about 1.6 GB of RAM just to start up, which makes it impossible to run on embedded devices and cheap virtual machines.

I was looking for a language with a minimal runtime: multi-platform, running on bare metal without a virtual machine, super fast, expressive, not requiring manual memory management yet still efficient, and suitable for my system software designs. I wrote Go for a while, expecting it to be my weapon of choice, because it is led by my favourite company, Google, and it is also quite famous in the community. At first I did have a great experience: the language itself is very easy to learn (it took me about one day), its ecosystem is well developed, and the toolchain is not painful. But when I tried to build something a bit larger, I found it missing some features I expected.

First of all, it does not support generics, because the Go team wants to keep the language clean and thinks they are unnecessary. So I had to convert the type of every value I got from a hash map, and to keep my functions flexible I had to write long and unwieldy interface{} expressions. Writing a sorting comparator is also painful because of the missing language features. Second, it has no macros. After working with Lisp-like languages, I found macros to be the key feature for saving time on repetitive code structure, and for gaining performance by doing work at compile time. For some reason, Go just does not have such a feature. Third, Go is not that fast for a systems programming language: in some standard benchmarks, Go is unable to outrun Java, its garbage collector is not as well developed as Java's, and a simple hello-world HTTP server fails to beat Netty in almost every case.

I really thought I had to go back to the C/C++ way, like most of the popular systems did, until I found another programming language named Rust, made by Mozilla and used to power their next-generation web browser engine, Servo. I had heard about Rust for a while, known for its unique strict checks and scope rules that manage memory at compile time, like ARC from Apple's LLVM but stricter, ensuring no memory leaks. It seems to suit my needs, because I want to manage memory myself only when I have to. Rust provides scope rules based on ownership and borrowing, and I can also manage memory on my own if I do the job in an unsafe block. Even when I use my own way to manage memory, I can still use its scope rules to release it, like this:

impl Chunk {
    fn new(size: usize) -> Chunk {
        // allocate raw memory from libc and record its address
        let mem_ptr = unsafe { libc::malloc(size) } as usize;
        Chunk {
            addr: mem_ptr,
        }
    }

    fn dispose(&mut self) {
        info!("disposing chunk at {}", self.addr);
        unsafe {
            libc::free(self.addr as *mut libc::c_void);
        }
    }
}

impl Drop for Chunk {
    fn drop(&mut self) {
        self.dispose();
    }
}

After using malloc from libc, I don't need to explicitly decide when to free; I let Rust do the job by implementing the Drop trait. This lets me combine the power of both C and Rust itself. The concepts of ownership and borrowing seem annoying in the beginning, because pointers and objects cannot be passed between functions freely, but they make sense when you consider the problem at a deeper level. Rust has crafted fine rules to avoid the common mistakes programmers make in both concurrency and memory management.

Rust is light: you can even write an operating system with it, and some universities have already asked their students to do so. It is cross-platform because it compiles with LLVM. Rust has a pattern-match-based macro system, with more in progress. It has Cargo for package management. It supports almost every feature a modern language should have: blocks, lambdas, type inference, C library interop, and more.

There seems to be nothing wrong with Rust itself; I quite like this language after a week of trial. The downside is that it is still favoured by only a small group of people. Its community is active and the people are nice, but there are not many libraries you can just pick up and use. In my case, I couldn't find a well-maintained concurrent hash map and had to build one on my own. Well, that's how open source works.

I have already halted development of the JVM-based Morpheus and Nebuchadnezzar, and started to redo what I have done in Rust. Although I can inherit most of the ideas I learned from the previous version, I found it more challenging, because there are more operating-system-related concepts involved, and some of the work that had been done by others in Java I have to port to Rust on my own. I respect the power of community: the reason the Java ecosystem is so powerful is nothing but the accumulation of work from community members, year after year.

Lessons learned about locks

Locks are handy and unavoidable in parallel programming, especially in applications like databases, where multiple threads may modify one piece of data. Locks make sure only one thread can access and modify the data at a time.

The Morpheus key-value store relies on locks heavily: there are locks for segments in trunks, and locks for cells. Recently, I have been trying to improve the performance of the store and reduce its memory footprint.

My first attempt was to replace the CellMeta Java objects with plain addresses (a long) for each cell index. This saves at least 24 bytes per cell, which is a considerable amount of space when there may be millions of cells per node. But there is one compromise. Originally I used the synchronized keyword to lock on cells; once the object entity is replaced by an address, there is no object to synchronize on. My first solution was to create 1024 locks for each trunk, with each cell assigned to one lock according to its hash. That means one lock may correspond to multiple cells. It looks like a workable approach; even the JDK uses this principle to implement its ConcurrentHashMap. But it will produce deadlocks if you try to build a lock chain in parallel. A lock chain means one cell is locked while another cell is already locked; when two or more locks are acquired in different orders, deadlocks occur. In the key-value store's case, deadlocks may happen even when the lock targets are unrelated, because cells may share the same lock.

After discovering this defect, I implemented a lock cache. Because I don't want to create a lock for every cell up front, I only want lock objects to be created when needed, to save memory. When a cell is asked to be locked, the store tries to find the lock in the cache by cell hash; if found, that lock is locked, otherwise a new lock is created. When a lock is released and its hold count for the cell drops to zero, the lock is removed from the cache and left for the GC to reclaim. The cache has a small impact on performance, but it is more memory efficient. It is equivalent to the previous synchronized approach, a bit heavier, but able to provide read-write lock semantics rather than a spin-lock monitor.
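The lock cache can be sketched roughly as below. This is in Rust rather than the store's actual Java/Clojure code, and `LockCache`, `acquire`, and `release` are hypothetical names; the point is that locks are created on demand, keyed by cell hash, and evicted once no thread holds them.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, RwLock};

// A sketch of the per-cell lock cache: a lock object is created only
// when a cell is first locked, and evicted once nobody holds it.
struct LockCache {
    locks: Mutex<HashMap<u64, Arc<RwLock<()>>>>,
}

impl LockCache {
    fn new() -> LockCache {
        LockCache { locks: Mutex::new(HashMap::new()) }
    }

    // Fetch the lock for a cell hash, creating it on demand.
    fn acquire(&self, cell_hash: u64) -> Arc<RwLock<()>> {
        let mut map = self.locks.lock().unwrap();
        map.entry(cell_hash)
            .or_insert_with(|| Arc::new(RwLock::new(())))
            .clone()
    }

    // Evict the lock if the cache holds the only remaining reference,
    // i.e. no thread still uses it.
    fn release(&self, cell_hash: u64) {
        let mut map = self.locks.lock().unwrap();
        let evict = match map.get(&cell_hash) {
            Some(l) => Arc::strong_count(l) == 1,
            None => false,
        };
        if evict {
            map.remove(&cell_hash);
        }
    }
}

fn main() {
    let cache = LockCache::new();
    let l = cache.acquire(42);
    {
        let _guard = l.write().unwrap(); // exclusive access to cell 42
    }
    drop(l);
    cache.release(42); // nobody holds it anymore, so it is evicted
}
```

Note that an `RwLock` stands in for the read-write lock feature mentioned above; in Java this would be a `ReentrantReadWriteLock` kept in a concurrent map.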

I used reentrant locks at first, but I didn't fully understand what the term 'reentrant' means. It means you can lock the same lock in the same thread more than once without causing a deadlock[1]. This feature is useful for writing cleaner code without worrying about lock status. The rule of using a lock is that lock and unlock must appear in pairs, regardless of what happens in between, even if an exception is thrown. In Java and Clojure, unlock should be put in the finally block of a try clause; the catch block is irrelevant. When using a reentrant lock, the number of lock calls must equal the number of unlock calls, or deadlocks and exceptions will occur. This rule looks easy, but in some situations developers still need to track the locks, like reference counting in old-school iOS programming without ARC. I made the mistake of using guards like if (!l.isHeldByCurrentThread()) l.lock() and if (l.isHeldByCurrentThread()) l.unlock(). This makes the lock acquired at an outer layer but released at an inner layer, which causes inconsistency.

Programming concurrent applications is delicate work. Developers have to consider every possible corner case and use locks prudently. Using locks too much, or in unnecessary places, damages performance, but not using them when required causes unexpected states and data corruption. Some applications claim better performance than others but are unstable in practice, likely because the developers traded stability for performance. Many lock problems are also hard to test without production-level stress; a mature project should test all possible circumstances before release.

Graph traversal in distributed systems

Last month I finished depth-first search and breadth-first search in Morpheus. Morpheus, built on a hash-distributed key-value store, needs to traverse vertices in a distributed and even parallelized fashion.

The first thing that came to my mind was building a message-passing infrastructure. It is almost the same as the RPC in the cluster connector, but uses a more succinct, predetermined data format to improve performance. Message passing is asynchronous: the sender does not wait for the receiver to complete the actual processing, only for the delivery. Each message for a task carries a 128-bit ID, so nodes can tell which task a message belongs to. Each message also corresponds to one action, which lets the message dispatcher invoke the function bound to that action. Just like the RPC in the cluster connector, if the receiver is the same node as the sender, the message does not go through the network interface, and is not even serialized or deserialized for transmission.
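A minimal sketch of such a dispatcher, with the local-delivery shortcut (names like `Dispatcher` and `send` are illustrative, not Morpheus' actual API):

```rust
use std::collections::HashMap;

// Each message carries an action id; the dispatcher invokes the
// handler bound to that action. Handlers take and return raw bytes.
type Handler = fn(&[u8]) -> Vec<u8>;

struct Dispatcher {
    node_id: u64,
    actions: HashMap<u32, Handler>,
}

impl Dispatcher {
    fn new(node_id: u64) -> Dispatcher {
        Dispatcher { node_id, actions: HashMap::new() }
    }

    fn register(&mut self, action: u32, handler: Handler) {
        self.actions.insert(action, handler);
    }

    // When the receiver is this node, call the handler directly and
    // skip serialization and the network stack entirely.
    fn send(&self, receiver: u64, action: u32, payload: &[u8]) -> Vec<u8> {
        if receiver == self.node_id {
            (self.actions[&action])(payload)
        } else {
            // Here the payload would be serialized and sent over the wire.
            unimplemented!("network path not shown in this sketch")
        }
    }
}

fn main() {
    let mut d = Dispatcher::new(1);
    // Action 7: a toy handler that reverses the payload bytes.
    d.register(7, |payload| payload.iter().rev().cloned().collect());
    let reply = d.send(1, 7, b"abc"); // local call, no serialization
    assert_eq!(reply, b"cba".to_vec());
}
```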

Both the DFS and BFS algorithms are built on this infrastructure, but their behaviours vary. DFS cannot be parallelized: to conduct DFS on a distributed system, the stack of visited and discovered nodes must be transferred between nodes, one at a time, for updates. BFS, on the other hand, can be parallelized by having each node discover the children of the vertices it owns at each level. We will discuss how these two kinds of graph traversal are implemented in Morpheus in the next few sections.

DFS in Morpheus adapts ideas from the thesis of S.A.M. Makki and George Havas, passing the vertex stack between nodes for updates. This method is single-threaded, because depth-first search is inherently sequential. DFS in Morpheus is a good choice when the subgraph containing the start vertex is not very large. It is also suitable for conventional traversal queries from users, like detecting links and the existence of a path.

BFS in Morpheus is more complicated. Currently, Morpheus supports parallel search within each level, relying on a nested fork-join multithreading pattern, illustrated in the figure below.


Consider the nodes containing the vertices represented as A, B, C in 'Parallel Task I'; 'Main Thread' as the coordinator server (which may be any node in the cluster); and 'Parallel Task I', 'Parallel Task II', 'Parallel Task III' as the search tasks for each level. One BFS request contains the start vertex, the end vertex, and the search rules. Here is what happens when the search begins.

  1. The coordinator puts the first vertex into the search list, sends it to the node that owns the vertex (by its id hash), and waits for the return message.
  2. The node receives the search list and fetches the neighbour ids of its vertices in parallel, according to the search rules.
  3. When all parallel searches in the list have ended, the node sends the search results back to the coordinator as a return message.
  4. The coordinator may receive return messages from multiple servers (not when there is only one vertex in the search list). As each return message arrives, it updates the local task status for each vertex, recording whether it has been visited, its level, and its parent.
  5. After the coordinator has received all of the return messages, it extracts the vertices for the next level from the local task status, partitions them into search lists by owning server, and sends the lists to their servers.
  6. The level advances and step 2 takes over again. The iteration continues until the stop condition is fulfilled, or the maximum level in the search rules is reached while executing step 5.
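The level-synchronous loop in the steps above can be sketched as a single-process simulation, with id hashing standing in for cluster ownership (all names and the three-node "cluster" are illustrative):

```rust
use std::collections::{HashMap, HashSet};

// Each level: partition the frontier by owning node (step 5), let each
// node expand its list (steps 2-3, sequential here), then merge the
// results into the task status (step 4).
fn bfs_levels(adj: &HashMap<u32, Vec<u32>>, start: u32, max_level: usize) -> HashMap<u32, usize> {
    let num_nodes: u32 = 3; // pretend cluster size
    let owner = |v: u32| (v % num_nodes) as usize; // ownership by id hash

    let mut level_of: HashMap<u32, usize> = HashMap::new();
    level_of.insert(start, 0);
    let mut frontier: Vec<u32> = vec![start];

    for level in 0..max_level {
        // Step 5: partition the frontier into per-node search lists.
        let mut search_lists: Vec<Vec<u32>> = vec![Vec::new(); num_nodes as usize];
        for v in &frontier {
            search_lists[owner(*v)].push(*v);
        }
        // Steps 2-3: each "node" expands its own search list.
        let mut discovered: HashSet<u32> = HashSet::new();
        for list in &search_lists {
            for v in list {
                if let Some(neighbours) = adj.get(v) {
                    for n in neighbours {
                        discovered.insert(*n);
                    }
                }
            }
        }
        // Step 4: merge return messages into the local task status.
        frontier = discovered
            .into_iter()
            .filter(|v| !level_of.contains_key(v))
            .collect();
        for v in &frontier {
            level_of.insert(*v, level + 1);
        }
        if frontier.is_empty() {
            break; // stop condition: nothing new was discovered
        }
    }
    level_of
}

fn main() {
    let mut adj = HashMap::new();
    adj.insert(1, vec![2, 3]);
    adj.insert(2, vec![4]);
    adj.insert(3, vec![4]);
    let levels = bfs_levels(&adj, 1, 10);
    assert_eq!(levels[&4], 2); // reached on the second level
}
```

In the real system the per-node expansion runs concurrently on different servers, and the merge happens as return messages arrive; here both are collapsed into plain loops.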

The idea for this algorithm was adapted from Lawrence Livermore National Laboratory's work on BlueGene. But I think I have not yet fully replicated their design: in my implementation, the coordinator becomes the bottleneck in steps 4 and 5, and caching the vertices that have already been discovered costs a lot of memory, which is why I also provide an on-disk cache for this case. BFS is used to find shortest paths: faster, but more resource-consuming, path finding. It may also become the backbone of many other algorithms, such as fraud detection and community discovery.

To distribute task information, such as the task id and parameters, to each node in the cluster, we need a globally shared hash map whose changes can be propagated to every node. I use Paxos to do the job. There are also some other components, like a disk-based hash map, that may come in handy. I packed these computation-related pieces into the hurricane component.

If you want to take a closer look at my actual implementation, please refer to: DFS, BFS

Building a Graph Engine -- Computation Engines

Morpheus is a distributed graph engine. Once I resolved its storage problem, I started to build its computation engines. If I left the project at its current stage, as a pure high-performance graph storage, users would have to write their own algorithms. Most importantly, client-side traversal may cause performance issues in communication.

Currently Morpheus uses message passing to implement BSP and other traversal-based computation models. To reduce the size of each message, Morpheus can distribute task parameters to all of the machines in the cluster. The current commit already contains distributed depth-first search for path search, subgraph construction for a vertex, and bulk update; but I haven't finished all of them yet, because the conditions for stopping a search or an update require a query language in the system, like Cypher in Neo4j, which I am working on to provide flexible search and update functionality.

I haven't decided on the format of the query language. A Lisp-like expression (S-expression, or Polish notation) is preferable because its parser is easy to implement. I will not simply eval the code, due to security concerns and performance (eval in Clojure generates a new class whose life cycle is not manageable, which may lead to memory leaks). The problem with S-expressions is that few people are used to them: people usually read 1 + 1 = 2, and an expression like (= (+ 1 1) 2) looks strange. Another advantage of S-expressions is that they are more succinct: 1 + 2 + 3 + 4 can be represented as (+ 1 2 3 4).
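To show why the parser is easy to implement: a full tokenizer and evaluator for integer S-expressions fits in a few dozen lines. This is a toy illustration, not the query language itself, and it only knows `+` and `=`.

```rust
// A toy S-expression reader and evaluator for integer arithmetic.
enum Expr {
    Num(i64),
    Sym(String),
    List(Vec<Expr>),
}

// Pad parentheses with spaces, then split on whitespace.
fn tokenize(src: &str) -> Vec<String> {
    src.replace('(', " ( ")
        .replace(')', " ) ")
        .split_whitespace()
        .map(|s| s.to_string())
        .collect()
}

// Recursive-descent parse: a token is a number, a symbol, or a list.
fn parse(tokens: &[String], pos: &mut usize) -> Expr {
    let tok = tokens[*pos].clone();
    *pos += 1;
    if tok == "(" {
        let mut items = Vec::new();
        while tokens[*pos] != ")" {
            items.push(parse(tokens, pos));
        }
        *pos += 1; // consume ')'
        Expr::List(items)
    } else if let Ok(n) = tok.parse::<i64>() {
        Expr::Num(n)
    } else {
        Expr::Sym(tok)
    }
}

fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Sym(_) => panic!("unbound symbol"),
        Expr::List(items) => {
            let op = match &items[0] {
                Expr::Sym(s) => s.as_str(),
                _ => panic!("operator expected"),
            };
            let args: Vec<i64> = items[1..].iter().map(eval).collect();
            match op {
                "+" => args.iter().sum(),          // variadic, like (+ 1 2 3 4)
                "=" => args.windows(2).all(|w| w[0] == w[1]) as i64,
                _ => panic!("unknown operator"),
            }
        }
    }
}

fn run(src: &str) -> i64 {
    let tokens = tokenize(src);
    eval(&parse(&tokens, &mut 0))
}

fn main() {
    assert_eq!(run("(+ 1 2 3 4)"), 10);
    assert_eq!(run("(= (+ 1 1) 2)"), 1); // truthy
}
```

Because the grammar is just atoms and parentheses, the whole reader is one recursive function; this is the "easy implementation" advantage mentioned above.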

Morpheus also has a code-distribution feature. Users can write plain code, even packed as a Leiningen project, send it to the cluster, and run it remotely without compiling anything. This is useful if users want to run their own algorithms on the cluster. There should be some sort of API or framework for this, and I haven't done it yet.

Future plans include an RDD-based computation module for bulk data processing and map-reduce computation models, like GraphX in Apache Spark. It may treat vertices and edges as documents that users can join with other sources. This computation model may not comply with graph traversal models, but it is useful when relations are not the first-class problem. The RDD module may need a dedicated memory management policy, because the memory required for computation is unpredictable; RDDs may have to be temporarily flushed to disk instead of persisting in memory, for example in an LRU manner.

Computation models seem to be a much more challenging topic in this project, and I have very little time to finish the job. That is why I consider Morpheus my second personal long-term project (the first is the WebFusion project, which is paused and will continue after Morpheus).


Nebuchadnezzar, finally evolved

After one month of desperate tests and adjustments (I nearly wrecked one of the hard drives in my test machine), I think I have found a reasonable design for the key-value store.

The hard part of this project is memory management. My previous design allocated a large amount of memory for each trunk and appended data cells at an append header. When a cell was deleted or became obsolete, a defragmentation thread moved the living cells to fill the fragments. That means that if there is a fragment at the very beginning of the trunk, most of the cells in the trunk will be moved in one cycle, and the fragment space is only reclaimed after a full round of defragmentation. Usually the fragment space was reclaimed so slowly that the trunk could not take frequent update operations. I used to apply a 'slow mode' in the system, but finally realized it was a stupid idea.

I also tried to place new cells into the fragments, but that caused performance issues. The defragmentation process has to compete with other operation threads for fragments, and the locks required to keep things consistent can heavily slow down the whole system.

After I wrote my previous blog post at the Celebrate Ubuntu Shanghai Hackathon, I nearly ditched the design mentioned above and set out to implement a partly log-structured design, borrowed from RAMCloud. It took me only one night to rewrite that part. I did not fully follow the design in the paper; in particular, I do not append tombstones to the log. Instead, I write tombstones into the space that belonged to the deleted cell, because the paper spends a large number of paragraphs on how to track tombstones and reclaim their space, and I found that approach has too much overhead. The major improvement over the original design is dividing each trunk into 8MB segments. Each segment tracks its own dead and alive objects, and the maximum object size is reduced from 2GB to 1MB. This design has proved efficient and manageable. Segmentation makes it possible to start multiple threads and defragment segments within one trunk in parallel, which significantly improves defragmentation performance. When a writer tries to acquire new space from a trunk, it only needs to find one segment with sufficient space. In most common situations, the defragmentation process can always finish on time.
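The segment acquisition path can be sketched like this (struct and method names are illustrative, not the real implementation):

```rust
// Sketch of the segmented trunk: a writer looks for any segment with
// enough free space instead of appending at one trunk-wide header.
const SEGMENT_SIZE: usize = 8 * 1024 * 1024; // 8MB segments

struct Segment {
    append_at: usize, // bytes already used in this segment
}

struct Trunk {
    segments: Vec<Segment>,
}

impl Trunk {
    // Find a segment that can hold `size` bytes and reserve the space,
    // returning (segment index, offset within segment) on success.
    fn acquire(&mut self, size: usize) -> Option<(usize, usize)> {
        for (i, seg) in self.segments.iter_mut().enumerate() {
            if SEGMENT_SIZE - seg.append_at >= size {
                let offset = seg.append_at;
                seg.append_at += size;
                return Some((i, offset));
            }
        }
        None // trunk full: defragmentation has to reclaim space first
    }
}

fn main() {
    let mut trunk = Trunk {
        segments: (0..4).map(|_| Segment { append_at: 0 }).collect(),
    };
    assert_eq!(trunk.acquire(1024), Some((0, 0)));
    assert_eq!(trunk.acquire(1024), Some((0, 1024)));
}
```

Because each segment keeps its own bookkeeping, several defragmentation threads can compact different segments of one trunk at the same time without contending on a single append header.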

I also redesigned the backup process. With trunks divided into segments, synchronizing memory data to disk is much more straightforward. One thread per trunk keeps detecting dirty segments and sends them to remote servers for backup, one at a time. A dirty segment is a segment containing new cells. Segments with only tombstones are not considered dirty and need no backup, because the system tracks a version for each cell: in the recovery process, cells with a lower version are replaced by a higher version, but a higher one is never replaced by a lower one.

At the application level, the 1MB object size imposes a limitation. In Morpheus, it is quite possible for one cell to exceed it, especially an edge list: one vertex may have a large number of edges in or out. But this limitation can be overcome by implementing a linked list. Each edge-list cell contains an extra field indicating the id of the next list cell; when the list needs to be iterated, we follow the links to retrieve all of the edges.
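A sketch of the linked edge-list idea, with a plain hash map standing in for the store and made-up cell ids:

```rust
use std::collections::HashMap;

// Each edge-list cell holds a batch of edges, plus the id of the next
// cell in the chain (None for the last cell). This keeps every cell
// under the 1MB limit while the logical list can grow without bound.
struct EdgeListCell {
    edges: Vec<u64>,
    next: Option<u64>, // id of the next edge-list cell, if any
}

// Follow the chain of cells and collect every edge.
fn all_edges(store: &HashMap<u64, EdgeListCell>, head: u64) -> Vec<u64> {
    let mut edges = Vec::new();
    let mut cursor = Some(head);
    while let Some(id) = cursor {
        let cell = &store[&id];
        edges.extend_from_slice(&cell.edges);
        cursor = cell.next;
    }
    edges
}

fn main() {
    let mut store = HashMap::new();
    store.insert(1, EdgeListCell { edges: vec![10, 11], next: Some(2) });
    store.insert(2, EdgeListCell { edges: vec![12], next: None });
    assert_eq!(all_edges(&store, 1), vec![10, 11, 12]);
}
```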

Although there are still some bugs in Nebuchadnezzar, and I don't mean to recommend it for production use, I am glad to find that this project has made some progress, instead of becoming another of my short-term fever projects.

Some thoughts about the storage system for the in-memory key-value store

I was struggling with how to implement the durability feature for the key-value store, and I have finished my first attempt at writing memory data to stable storage. The paper from Microsoft did not give me much of a clue: it just mentions that they designed a file system called TFS, similar to HDFS from Apache Hadoop, without any more information about how and when to write memory data into the file system. My first (and current) design for memory management followed their implementation. After some tests that imported a large amount of data into the store and then frequently updated the inserted cells, I realised the original design from Trinity is problematic.

Memory management, also called defragmentation here, is used to reclaim space from deleted cells. In update operations especially, the store marks the original space as a fragment and appends the new cell at the append header. If the space is not reclaimed in time, the container fills up and no more data can be written. The approach in my previously presented article has a major flaw: it copies all of the cells that follow the position of the fragment. That means that if we mark the first cell in the trunk as a fragment, the defragmentation process will copy every other cell in the trunk. In some harsh environments, the defragmentation process will never catch up.

The key-value store also supports durability. Each trunk has a mirrored backup file on multiple remote servers to ensure high availability. Each operation on the trunk's memory space, including defragmentation, marks the modified region as dirty in a skip-list buffer, ordered by the memory address offset of the operation. At certain intervals, a daemon flushes all of the dirty data, with their addresses, to the remote backup servers. The backup servers take the data into a buffer, and another daemon takes data from the buffer and writes it to disk asynchronously. With the defragmentation approach above, each defragment operation also leads to updating all of the copied cells; that is a huge I/O performance issue.
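The dirty-region buffer can be sketched with an ordered map standing in for the skip list (`DirtyTracker` and its methods are illustrative names):

```rust
use std::collections::BTreeMap;

// Every write marks its memory range as dirty, keyed and ordered by
// offset; a backup daemon periodically drains the buffer and ships
// the ranges to the backup servers.
struct DirtyTracker {
    // offset -> length of the dirty range; BTreeMap keeps the ranges
    // sorted by offset, like the skip list described above.
    ranges: BTreeMap<usize, usize>,
}

impl DirtyTracker {
    fn new() -> DirtyTracker {
        DirtyTracker { ranges: BTreeMap::new() }
    }

    fn mark(&mut self, offset: usize, len: usize) {
        let entry = self.ranges.entry(offset).or_insert(0);
        if *entry < len {
            *entry = len; // keep the widest range seen at this offset
        }
    }

    // Hand all dirty ranges to the backup daemon and reset the buffer.
    fn drain(&mut self) -> Vec<(usize, usize)> {
        let out = self.ranges.iter().map(|(o, l)| (*o, *l)).collect();
        self.ranges.clear();
        out
    }
}

fn main() {
    let mut t = DirtyTracker::new();
    t.mark(4096, 128);
    t.mark(0, 64);
    assert_eq!(t.drain(), vec![(0, 64), (4096, 128)]); // ordered by offset
    assert!(t.drain().is_empty()); // buffer is clean after a flush
}
```

A real skip list would allow concurrent marking without a global lock; the ordered-map sketch only shows the offset-ordered drain that the backup daemon relies on.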

Then I read the paper from Stanford to find a more feasible solution. The authors provide full details of log-structured memory management. The main feature of this approach is organizing memory into append-only segments. In the cleaning process, the cleaner produces fully empty segments for reuse. Every operation, including add, delete, and update, is an appended log entry, and the system maintains a hash table to locate cells in segments. The cleaner is much more complex than the defragmentation in my key-value store, but it makes it possible to minimize memory copying and to clean in parallel. The downside is that the maximum size of each cell is limited by the 1MB segment size, where my current approach can provide a maximum of 2GB per cell (even though cells that big are not realistic), and maintaining segments costs memory: as the memory space grows, the number of segments increases accordingly. I prefer to give the log-structured approach a try, because defragmentation will become the major bottleneck in the current design, and large objects are not essential in the current use cases.

Managing memory is far more complex than I first thought. Other systems, like the JVM and other language runtimes, took years to improve their GCs, and each improvement may reduce pauses by only 1-2x. I cannot tell which solution is best without proper tests, because each seems specialized for certain use cases. I plan to do more research on this topic in the future.

Building a Graph Engine -- Key-Value Store -- Nested Schema

In this post I explained why it is necessary to define a schema before inserting any data into the key-value store. The previous solution did achieve the goal of compact arrangement. A user can define a map schema for a student record in a DSL like this:

[[:id :long] [:name :text] [:school :text] [:enrollment :int-array]]

Then map data like this will fit the schema:

{:id 1, :name "Jack", :school "DHU", :enrollment [2010 2011 2012 2013]}

Most relational databases that need schema or table definitions work like this: users define a one-layer schema whose data types are primitives, or arrays at most. If we need to represent a map like this:

{:id 1, :name "Jack", :school "DHU",
 :scores {:developing 99
          :data-structure 87
          :math 60}}

the 'scores' field must be placed in another table, leaving only an identifier in the student table.

But sometimes we need to embed additional information in one document rather than use an identifier to link two items. For example, in Morpheus, we need to define an array whose items each have a long field and an id-typed field, for edge lists[1]. In such cases we need to define the schema in a nested manner. In the student record example, we can describe the schema with scores like this:

[[:id :long] [:name :text] [:school :text]
 [:scores [[:developing :short]
           [:data-structure :short]
           [:math :short]]]]

This schema fits the student record with scores we have just seen. It looks like what MongoDB was designed for, but without storing the key names and the structure of each field and its children. We can use this to save even more space by not repeatedly describing in memory what the data represents.

The key-value store also supports arrays of nested schemas, and arrays inside nested schemas. This allows almost infinite combinations, which should be flexible enough to model any data structure.

There are two parts to the actual implementation of this feature: the writer and the reader.

In the writer, a planner is responsible for taking the data and the schema and compiling them into a flattened sequence of instructions containing the field data, the data type (encoder), and the data's byte length. Then we just write the field data according to those instructions, one after another, without any gaps or tags.
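A sketch of such a planner, using a made-up schema model with just two primitive types and nesting (the real store's DSL and encoders are richer):

```rust
// Recursively walk a nested schema and flatten it into a sequence of
// (field path, byte length) write instructions, so the actual write
// pass is a straight loop with no gaps or tags.
enum FieldType {
    Long,                                  // 8 bytes
    Short,                                 // 2 bytes
    Nested(Vec<(String, FieldType)>),      // a sub-schema
}

fn plan(schema: &[(String, FieldType)], prefix: &str, out: &mut Vec<(String, usize)>) {
    for (name, ftype) in schema {
        let path = if prefix.is_empty() {
            name.clone()
        } else {
            format!("{}.{}", prefix, name)
        };
        match ftype {
            FieldType::Long => out.push((path, 8)),
            FieldType::Short => out.push((path, 2)),
            FieldType::Nested(inner) => plan(inner, &path, out),
        }
    }
}

fn main() {
    let schema = vec![
        ("id".to_string(), FieldType::Long),
        (
            "scores".to_string(),
            FieldType::Nested(vec![
                ("math".to_string(), FieldType::Short),
                ("developing".to_string(), FieldType::Short),
            ]),
        ),
    ];
    let mut instructions = Vec::new();
    plan(&schema, "", &mut instructions);
    // Flattened write plan: field path and byte length, in order.
    assert_eq!(
        instructions,
        vec![
            ("id".to_string(), 8),
            ("scores.math".to_string(), 2),
            ("scores.developing".to_string(), 2),
        ]
    );
}
```

Once the plan is flat, the writer never recurses again: it emits each field's bytes back to back, which is exactly why no per-field tags or gaps are needed.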

The reader takes the location of a cell, finds the cell's schema, and reconstructs the data from it. Both the writer and the reader walk the schema recursively.

There is a downside to this feature: its performance is lower than the one-layer schema, and it is harder to optimize (precompilation; I will cover this later). But it does provide both memory efficiency and flexibility.

For more exact details, please read this.

Building a Graph Engine -- Modeling Graphs: Vertices, Edges and their relations

The graph engine is designed to build graph data models on top of the key-value schema, because the engine uses identifier lists in a vertex to make connections to other vertices or edges. Fetching data by the identifier of a key-value pair from a hash map is more efficient than any traditional search in an array. Every unit, including edges and vertices, is stored as a key-value pair in the memory store in a very compact way (see this article). In data exploration, it should deliver much better performance than traditional relational databases. It is also flexible, and able to model hypergraphs, in which one edge may have multiple inbound and outbound vertices.

For performance, how to represent edges inside a vertex is an interesting topic. The problem is that there may be many different kinds of edges. For example, when we model a school roster, one student has multiple kinds of relations to other entities: his class, his friends, his teachers, his scores. Usually we only need one kind of relation at a time, such as how many friends he has. If we put every edge into one list, there is too much overhead in extracting all the edges linked to the vertex just to find one type. One solution is to define the possible edge types in the schema, the way Trinity models its movie-and-actor graph. But as the dataset grows, a vertex may gain new kinds of relations and edges to other vertices over time, and modeling edges this way leads to altering the vertex cell schema frequently, which in turn means rebuilding the whole dataset for that schema in the store. I studied Neo4j and OrientDB, and finally decided to develop my own approach: store an array map in the vertex for each type of edge, mapping to the identifier of an edge list cell. The edge list cell contains all the edges of that type related to the vertex. A vertex contains three such array maps, for inbound, outbound, and undirected edges. It can also contain more fields for other data defined in the vertex schema.
An edge list cell has only one field: the list of identifiers to other vertices or edges. For best performance, the engine allows entries to be either plain vertex ids (for simple edges) or edge/hyperedge cells carrying an additional document id for more complex models. The engine can determine the type of a cell from its schema id in the store.
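The layout described above might look like the following Rust sketch. This is my illustration of the structure, not the engine's actual code; all names and id types are assumptions.

```rust
use std::collections::HashMap;

type CellId = u64;
type SchemaId = u32;

// Each vertex keeps one map per direction, from the edge type's schema
// id to the id of an edge list cell. Looking up one relation (say,
// "friends") touches only that one list, and adding a brand-new edge
// type never alters the vertex schema.
struct Vertex {
    id: CellId,
    inbound: HashMap<SchemaId, CellId>,
    outbound: HashMap<SchemaId, CellId>,
    undirected: HashMap<SchemaId, CellId>,
    // ...plus whatever fields the vertex schema defines.
}

// An entry in an edge list cell: either a plain vertex id for a simple
// edge, or the id of an edge/hyperedge cell that carries its own
// document. The store tells them apart by the cell's schema id.
enum Link {
    Vertex(CellId),
    EdgeCell(CellId),
}

// The edge list cell's single field.
struct EdgeListCell {
    links: Vec<Link>,
}
```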

After designing this structure for the graph model, I found that the store still had limitations with compound types like the array map. I needed to dedicate more time to improving the key-value store; supporting nested schemas would suit my needs.

Building a Graph Engine -- Key-Value Store -- Defragmentation

For memory efficiency, the key-value store behind the graph engine does not rely on the programming language's native containers to hold cell data. Instead, I use a constant-size memory space to store all of the data.

In the beginning this works well: as records are added to the space, the cells are packed tightly, and their locations and ids are stored in another HashMap as an index. But deleting any cell before the last one leaves gaps between cells. Because I use an append header to allocate positions for new cells, unless those gaps are reclaimed and the append header is reset, the header's location will keep increasing until it pops out of the store boundary.
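The problem can be demonstrated with a minimal sketch of append-only allocation in Rust. This is an illustration under my own simplified assumptions (lengths only, no actual bytes), not the store's real API.

```rust
use std::collections::HashMap;

struct Store {
    capacity: usize,
    append_header: usize,                // next offset to allocate from
    index: HashMap<u64, (usize, usize)>, // cell id -> (location, length)
}

impl Store {
    fn new(capacity: usize) -> Self {
        Store { capacity, append_header: 0, index: HashMap::new() }
    }

    // New cells are always placed at the append header; gaps left by
    // deleted cells are never reused until defragmentation runs.
    fn put(&mut self, id: u64, len: usize) -> Option<usize> {
        if self.append_header + len > self.capacity {
            return None; // the header would pop out of the boundary
        }
        let loc = self.append_header;
        self.append_header += len;
        self.index.insert(id, (loc, len));
        Some(loc)
    }

    // Deleting only drops the index entry, leaving a gap in the space;
    // the append header does not move back.
    fn delete(&mut self, id: u64) {
        self.index.remove(&id);
    }
}
```

Ten puts of 10 bytes fill a 100-byte store; deleting them all still leaves no room for an eleventh put, which is exactly the situation defragmentation has to fix.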

Defragmentation is a classic procedure in databases and operating systems. It was my first time building such a system, though, and I was stuck on this part at first, but after some drawing the idea became pretty clear. I was able to finish the job on Chinese New Year's Eve (yes, I write code even while others are celebrating on vacation).

The diagram I drew is shown above. There is a concurrent skip list map in each trunk to record the fragments. When adding a fragment, a check detects adjacent fragments and merges them into one, as seen in the 4th and 5th rows of the diagram. Defragmentation moves each data cell left to the tail of the nearest preceding cell and marks the new fragment the movement creates, like the first fragment in the 2nd row. In my case, the move procedure copies the original memory block and marks the new fragment for the next turn of the loop. After one round of the defragmentation loop, all fragments have been moved and merged at the tail of the data cells, as in the last row of the diagram. Then we can remove the last fragment and move the append header to its start position.
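One round of that loop can be sketched as follows. This is my single-threaded illustration in Rust using an ordered `BTreeMap` in place of the concurrent skip list map, and it assumes cells and fragments tile the used space with adjacent fragments already merged; the real store moves actual bytes and updates the id index as well.

```rust
use std::collections::BTreeMap;

// `cells` and `fragments` both map a start offset to a length.
fn defragment(
    cells: &mut BTreeMap<usize, usize>,
    fragments: &mut BTreeMap<usize, usize>,
    append_header: &mut usize,
) {
    loop {
        // Take the leftmost fragment, if any remain.
        let frag = fragments.iter().next().map(|(&s, &l)| (s, l));
        let (frag_start, frag_len) = match frag {
            Some(f) => f,
            None => break,
        };
        // Find the first cell after the fragment (adjacent, under the
        // tiling assumption).
        let next_cell = cells.range(frag_start..).next().map(|(&s, &l)| (s, l));
        match next_cell {
            Some((cell_start, cell_len)) => {
                // Move the cell left into the fragment's place (a real
                // store would memcpy the bytes and fix the index here).
                fragments.remove(&frag_start);
                cells.remove(&cell_start);
                cells.insert(frag_start, cell_len);
                // The movement opens a same-sized fragment just behind
                // the moved cell; merge it with an adjacent fragment,
                // like rows 4 and 5 in the diagram.
                let new_start = frag_start + cell_len;
                let mut new_len = frag_len;
                if let Some(adj) = fragments.remove(&(new_start + new_len)) {
                    new_len += adj;
                }
                fragments.insert(new_start, new_len);
            }
            None => {
                // No cell after the fragment: everything is compacted.
                // Drop the last fragment and pull the append header
                // back to its start, like the last row in the diagram.
                fragments.remove(&frag_start);
                *append_header = frag_start;
            }
        }
    }
}
```

Each turn either shifts the leftmost fragment to the right (possibly merging it with the next one) or removes the final tail fragment, so the loop terminates with a fully packed space.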

My implementation is about 50 lines of Clojure code. I also wrote tests, and they pass under every circumstance I can imagine. You can check out my code here.