Question

From the NimbusDB website:

Our distributed non-blocking atomic commit protocol allows database transaction processing at any available node.

They claim that they can guarantee ACID transactions in a distributed environment, and provide all of: consistency, high availability and partition tolerance. As far as I can tell from the text, their "secret" for overcoming the limitations of the CAP theorem is some sort of "predictable and consistent" way to manage network partitions.

I'm wondering if anyone has any insights or more information on what's behind this claim.


Solution

It's been a while since this post was written and since then NuoDB has added a lot to their product marketing and technical resources on their website.

They've achieved data durability and ACID compliance by using their Durable Distributed Cache system, which they now describe as an "Emergent Architecture" (pp. 6-7):

The architecture opens a variety of possible future directions including “time-travel”, the ability to create a copy of the database that recreates its state at an earlier time; “cloud bursting”, the ability to move a database across cloud systems managed by separate groups; and “coteries” a mechanism that addresses the CAP Theorem by allowing the DBA to specify which systems survive a network partition to provide consistency and partition resistance with continuous availability.
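NuoDB hasn't published the algorithm behind "coteries", but the description sounds close to the classic coterie/quorum schemes from the distributed-systems literature, where the DBA predeclares which groups of nodes may keep serving after a partition. Purely as an illustration (none of the names below are NuoDB's API, and this is not their actual mechanism), a survivor-set check might look like this:

```python
# Hypothetical sketch of a coterie-style partition-survival check.
# A coterie is a set of node groups in which any two groups intersect,
# so at most one side of a network partition can contain a full group.
from typing import Iterable, Set


def side_survives(coterie: Iterable[Set[str]], reachable: Set[str]) -> bool:
    """True if the partition containing `reachable` may keep serving transactions."""
    return any(group <= reachable for group in coterie)


# A DBA-specified coterie over three nodes; every pair of groups intersects.
coterie = [
    {"te1", "sm1"},
    {"te2", "sm1"},
    {"te1", "te2"},
]

partition_a = {"te1", "sm1"}   # this side fully contains a group
partition_b = {"te2"}          # this side does not

print(side_survives(coterie, partition_a))  # True  -> stays available
print(side_survives(coterie, partition_b))  # False -> refuses new transactions
```

Because every group in a proper coterie intersects every other group, at most one side of a partition can pass the check, which is what would let the surviving side keep serving consistent transactions.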

From the How It Works page:

Today’s database vendors have applied three common design patterns around traditional systems to extend them into distributed scale-out database systems. These approaches – Shared-Disk, Shared-Nothing and Synchronous Commit - overcome some of the limitations of single-server deployments, but remain complex and prone to error.

By stepping back and rethinking database design from the ground up, Jim Starkey, NuoDB’s technical founder, has come up with an entirely new design approach called Durable Distributed Cache (DDC). The net effect is a system that scales-out/in dynamically on commodity machines and virtual machines, has no single point of failure, and delivers full ACID transactional semantics.

The primary architectural difference between NuoDB's NewSQL model and that of more traditional RDBMS systems is that NuoDB inverts the traditional relationship between memory and storage, creating an ACID-compliant RDBMS with an underlying design similar to that of a distributed DRAM cache. From the NuoDB Durable Distributed Cache page:

All general-purpose relational databases to date have been architected around a storage-centric assumption. Unfortunately this creates a fundamental problem relative to scaling out. In effect, these database systems are fancy file systems that arrange for concurrent read/write access to disk-based files such that users do not interfere with each other.

The NuoDB DDC architecture inverts this idea, imagining the database as a set of in-memory container objects that can overflow to disk if necessary and can be retained in backing stores for durability purposes.

All servers in the NuoDB DDC architecture can request and supply objects (referred to as Atoms) thereby acting as peers to each other. Some servers have a subset of the objects at any given time, and can therefore only supply a subset of the database to other servers. Other servers have all the objects and can supply any of them, but will be slower to supply objects that are not resident in memory.

NuoDB consists of two types of servers: Transaction Engines (TEs) hold a subset of the objects; Storage Managers (SMs) are servers that have a complete copy of all objects. TEs are pure in-memory servers that do not need to use disks. They are autonomous and can unilaterally load and eject objects from memory according to their needs. Unlike TEs, SMs can’t just drop objects on the floor when they are finished with them; instead they must ensure that they are safely placed in durable storage.

For those familiar with caching architectures, you might have already recognized that these TEs are in effect a distributed DRAM cache, and the SMs are specialized TEs that ensure durability. Hence the name Durable Distributed Cache.
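As a rough mental model only (the class and method names below are invented, not NuoDB code), the TE/SM split described above behaves like a bounded LRU cache in front of a durable store: a TE can eject atoms at will, while an SM must keep every atom it accepts in durable storage before it can forget about it:

```python
# Toy model of the TE / SM roles described above; not NuoDB code.
from collections import OrderedDict


class StorageManager:
    """Keeps a complete copy of every atom; never drops one before it is durable."""

    def __init__(self):
        self._durable = {}           # stand-in for disk-backed storage

    def store(self, atom_id, data):
        self._durable[atom_id] = data

    def load(self, atom_id):
        # Slower path: the SM can always supply the atom, even if not in memory.
        return self._durable[atom_id]


class TransactionEngine:
    """Pure in-memory peer: caches a subset of atoms and may eject them at will."""

    def __init__(self, sm, capacity=2):
        self.sm = sm
        self.cache = OrderedDict()   # LRU cache: atom_id -> data
        self.capacity = capacity

    def get(self, atom_id):
        if atom_id in self.cache:              # fast path: atom is resident
            self.cache.move_to_end(atom_id)
            return self.cache[atom_id]
        data = self.sm.load(atom_id)           # fetch from a peer that has it
        self._admit(atom_id, data)
        return data

    def put(self, atom_id, data):
        self._admit(atom_id, data)
        self.sm.store(atom_id, data)           # durability is the SM's job

    def _admit(self, atom_id, data):
        self.cache[atom_id] = data
        self.cache.move_to_end(atom_id)
        if len(self.cache) > self.capacity:    # TE unilaterally ejects the oldest atom
            self.cache.popitem(last=False)


sm = StorageManager()
te = TransactionEngine(sm, capacity=2)
te.put("a1", {"row": 1})
te.put("a2", {"row": 2})
te.put("a3", {"row": 3})   # "a1" is ejected from the TE cache
print(te.get("a1"))        # still served, re-fetched from the SM
```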

They also publish a technical white paper that deep-dives into the sub-system components and the way they work together to provide an ACID-compliant RDBMS with most of the performance of a NoSQL system (NOTE: registration on their site is required to download the white paper). The general gist is that they provide an automated network cluster partitioning system that, when combined with their persistent storage system, addresses the concerns raised by the CAP Theorem.

There are also a lot of informative technical white papers and independent analysis reports on their technology in their Online Documents Library.

OTHER TIPS

There are multiple possible meanings for the word "consistency". See, e.g., Why is C in CAP theorem not same as C in ACID?

Plus, some level of debate is also possible as to the meaning of the C in 'ACID': it is typically defined in a sense that relates to database integrity ("no transaction shall get to see a database state that violates a declared constraint, modulo the inconsistencies that the transaction has created itself, of course"), but one commenter said he interpreted it as meaning that "the database state as seen (or perhaps better, as effectively used) by any transaction does not change while that transaction is in progress". Paraphrased: transactions are ACID-compliant if they are executing in at least repeatable read mode.

If you take the CAP-C to mean "all nodes see the same data at the same time", then availability is necessarily hampered because while the system is busy distributing the data to the various nodes, it cannot allow any transaction access to (the elder versions of) that data. (Unless of course access to elder versions is precisely what is needed, such as when a transaction is running under MVCC.)

If you take the CAP-C to mean something along the lines of "no transaction can get to see an inconsistent database state", then essentially the same applies, except that it is now the user's update process that should be locking out access for all other transactions.

If you impose a rule to the effect that "whenever a transaction has accessed a particular node N to read some resource R (assuming R could theoretically be accessed on more than one node), then whenever that transaction accesses R again, it must do so on the same node N", then I can imagine this increases your guarantee of "consistency", but you pay for it in availability: if node N goes down, then precisely because of that rule, your transaction can no longer access R even though it could be served from other nodes.
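That trade-off is easy to see in a toy "node affinity" router (all names below are hypothetical, not any product's API): once a transaction is pinned to node N for resource R, N's failure makes R unavailable to that transaction even though another replica could serve it:

```python
# Toy illustration of the "always go back to node N" affinity rule.

class NodeDownError(Exception):
    pass


class AffinityRouter:
    def __init__(self, replicas):
        self.replicas = dict(replicas)   # node -> set of resources it holds
        self.pins = {}                   # (txn, resource) -> node pinned on first access

    def read(self, txn, resource):
        node = self.pins.get((txn, resource))
        if node is None:
            # First access: pick any live node holding the resource and pin it.
            node = next(n for n, held in self.replicas.items() if resource in held)
            self.pins[(txn, resource)] = node
        elif node not in self.replicas:
            # The pinned node is gone; the rule forbids switching nodes,
            # so the read fails even though another replica could serve it.
            raise NodeDownError(f"transaction {txn} is pinned to {node}, which is down")
        return f"{resource} read from {node}"

    def fail(self, node):
        self.replicas.pop(node, None)


router = AffinityRouter({"N1": {"R"}, "N2": {"R"}})
print(router.read("tx1", "R"))   # pins tx1 -> N1
router.fail("N1")
try:
    router.read("tx1", "R")      # still "consistent", but no longer available
except NodeDownError as err:
    print("unavailable:", err)
```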

At any rate, I think that if an institution such as Berkeley comes up with a proof of some theorem, then you're on the safe side if you consider vociferous claims such as the one you mention to be marketing lies.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow