Wavelet introduces a novel family of directed-acyclic-graph (DAG)-based consensus protocols. It is designed to alleviate the numerous scalability dilemmas predicated in decentralized ledgers, such as those that utilize either the longest chain rule, or some variant of stake delegation or committee election scheme.
Wavelet guarantees irreversibility of transactions, and a consistent total ordering of transactions without any compromise on safety, performance or liveness; enabling features such as transaction graph pruning and Turing-complete smart contract execution. The safety and liveness of Wavelet is solely dependent on the safety and liveness of any arbitrarily chosen Byzantine fault-tolerant binary consensus protocol executed within the Wavelet framework. Unlike prior works, Wavelet requires minimal configuration of a minuscule set of system parameters to work in a large range of practical network settings where communication is only partially synchronous.
All distributed systems suffer from what is known as the scalability dilemma. The dilemma forces system designers to make tradeoffs between performance, consistency, and availability; aspects of utmost importance in the creation of practical, resilient, fault-tolerant applications. Needless to say, a particular class of distributed systems, blockchains, suffers from the aforementioned dilemma. Below, we perform a trivial analysis over results from classical computer science literature to describe and analyze a plethora of faults that render numerous modern blockchain constructions impractical.
1.1 The scalability dilemma
Recall the multiple readers, multiple writers problem. There exist n processors all trying to concurrently read and write to some shared resource R. Every additional processor allotted to the system requires an exponential increase in the amount of coordination among all n processors such that they may read or write to resource R.
Now, reposition this problem into a distributed setting where there exists communication latency between every pair of processors. An equivalence to this problem may be established: it is fundamentally difficult to efficiently coordinate a network of nodes to maintain a distributed ledger without introducing exponential communication costs.
Furthermore, accounting for faulty processors when designing a protocol that coordinates multiple readers and multiple writers over resource R introduces significant additional overhead. Throughout this paper, we denote this problem as the scalability dilemma.
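The coordination cost described above can be illustrated with a toy model (our own simplification, not from the paper): suppose every write to the shared resource R must be acknowledged by every other processor before it commits. Even this simplest all-to-all scheme grows superlinearly in the number of processors, and fault-tolerant protocols incur substantially more overhead.

```python
# Toy model of multi-reader/multi-writer coordination cost.
# Assumption (illustrative, not from the paper): each write to the shared
# resource R must be acknowledged by every other processor before it commits.

def coordination_messages(n: int, writes_per_processor: int = 1) -> int:
    """Messages exchanged when each of n processors performs the given number
    of writes, and each write requires a request plus an acknowledgment from
    each of the other n - 1 processors."""
    per_write = 2 * (n - 1)  # one request and one ack per peer
    return n * writes_per_processor * per_write

# Coordination cost grows superlinearly as processors are added:
for n in (2, 4, 8, 16):
    print(n, coordination_messages(n))
```

Doubling the number of processors roughly quadruples the message count in this sketch; any protocol that additionally tolerates faulty processors only amplifies the growth.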
1.2 The longest chain rule does not scale
Consider a network of n nodes. A difficulty system, mechanized as a decentralized random lottery amongst all n nodes, is established. Within a short time interval ∆, a subset F of the n nodes may propose a block of transactions to append to the longest weighted path of a directed-acyclic graph of blocks G shared across the entire network.
We denote this distributed system to follow the longest chain rule . Bitcoin, Ethereum, and several other decentralized ledger architectures are noteworthy examples of distributed systems that follow such a rule.
Within the time interval ∆, there may exist at most |F| − 1 conflicting blocks competing to be appended to the longest weighted path of G, otherwise known as the longest chain. Resolving conflicts between the |F| − 1 blocks requires nodes in F to append additional blocks to the blocks they have proposed. The difficulty system makes this task require exponential time for each individual node ∈ F, rendering conflict resolution arduous.
Put another way, the longest chain rule effectively has n nodes compete for the chance to read and write to a single shared resource R, namely the longest chain, which simply does not scale for large n. Therefore, any distributed system that follows the longest chain rule suffers from the scalability dilemma.
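The fork-choice mechanism at the heart of the longest chain rule can be sketched as follows (a minimal illustration with hypothetical names; real deployments weight chains by accumulated difficulty rather than raw length):

```python
# Minimal sketch of the longest chain rule. Each block references its parent,
# and a node adopts the tip whose path back to the genesis block is longest.
# (Illustrative only; production systems weight paths by total difficulty.)

from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    id: str
    parent: Optional["Block"] = None

def chain_length(block: Block) -> int:
    """Number of edges on the path from `block` back to genesis."""
    length = 0
    while block.parent is not None:
        block = block.parent
        length += 1
    return length

def fork_choice(tips: list[Block]) -> Block:
    """Select the tip of the longest chain among competing forks."""
    return max(tips, key=chain_length)

genesis = Block("genesis")
a1 = Block("a1", genesis)
a2 = Block("a2", a1)       # chain A: genesis -> a1 -> a2
b1 = Block("b1", genesis)  # chain B: genesis -> b1 (a conflicting fork)
print(fork_choice([a2, b1]).id)  # prints "a2": the longer fork wins
```

Note how conflict resolution demands that proposers keep extending their own forks; under a difficulty system, each extension is exponentially expensive, which is precisely the bottleneck described above.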
1.3 Committees improve performance by weakening security
Consider a network of n nodes, with C being a subset of the n nodes, denoted as a committee. The committee is responsible for verifying and processing all transactions within the network.
The methodology for electing the committee within whatever time interval permitted does not matter; whether through stake delegation, verifiable random functions, or decentralized electoral voting schemes. Noteworthy examples of distributed ledgers that utilize committees are Algorand, Tendermint, Ouroboros Praos, Dfinity, and EOS.
Such a scheme may scale variably, given that only communication between all nodes ∈ C is necessary to maintain a distributed ledger. However, should |C| be small, any form of denial-of-service attack or collusion within C would jeopardize the entire network.
Minimizing the risk of such attacks requires scaling up the size of C. However, this contradicts the benefits of maintaining a small committee, given that a larger committee exponentially increases communication complexity. Therefore, any distributed system that relies on a subset of nodes to verify and process all transactions for the entire network suffers from the scalability dilemma.
1.4 A partial ordering of transactions has limited practicality
Notably, several consensus protocols that reach consensus over the replication of a directed-acyclic-graph of transactions across nodes, such as Avalanche, are:
1. only able to guarantee a partial ordering of transactions,
2. unable to guarantee irreversibility of transactions, and
3. susceptible to numerous challenges when seeking to enable new nodes to bootstrap and join the network.
The inability to fulfill these guarantees restricts the plethora of features a ledger may provide. For instance, the lack of a guaranteed total ordering of transactions renders a ledger incapable of being anything more than a payments platform.
It is impossible for a Turing-complete smart contracts platform to be built on top of a ledger that only guarantees a partial ordering of transactions. This is because the sequence in which smart contracts are executed would differ across nodes in the network.
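The divergence problem can be demonstrated in a few lines (a contrived example with hypothetical transaction contents): two nodes hold the same set of transactions, but a partial ordering allows them to apply conflicting writes in different sequences, so their contract state diverges.

```python
# Sketch of why a partial ordering breaks smart contract execution.
# Two nodes replay the same *set* of transactions, but in different orders
# that a partial ordering would both permit; their final states diverge.

def apply(state: dict, txs: list) -> dict:
    """Replay (key, value) write transactions against a contract's state."""
    state = dict(state)
    for key, value in txs:
        state[key] = value
    return state

txs = [("balance", 10), ("balance", 20)]  # two conflicting writes

node_a = apply({}, txs)                   # node A applies them in one order
node_b = apply({}, list(reversed(txs)))   # node B applies the reverse order

print(node_a, node_b, node_a == node_b)   # states diverge: last write wins
```

With a total ordering, both nodes would replay the writes in the same sequence and converge on identical state; without it, deterministic contract execution is unattainable.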
1.5 Network assumptions and protocol guarantees
Definition 1.1. Adversarial model. We assume a powerful, adaptive adversary capable of observing the internal state of every node within the network. The adversary may uniformly drop, reorder, or redirect any amount of messages sent from each node, so long as 1/k of the messages sent from each node successfully get delivered to their intended recipients, with k being some small, fixed security parameter.
Wavelet guarantees both strong safety and liveness assuming that all communication in the network is partially synchronous, with soft probabilistic bounds with respect to k under the prescribed adversarial model. Under these conditions, Wavelet is able to achieve an irreversible and consistent total ordering and application of all virtuous transactions within the network.
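The delivery guarantee of Definition 1.1 can be sketched with a worst-case adversary (our own illustrative model): if the adversary drops at most k − 1 out of every k transmissions, a sender that simply retransmits is guaranteed delivery within k attempts.

```python
# Sketch of Definition 1.1's delivery guarantee under a worst-case adversary.
# Assumption (illustrative): the adversary drops exactly the first k - 1 of
# every k transmissions, the most hostile schedule still delivering 1/k of
# messages; the sender retransmits until the message gets through.

def attempts_until_delivery(k: int) -> int:
    """Worst-case retransmissions needed when only 1/k messages get through."""
    attempts = 0
    for i in range(k):
        attempts += 1
        delivered = (i == k - 1)  # adversary drops the first k - 1 sends
        if delivered:
            break
    return attempts

# Delivery latency degrades only linearly in the security parameter k:
for k in (1, 2, 5):
    print(k, attempts_until_delivery(k))
```

Because retransmission cost is bounded by k, a small fixed security parameter keeps worst-case message delay within the soft probabilistic bounds that partial synchrony requires.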