Distributed ledgers are inefficient relative to a centralized computing platform.
DLTs sacrifice the efficiency of a single party processing transactions and managing application state for the decentralization and redundancy of having multiple parties perform those duties in a collaborative and consistent manner. While a centralized model will be more efficient, that single party has an undesirable ability to censor transactions, corrupt data, or charge excessive fees – DLTs mitigate those risks. As a concrete example of the redundancy, the default is that every consensus node executes every smart contract call. In a public ledger with many nodes, this extravagant redundancy ensures that the results of the execution (some change to the state managed by the contract) can be relied upon.

Consider a public ledger that maintains the state of some set of business data across a network of computers. Transactions are submitted to the computers of the ledger and cause the state to change. At any given moment in time, the different computers of the network should agree on the state. At the highest level, a DLT progresses via the following steps:

1. Receive transactions from clients
2. Distribute transactions amongst network nodes
3. Place transactions into a consensus order
4. Execute transactions in the order of #3
5. Store the new state created by the transactions
6. Respond to clients with the result of transaction execution
7. Monitor for discrepancies
8. Repeat 1-7

The simplest model is that all nodes perform all of the above steps for each and every transaction, as in the sketch below.
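To make that baseline concrete, here is a minimal sketch of the generic loop in Java. All of the types and names (Node, State, LedgerNodeLoop) are hypothetical illustrations of the steps above, not any particular ledger's API.

```java
import java.util.List;

record Transaction(String payload) {}

interface Node {
    List<Transaction> receiveFromClients();   // step 1: receive from clients
    void gossip(List<Transaction> txs);       // step 2: distribute to peers
    List<Transaction> consensusOrder();       // step 3: agree on an order
}

final class State {
    void apply(Transaction tx) { /* step 4: deterministic state transition */ }
    void persist()             { /* step 5: store the new state */ }
}

final class LedgerNodeLoop {
    private final Node node;
    private final State state = new State();

    LedgerNodeLoop(Node node) { this.node = node; }

    // One round of the generic DLT loop; in the simplest model,
    // every node of the network runs this same loop for every transaction.
    void round() {
        node.gossip(node.receiveFromClients());        // steps 1-2
        for (Transaction tx : node.consensusOrder()) { // step 3
            state.apply(tx);                           // step 4
        }
        state.persist();                               // step 5
        // steps 6-7 (respond to clients, monitor for discrepancies) omitted
    }
}
```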
Some of the inefficiency of DLTs can be mitigated if we relax this assumption. For instance, sharding a DLT presumes that a transaction submitted by a client to a node (as in #1 above) is distributed not to all other nodes (as in #2 above), but only to a smaller subset of nodes. Cross-shard communication is required only if a transaction impacts the state of different shards. Sharding can, for some transactions, enable a degree of parallel processing – two different nodes can process two different transactions at a given moment in time.

Layer 2 solutions attempt to mitigate the inefficiencies in a different way. Rather than clients submitting each and every transaction to the network (as in #1), clients periodically send only transactions that reflect an aggregate of multiple atomic transactions. In this model, clients engage with each other directly by sending transactions back and forth, maintain a local state reflecting the results of those transactions between themselves, and settle to the network only when necessary. The DLT still performs steps 2-7, albeit only for the aggregate transactions (and the small number necessary to set up and tear down the local state). Cryptoexchanges are a form of Layer 2 solution, as exchanges generally maintain their own database of customer account balances (step #5) off-chain. Some proposals for more efficient distributed computing move the execution of #4 to a different set of computers and then store and verify the results of that execution on the DLT – as in steps #6 and #7.

The fundamental step in the above is #3 – establishing a consensus order for transactions. Different DLTs are distinguished by the specific algorithm by which step #3 occurs, and it is generally this step that determines the overall security, latency, and speed of the DLT. If step #3 is slow, then so is the full sequence. A significant efficiency is possible if we recognize that it is this step that most requires the decentralized trust of the full DLT, and that the other steps can be moved off-ledger. But critically, trust can still be ascribed to the other steps even when they are performed by a smaller set of computers, provided the order of #3 can be confidently determined and the results of transaction execution can be verified.

The Hedera Consensus Service (HCS) enables a model where only step #3 happens on the public Hedera DLT. Because ordering transactions is the core functionality of the underlying hashgraph algorithm and places only a minimal processing burden on the Hedera network, this step can be performed at extremely high throughput and low latency. All of the other steps occur either on a mirror node, or on a set of computers that act as clients of mirror nodes – together making up what we refer to as an 'appnet'. Critically, even though an appnet may comprise a relatively small number of computers (lowering the bar for a malicious node attempting to corrupt these steps), it can effectively inherit the trust of the larger Hedera network. If the honest participants of this smaller network have confidence that the transaction order of #3 is valid, then they can be confident that any dishonest appnet peers will not be able to corrupt or skew the processing of those transactions – even if that execution does not occur on Hedera. Whereas Hedera requires that more than two-thirds of mainnet nodes be honest, the appnet requires only a single honest member.

Consider an appnet whose participants want to maintain some set of business data between themselves. For instance, StockApp is an application that matches bids and asks in a stock market. The participants want the privacy and efficiency of a small network, but the trust and security of a large public network. HCS enables that combination. They create an HCS "topic" for their app on Hedera - a separate channel just for messages related to StockApp. StockApp participants will send messages to this topic, encrypting those messages if they wish so that the contents will not be visible to the Hedera nodes or the general public. Each node of the StockApp appnet runs a Hedera mirror node. Mirror nodes can be thought of as 'read only' nodes - they receive transactions from the mainnet and can calculate the consensus order for those transactions, but do not write to the mainnet. Mirror nodes do not slow down the mainnet, no matter how many are added. The generic sequence of steps above is modified to become:

1. StockApp clients create messages describing their ask or bid
2. StockApp clients submit the messages to Hedera within the StockApp topic
3. The Hedera network places the messages in order within the StockApp topic
4. Mirror nodes, including those acting as StockApp nodes, receive the ordered messages
5. The StockApp nodes execute the transactions in the order of #3, matching up the appropriate bids and asks; other mirror nodes ignore the messages in the StockApp topic
6. StockApp nodes store the new state created by the execution of the ordered messages
7. StockApp nodes respond to clients with the result of transaction execution
8. Monitor for discrepancies
9. Repeat 1-8

After Alice submits a bid to Hedera as an HCS message, within a few seconds that bid will flow out to all mirror nodes (including those being run by the StockApp nodes). The mainnet will have given the bid a consensus timestamp and an order within the StockApp topic. The StockApp nodes extract only those transactions of interest (those in the StockApp topic) and place them in order in preparation for execution.
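In terms of real tooling, steps 1-4 map directly onto the consensus service APIs exposed by the Hedera SDKs. The following is a minimal sketch using the Hedera Java SDK; the operator account, key, and message format are placeholders for illustration, and a production appnet would encrypt its messages and manage keys properly.

```java
import com.hedera.hashgraph.sdk.*;
import java.nio.charset.StandardCharsets;

public class StockAppTopicDemo {
    public static void main(String[] args) throws Exception {
        Client client = Client.forTestnet();
        client.setOperator(
            AccountId.fromString("0.0.12345"),   // placeholder operator account
            PrivateKey.fromString("302e0201...") // placeholder key, not a real one
        );

        // Create the StockApp topic (done once, at appnet setup).
        TopicId topicId = new TopicCreateTransaction()
            .setTopicMemo("StockApp order flow")
            .execute(client)
            .getReceipt(client)
            .topicId;

        // Steps 1-2: a client submits a (possibly encrypted) bid to the topic.
        new TopicMessageSubmitTransaction()
            .setTopicId(topicId)
            .setMessage("BID AAPL 100 @ 185.20")
            .execute(client)
            .getReceipt(client);

        // Step 4: a StockApp node subscribes through a mirror node and receives
        // messages already stamped with consensus time and order (step 3).
        new TopicMessageQuery()
            .setTopicId(topicId)
            .subscribe(client, message -> System.out.println(
                message.consensusTimestamp + " #" + message.sequenceNumber + " "
                    + new String(message.contents, StandardCharsets.UTF_8)));

        Thread.sleep(10_000); // keep the subscription alive briefly for the demo
    }
}
```

The subscription in the last step is served by a mirror node, so messages arrive already bearing the consensus timestamp and sequence number assigned by the mainnet.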
Each StockApp node executes the bids (and asks) in the same order and applies the same matching rules, and so all nodes agree on the results of that execution. For instance, they agree that Alice's bid was matched against Carol's ask. If Alice's bid receives an earlier position in the StockApp topic order than Bob's, then Alice's will be processed before Bob's by the honest nodes of StockApp – possibly resulting in Alice getting a slightly lower price for the stock than Bob. So it is very important that the ordering be fair.
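To illustrate why identical rules applied to an identical order yield identical results, here is a hypothetical sketch of a deterministic price-time priority matching engine of the kind each StockApp node might run. None of this is prescribed by HCS; the engine, its types, and its tie-breaking rules are assumptions made for the example.

```java
import java.util.*;

record Order(long consensusSeq, String trader, boolean isBid, long priceCents, int qty) {}
record Match(Order bid, Order ask, long priceCents, int qty) {}

final class MatchingEngine {
    // Bids: highest price first; ties broken by earlier consensus order.
    private final PriorityQueue<Order> bids = new PriorityQueue<>(
        Comparator.comparingLong((Order o) -> -o.priceCents())
                  .thenComparingLong(Order::consensusSeq));
    // Asks: lowest price first; ties broken by earlier consensus order.
    private final PriorityQueue<Order> asks = new PriorityQueue<>(
        Comparator.comparingLong(Order::priceCents)
                  .thenComparingLong(Order::consensusSeq));

    // Must be called with orders in consensus order (step #3).
    List<Match> submit(Order o) {
        List<Match> matches = new ArrayList<>();
        (o.isBid() ? bids : asks).add(o);
        while (!bids.isEmpty() && !asks.isEmpty()
                && bids.peek().priceCents() >= asks.peek().priceCents()) {
            Order bid = bids.poll(), ask = asks.poll();
            int qty = Math.min(bid.qty(), ask.qty());
            // Execute at the resting (earlier-ordered) side's price.
            long price = bid.consensusSeq() < ask.consensusSeq()
                    ? bid.priceCents() : ask.priceCents();
            matches.add(new Match(bid, ask, price, qty));
            // Re-queue any unfilled remainder, keeping its original priority.
            if (bid.qty() > qty) bids.add(new Order(bid.consensusSeq(), bid.trader(), true, bid.priceCents(), bid.qty() - qty));
            if (ask.qty() > qty) asks.add(new Order(ask.consensusSeq(), ask.trader(), false, ask.priceCents(), ask.qty() - qty));
        }
        return matches;
    }
}
```

Because the only inputs are the consensus-ordered messages, and the engine has no randomness or node-local state, every honest node that runs it computes identical matches.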
Let's examine why Alice would trust that a bid she submitted was being fairly treated. For Alice, the fundamental questions are:

1. Can she trust that her bid received a fair timestamp and a fair position in the order of the StockApp topic?
2. Can she trust that the StockApp nodes executed her bid correctly, in that order?

The first question is answered by Alice's trust in the fairness and security of the Hedera network. Her bid does not just receive a timestamp and order within the StockApp topic; in addition, a 'state proof' can be created and made available to all parties. The state proof for the transaction that carried Alice's bid gives a cryptographically secure and portable representation of the timestamp and order for that transaction – as agreed by the nodes of the Hedera mainnet. As Alice trusts the Hedera mainnet (its governance, its algorithm, the software the nodes run, etc.), she trusts the state proofs for her bids that come from the nodes of the Hedera mainnet. Similarly, any other party will be able to examine a state proof for Alice's bids and, based on their trust in Hedera, cryptographically validate the timestamp and order of those bids. These state proofs are persistent – they can be stored away, by either Alice or other participants, in anticipation of future arbitration and dispute resolution.

The second question bears on whether the software executing the transactions (matching the bids and asks in this case) is performing as expected. Alice can reassure herself of this by comparing the results from different StockApp nodes. Alternatively, she could maintain a copy of the valid software and occasionally audit the StockApp nodes to see whether they handled the correct transactions, in the correct order, with the correct results (see the sketch at the end of this section). There could even be a third-party auditor that performed that function periodically. If even a single StockApp node is honest, it can prove its honesty to Alice or to auditors, in real time or at a later date. And if Alice runs a StockApp node herself, and trusts Hedera, then she can have 100% trust in StockApp, even if all of the other StockApp nodes are malicious. In this way, StockApp inherits the trust of Hedera, without burdening Hedera with the task of running smart contracts or storing the StockApp state data.

Layer 2 solutions propose improving DLT scale by effectively batch-processing transactions off-chain whilst still maintaining the resulting state on-chain. In contrast, in the Hedera Consensus Service model, transaction execution (step #4) and state storage (step #5) occur on an off-ledger appnet – without the inefficiencies inherent in the duplication and redundancy of a public DLT (even one as performant as Hedera). The public Hedera network is dedicated to that which the underlying hashgraph algorithm is optimized for – securely and fairly ordering transactions (step #3). Without the burden of execution and storage, the nodes of the Hedera mainnet can perform that step with high throughput and low latency.
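Returning to the audit idea above: one simple, entirely hypothetical way for StockApp nodes to make their execution comparable is a running hash over (consensus order, execution result) pairs. An auditor who replays the same consensus-ordered messages through the valid software should arrive at the same digest as every honest node; the scheme below is an illustration, not part of HCS itself.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

final class ExecutionAudit {
    private byte[] runningHash = new byte[32];

    // Call once per executed message, in consensus order, with that
    // message's execution result (e.g. a serialized Match).
    void record(long consensusSeq, String executionResult) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        sha256.update(runningHash);
        sha256.update(Long.toString(consensusSeq).getBytes(StandardCharsets.UTF_8));
        sha256.update(executionResult.getBytes(StandardCharsets.UTF_8));
        runningHash = sha256.digest();
    }

    // Publish or compare this digest across StockApp nodes.
    String digest() {
        return HexFormat.of().formatHex(runningHash);
    }
}
```

Two nodes reporting different digests for the same span of the topic immediately reveal that at least one of them executed incorrectly, and because the message order is fixed by Hedera (and provable via state proofs), an arbiter can replay the messages through the valid software to determine which node was honest.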