The Fault-Tolerant Consensus Problem and Its Solutions in Blockchains and Distributed Systems

Giovanni Zaarour
Blockchain@USC
May 20, 2023

Abstract

This essay explores the Byzantine Generals Problem (BGP) and Byzantine Fault Tolerance (BFT), which are fundamental concepts in blockchain and distributed systems consensus. After detailing the Two Generals Problem (TGP), we explain how it extends to BGP and describe some of BGP's intricacies. We then survey consensus algorithms including Proof-of-Work (PoW), Proof-of-Stake (PoS), Practical Byzantine Fault Tolerance (PBFT), Tendermint, and Raft, detailing each algorithm's level of fault tolerance and how it is used in various distributed systems.

I. Introduction

In distributed computing, a fundamental idea is that of distributed nodes reaching agreement on information. This idea is called consensus, and it applies to all types of systems. Distributed systems are collections of interconnected computing devices or servers — called nodes — that work together to achieve a common goal. Distributed ledger technology (DLT) is a type of distributed system that emphasizes the storage and synchronization of a shared data ledger across distributed nodes. Blockchain is a type of DLT that utilizes a chain of chronological blocks to record transactions or data entries.

Although these are all relatively new technologies, problems concerning consensus in distributed networks date back to the 1980s. Most notably, the Byzantine Generals Problem, an extension of the Two Generals Problem, outlines the risk of malicious actors causing lapses in the consensus of these distributed systems. Tolerance to these malicious nodes is called Byzantine Fault Tolerance, which is critical in permissionless networks because there is no centralized entity vetting which nodes are allowed to join the network. For a distributed system to be sufficiently decentralized, it should ideally be permissionless so that any party can join. Thus, decentralized networks, and especially permissionless networks, should be BFT (we will see later how permissioned networks do not need BFT consensus). BFT is a fundamental criterion for consensus algorithms such as Proof-of-Work, Proof-of-Stake, PBFT, and Tendermint that address the Byzantine consensus problem in real-life networks. An additional consensus algorithm, Raft, is crash fault tolerant (CFT, meaning it can handle nodes crashing) but not BFT. Raft still deserves recognition, however, as it is popular in permissioned or private distributed systems like server clusters. We will discuss this as well.

II. Two Generals Problem (TGP)

Two armies that plan on invading a city can only succeed if they invade at exactly the same time. Situated on opposite sides of the valley the city sits in, they can only communicate by sending oral messengers directly across the valley, which runs the risk of a messenger being captured. The objective is for the two armies to agree on an attack time. However, army A cannot know that army B has received its message without assuming the messenger was not captured by the city guards. Suppose army B successfully receives the message and sends a confirmation back; army B has no way to confirm that army A received that confirmation. If every confirmation has to be confirmed by the other party, the exchange becomes an infinite loop that never reaches consensus[1]. Logically, neither army can guarantee agreement on a time to attack without assuming that its final messages were not intercepted[2]. Treated as an analogy for computer communication, this illustrates a fundamental problem for any two computers transmitting data over an unreliable channel.

The Transmission Control Protocol (TCP) is a good practical example of the TGP. TCP opens connections with a three-way SYN/ACK handshake and closes them with a four-way FIN/ACK exchange, each message signifying one side's agreement to open or close the connection. However, if any one of these messages fails to arrive, the other side is left with a half-open or half-closed TCP connection. There are many approaches to mitigating the communication issues in the TGP (each with its own efficiency tradeoffs), and some, like TCP's, are sufficient for practical use with an accepted level of uncertainty. The Two Generals Problem itself, however, remains deterministically unsolvable[2]. Thus, it serves more as a thought experiment defining the challenges of computer communication than as a computational problem with a distinct complexity class.

How TCP’s 4-way handshake can fail with respect to TGP [2]
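To make this "accepted level of uncertainty" concrete, here is a minimal Python sketch (not from the source; the loss probability and retry limit are illustrative assumptions) simulating the retransmit-until-acknowledged mitigation. Retries shrink the failure rate but can never eliminate it:

```python
import random

LOSS_PROBABILITY = 0.3  # assumed chance any single messenger is captured

def send(message: str) -> bool:
    """Deliver a message across the lossy valley; True on success."""
    return random.random() > LOSS_PROBABILITY

def agree_on_attack_time(max_retries: int = 10) -> bool:
    """Army A proposes a time and retries until it hears B's ack."""
    for _ in range(max_retries):
        if send("attack at dawn") and send("ack"):
            # Both messages got through -- yet A still cannot prove to B
            # that the ack arrived; it can only act on what it received.
            return True
    return False

# Repeated trials show retries shrink, but never remove, failure.
trials = 10_000
failures = sum(not agree_on_attack_time() for _ in range(trials))
print(f"failed to coordinate in {failures}/{trials} trials")
```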

III. Byzantine Generals Problem (BGP)

The Byzantine Generals Problem was introduced by Leslie Lamport, Robert Shostak, and Marshall Pease in 1982[3]. The problem extends the Two Generals Problem such that there is an arbitrary number of generals (not just two) and there is no concern that messengers can get intercepted or captured. However, some generals may be traitors who try to confuse their peers by colluding to send false messages to loyal generals. The challenge is to find an algorithm that ensures all loyal generals reach consensus (agree on the same plan of action), even without knowing which generals are sending false messages. It turns out that there is no solution to BGP for an arbitrary number of traitors, but the problem is solvable if more than ⅔ of the generals are loyal[3]. More specifically, Lamport et al. show that no solution exists with fewer than 3t + 1 total generals for t colluding traitors, by reducing the problem to its simplest form with just 3 generals. Thus, no solution exists when n ≤ 3t[4].

Just like the Two Generals Problem, this problem can be extrapolated to computer networking. For a network of independent, mutually suspicious computer nodes acting in their own interest (like any permissionless blockchain), consensus can be reached as long as more than ⅔ of the nodes are honest and commit the same data to the network. If ⅓ or more of the nodes are malicious and colluding, consensus can fail (a malicious Byzantine attack). For this essay, let us call the honest computer nodes in a network “good actors” and the malicious nodes “bad actors.”

Later, we will look at modified instances of the Byzantine Generals Problem where data shared between nodes is cryptographically signed, meaning message authenticity is verifiable within the problem. Consequently, the loyal generals do not have to worry about being tricked by traitors' false messages, because those messages will not pass cryptographic checks. This modification of BGP allows for 2t + 1 total generals for t colluding traitors, meaning a bare majority (51%) of honest generals suffices.
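As a hedged illustration of why signatures change the bound, the sketch below signs and verifies "orders" with Ed25519 keys (this assumes the third-party Python cryptography package; the key variables and messages are invented for the example). A traitor who alters a relayed order cannot produce a valid signature for it:

```python
# Assumes: pip install cryptography (third-party package)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each general holds a keypair; public keys are known to all peers.
general_key = Ed25519PrivateKey.generate()
public_key = general_key.public_key()

order = b"attack at dawn"
signature = general_key.sign(order)

# A loyal general verifies a relayed order before acting on it.
try:
    public_key.verify(signature, order)
    print("order is authentic")
except InvalidSignature:
    print("forged or altered order -- discard")

# A traitor who alters the relayed order cannot re-sign it validly.
try:
    public_key.verify(signature, b"retreat at dawn")
except InvalidSignature:
    print("tampered relay detected")
```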

IV. Byzantine Fault Tolerance (BFT)

A Byzantine fault describes a node that sends false messages, omits messages, or sends contradictory messages; we have been calling such nodes bad actors. Byzantine Fault Tolerance (BFT) means that the good actors can still reach consensus on data even in the presence of some Byzantine faults. A system is considered Byzantine Fault Tolerant if, for n nodes with g good actors and t bad actors, a Byzantine agreement can still be reached among the good actors[4]. Using the theorem from BGP, we know that no more than (n-1)/3 of the nodes can be colluding Byzantine faults, or else consensus will fail.

To address the Byzantine Generals Problem, system designers need consensus algorithms that are tolerant to Byzantine faults, and even to malicious Byzantine attacks where the faulty nodes are colluding. This means good actors can still reach consensus and ensure the continuity of data in the presence of bad actors. This raises the question: what is the best way to construct such a consensus algorithm so that it is not simple for bad actors to collude and take over the system?

We will see next how various BFT consensus algorithms are designed using techniques such as redundancy, voting, and cryptography.

V. BFT Blockchain Consensus Algorithms

A. Proof of Work (PoW)

Proof of Work is a cryptographic consensus mechanism of the more general type “Proof of X” (PoX), where nodes must prove they expended some scarce resource X in order to attest their data to the system[5]. The BFT nature of PoX consensus algorithms makes it reasonable for these distributed systems to be permissionless (public) blockchains, meaning any actor can join the network and participate in the consensus mechanism without peers having to worry that the actor may be bad. This permissionless nature is what lets blockchains become decentralized, since the nodes are distributed among the many independent entities that join the network. Thus, PoX and other BFT consensus algorithms are common in public blockchains.

Most notably, PoW is used in the Bitcoin blockchain, which was first created by Satoshi Nakamoto in 2009[6], and was previously used by Ethereum before merging to Proof of Stake (PoS). PoW ensures fault tolerance using cryptographic guarantees. Essentially, all the nodes in the network compete to “mine” a new block of data, which entails finding a block hash that meets the current network requirements. The network requirement is a certain target value that defines the difficulty of mining a block, and it fluctuates periodically. Miners are required to append a nonce to the data block, and continuously increment that nonce until the block hash meets the target value requirement (a certain number of leading zeros in the block hash). This is essentially a competition among nodes to solve a cryptographic puzzle. Once it is solved, all the other nodes in the network can very easily verify the validity of that new block by comparing its hash to the network target value in constant runtime[6]. Sparing the details on how mining difficulty is defined, we can frame proof of work in the context of the BGP:

  • For a general that wants to send a message containing a proposal on when to attack the city, he appends a nonce to the proposal and continuously re-hashes the proposal with a new nonce until he meets the network requirement.
  • Once one of the generals solves the puzzle, they can send the hash to the rest of the generals for verification. The other generals will easily be able to tell if it is fabricated because then the hash will not meet the network requirement.
  • If any bad actor generals decide to send around messages that conflict with the solved proposal hash, it will be cryptographically identifiable by the good actors and they will penalize the bad actor by kicking him out of their peer network[7].
An example of PoW nonce mechanism, where Node 1 succeeds in finding the block hash starting with “0000” to meet the network requirement [8]
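A minimal Python sketch of this nonce-grinding loop follows (hedged: the difficulty and block encoding are illustrative assumptions, not Bitcoin's actual block format). Note the asymmetry the text describes: mining may take many hashes, while verification is a single hash:

```python
import hashlib

DIFFICULTY = 4  # assumed: require 4 leading hex zeros in the block hash

def mine(block_data: str) -> tuple[int, str]:
    """Increment the nonce until the block hash meets the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce, digest
        nonce += 1

def verify(block_data: str, nonce: int, claimed_hash: str) -> bool:
    """Any peer checks the solution with a single hash (constant time)."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest == claimed_hash and digest.startswith("0" * DIFFICULTY)

nonce, block_hash = mine("prev_hash|tx1,tx2,tx3")
print(f"nonce={nonce} hash={block_hash}")
assert verify("prev_hash|tx1,tx2,tx3", nonce, block_hash)
```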

For a bad actor to fabricate block data in a PoW network, they would need at least 51% of the computing power of the entire network in order to dictate their own consensus by winning the cryptographic puzzle race every block. Thus, PoW is Byzantine fault tolerant in all situations unless a single entity or colluding entities accumulate 51% of the blockchain's computing power[7]. In practice, it is highly infeasible for bad actors to accumulate such a large proportion of the computing power, especially in a sufficiently decentralized, distributed system such as Bitcoin.

Evidently, this level of fault tolerance is far better than the ⅓ bound defined in the original BGP. This is because, due to the cryptographic mining model just described, Bitcoin is not an exact 1:1 replica of BGP. Running PoW consensus on this modified version of BGP yields a system design that tolerates far more than (n-1)/3 bad actors, being vulnerable only when bad actors take over a majority of the blockchain's computing power.

A downside to PoW is the sheer amount of energy it takes to sustain the computation. Since so much computing power is needed to solve the cryptographic puzzles involved in block mining, networks like Bitcoin consume ever increasing amounts of energy, especially as mining farms grow.

B. Proof of Stake (PoS)

Proof of Stake (PoS) is another consensus method of the type PoX, but in this case the scarce resource X is capital, which nodes must “stake” as collateral, usually in the form of the blockchain’s native currency. In this case, we will examine how PoS works on Ethereum.

Rather than nodes competing to solve a cryptographic puzzle, the network pseudo-randomly selects a node to be the next block-builder at each time slot. The probability of a node being selected to build and broadcast the next block is based on how much capital a node has staked. The more ETH a validator stakes, the higher their chances of being selected to propose new blocks.
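A hedged sketch of stake-weighted selection is below. Real Ethereum selects proposers using RANDAO-based pseudo-randomness over validators with capped effective balances, so the simple weighted draw and the validator names here are illustrative assumptions only:

```python
import random
from collections import Counter

# Assumed stakes for three hypothetical validators.
stakes = {"validator_a": 32.0, "validator_b": 64.0, "validator_c": 96.0}

def select_proposer(stakes: dict[str, float]) -> str:
    """Pick the next proposer with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

# Over many slots, selection frequency tracks each validator's stake.
tally = Counter(select_proposer(stakes) for _ in range(10_000))
print(tally.most_common())
```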

In every time slot, a committee of validators is also randomly chosen; their votes determine the validity of the block being proposed. Once a new block (or “beacon block”) is broadcast to the network by the selected block builder, all the committee nodes vote on the beacon block's validity by submitting attestations. To verify the validity of the beacon block, validators in the committee go through the following process [9]:

  • Validating the block header: The block header contains information about the block, such as the block number, the timestamp, the hash of the previous block, and the hash of the transactions in the block. The validator nodes check that the information in the header is consistent and that the header hash is valid.
  • Validating the transactions: The validator node verifies transactions in the block by checking that the senders have enough ETH to pay for the transactions and that the signatures on the transactions are valid. Transaction signatures are cryptographically known to be valid because each user signs the transaction with their private key, which can then be verified by checking it with their public key (or Ethereum address).
  • Validating the state transition (re-executing all transactions in the block): The state transition function takes the previous blockchain state (stored in a Merkle tree) and the transactions in the block, and produces a new blockchain state. The validator node verifies that the state transition is valid by re-running this function on the previous state with the transactions in the proposed beacon block. If the outcome of this function matches the proposed new state, then the state change is valid. A simplified sketch of this re-execution check appears after this list.
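Here is that sketch, with loudly labeled simplifications: the state is a flat address-to-balance map and the "state root" is just a hash of its canonical encoding rather than Ethereum's Merkle tree, and the account names and amounts are invented:

```python
import hashlib
import json

def apply_transactions(state: dict, txs: list[dict]) -> dict:
    """Re-execute each transfer against a copy of the previous state."""
    new_state = dict(state)
    for tx in txs:
        assert new_state.get(tx["sender"], 0) >= tx["amount"], "insufficient funds"
        new_state[tx["sender"]] -= tx["amount"]
        new_state[tx["recipient"]] = new_state.get(tx["recipient"], 0) + tx["amount"]
    return new_state

def state_root(state: dict) -> str:
    """Stand-in for a Merkle root: hash of the canonical state encoding."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

prev_state = {"alice": 10, "bob": 5}
block_txs = [{"sender": "alice", "recipient": "bob", "amount": 3}]
proposed_root = state_root({"alice": 7, "bob": 8})  # proposer's claim

# The validator accepts the block only if re-execution reproduces
# the proposer's claimed post-state root.
print(state_root(apply_transactions(prev_state, block_txs)) == proposed_root)
```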

A validator's stake can be slashed for dishonest behavior, such as proposing multiple blocks in a single slot, submitting contradictory attestations, or proposing false beacon blocks. Validators are also penalized for failing to respond when called upon, whether as a block builder or as a committee voter. Because of slashing, it becomes very costly for an entity to coordinate an attack from multiple nodes and have those nodes collude in dishonest behavior. Dishonest behavior becomes economically irrational for validators, as it is more profitable to be honest and benefit from the network's monetary rewards [9].

By using public-key cryptography, all new data and actions being stored on the blockchain are verifiable. This, in combination with the previously mentioned stake slashing, secures the network with crypto-economic security.

When it comes to fault-tolerance, PoS has the same 51% tolerance as PoW. An attacker or colluding attackers would need 51% of staked capital on the network to use their own attestations to prove their version of the blockchain is correct. However, an upside to PoS is that it has far less energy consumption than PoW [9].

Going back to BGP, it follows that adding cryptographic techniques such as those in PoS and PoW actually allows for Byzantine fault tolerance in systems where the number of nodes is n = 2t + 1 rather than n = 3t + 1 (given t Byzantine faults). Even a Byzantine attack coordinated by up to 49% of the network cannot succeed in a distributed system with such cryptographic primitives.

Byzantine faults can be either malicious (intelligent and coordinated) or non-malicious (random or accidental). Interestingly, the rules change a bit for non-malicious faults. Even in non-cryptographic consensus algorithms, 2t + 1 nodes still suffice to survive all non-malicious Byzantine failures. Malicious attacks, however, require cryptographic features such as those in PoX-type algorithms to achieve that same 2t + 1 bound. Without such features, the system must have 3t + 1 nodes to resist a malicious Byzantine attack, as defined in the original BGP. When designing or choosing consensus algorithms, engineers must always assume the plausibility of malicious faults and coordinated attacks.
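To make the two thresholds concrete, here is a minimal sketch (illustrative only) of the smallest network size that tolerates t colluding malicious faults under each assumption:

```python
def min_nodes(t: int, authenticated: bool) -> int:
    """Smallest n tolerating t malicious Byzantine faults."""
    # With unforgeable signatures (as in PoW/PoS-style verification),
    # n = 2t + 1 suffices; without them, the original BGP bound
    # n = 3t + 1 applies.
    return 2 * t + 1 if authenticated else 3 * t + 1

for t in (1, 2, 10):
    print(f"t={t}: with signatures n={min_nodes(t, True)}, "
          f"without n={min_nodes(t, False)}")
```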

C. Practical Byzantine Fault Tolerance (PBFT)

The Practical Byzantine Fault Tolerance (PBFT) consensus algorithm, introduced by Miguel Castro and Barbara Liskov in their 1999 MIT paper[10], is a straightforward consensus mechanism that addresses BGP and is BFT. Hyperledger Sawtooth is one blockchain that utilizes PBFT consensus, and it is the instance we will examine here [11]. Hyperledger is a well-known foundation that produces various open-source distributed ledger technologies.

PBFT operates based on a leader-based voting protocol. In this algorithm, a designated leader node, known as the primary, proposes a block, and other nodes, known as replicas, execute a three-phase voting process called the “normal-case operation” to reach consensus on the block’s validity. The three phases include a pre-prepare phase, prepare phase, and commit phase [11].

During the pre-prepare phase, the primary proposes a block and sends it to the replicas. Upon receiving the proposal, the replicas verify its validity, including checking for duplicate or conflicting requests. Once the replicas validate the proposal, they move to the prepare phase.

In the prepare phase, replicas multicast their prepared votes to all other replicas. These votes serve as endorsements of the proposed block’s validity. To ensure Byzantine fault tolerance, replicas exchange messages and validate signatures, ensuring the authenticity and integrity of the voting process.

Finally, in the commit phase, replicas multicast their commit messages to all other replicas. Once a replica receives matching commit messages from more than two-thirds of the replicas, it considers the block committed. At this point, the block is added to the blockchain, and consensus is achieved[10].

Normal case operation with faulty node 3 and primary node 0 [11]
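The quorum arithmetic can be sketched in a few lines (hedged: real PBFT also tracks views, sequence numbers, and signed messages, all omitted here). With n = 3f + 1 replicas, a replica considers a block committed after seeing 2f + 1 matching commit votes:

```python
def quorum(n: int) -> int:
    """Matching votes needed among n = 3f + 1 replicas: 2f + 1."""
    f = (n - 1) // 3
    return 2 * f + 1

def is_committed(commit_votes: dict[int, str], block_hash: str, n: int) -> bool:
    """A replica commits once enough peers voted for the same block."""
    matching = sum(1 for vote in commit_votes.values() if vote == block_hash)
    return matching >= quorum(n)

# 4 replicas tolerate f = 1 fault; replica 3 votes for a conflicting hash.
votes = {0: "abc", 1: "abc", 2: "abc", 3: "xyz"}
print(is_committed(votes, "abc", n=4))  # True: 3 matching >= 2f + 1 = 3
```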

PBFT was one of the early BFT consensus methods created after the 1982 BGP paper. However, PBFT tolerates at most (n-1)/3 bad actors, making it a less ideal solution for real blockchain or DLT implementations[11].

D. Tendermint

Tendermint, introduced by Jae Kwon in 2014 [12], is another blockchain BFT consensus algorithm that tolerates up to (n-1)/3 malicious Byzantine faults [13]. In addition to a consensus engine, Tendermint offers an application interface called the Application Blockchain Interface (ABCI), which enables transactions to be processed in any programming language. Essentially, the ABCI serves as an interface between a Tendermint Core consensus layer and any application running on top of it, enabling developers to build custom applications on the Tendermint consensus protocol. This is a version of blockchain modularity, which deviates from traditional monolithic blockchain architectures where transaction creation, execution, settlement, and node consensus all happen on one network layer. Tendermint “decouple[s] the consensus engine and P2P layers from the details of the application state of the particular blockchain application” [13]. This idea, as implemented by Cosmos, has been coined “app chains.” Cosmos is the most prominent implementation of the Tendermint consensus method, making app chains possible through its Inter-Blockchain Communication (IBC) protocol [14].

The ABCI is an interface specification that facilitates communication between the Tendermint Core blockchain engine and the application or app-chain running on top of it. It consists of three primary message types: DeliverTx, CheckTx, and Commit [13]. A toy application implementing these three messages is sketched after the list.

  1. DeliverTx: This message delivers each transaction in a committed block to the application. The application validates the transaction against the current state, application protocol, and cryptographic credentials. If the transaction is valid, the application updates its state, for instance by binding values into a key-value store or updating a UTXO (Unspent Transaction Output) database.
  2. CheckTx: Similar to DeliverTx, this message is specifically for transaction validation. Transactions are first checked for validity by Tendermint Core’s mempool using CheckTx.
  3. Commit: This message computes a cryptographic commitment to the current application state, which is placed in the next block header.
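Below is that toy, in-process key-value application whose method names mirror the ABCI messages above (hedged: this is not the real Tendermint socket protocol or its wire types, and the transaction encoding is invented for the example):

```python
import hashlib
import json

class KVStoreApp:
    """Toy ABCI-style app: transactions are "key=value" byte strings."""

    def __init__(self) -> None:
        self.state: dict[str, str] = {}

    def check_tx(self, tx: bytes) -> bool:
        """Mempool-level validation before a tx is gossiped (CheckTx)."""
        try:
            key, sep, value = tx.decode().partition("=")
        except UnicodeDecodeError:
            return False
        return bool(key) and bool(sep) and bool(value)

    def deliver_tx(self, tx: bytes) -> bool:
        """Apply a tx from a committed block to the state (DeliverTx)."""
        if not self.check_tx(tx):
            return False
        key, _, value = tx.decode().partition("=")
        self.state[key] = value
        return True

    def commit(self) -> str:
        """Commitment to the state for the next block header (Commit)."""
        encoded = json.dumps(self.state, sort_keys=True).encode()
        return hashlib.sha256(encoded).hexdigest()

app = KVStoreApp()
assert app.check_tx(b"name=satoshi")
app.deliver_tx(b"name=satoshi")
print(app.commit())
```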

The application can establish multiple ABCI socket connections. Tendermint Core creates three connections: one for transaction validation during mempool broadcasting, one for the consensus engine to propose blocks, and another for querying the application state [13].

Flow of ABCI messages[13]

Tendermint is a mostly asynchronous Byzantine Fault Tolerant (BFT) consensus protocol where participants, called validators, propose and vote on blocks of transactions. Blocks are committed in a chain, with one block per “height”. The protocol requires two stages of voting, pre-vote and pre-commit, with a block being committed when over ⅔ of validators pre-commit for it in the same round.

A polka occurs when more than two-thirds of validators pre-vote for the same block in a round, and every pre-commit must be justified by a polka in the same round. Validators can skip a block for reasons like offline proposers or slow networks. They wait for a proposer’s complete proposal block within a small time frame before voting to move to the next round. This timeout reliance makes Tendermint a weakly synchronous protocol (not fully asynchronous)[13].
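A minimal sketch of the polka check follows (hedged: the validator names, voting powers, and dict bookkeeping are illustrative assumptions; real Tendermint also tracks rounds and signed votes):

```python
def has_polka(prevotes: dict[str, str], power: dict[str, int], block: str) -> bool:
    """True if more than 2/3 of total voting power pre-voted for `block`."""
    total = sum(power.values())
    voted = sum(power[v] for v, b in prevotes.items() if b == block)
    return 3 * voted > 2 * total  # strict "more than two-thirds"

power = {"v1": 10, "v2": 10, "v3": 10, "v4": 10}
prevotes = {"v1": "blockA", "v2": "blockA", "v3": "blockA", "v4": "blockB"}
print(has_polka(prevotes, power, "blockA"))  # True: 30 of 40 > 2/3
```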

Tendermint ensures safety by introducing locking rules. Assuming fewer than one-third of validators are Byzantine, it guarantees the absence of conflicting blocks at the same height. Once a validator pre-commits a block, it becomes locked on that block. Then:

  1. “it must prevote for the block it is locked on
  2. it can only unlock, and pre-commit for a new block, if there is a polka for that block in a later round [13].”
Overview of Tendermint state machine[12]

Tendermint was initially designed with a built-in currency that validators had to bond as a deposit that would be revoked upon misbehavior: essentially, staking. This has since been abstracted away so that Tendermint can support arbitrary application states, including other blockchains that do not utilize PoS. Cosmos, however, is an implementation of Tendermint designed to use PoS. In that case, what matters is not ⅔ of the validators being honest, but ⅔ of the actual voting power, i.e., the stake held by those validators [13].

“Tendermint in a nutshell” [13]

E. Raft

Next, we will look at a non-BFT consensus algorithm called Raft. Although Raft cannot withstand coordinated bad actors or malicious Byzantine attacks, it is still popular for use in private distributed systems such as enterprise blockchains and server clusters. Therefore, it is worth mentioning and learning even though it does not address the Byzantine Generals Problem. Raft was introduced by Diego Ongaro and John Ousterhout in their Stanford research paper in 2014 [15] as an alternative to the Paxos family of algorithms.

In a replicated state machine, we can define each individual node in the system as one of three types [16]:

  • Leader — The singular server node which directly interacts with the client, while all other server nodes sync with the leader.
  • Follower — At regular time intervals, followers sync their copy of the state with the leader. If the leader server goes down, a follower can start a new election to become the leader.
  • Candidate — When an election is started to elect a new leader, a server that requests votes from the other servers becomes a Candidate. (All servers actually start in the Follower state and only become Candidates after an election timeout [15].)
State transition flow for a server[15]

The Raft algorithm divides time into arbitrary lengths called terms, with each term having a monotonically increasing term number. Every node keeps track of the term number and includes it in communications with other nodes. At the beginning of each term, an election is conducted to determine the new leader. Candidates seek votes from other server nodes (followers) to achieve a majority. If the candidate gathers a majority, it becomes the leader for the current term. In the event that no majority is achieved, it is referred to as a split vote, and the term concludes without a leader. Therefore, each term can have at most one leader [16].

Servers in the cluster update their term number whenever they see a higher term number from another server. If a candidate or leader discovers that its term number is outdated (lower than another server's), it immediately reverts to the follower state, and requests carrying stale term numbers are rejected [16].

The Raft algorithm utilizes two types of Remote Procedure Calls (RPCs) for its operations [15]:

  1. RequestVote RPC: Candidates use this to gather votes during an election.
  2. AppendEntries RPC: The Leader node uses this to replicate log entries and also as a heartbeat mechanism to check the availability of servers.

Now, let’s delve into the leader election process [16]:

In normal operation, the Leader node keeps its authority by sending periodic heartbeats to the Follower nodes. If a Follower times out waiting for a heartbeat, it transitions to the Candidate state, votes for itself, and issues RequestVote RPCs to try to obtain a majority and become the new Leader. The election can result in three scenarios [16]:

  1. The Candidate node becomes the Leader by receiving a majority of votes from the cluster nodes. It updates its status to Leader and starts sending heartbeats to inform other servers of the new Leader.
  2. The Candidate node fails to secure a majority of votes in the election, resulting in the term ending without a Leader. The Candidate node returns to the Follower state.
  3. Another server establishes itself as Leader first. If the Candidate then receives an AppendEntries RPC (heartbeat) from that server with a term number at least as large as its own, it recognizes the new Leader and returns to the Follower state; if the incoming term number is smaller, the Candidate rejects the RPC and remains a Candidate.

To reduce the frequency of split votes and ensure that they are resolved quickly, Raft uses election timeouts chosen randomly from a fixed interval (e.g., 150–300 ms) [15].
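A hedged sketch of the randomized timeout and the majority check is below (it omits term bookkeeping, log-up-to-date checks, and RPC plumbing; the interval follows the paper's example):

```python
import random

def election_timeout_ms() -> float:
    """Each follower draws a fresh random timeout, so one node usually
    times out first and wins before a split vote can occur."""
    return random.uniform(150, 300)

def wins_election(votes_received: int, cluster_size: int) -> bool:
    """A candidate becomes leader with votes from a strict majority."""
    return votes_received > cluster_size // 2

print(f"this follower's timeout: {election_timeout_ms():.0f} ms")
print(wins_election(votes_received=3, cluster_size=5))  # True
```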

In Raft, the log is the common state stored by all servers in the system. Similar to a transaction mempool in a blockchain, a request made by a client is stored in the log of the leader, which is then replicated to all the followers. A log entry typically consists of three pieces of information: the client command to execute, an index indicating its position in the log, and a term number representing the time when the command was logged [16].

To ensure consistency, the Leader node sends AppendEntries RPCs to all Followers to synchronize their logs with the Leader’s log. The Leader continues sending these RPCs until all Followers have successfully replicated the new entry in their logs.

When the majority of servers in the cluster have successfully copied the new entries into their logs, these entries are considered committed. The Leader also marks the entry as committed in its log to indicate successful replication. Previous entries in the log are also considered committed. Once an entry is committed, the Leader executes the corresponding command and sends the result back to the client [16].
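The leader's commit rule can be sketched as follows (hedged: the bookkeeping is a simplified stand-in for the paper's matchIndex state, and the server names are invented):

```python
def commit_index(match_index: dict[str, int], cluster_size: int) -> int:
    """Highest log index replicated on a majority of the cluster."""
    # Sort each server's highest replicated index, descending; the
    # value at the majority position is replicated on a majority
    # and therefore safe to commit (along with all earlier entries).
    indexes = sorted(match_index.values(), reverse=True)
    majority = cluster_size // 2 + 1
    return indexes[majority - 1]

# Leader at index 7; two followers caught up, two lagging behind.
match = {"leader": 7, "f1": 7, "f2": 7, "f3": 4, "f4": 3}
print(commit_index(match, cluster_size=5))  # 7 -- replicated on 3 of 5
```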

Raft is used in distributed server cluster systems such as CockroachDB, Splunk, and even MongoDB (in a modified form).

VI. Conclusion

Distributed systems, whether DLTs, distributed server clusters, or blockchains, are becoming central to modern technology, as an increasing share of the world relies on them for data storage and transactional infrastructure. Consequently, consensus algorithms are more important than ever, especially when engineers want to design systems that tolerate crashes and Byzantine faults. Stemming from a problem about Byzantine army generals posed in the 1980s, Byzantine fault tolerance has become an important part of computer science theory within distributed systems. The consensus algorithms we briefly discussed in this paper are just a few of the many consensus solutions out there; there are many more, such as Paxos, DPoS, and PoET. Each of these methods has tradeoffs relating to Byzantine attack tolerance, asynchronicity, latency, computational demand, security, and more. Understanding a survey of the most important consensus algorithms and their tradeoffs is pivotal for engineers who wish to build robust distributed systems and blockchains, whether for the public or for the enterprise.

Bibliography

[1] E. A. Akkoyunlu, K. Ekanadham, and R. V. Huber, “Some constraints and trade-offs in the design of network communications,” in Proceedings of the Fifth ACM Symposium on Operating Systems Principles, https://dl.acm.org/doi/10.1145/800213.806523 (accessed May 8, 2023).

[2] Jakub, “Two Generals’ Problem,” Finematics, https://finematics.com/two-generals-problem/ (accessed May 8, 2023).

[3] L. Lamport, R. Shostak, and M. Pease, The Byzantine generals problem, https://lamport.azurewebsites.net/pubs/byz.pdf (accessed May 9, 2023).

[4] Complexring, “The Byzantine generals problem and its relation to consensus,” Ecency, https://ecency.com/mathematics/@complexring/the-byzantine-generals-problem-and-its-relation-to-consensus (accessed May 8, 2023).

[5] Crypto.com, “How to agree: Different types of consensus for Blockchain,” Crypto.com, https://crypto.com/university/different-types-of-consensus-for-blockchain (accessed May 8, 2023).

[6] S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System,” Bitcoin, https://bitcoin.org/bitcoin.pdf (accessed May 8, 2023).

[7] R. Patel, “Byzantine fault tolerance (BFT) and its significance in Blockchain World,” HCLTech, https://www.hcltech.com/blogs/byzantine-fault-tolerance-bft-and-its-significance-blockchain-world (accessed May 8, 2023).

[8] T.-T. Kuo, H.-E. Kim, and L. Ohno-Machado, “Blockchain distributed Ledger Technologies for Biomedical and health …”, Journal of the American Medical Informatics Association, https://www.researchgate.net/publication/321147639_Blockchain_distributed_ledger_technologies_for_biomedical_and_health_care_applications (accessed May 20, 2023).

[9] “Proof-of-stake (POS),” ethereum.org, https://ethereum.org/en/developers/docs/consensus-mechanisms/pos/ (accessed May 19, 2023).

[10] M. Castro and B. Liskov, “Practical Byzantine Fault Tolerance,” Massachusetts Institute of Technology, https://www.pmg.csail.mit.edu/papers/osdi99.pdf (accessed May 20, 2023).

[11] L. Seeley, “Introduction to Sawtooth PBFT,” Hyperledger Foundation, https://www.hyperledger.org/blog/2019/02/13/introduction-to-sawtooth-pbft (accessed May 19, 2023).

[12] J. Kwon, “Tendermint: Consensus Without Mining,” https://tendermint.com/static/docs/tendermint.pdf (accessed May 20, 2023).

[13] “What is Tendermint,” What is Tendermint | Tendermint Core, https://docs.tendermint.com/v0.34/introduction/what-is-tendermint.html (accessed May 19, 2023).

[14] “The Internet of Blockchains,” Cosmos, https://cosmos.network/ (accessed May 19, 2023).

[15] D. Ongaro and J. Ousterhout, “In Search of an Understandable Consensus Algorithm,” USENIX, https://www.usenix.org/system/files/conference/atc14/atc14-paper-ongaro.pdf (accessed May 20, 2023).

[16] P. Hooda, “Raft Consensus Algorithm,” GeeksforGeeks, https://www.geeksforgeeks.org/raft-consensus-algorithm/ (accessed May 19, 2023).
