Decentralized Identity on the Hedera Consensus Service
May 01, 2020
by Paul Madsen
Head of Identity, The HBAR Foundation

Hedera is pleased to announce the early availability of beta tools that Hedera developers can use to manage the lifecycle of Decentralized Identifiers (DIDs) and Verifiable Credentials via the Hedera Consensus Service. We welcome feedback from the community on these tools.

Decentralized Identity

Decentralized Identity (DI) is an architecture and set of emerging standards for identity management that may use a blockchain or distributed ledger technology (DLT).

The fundamental criterion for the applicability of a DLT to a use case (including identity) is the opportunity to disintermediate some actor and thereby gain in efficiency, security, or cost.

However, most identity applications require third parties to act as attribute authorities – the government of Canada must assert that I am Canadian, Hedera that I am an employee, and so on; no other party is authorized to make those assertions. So there is arguably a fundamental incompatibility between DLTs and identity, in that third parties play a justifiable role.

But even if we can't disintermediate the business and trust relationships that are fundamental to identity use cases, there may be value in disintermediating the network or messaging architecture – and so gaining advantages in privacy, resilience, or integration efficiency.

The fundamental value of DLTs to identity applications is in enabling a loose coupling between the Issuer and the Relying Party (RP) with respect to the flow of identity attributes and associated metadata.

Loose coupling means that the Issuer and the RP do not interact or exchange messages directly, but rather indirectly, via transactions submitted to the DLT and queries made against it.

Two particular aspects of the above flow can be facilitated by a DLT – metadata discovery and revocation:

  • Validation of credentials presumes verification of digital signatures. DLTs can play a role in facilitating retrieval of the associated public keys. Additionally, DLTs can enable discovery of associated service endpoints.
  • A critical aspect of credential validation is for the RP to assess whether the credential has been revoked. A credential may be short-lived or long-lived. If short-lived, the RP can be confident that the claims within it are fresh and timely. If long-lived, the RP will likely need some other indicator that the claims are still valid, such as a revocation check; one way an RP might combine signature verification with such a check is sketched just after this list.
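
To make those two roles concrete, here is a minimal Java sketch of how an RP might combine them: resolve the Issuer's public key (for example, from a DID Document), verify the credential signature, and then consult a revocation indicator. The resolveIssuerKey and isRevoked helpers are hypothetical placeholders for lookups against a DLT or business network, not calls to any Hedera API.

```java
import java.security.PublicKey;
import java.security.Signature;

public class CredentialValidator {

    // Hypothetical lookup of the Issuer's public key, e.g. from a DID Document
    // resolved against a DLT; not part of any Hedera API.
    static PublicKey resolveIssuerKey(String issuerDid) {
        throw new UnsupportedOperationException("DID resolution goes here");
    }

    // Hypothetical revocation check against a business network registry.
    static boolean isRevoked(byte[] credentialHash) {
        throw new UnsupportedOperationException("registry query goes here");
    }

    static boolean validate(String issuerDid, byte[] credentialBytes,
                            byte[] signatureBytes, byte[] credentialHash) throws Exception {
        // 1. Verify the Issuer's signature over the credential
        //    (the "Ed25519" algorithm name requires Java 15 or later).
        Signature verifier = Signature.getInstance("Ed25519");
        verifier.initVerify(resolveIssuerKey(issuerDid));
        verifier.update(credentialBytes);
        if (!verifier.verify(signatureBytes)) {
            return false;
        }
        // 2. For a long-lived credential, also confirm it has not been revoked.
        return !isRevoked(credentialHash);
    }
}
```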

Scaling Decentralized Identity

In the Decentralized Identity model, the identifiers by which actors are known may be resolvable against a DLT. When two actors engage in an online interaction that requires their identities be verified, they can both perform a lookup of the other’s identifier against a DLT on which the identity is anchored and retrieve identity metadata that will assist in subsequent validation.

Such lookups of identity metadata can be satisfied by whichever DLT node receives the query, and so place no significant burden on the network. Consequently, the network can collectively provide high read throughput, i.e. many transactions per second or 'tps'. But a read presumes that the necessary metadata has previously been added to the DLT with a write, and because every node must process a write into consensus state, write throughput is generally far more constrained.

Many identity use cases will require that an identity be written to a DLT only once, thus enabling many subsequent reads. For instance, if a university issues a credential to a graduating student, then that issuance can be recorded once on a DLT. Whenever that student subsequently applies for a job, the HR department would query the DLT for the credential artifacts. It's likely that even low throughput DLTs could handle the associated volume.

On the other hand, consider an application like the Internet of Things (IoT), in which each device may receive a distinct and unique identity at manufacture time; this initial identity is a key input to later assessments of the device's provenance and validity by applications and other devices. Each issued identity, and perhaps other moments in the thing's lifecycle, could require a corresponding write to a DLT, which implies that the DLT must be able to support that volume of transactions.

Most public DLTs are not able to natively support the sort of transaction volumes that such a use case might imply. For example, Bitcoin's maximum throughput is less than 7 transactions per second, and in its current state, the Ethereum network can process a maximum of 15 tps; these are the speed limits for all transaction types, not just those anchoring identities to the ledger.

Hedera Consensus Service

The Hedera Consensus Service (HCS) offers one model for scaling Decentralized Identity: identities can be anchored to Hedera at very high throughput. Critically, identity metadata (such as public keys, DID Documents, and credential hashes) is not stored on the Hedera mainnet nodes; these identity artifacts flow through, rather than into, Hedera and out to other computers. Consequently, HCS avoids the bottleneck of the Hedera network's nodes writing this state to disk.

Applying the HCS model to decentralized identity, the Hedera nodes focus exclusively on assigning a consensus timestamp and order to the transactions that create and update identity artifacts; the artifacts themselves are persisted not on the Hedera nodes but on the computers of some other business network. With this storage burden removed, the Hedera nodes can process transactions at the native speed of the hashgraph consensus algorithm, on the order of 10,000 to 100,000 tps. Similarly, the business network nodes, freed from the burden of contributing to consensus, can be optimized for fast storage and lookup of these identity artifacts.
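
As a rough illustration of this flow, the sketch below uses the Hedera Java SDK (v2-style class names, which may differ across SDK versions) to submit an identity artifact as an HCS message. The operator account, key, topic ID, and message body are all placeholders; Hedera only timestamps and orders the message, and the business network picks it up from the mirror network.

```java
import com.hedera.hashgraph.sdk.AccountId;
import com.hedera.hashgraph.sdk.Client;
import com.hedera.hashgraph.sdk.PrivateKey;
import com.hedera.hashgraph.sdk.TopicId;
import com.hedera.hashgraph.sdk.TopicMessageSubmitTransaction;

public class SubmitIdentityArtifact {
    public static void main(String[] args) throws Exception {
        // Placeholder operator credentials; replace with a real account and key.
        Client client = Client.forTestnet();
        client.setOperator(AccountId.fromString("0.0.12345"),
                           PrivateKey.fromString("302e0201..."));

        // The identity artifact (e.g. a DID Document operation or a credential hash)
        // travels in the message body; it is not persisted by the Hedera nodes.
        String artifact = "{\"operation\":\"create\",\"did\":\"did:hedera:testnet:...\"}";

        new TopicMessageSubmitTransaction()
            .setTopicId(TopicId.fromString("0.0.99999"))  // placeholder identity topic
            .setMessage(artifact)
            .execute(client)
            .getReceipt(client);                          // wait until consensus is reached
    }
}
```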

Standardization

Key pieces of the DI architecture are under standardization at the W3C. Hedera recently joined the W3C so that we can ensure that DI applications built on Hedera will be compliant with those emerging standards and, where appropriate, contribute Hedera specifications into those standardization efforts.

An early preview of the Hedera HCS DID Method specification defines how Decentralized Identifiers and their corresponding DID Documents are managed via HCS messages. Participants create, update, and delete DID Documents via HCS messages submitted to a specific topic. Those messages (and the DID data they carry) are timestamped and ordered by the Hedera network before flowing out via the mirror network to members of a business network. It is against the business network that Relying Parties resolve DIDs into the corresponding DID Documents.
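
On the resolution side, a member of the business network can subscribe to the DID topic through a mirror node and maintain its own store of DID Documents for RPs to query. The sketch below again uses v2-style Hedera Java SDK names; the topic ID, the JSON envelope, and the extractDid helper are illustrative assumptions rather than the exact encoding defined by the HCS DID Method specification.

```java
import com.hedera.hashgraph.sdk.Client;
import com.hedera.hashgraph.sdk.TopicId;
import com.hedera.hashgraph.sdk.TopicMessageQuery;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DidTopicListener {
    // In-memory stand-in for the business network's DID Document store.
    private static final Map<String, String> didDocuments = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        Client client = Client.forTestnet();

        // Subscribe to the (placeholder) DID topic via a mirror node and apply each
        // consensus-ordered operation to the local store.
        new TopicMessageQuery()
            .setTopicId(TopicId.fromString("0.0.99999"))
            .subscribe(client, message -> {
                String envelope = new String(message.contents, StandardCharsets.UTF_8);
                // Hypothetical parsing; a real listener follows the DID method spec
                // to distinguish create, update, and delete operations.
                didDocuments.put(extractDid(envelope), envelope);
            });
        // (A real listener would keep the process alive to keep receiving messages.)
    }

    private static String extractDid(String envelope) {
        // Placeholder: real parsing depends on the message schema.
        return envelope;
    }
}
```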

Similarly, HCS messages can be used to enable a revocation registry of Verifiable Credentials. When a Verifiable Credential (for instance, a Driver's License issued by the DMV or a particular safety certification issued by a Technical Training School) is issued to a Subject, the hash of that credential is sent via an HCS message to be stored on a business network. When that credential is subsequently presented (for instance, in order to rent a car or to establish a technician's qualifications for a particular maintenance job), the current status of that credential can be queried against the nodes of the business network. If the credential has been revoked, the hash will have been deleted and the query will fail.
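
A minimal sketch of the issuance half of that flow, under the same assumptions as above: the Issuer hashes the credential and submits only the hash to a placeholder registry topic; business network nodes then store the hash, answer later status queries, and delete it if the credential is revoked.

```java
import com.hedera.hashgraph.sdk.Client;
import com.hedera.hashgraph.sdk.TopicId;
import com.hedera.hashgraph.sdk.TopicMessageSubmitTransaction;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class CredentialRegistry {

    // Hash the serialized credential; only this digest, never the credential itself,
    // flows through Hedera to the business network.
    static String hashCredential(String credentialJson) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
            .digest(credentialJson.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    // Submit the hash to a placeholder registry topic so that business network nodes
    // can record the credential as issued (and later delete the hash on revocation).
    static void registerCredential(Client client, String credentialJson) throws Exception {
        new TopicMessageSubmitTransaction()
            .setTopicId(TopicId.fromString("0.0.88888"))
            .setMessage(hashCredential(credentialJson))
            .execute(client)
            .getReceipt(client);
    }
}
```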

Java DID SDK

We have created the open source Java DID SDK to simplify integrating into your application the messaging patterns described above for managing DIDs and Verifiable Credentials via HCS.

The DID SDK abstracts away the specifics of CRUD operations for DIDs and VCs via HCS messages.

The DID SDK builds on the existing HCS features of the Hedera Java SDK to add DID- and VC-specific functions, such as recording the issuance of a DID or the revocation of a VC.
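
As a rough indication of that abstraction level (not the SDK's actual class or method names; see the repository for those), the operations the DID SDK wraps are of roughly this shape:

```java
// Hypothetical interface sketching the kinds of operations the DID SDK wraps
// behind HCS messages; the real SDK's class and method names differ.
public interface HcsIdentityOperations {

    // Record the creation of a DID and its DID Document via an HCS message.
    void createDid(String did, String didDocumentJson);

    // Record updates to, or deletion of, a DID Document.
    void updateDid(String did, String didDocumentJson);
    void deleteDid(String did);

    // Record the issuance or revocation of a Verifiable Credential,
    // identified by its hash, in the revocation registry.
    void issueCredential(String credentialHash);
    void revokeCredential(String credentialHash);
}
```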

The DID SDK is available here.