Orb is a decentralized identifier (DID) method based on a federated and replicated Verifiable Data Registry (VDR). The decentralized network consists of Orb servers that write, monitor, witness, and propagate batches of DID operations. The batches form a graph that is propagated and replicated between the servers as content-addressable objects. These content-addressable objects can be located via both domain and distributed hash table (DHT) mechanisms. Each Orb witness server observes a subset of batches in the graph and includes them in their ledgers (as append-only Merkle Tree logs). The servers coordinate by propagating batches of DID operations and by monitoring the applicable witness servers' ledgers. The Orb servers form a decentralized network without reliance on a common blockchain for coordination.
This document specifies the Orb DID Method along with a data model and rules for representing a graph of batches, the Orb server API, a registry of protocol versions and parameters, an [[[ACTIVITYSTREAMS-CORE]]] vocabulary and an [[[ACTIVITYPUB]]] profile for propagating batches, a [[[RFC7033]]] profile for discovering endpoints, a [[[SIDETREE]]] protocol profile for encoding DID operations, and a [[[RFC6962]]] profile for witness ledgers.
The Orb DID method enables users or entities to create self-certifying decentralized identifiers (DIDs) that are propagated across a decentralized network without reliance on a common blockchain for coordination. Decentralized identifiers [[DID-CORE]] are a fundamental building block to enable persons and entities to prove that they control an identifier that is attached to digital objects such as Verifiable Credentials [[VC-DATA-MODEL]].
Systems that rely on self-certifying DIDs need a mechanism to propagate the ordered updates to DID documents from the server writing operations to their DID resolver. As an example, operations can be distributed via a content-addressable network (such as IPFS) and announced on a public blockchain (such as Bitcoin) or a permissioned blockchain (such as Hyperledger Fabric). This DID method, instead, makes use of decentralized federation protocols to propagate announcements and replicate content in a gossip manner. By using a decentralized federation mechanism, we have no need to choose a common blockchain nor rely on a consortium's distributed ledger technology (DLT) for coordination. Orb has the ability to enable use cases where a public blockchain is not acceptable to stakeholders and also avoids lock-in to a single DLT.
As operations on DIDs can be originated by different writers in a decentralized network, it is also beneficial to enable a mechanism for determining the latest operations for a particular suffix. For example, when all DID operations are announced by the same blockchain, the other systems can monitor that blockchain for the latest operations. These blockchains can also provide relative timing in cases of late publishing. This DID method, instead, makes use of independent witness ledgers that the DID controller can associate with their DID. These witness ledgers can be monitored as additional propagation sources and proofs from these ledgers can also provide relative timing confidence in cases of late publishing. Rather than being a global chain of all operation announcements, these ledgers contain a subset of the announcements that propagate through the decentralized federation protocols.
The property of self-certifying DIDs is obtained because Orb is an extension of the [[[SIDETREE]]] protocol. By using the Sidetree protocol, the DID controller forms their own verifiable chain of patches from inception to the current state of the DID document. These verifiable chains are included into larger batches by one of the independent Orb servers and encoded as content-addressable objects. We also inherit the scalability properties of the Sidetree protocol batch strategy.
Our motivation for the Orb DID method is to enable many independent organizations to create Verifiable Data Registries (VDRs) that can become connected over time. In designing Orb, we have the goal to enable these VDRs to interconnect into a decentralized network without the need to choose a common public blockchain or to rely on special-purpose consortiums to form (and remain in operation). By using decentralized federation protocols and witness ledgers, we enable self-certifying DIDs even in use cases where leveraging a public blockchain or a consortium DLT is not acceptable to stakeholders. Orb enables a federated, replicated and scalable VDR based on a decentralized network for propagating, witnessing and replicating changes to DID documents.
Orb uses a propagation model where servers announce transactions to other servers that are following them. In general, Orb Servers have the capability to propagate transactions and replicate Orb Verifiable Data Registries (VDRs). Beyond propagation and replication, there are three primary Orb Server roles in the transaction flow:
Orb servers apply both the DID controller's intention and observed timings to resolve scenarios where a DID update is applied at the same sequence position in the DID's operation chain. As the DID controller is fully responsible for the changes to their DIDs, they can create a new branch in their operation chain even after time has passed. Orb Servers use rules to determine the first published branch and use observations to assist in that decision as follows:
The following diagram provides an example of applying the stale rule to DID B and the usage of a Witness associated with DID A as a tie-breaker to resolve a branch.
Transactions are structured as a content-addressable graph. An Orb server accepts operations from DID controllers and periodically creates a Sidetree batch consisting of operations from multiple DIDs. From the batch, a pretransaction is created that includes hashlinks [[HASHLINK]] of the most recent transaction for each DID from the Writer server's point of view. The Writer preannounces the transaction to a set of Witnesses. Each Witness promises to include the transaction in their ledger by returning a timestamped and signed pretransaction. The Orb server then propagates the combination of the pretransaction with the timestamped witness signatures as a new transaction.
Orb enables the VDR to be replicated between independent systems to increase the overall resilience of the ecosystem. The transaction model includes Witness signatures as a common Orb format regardless of the Witness ledger implementation. To achieve our system monitoring properties, we assume a Witness holds a monitorable ledger that stores timestamped transactions with an append-only model. Witness ledgers can be replicated and monitored for consistent behavior. In particular, monitoring systems validate:
Orb DIDs are self-certifying. The data structure that encodes the ordered updates to a DID document forms its own verifiable chain from inception to the current state of the DID document. We inherit this property due to our usage of the [[[SIDETREE]]] protocol for encoding the DID document updates.
Orb makes use of decentralized federation protocols to propagate announcements and replicate content in a gossip manner. We use [[[ACTIVITYSTREAMS-CORE]]] and [[[ACTIVITYPUB]]]. In later sections, we describe the vocabulary and profile for using these protocols in Orb.
Orb does not rely on public blockchains or DLTs to coordinate - there is no need for network consensus. Instead, we rely on self-certifying DIDs being propagated and the ability to replicate VDRs in a gossip manner.
Modifications to DID documents that are sent to an Orb server are aggregated into batches prior to being announced on the decentralized network. By supporting batching, we enhance the scalability properties due to a reduction in messages and objects being stored. We inherit this property due to our usage of the [[[SIDETREE]]] protocol for encoding the DID document updates.
When writing a batch of DID document updates, an Orb server also includes immutable references to the prior batches. These references form a graph (in the form of a Merkle Tree) such that the ancestor operations can be processed prior to a newly observed batch. This property is needed in the Orb data structures since we do not have a common blockchain to provide this history.
When a DID controller supplies their DID, they also include a reference to a batch in their DID string. This reference allows an Orb server to discover the graph where a DID was created (or more generally where a core Sidetree update occurred). This reference also enables the DID controller to specify a minimum version of the DID document that must be discovered in order to resolve.
The Orb VDR leverages Content-Addressable Storage (CAS) to hold the DID document batch files. The processed batch files result in hosting resolvable DID documents. By supporting a CAS, we enhance the ability to replicate immutable content across the decentralized network. We inherit this property due to our usage of the [[[SIDETREE]]] protocol for encoding the DID document updates and batches.
We also leverage the CAS to hold the graph of batches. The immutable references to prior batches are in the form of hashlinks [[HASHLINK]].
The Orb method relies on decentralized federation, gossip and CAS replication mechanisms to propagate the VDR among servers. The self-certifying nature of Orb DIDs enables confidence in the DID validity without the need for network consensus. As the DID controller is fully responsible for the changes to their DIDs, they can create a new branch in their operation chain even after time has passed. This situation is called late publishing. Resolver systems require a mechanism to decide which branch should be used to represent the current DID state. It is also beneficial to create rules that enable a consistent viewpoint of the active branch among interconnected Orb servers. Resolver systems use rules to determine the first published branch and use observations from Witness systems to assist in that decision.
In the absence of Witnesses, we start with a simple rule: the first branch of a DID observed by an Orb server is the first published branch for that DID from that server's point of view. As Orb servers become increasingly interconnected, batches are gossiped between the Orb servers. Each server that forwards a batch includes a signed timestamp of when they observed the batch. The Orb Server MAY designate some of the other Orb servers as trusted. These trusted servers are then used to determine relative timing between batches. In the case that a majority viewpoint of itself and the other trusted Orb servers exists, that majority viewpoint will be used to determine the relative timing of the batches. From the relative timing of the batches, we also resolve the relative timing of the branch in the DID. An Orb server does not wait for consensus - its viewpoint may eventually converge to a majority view, over time, in these situations.
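As a non-normative illustration, the majority-viewpoint rule described above can be sketched as follows. The function name and the vote encoding (each server reports which of two batches it observed first, derived from the signed timestamps attached during gossip) are illustrative, not part of the protocol:

```python
from collections import Counter

def relative_order(own_vote: str, trusted_votes: list[str]) -> str:
    """Decide which of two batches ("A" or "B") is treated as first.

    Each vote names the batch a server (including ourselves) observed
    first. A strict majority of the combined viewpoints wins; when no
    majority exists, the server keeps its own observation until its
    viewpoint eventually converges.
    """
    tally = Counter([own_vote] + trusted_votes)
    (leader, top), *rest = tally.most_common()
    if not rest or top > rest[0][1]:  # a strict majority exists
        return leader
    return own_vote  # tie: fall back to our own viewpoint
```

The fallback to the server's own observation mirrors the text: an Orb server does not wait for consensus, it simply adopts a majority view when one is visible.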
Rules exist to resolve the active branch so that Orb servers can consistently resolve the DID controller's intended and current DID document. To mitigate against initial divergences, we also introduce the Witness role to observe a transaction prior to propagation. The Orb transaction writer preannounces the existence of an Orb transaction by submitting it to a Witness. A Witness creates a proof that they observed an Orb transaction at a certain time. Witnesses have the capability to provide timestamping and also host a Witness Ledger to provide relative ordering between merged sets of Orb Transactions.
Although each Orb Server decides who to designate as trusted, it is also important that an Orb Server's behavior can be monitored. We use a Witness ledger as the mechanism for monitoring behavior - if inconsistencies are detected, other systems can adjust their viewpoint for trustworthiness. A Witness ledger is responsible for recording individual Orb transactions, providing signed transaction timestamps that can be embedded within a propagated Orb transaction, providing ledger consistency proofs, and providing an API that exposes their ledger. A Witness ledger is not responsible for maintaining the Orb transaction graph structure nor is it responsible for maintaining a complete history of Orb transactions. An Orb server is monitorable by exposing a Witness ledger API.
We structure witness ledgers in a similar manner to certificate transparency [[RFC6962]], as append-only Merkle Trees with proof capabilities. A Witness ledger promises to include a submitted Orb transaction within a time period known as the Maximum Merge Delay (MMD). Once the ledger merges a set of Orb transactions into the Merkle Tree, a Signed Tree Head (STH) is produced. The STH (and associated Merkle Proofs) is used to validate consistency between older and newer ledger revisions.
When an Orb server creates a transaction, the server requests other Orb servers (as Witnesses) to include the transaction into their Witness ledger. The Orb server preannounces a transaction to Witnesses after preparing and writing the Sidetree CAS objects. The transaction announcement is then sent to Witnesses for validation and inclusion. Each Witness validates the Orb transaction structure and that it was issued within an acceptable delta of their current time. Upon successful validation, each Witness returns a signed transaction timestamp and a promise to include the Orb transaction into their Ledger. The Orb transaction (combined with the signed transaction timestamps) is then written into the CAS and propagated. When an Orb Server receives a propagated transaction, it invalidates (as stale) each Sidetree operation that has a timestamp that is not within an acceptable delta of the Orb transaction's timestamp. When these proofs originate from trusted servers, their timing information is immediately applied after receiving an Orb transaction propagation. An Orb server that receives witnessed transactions from trusted servers likely holds the majority timing view immediately in these situations.
As Orb relies on gossip replication, it is possible for a resolver system to miss transactions originating from other servers that they do not follow. To mitigate this issue for a particular DID, we allow the DID controller to specify a witness policy. This policy contains a set of witnesses that MUST be used when an Orb transaction includes changes to the associated DID. The Writer server ensures that a sufficient number of these servers (according to the DID's policy) are also acceptable to it as Witnesses. If the DID's policy is unacceptable to the Writer server, the operation MUST be rejected and not included into the Orb transaction. The DID policy ensures that Resolver Servers can determine whether they have the latest propagations and that the DID controller can use any Writer server that has mutually acceptable policies.
As a DID can be associated with a particular Witness, that Witness provides observations of the DID controller's behavior. In addition to propagating transactions for a DID, the associated Witness is also given extra weight as a tie-breaker for resolving the late publishing scenario described in the previous section.
The ability to set (or change) the policy is an operation that is included into a propagated Sidetree core index.
The Orb DID method enables CAS discovery and usage via Web APIs. The WebFinger protocol [[RFC7033]] allows systems to query for both [[ACTIVITYPUB]] endpoints and also for CAS endpoints for a given resource. By enabling a fully Web-enabled model, we do not introduce a requirement on Distributed Hash Table (DHT) usage into the method.
Although Orb DIDs can be created that do not have a dependency on DHTs, we also enable optional support for registering CAS resources on a DHT. Orb servers MAY choose to expose their CAS-based VDRs on a Distributed Hash Table (DHT) network such as IPFS. When using a DHT, we gain the advantage of not needing to specify any Web domain to be queried for discovery.
The Orb transaction graph that is stored into the content-addressable storage (CAS) also enables discovery of propagation properties. Each node in the transaction graph includes known CAS discovery information and witness endpoints. The CAS discovery information enables Orb servers to include additional endpoints when responding to WebFinger [[RFC7033]] queries. The witness endpoints enable Orb servers to follow additional systems for propagation.
The format for the did:orb method conforms to the [[[DID-CORE]]] specification. The DID scheme consists of the did:orb prefix, the mechanism for discovering content-addressable objects, a multihash (with a multibase prefix) for the minimum node in the DID operation batch graph, and the unique suffix of the DID.
The method uses the following ABNF [[RFC5234]] format:
did-orb-format    = "did:orb:" (orb-scheme-did / orb-long-form-did / orb-canonical-did)
orb-canonical-did = anchor-hash ":" did-suffix
orb-long-form-did = anchor-hash ":" did-suffix ":" did-suffix-data
orb-scheme-did    = scheme path ":" orb-metadata-did
orb-metadata-did  = anchor-hash [":" hash-metadata] ":" did-suffix [":" did-suffix-data]
scheme            = 1*idchar
path              = *(":" segment)
segment           = 1*idchar ; more constrained than [RFC3986]
anchor-hash       = 1*idchar
hash-metadata     = 1*idchar
did-suffix        = 1*idchar
did-suffix-data   = 1*idchar
See [[?RFC3986]] for the original definition of scheme and path and [[DID-CORE]] for the definition of idchar. [[SIDETREE]] provides additional explanation for the did-suffix and did-suffix-data elements. [[MULTIHASH]] and [[MULTIBASE]] define the multihash and multibase formats that are used for the anchor-hash element.
The canonical Orb DID includes a multihash of the latest anchor object that contains a create or recovery operation for the Sidetree DID suffix. The first segment after `did:orb` contains the anchor multihash and the second segment contains the Sidetree DID suffix.
did:orb:uEiDlXjleTwr4eZalpXVy086zs-TPK-h54ojbpl7EBvZeHQ:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg
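As a non-normative sketch, splitting a canonical Orb DID into its anchor-hash and Sidetree DID suffix segments can be done as follows. The helper only handles the two-segment canonical form, not the long-form or scheme-qualified variants, and its name is illustrative:

```python
def parse_canonical_orb_did(did: str) -> tuple[str, str]:
    """Split a canonical did:orb string into (anchor-hash, did-suffix).

    Illustrative helper for the canonical two-segment form only. For the
    long-form variant the second return value would also carry the
    did-suffix-data segment.
    """
    prefix = "did:orb:"
    if not did.startswith(prefix):
        raise ValueError("not a did:orb identifier")
    anchor_hash, _, suffix = did[len(prefix):].partition(":")
    if not anchor_hash or not suffix:
        raise ValueError("canonical did:orb requires anchor-hash and did-suffix")
    return anchor_hash, suffix
```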
In cases where the anchoring information is not included into the DID string, the graph multihash is represented by the identity code (`0x00`) with a zero-length output. The base64url representation of an unknown anchor object is: `uAAA`. For example, when a DID is initially created, the graph multihash is not known until the DID is added to an anchor object. In this case, the anchor object cannot be included into the DID string, so the unknown anchor object is included instead.
did:orb:uAAA:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg
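The `uAAA` value above falls directly out of the encoding rules: the identity multihash code (`0x00`) followed by a zero-length indicator (`0x00`), multibase-prefixed with `u` (base64url, no padding):

```python
import base64

# identity code (0x00) + zero output length (0x00), base64url without
# padding, prefixed with the multibase code 'u'
unknown_anchor = "u" + base64.urlsafe_b64encode(bytes([0x00, 0x00])).decode().rstrip("=")
# unknown_anchor == "uAAA"
```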
The Orb DID method extends the operations specified in the [[SIDETREE]] specification.
An Orb DID is created by submitting a create to the operations endpoint of an Orb Server, as specified in Sidetree.
Detailed steps are specified in the Sidetree API.
An Orb Server exposes a DID resolution API as defined in [[DID-CORE]] using the HTTP(S) Binding specified by [[DIDRESOLUTION]].
Detailed steps and additional method metadata properties are specified in the Sidetree API.
An Orb DID is updated by submitting an update to the operations endpoint of an Orb Server, as specified in Sidetree.
Detailed steps are specified in the Sidetree API.
An Orb DID is recovered by submitting a recover to the operations endpoint of an Orb Server, as specified in Sidetree.
Detailed steps are specified in the Sidetree API.
An Orb DID is deactivated by submitting a deactivate to the operations endpoint of an Orb Server, as specified in Sidetree.
Detailed steps are specified in the Sidetree API.
did:orb:https:example.com:uAAA:EiA329wd6Aj36YRmp7NGkeB5ADnVt8ARdMZMPzfXsjwTJA
did:orb:https:example.com:uAAA:EiA329wd6Aj36YRmp7NGkeB5ADnVt8ARdMZMPzfXsjwTJA:ey...

When an Orb Server has propagation information for a resolved DID, the server includes the anchor-hash segment within the canonicalId property of the returned DID document metadata. The anchor-hash is set to the multihash (with multibase prefix) of the latest known AnchorCredential that contains a Create or Recover operation for the DID. After a DID controller sends a Create or Recover operation to an Orb Server, the resolution result may have no (or an outdated) anchor-hash segment. The DID controller may need to retry resolution until the operation has been propagated and the multihash becomes available. Once the multihash is available, resolution responses contain the updated canonicalId property.
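The retry behavior described above can be sketched as follows. Here `resolve` stands in for a caller-supplied DID resolution call returning the DID document metadata as a dict; the helper name and parameters are illustrative, not normative:

```python
import time

def wait_for_canonical_id(resolve, did, attempts=5, delay=1.0):
    """Poll resolution until the returned metadata carries a canonicalId
    whose anchor-hash segment is populated (i.e., not the unknown anchor
    "uAAA"). Returns the canonicalId, or None if it never appeared.
    """
    for _ in range(attempts):
        metadata = resolve(did)
        canonical = metadata.get("canonicalId", "")
        if canonical and ":uAAA:" not in canonical:
            return canonical
        time.sleep(delay)
    return None
```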
A client discovers a domain's endpoints for DID resolution and DID operations using a .well-known scheme [[RFC8615]]. The domain declares its own endpoints for resolution and operations.
GET /.well-known/did-orb HTTP/1.1
Host: alice.example.com
Accept-Encoding: gzip, deflate
When the discovery profile exists, an HTTP 200 status code is returned.
HTTP/1.1 200 OK
Date: Sat, 30 Jan 2021 18:31:58 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *

{
  "resolutionEndpoint": "https://alice.example.com/sidetree/0.1/identifiers",
  "operationEndpoint": "https://alice.example.com/sidetree/0.1/operations"
}
A shared domain may be used to enable a group of organizations to share resolution responsibilities. This model enables a resolution client to have increased confidence by validating a resolution result against multiple entities. A domain is a shared domain model when it declares its linked domains and an n-of-m policy in its well-known configuration. Linked domains indicate to clients that these domains have a synchronized (eventually consistent) relationship and can be used n-of-m for resolution. These domains and policy are fetched using a WebFinger [[RFC7033]] query. When using linked domain resolution, the client performs the following steps to obtain higher confidence in the resolution results:
GET /.well-known/did-orb HTTP/1.1
Host: shared.example.com
Accept-Encoding: gzip, deflate
When the discovery profile for the shared domain exists, an HTTP 200 status code is returned.
HTTP/1.1 200 OK
Date: Sat, 30 Jan 2021 18:31:58 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *

{
  "resolutionEndpoint": "https://shared.example.com/sidetree/0.1/identifiers",
  "operationEndpoint": "https://shared.example.com/sidetree/0.1/operations"
}
GET /.well-known/webfinger?resource=https%3A%2F%2Fshared.example.com%2Fsidetree%2F0.1%2Fidentifiers HTTP/1.1
Host: shared.example.com
Accept-Encoding: gzip, deflate
When the DID resolution capability exists, an HTTP 200 status code is returned.
HTTP/1.1 200 OK
Date: Sat, 30 Jan 2021 18:31:58 GMT
Content-Type: application/jrd+json
Connection: keep-alive
Access-Control-Allow-Origin: *

{
  "subject": "https://shared.example.com/sidetree/0.1/identifiers",
  "properties": {
    "https://trustbloc.dev/ns/min-resolvers": 2
  },
  "links": [
    { "rel": "self", "href": "https://shared.example.com/sidetree/0.1/identifiers" },
    { "rel": "alternate", "href": "https://charlie.example.com/sidetree/0.1/identifiers" },
    { "rel": "alternate", "href": "https://oscar.example.com/sidetree/0.1/identifiers" },
    { "rel": "alternate", "href": "https://mike.example.com/sidetree/0.1/identifiers" }
  ]
}
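As a non-normative sketch, a client can enforce the n-of-m policy (the `min-resolvers` property above) by comparing resolution results already fetched from the alternate endpoints. Fetching is out of scope here, and canonical JSON serialization stands in for a proper resolution-result equality check:

```python
import json
from collections import Counter

def satisfies_min_resolvers(results: list[dict], min_resolvers: int) -> bool:
    """Return True when at least `min_resolvers` independent endpoints
    returned an identical resolution result."""
    canonical = Counter(json.dumps(r, sort_keys=True) for r in results)
    return any(count >= min_resolvers for count in canonical.values())
```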
The DID controller declares their origin policy by setting the URI of their desired witness into the anchorOrigin property in the Create or Recover operation.
TODO: In a followup revision, we might specify additional origin policy information.
The DID Controller includes their skew-adjusted current timestamp in the anchorFrom property of an operation's data object payload. To allow for clock skew, the DID Controller subtracts five minutes from their current timestamp. The timestamp is formatted as a [[RFC7519]] NumericDate.
The DID Controller includes their skew-adjusted current timestamp in the anchorUntil property of an operation's data object payload. To allow for batching and clock skew, the DID Controller adds 25 minutes to their current timestamp. The timestamp is formatted as a [[RFC7519]] NumericDate.
The delta range between anchorFrom and anchorUntil is 30 minutes.
TODO: In a followup revision, we will specify protocol parameters to constrain the delta range.
An Orb Server exposes the [[[ACTIVITYANCHORS]]] API for propagation, content-addressable storage, discovery, and witnessing.
Orb specifies anchoring metadata for the Sidetree core index file. This metadata includes each DID included in the index along with a hashlink to the previous anchor. The metadata is formatted as a Linkset [[LINKSET]].
The anchor property contains the hashlink to the Sidetree core index file. The profile property contains the version of the Orb spec (e.g., https://w3id.org/orb#v1). The author property contains the Orb server URI that published the anchor. The item property contains a collection of the DIDs contained in the Sidetree files (referenced by anchor). The previous property (within an item) contains a hashlink to the previous anchor that contained a Sidetree operation for the DID.
{
  "linkset": [
    {
      "anchor": "hl:uEiDkZV2rw2XKdqPoGKy6Vg0HEk36HfsxyWfnY3jp3Q2K-g",
      "author": "https://orb.domain2.com/services/orb",
      "item": [
        { "href": "did:orb:uAAA:EiBTRfNkzKwW3ZVDl6WwXsYhre6HPE8jQ7e9l3m6pii-iw" },
        {
          "href": "did:orb:uAAA:EiCUt537plK2HuI0k2rQNP3MgDtq1T5Wj_LZ7yQOJY2gcA",
          "previous": ["hl:uEiA30C_KU_wogvUhJKLUA8bXHqSM_75X1QZQN3z_4VacXQ"]
        },
        { "href": "did:orb:uAAA:EiDpy0--tFbSNGG-UNce9_yXfRBFQ9kkme8SREFLUiaK6g" },
        { "href": "did:orb:uAAA:EiAVOBVLnwIgF6f83JfJUi82mhfAQ2oVKpsZCfkt5tX2aQ" }
      ],
      "profile": "https://w3id.org/orb#v1"
    }
  ]
}
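A consumer of such a Linkset can recover, for each DID, the hashlinks of the previous anchors that contained an operation for it (an empty list when the DID first appears in this anchor). A minimal sketch, assuming the Linkset has already been parsed into a dict:

```python
def previous_anchors(linkset: dict) -> dict[str, list[str]]:
    """Map each DID href in an anchor Linkset to its `previous` hashlinks
    (empty list for DIDs whose first operation is in this anchor)."""
    result = {}
    for link in linkset["linkset"]:
        for item in link.get("item", []):
            result[item["href"]] = item.get("previous", [])
    return result
```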
Orb URI Scheme | Description | Reference |
---|---|---|
hl | Hashlink retrieval of anchor | |
https | Hostmeta discovery (via https) | |
ipns | Hostmeta discovery (via ipns) | |
ipfs | IPFS retrieval of anchor | |
The hl scheme uses a hashlink [[HASHLINK]] that includes the multihash of the latest anchor object and metadata containing URLs to the anchor object. The first segment after `did:orb` contains the anchor multihash, the second segment contains the hashlink metadata and the third segment contains the Sidetree DID suffix.
did:orb:hl:uEiDlXjleTwr4eZalpXVy086zs-TPK-h54ojbpl7EBvZeHQ:uoQ-Bc2h0dHBzOi8vZXhhbXBsZS5jb20:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg
The hostmeta schemes fetch service discovery information using the JRD form of hostmeta [[RFC6415]]. To retrieve `host-meta.json`, the `scheme` and `path` segments of the Orb DID are converted into a URL by replacing `:` in the `path` segments with `/` and prefixing the `scheme`. From the transformed URL, the host-meta.json can be retrieved according to [[RFC6415]].
did:orb:https:example.com:uEiDlXjleTwr4eZalpXVy086zs-TPK-h54ojbpl7EBvZeHQ:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg
did:orb:ipns:k51qzi5uqu5dl3ua2aal8vdw82j4i8s112p495j1spfkd2blqygghwccsw1z0p:uEiDlXjleTwr4eZalpXVy086zs-TPK-h54ojbpl7EBvZeHQ:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg
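The URL transformation described above can be sketched as follows. The helper recognizes the end of the `path` segments by the anchor-hash's multibase `u` prefix, an assumption that holds for the base64url-encoded examples in this document but is not a normative rule:

```python
def hostmeta_url(did: str) -> str:
    """Build the RFC 6415 host-meta.json URL from a discovery-scheme
    Orb DID: take the scheme segment and the path segments before the
    anchor-hash, replace ':' with '/', and append the well-known path.
    Illustrative sketch only."""
    segments = did.split(":")
    assert segments[:2] == ["did", "orb"]
    scheme = segments[2]
    path = []
    for seg in segments[3:]:
        if seg.startswith("u"):  # first multibase segment = anchor-hash
            break
        path.append(seg)
    return scheme + "://" + "/".join(path) + "/.well-known/host-meta.json"
```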
TODO: In a followup revision, we will add a description and example of the Orb JRD.
The ipfs scheme retrieves the anchor object from IPFS. To retrieve the anchor object, the multihash must be converted into an IPFS URL (containing an IPFS CID). The multihash is transformed into a CID, as shown in the following example:
DID: did:orb:ipfs:uEiDlXjleTwr4eZalpXVy086zs-TPK-h54ojbpl7EBvZeHQ:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg
URL: ipfs://bafkreihfly4v4tyk7b4znjnfovznhtvtwpsm6k7iphrirw5gl3can5s6du
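The conversion above can be reproduced with a short sketch: strip the multibase `u` prefix (base64url, no padding), wrap the raw multihash bytes in a CIDv1 with the `raw` multicodec (`0x55`), and re-encode as base32 with the multibase prefix `b` (the `bafkrei...` form):

```python
import base64

def anchor_multihash_to_ipfs_url(anchor_hash: str) -> str:
    """Convert a multibase-encoded anchor multihash into an ipfs:// URL
    carrying a CIDv1 (raw codec, base32)."""
    assert anchor_hash.startswith("u")  # multibase base64url, no padding
    b64 = anchor_hash[1:]
    mh = base64.urlsafe_b64decode(b64 + "=" * (-len(b64) % 4))
    cid_bytes = bytes([0x01, 0x55]) + mh  # CIDv1 || raw codec || multihash
    cid = "b" + base64.b32encode(cid_bytes).decode().lower().rstrip("=")
    return "ipfs://" + cid
```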
Type Code | Description | Reference |
---|---|---|
vct-v1 | Verifiable Credential Transparency | |
The linked data proof includes a domain property that indicates the ledger that promises to include an Orb transaction. The ledger's type, along with any known replicas, can be determined using the WebFinger protocol [[RFC7033]]. The replica links have the rel field set to alternate.
GET /.well-known/webfinger?resource=https%3A%2F%2Fwitness1.example.com%2Fledgers%2Fmaple2021 HTTP/1.1
Host: witness1.example.com
Accept-Encoding: gzip, deflate
When the ledger exists, an HTTP 200 status code is returned.
HTTP/1.1 200 OK
Date: Sat, 30 Jan 2021 18:31:58 GMT
Content-Type: application/jrd+json
Connection: keep-alive
Access-Control-Allow-Origin: *

{
  "subject": "https://witness1.example.com/ledgers/maple2021",
  "properties": {
    "https://trustbloc.dev/ns/ledger-type": "vct-v1"
  },
  "links": [
    { "rel": "self", "href": "https://witness1.example.com/ledgers/maple2021" },
    { "rel": "alternate", "href": "https://replica.example.com/ledgers/maple2021" }
  ]
}
The Verifiable Credential Transparency (VCT) Witness ledger is based on certificate transparency [[RFC6962]]. The Anchor Credentials are included into VCT ledgers (as append-only Merkle Tree logs). VCT ledgers may be named to enable rollover.
We extend [[RFC6962]] to include Verifiable Credential (VC) [[VC-DATA-MODEL]] objects. The following subsections describe the endpoints and the additional VC type for the Merkle Tree.
Orb AnchorCredentials are added using the process described in , as required by Orb.
The Retrieve Latest Signed Tree Head endpoint is described in RFC6962. In this example, a signed tree head is requested from a ledger named maple2021.
GET /ledgers/maple2021/ct/v1/get-sth HTTP/1.1
Host: witness1.example.com
Accept-Encoding: gzip, deflate
HTTP/1.1 200 OK
Date: Sat, 30 Jan 2021 13:20:11 GMT
Content-Type: application/json
Connection: keep-alive
Access-Control-Allow-Origin: *

{
  "tree_size": 100,
  "timestamp": 1612097791,
  "sha256_root_hash": "qOK55eWhQ96DmRbAsHTridM1BrxZtqJAW2uF0ceQcTo=",
  "tree_head_signature": "6UIXmSf8Pg9jY6e8DUPguEW3sl0HxX0fckMnP5ckUchcnsjlXee0722AV4RL1+xNEGVX/poCKu/wPH9teprMyw"
}
The Retrieve Merkle Consistency Proof between Two Signed Tree Heads endpoint is described in RFC6962.
The Retrieve Merkle Audit Proof from Log by Leaf Hash endpoint is described in RFC6962.
The Retrieve Entries from Log endpoint is described in RFC6962.
extra_data: We extend the entry types to include verifiable credentials. In the case of a VC type, extra_data contains the verifiable credential.
TODO: In a followup revision, we may specify an equivalent endpoint for [[RFC6962]] section 4.7.
The Retrieve Entry+Merkle Audit Proof endpoint is described in RFC6962.
extra_data: We extend the entry types to include verifiable credentials. In the case of a VC type, extra_data contains the verifiable credential.
We extend the LogEntryType enumeration described in RFC6962 to include vc_entry. The MerkleTree structure described in RFC6962 is extended to include a vc_entry case. In the case of vc_entry, the signed_entry field of the Merkle Tree contains a Verifiable Credential.
Please refer to [[RFC6962]] section 5.3 and section 5.4.
Security considerations are provided in conformance with [[RFC3552]] section 5 and [[DID-CORE]].
As Orb is an extension of [[SIDETREE]], we inherit the security properties of self-certifying DIDs and the Sidetree operation chains:
As Orb is a DID method based on gossip propagation, each Orb Server holds a subset of the overall transaction graph.
The following table summarizes the security considerations required by [[RFC3552]].
Attack | In-Scope | Susceptible | Mitigations and Notes |
---|---|---|---|
Eavesdropping | No | No | |
Replay | Yes | No | |
Message Insertion | Yes | No | |
Deletion | Yes | Mitigated | |
Modification | Yes | No | |
Man-in-the-Middle | Yes | Mitigated | |
Denial of Service | Yes | Mitigated | |
Please refer to [[DID-CORE]] Privacy Considerations. The following table summarizes privacy considerations from [[RFC6973]] section 5:
Threat | Comments |
---|---|
Surveillance | Orb creates a decentralized and replicated VDR. Interconnected systems should be expected to observe DID update operations. For DID resolution, the opportunities for surveillance can be mitigated where a relying party (RP) is running their own highly interconnected server. The usage of batches also brings advantages as operations from many DIDs become intermingled into files that are being requested. A Witness Server may also have surveillance opportunities when other systems inquire for the latest transaction for a particular DID. These opportunities are mitigated for highly interconnected servers that already have confidence in having the latest transactions, either from propagations or from monitoring and replicating Witness ledgers. |
Stored data compromise | Orb creates a decentralized and replicated VDR. Interconnected systems should be expected to observe DID update operations. |
Unsolicited traffic | The DID controller is responsible for choosing which endpoints to include within their DIDs. |
Misattribution | Orb uses self-certifying DIDs - the DID controller is fully responsible for the changes to their DIDs. |
Correlation | Orb DID Suffixes are not inherently correlatable since the Unique Suffix is bound to the DID's initial state including inception keys. Please refer to [[DID-CORE]] for additional guidance. |
Identification | Orb DID Suffixes are not inherently identifiable since the Unique Suffix is bound to the DID's initial state including inception keys. Please refer to [[DID-CORE]] for additional guidance. |
Secondary use | The DID operations applied by a DID controller map to their usage of constructing a DID document. |
Disclosure | The DID controller is responsible for the operations applied to their DID documents. |
Exclusion | The DID method specifies a decentralized and replicable VDR. Increasing the connections between systems and using a replicable VDR strategy mitigates against exclusion. |
As Orb enables Servers to interconnect as much as they choose and is agnostic to underlying ledgers, the availability of data can vary based on those choices.