Technical Specifications

Guiding Principles

Locky's security model is designed around a few guiding principles:

  1. Separation of Powers: Whenever possible, don't give one server or engineer the power to compromise the system. Use cryptography to spread out responsibility and avoid single points of failure.

  2. Long-term Security: Assume all communication between servers is being recorded by an adversary. Data encrypted today should continue to be secure even as technologies to crack these theoretical recordings advance.

  3. Use Standards: Cryptographic algorithms need years of testing by expert cryptographers to be secure. Use what has already been built and tested.

Algorithm Selection

Locky uses NIST ML-KEM-768 + AES-256-GCM for data encryption throughout. AuthN/Z is performed using a combination of:

  1. TLS with ECDHE-ECDSA-AES256-GCM-SHA384

  2. TLS with ECDHE-ECDSA-AES128-GCM-SHA256

  3. Internally generated 256-bit uniformly random keys

How Locky Works Inside

What follows is a discussion of Locky's internals. To learn how to use Locky, please refer to our Rust SDK.

Let's first go over some basics. There are two 'levels' of keys inside Locky:

  1. Root Keys

  2. User Keys

User keys are what Locky's customers store in Locky. Users retrieve them from Locky and use them to encrypt and decrypt data within the user's infrastructure. Every user key is 256 bits. Root keys are used by Locky to encrypt user keys when storing them internally.

+------------------------+
|        Root Key        |
|    +--------------+    |
|    |   User Key   |    |
|    +--------------+    |
+------------------------+

Root keys never leave the Locky ecosystem. In fact, they never exist fully assembled; instead, they are split into pieces, or 'shares', which are in turn stored across multiple servers in different databases.

 _______________   _______________   _______________
|               \ \               \ \               |
|               / /               / /               |
|    Share 1    \ \    Share 2    \ \    Share 3    |
|               / /               / /               |
|_______________\ \_______________\ \_______________|
                      Root Key

More details on these keys can be found in Root Key Establishment.

Server Types

Within the Locky ecosystem, there are two different types of servers:

  1. Mesh

  2. Assembly

Mesh nodes are not publicly accessible. Within Locky, they are responsible for:

  • Storing a single share of a root key.

  • Participating in a multiparty computation algorithm with other mesh nodes to establish new root keys.

  • Publishing the public key of a root key to Assembly nodes.

  • Responding to (and verifying) Assembly nodes' requests to partially decrypt a user key that has been encrypted with a root key.

Assembly nodes are responsible for:

  • Hosting the user-facing Locky gRPC API.

  • Performing AuthN and AuthZ for gRPC API requests.

  • Managing a distributed database of User Credentials and user keys encrypted with root keys.

  • Requesting partial decryptions from individual mesh nodes.

  • Assembling partial decryptions and transferring the final result to the user.

(User) --> Assembly              Mesh
                                  |
           Assembly              Mesh
                        ===>      |
           Assembly     all      Mesh
                        ===>      |
           Assembly              Mesh
                                  |
           Assembly              Mesh

Communication Between Mesh Nodes

Mesh nodes are connected to all other mesh nodes, forming a fully connected network. The following protocol layer stack is in effect between mesh nodes:

  1. Mesh nodes connect to each other using TCP

  2. TLS 1.3 is used for both client and server authN/Z as well as an initial layer of encryption

    1. Locky manages its own CA to provide these certificates

  3. Once the TLS handshake is complete, ML-KEM-768 with AES-256-GCM encapsulates messages to give the communication quantum-resistance.

    1. The client initiates by sending a 1-byte header followed by a fresh ML-KEM-768 encapsulation key.

    2. The server responds with a fresh ML-KEM-768 ciphertext

    3. The client and server privately derive the shared secret and use it as the key for AES-256-GCM

    4. Each subsequent message consists of a 4-byte size, a 12-byte nonce (unique per message), and an AES-256-GCM encrypted payload using the shared secret as the key.
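The framing in step 4 can be sketched as a minimal encoder/decoder. This is illustrative only: it assumes the 4-byte size is big-endian and covers the nonce plus ciphertext, neither of which the description above pins down, and the AES-256-GCM encryption itself is elided (`ciphertext` is treated as opaque bytes).

```rust
// Sketch of the post-handshake message framing: a 4-byte size, a 12-byte
// per-message nonce, then the AES-256-GCM ciphertext. Big-endian size and
// size-covers-nonce-plus-ciphertext are assumptions, not spec.

fn frame_message(nonce: [u8; 12], ciphertext: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(4 + 12 + ciphertext.len());
    // size field covers nonce + ciphertext (an assumption)
    out.extend_from_slice(&((12 + ciphertext.len()) as u32).to_be_bytes());
    out.extend_from_slice(&nonce);
    out.extend_from_slice(ciphertext);
    out
}

fn parse_message(buf: &[u8]) -> Option<([u8; 12], &[u8])> {
    if buf.len() < 16 {
        return None; // too short to hold size + nonce
    }
    let len = u32::from_be_bytes(buf[0..4].try_into().ok()?) as usize;
    if len < 12 || buf.len() < 4 + len {
        return None; // truncated or malformed frame
    }
    let nonce: [u8; 12] = buf[4..16].try_into().ok()?;
    Some((nonce, &buf[16..4 + len]))
}

fn main() {
    let framed = frame_message([7u8; 12], b"encrypted payload");
    let (nonce, ct) = parse_message(&framed).unwrap();
    assert_eq!(nonce, [7u8; 12]);
    assert_eq!(ct, &b"encrypted payload"[..]);
}
```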

Once all handshakes are complete, mesh nodes participate in an MPC algorithm with one another to establish root keys.

Root Key Establishment

Root keys use ML-KEM-768 adapted for a multiparty setting.

Root keys use a t-of-n threshold scheme, which requires that a minimum of t+1 root key shares participate in a decryption. However, all n shares must participate in the root key establishment protocol.

Upon deployment, each mesh node is allocated a unique index starting from '1'. These indexes are used as the cryptographic indices for Shamir's Secret Sharing. Each node knows every other node's unique index, and this index is validated against the subjectAltName during the earlier TLS handshake.

Academic Threat Modeling

Locky's design follows a slightly more secure variation of the "honest but curious" academic threat model. In the "honest but curious" model, root key shareholders are trusted to adhere to the protocol exactly, so root keys remain secure even in the presence of a theoretical adversary that can plainly see all secrets of at most t nodes. Locky adds an additional layer of security, staying secure even if at most t mesh nodes have been 'silently' compromised. That is, they appear to be following the communication protocol, but deliberately choose key material that would compromise security.

Real-world Threat Modeling

Translated to the real world, this means that if a few trusted employees within Locky (or within datacenters) get hacked or go rogue, they still don't have the power to reveal user secrets or hold Locky hostage. Say a foreign adversary with deep pockets bribes or coerces a trusted employee to compromise hardware in a difficult-to-detect manner. Locky stays secure even in the face of these nation-state-level threats. This provides a strong defense against attacks like the LastPass incident, wherein a single trusted employee was breached, leading to the total compromise of their infrastructure.

It is worth noting that Locky does not attempt to guarantee robustness in the presence of a compromised mesh node. That is, if t or fewer mesh nodes are compromised and no longer following the protocol, the attacker would not be able to compromise any secrets, but they COULD prevent the protocol from completing successfully. That is a bit of a worst-case scenario, but in the unlikely event it occurs, Locky engineers can detect which nodes have been compromised through traditional breach detection techniques and simply point the remaining healthy nodes away from the compromised ones.

Protocol Overview

The protocol overview is written assuming expert-level familiarity with ML-KEM, Shamir's Secret Sharing, and polynomial rings. If you don't have experience in these topics, we recommend some outside study first.

There are 3 phases of the root key establishment protocol:

  1. Secret Key Generation: A t-of-n secret is generated and each node receives a share. If at least t mesh nodes collaborate, they can utilize this root secret key.

  2. Public Key Generation: A public key for the t-of-n secret is generated.

  3. Challenge Decryption: The public key and secret shares are tested to ensure nodes correctly followed the above steps.

At the conclusion of the protocol, mesh nodes publish the public key of the new root key to assembly nodes and consider the key ready for use.

Secret Key Generation

There are two message types sent during the secret key generation phase: SHARE_COMMITMENT and SHARE.

SHARE_COMMITMENT consists of a 256-bit SHA-3 hash of the following SHARE message. Nodes do not send any SHARE messages until they have received SHARE_COMMITMENT messages from all other nodes.

Any mesh node can seed a root key generation protocol by sending the first SHARE_COMMITMENT to the other nodes. Other nodes then follow with their SHARE_COMMITMENT messages. To create a SHARE_COMMITMENT message, nodes generate a:

  1. 256-bit uniformly random 'A seed' contribution ρ_i

  2. Binomially distributed ML-KEM-768 secret key contribution ss

    1. This is a [[u16; 256]; 3] where every value is drawn from {0, 1, 2, 3327, 3328}, binomially distributed and centered around 0. This is described in SamplePolyCBD_η of the ML-KEM spec.

  3. n 256-bit uniformly random commitment keys commit_rand

  4. t uniformly random ML-KEM-768 polynomials

    1. Each is a [[(0..3329); 256]; 3], aka [Z_3329^256; 3]
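The binomial sampling in step 2 can be sketched for a single coefficient. In SamplePolyCBD with η = 2, each coefficient is the difference of two 2-bit Hamming weights, reduced into Z_3329; in the real algorithm the bits come from a SHAKE-based PRF, while here they are simply passed in.

```rust
// Centered-binomial sampling (η = 2) for one coefficient, as in
// SamplePolyCBD of FIPS 203: output (bits[0]+bits[1]) - (bits[2]+bits[3]),
// reduced into Z_3329. In the spec the bits come from a SHAKE-based PRF;
// here they are explicit inputs to keep the sketch self-contained.
const Q: u16 = 3329;

fn cbd_eta2(bits: [u8; 4]) -> u16 {
    let a = (bits[0] + bits[1]) as i32;
    let b = (bits[2] + bits[3]) as i32;
    // a - b is in {-2..2}; shift into Z_3329
    (((a - b) + Q as i32) as u16) % Q
}

fn main() {
    // every possible 4-bit input lands in {0, 1, 2, 3327, 3328}
    for n in 0u8..16 {
        let bits = [n & 1, (n >> 1) & 1, (n >> 2) & 1, (n >> 3) & 1];
        assert!(matches!(cbd_eta2(bits), 0 | 1 | 2 | 3327 | 3328));
    }
}
```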

Each node then performs Shamir's Secret Sharing over the polynomial ring, using its secret key contribution ss at index 0:

  1. Each node first generates a random share polynomial.

    1. The t uniformly random [[(0..3329); 256]; 3] values => c_p are used as coefficients, where 1 <= p < t+1.

  2. Each node then calculates s_i <= ss + sum(c_p * i^p for p in 1..t+1), creating a share s_i for every mesh node i (including themselves).

Polynomial ring multiplication is described in FIPS-203 section 4.3.1: Multiplication in the NTT domain.
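The sharing step above can be sketched on a single Z_3329 coefficient (the real protocol applies it to every coefficient of the secret, with ring multiplication in the NTT domain). The threshold, indices, and values below are illustrative; the Lagrange reconstruction at index 0 is included only to show that any t+1 shares recover ss.

```rust
// Toy Shamir sharing of one Z_3329 coefficient. The real protocol shares
// a full [Z_3329^256; 3] secret; this sketch uses a single scalar.
const Q: u64 = 3329; // the ML-KEM prime

// Evaluate f(x) = ss + c_1*x + ... + c_t*x^t mod q: node x's share.
fn share_for(ss: u64, coeffs: &[u64], x: u64) -> u64 {
    let mut acc = ss % Q;
    let mut xp = 1u64;
    for &c in coeffs {
        xp = xp * x % Q;
        acc = (acc + c * xp) % Q;
    }
    acc
}

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut r = 1u64;
    b %= Q;
    while e > 0 {
        if e & 1 == 1 { r = r * b % Q; }
        b = b * b % Q;
        e >>= 1;
    }
    r
}

// Lagrange coefficient at 0 for index i among participant indices `ids`.
fn lagrange_at_zero(i: u64, ids: &[u64]) -> u64 {
    let mut num = 1u64;
    let mut den = 1u64;
    for &j in ids {
        if j == i { continue; }
        num = num * (Q - j % Q) % Q;       // (0 - j) mod q
        den = den * ((Q + i - j) % Q) % Q; // (i - j) mod q
    }
    num * pow_mod(den, Q - 2) % Q // q is prime, so den^(q-2) = den^-1
}

fn main() {
    // ss = 1234, t = 2 (two random coefficients), nodes at indices 1..=5
    let (ss, coeffs) = (1234u64, [17u64, 42]);
    let shares: Vec<u64> = (1..=5u64).map(|x| share_for(ss, &coeffs, x)).collect();
    // any t+1 = 3 shares reconstruct ss at x = 0
    let subset = [1u64, 3, 5];
    let rec: u64 = subset.iter()
        .map(|&i| shares[(i - 1) as usize] * lagrange_at_zero(i, &subset) % Q)
        .sum::<u64>() % Q;
    assert_eq!(rec, ss);
}
```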

Each node assembles a SHARE message for each other node, but does not send it yet:

SHARE {
    // uniformly random commitment key
    // unique per recipient
    commit_rand: [u8; 32],
    // Sender's contribution to 'A's randomness
    // The same for every recipient
    ρ: [u8; 32],
    // the actual share for the recipient
    s_i: [[u16; N]; K],
}

Each node then generates a SHA-3 hash over this message and puts it in the SHARE_COMMITMENT message:

SHARE_COMMITMENT {
    commitment: [u8; 32]
}

Each node first sends its SHARE_COMMITMENT to each other node.

Once a node receives a SHARE_COMMITMENT from all other nodes, it then sends its SHARE message to all nodes.
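The commit-then-reveal ordering above can be sketched as follows. Note that std's `DefaultHasher` is a NON-cryptographic placeholder standing in for the 256-bit SHA-3 hash the protocol actually uses; it is here only to keep the example dependency-free.

```rust
// Commit-then-reveal sketch. The real protocol hashes the serialized SHARE
// message with SHA-3; DefaultHasher below is a non-cryptographic stand-in.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Hash, Clone)]
struct Share {
    commit_rand: [u8; 32],
    rho: [u8; 32],  // the sender's ρ contribution
    s_i: Vec<u16>,  // stands in for [[u16; 256]; 3]
}

fn commitment(share: &Share) -> u64 {
    let mut h = DefaultHasher::new();
    share.hash(&mut h);
    h.finish()
}

// Receiver-side check: the SHARE revealed later must match the
// SHARE_COMMITMENT received first.
fn verify(share: &Share, committed: u64) -> bool {
    commitment(share) == committed
}

fn main() {
    let share = Share { commit_rand: [1; 32], rho: [2; 32], s_i: vec![5, 6, 7] };
    let c = commitment(&share); // sent first, as SHARE_COMMITMENT
    assert!(verify(&share, c)); // later, on SHARE receipt
    let mut tampered = share.clone();
    tampered.s_i[0] ^= 1; // any change to the revealed SHARE is detected
    assert!(!verify(&tampered, c));
}
```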

Upon receipt of all SHARE messages, each node:

  1. validates the commitment it received earlier

  2. sums the received s_i shares to create a new s <= sum(s_i)

  3. performs a bitwise XOR of every ρ it received

Nodes should now have a t-of-n secret key share s and agree on a common ρ.
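The aggregation step can be sketched as follows; a single u16 coefficient stands in for the full [[u16; 256]; 3] share. Because XOR is order-independent and every node folds the same set of ρ contributions, all nodes arrive at the same ρ.

```rust
// Sketch of the aggregation step: XOR the ρ_i contributions together and
// sum the received share coefficients mod 3329. One u16 coefficient stands
// in for the full [[u16; 256]; 3] share.
const Q: u32 = 3329;

fn combine_rho(rhos: &[[u8; 32]]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for r in rhos {
        for (o, b) in out.iter_mut().zip(r) {
            *o ^= b;
        }
    }
    out
}

fn combine_shares(shares: &[u16]) -> u16 {
    (shares.iter().map(|&s| s as u32).sum::<u32>() % Q) as u16
}

fn main() {
    let rhos = [[0xAAu8; 32], [0x0F; 32], [0x55; 32]];
    // XOR is order-independent, so every node derives the same ρ
    let mut reversed = rhos;
    reversed.reverse();
    assert_eq!(combine_rho(&rhos), combine_rho(&reversed));
    // share coefficients sum mod q: 3000 + 3000 + 500 = 6500 ≡ 3171
    assert_eq!(combine_shares(&[3000, 3000, 500]), 3171);
}
```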

Public Key Generation

Mesh nodes must now collectively generate a public key with suitable randomness. Again, there are two messages exchanged during this phase: PUBLIC_KEY_COMMITMENT and PUBLIC_KEY.

Each node generates:

  1. Fresh binomially distributed random e: [[u16; 256]; 3] as described in SamplePolyCBD_η in FIPS 203

  2. A: [[[u16; 256]; 3]; 3] expanded from ρ using SampleNTT and an XOF, as described in K-PKE.KeyGen() in FIPS 203

  3. pk_i <= A * s + e as described in K-PKE.KeyGen()

PUBLIC_KEY {
    commit_rand: [u8; 32],
    pk_i: [[u16; 256]; 3],
}

Each node then hashes its PUBLIC_KEY with SHA-3 to create a PUBLIC_KEY_COMMITMENT:

PUBLIC_KEY_COMMITMENT {
    commitment: [u8; 32]
}

Each node then sends its PUBLIC_KEY_COMMITMENT to all other nodes. Once commitments have been received from all nodes, each node sends its PUBLIC_KEY to all other nodes.

Once a node receives a PUBLIC_KEY from all other nodes, the final public key is created by summing all received pk_i's into a final pk.

pk should be the same across every node.
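The reason the summed key is valid can be checked on toy scalars: because pk_i = A * s_i + e_i is linear, sum(pk_i) = A * sum(s_i) + sum(e_i), i.e. the aggregate public key is a well-formed key for the aggregate secret with aggregate noise. Real values are ring elements, not the scalars used here.

```rust
// Toy scalar check (mod 3329) of the linearity that makes summing pk_i
// work: sum(A*s_i + e_i) = A*sum(s_i) + sum(e_i). Real pk_i values are
// ring elements; scalars are used here for brevity.
const Q: u64 = 3329;

fn pk_contribution(a: u64, s_i: u64, e_i: u64) -> u64 {
    (a * s_i + e_i) % Q
}

fn main() {
    let a = 1234u64;
    let (s, e) = ([5u64, 11, 2], [1u64, 3328, 2]); // 3328 ≡ -1 mod q
    let summed_pk: u64 = s.iter().zip(&e)
        .map(|(&si, &ei)| pk_contribution(a, si, ei))
        .sum::<u64>() % Q;
    // the same value computed directly from the aggregate secret and noise
    let direct = (a * s.iter().sum::<u64>() + e.iter().sum::<u64>()) % Q;
    assert_eq!(summed_pk, direct);
}
```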

Challenge Decryption

Mesh nodes must now check that the protocol succeeded and decryption can function properly.

Each node chooses a uniformly random challenge plaintext pt <= [u8; 32] and encrypts it for pk by calculating (v, u) = (pk * r + pt*q/2 + e2, A * r + e1) with fresh r, e1, and e2, as described in K-PKE.Encrypt. We do not perform Compress or Decompress, as key and ciphertext sizes do not need to be optimized in this application.

Each node then sends to every other node:

CHALLENGE_CIPHERTEXT {
    u: [[u16; 256]; 3]
}

Upon receipt of a challenge ciphertext, nodes compute h = (s_i * lagrange(i, 1..n+1)) * u + e, where e is a fresh, binomially distributed random [u16; 256] per SamplePolyCBD_η, and lagrange(i, 1..n+1) is the Lagrange coefficient with all mesh nodes participating in decryption.

They then send back to the sender:

CHALLENGE_PARTIAL_DECRYPTION {
    h: [u16; 256]
}

Once a mesh node receives a CHALLENGE_PARTIAL_DECRYPTION from every other node, it computes w <= sum(h_i) and performs pt <= ByteEncode(v - w) as described in K-PKE.Decrypt.
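The recover-by-rounding step can be sketched on a single coefficient: each plaintext bit was placed at roughly q/2 during encryption, small binomial noise accumulates through the partial decryptions, and the bit is read back by checking whether the combined value lies closer to q/2 than to 0. The scalars and noise magnitudes below are illustrative.

```rust
// Single-coefficient sketch of the challenge round trip: a bit is encoded
// as bit * round(q/2), small centered noise accumulates, and the bit is
// recovered by rounding. Real plaintexts carry one bit per coefficient.
const Q: i64 = 3329;

fn encode_bit(bit: u8) -> i64 {
    bit as i64 * ((Q + 1) / 2) // round(q/2) = 1665
}

// 1 if the value (mod q) is closer to q/2 than to 0, else 0
fn decode_bit(w: i64) -> u8 {
    let w = w.rem_euclid(Q);
    let dist_to_zero = w.min(Q - w);
    if dist_to_zero > Q / 4 { 1 } else { 0 }
}

fn main() {
    // small accumulated noise does not flip the decoded bit
    for bit in [0u8, 1] {
        for noise in [-7i64, 0, 9] {
            assert_eq!(decode_bit(encode_bit(bit) + noise), bit);
        }
    }
}
```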

If the pt matches the pt it generated in the first step, it knows the public key generation succeeded and sends a READY message to all other nodes. READY is an empty message.

Once a node receives a READY from every other node, it considers the protocol completed successfully, marks the key as ready for usage by Assembly nodes, and publishes the pk to any Assembly nodes that request it.

User Key Storage

256-bit user keys are generated randomly by Assembly nodes when requested by a user, encrypted with a root public key, and stored in a distributed database accessible only by Assembly nodes.

...to be continued
