Technical Specifications
Guiding Principles
Locky follows a few guiding principles in the design of its security model:
Separation of Powers: Whenever possible, don't give one server or engineer the power to compromise the system. Use cryptography to spread out responsibility and avoid single points of failure.
Long-term Security: Assume all communication between servers is being recorded by an adversary. Data encrypted today should continue to be secure even as technologies to crack these theoretical recordings advance.
Use Standards: Cryptographic algorithms need years of testing by expert cryptographers to be secure. Use what has already been built and tested.
Algorithm Selection
Locky uses NIST ML-KEM-768 + AES-256-GCM for data encryption throughout. AuthN/Z is done using a combination of:
TLS with ECDHE-ECDSA-AES256-GCM-SHA384
TLS with ECDHE-ECDSA-AES128-GCM-SHA256
Internally generated 256-bit uniformly random keys
How Locky Works Inside
What follows is a discussion of Locky's internals. To understand how to use Locky, please refer to our Rust SDK.
Let's first go over some basics. There are two 'levels' of keys inside Locky:
Root Keys
User Keys
User keys are what Locky's customers store in Locky. Users retrieve them from Locky and use them to encrypt and decrypt data within the user's infrastructure. Every user key is 256 bits. Root keys are used by Locky to encrypt user keys when storing them internally.
Root keys never leave the Locky ecosystem. In fact, they never exist fully assembled; instead, they are split into pieces or 'shares' which are in turn stored across multiple servers in different databases.
More details on these keys can be found in Root Key Establishment.
Server Types
Within the Locky ecosystem, there are two different types of servers:
Mesh
Assembly
Mesh nodes are not publicly accessible. Within Locky, they are responsible for:
Storing a single share of a root key.
Participating in a multiparty computation algorithm with other mesh nodes to establish new root keys
Publishing the public key of a root key to Assembly nodes
Responding to (and verifying) Assembly nodes' requests to partially decrypt a user key that has been encrypted with a root key
Assembly nodes are responsible for:
Hosting the user-facing Locky gRPC API
Performing AuthN and AuthZ for gRPC API requests
Managing a distributed database of User Credentials and user keys encrypted with root keys
Requesting partial decryptions from individual mesh nodes
Assembling partial decryptions and transferring the final result to the user
Communication Between Mesh Nodes
Each mesh node is connected to every other mesh node, forming a fully connected network. The following protocol layer stack is in effect between mesh nodes:
Mesh nodes connect to each other using TCP
TLS 1.3 is used for both client and server authN/Z as well as an initial layer of encryption
Locky manages its own CA to provide these certificates
Once the TLS handshake is complete, ML-KEM-768 with AES-256-GCM encapsulates messages to give the communication quantum-resistance.
The client initiates by sending a 1-byte header followed by a fresh ML-KEM-768 encapsulation key.
The server responds with a fresh ML-KEM-768 ciphertext
The client and server privately derive the shared secret and use it as the key for AES-256-GCM
Each subsequent message consists of a 4-byte size, a 12-byte nonce (unique per message), and an AES-256-GCM encrypted payload using the shared secret as the key.
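To make this framing concrete, here is a minimal sketch of how a frame could be built and parsed. It covers only the byte layout described above; the AES-256-GCM encryption itself happens elsewhere, and the big-endian size prefix is an assumption rather than something this document specifies.

```rust
// Frame layout (sketch): [4-byte size][12-byte nonce][AES-256-GCM ciphertext]

/// Builds a frame around an already-encrypted AES-256-GCM payload.
fn frame_message(nonce: &[u8; 12], ciphertext: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + 12 + ciphertext.len());
    // Size prefix; big-endian is an assumption, only the 4-byte width is specified.
    frame.extend_from_slice(&(ciphertext.len() as u32).to_be_bytes());
    frame.extend_from_slice(nonce);
    frame.extend_from_slice(ciphertext);
    frame
}

/// Splits a received frame back into its (nonce, ciphertext) parts.
fn parse_frame(frame: &[u8]) -> Option<(&[u8; 12], &[u8])> {
    if frame.len() < 16 {
        return None;
    }
    let size = u32::from_be_bytes(frame[0..4].try_into().ok()?) as usize;
    let nonce: &[u8; 12] = frame[4..16].try_into().ok()?;
    let ciphertext = &frame[16..];
    if ciphertext.len() != size {
        return None;
    }
    Some((nonce, ciphertext))
}
```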
Once all handshakes are complete, mesh nodes participate in an MPC algorithm with one another to establish root keys.
Root Key Establishment
Root keys use ML-KEM-768 adapted for a multiparty setting.
Root keys use a t-of-n threshold scheme, which requires that at least t+1 root key shares participate in a decryption. However, all n shares must participate in the root key establishment protocol.
Upon deployment, each mesh node is allocated a unique index starting from '1'. Indexes are used as the cryptographic indices for Shamir's Secret Sharing. Each node knows every other node's unique index, and this index is validated against the subjectAltName during the earlier TLS handshake.
Academic Threat Modeling
Locky's design follows a slightly more secure variation of the "honest but curious" academic threat model. In the "honest but curious" model, root key shareholders are trusted to adhere to the protocol exactly, so root keys remain secure even in the presence of a theoretical adversary that can plainly see all secrets of at most t nodes. Locky adds an additional layer of security, staying secure even if at most t mesh nodes have been 'silently' compromised. That is, they appear to be following the communication protocol, but deliberately choose key material that would compromise security.
Real-world Threat Modeling
Translating this to the real world: if a few trusted employees within Locky (or within datacenters) get hacked or go rogue, they still don't have the power to reveal user secrets or hold Locky hostage. Say a foreign adversary with deep pockets bribes or coerces a trusted employee into compromising hardware in a difficult-to-detect manner. Locky stays secure even in the face of these nation-state-level threats. This provides a strong defense against attacks like the LastPass incident, in which the breach of a single trusted employee led to the total compromise of their infrastructure.
Worth noting, Locky does not attempt to guarantee robustness in the presence of a compromised mesh node. That is, if t or fewer mesh nodes are compromised and no longer following the protocol, the attacker would not be able to compromise any secrets, but they COULD prevent the protocol from completing successfully. That's a worst-case scenario, but in the unlikely event it occurs, Locky engineers can detect which nodes have been compromised through traditional breach detection techniques and simply point the remaining healthy nodes away from the compromised ones.
Protocol Overview
The protocol overview is written assuming expert-level familiarity with ML-KEM, Shamir's Secret Sharing, and polynomial rings. If you don't have experience with these topics, we recommend some outside study first.
There are 3 phases of the root key establishment protocol:
Secret Key Generation: A t-of-n secret is generated and each node receives a share. If at least t+1 mesh nodes collaborate, they can utilize this root secret key.
Public Key Generation: A public key for the t-of-n secret is generated.
Challenge Decryption: The public key and secret shares are tested to ensure nodes correctly followed the above steps.
At the conclusion of the protocol, mesh nodes publish the public key of the new root key to assembly nodes and consider the key ready for use.
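As a quick reference, the sketch below gathers the protocol messages named in the sections that follow into a single Rust enum. The payload types are illustrative placeholders; the actual wire encodings are not specified in this document.

```rust
// Non-normative overview of the mesh-to-mesh messages used during root key
// establishment. Payload representations here are placeholders.
enum MeshMessage {
    // Secret Key Generation phase
    ShareCommitment { sha3_hash: [u8; 32] },
    Share { payload: Vec<u8> },
    // Public Key Generation phase
    PublicKeyCommitment { sha3_hash: [u8; 32] },
    PublicKey { payload: Vec<u8> },
    // Challenge Decryption phase. (The message carrying the challenge
    // ciphertext itself is not given a name in this document.)
    ChallengePartialDecryption { payload: Vec<u8> },
    // Empty message signalling that a node considers the protocol complete.
    Ready,
}
```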
Secret Key Generation
There are two message types sent during the secret key generation phase: SHARE_COMMITMENT and SHARE.
SHARE_COMMITMENT consists of a 256-bit SHA-3 hash of the following SHARE message. Nodes do not send any SHARE messages until they have received SHARE_COMMITMENTS from all other nodes.
Any mesh node can seed a root key generation protocol by sending the first SHARE_COMMITMENT to the other nodes. Other nodes then follow with their SHARE_COMMITMENT messages. To create a SHARE_COMMITMENT message, each node generates:
A 256-bit uniformly random 'A seed' contribution ρ_i
A binomially distributed ML-KEM-768 secret key contribution ss. This is a [[u16; 256]; 3] where every value is drawn from {0, 1, 2, 3327, 3328}, binomially centered around 0, as described in SamplePolyCBD_η1 of the ML-KEM spec (see the sampling sketch after this list).
n 256-bit uniformly random commitment keys commit_rand
t uniformly random ML-KEM-768 polynomial vectors. Each is a [[(0..3329); 256]; 3], aka [Z_3329^256; 3]
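For illustration, here is a hedged sketch of the centered binomial sampling used for ss, assuming η1 = 2 (as in ML-KEM-768) and the little-endian bit order of BytesToBits in FIPS 203. It produces one of the three [u16; 256] polynomials from 128 bytes of uniform randomness.

```rust
const Q: u16 = 3329; // ML-KEM modulus

/// SamplePolyCBD with eta = 2: every coefficient is the difference of two
/// 2-bit sums, giving values in {0, 1, 2, 3327, 3328}, i.e. {0, +-1, +-2} mod q.
fn sample_poly_cbd_eta2(randomness: &[u8; 128]) -> [u16; 256] {
    // k-th bit of the byte stream, little-endian within each byte.
    let bit = |k: usize| -> u16 { ((randomness[k / 8] >> (k % 8)) & 1) as u16 };
    let mut poly = [0u16; 256];
    for i in 0..256 {
        let base = 4 * i; // 4 bits consumed per coefficient
        let x = bit(base) + bit(base + 1);
        let y = bit(base + 2) + bit(base + 3);
        poly[i] = (x + Q - y) % Q; // x - y, kept non-negative mod q
    }
    poly
}
```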
Each node then performs Shamir's Secret Sharing over the polynomial ring, using its secret key contribution ss as the value at index 0:
Each node first generates a random share polynomial, using the t uniformly random [[(0..3329); 256]; 3] values as coefficients c_p, where 1 <= p <= t.
Each node then calculates s_i <= ss + sum(c_p * i^p for p in 1..t+1), creating a share s_i for every mesh node i (including itself). Polynomial ring multiplication is described in FIPS-203 section 4.3.1: Multiplication in the NTT domain.
Each node assembles a SHARE message for each other node, but does not send it yet.
Each node then generates a SHA-3 hash over this message and puts it in the SHARE_COMMITMENT message (a commitment sketch follows this list).
Each node first sends its SHARE_COMMITMENT to each other node.
Once a node receives a SHARE_COMMITMENT from all other nodes, it sends its SHARE messages to all nodes.
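A minimal sketch of the commit-then-reveal step, assuming the sha3 crate and a SHARE message that has already been serialized to bytes (the serialization format is not specified here):

```rust
use sha3::{Digest, Sha3_256};

/// Computes the 256-bit SHARE_COMMITMENT over an already-serialized SHARE message.
fn share_commitment(serialized_share: &[u8]) -> [u8; 32] {
    let mut hasher = Sha3_256::new();
    hasher.update(serialized_share);
    let digest = hasher.finalize();
    let mut commitment = [0u8; 32];
    commitment.copy_from_slice(&digest);
    commitment
}

/// Validates a SHARE message received later against the SHARE_COMMITMENT that
/// arrived before any SHARE messages were sent.
fn validate_share(serialized_share: &[u8], commitment: &[u8; 32]) -> bool {
    // A production implementation would use a constant-time comparison.
    share_commitment(serialized_share) == *commitment
}
```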
Upon receipt of all SHARE messages, each node:
validates the commitment it received earlier
sums the s_i messages to create a new s <= sum(s_i)
performs a bitwise XOR of every ρ_i it received
Nodes should now have a t-of-n secret key share s and agree on a common ρ.
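As a sanity check on the arithmetic, here is a hedged sketch of the share evaluation and the final combination step, shown for a single coefficient; the real protocol applies the same operations coefficient-wise across the full [[u16; 256]; 3] structures.

```rust
const Q: u64 = 3329; // ML-KEM modulus

/// Evaluates s_i = ss + sum(c_p * i^p for p in 1..t+1) mod q for node index i,
/// for one coefficient. `coeffs` holds the c_1..c_t values for that coefficient.
fn share_for_node(ss: u64, coeffs: &[u64], node_index: u64) -> u64 {
    let mut acc = ss % Q;
    let mut i_pow = 1u64;
    for &c_p in coeffs {
        i_pow = (i_pow * (node_index % Q)) % Q; // i^p
        acc = (acc + (c_p % Q) * i_pow) % Q;
    }
    acc
}

/// After the exchange, a node's final share is the sum of every s_i it
/// received, and rho is the XOR of every rho_i contribution.
fn combine(received_shares: &[u64], rho_contributions: &[[u8; 32]]) -> (u64, [u8; 32]) {
    let s = received_shares.iter().fold(0u64, |acc, &x| (acc + x) % Q);
    let mut rho = [0u8; 32];
    for rho_i in rho_contributions {
        for (a, b) in rho.iter_mut().zip(rho_i) {
            *a ^= *b;
        }
    }
    (s, rho)
}
```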
Public Key Generation
Mesh nodes must now collectively generate a public key with suitable randomness. Again, there are two messages exchanged during this phase: PUBLIC_KEY_COMMITMENT and PUBLIC_KEY.
Each node generates:
Fresh binomially distributed random e: [[u16; 256]; 3], as described in SamplePolyCBD_η1 in FIPS-203
A: [[[u16; 256]; 3]; 3], expanded from ρ using SampleNTT and XOF as described in K-PKE.KeyGen() in FIPS 203
pk_i <= A * s + e, as described in K-PKE.KeyGen()
Each node then hashes its PUBLIC_KEY with SHA-3 to create a PUBLIC_KEY_COMMITMENT.
Each node then sends its PUBLIC_KEY_COMMITMENT to all other nodes. Once commitments have been received from all nodes, each node sends its PUBLIC_KEY to all other nodes.
Once a node receives a PUBLIC_KEY from all other nodes, the final public key is created by summing all received pk_i's into a final pk. pk should be the same across every node.
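A sketch of this aggregation, assuming each node's pk_i contribution is represented in its expanded [[u16; 256]; 3] coefficient form:

```rust
const Q: u32 = 3329; // ML-KEM modulus
type PkVector = [[u16; 256]; 3];

/// Sums all received pk_i contributions coefficient-wise mod q. Every node
/// should arrive at the same final pk.
fn aggregate_public_key(contributions: &[PkVector]) -> PkVector {
    let mut pk = [[0u16; 256]; 3];
    for pk_i in contributions {
        for (row, row_i) in pk.iter_mut().zip(pk_i) {
            for (coeff, coeff_i) in row.iter_mut().zip(row_i) {
                *coeff = ((*coeff as u32 + *coeff_i as u32) % Q) as u16;
            }
        }
    }
    pk
}
```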
Challenge Decryption
Mesh nodes must now check that the protocol succeeded and decryption can function properly.
Each node chooses a uniformly random challenge plaintext pt <= [u8; 32] and encrypts it for pk by calculating (v, u) = (pk * r + pt*q/2 + e2, A * r + e1) with fresh r, e1, and e2 as described in K-PKE.Encrypt. We do not perform Compress or Decompress, as key and ciphertext sizes do not need to be optimized in this application.
Each node then sends its challenge ciphertext (v, u) to every other node.
Upon receipt of a challenge ciphertext, nodes compute h = (s_i * lagrange(i, 1..n+1)) * u + e, where e is a fresh, binomially distributed random [u16; 256] per SamplePolyCBD_η, and lagrange(i, 1..n+1) is the Lagrange coefficient with all mesh nodes participating in decryption.
They then send h back to the sender in a CHALLENGE_PARTIAL_DECRYPTION message.
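A hedged sketch of lagrange(i, 1..n+1): the Lagrange coefficient for share index i evaluated at x = 0 over Z_3329, under the assumption that all n nodes (indices 1 through n) participate:

```rust
const Q: u64 = 3329; // ML-KEM modulus (prime)

/// Modular exponentiation by squaring.
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1u64;
    base %= Q;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % Q;
        }
        base = base * base % Q;
        exp >>= 1;
    }
    acc
}

/// Inverse mod q via Fermat's little theorem (q is prime).
fn inv_mod(x: u64) -> u64 {
    pow_mod(x, Q - 2)
}

/// lagrange(i, participants): product over j != i of j / (j - i), mod q.
/// With all n nodes participating, `participants` is simply 1..=n.
fn lagrange(i: u64, participants: &[u64]) -> u64 {
    let mut numerator = 1u64;
    let mut denominator = 1u64;
    for &j in participants {
        if j == i {
            continue;
        }
        numerator = numerator * (j % Q) % Q;
        // (j - i) mod q; node indices are far below q, so no overflow.
        denominator = denominator * ((j + Q - i) % Q) % Q;
    }
    numerator * inv_mod(denominator) % Q
}
```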
Once a mesh node receives a CHALLENGE_PARTIAL_DECRYPTION from every other node, it computes w <= sum(h_i) and performs pt <= ByteEncode(v - w) as described in K-PKE.Decrypt.
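A sketch of the final decode, assuming v - w has been reduced to a single [u16; 256] polynomial with coefficients in 0..q; each coefficient is rounded to the nearest multiple of q/2 to recover one plaintext bit, and the 256 bits are packed into 32 bytes (Compress_1 followed by ByteEncode_1 in FIPS 203 terms):

```rust
const Q: u32 = 3329; // ML-KEM modulus

/// Recovers the 32-byte challenge plaintext from the 256 coefficients of v - w.
fn decode_plaintext(v_minus_w: &[u16; 256]) -> [u8; 32] {
    let mut pt = [0u8; 32];
    for (idx, &coeff) in v_minus_w.iter().enumerate() {
        // Compress_1: round(2 * coeff / q) mod 2 -- 1 if coeff is closer to
        // q/2 than to 0 or q.
        let bit = ((2 * coeff as u32 + Q / 2) / Q) % 2;
        if bit == 1 {
            pt[idx / 8] |= 1u8 << (idx % 8);
        }
    }
    pt
}
```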
If the pt matches the pt it generated in the first step, it knows the public key generation succeeded and sends a READY message to all other nodes. READY is an empty message.
Once a node receives a READY from every other node, it considers the protocol completed successfully, marks the key as ready for use by Assembly nodes, and publishes the pk to any Assembly nodes that request it.
User Key Storage
256-bit user keys are generated randomly by Assembly nodes when requested by a user, encrypted with a root public key, and stored in a distributed database accessible only by Assembly nodes.
...to be continued