Security
Threat model
The attacker controls the proof bytes, the public-input bytes, and sometimes the verifying-key bytes. Their goals, in order of how bad they are:
- Forgery: get `verify` to return `Ok(true)` for a proof they didn't legitimately generate
- Denial of service: make the verifier panic, hang, OOM, or reset the host device
- Malleability: find two different encodings of the same logical input to break identity-based invariants (nullifiers, replay tags, Merkle leaves)
zkmcu defends against all three. Same threat model across the three verifier crates (`zkmcu-verifier` for BN254, `zkmcu-verifier-bls12` for BLS12-381, `zkmcu-verifier-stark` for winterfell). Same parser shape, same DoS hardening, same strict canonical-encoding checks where they apply.
Proof-system-specific notes:
- BN254: enforces strict `Fr < r` canonical encoding, which is stricter than `substrate-bn`'s default (that one silently reduces mod `r`). Matters a lot for nullifier-style applications where non-canonical encoding is a malleability vector.
- BLS12-381: enforces that the 16-byte leading-zero padding on every `Fp` element is actually zero, not "ignored". Any non-zero bits in the padding get rejected as `Error::InvalidFp`. Without that check, an attacker could flip padding bits and the proof would still decode to the same curve point, which is a trivial malleability vector. Closed at parse time.
- STARK: enforces `MinConjecturedSecurity(95)` at the verifier level, so a prover submitting a proof with weaker options gets rejected even if the underlying crypto verifies. Prevents downgrade attacks where an attacker submits a 63-bit-secure proof in place of a 95-bit one.
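The BLS12-381 padding rule can be sketched in a few lines. This is an illustrative stand-in, not zkmcu's actual parser: it assumes each 48-byte `Fp` element travels in a 64-byte big-endian slot whose first 16 bytes must be zero, and the `InvalidFp` type here is a placeholder named after the error variant the docs mention.

```rust
/// Placeholder for zkmcu's `Error::InvalidFp` (name taken from the docs;
/// this struct itself is illustrative).
#[derive(Debug, PartialEq)]
pub struct InvalidFp;

/// Reject any Fp encoding whose 16-byte leading padding is not all zero.
/// Assumes the wire format carries a 48-byte Fp in a 64-byte slot.
pub fn check_fp_padding(slot: &[u8; 64]) -> Result<(), InvalidFp> {
    if slot[..16].iter().any(|&b| b != 0) {
        return Err(InvalidFp);
    }
    Ok(())
}
```

The point of doing this at parse time is that a flipped padding bit is rejected before any curve arithmetic runs, so the "same point, different bytes" malleability vector never reaches the application.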
What’s tested
Adversarial unit tests
41 tests under `zkmcu-verifier/tests/adversarial.rs` covering:
- Empty, truncated, and oversized inputs to every parser
- Field elements ≥ their respective moduli
- Points not on the curve (e.g. the G1 point `(1, 1)`, which fails `y² = x³ + 3`)
- Adversarial `num_ic`/`count` fields, including `u32::MAX`
- All-zero inputs (identity points: must parse, must not spuriously verify)
- Cross-vector mismatches (square proof against squares-5 VK, etc.)
- Exhaustive single-bit flips of every byte in a known-good VK, proof, and public-inputs buffer; zero mutations produce `Ok(true)`
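The shape of the exhaustive bit-flip test is simple enough to sketch. Everything here is a stand-in: `accepts` plays the role of a full parse-and-verify call, and the buffer is a toy substitute for real proof bytes.

```rust
/// Stand-in for the real verifier: accepts only the exact known-good bytes.
/// In the actual test suite this would be a full parse-and-verify call.
fn accepts(candidate: &[u8], known_good: &[u8]) -> bool {
    candidate == known_good
}

/// Flip every bit of `known_good` one at a time and count how many
/// mutated buffers the verifier still accepts. The expected count is 0.
pub fn count_accepted_single_bit_flips(known_good: &[u8]) -> usize {
    let mut accepted = 0;
    for byte_idx in 0..known_good.len() {
        for bit in 0..8 {
            let mut mutated = known_good.to_vec();
            mutated[byte_idx] ^= 1 << bit; // exactly one bit differs
            if accepts(&mutated, known_good) {
                accepted += 1;
            }
        }
    }
    accepted
}
```

For a buffer of `n` bytes this exercises all `8 × n` single-bit mutations, which is what makes the test exhaustive rather than probabilistic.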
Property-based tests
6 properties under `zkmcu-verifier/tests/properties.rs`, each running 256 generated cases per invocation via proptest:
- No random byte sequence up to 4 KB panics any of the three parsers
- If all three parsers succeed, `verify` never panics
- Random XOR masks applied to proof bytes never produce `Ok(true)`
- Random XOR masks applied to public-input bytes never produce `Ok(true)`
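The XOR-mask mutation underlying the last two properties can be sketched without proptest. This is an illustrative std-only version: the xorshift PRNG and `random_mutation` helper are mine, not zkmcu's, and exist only to show the mutation shape (the real tests use proptest's generators).

```rust
/// Minimal xorshift64 PRNG: deterministic, no external crates.
fn xorshift64(state: &mut u64) -> u64 {
    let mut x = *state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    *state = x;
    x
}

/// XOR a pseudo-random mask over a copy of `proof`, guaranteeing that
/// at least one bit actually changed (an all-zero mask is a no-op).
pub fn random_mutation(proof: &[u8], seed: u64) -> Vec<u8> {
    let mut state = seed | 1; // xorshift state must be non-zero
    let mut out = proof.to_vec();
    let mut changed = false;
    for b in out.iter_mut() {
        let mask = (xorshift64(&mut state) & 0xff) as u8;
        if mask != 0 {
            changed = true;
        }
        *b ^= mask;
    }
    if !changed && !out.is_empty() {
        out[0] ^= 1; // force a real mutation
    }
    out
}
```

The property then feeds each mutated buffer through parse-and-verify and asserts the result is never `Ok(true)`; forcing a non-trivial mask matters because an all-zero mask would trivially "verify".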
Cross-library consistency
Every committed test vector is generated by arkworks and natively verified there before it gets written to disk. The embedded path then re-verifies the same bytes with `substrate-bn`. If either library drifts from EIP-197, the test breaks immediately. See architecture for the full flow.
Known findings (fixed)
DoS via unbounded allocation
`parse_vk` and `parse_public` used to call `Vec::with_capacity(n)` where `n` came straight from untrusted input. An attacker sending `num_ic = u32::MAX` triggered a u32::MAX × 96 B ≈ 412 GB allocation request. SIGABRT on desktop, instant reset on MCU. Not great.
Patched in v0.1.0 with checked arithmetic and a buffer-length validation step before allocation. Covered by the `parse_vk_claimed_ic_count_overflows` and `parse_public_count_astronomical` tests.
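The fix pattern looks roughly like this. The function name, error variants, and the per-point size constant are illustrative (the real element size depends on the curve's encoding); only the order of operations matters: checked multiply, then compare against the bytes actually present, and only then allocate.

```rust
/// Illustrative byte length of one IC point in the VK encoding.
const IC_POINT_LEN: usize = 64;

#[derive(Debug, PartialEq)]
pub enum ParseError {
    LengthOverflow,
    Truncated,
}

/// Validate an untrusted `num_ic` against the bytes actually remaining
/// in the input buffer. Only after this succeeds is it safe to call
/// `Vec::with_capacity(n)` with the attacker-supplied count.
pub fn checked_ic_count(num_ic: u32, remaining_bytes: usize) -> Result<usize, ParseError> {
    let n = num_ic as usize;
    let needed = n
        .checked_mul(IC_POINT_LEN)
        .ok_or(ParseError::LengthOverflow)?;
    if needed > remaining_bytes {
        return Err(ParseError::Truncated);
    }
    Ok(n)
}
```

With this check in front, `num_ic = u32::MAX` fails cheaply at validation time instead of reaching the allocator.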
Fr non-canonical encoding accepted
`substrate-bn::Fr::from_slice` silently reduces 256-bit inputs mod r instead of rejecting non-canonical encodings. Pairing itself is fine (reduction preserves the pairing result), but any application that uses the raw `Fr` bytes as an identity gets malleability for free. Nullifiers, replay-protection tags, Merkle leaves, anything like that.
Patched in v0.1.0 with a strict `< r` check in `read_fr_at` before delegating to `substrate-bn`.
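A strict canonicality check is a plain big-endian comparison against the modulus. The sketch below is illustrative (the function name is mine, not zkmcu's `read_fr_at`); the constant is BN254's well-known scalar-field modulus r from EIP-197.

```rust
/// BN254 scalar-field modulus r, big-endian
/// (r = 21888242871839275222246405745257275088548364400416034343698204186575808495617).
const R_BYTES: [u8; 32] = [
    0x30, 0x64, 0x4e, 0x72, 0xe1, 0x31, 0xa0, 0x29,
    0xb8, 0x50, 0x45, 0xb6, 0x81, 0x81, 0x58, 0x5d,
    0x28, 0x33, 0xe8, 0x48, 0x79, 0xb9, 0x70, 0x91,
    0x43, 0xe1, 0xf5, 0x93, 0xf0, 0x00, 0x00, 0x01,
];

/// A 32-byte big-endian scalar is canonical iff it is strictly less than r.
/// For equal-length big-endian buffers, lexicographic slice comparison
/// matches numeric comparison, so no bignum arithmetic is needed.
pub fn fr_is_canonical(be_bytes: &[u8; 32]) -> bool {
    be_bytes[..] < R_BYTES[..]
}
```

Rejecting at this stage, before handing the bytes to `Fr::from_slice`, is what closes the "two encodings, one scalar" malleability gap.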
Timing properties
Remote-timing-oracle resistance is a property zkmcu measures, not one it tries to formally prove. See Deterministic timing for the methodology. The short version:
| Verifier | Std-dev variance (M33) | Side-channel posture |
|---|---|---|
| BN254 Groth16 | ~0.05 % | Low allocator activity, naturally tight |
| BLS12-381 Groth16 | ~0.05 % | Same as BN254 |
| STARK (TlsfHeap) | 0.08 % | Deterministic allocator brings variance to silicon floor |
| STARK (LlffHeap) | ~0.25 % | Allocator noise obscures crypto timing |
Under the recommended TlsfHeap allocator, all three verifiers hit sub-0.1 % iteration-to-iteration variance. That's below the noise floor of any non-lab-grade timing oracle (USB / BLE / network transports have millisecond-or-worse resolution). Of course this is not the same as formal constant-time execution; it's a claim about what an attacker can actually observe in a realistic deployment: indistinguishable from silicon noise. Two different properties, so be honest with yourself about which one your threat model actually needs.
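For concreteness, the variance figures in the table are relative standard deviations over repeated verify timings. A sketch of that metric (the sampling itself would come from a cycle counter on target, e.g. DWT CYCCNT on a Cortex-M33, which isn't reproduced here; the function is illustrative, not zkmcu's benchmark code):

```rust
/// Relative standard deviation of timing samples, as a percentage of
/// the mean. Samples are per-iteration durations (cycles or seconds).
pub fn rel_std_dev_percent(samples: &[f64]) -> f64 {
    let n = samples.len() as f64;
    let mean = samples.iter().sum::<f64>() / n;
    let var = samples.iter().map(|s| (s - mean).powi(2)).sum::<f64>() / n;
    100.0 * var.sqrt() / mean
}
```

A perfectly deterministic verifier scores 0 %; the table's ~0.05 % figures mean the per-iteration spread is roughly one part in two thousand of the mean runtime.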
If your application has secret data flowing into the verify code path (not the usual zkmcu use case), the picture changes. Both `substrate-bn` and `bls12_381` have scalar-dependent branches that a lab-grade attacker with full-cycle-precision measurement could exploit. zkmcu's threat model does not cover that.
Explicit out-of-scope for v0.1.0
Things that are NOT validated yet, documented openly instead of quietly skipped:
- Lab-grade constant-time execution. As above, observable-to-remote-attacker timing is in the noise floor, but full-cycle CT would require a whole-verifier audit across winterfell's internal code paths (for STARK) and `substrate-bn`/`bls12_381` (for Groth16). Acceptable for verify-only threat models where the proof and public inputs are already public. Not acceptable if secret data ever flows into the verify code path.
- Power analysis / EM leakage. Unmeasured. Requires a ChipWhisperer-class lab setup. Treated as a separate follow-up project.
- G2 subgroup membership (Groth16). Whether `substrate-bn` and `bls12_381` correctly reject G2 twist points that are not in the prime-order subgroup is trusted from the upstream libraries. I haven't directly tested it. Historically a bug class in BN254 precompile implementations, so it's on the audit list.
- STARK security conjecture. 95-bit "conjectured" security relies on the list-decoding bound assumed by most deployed STARK systems. Provable security is lower by a factor of 2 queries. Acceptable in practice, but worth documenting as "conjectured", not "proven".
- Trusted VK assumption (Groth16). An adversary who controls the VK can in principle engineer the pairing check to accept a forged proof. zkmcu assumes the VK is trusted: baked into firmware at provisioning time, or loaded from a trusted channel. If your use case loads the VK dynamically from an untrusted source, that's a separate threat model which zkmcu doesn't defend against.
- Trusted AIR assumption (STARK). Analogous to the Groth16 VK assumption: the AIR definition compiled into the verifier binary is the integrity anchor. An adversary who changes the AIR source before compilation can build a verifier that accepts proofs it shouldn't. STARK upgrades therefore require signed firmware updates, not runtime configuration.
Reporting a vulnerability
Open a GitHub security advisory with repro steps and the affected `zkmcu-verifier` version. Default disclosure window is 90 days from first report, earlier if I can ship a patch sooner.