On the Need for (Quantum) Memory with Short Outputs
In this work, we establish the first separation between computation with bounded and unbounded space, for problems with short outputs (i.e., working memory can be exponentially larger than the output size), both in the classical and the quantum setting. Towards that, we introduce a problem called nested collision finding, and show that optimal query complexity cannot be achieved without exponential memory.
Our result is based on a novel ``two-oracle recording'' technique, where one oracle ``records'' the computation's long outputs under the other oracle, effectively reducing the time-space trade-off for short-output problems to that of long-output problems. We believe this technique will be of independent interest for establishing time-space tradeoffs in other short-output settings.
Conditionally Linkable Attribute-Based Signatures
Attribute-Based Signatures (ABS) enable users to authenticate messages under expressive attribute policies while remaining anonymous. Existing ABS variants, however, treat linkability as a static, system-wide property: signatures are either always unlinkable, as in standard ABS, or globally linkable, as in traceable or accountable extensions. This rigid dichotomy fails to capture scenarios where correlation should arise only under explicitly declared conditions.
This work introduces Conditionally Linkable Attribute-Based Signatures (CLABS), a framework extending ABS with programmable, context-dependent linkability. Each certified user with attribute $x$ is associated with a linking set $L_x$ over a public context space $\mathcal{T}$. For each context $\tau\in\mathcal{T}$, a public function $f_\tau$ specifies how attributes are compared. Two signatures are publicly linkable if and only if $\tau\in L_x\cap L_{x'}$ and $f_\tau(x)=f_\tau(x')$; otherwise they remain unlinkable. This enables selective, verifiable correlation without central trust and with leakage limited to the opt-in bit.
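The linking predicate above can be sketched concretely. The following toy Python models the tag as a PRF keyed by $f_\tau(x)$ (HMAC-SHA256 as a stand-in PRF), so that equal comparison values yield equal, publicly comparable tags; the names `f_tau` and `linkable` are hypothetical, and the sketch elides the signature of knowledge that certifies correct tag computation.

```python
import hmac
import hashlib

def tag(attr_digest: bytes, context: bytes) -> bytes:
    """Toy deterministic tag: a PRF keyed by f_tau(x), evaluated at tau.
    (Hypothetical stand-in for the SoK-certified tag in the scheme.)"""
    return hmac.new(attr_digest, context, hashlib.sha256).digest()

def f_tau(context: bytes, x: str) -> bytes:
    # Hypothetical comparison function: hash of a context-selected field of x.
    field = x.split(",")[0] if context == b"age-check" else x
    return hashlib.sha256(field.encode()).digest()

def linkable(context, L1, tag1, L2, tag2):
    """Two signatures are linkable iff tau lies in both linking sets
    and their tags (hence their f_tau values) coincide."""
    return context in L1 and context in L2 and hmac.compare_digest(tag1, tag2)

# Two users opting in to the "age-check" context with equal f_tau values:
ctx = b"age-check"
t1 = tag(f_tau(ctx, "over18,alice"), ctx)
t2 = tag(f_tau(ctx, "over18,bob"), ctx)
print(linkable(ctx, {ctx}, t1, {ctx}, t2))   # True: same f_tau(x)
print(linkable(ctx, {ctx}, t1, set(), t2))   # False: second user opted out
```

The opt-in bit is exactly membership of $\tau$ in the linking set: a user outside $L_x$ leaks nothing beyond that membership test failing.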
We formalize the syntax and security notions of CLABS, capturing conditional linkability and context-aware anonymity, thereby ensuring privacy and verifiable linkage under voluntary participation. CLABS unifies global unlinkability and fine-grained, context-specific linkage within a single formal framework.
We realize CLABS generically using three modular components: a pseudorandom function for deterministic tag generation, a conventional signature for attribute certification, and a signature of knowledge (SoK) proving correct tag computation and Boolean policy satisfaction without revealing $x$. Finally, we instantiate CLABS under standard lattice assumptions in the quantum random oracle model (QROM), achieving post-quantum security while supporting arbitrary Boolean policies. The techniques we employ to prove circuit satisfiability and tag correctness may be of independent interest.
NIROPoK-Based Post-Quantum Sidechain Design on Ethereum
This paper presents a pioneering approach to constructing sidechains on the Ethereum network with a focus on post-quantum security. Our framework integrates a novel quantum-resistant version of a non-interactive random oracle proof of knowledge (NIROPoK) scheme, alongside a quantum-resistant proof-of-stake mechanism and a post-quantum bridge based on the Dilithium digital signature scheme. By harnessing these advanced cryptographic techniques, we establish a robust defense against quantum computing threats while ensuring enhanced privacy for blockchain transactions. By conducting a thorough analysis and implementing our approach, we demonstrate the feasibility and effectiveness of creating quantum-resistant sidechains within the Ethereum ecosystem. Our proposed sidechain is also capable of securing Ethereum transactions from quantum threats using Ethereum’s current architecture and security measures.
Non-interactive Blind Signatures with Threshold Issuance
Non-interactive blind signatures (NIBS) capture the minimal setting of blind signatures where the message space is restricted to unstructured random strings. They enable a signer to pre-compute presignatures without prior interaction, while ensuring that only the intended recipient can derive the corresponding blind signature.
In this work, we consider the problem of threshold issuance of NIBS. Specifically, we introduce the notion of non-interactive threshold blind signatures (NITBS), where a user obtains partial presignatures from a threshold of signers and locally combines them into a valid blind signature. We provide a formal treatment of this primitive by defining the corresponding security notions of blindness and one-more unforgeability. We then present the first concrete construction of NITBS, obtained by adapting the Pointcheval-Sanders (PS) signature scheme, and establish its security in the algebraic group model. Our micro-benchmarking results show that our construction attains the smallest presignature and signature sizes and the fastest issuance among all existing NIBS schemes.
What a Wonderful World: zkSNARKs in the Algebraic Group Model are Universally Composable
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) are cryptographic primitives critical to many real-world applications. zkSNARKs are not used in isolation but are deployed within a broader context in which other cryptographic protocols may be concurrently executed. Universal Composability (UC) allows rigorous analysis of cryptographic primitives used in such arbitrary contexts. A UC analysis is even more desirable for popular, well-audited, and heavily deployed zkSNARKs already being used in practice.
Prior works that study the UC security of existing zkSNARKs (without modifications) are either not modular, hence requiring case-by-case analysis for new proof systems, or have largely focused on zkSNARKs in the Random Oracle Model (ROM). The latter includes zkSNARKs with logarithmic proof sizes compiled from Interactive Oracle Proofs. This state of the art leaves out a large family of very efficient, often constant-size, zkSNARKs that rely on the Algebraic Group Model (AGM) and optionally on the ROM. This includes zkSNARKs compiled from Polynomial Interactive Oracle Proofs, such as Plonk and Marlin, among others.
In this work, we address the UC security for unmodified zkSNARKs that are proven secure in AGM (+ROM). Our approach is modular: we identify simple, and mostly standard properties on the underlying zkSNARK that imply UC security. We observe that checking these properties for existing zkSNARKs is a surprisingly simple task using the rigorous formulation of AGM from Jaeger and Mohan (CRYPTO'24). The simplicity and modularity of our framework makes it easy-to-use for concluding UC security of several zkSNARKs in the same setting. Concretely, using our framework we establish that Plonk and Marlin are UC secure without any overhead.
Orthus: Practical Sublinear Batch-Verification of Lattice Relations from Standard Assumptions
The past several years have seen a rapid rise in practical lattice-based proof systems: linear-sized zero-knowledge proofs form the foundation of many of the most efficient quantum-safe privacy protocols, while succinct proofs are rapidly catching up to, and in many metrics surpassing, other quantum-safe alternatives. A recent comparison of lattice-based aggregate signatures (Ethereum Foundation, 2025), involving the hash-based aggregate signature scheme Plonky3 and the instantiation of aggregate signatures from Falcon in the LaZer lattice library (Lyubashevsky, Seiler, Steuer, CCS 2024) using LaBRADOR (Beullens, Seiler, Crypto 2023), showed that lattice-based constructions have an advantage in terms of proof size and prover time, but are around an order of magnitude slower in verification time. In general, slower verification times appear to be the main obstacle to the adoption of succinct lattice-based proof systems.
In this work, we introduce and implement Orthus, a proof system with sub-linear verification designed for relations that naturally arise in lattice-based constructions. Asymptotically, the verification time grows with the square root of the witness size, and for a concrete example of aggregating Falcon signatures our implementation reduces the verifier running time by a factor of $9$ when aggregating $2^{17}$ signatures.
Bittersweet Signatures: Bringing LWR to a Picnic for Hardware-Friendly MPC-in-the-Head
Post-quantum signature schemes are becoming increasingly important due to the threat of quantum computers to classical cryptographic schemes. Among the approaches considered in the literature, the MPC-in-the-head paradigm introduced by Ishai et al. (STOC'07) provides an innovative solution for constructing zero-knowledge proofs by exploiting Multi-Party Computation (MPC). This technique has proven to be a versatile tool in order to design efficient cryptographic schemes, including post-quantum signatures.
Building on the MPC-in-the-head paradigm, we introduce Bittersweet signatures, a new class of signature schemes based on the Learning With Rounding (LWR) assumption. Their main advantage is conceptual simplicity: by exploiting (almost) key-homomorphic pseudorandom functions (PRFs), a cryptographic object that preserves pseudorandomness while allowing linear operations on keys, we obtain a very regular design offering nice opportunities for parallel implementations.
Theoretically, analyzing Bittersweet signatures requires addressing significant challenges related to the (carry) leakage that almost key-homomorphic operations lead to.
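The almost-key-homomorphic behaviour, and the carry leakage it induces, can be seen in a toy LWR-style PRF. The following Python sketch (toy parameters and a hash-derived public vector, hypothetical and not the scheme's actual PRF) shows that keys add up to a rounding error of at most one.

```python
import hashlib

q, p = 2**16, 2**8   # toy LWR moduli: values in Z_q are rounded down to Z_p
n = 8                # toy key dimension

def a_of_x(x: bytes):
    """Public vector A(x) derived from the input via a hash (toy stand-in)."""
    h = hashlib.shake_128(x).digest(2 * n)
    return [int.from_bytes(h[2 * i:2 * i + 2], "big") % q for i in range(n)]

def prf(k, x):
    """Toy LWR-style PRF: round_p(<A(x), k> mod q)."""
    s = sum(ai * ki for ai, ki in zip(a_of_x(x), k)) % q
    return (s * p) // q   # the rounding step is where the 'carry' arises

k1 = [3, 1, 4, 1, 5, 9, 2, 6]
k2 = [2, 7, 1, 8, 2, 8, 1, 8]
ksum = [(a + b) % q for a, b in zip(k1, k2)]
x = b"input"
lhs = prf(ksum, x)                     # PRF under the summed key
rhs = (prf(k1, x) + prf(k2, x)) % p    # sum of PRF outputs
print(lhs, rhs, (lhs - rhs) % p)       # outputs agree up to a carry of 0 or 1
```

The discrepancy $(\mathrm{lhs}-\mathrm{rhs}) \bmod p \in \{0,1\}$ is exactly the carry term whose leakage the security analysis must handle.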
Concretely, Bittersweet signatures natively lead to competitive signature sizes, trading moderate software performance overheads for hardware performance gains when compared to state-of-the-art MPC-in-the-head schemes (e.g., relying on code-based assumptions), while admittedly lagging a bit behind recent algorithms based on the VOLE-in-the-head or Threshold-Computation-in-the-head frameworks.
Besides, their scalability and algebraic structure makes them promising candidates for leakage-resilient implementations. The new abstractions we introduce additionally suggest interesting research directions towards further optimization and generalization.
Anonymity of X-Wing and its Variants
X-Wing (Barbosa et al., CiC Volume 1, Issue 1) is a hybrid key encapsulation mechanism (KEM) currently considered for standardization by IETF and deployed by major companies such as Google to ensure a secure transition to post-quantum cryptography. It combines a classical X25519 KEM with the post-quantum ML-KEM-768.
In this paper, we propose the first analysis of the anonymity of X-Wing. We are interested in tight and memory-tight reductions that offer stronger security guarantees. We first establish in the standard model that for any IND-CCA secure KEM, weak anonymity implies full anonymity, and our reduction is tight not only in success probability and time but also in memory consumption. We then prove in the random oracle model that X-Wing achieves weak anonymity if both X25519 and ML-KEM-768 are weakly anonymous. The former can even be proven without a hardness assumption.
Our proof on the weak anonymity of X-Wing does not preserve the memory-tightness of the underlying KEMs. To improve it, we propose a slight variant of X-Wing that preserves memory-tightness. Finally, we improve the existing IND-CCA proof of the original X-Wing by Barbosa et al. using our new memory-tight analysis.
How To Make Delegated Payments on Bitcoin: A Question for the AI Agentic Future
AI agents and custodial services are increasingly being entrusted as intermediaries to conduct transactions on behalf of institutions. The stakes are high: The digital asset market is projected to exceed \$16 trillion by 2030, where exchanges often involve proprietary, time-sensitive goods. Although industry efforts like Google’s Agent Payments Protocol (AP2) standardize how agents authorize payments, they leave open the core challenge of fair exchange: ensuring that a buyer obtains the asset if and only if the seller is compensated without exposing sensitive information.
We introduce proxy adaptor signatures (PAS), a new cryptographic primitive that enables fair exchange through delegation without sacrificing atomicity or privacy. A stateless buyer issues a single request and does not need to manage long-term cryptographic secrets while proxies complete the exchange with a seller. The seller is guaranteed payment if the buyer can later reconstruct the purchased witness; meanwhile, the proxies remain oblivious to the witness throughout the protocol. We formalize PAS under a threshold model that tolerates the collusion of up to $t-1$ proxies. We also present an efficient construction from standard primitives that is compatible with Bitcoin, Cardano, and Ethereum. Finally, we evaluate a Rust implementation that supports up to 30 proxies. Our prototype is concretely efficient: buyer and seller computations take place in microseconds, proxy operations in milliseconds, and on-chain costs are equivalent to those of a standard transaction without fair exchange.
SQISign on ARM
We present the first vectorized implementation of SQIsign for high-performance Arm architectures. SQIsign is a promising candidate in the NIST On-Ramp Digital Signatures Call Round 2 due to its highly compact key and signature sizes. However, its signing performance remains a primary bottleneck, particularly the ideal-to-isogeny conversion. The conversion requires a large number of operations on elliptic curves and Abelian varieties, which depend on finite field arithmetic. Despite recent algorithmic improvements, high-performance implementations and efficient vectorized finite field arithmetic for SQIsign remain unexplored.
Our main contribution is the first demonstration of non-trivial vectorization speedups for SQIsign. By leveraging the NEON instruction set, we implement highly efficient finite field arithmetic and batched elliptic curve operations tailored for 2-dimensional isogeny chain computations. This accelerates the subroutine by 2.24$\times$ over the state-of-the-art. Moreover, our improvements are completely orthogonal to the recent algorithmic improvement Qlapoti (Asiacrypt 2025), offering similar performance gains in the SQIsign signing algorithm. When combined with Qlapoti, our implementation achieves a speedup of more than 2.24$\times$ in signing at NIST security level I. We expect our work to inspire further SQIsign optimization from a vectorization perspective, especially for quaternion computations with precise bounds.
VROOM: Accelerating (Almost All) Number-Theoretic Cryptography Using Vectorization and the Residue Number System
Modular arithmetic with a large prime modulus is a dominant computational cost in number-theoretic cryptography. Modular operations are especially challenging to parallelize efficiently on CPUs using vector instructions; standard CPU implementations rely on costly carry operations and permutation instructions to align with the multiplication datapath, negating the benefits of vectorization.
We develop vectorized algorithms for modular addition and multiplication, and present a new, constant-time modular multiplication algorithm suitable for general moduli, prime or otherwise. Our method uses a Residue Number System (RNS) representation to align the arithmetic naturally with wide vector units, and strategically eliminates extraneous instructions. Existing works either require the use of customized hardware or fail to show latency improvements.
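The RNS idea can be sketched independently of the vectorized implementation described above: each residue channel plays the role of a vector lane, and multiplication proceeds lane-wise with no cross-lane carries. A minimal Python model with illustrative moduli:

```python
from math import prod

# Pairwise-coprime moduli (distinct Mersenne primes); each one is a "lane".
MODULI = (2**13 - 1, 2**17 - 1, 2**19 - 1, 2**31 - 1)
M = prod(MODULI)

def to_rns(x):
    """Each lane holds x mod m_i; lanes never exchange carries."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Lane-wise multiplication: the carry-free step that vectorizes well."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """CRT reconstruction back to the integer representative in [0, M)."""
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # Mi^{-1} mod m exists by coprimality
    return x % M

a, b = 123456789012345, 987654321098765
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
```

In a real vectorized implementation the lanes map to SIMD elements and the per-lane reductions use lane-local arithmetic; the sketch only shows why the representation removes inter-lane carry propagation.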
Reducing the latency of modular arithmetic results in speedups for cryptographic applications. We accelerate RSA-4096 signatures by $4.0\times$ (verify) and $1.3\times$ (sign) over OpenSSL, and speed up BLS signature verifications by $3.43\times$ over the assembly-optimized BLST library. To facilitate broad practical adoption, we plan to upstream our implementation into BoringSSL, where it will directly benefit real-world TLS and cryptographic deployments.
Fast cube roots in Fp2 via the algebraic torus
Computing cube roots in quadratic extensions of finite fields is a subroutine that arises in elliptic-curve point decompression, hash-to-curve, and isogeny-based protocols. We present a new algorithm that, for $p \equiv 1 \pmod{3}$ (which holds in all settings where $\mathbb{F}_{p^2}$ cube roots arise in practice), reduces the $\mathbb{F}_{p^2}$ cube root to operations entirely in the base field $\mathbb{F}_p$ via the algebraic torus $\mathbb{T}_2(\mathbb{F}_p)$ and Lucas sequences. We prove correctness in all residuosity cases and implement the algorithm using the $\texttt{gnark-crypto}$ open-source library. Benchmarks on six primes spanning pairing-based and isogeny-based cryptography show $1.6$–$2.3\times$ speed-ups over direct (addition-chain) exponentiations in $\mathbb{F}_{p^2}$. We also extend the approach to $p \equiv 2 \pmod{3}$ and, more generally, to any odd $n$-th roots in quadratic towers $\mathbb{F}_{p^{2^k}}$ with $\gcd(n, p+1) = 1$.
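For intuition, the $p \equiv 2 \pmod{3}$ case mentioned at the end admits a one-line base-field solution: since $\gcd(3, p-1) = 1$, cubing is a bijection on $\mathbb{F}_p$, inverted by a single exponentiation. The sketch below (toy prime; this is the elementary baseline, not the torus-based algorithm of the paper) checks this exhaustively.

```python
# For p = 2 (mod 3), 3 * e = 2p - 1 = 2(p-1) + 1 = 1 (mod p-1),
# so c -> c^e inverts the cube map on F_p.
p = 101            # toy prime with p = 2 (mod 3)
assert p % 3 == 2
e = (2 * p - 1) // 3

for c in range(1, p):
    r = pow(c, e, p)        # the unique cube root of c in F_p
    assert pow(r, 3, p) == c
print("all cube roots verified mod", p)
```

The $p \equiv 1 \pmod 3$ case is harder precisely because the cube map is 3-to-1 on the nonzero cubes, which is where the torus and Lucas-sequence machinery comes in.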
Zero-Knowledge IOPPs for Constrained Interleaved Codes
Succinct arguments based on interactive oracle proofs (IOPs) have achieved remarkable efficiency improvements and are now widely adopted in applications. State-of-the-art IOPs involve protocols for testing proximity to constrained interleaved linear codes, and enjoy essentially optimal parameters. However, recent IOP constructions provide no privacy guarantees, which remain a must for many applications.
We present an IOP of proximity for testing constrained interleaved linear codes that achieves (honest-verifier) zero-knowledge, while incurring a negligible overhead compared to the (non-zero-knowledge) state of the art. In line with recent constructions, our construction satisfies round-by-round knowledge soundness with a straightline extractor and negligible error.
We propose a definition of (honest-verifier) zero-knowledge for interactive oracle reductions (IORs) that we prove is compatible with composition, and then obtain our result by constructing and modularly composing several lightweight zero-knowledge IORs. Our key technical contributions are a zero-knowledge sumcheck IOR and a zero-knowledge code-switching IOR that fit the strict efficiency requirements of our setting; these contributions and other technical complications entailed overcoming several challenges with new notions and protocols. Finally, along the way, we highlight the efficiency benefits of high-distance codes obtained from dispersers, which may be of independent interest.
Succinct Arguments for BatchQMA and Friends under 6 Rounds
We study the problem of minimizing round complexity in the context of succinct classical argument systems for quantum computation. All prior works either require at least 8 rounds of interaction between the quantum prover and classical verifier, or rely on the idealized quantum random oracle model (QROM). We design:
1. A 4-round public-coin (except for the first message) argument system for batchQMA languages. Our results come in two flavors:
(a) Under the post-quantum hardness of functional encryption and learning with errors (LWE), we achieve optimal communication complexity (i.e., all message sizes are independent of batch size).
(b) Under the post-quantum hardness of LWE, we achieve optimal communication complexity except for the verifier’s first message.
2. A 6-round private-coin argument system for monotone policy batchQMA languages, under the post-quantum hardness of LWE. The communication complexity is independent of the batch size as well as the monotone circuit size.
One of our main technical contributions is a new approach to prove soundness without rewinding cheating provers. We bring the notion of straight-line partial extractability to argument systems for quantum computation. All previous works crucially relied on rewinding cheating provers and thus needed “state-preserving” succinct arguments of knowledge (AoKs) for NP to prove soundness.
Towards Accountability for Anonymous Credentials
Anonymous Credentials (or ACs) enable users to prove claims with strong privacy guarantees, protecting credential holders from being tracked by issuers and verifiers. However, these privacy guarantees imply that a credential holder cannot be held accountable for misuse (e.g., selling credential checks online for proving $age > 18$). The lack of accountability may raise questions about the adoption of ACs into national identity systems (e.g., European EUDI or Swiss e-ID), which might lead to the issuing authorities resorting to credential systems with weaker privacy guarantees (e.g., batch issuance of one-show credentials). This shows that the lack of accountability can adversely impact the levels of privacy enjoyed by users.
Hence, in this paper, we discuss transferability attacks on ACs and introduce a framework for providing accountability in AC systems. In addition to issuers, holders, and verifiers, it assumes the existence of: (i) a law enforcement body (the police) and a judicial body (the judge) that work together to find information on credential misuse; and (ii) one or more digital privacy advocates, called the NGO(s), that ensure the system is not used for tracking people.
We introduce the cryptographic forensic trail (CFT), which is attached to each credential show. The CFT can be used for obtaining more information about an individual if and only if the police have probable cause and can convince the judge to issue a corresponding search warrant. Then, the police, the judge, and the NGO(s) run a multiparty protocol for decrypting relevant trails only. The protocol mimics the checks and balances of a healthy democracy, in which neither law enforcement nor the judiciary can track people at will. Even if both branches collude, the NGO(s) can detect the misuse and block further use.
In addition to possible extensions, we discuss performance constraints on mobile phones and argue that practical feasibility of the CFT is within reach.
Necessary and Sufficient Conditions for the Existence of Ideal Linear Secret Sharing Schemes for Arbitrary Access Structures
Determining whether an arbitrary access structure can be realized by an ideal linear secret sharing scheme is an important research topic. We use linear codes as the main tool to construct matrices $H$ and $G$ over a finite field $\mathbb{F}_q$ for a given access structure $\Gamma_{\min}$, and show that a necessary and sufficient condition for the existence of an ideal linear secret sharing scheme realizing $\Gamma_{\min}$ is that the equation $GH^{\mathsf{T}}=0$ has a solution. If this equation has a solution, then $H$ serves as the parity-check matrix of a linear code that realizes $\Gamma_{\min}$, and $G$ is the corresponding generator matrix. Furthermore, we prove that the result is equivalent to the following statement: there exists an ideal linear code realizing $\Gamma_{\min}$ if and only if it is the port of a matroid that is representable over $\mathbb{F}_q$.
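The orthogonality condition $GH^{\mathsf{T}}=0$ can be illustrated on a toy code. The Python sketch below (illustrative matrices, not derived from any particular $\Gamma_{\min}$) builds a systematic generator $G=[I_k \mid A]$ and the matching parity-check $H=[-A^{\mathsf{T}} \mid I_{n-k}]$ over $\mathbb{F}_5$, for which $GH^{\mathsf{T}} = A - A = 0$ holds by construction.

```python
q = 5   # small prime field F_q

def matmul_t(G, H, q):
    """Compute G * H^T over F_q (rows of H are the columns of H^T)."""
    return [[sum(g * h for g, h in zip(grow, hrow)) % q for hrow in H]
            for grow in G]

# Systematic generator G = [I_k | A] with A = [[1, 2], [3, 4]] ...
G = [[1, 0, 1, 2],
     [0, 1, 3, 4]]
# ... and parity-check H = [-A^T | I_{n-k}] over F_5:
H = [[(-1) % q, (-3) % q, 1, 0],
     [(-2) % q, (-4) % q, 0, 1]]

# G H^T = 0, so H is a valid parity-check matrix for the code spanned by G:
assert matmul_t(G, H, q) == [[0, 0], [0, 0]]
print("G H^T = 0 over F_5")
```

In the paper's setting the content of the theorem is the converse direction: solvability of $GH^{\mathsf{T}}=0$ for matrices shaped by $\Gamma_{\min}$ is what certifies that an ideal linear scheme exists.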
A Comprehensive Break of the Tropical Matrix-Based Signature Scheme
The recent digital signature scheme proposed by Grigoriev, Monico, and Shpilrain (GMS26) attempts to leverage the NP-hardness of tropical matrix factorization as its security foundation. We present a systematic cryptanalysis revealing multiple structural vulnerabilities. First, we demonstrate an existential forgery attack in the chosen-hash model requiring only polynomial-time operations. Second, we establish that the scheme is fundamentally malleable, allowing an adversary to generate arbitrarily many distinct valid signatures for any observed message—thereby breaking strong unforgeability. Third, we show that collecting a polynomial number of honest signatures enables partial key recovery through probabilistic leakage of private key entries. Finally, we present an SMT-based key recovery attack that, given only two valid signatures, recovers the full private key in seconds for the recommended parameters. Our attacks do not rely on solving the claimed NP-hard problem but exploit algebraic properties of the signing equations. Combined, they demonstrate that the scheme fails to achieve the standard security notions of existential unforgeability, strong unforgeability, and key confidentiality.
Determining those Boolean functions whose restrictions to affine spaces are plateaued
Quadratic Boolean functions (that is, Boolean functions of algebraic degree at most 2), bent Boolean functions (i.e. maximally nonlinear Boolean functions in even numbers of variables) and, as we prove in this paper, partially-bent Boolean functions (i.e. affine extensions of bent functions to linear super-spaces), share a strong property: all their restrictions to affine hyperplanes are plateaued (i.e. have a Walsh transform valued in a set of the form $\{0,\pm \lambda\}$, where $\lambda$ is a positive integer called the amplitude). In this paper we determine for any $n$ and $k<n$ the class $C^n_k$ of those $n$-variable Boolean functions whose restrictions to all $k$-dimensional affine subspaces of $\mathbb F_2^n$ are plateaued (of any amplitude). We characterize partially-bent (resp., quadratic) Boolean functions as those functions that are plateaued on any affine hyperplane (resp., any affine subspace of dimension $k$, where $3 \leq k \leq n-2$, while these are all Boolean functions for $0\leq k\leq 2$). For $n\geq 5$, each of the following classes of Boolean functions happens then to be strictly included in the next one: quadratic functions, partially-bent functions, the restrictions of partially-bent functions to affine hyperplanes, plateaued functions, the restrictions of plateaued functions to affine hyperplanes, and all Boolean functions. We leave open the two problems of determining exactly what are the third and fifth of these classes (we begin the study of the first of these two classes by giving a non-trivial characterization). Our characterization of partially-bent (resp., quadratic) functions extends to strongly plateaued vectorial functions. We state an open question on vectorial functions that happens to be related to an important one on crooked functions.
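The plateaued property is easy to test exhaustively for small $n$. The following Python sketch (a toy check, not the paper's characterization) computes the Walsh transform of the quadratic bent function $x_1x_2 \oplus x_3x_4$ and verifies that its spectrum lies in $\{0,\pm\lambda\}$ for a single amplitude $\lambda$.

```python
from itertools import product

n = 4

def f(x):
    # Quadratic Boolean function x1*x2 XOR x3*x4 (bent for n = 4).
    return (x[0] & x[1]) ^ (x[2] & x[3])

def walsh(f, n):
    """W_f(a) = sum over x of (-1)^(f(x) + <a, x>)."""
    pts = list(product((0, 1), repeat=n))
    return [sum((-1) ** (f(x) ^ (sum(ai & xi for ai, xi in zip(a, x)) & 1))
                for x in pts)
            for a in pts]

W = walsh(f, n)
amplitudes = {abs(w) for w in W} - {0}
print(sorted(set(W)))          # the spectrum: values in {0, +/- lambda}
assert len(amplitudes) == 1    # plateaued: a single amplitude lambda
```

For this bent example every Walsh value is $\pm 2^{n/2} = \pm 4$; restricting $f$ to an affine hyperplane and re-running the same check illustrates the property the paper studies for the classes $C^n_k$.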
Bridging Privacy and Utility: A Verifiable Framework for Data Valuation via Zero-Knowledge Proofs
Deep learning's hunger for high-quality data has catalyzed a burgeoning economy of decentralized data marketplaces. However, a fundamental trust deficit stifles this ecosystem: buyers fear data poisoning, while sellers fear data leakage. Although the Shapley value offers a rigorous economic framework for fair compensation, its calculation traditionally requires a Trusted Third Party (TTP) to access raw data, creating a single point of failure for privacy. Verifying data valuation without compromising confidentiality remains an open challenge.
In this paper, we present ZK-DV, the first Zero-Knowledge Proof (ZKP) system designed for verifiable, privacy-preserving data valuation. ZK-DV enables a seller to prove that a claimed valuation score (based on Gradient Shapley) is mathematically consistent with the underlying private data and the buyer's model, without revealing either. Our key technical insight is the architectural coupling of model training and valuation: we construct a specialized arithmetic circuit that folds the valuation logic into backpropagation, extracting marginal utility scores from intermediate gradients. This design, implemented via the GKR protocol with a hybrid commitment strategy, amortizes the heavy cryptographic overhead through batched processing. Extensive experiments on the MNIST dataset demonstrate that ZK-DV is practical: by optimizing batch sizes, we achieve a $2.7\times$ speedup in proof generation, while maintaining a quantization fidelity of $\rho=1.0$ and a low verification cost ($< 0.2$s). ZK-DV thus bridges the gap between cryptographic integrity and economic fairness, paving the way for trustless data exchange.
The Structured Generic-Group Model
This paper introduces the structured generic-group model, an extension of Shoup’s generic-group model (from Eurocrypt 1997) to capture algorithms that take advantage of some non-generic structure of the group. We show that any discrete-log algorithm in a group of prime order $q$ that exploits the structure of at most a $\delta$ fraction of group elements, in a way that we precisely define, must run in time $\Omega(\min(\sqrt{q},1/\delta))$. As an application, we prove a tight subexponential-time lower bound against discrete-log algorithms that exploit the multiplicative structure of smooth integers, but that are otherwise generic. This lower bound applies to a broad class of index-calculus algorithms. We prove similar lower bounds against algorithms that exploit the structure of small integers, smooth polynomials, and elliptic-curve points.
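For context, the $\sqrt{q}$ running time that fully generic algorithms achieve (and that the $\Omega(\sqrt{q})$ term of the bound matches) is realized by baby-step giant-step. Here is a minimal Python sketch over a toy multiplicative group; the parameters are illustrative only.

```python
from math import isqrt

def bsgs(g, h, p):
    """Baby-step giant-step: solve g^x = h (mod p) in about sqrt(p) group
    operations, the generic baseline matched by the Omega(sqrt(q)) bound."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j for j < m
    giant = pow(g, -m, p)                        # giant step g^(-m) mod p
    gamma = h
    for i in range(m):
        if gamma in baby:                        # h * g^(-im) = g^j
            return i * m + baby[gamma]
        gamma = (gamma * giant) % p
    return None

p, g, x = 1000003, 2, 424242   # toy parameters
h = pow(g, x, p)
d = bsgs(g, h, p)
assert d is not None and pow(g, d, p) == h
```

An algorithm exploiting structure on a $\delta$ fraction of elements could, per the paper's bound, beat this only down to time $\Omega(\min(\sqrt{q}, 1/\delta))$.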
HCTR$^{++}$ : A Beyond Birthday Bound Secure HCTR2 Variant
Current industry-standard block cipher modes of operation, such as CBC and GCM, are fundamentally limited by the birthday bound $O(2^{n/2})$, a constraint that has evolved from a theoretical concern into a practical security bottleneck in contemporary high-throughput, high-data-volume environments. To address this, the cryptographic community and NIST are prioritizing Beyond Birthday Bound (BBB) security to extend the operational security margin toward the full block size $O(2^n)$. Achieving BBB security requires a departure from traditional constructions, primarily utilizing three methodologies: XOR of Permutations (XORP), Tweakable Block Ciphers (TBCs), and Fresh Rekeying. While none of these innovative BBB modes have been formally standardized, NIST has initiated the Accordion Mode project, defining a new primitive class: the Tweakable Variable-Input-Length Strong Pseudorandom Permutation (VIL-SPRP). This primitive treats the entire message as a single, indivisible block and expects the submission of BBB-secure variants. To contribute to this standardization effort, we propose a simple BBB-secure variant of the HCTR2 algorithm. We first explain the core BBB methodologies, then discuss the operational mechanism of HCTR2, and finally present our proposed BBB-secure construction.
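The birthday bound invoked above can be made numerically concrete. This Python sketch uses the exact product formula for the collision probability among $q$ uniform $n$-bit values (toy block size), showing it reaching roughly $0.39$ at $q = 2^{n/2}$.

```python
import math

def collision_prob(q, n):
    """P[at least one collision] among q uniform n-bit values,
    via the exact product form 1 - prod_{i<q} (1 - i/2^n)."""
    N = 2 ** n
    log_no_collision = sum(math.log1p(-i / N) for i in range(q))
    return 1 - math.exp(log_no_collision)

n = 32                                        # toy block size
print(collision_prob(2 ** (n // 2), n))       # ~0.39 at the birthday bound
print(collision_prob(2 ** (n // 2 - 4), n))   # well below the bound: tiny
```

BBB security aims to keep this probability negligible well past $2^{n/2}$ queries, ideally approaching the full codebook $2^n$.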
Multi-key Security in the Quantum World: Revisiting Tweakable Even-Mansour and FX
In this paper, we prove the security of symmetric-key constructions in an adversary model called the Q1MK model, which combines the Q1 model, where the adversary makes classical online queries and quantum offline queries, and the multi-key (multi-user) setting. Specifically, under this model, we prove the security of two symmetric-key constructions: the tweakable Even-Mansour cipher (TEM) and the FX construction (FX), as starting points for understanding the post-quantum security of symmetric-key constructions in this adversary model. Our security proofs are based on the hybrid argument technique introduced by Alagic et al. at EUROCRYPT 2022. First, we prove that in order to break TEM in the Q1MK model, $\Omega(2^{\kappa/3})$ classical and quantum queries are needed, regardless of the number of target $\kappa$-bit keys. Then, before turning to the Q1MK security analysis of FX, we revisit the security proof of FX in the standard Q1 model proposed in version 20230317:200508 of ePrint 2022/1097 and tighten it. By the modified proof, we show that in order to break FX with $(\kappa + n)$-bit secret key in the Q1 model, $\Omega(2^{(\kappa+n)/3})$ classical and quantum queries are needed. We then apply this analysis to the Q1MK setting, and we show that in order to break FX in the Q1MK model, $\Omega(2^{(\kappa + n - u)/3})$ classical and quantum queries are needed, when $2^u$ ($\le 2^{\kappa}$) independent keys are in use.
Multi-Committee MPC: From Unanimous to Identifiable Abort
In this work, we consider dishonest majority MPC protocols with $(1-\epsilon)n$ corrupted parties for some constant $\epsilon\in (0,1/2)$. In this setting, there exist MPC protocols with unanimous abort that achieve constant communication in both online and offline phases via a packed secret sharing scheme. Departing from these approaches, we revisit the ``committee-based'' approach to design an efficient MPC protocol with constant online and offline communication complexity.
To balance the communication load of each party, our protocol adopts multiple committees, each of constant size. The computation of circuit $C$ is then divided into layers, each assigned to one committee. To securely transmit messages between committees, we introduce the handoff gates, incurring only a slight communication overhead. Furthermore, we leverage circuit-dependent preprocessing and incremental checking to improve the online efficiency. Compared to other MPC protocols in the same corruption setting, our protocol achieves the smallest concrete total communication complexity.
Building upon our multi-committee unanimous-abort protocol, we upgrade it to identifiable abort by adapting a technique from (Rivinius, EUROCRYPT 2025). To integrate this technique into our setting, we adjust the verification timing and introduce a king party to reduce the communication complexity of openings. This yields the first identifiable-abort MPC protocol with constant communication complexity in the sub-optimal dishonest majority setting.
Lattice HD Wallets: Post-Quantum BIP32 Hierarchical Deterministic Wallets from Lattice Assumptions
Hierarchical deterministic (HD) wallets, standardized as BIP32, allow users to manage a tree of cryptographic key pairs from a single master seed. A defining feature is non-hardened derivation: child public keys can be derived from a parent public key alone, enabling watch-only wallets where a server generates fresh receiving addresses while the signing key remains offline. Existing constructions rely on the algebraic structure of elliptic curve public keys, and recovering this functionality in the post-quantum setting is an open problem. We present two post-quantum HD wallet constructions. The first uses ML-DSA and supports hardened (private-key-dependent) derivation with proofs of unlinkability and unforgeability. The second, our primary contribution, uses Raccoon-G, a variant of the Raccoon signature scheme with Gaussian-distributed secrets. We modify Raccoon-G to publish full, unrounded public keys, preserving linearity. Since the sum of Gaussians is again Gaussian with predictable wider variance, derived keys remain statistically close to freshly generated ones, enabling non-hardened public key derivation. We prove unlinkability and unforgeability under standard lattice assumptions, and introduce a method for generating rerandomizable Raccoon-G key pairs from fixed randomness with provable existential unforgeability. Both constructions are implemented in Rust. To our knowledge, this is the first post-quantum HD wallet construction that recovers BIP32's full public key derivation functionality with provable security under standard assumptions.
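The variance arithmetic behind the non-hardened derivation above can be sketched in a toy continuous model (Raccoon-G works with discrete Gaussians over module lattices; this only illustrates the "sum of Gaussians is Gaussian with wider variance" fact, and all names and parameters are illustrative):

```python
import random
import statistics

# Toy analogue of Gaussian-secret child-key derivation: the child secret
# is the parent secret plus a fresh Gaussian tweak, so it is itself
# Gaussian with variance sigma_parent^2 + sigma_tweak^2.
rng = random.Random(1)
sigma_parent, sigma_tweak = 3.0, 4.0

def derive_child(parent_secret: float) -> float:
    # Hypothetical derivation step: add an independent Gaussian tweak.
    return parent_secret + rng.gauss(0.0, sigma_tweak)

samples = [derive_child(rng.gauss(0.0, sigma_parent)) for _ in range(200_000)]
observed = statistics.variance(samples)  # expected: 3^2 + 4^2 = 25
```

Because the derived secret's distribution is predictable, a derived key is statistically close to a freshly generated one drawn with the wider parameter, which is what makes watch-only public-key derivation possible.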
Pairing-based Functional Commitments for Circuits with Shorter Parameters
Functional commitments (FCs) enable a prover to commit to a message and later produce a succinct proof of its image under any given admissible function. Unlike succinct non-interactive arguments (SNARGs), secure FCs can be realised under falsifiable assumptions in the standard model, making them attractive alternatives when fully algebraic constructions are desired. All known algebraic constructions of FC are either lattice-based or pairing-based. While a lattice-based FC for circuits with almost optimal complexities has been recently achieved [Wee, CRYPTO'25], state-of-the-art pairing-based FCs for bounded-width $w$ [Balbás-Catalano-Fiore-Lai (BCFL), TCC'23] and bounded-size $s$ circuits [Wee-Wu, EUROCRYPT'24] require prohibitive public parameter sizes $O(\lambda w^5)$ and $O(\lambda s^5)$, respectively.
In this work, we present a new algebraic pairing-based FC which achieves $O(\lambda w^3)$ public parameter size for bounded-width $w$ and unbounded depth $d$ circuits. The construction preserves all nice properties of BCFL: $O(\lambda)$ commitment size, $O(\lambda d^2)$ proof size, additive homomorphism, efficient verification, and chainability. For bounded-size $s$ circuits, we alternatively obtain $O(\lambda s^3)$ public parameters with $O(\lambda d)$ proofs. At the core of our scheme lies a new chainable FC for quadratic functions with commitments computed with respect to a power basis, as well as techniques for switching between different commitment bases.
Information-Theoretic Network-Agnostic MPC with Polynomial Communication
Network-agnostic MPC protocols tolerate simultaneously a higher number of corruptions $t_s < n/2$ when the network is synchronous, and a lower number $t_a < n/3$ when the network is asynchronous. As such, they provide strong resilience, irrespective of the type of underlying communication network.
We focus on improving the communication complexity of network-agnostic MPC with optimal resilience $2t_s + t_a < n$. In this regime, there are no polynomial-time information-theoretic solutions and current computational protocols (without fully-homomorphic encryption) communicate $O(n^2)$ elements per multiplication gate.
In this work, we significantly advance the landscape by introducing the first information-theoretic protocol with quadratic communication per multiplication gate and the first computational protocol with linear communication per multiplication gate based solely on signatures and symmetric-key encryption.
Perfectly Secure Network-Agnostic MPC Comes for Free
Secure multiparty computation (MPC) allows a set of parties to jointly compute a function while keeping their inputs private. Classical MPC protocols assume either a synchronous or an asynchronous network. Synchronous protocols tolerate more corrupted parties but rely on a timing bound, while asynchronous protocols make no timing assumptions but handle fewer corruptions.
The network-agnostic model aims to combine the advantages of both. It requires security without knowing in advance whether the network is synchronous or asynchronous, guaranteeing resilience against up to $t_s$ corruptions in the synchronous case and $t_a$ corruptions in the asynchronous case. The optimal corruption threshold for perfect security has been established as $n = 2\max(t_s, t_a) + \max(2t_a, t_s) + 1$, but prior work either falls short of this threshold or requires exponential local computation.
In this work, we present the first perfectly secure network-agnostic MPC protocol with polynomial communication and computation complexity under the optimal threshold. Our protocol achieves expected communication complexity $\mathcal{O}((|C|n + (D+C_I)n^2 + n^6)\log n)$ bits for a circuit of size $|C|$ over a finite field $\mathbb{F}$ of size $\mathcal{O}(n)$, depth $D$, and input size $C_I$.
Our main technical contribution is a compiler that generates Beaver triples in the network-agnostic setting using synchronous and asynchronous triple-generation protocols in a black-box way. Beyond the cost of the underlying protocols, it only requires $\mathcal{O}(n^2)$ instances of network-agnostic Byzantine agreement.
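The Beaver triples that the compiler above generates are consumed in the standard way during multiplication. A minimal sketch over additive sharings (the field, sharing layout, and function names are illustrative, not the paper's protocol):

```python
import random

P = 2**61 - 1  # illustrative prime field

def share(x, n, rng):
    # Additive secret sharing of x over F_P among n parties.
    parts = [rng.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    # Given a preprocessed triple (a, b, c = a*b), parties open
    # d = x - a and e = y - b, then locally derive shares of x*y via
    # x*y = d*e + d*b + e*a + c.
    d = reconstruct([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    e = reconstruct([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    z_sh = [(d * bi + e * ai + ci) % P for ai, bi, ci in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P  # public constant added by one party
    return z_sh
```

The openings of $d$ and $e$ are the only interaction, which is why triple generation dominates the offline cost and motivates the black-box compiler.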
Is PSI Really Faster Than PSU? Achieving Efficient PSU with Invertible Bloom Filters
Private Set Union (PSU) enables two parties to compute the union of their private sets without revealing anything beyond the union itself. Existing PSU protocols remain much slower than private set intersection (PSI), often by a factor of around $30\times$.
In this work, we present the first PSU protocol based on Invertible Bloom Lookup Tables (IBLTs), introducing a fundamentally new framework that departs from traditional, inefficient approaches. Our protocol exploits structural invariants between each party’s IBLTs and their union to compute the union efficiently without explicitly constructing a combined IBLT. Central to our approach is the notion of union peelability, which allows union elements to be recovered directly from the original IBLTs. We securely implement this functionality using only Oblivious Transfer (OT) and Oblivious Pseudorandom Function (OPRF) for equality checks, ensuring no information beyond the union is leaked.
As a result, for set sizes ranging from $2^{14}$ to $2^{20}$, our protocol achieves a runtime of $0.08$ to $2.95$ seconds in the LAN setting, which is comparable to state-of-the-art PSI. We also show substantial speedups over prior PSU work—up to $10\times$ faster in LAN settings and consistently faster in WAN scenarios—while maintaining linear computation and communication complexity with small constants.
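A minimal IBLT with insertion and peeling, sketched to illustrate the listing mechanism that union peelability builds on (cell layout, checksum derivation, and parameters are illustrative choices, not the paper's construction):

```python
import hashlib

M, K = 128, 3  # cells and hashes per item (toy parameters)

def _cells(key: int):
    # K distinct cell indices derived deterministically from the key.
    idxs, i = [], 0
    while len(idxs) < K:
        h = int.from_bytes(
            hashlib.sha256(f"{key}:{i}".encode()).digest()[:4], "big") % M
        if h not in idxs:
            idxs.append(h)
        i += 1
    return idxs

def _chk(key: int) -> int:
    # Checksum used to recognise "pure" cells.
    return int.from_bytes(hashlib.sha256(f"chk:{key}".encode()).digest()[:4], "big")

def new_iblt():
    return [[0, 0, 0] for _ in range(M)]  # (count, keySum, checkSum)

def insert(t, key):
    for j in _cells(key):
        t[j][0] += 1
        t[j][1] ^= key
        t[j][2] ^= _chk(key)

def peel(t):
    # Repeatedly extract keys from pure cells (count == 1, checksum matches),
    # removing each recovered key from all its cells.
    t = [c[:] for c in t]
    out, progress = set(), True
    while progress:
        progress = False
        for c in t:
            if c[0] == 1 and _chk(c[1]) == c[2]:
                key = c[1]
                out.add(key)
                for j in _cells(key):
                    t[j][0] -= 1
                    t[j][1] ^= key
                    t[j][2] ^= _chk(key)
                progress = True
    return out
```

Peeling succeeds with high probability when the table is lightly loaded; the protocol's contribution is recovering union elements directly from the two parties' original tables without materialising a combined one.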
Liquid Democracy With Two Opposing Factions
Liquid democracy is a transitive vote delegation process. Previously, the advantages of liquid democracy over direct voting have been studied in settings where there is a ground-truth ``good'' voting outcome. In this work, we analyse liquid democracy in a realistic setting with two opposing factions, without a ground truth and under uncertainty. Formally, we consider $n$ voters who want to decide on some binary issue by voting. Each voter has a preference in $\{0, 1\}$ that represents the voter's opinion on this issue; that is, a voter with preference 0 prefers to vote for option 0. We refer to voters with the same preference as being in the same faction. The goal is for voters in the same faction to cooperatively decide on vote delegation strategies that maximise their probability of winning the election. In this setting, we present a practical distributed algorithm, under realistic assumptions, that decides on an approximately optimal vote delegation strategy with minimal interaction and communication, and under incomplete information about the opposing faction. We then provide a complete analytical characterisation of optimal vote delegation strategies under complete information about the opposing faction. Finally, we show that finding optimal delegation strategies in the general setting is PSPACE-complete.
WOTS-Tree: Merkle-Optimized Winternitz Signatures for Post-Quantum Bitcoin
We present WOTS-Tree, a stateful hash-based signature scheme for Bitcoin that combines WOTS+ one-time signatures with a binary Merkle tree, supporting up to $2^{21}$ independent signatures per address. The construction instantiates XMSS with parameters specifically optimized for Bitcoin's UTXO model, using a dual hash function design: SHA-256 truncated to 128 bits ($n=16$, $w=256$) for WOTS+ chain evaluations, and full 256-bit SHA-256 for Merkle tree compression. Deployed as dual leaves within BIP-341 Taproot (compatible with the proposed BIP-360 Pay-to-Merkle-Root), the default (hardened) mode provides: (i) a fast-path for single-use UTXOs at 353 bytes, and (ii) a fallback tree for Replace-By-Fee and address reuse at 675 bytes ($K=1{,}024$). For Lightning channels, $K=2^{21}$ yields 1,028-byte witnesses with a 19-second one-time setup. An opt-in compact mode using truncated 128-bit Merkle nodes reduces witnesses to 515--692 bytes at the cost of reduced Merkle binding security ($\approx 60$ bits). WOTS-Tree achieves strict L1 verification bounds of 4,601 hashes ($\approx 0.009$ ms on SHA-NI hardware). The default parameterization provides 115.8-bit classical and 57.9-bit quantum forgery resistance, with the Merkle binding at $\approx 124$ bits, exceeding the WOTS+ forgery bound in both classical and quantum settings. We provide a complete security reduction with concrete bounds, a dual hash instantiation analysis, and a reference implementation with comprehensive test coverage. Default-mode witnesses are $4$--$7\times$ smaller than hypertree variants while aligning naturally with Bitcoin's spend-once transaction model.
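A stripped-down Winternitz chain following the paper's chain-hash shape (SHA-256 truncated to $n=16$ bytes, $w=256$). The two-chain layout below, with one message chain and one complementary checksum chain for a single byte, and the seed expansion are simplifications for illustration; real WOTS+ uses per-step masks and many chains:

```python
import hashlib

def H(x: bytes) -> bytes:
    # SHA-256 truncated to 16 bytes (n = 16).
    return hashlib.sha256(x).digest()[:16]

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = H(x)
    return x

def keygen(seed: bytes):
    # Two chain secrets derived from a seed (toy expansion).
    sk = [H(seed + bytes([i])) for i in range(2)]
    pk = [chain(s, 255) for s in sk]  # w = 256 => 255 chain steps
    return sk, pk

def sign(sk, m: int):
    # Message chain advanced m steps; checksum chain 255 - m steps,
    # so advancing one chain forces rewinding the other (forgery-resistant).
    return [chain(sk[0], m), chain(sk[1], 255 - m)]

def verify(pk, m: int, sig) -> bool:
    return (chain(sig[0], 255 - m) == pk[0] and
            chain(sig[1], m) == pk[1])
```

Verification walks each chain the remaining distance to the public key, which is where the scheme's strict hash-count verification bound comes from.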
Partially Non-Interactive Two-Round Threshold and Multi-Signatures with Tighter and Adaptive Security
Threshold and multi-signature schemes enable a group of signers to jointly sign messages. In the pairing-free discrete-logarithm setting, there have been extensive research efforts on two-round threshold and multi-signature schemes. A prominent category is partially non-interactive schemes, where the first round is message-independent and can be preprocessed offline. However, there exist gaps in security properties between partially non-interactive schemes and fully online schemes. While there are fully online schemes that achieve rewinding-free or fully adaptive security based on standard assumptions, existing partially non-interactive schemes require the adversary to act algebraically or rely on non-standard assumptions for these properties.
We bridge this gap by proposing a family of new constructions of partially non-interactive threshold and multi-signature schemes. Central to our constructions is a technique that renders HBMS (Bellare and Dai, Asiacrypt 2021) and its variants partially non-interactive. Specifically, we present the following instances:
- The first partially non-interactive two-round multi-signature scheme with a rewinding-free reduction under standard assumptions. This scheme provides a tighter security guarantee against non-algebraic adversaries than all prior partially non-interactive schemes.
- The first partially non-interactive two-round threshold signature scheme achieving fully adaptive security against non-algebraic adversaries under standard assumptions.
Distributed Monotone-Policy Encryption with Silent Setup from Lattices
Distributed cryptography serves as a cornerstone for building trustless systems, yet existing solutions for distributed policy encryption typically require either a trusted dealer or complex interactive setup protocols. Recent advances have introduced the concept of \emph{silent setup}, where users independently generate keys and a joint public key is derived deterministically. However, all current silent-setup constructions rely on bilinear pairings, leaving them vulnerable to quantum adversaries. In this work, we present the first distributed monotone-policy encryption scheme with silent setup based on lattices. Our construction supports arbitrary monotone policies represented as Disjunctive Normal Forms (DNFs) and provides a robust, post-quantum foundation for decentralized access control.
A Modular Approach to Succinct Arguments for QMA
Succinct argument systems are of central importance to modern cryptography, enabling the efficient verification of computational claims. In the classical setting, Kilian (STOC 92) established that any probabilistically checkable proof for NP can be transformed into a succinct argument system for NP using only collision-resistant hash functions. In the quantum setting, recent works have established the feasibility of (classically-verifiable) succinct arguments for QMA, capturing statements that require \emph{quantum} proofs. However, known constructions all rely on the highly structured assumption of learning with errors (LWE), which stands in stark contrast with the unstructured assumptions that suffice for NP.
In this work, we develop a new framework that broadens the cryptographic foundations of succinct arguments for QMA. We assume the existence of (i) an oblivious state preparation (OSP) protocol, which in turn can be constructed from \emph{plain} trapdoor claw-free functions, and (ii) collapsing hash functions, the quantum analogue of collision-resistance. In particular, we obtain the first succinct, classically-verifiable argument system for QMA which does not rely on the hardness of LWE.
Our construction proceeds in two steps. First, we design a \emph{round-efficient} classically-verifiable argument system for QMA based only on the assumption of OSP. Second, we introduce a \emph{generalized communication compression compiler}, which, assuming collapsing hash functions, transforms any $T$-round interactive protocol with a classical verifier into one in which the communication size is bounded by $T \cdot \mathrm{poly}(\lambda)$ for some fixed polynomial independent of the original size of each message. Our compiler extends a quantum rigidity-based communication compression technique of Zhang (QCrypt 25), and may be of independent interest.
Round-Optimal Byzantine Agreement without Trusted Setup
Byzantine Agreement is a fundamental primitive in cryptography and distributed computing, and minimizing its round complexity is of paramount importance. The seminal works of Karlin and Yao [Manuscript'84] and Chor, Merritt and Shmoys [JACM'89] showed that any randomized $r$-round protocol must fail with probability at least $(c\cdot r)^{-r}$, for some constant $c$, when the number of corruptions is linear in the number of parties, $t = \Theta(n)$. The work of Ghinea, Goyal and Liu-Zhang [Eurocrypt'22] introduced the first \emph{round-optimal BA} protocol matching this lower bound. However, the protocol requires a trusted setup for unique threshold signatures and random oracles.
In this work, we present the first round-optimal BA protocols without trusted setup: a protocol for $t<n/3$ with statistical security, and a protocol for $t<(1-\epsilon)n/2$ with any constant $\epsilon > 0$, assuming a bulletin-board PKI for signatures.
Issuer-Hiding for BBS Anonymous Credentials via Randomizable Keys
Anonymous credentials (AC) equip users with credentials on attested attributes, which enable them to prove verifiable yet data-minimizing statements over their attributes. However, in standard ACs, each credential presentation reveals the credential issuer, which could be more information than intended and necessary, e.g., when merely proving age or personhood. Issuer-Hiding Anonymous Credentials (IHAC) address this limitation and hide the issuer in the presentation. That is, they only reveal that the user has a credential from an issuer within a certain trust set, referred to as the policy. Recent works by Sanders and Traoré, and Katz and Sefranek show how to add issuer-hiding to PS- and BBS-based credentials while keeping presentations compact, i.e., not scaling in the number of issuers. However, both constructions require the verifier to generate dedicated policy key pairs, turning verification into a secret key operation. Managing these verifier-specific keys introduces additional complexity and affects the resulting practical privacy and security guarantees. In this work, we propose two IHAC schemes from BBS signatures that achieve compact presentations without the need for such verifier-specific keys. At the core of our schemes is a technique to randomize BBS public keys and adapt the signatures accordingly, which we believe to be of independent interest.
Additions, Multiplications, and the Interaction In-Between: Optimizing MPC Protocols via Leveled Linear Secret Sharing
In concretely efficient secure multiparty computation (MPC) protocols based on secret sharing, a frequent pattern is to locally multiply two shared values into some intermediate representation that is immediately and interactively translated back into sharings of the product. The intermediate representation is often still a full-fledged but different secret sharing scheme. This has been used to efficiently compute dot products by computing the sum of all intermediate products and then interactively translating only the sum instead of translating each individual product. Beyond that, the intermediate representation or secret sharing scheme has mostly been seen only as a necessary interim step, leaving most of its potential untapped.
We change that by proposing the paradigm of leveled linear secret sharing, which allows dynamic switching between the original secret sharing and the previously only intermediate one more freely, while enabling arbitrary linear computations in any of the domains. Prior multiplications are split into a non-interactive multiplication that switches from one to the other secret sharing, and an interactive upgrade back to the original secret sharing domain. The upgrade now does not necessarily follow each multiplication immediately, but just needs to be placed somewhere before the next multiplication is computed, possibly upgrading the linear aggregation of many multiplications' results. We apply this idea to improve three-party computation on replicated sharings (Araki et al., CCS'16), n-party BGW-style protocols (Ben-Or et al., STOC'88), and masked secret sharing protocols such as ABY2.0 (Patra et al., USENIX Security'21). We develop a novel optimizer that optimally selects which gate of a circuit is evaluated in which domain. With that, we improve communication by 10-37% for many circuits. Furthermore, we implement our generalization for replicated sharing, measure run time improvements of mostly 10-26% in a LAN, and make a full implementation of the protocol and our novel optimizer publicly available.
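The dot-product pattern described above, locally multiplying into an intermediate additive sharing and aggregating before a single interactive translation, can be sketched for 2-out-of-3 replicated sharing in the style of Araki et al. (the field and helper names are illustrative):

```python
import random

P = 2**61 - 1  # illustrative prime field

def rep_share(x, rng):
    # 2-out-of-3 replicated sharing: x = x0 + x1 + x2 mod P;
    # party i holds the pair (x_i, x_{i+1}).
    x0, x1 = rng.randrange(P), rng.randrange(P)
    parts = [x0, x1, (x - x0 - x1) % P]
    return [(parts[i], parts[(i + 1) % 3]) for i in range(3)]

def local_cross_term(xi, yi):
    # Party i's local additive share of x*y; summing over i covers all
    # nine cross terms of (x0+x1+x2)(y0+y1+y2) exactly once.
    (a, b), (c, d) = xi, yi
    return (a * c + a * d + b * c) % P

def dot_product_local(xs, ys):
    # Aggregate all products' additive shares BEFORE the single
    # interactive upgrade back to replicated sharing: one round of
    # communication per dot product instead of per multiplication.
    acc = [0, 0, 0]
    for x_sh, y_sh in zip(xs, ys):
        for i in range(3):
            acc[i] = (acc[i] + local_cross_term(x_sh[i], y_sh[i])) % P
    return acc
```

The leveled paradigm generalises exactly this: any linear computation may happen in the intermediate (additive) domain, and the interactive upgrade is placed only where the next multiplication requires it.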
High-Precision Functional Bootstrapping for CKKS from Fourier Extension
We introduce a new (amortized) functional bootstrapping framework over the CKKS homomorphic encryption (HE) scheme based on Fourier extension. While approximating the modular reduction function in CKKS bootstrapping through Fourier series is a well-known technique, how such a method can be efficiently generalized to functional bootstrapping is less understood. In this work, we show that, by constructing proper Fourier extensions, any function with a bounded domain in the smoothness class $C^{\kappa}$ can be approximated by a degree-$n$ Fourier series with errors of order $O(n^{-\kappa-2})$ (except at the singularities), improving on the previous global error bound of $O(n^{-1})$ [AKP2025]. To achieve such a bound, we propose a new way of constructing Fourier extensions, such that the extended functions appear as smooth as possible in the sense of a Fourier approximation. By implementing our functional bootstrapping over OpenFHE, we demonstrate that we can improve the data precision by $10$-$27$ bits and reduce the amortized FBS latency by $1.1$-$2\times$ over a variety of benchmarking functions.
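The effect of extension smoothness on Fourier truncation error can be demonstrated numerically: an analytic periodic extension converges far faster than a discontinuous one. The functions, mode counts, and sample counts below are illustrative, not the paper's construction:

```python
import cmath
import math

def fourier_approx(f, n_modes, n_samples=256):
    # DFT-based Fourier coefficients of f on [0, 1), truncated to |k| <= n_modes.
    xs = [j / n_samples for j in range(n_samples)]
    coef = {k: sum(f(x) * cmath.exp(-2j * math.pi * k * x) for x in xs) / n_samples
            for k in range(-n_modes, n_modes + 1)}
    def approx(x):
        return sum(c * cmath.exp(2j * math.pi * k * x) for k, c in coef.items()).real
    return approx

smooth = lambda x: math.exp(math.cos(2 * math.pi * x))  # analytic periodic extension
saw = lambda x: x                                       # discontinuous periodic extension

a_smooth = fourier_approx(smooth, 16)
a_saw = fourier_approx(saw, 16)
pts = [0.1 * t for t in range(1, 10)]  # interior points, away from the jump
err_smooth = max(abs(smooth(x) - a_smooth(x)) for x in pts)
err_saw = max(abs(saw(x) - a_saw(x)) for x in pts)
```

With 16 modes the analytic extension is accurate to near machine precision while the sawtooth retains $O(1/n)$-scale error even away from its jump, mirroring why constructing smoother extensions improves functional bootstrapping precision.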
Careful with the Ring: Enhanced Hybrid Decoding Attacks against Module/Ring-LWE
In order to reduce size and improve efficiency, many lattice-based cryptographic schemes adopt structured variants of the Learning With Errors (LWE) problem, such as the Module-LWE and Ring-LWE. Nevertheless, when analyzing the concrete security of lattice-based schemes, these algebraic structures are usually not considered, given the absence of techniques to exploit them for accelerating known attacks.
For the widely-used polynomial ring $\mathbb{Z}_q[X]/(X^N+1)$, we first propose an enhanced hybrid decoding attack against Module/Ring-LWE by leveraging the ring structure to accelerate its guessing and decoding steps. Then, we theoretically show that compared to the prior hybrid decoding attack, our new attack can lead to a complexity improvement by a factor of $O(N)$ in the sparse secret setting. Moreover, we implement our new enhanced hybrid decoding attack on the benchmark instances established by [WSM+25, S{\&}P], and achieve several new records. In particular, compared with state-of-the-art methods given by [KKN+26, EC], our approach is 17$\times$ to 114$\times$ faster on the known broken instances. Finally, we show how to estimate the concrete bit security with our new hybrid attack under the same model as in the lattice estimator, and perform the analyses of the latest sparse Ring-LWE parameter sets used in Fully Homomorphic Encryption (FHE) schemes including [JM22, EC], [CCKS23, CCS], [BCKS24, EC], [CHKS25, EC] and [AKP25, C]. The numerical results show that compared to the best-known attack, our enhanced attack can improve the attack complexity by 7-13 bits for all the considered parameter sets. In particular, under our new enhanced hybrid decoding attack, 10 out of the 16 parameter sets fall below the targeted 128-bit security level.
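The ring structure being exploited is visible in miniature: in $\mathbb{Z}_q[X]/(X^N+1)$, multiplication by $X^t$ is a signed (negacyclic) rotation of the coefficient vector, so a single ring secret yields $N$ related rotated instances. A toy sketch (parameters are illustrative, and this only shows the structure, not the attack):

```python
Q, N = 97, 8  # toy modulus and ring dimension

def negacyclic_mul(a, b):
    # Schoolbook multiplication in Z_Q[X]/(X^N + 1), using X^N = -1.
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % Q
            else:
                res[k - N] = (res[k - N] - ai * bj) % Q  # wraparound negates
    return res

def rotate(s, t):
    # Multiplication by X^t (t < N): coefficients shift up by t, and the
    # t coefficients that wrap around come back negated.
    x_t = [0] * N
    x_t[t] = 1
    return negacyclic_mul(s, x_t)
```

Because each rotation of a sparse secret is equally sparse, guessing one rotation effectively covers all of them, which is the kind of $O(N)$ saving the hybrid attack's guessing step exploits.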
Cube and Integral Attacks on ChiLow-32
The protection of executable code in embedded systems requires efficient mechanisms that ensure confidentiality and integrity.
Belkheyar \emph{et al.} recently proposed the Authenticated Code Encryption (ACE) framework, with \chilow as the first ACE-2 instantiation at EUROCRYPT~2025.
\chilow-(32 + $\tau$) is a 32-bit tweakable block cipher combined with a pseudorandom function, featuring quadratic nonlinear layers called ChiChi (\dchi) and a nested tweak/key schedule optimized for low-latency \emph{decryptions} in secure code execution under strict query limits.
In this paper, we exploit the algebraic structure of \dchi and study the resistance of \chilow-(32 + $\tau$) to cube-like and integral cryptanalysis in single- and multiple-tweak settings.
In the multiple-tweak setting, we present conditional attacks that can recover the full key for 5-round \chilow-(32 + $\tau$) with practical complexity, and extend the analysis to 6 rounds at a still non-trivial but purely theoretical cost below brute force.
We additionally construct borderline cube attacks on 5- and 6-round \chilow-(32 + $\tau$), each capable of recovering the full key with practical complexity.
Specifically, we recover the full key for 5-round \chilow-(32 + $\tau$) using $2^{32}$ decryptions, $2^{18.58}$ chosen ciphertext data, and $2^{33.56}$ bits of memory, and for 6-round \chilow-(32 + $\tau$) using $2^{34}$ decryptions, $2^{33.58}$ chosen ciphertext data, and $2^{54.28}$ bits of memory.
We then focus on integral cryptanalysis and the challenge of extending the analysis to 7 rounds.
We identify integral distinguishers in the single- and multiple-tweak models and extend suitable 2-round and 3-round integral distinguishers to build a 7-round attack.
We present a nested strategy to recover all round tweaks and tackle the problem of deriving the master key from round-tweak and key information.
Our key-recovery method exploits high-degree monomials that arise in the integral key-recovery phase to reduce the average number of guessed key bits and hence reduce the time complexity.
As a result, we mount a 7-round key-recovery attack on \chilow-(32 + $\tau$) that requires $2^{6.32}$ chosen ciphertext data, has a time complexity of about $2^{108.55}$ encryptions, and needs negligible memory.
Notably, all our attacks remain consistent with the security claims of the design.
SPRINT: New Isogeny Proofs of Knowledge and Isogeny-Based Signatures
Zero-knowledge proofs of knowledge are a fundamental building block in many isogeny-based cryptographic protocols, such as signature schemes based on identification-to-signature transformations, or multi-party ceremonies that avoid a trusted setup, in particular for generating supersingular elliptic curves with unknown endomorphism rings.
In this paper, we construct SPRINT, an efficient polynomial IOP-based proof system that encodes the radical $2$-isogeny formulas into a system of multivariate polynomials. When combined with the recent polynomial commitment scheme (PCS) DeepFold, our construction yields substantial improvements over state-of-the-art isogeny proofs of knowledge. For the SQIsign prime $p=5 \cdot 2^{248}-1$ (giving NIST security level I), our implementation takes only a few milliseconds for proving and verification, with proof sizes around 80 kB. Compared to previous works, we achieve a $1.1$-$8\times$ speedup for the prover, a $4.4$-$24\times$ speedup for verification, and proof sizes that are $1.2$-$2.3\times$ smaller across different parameter sets.
Moreover, we study the weak simulation extractability of our proof system, which we can use as a starting point for a modular construction of signatures. We show that any Fiat–Shamir compiled interactive proof with a so-called canonical simulator is weakly simulation-extractable. We expect this general result to be widely applicable to other post-quantum proof systems and thus of independent interest.
Building on SPRINT and our wSE result, we introduce a new family of signature schemes whose security solely relies on the $\ell$-isogeny path problem, a foundational problem in isogeny-based cryptography. As a concrete instantiation, we construct a signature scheme using DeepFold as the PCS. Across the different NIST security levels, a prototype implementation of our scheme achieves performance on par with the highly optimized NIST specification for SQIsign. Even though our signatures are relatively large, our scheme relies on weaker assumptions and our framework offers flexibility for tradeoffs and optimizations – both within a given PCS and by switching to alternative PCS constructions. In particular, it will naturally inherit efficiency gains from future advances in plausibly post-quantum secure PCS constructions.
LazyArc: Dynamic Out-of-Order Engine for High-Throughput FHE
Fully Homomorphic Encryption (FHE) is a modern cryptographic technique that allows performing computations directly over encrypted data. This makes FHE an indispensable method for privacy-preserving applications, where users' data are encrypted and processed by a potentially untrusted third party. Nevertheless, FHE computations are computationally expensive, often rendering them less practical for realistic scenarios. Notably, a major performance bottleneck for FHE is an operation called bootstrapping, which refreshes the inherent noise of an FHE ciphertext so that it can support further computations. In this work, we introduce LazyArc, a versatile lightweight dynamic Out-of-Order (OoO) engine that supports higher-throughput FHE computations expressed as a sequence of instructions. Notably, LazyArc encapsulates a hybrid engine capable of evaluating both arithmetic and Boolean instructions in the same program. Moreover, our proposed OoO paradigm improves runtime performance by masking the latency of bootstrapping with the execution of independent instructions in an FHE application. To enable LazyArc, we introduce a novel data structure, dubbed RegisterMap, which performs static analysis on FHE arithmetic circuits and tracks the noise of each ciphertext to allow proactive bootstrapping scheduling. Our approach is evaluated using linear algebra benchmarks and achieves about 10% performance improvement over baselines.
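The proactive scheduling idea behind a RegisterMap-style tracker can be sketched as static noise accounting that inserts a bootstrap before a multiplication would exhaust the budget. The noise model, budget, and class shape here are illustrative assumptions, not LazyArc's implementation:

```python
MAX_NOISE = 4  # toy noise budget before a ciphertext must be refreshed

class RegisterMap:
    # Illustrative static noise tracker: each multiply grows noise by one
    # level; bootstrapping resets it. Bootstraps are scheduled *before*
    # the budget would be exceeded rather than reactively afterwards.
    def __init__(self):
        self.noise = {}
        self.trace = []  # emitted instruction schedule

    def fresh(self, reg):
        self.noise[reg] = 1

    def mul(self, dst, a, b):
        # Proactively refresh any operand whose noise would overflow.
        for r in (a, b):
            if max(self.noise[a], self.noise[b]) + 1 > MAX_NOISE and self.noise[r] > 1:
                self.trace.append(("bootstrap", r))
                self.noise[r] = 1
        self.noise[dst] = max(self.noise[a], self.noise[b]) + 1
        self.trace.append(("mul", dst, a, b))
```

Because bootstraps appear in the schedule ahead of the instructions that need them, an out-of-order engine can overlap each bootstrap with independent instructions instead of stalling on it.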
Janus-FHE: A Side Channel Resilient Framework for High-Degree Homomorphic Encryption on GPUs
Homomorphic Encryption (HE) enables secure cloud computing through computations on encrypted data, yet the physical security of these implementations on shared hardware accelerators remains a critical challenge. While Graphics Processing Units (GPUs) offer the massive parallelism required for HE workloads, their Single Instruction Multiple Thread (SIMT) architecture amplifies side-channel vulnerabilities. Standard implementations of polynomial multiplication and relinearization often exhibit data-dependent control flows and irregular memory access patterns that leak sensitive information through variable timing behavior. In this paper, we present Janus-FHE, a GPU-based framework for BFV ciphertext multiplication and relinearization designed with intrinsic side-channel resistance. We reformulate polynomial multiplication as large-integer arithmetic via Kronecker substitution, executing it using a Schönhage–Strassen algorithm based on the Discrete Galois Transform (DGT). Critically, we compute these transforms using the Stockham algorithm, which enforces strictly deterministic, input-independent memory access patterns, effectively mitigating cache-timing vulnerabilities. Furthermore, we implement a constant-time relinearization strategy that replaces conditional branching with masked arithmetic to prevent warp divergence. Our experimental evaluation confirms that Janus-FHE eliminates the control-flow leakage observed in state-of-the-art libraries like HEonGPU, extending the computational reach of GPU-based FHE by successfully computing multiplications for polynomial degrees up to $2^{18}$.
Scytale: A Compiler Framework for Accelerating TFHE with Circuit Bootstrapping
Fully Homomorphic Encryption (FHE) offers strong cryptographic guarantees for secure outsourced computation, yet the performance of modern schemes like TFHE remains a barrier for complex applications. Existing TFHE approaches relying on programmable bootstrapping (PBS) are inefficient for large circuits, as they are limited to evaluating small (3-4 bit) lookup tables (LUTs).
Our work introduces a novel compiler framework that overcomes this limitation by integrating circuit bootstrapping (CBS) and vertical packing (VP) to enable the evaluation of circuits composed of LUTs up to 12 bits. Our framework, built upon MLIR, introduces new dialects for CBS and VP and leverages Yosys for circuit synthesis, automating the translation from high-level programs to optimized TFHE circuits. Furthermore, we propose bespoke optimization passes that combine shared LUTs to minimize the overall cryptographic operations required. Experimental results demonstrate that our CBS-based design achieves execution times several times faster than the baseline PBS-only approach, highlighting the practical benefits of combining CBS and VP with compiler-driven circuit-level optimizations.
Improved preprocessing for the Crossbred algorithm and application to the MQ problem
First, we correct certain omissions in the literature on the complexity analysis of Crossbred and give a full analysis of this algorithm.
Secondly, we propose a criterion to reduce the number of polynomials generated in the preprocessing step for a set of admissible parameters $D$, $d$ and $k$, whenever this step of the algorithm produces more polynomials than necessary. We conclude by applying this criterion to the security of MQOM.
Cyclo: Lightweight Lattice-based Folding via Partial Range Checks
Folding is a powerful technique for constructing efficient succinct proof systems, especially for computations that are expressed in a streaming fashion.
In this work, we present Cyclo, a new lattice-based folding protocol that improves upon LatticeFold+ [Boneh and Chen '25] in multiple dimensions and incorporates, among others, the pay-per-bit techniques from Neo [Nguyen and Setty '25] when folding constraints expressed over a field $\mathbb{F}_q$.
Cyclo proposes a new framework for building lattice-based folding schemes that eliminates the need for norm checks \emph{on the accumulator} by adopting an amortized norm-refreshing design, ensuring that the witness norm grows additively per round within a (generously) bounded number of folds. This design simplifies the protocol and reduces prover overhead. In particular, Cyclo only performs range checks on the input \emph{non-accumulated} witness, and when applied to fold constraints over $\mathbb{F}_q$, it does not decompose any witnesses into low-norm chunks within the folding protocol itself.
Cyclo, supporting a complete family of cyclotomic rings, combines two simple building blocks: an extension commitment that reduces the norm of the witness by decomposing it and recommitting, and an $\ell_\infty$ range test via a sum-check protocol.
We demonstrate, by proving communication and runtime estimates, that the construction results in an efficient and proof-size-friendly folding scheme.
We also establish an algebraic connection between $\mathcal{R}_q$ and $\mathbb{F}_q$ using the polynomial evaluation map, enabling efficient reduction from R1CS/CCS over $\mathbb{F}_q$ to a linear relation over $\mathcal{R}_q$, providing a new and simpler formulation of the techniques in [Nguyen and Setty '25].
In practical settings, Cyclo achieves succinct proof sizes on the order of $30$ KB, improving by an order of magnitude over LatticeFold+. Our efficiency benchmarks indicate that our protocol also outperforms LatticeFold+ in practice.
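The norm-reduction step behind the extension commitment is, at its core, digit decomposition: splitting a witness into base-$b$ digits bounds the infinity norm by $b$, and recomposition is exact. A minimal Python sketch of that arithmetic (our own illustration over plain integers; Cyclo's commitments operate over cyclotomic rings):

```python
def decompose(w, base, k):
    """Base-`base` digit decomposition of the vector w into k digit
    vectors, each with entries in [0, base), i.e. small infinity norm."""
    return [[(x // base**j) % base for x in w] for j in range(k)]

def recompose(digits, base):
    """Inverse of decompose: recombine digit vectors via powers of base."""
    k, n = len(digits), len(digits[0])
    return [sum(digits[j][i] * base**j for j in range(k)) for i in range(n)]
```

Committing to the digit vectors instead of `w` is what keeps the committed norm small after each fold.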
Round-Based Approximation of (Higher-Order) Differential-Linear Correlation
This paper presents a new method for approximating the correlations of differential-linear distinguishers.
From the perspective of Beyne's geometric approach, the differential-linear correlation is a corresponding coordinate of the \textit{correlation vector} associated with the ciphertext multiset, which can be obtained by using the correlation matrix of the \textit{2-wise form} of the cipher.
The composite nature of the correlation matrix leads to a round-based approach to approximate the correlation vector.
This simple approximation is remarkably precise, yielding the most accurate estimation for differential-linear correlations in \ascon thus far and the first DL distinguishers for 6-round \ascon-128a initialization.
For \present, we present 17-round DL distinguishers, 4 rounds longer than the current record.
To apply the round-based approach to ciphers that use a large Chi ($\chi$) function as their nonlinear layer, we derive theorems to handle the correlation propagation for $\chi$ and its 2-wise form.
Strong DL distinguishers for up to 6 rounds of \subterranean and \koala-$p$ are provided, 2 rounds longer than the previous differential and linear distinguishers.
Furthermore, the round-based approximation idea is also extended to the higher-order differential-linear distinguishers.
We give the first second-order DL distinguisher for 6-round \ascon-128 initialization and the first second-order DL distinguishers for up to 7 rounds of \subterranean and \koala-$p$.
Simulating Noisy Leakage with Bounded Leakage: Simpler, Better, Faster
Theoretical treatments of leakage-resilient cryptography typically work under the assumption that the leakage learned by the adversary (e.g., about an n-bit secret key) is arbitrary but bounded, in the sense that the leakage is an l-bit string for some threshold l significantly smaller than n. On the other hand, real-world side-channel attacks on physical implementations of cryptographic protocols produce leakage transcripts that are much longer than n. However, unlike the bounded leakage model, these transcripts are inherently noisy. We would like to generically claim that cryptographic schemes resilient to bounded leakage are also resilient to realistic noisy leakages. This boils down to showing that noisy leakages can be simulated using only one bounded leakage query. Prior work (EUROCRYPT 2021 and CRYPTO 2024) made important progress on this problem. Yet, barriers to applicability and interpretability remain, such as the need for large noise levels, the difficulty of estimating the necessary parameters of the leakage distributions, undesirable independence assumptions, and inefficient simulation in certain regimes. In this work, we resolve (or make progress towards resolving) these shortcomings:
1. We show that simple modifications to the simulation strategies in prior work simultaneously allow a cheaper computation of simulation parameters and better parameters than previous results.
2. Leveraging the first item, we obtain a reduction in which the amount of extra bounded leakage needed to simulate correlated signals increases only mildly. This captures an adversary's limited incentive to oversample a side-channel signal and obtain correlated measurements, improving on previous results that treat these samples as independent.
3. We establish a new ``bounded leakage vs.\ simulation efficiency'' tradeoff, roughly trading $\mathcal{O}(\Delta)$ bits leaked by the bounded leakage query for a $\frac{2^{\Delta}}{\mathrm{poly}(\Delta)}$-factor reduction in simulation complexity. This widens the applicability of our results in the context of computational security, as former simulators were only efficient when simulating from $\mathcal{O}(\log \lambda)$ bits of bounded leakage, with $\lambda$ the security parameter.
Publicly Certifiable Min-Entropy Without Quantum Communication
Is it possible to publicly certify that a string was sampled from a high min-entropy distribution?
Certified randomness protocols, such as that of Brakerski et al. (FOCS 2018), enable private certification (Alice can convince Bob) but do not yield public certification. We construct a certified min-entropy scheme with the following properties: (1) public certification, so Alice can convince Bob, Charlie, and Dave; (2) all prover–verifier communication is classical; (3) transferability: if Bob has already been convinced, he can subsequently convince Eve and Frank; and (4) classical verification: Grace can be convinced even without a quantum computer, at the cost of losing transferability.
Assuming quantum one-shot signatures (and variants), we construct quantum fire with new properties and use it to obtain our publicly certifiable min-entropy scheme. Both primitives can be instantiated from sub-exponential iO and LWE, and our quantum fire scheme is the first standard-model construction of quantum fire.
Forget-IT: Optimal Good-Case Latency For Information-Theoretic BFT
The good-case latency of a consensus protocol measures the latency from block proposal by a consensus leader to decision, in the case in which the leader is correct. It is arguably the efficiency metric most pertinent to the practical latency performance of consensus protocols. While good-case latency is well understood in the authenticated setting, where PBFT [Castro 99], Tendermint [Buchman 16], and Simplex [Chan, Pass 23] achieve the optimal 3 rounds, significant gaps remain in the unauthenticated setting. We present Forget-IT, an unauthenticated consensus protocol with optimal good-case latency of 3 rounds. Furthermore, our protocol requires only constant persistent storage and has $O(n^2)$ message complexity per view.
Structural Collapse of the Amutha-Perumal Scheme Based on Duo Circulant Matrices
Amutha and Perumal recently proposed a two-party key exchange protocol based on \(\alpha\)-\(v\)-\(w\)-duo circulant matrices over the max-plus semiring, claiming resistance to known tropical attacks and suitability for IoT environments. This paper presents a complete cryptanalysis of this protocol. We uncover a fundamental structural weakness: the construction imposes an affine parameterization on the secret matrices, reducing each matrix to a single integer parameter. Consequently, the public messages become simple shifts of a publicly computable matrix, and the session key reduces to a constant shift of another public matrix. An eavesdropper can recover the shared secret in constant time after a one-time precomputation. The attack is deterministic, succeeds with probability 1, and requires only passive observation. We verify the attack on the authors' own example.
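The structural collapse rests on a general tropical identity: adding a constant to a matrix commutes through the max-plus product, so an affine-parameterized secret leaks as a constant offset of a public matrix. A quick Python check of that identity (our own illustration, not the full attack):

```python
def mp_mul(A, B):
    """Matrix product over the max-plus semiring:
    (A ⊗ B)[i][j] = max over k of (A[i][k] + B[k][j])."""
    m, p = len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(len(A))]

def shift(c, A):
    """Tropical scalar action: add the constant c to every entry of A."""
    return [[c + x for x in row] for row in A]

A = [[0, 3], [2, 1]]
B = [[5, 1], [4, 4]]
# A secret of the form shift(c, A) produces a public message that is
# just a constant shift of the publicly computable product A ⊗ B:
assert mp_mul(shift(7, A), B) == shift(7, mp_mul(A, B))
```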
Dual-Syncopation Meet-in-the-Middle Attacks: New Results on SHA-2 and MD5
In this paper, we introduce a novel framework for meet-in-the-middle (MITM) attacks on ARX designs, termed the \textit{dual-syncopation} MITM attack, which formalizes a compact, rule-based language for tracking deterministic and non-deterministic information across two independent propagations through ARX operations. The language provides a uniform abstraction for previous techniques, e.g., the initial structure, partial matching, and partial-fixing, and naturally supports automation. With this language, we additionally encode new technical insights that are not covered in the literature and build an efficient automatic search tool for MITM attacks on ARX designs that can be fully optimized within hours.
As a result, we obtain the first preimage attacks on 46- and 47-step SHA-256, extending the previous record of 45 steps by two steps. For SHA-512, we present the first preimage attack on 51 steps, extending the prior record of 50 steps by one step. In addition, we provide a collection of improved preimage attacks on 43-, 44-, and 45-step SHA-256; 46-, 47-, 48-, 49-, and 50-step SHA-512; as well as full-step MD5. The proposed attacks can be converted to free-start collision attacks with the technique proposed by Li, Isobe, and Shibutani at FSE 2012. Our results mark the first improvements on theoretical attacks on SHA-2 in a decade and push the boundary of cryptanalysis of ARX designs.
Migrating Bitcoin and Ethereum Addresses to the Quantum Blockchain Era
Recent advances in quantum computing threaten the cryptographic foundations of blockchain systems, including Bitcoin and Ethereum, which rely on elliptic-curve cryptography (ECC) for security. Algorithms such as Shor's algorithm can efficiently solve the discrete logarithm problem (DLP), enabling recovery of private keys from public keys. Existing funds, especially those tied to long-lived addresses or unspent coinbase outputs (such as Satoshi Nakamoto's bitcoins), and Ethereum externally owned accounts become vulnerable once large-scale quantum computers become available. While previous work has suggested post-quantum signature schemes and migration strategies, no widely deployed, end-to-end, backward-compatible, and privacy-preserving migration mechanism has been presented for migrating legacy funds without revealing the corresponding classical public keys on-chain.
In this paper, we present a complete framework for secure migration of both spent and unspent Bitcoin and Ethereum assets to a post-quantum (PQ) security model, using a hybrid approach based on post-quantum signatures and quantum-resistant zero-knowledge proofs (ZKPs). We design zkSTARK circuits that prove knowledge of a witness linking a legacy Bitcoin or Ethereum address to a fresh PQ public key without disclosing the legacy elliptic-curve public key on-chain. We also formalize a one-way post-quantum transition model for migrated assets: legacy authorization is used only at enrollment, while future authorization semantics are governed by post-quantum credentials and migrated-state registry semantics. We further show why hash-security margins must be re-evaluated in the quantum setting by distinguishing collision resistance (BHT-style attacks, approximately $2^{n/3}$) from preimage resistance (Grover-style attacks, approximately $2^{n/2}$), and we motivate hash agility for migration-era commitments and registries. To enable verifiable on-chain transitions, we propose new primitives (\texttt{OP\_CHECKQUANTUMSIG}, \texttt{OP\_CHECKSTARKPROOF}), enabling verification of quantum-safe proofs and signatures. Our work and implementation\footnote{\href{https://github.com/skardas/pq_bitcoin}{\texttt{github.com/skardas/pq\_bitcoin}}} provide a practical framework for securing legacy blockchain assets against quantum-era threats while preserving backward compatibility and operational continuity.
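The quantum hash-security margins above amount to simple arithmetic on the output length; a sketch mirroring the abstract's approximations (our own helper name):

```python
def quantum_hash_margins(n):
    """Approximate quantum attack costs (log2 of query counts) for an
    n-bit hash: BHT-style collision search ~ 2^(n/3), Grover-style
    preimage search ~ 2^(n/2)."""
    return {"collision_bht": n / 3, "preimage_grover": n / 2}

m = quantum_hash_margins(256)   # e.g. SHA-256-sized output
```

For $n = 256$ this leaves roughly an 85-bit collision margin but a 128-bit preimage margin, which is why collision-bound commitments deserve re-evaluation first.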
Lie algebras and the security of cryptosystems based on classical varieties in disguise
In 2006 de Graaf et al. devised a Lie-algebra-based strategy for finding a linear transformation $T \in PGL_{N+1}(\mathbb{Q})$ connecting two linearly equivalent projective varieties $X, X' \subseteq \mathbb{P}^N$ over $\mathbb{Q}$. The method succeeds for several families of "classical" varieties such as Veronese varieties, which have large automorphism groups. In this paper, we study the Lie algebra method over finite fields, which comes with new technicalities when compared to $\mathbb{Q}$ due to, e.g., the characteristic being positive. Concretely, we make the method work for Veronese varieties of dimension $r \geq 2$ and (heuristically) for secant varieties of Grassmannians of planes. This leads to classical polynomial-time attacks against two candidate-post-quantum key exchange protocols based on disguised Veronese surfaces and threefolds, which were recently proposed by Alzati et al., as well as a digital signature scheme based on secant varieties of Grassmannians of planes due to Di Tullio and Gyawali. We provide an implementation in Magma.
Hybridization of Cryptographic Primitives: A Generalized Framework for Adaptive Security
Hybrid cryptographic schemes combine multiple primitives to provide resilience against diverse threats, particularly in the post-quantum era where classical algorithms face potential quantum attacks. However, existing hybrid approaches rely on predefined, fixed pairings of specific cryptographic algorithms, limiting their adaptability to evolving security requirements and heterogeneous deployment environments. This paper presents a generalized framework for the hybridization of cryptographic primitives that enables dynamic, user-driven composition of encryption schemes and digital signatures. Our approach leverages all-or-nothing transformations (AONTs) to construct hybrid schemes where an adversary must break all constituent primitives simultaneously to compromise the system. We formally prove that if at least one component scheme remains secure (IND-CPA for encryption, EUF-CMA for signatures), the entire hybrid construction achieves security equivalent to its strongest component. Unlike conventional approaches that prescribe specific algorithm combinations, our framework allows flexible selection and integration of classical, post-quantum, or mixed cryptographic primitives based on specific security requirements, computational constraints, and threat models. Our generalized hybridization methodology naturally extends to key encapsulation mechanisms and other cryptographic primitives, providing a foundation for building future adaptive cryptographic systems that remain secure even as individual components are compromised over time. This addresses a critical gap in current cryptographic practices and will provide users a methodology to construct flexible, robust security architectures for the post-quantum era.
Multipath PA-PUFs generate all Boolean functions
In this paper, we propose a generalized model of the Priority Arbiter-based Physical Unclonable Function (PA-PUF) with an arbitrary number of paths inside each switch. We first develop a mathematical model for this generalization. Experimentally, we observed that the class of Boolean functions generated by our PA-PUF model increases proportionally with the number of paths inside each switch, which motivated us to attempt one of the open challenges posed by Kansal et al. [DAM 2024]. We first show that the set of Boolean functions generated by an $i$-length PA-PUF with $(i+1)$ paths is a proper superset of the set generated by an $i$-length PA-PUF with $i$ paths. Building on this, our main result shows that at least $(n+1)$ paths are needed inside each switch of an $n$-length PA-PUF to generate all Boolean functions on $n$ variables. Furthermore, we performed extensive software and hardware experiments to assess the resilience of our model against machine-learning-based modeling attacks.
Provable Security and Privacy Analysis of WPA3's SAE and SAE-PK Protocols
SAE and SAE-PK are the core security protocols introduced in the latest Wi-Fi security standard, WPA3, to protect personal networks. SAE-PK extends SAE to prevent the so-called evil twin attacks, where an attacker with the knowledge of the password attempts to impersonate a legitimate access point. In this work, we present the first provable security and privacy analysis of SAE and SAE-PK. We introduce formal models that capture their intended properties and use these models to analyze the guarantees these protocols provide.
First, we identify an attack that prevents SAE from fulfilling its intended authentication guarantees. As a result, SAE can only be proven secure within a weaker security model, which we also formalize and in which we give the proof. To achieve the desired level of security, we propose two simple fixes, resulting in two efficient SAE protocols that we call SAEv2 and SAEv3. We prove that both protocols meet the intended security guarantees, with SAEv3 providing greater robustness.
Next, we prove that SAE-PK is indeed secure against evil twin attacks, but its current design introduces a theoretical vulnerability to offline dictionary attacks, which contradicts the expected security guarantees of SAE-PK as an enhanced password-authenticated key exchange protocol. To remedy this, we show that SAE-PK can be modified with minimal changes to fully realize its desired security goals.
Finally, we analyze the privacy guarantees of SAE, SAE-PK, and our proposed enhanced variants. We prove that their cryptographic core preserves the unlinkability of client devices across distinct Wi-Fi networks, if MAC address randomization is properly applied.
Relaxed Modular PCS from Arbitrary PCS and Applications to SNARKs for Integers
\emph{Modular Polynomial Commitment Schemes (Mod-PCS)} extend standard PCSs by enabling provable evaluation of integer polynomials modulo a random modulus, providing a natural foundation for SNARKs that operate directly over large integers without emulating arithmetic in finite fields. Only two Mod-PCS constructions are known. The first (Campanelli and Hall-Andersen, IACR ePrint 2024) serves primarily as a feasibility result and is impractical and not post-quantum secure due to its reliance on groups of unknown order. The second (Garetta et al., CRYPTO 2025) introduces the weaker notion of \emph{relaxed} Mod-PCS, but is not fully succinct: committing to a multilinear polynomial with $N$ terms and $B$-bit coefficients requires $O(\sqrt{N}B)$ proof size and verification time.
We present a black-box transformation that builds relaxed Mod-PCS from any standard PCS, enabling new constructions. Instantiating our transformation with a tensor-code PCS yields the first relaxed Mod-PCS with $O(\log (N+B))$ proof size and verifier time, which is transparent and plausibly post-quantum secure. Using this scheme within the framework of Garetta et al., we obtain the first fully succinct SNARK for the Customizable Constraint System over $\mathbb{Z}_B$, achieving $O(B\log N + N\log N \log B)$ prover time and $O(\log (N+B))$ verifier time and proof size.
Our approach relies on a commitment-switching technique for integer polynomials and a new batched integer commitment scheme from any PCS. We further introduce improved arguments for integer addition and multiplication, correctness of the number-theoretic transform, and general Diophantine relations over committed integers.
Lighthouse: Single-Server Secure Aggregation with $O(1)$ Server-Committee Communication at Scale
Secure aggregation is a core primitive for privacy-preserving federated learning, enabling a server to compute aggregates of client updates without learning individual inputs. Recent protocols have explored committee-based designs to reduce client overhead and tolerate weakly connected participants. However, existing approaches still incur communication and computation costs that scale with the number of clients and/or the size of model updates. This becomes a serious bottleneck in the interaction between the server and the committee, given that model updates are high-dimensional and the committee is a small set of clients.
We present Lighthouse, a new secure aggregation protocol that supports one-shot client communication and achieves constant committee computation and communication overhead with the server, independent of both the number of clients and the size of the input vector. Our protocol attains the best-known round complexity of two rounds, matching OPA (CRYPTO 2025) and TACITA (ePrint 2025) and improving upon Flamingo (IEEE S&P 2023) and Willow (CRYPTO 2025). Our core technical contribution is a novel application of recent advances in batched threshold encryption, which enables succinct server–committee interaction while preserving security and correctness. Beyond asymptotic improvements over prior works, Lighthouse yields substantial concrete efficiency gains: for an aggregation with 1024 clients, we reduce server-to-committee communication by over 100× and committee-to-server communication by over 300× compared to Flamingo and Willow. We also present an extension that supports dynamic client participation, a critical requirement for practical deployments at scale, while preserving the asymptotic and concrete efficiency of the static protocol for clients.
Zebra: Arithmetic Garbled RAM for Large Words from DCR
Garbled RAM is a promising technique for scaling secure two-party computation to large datasets. It features an efficient two-round protocol and supports each memory access with polylogarithmic overhead, thereby avoiding the prohibitive cost of RAM-to-circuit conversion. While earlier works on Garbled RAM primarily focused on establishing theoretical feasibility, recent research has increasingly emphasized concrete efficiency, culminating in constructions that achieve approximately $O(\lambda T W \log N)$ bandwidth cost (up to super-constant factors) for garbling a RAM with running time $T$, memory size $N$, and word size $W$.
We ask whether it is possible to further improve the bandwidth cost of Garbled RAM. In contrast, the Garbled Circuit literature has developed a rich set of techniques that remove the bandwidth's dependence on the security parameter $\lambda$, leading to constant-rate or even sub-constant-rate garbling schemes. However, no comparable methods are currently known for Garbled RAM.
We propose a new garbling scheme for arithmetic RAM, called Zebra (short for ``Zero Exposure B-bounded Random Accesses''). Specifically, we show that when the word size $W$ is suitably large, we can eliminate the $\lambda$-factor dependence and achieve a bandwidth cost of $O(T W \log N)$. In this sense, our scheme can also be viewed as the RAM analogue of ``constant-rate garbling for arithmetic circuits''. Further, we show how to extend our techniques to support the garbling of boolean RAMs, achieving a bandwidth cost of $O(T W (\log N + \lambda))$ when the word size is suitably large. We implemented Zebra and released our code through open source. Our evaluation shows a $10.1\times$ concrete improvement in bandwidth and a $3.5\times$ improvement in end-to-end time relative to the state-of-the-art Garbled RAM schemes on a 256MB database with 4kB entries.
Area-Efficient LUT-Based Multipliers for AMD Versal FPGAs
AMD Versal FPGAs introduce a new CLB microarchitecture
in which legacy CARRY4/8 chains are replaced
by LOOKAHEAD8 structures. Existing area-efficient LUT-based
multiplier designs typically rely on CARRY4/8 primitives from
prior FPGA generations. On Versal devices, these designs exhibit
poor mapping efficiency. This paper proposes a new LUT-based
integer multiplier architecture tailored to Versal fabric, together
with an automated RTL generator supporting arbitrary operand
bit-widths and configurable pipeline depths. Through the joint
exploitation of radix-4 modified Booth recoding and the new
micro-architectural features of Versal LUTs, only ∼$n^2/4$ LUTs
are required to generate the partial-product bit heap for an nbit
multiplication. Moreover, a new heuristic is developed for
compressor tree synthesis to sum the bit heap, yielding an 8–20%
improvement in area–delay product compared with state-of-theart
heuristics for Versal devices. Overall, the proposed multipliers
achieve up to 40% LUT footprint reduction relative to AMD
LogiCORE IP multipliers while maintaining comparable criticalpath
delay. The proposed generator enables scalable and customizable
deployment of resource-efficient bit heap compressors
and integer multipliers for Versal-based accelerator designs.
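The partial-product halving behind radix-4 modified Booth recoding can be checked in a few lines. This is a Python sketch of the recoding arithmetic only (not the LUT mapping); the function name and unsigned-operand restriction are ours. Each digit $d_i = y_{2i-1} + y_{2i} - 2y_{2i+1} \in \{-2,-1,0,1,2\}$, so roughly $n/2$ partial products suffice:

```python
def booth_radix4(x, y, n):
    """Multiply x * y (unsigned) by radix-4 modified Booth recoding of
    the n-bit multiplier y. bits[j] holds y[j-1] with y[-1] = 0 padded
    in front and zeros padded above the top bit."""
    bits = [0] + [(y >> i) & 1 for i in range(n)] + [0, 0]
    acc = 0
    for i in range(0, n + 1, 2):
        # digit d_i = y[i-1] + y[i] - 2*y[i+1], in {-2, -1, 0, 1, 2}
        d = bits[i] + bits[i + 1] - 2 * bits[i + 2]
        acc += (d * x) << i        # one partial product per digit pair
    return acc
```

Each iteration contributes a single shifted copy of $\pm x$ or $\pm 2x$, which is what lets the bit heap shrink to about half the rows of a schoolbook array.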
PaCMan - Partition-Code Masking for Combined Security
Physical attacks are a well-known threat for otherwise secure implementations of cryptographic algorithms. Although attacks and countermeasures for Side-Channel Analysis (SCA) and Fault-Injection Analysis (FIA) are well studied and individually understood, their combined exploitation and the corresponding countermeasures remain a relatively new area of research. Just recently, Feldtkeller et al. presented Combined Private Circuit (CPC) gadgets at CCS 2022 and CCS 2023 which were the first provably secure combined hardware gadgets that adhere to the notion of Combined-Isolating Non-Interference (CINI).
The definition of the CINI notion has been a milestone for the development and formal verification of combined secure gadgets. However, it is also specifically tailored to the realization of side-channel resistance via plain masking and redundancy via replication, without further considerations of other constructions, e.g., those based on coding theory.
In this work, we extend the existing definition of CINI to the notion of generalized Combined Isolated Non-Interfering (gCINI). Our generalizations allow us to capture a much wider range of possible encodings, including, but not limited to, Boolean masking and replication, and provide a formal basis for the analysis of more general gadget constructions. We formally prove the combined security and composability of our new gCINI definition and give an explicit way to build such gadgets. The significance of our proposed construction is demonstrated through the implementation of several use cases, including an AES S-box design that outperforms comparable CPC-based approaches while maintaining the same level of combined security. Finally, we formally verify the security of our gadget constructions using an adapted version of VERICA.
Improved Reduction from RLWE to MP-LWE
The Middle Product Learning With Errors (MP-LWE) problem was introduced in 2017 by Rosca, Sakzad, Steinfeld, and Stehlé (Crypto 2017). In their work and in a follow up work by Rosca, Stehlé, and Wallet (Eurocrypt 2018), the authors proved that MP-LWE is at least as hard as the Ring-LWE problem over the field $\mathbb{Q}[x]/f(x)$, for an exponentially large class of polynomials $f$ (with fixed degree and bounded coefficients). A few years later, Peikert and Pepin gave a new reduction from Ring-LWE to MP-LWE (Journal of Cryptology 2024). This new reduction improved the results of Rosca et al. by increasing the set of polynomials $f$ for which the reduction holds. However, even though the sets of polynomials covered by both reductions have exponential size, they remain negligible among the set of all polynomials of fixed degree and bounded coefficients.
In this work, we provide a refined analysis of the reduction of Rosca et al. Our new analysis shows that the reduction of Rosca et al. actually covers a much larger class of polynomials than what was known before, containing (experimentally) at least $90\%$ of all polynomials of fixed degree and bounded coefficients.
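The middle product at the heart of MP-LWE keeps only the central coefficients of an ordinary polynomial product. A minimal Python sketch over coefficient lists (our own function names; the actual problem works modulo $q$):

```python
def poly_mul(a, b):
    """Schoolbook product of coefficient lists (low degree first)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def middle_product(a, b, d):
    """The d middle coefficients of a*b: coefficients k .. k+d-1 of the
    full product, with k = (len(a)+len(b)-1-d) // 2. Assumes
    len(a)+len(b)-1-d is even, as in the standard MP-LWE setting."""
    full = poly_mul(a, b)
    k = (len(full) - d) // 2
    return full[k:k + d]
```

For instance, the middle 3 coefficients of $(1+2x+3x^2+4x^3)(1+x)$ are $3, 5, 7$.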
Syndrome Decoding with Hints
We study the syndrome decoding problem (SDP) in the presence of side information. The SDP asks, given a binary parity-check matrix $\mathbf{H}$ and a syndrome $\mathbf{s}$, to find a low Hamming weight binary error $\mathbf{e}$ such that $\mathbf{H} \mathbf{e} = \mathbf{s}$ over $\mathbb{F}_2$. Recent work (Cayrel et al., Eurocrypt '21) exploits a fault injection attack to reveal syndrome entries over the integers, referred to as perfect hints. Subsequent works considered side-channel scenarios to reveal similar, but noisy, information (approximate hints).
Both types of hints have been shown empirically to allow for solving the SDP once enough of them are available. However, fundamental questions about the impact of these hints on the hardness of the SDP, such as thresholds for a collapse into the polynomial-time regime or how to exploit arbitrary amounts of hints, remain open.
In this work, we show that both types of hints effectively allow one to transform the SDP instance into a soft-decision decoding instance. We then adapt Information Set Decoding (ISD) algorithms, the best known technique to solve generic SDP instances, to this setting. In contrast to previous work, we obtain non-trivial speedups for any amount of available hints, interpolating smoothly between the complexity of standard ISD (no hints) and polynomial time (sufficient hints). Furthermore, our practical simulations show that Hint-ISD achieves the polynomial-time regime generally under fewer hints than previous approaches.
We then provide an explicit bound on the number of hints required to reach the polynomial-time regime. This bound confirms earlier practical observations that higher error weights, such as those found in the McEliece cryptosystem, exhibit higher resistance against hint exposure than schemes using smaller error weights, such as HQC.
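The SDP relation itself is easy to state concretely. Below is a toy Python instance (our own illustration, unrelated to Hint-ISD) built on the [7,4] Hamming code, whose parity-check columns are the numbers 1..7 in binary, so a weight-1 error is identified uniquely by its syndrome:

```python
def syndrome(H, e):
    """Compute H e over GF(2); H is a list of 0/1 rows."""
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in H]

def check_sdp_solution(H, s, e, w):
    """e solves the instance (H, s, w) iff H e = s over GF(2)
    and the Hamming weight of e is at most w."""
    return syndrome(H, e) == s and sum(e) <= w

# Parity-check matrix of the [7,4] Hamming code (columns = 1..7 binary).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
e = [0, 0, 0, 0, 1, 0, 0]      # single-bit error in position 4
s = syndrome(H, e)             # the observed syndrome
```

A "perfect hint" in the sense above would reveal an entry of $\mathbf{H}\mathbf{e}$ computed over the integers rather than mod 2.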
Improving Neural-Inspired Integral Distinguishers via a Linear-Algebraic Approach
Recent work has demonstrated that neural networks can serve as a navigator for automatic search models in integral cryptanalysis, reducing computational complexity. However, the inherent drawbacks of deep learning models, such as the need for large datasets and their limited interpretability, are major obstacles in cryptanalysis. In this paper, we introduce another simple data-driven approach that uses linear algebra to characterize key-independent balance properties as the kernel of a matrix built from empirical parity data. For a fixed cipher, we stack the ciphertext parities obtained under many independent keys into a parity matrix and prove that every mask that this matrix maps to zero corresponds exactly to a balance property.
We demonstrate the practicality and generality of the kernel methodology on seven lightweight block ciphers spanning SPN and ARX designs. Across these cases, our method recovers known distinguishers and reveals additional non-trivial linear combinations missed by conventional analyses. We additionally position the kernel method relative to other similar methodologies. Our results show that the kernel method provides a rigorous and cipher-agnostic alternative to neural feature exploration and complements division property-based search techniques.
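The kernel idea above can be illustrated with a small sketch. Everything here is a hypothetical stand-in (a 4-bit parity matrix with a planted linear relation, not an actual cipher): per-key parity rows are stacked into a GF(2) matrix, and every basis vector of its null space is a mask whose parity vanishes under all keys.

```python
import numpy as np

def gf2_nullspace(M):
    """Basis of the null space of M over GF(2), via Gaussian elimination."""
    M = M.copy() % 2
    rows, cols = M.shape
    pivot_cols, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        pivot_cols.append(c)
        r += 1
    basis = []
    for fc in (c for c in range(cols) if c not in pivot_cols):
        v = np.zeros(cols, dtype=np.uint8)
        v[fc] = 1
        for i, pc in enumerate(pivot_cols):
            v[pc] = M[i, fc]
        basis.append(v)
    return basis

# Toy stand-in for the parity matrix: each row holds the ciphertext-bit
# parities observed under one independent key. Bit 2 is planted as
# bit 0 XOR bit 1, so the mask (1,1,1,0) -- a key-independent balance
# property -- must lie in the kernel.
rng = np.random.default_rng(0)
parity = rng.integers(0, 2, size=(64, 4), dtype=np.uint8)
parity[:, 2] = parity[:, 0] ^ parity[:, 1]
masks = gf2_nullspace(parity)
for m in masks:
    assert not ((parity @ m) % 2).any()   # masked parity vanishes for every key
```

The single kernel vector recovered is exactly the planted relation (1,1,1,0); in the paper's setting the rows come from ciphertext parities over structured plaintext sets rather than from a planted matrix.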
$\mathsf{Spectra}$: Interval-Agnostic Vector Range Argument for Unstructured Range Assertions
A structured vector range argument proves that a committed vector $\mathbf{v}$ lies in a well-structured range of the form $[0,2^d-1]$. This structure makes the protocol extremely efficient, although it cannot handle more sophisticated range assertions, such as those arising from non-membership attestations. To address this gap, we study a more general setting not captured by prior constructions. In this setting, for each $i$, the admissible integer set for $v_i$ is a union of $k$ intervals $\mathsf{R}_i \overset{\text{def}}{=} \bigcup_{j=0}^{k-1}\left[l_{i,j},r_{i,j}\right]$. In this work, we present novel techniques to prove that $\mathbf{v} \in \mathbb{Z}^n_p$ lies within $\mathsf{R}_0 \times \mathsf{R}_1 \times \cdots \times \mathsf{R}_{n-1}$. We first introduce $\mathsf{RangeLift}$, a generic compiler that lifts a structured vector range argument to support such unstructured range assertions.
Then we present $\mathsf{Spectra}$, a realization of $\mathsf{RangeLift}$ over the $\mathsf{KZG}$-based vector commitment scheme. $\mathsf{Spectra}$ achieves succinct communication and verifier time; its prover complexity is $O(n\,\tfrac{\log N}{\log\log N}\cdot \log(n\tfrac{\log N}{\log\log N}))$, where $N$ upper bounds the maximum interval size across all $\mathsf{R}_i$. Notably, $\mathsf{Spectra}$ is interval-agnostic, meaning its prover complexity is independent of the number of intervals $k$; therefore, its prover cost matches the single-interval case even when each $\mathsf{R}_i$ is composed of hundreds of thousands of intervals. We also obtain two new structured vector range arguments and a batching-friendly variant of the $\mathsf{Cq}^{+}$ lookup argument (PKC'24), which are also of independent interest. Experiments show that $\mathsf{Spectra}$ outperforms well-known curve-based vector range arguments on standard metrics while supporting strictly more expressive range assertions.
Is it Really Broken? The Failure of DL-SCA Scoring Metrics under Non-Uniform Priors
This paper investigates a recent, claimed state-of-the-art attack on the ASCADv2 dataset, a higher-order masked and shuffled AES implementation, which we demonstrate to be a false positive. Despite successful validation using classical metrics, including a converging Guessing Entropy (GE), we prove that the model learned no actual side-channel leakage. Instead, it exploited a statistical bias in the intermediate value distribution.
We argue that the usual scoring function used in the GE is an unreliable metric in the presence of such biases. To address this critical evaluation flaw, we propose a set of methods to avoid falling into this pitfall. First, we introduce pre-emptive methods to detect significant biases in the target's value distribution before profiling, as well as post-mortem ones to examine the resulting model. Second, we present guidelines to avoid regimes where the GE is unreliable, and we derive the Asymptotically Optimal Distinguisher, a new, lightweight distinguisher that provably neutralizes the influence of learned priors in the GE metric, thereby isolating the information gained purely from the side-channel leakage. We demonstrate our methodology by successfully identifying the ASCADv2 false positive and applying it to synthetically biased versions of the ASCADv1 dataset.
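The failure mode described above can be reproduced in miniature. The sketch below is entirely hypothetical (a 4-bit toy S-box and a hand-picked biased prior, not ASCADv2): a "model" that ignores the trace and only outputs the biased intermediate-value prior still ranks the true key first, i.e. a converging guessing entropy with zero side-channel information.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 16                                    # toy 4-bit key/value space
sbox = rng.permutation(K)                 # toy bijective S-box
inv_sbox = np.argsort(sbox)               # inverse permutation
k_true = 7

# A biased intermediate-value distribution over the dataset:
# value 0 occurs half the time instead of 1/16 of the time.
prior = np.full(K, 0.5 / (K - 1))
prior[0] = 0.5

# Draw intermediates from the biased prior, then derive the plaintexts
# consistent with the true key.
n_traces = 200
v = rng.choice(K, size=n_traces, p=prior)
pt = inv_sbox[v] ^ k_true

# A "model" that learned no leakage at all: it ignores the trace and
# always outputs the prior. Its accumulated log-likelihood scores still
# separate the true key from all wrong candidates.
scores = np.array([np.log(prior[sbox[pt ^ k]]).sum() for k in range(K)])
rank = int(np.argsort(-scores).tolist().index(k_true))
print(rank)   # → 0: the no-leakage model still "recovers" the key
```

The true key wins because, under it, the predicted intermediates follow the biased prior exactly, while wrong keys scatter them; this is the statistical bias the scoring function mistakes for leakage.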
Efficient, UC-secure and Publicly Auditable MPC from OLE & VOLE-in-the-head
Secure Multiparty Computation (MPC) computes on private input data, but generally does not guarantee correctness of the output towards third parties. This property, also called public auditability, was first studied explicitly by Baum et al. (SCN 2014). Their work and its follow-ups generate a Non-Interactive Zero-Knowledge proof of correctness of the MPC outcome during the MPC protocol, ensuring validity of the output even if all parties are corrupted.
In this work, we revisit and improve the MPC with Public Auditability blueprint. While the original work uses a version of the SPDZ MPC protocol with expensive lattice-based preprocessing, our construction combines any generic OLE-based preprocessing with a publicly verifiable somewhat linearly homomorphic commitment scheme from VOLE-in-the-head in a non-trivial way. Our commitment scheme relies solely on random oracle calls instead of previously used linearly homomorphic commitments based on structured Public-Key assumptions.
How to Build a Short-Input Random Oracle from Public Random Permutations
A vast body of work studies how to build a pseudorandom function (PRF) from a pseudorandom permutation (PRP) with beyond-the-birthday-bound (BBB) security. Often, such constructions are also expected to offer some security in keyless settings, for example in the context of committing security or to substitute a parallelizable short-input random oracle (RO) if used in counter mode. This has spurred several works on keyless variants of PRP-to-PRF constructions. However, recent works (Gunsing et al., CRYPTO 2022, 2023) reveal flaws in almost all existing proofs, painting a grim picture. This paper clarifies the situation with an in-depth analysis of RP-based short-input RF constructions.
First, we categorize all two-call short-input/output RP-to-RF constructions and evaluate their indifferentiability level. We introduce the ``chaining attack'', a powerful, widely applicable differentiability attack. When applied to the sum of a permutation and its inverse, it invalidates an earlier result (Dodis et al., EUROCRYPT 2008). On the positive side, we show that only the Sum Of Permutations and Encrypted Davies-Meyer Dual, when instantiated with independent permutations, achieve BBB security and could potentially yield a parallelizable short-input RO.
Second, we study the indifferentiability of expanding RP-to-RF constructions and show that $\mathtt{XORP}_w$, the core PRF underlying $\textsf{CENC}$, achieves BBB security. As side effects, we obtain a simplified proof of indifferentiability for Sum of Permutations, and the committing security of $\textsf{CENC}$.
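The Sum Of Permutations construction discussed above is simple to state: F(x) = P1(x) XOR P2(x) for two independent public permutations. A toy sketch (8-bit permutations drawn at random as stand-ins for the RPs; none of the security analysis is reproduced here):

```python
import random

random.seed(0)
n = 256                                   # toy 8-bit block size
P1 = random.sample(range(n), n)           # two independent random permutations,
P2 = random.sample(range(n), n)           # stand-ins for the public RPs

def sop(x):
    """Sum Of Permutations: F(x) = P1(x) XOR P2(x)."""
    return P1[x] ^ P2[x]

outputs = [sop(x) for x in range(n)]
# Each permutation alone never collides; their sum behaves like a random
# *function*, so its range is strictly smaller than its domain.
print(len(set(outputs)))
```

The collisions in the output are the point: a permutation is structurally distinguishable from a random function, while the sum hides that structure (well beyond the birthday bound, per the analysis in the paper).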
Sumcheck-based zkSNARKs are Non-Malleable
Simulation extractability ensures that any adversary who produces a valid proof must possess a corresponding witness, even after seeing simulated proofs for potentially false statements. This property is vital for preventing malleability attacks and is therefore essential for securely deploying zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) in distributed systems. While prior work, particularly the frameworks by Faonio et al. (CCS’24, TCC’23) and Kohlweiss et al. (TCC’23), has established simulation extractability for a wide class of pairing-based zkSNARKs using the KZG univariate polynomial commitment scheme (Kate et al., Asiacrypt’10), we initiate a systematic study of simulation extractability for zkSNARKs based on the celebrated sumcheck protocol and the PST multivariate polynomial commitment scheme (Papamanthou et al., TCC’13). PST cannot be simulation extractable due to its linear homomorphism; however, we show that it satisfies a refined notion of controlled malleability, similar to the notion of Chase et al. (EUROCRYPT’12), which informally captures that linear homomorphism is essentially the only admissible malleability. We demonstrate that our notion of controlled malleability suffices to ensure security within the widely adopted design paradigm of compiling polynomial interactive oracle proofs into zkSNARKs, covering several state-of-the-art schemes such as HyperPlonk (EUROCRYPT’23), Spartan (CRYPTO’20) and Libra (CRYPTO’19).
Tripling on Hessian curves via isogeny decomposition
We provide a new interpretation of the arithmetic on Hessian Kummer lines using level-3 theta structures. This allows us to break the record for tripling on elliptic curves and their Kummer lines, requiring only 4 multiplications and 4 squarings per tripling for well-chosen curve parameters.
A Cryptographic Framework for Proof of Personhood
We initiate the study of how to build a rigorous, cryptographic foundation for *proofs of personhood* - convincing, privacy-preserving evidence that a digital participant is a real, unique, and reputable human, optionally with authenticated attributes such as age or institutional affiliation. Towards this goal, we introduce a framework based on two types of credentials: *personhood credentials* (PHCs), issued by trusted authorities to attest to uniqueness and basic attributes, and *verifiable relationship credentials* (VRCs), issued peer-to-peer to capture reputation and real-world interactions.
We formalize ideal functionalities that capture desirable security and privacy notions for proofs of personhood, including Sybil-resistance, authenticated personhood, and unlinkability across contexts. Finally, we give efficient cryptographic constructions that realize these functionalities by combining PHCs, VRCs, and zero-knowledge proofs. Our results suggest that a scalable, Sybil-resistant, and decentralized proof-of-personhood layer can serve as a reusable trust substrate for a wide range of online economic, social, and civic applications.
Cost-Layer–Blind Hybrid QAOA for MAX K-CUT via Native MBQC and Selective Graph Masking
Delegating the Quantum Approximate Optimization Algorithm (QAOA) to an untrusted quantum cloud can leak sensitive instance structure: for graph objectives, the connectivity of the cost unitary directly reveals which edges are present. We propose a selectively blind protocol that hides only the instance-dependent cost Hamiltonian while keeping the mixer public and unmodified. Our approach combines (i) the native measurement-based implementation of the MAX $K$-CUT cost layer from Proietti \emph{et al.} (MBQC-QAOA) and (ii) selective masking techniques inspired by Selectively Blind Quantum Computation (SBQC). The client pads the private graph into a public candidate supergraph by adding dummy edges/vertices. During the measurement-based cost evolution, the server prepares a fixed public MBQC resource over the candidate edges and streams the corresponding cost ancillas to the client for measurement (measurement-only delegation). By choosing either the intended interaction angle (real edge) or the identity angle (dummy edge) \emph{locally}, the client privately prunes dummy edges while revealing no cost-layer angles to the server; one-time-padded correction bits preserve a leakage-free Pauli-frame interface to a standard gate-model mixer. We prove correctness and selective edge blindness, show that padding does not alter the QAOA optimization landscape (hence does not worsen barren plateaus), and provide proof-of-concept numerical validations for MAX-CUT ($K=2$) (exact state-vector equivalence tests and shot-based circuit emulation with feed-forward), together with an asymptotic resource analysis for general MAX $K$-CUT and an explicit dummy-vertex invariance check under full-register mixers.
Non-Trivial Zero-Knowledge Implies One-Way Functions
A recent breakthrough [Hirahara and Nanashima, STOC’2024] established that if $\mathsf{NP} \not \subseteq \mathsf{ioP/poly}$, the existence of zero-knowledge (ZK) with negligible errors for $\mathsf{NP}$ implies the existence of one-way functions (OWFs). This work obtains a characterization of one-way functions from the worst-case complexity of zero-knowledge in the high-error regime.
Assuming $\mathsf{NP} \not \subseteq \mathsf{ioP/poly}$, we show that any non-trivial, constant-round public-coin ZK argument for NP implies the existence of OWFs, and therefore also (standard) four-message zero-knowledge arguments for $\mathsf{NP}$. Here, we call a ZK argument non-trivial if the sum of its completeness, soundness and zero-knowledge errors is bounded away from 1.
As a special case, we also prove that non-trivial non-interactive ZK (NIZK) arguments for $\mathsf{NP}$ imply the existence of OWFs. Using known amplification techniques, this also provides an unconditional transformation from weak to standard NIZK proofs for all meaningful error parameters. Prior work [Chakraborty, Hulett and Khurana, CRYPTO’2025] was limited to NIZKs with constant zero-knowledge error $\varepsilon_{\mathsf{zk}}$ and soundness error $\varepsilon_{\mathsf{s}}$ satisfying $\varepsilon_{\mathsf{zk}} + \sqrt{\varepsilon_{\mathsf{s}}} < 1$.
SoK: Anonymous Credentials for Digital Identity Wallets
Digital identity wallets are currently being developed around the globe, aiming to provide user-centric and secure authentication. Realizing this in a privacy-preserving manner is paramount, and even mandated in Europe, which is developing the European Digital Identity Wallet with planned release in 2026. Current proposals to build these wallets are based on classic signature schemes such as ECDSA, but would benefit greatly from the use of anonymous credentials. Thus, there is currently a strong interest in developing the necessary standards to bring these cryptographic concepts into the real world. This work aims to inform ongoing standardization efforts by providing an overview of the most prominent solutions and the remaining open challenges. We split our overview between two fundamental architectural approaches: (1) dedicated multi-message signature schemes that allow for efficient ZKPs, and (2) general-purpose ZKPs used on top of legacy ECDSA. We also provide a comprehensive summary of the broad feature set that anonymous credentials can provide for identity wallets, in order to demonstrate that upgrading to these systems is a worthwhile endeavor and to help design standards that can leverage the rich existing body of work.
Oblivious Ciphertext Compression via Linear Codes
Oblivious ciphertext compression and decompression transform encrypted dense vectors of length $n$ with at most $t$ non-zero entries into compact encrypted sparse representations, and vice versa.
These primitives appear in the context of efficient protocols for encrypted search, PIR, and oblivious message retrieval.
Existing schemes suffer from large ciphertext sizes or high computational cost.
We present new deterministic and perfectly correct constructions based on linear codes, yielding encrypted sparse representations of optimal size with near-linear compression and decompression times.
Our results improve both communication and computation over prior work.
A central ingredient of our work is to show that, for carefully chosen generalized Reed–Solomon codes, variants of classical decoding algorithms combined with efficient algebraic techniques make it possible to recover the error vector directly from the syndrome in quasi-linear time in the syndrome length, rather than in the full block length of the code.
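The compress/decompress mechanic can be sketched in the simplest possible case. The toy below is hypothetical (a tiny prime field, sparsity t = 1, and no encryption layer, whereas the paper handles general t over encrypted vectors via GRS decoding): a length-16 vector with one nonzero entry is represented by two syndrome symbols and recovered exactly. Because the syndrome map is linear, it is the kind of operation that commutes with a linearly homomorphic encryption layer.

```python
# Toy syndrome-style compression over GF(p), sparsity t = 1:
# the 2t = 2 syndrome symbols are the compressed representation.
p = 257
n = 16
alpha = list(range(1, n + 1))             # distinct evaluation points

def compress(e):
    """Syndrome s_j = sum_i e[i] * alpha[i]^j for j = 0, 1."""
    return [sum(ei * pow(a, j, p) for ei, a in zip(e, alpha)) % p
            for j in range(2)]

def decompress(s):
    """Solve s_0 = v and s_1 = v * alpha[i] for the position i and value v."""
    s0, s1 = s
    i = alpha.index(s1 * pow(s0, -1, p) % p)   # alpha[i] = s_1 / s_0
    e = [0] * n
    e[i] = s0                                  # value v = s_0
    return e

e = [0] * n
e[5] = 42
assert decompress(compress(e)) == e       # 2 symbols instead of 16
```

For general t, the paper's quasi-linear algorithms replace this one-line solve with syndrome decoding of a generalized Reed–Solomon code.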
NeuralCPA: A Deep Learning Perspective on Chosen-Plaintext Attacks
A Chosen-Plaintext Attack (CPA) is a cryptographic analysis game for encryption, in which an adversary queries an encryption oracle with plaintexts and observes the mapping to their ciphertexts. At an arbitrary point, it submits two challenge plaintexts but receives only one ciphertext, and finally guesses which of the two challenge plaintexts was encrypted. Neural distinguishers, a powerful representative of Artificial Intelligence (AI) methods, have recently been used in cryptanalysis. However, they cannot be applied directly to perform CPA due to differing input requirements and objectives. This work aims to address this gap. We provide the first rigorous and systematic formulation of CPA from a deep learning perspective.
Specifically, we introduce NeuralCPA, a novel deep neural network-based method designed for the evaluation of block cipher CPA security as an initial effort for AI-based CPA analysis. We empirically validate its effectiveness across a diverse range of block ciphers, including SIMON, SPECK, LEA, HIGHT, XTEA, TEA, PRESENT, AES, and KATAN. We also analyze the stream cipher CHACHA, restricting our study to its internal permutation rather than the complete keystream construction. Our experimental results confirm that NeuralCPA consistently achieves significant distinguishing advantages in round-reduced settings. Notably, our attack success rate ranges from 51% to 76.4%.
Breaking digital signatures from tropical matrix semirings
In a recent preprint, Grigoriev, Monico, and Shpilrain proposed a digital signature protocol based on the use of matrices over the tropical integer semiring.
We show some design flaws of the proposed scheme, together with an efficient attack to forge signatures for an arbitrary message, and a key-recovery attack when given access to a list of honest signatures.
Special Soundness and Binding Properties: A Framework for Tightly Secure zk-SNARKs
Interactive arguments often combine polynomial IOPs with polynomial commitment schemes (PCSs).
Frequently, the interactive argument is proven to be knowledge sound, but this incurs a high security loss when applying the Fiat-Shamir transformation to obtain a non-interactive argument in the random oracle model (ROM).
We introduce the notion of special soundness for polynomial IOPs, which surprisingly has not been considered before.
We study relations between various binding properties of univariate PCSs. In the case of the KZG PCS, these properties can be based on falsifiable assumptions.
We prove that a special-sound polynomial IOP plus a PCS under suitable binding notions gives a computationally special-sound interactive argument. By Attema, Fehr, and Klooss (TCC 2022), applying Fiat-Shamir to this argument yields a tightly knowledge-sound argument (or zk-SNARK) in the ROM under the same assumptions.
In the case of the KZG PCS, we add various batching optimizations to our compiler and prove that they preserve computational special soundness.
This yields a generic approach for achieving efficient zk-SNARKs with constant proof size and tight knowledge soundness in the ROM under falsifiable assumptions.
eDAS: Extending Data Availability Sampling with Privacy and Compliance
A data availability sampling scheme (DAS) allows a network of verifiers to collectively ensure that an untrusted party is correctly storing and distributing some committed data. Importantly, the protocol should not require that the verifiers coordinate or store the full data individually.
In this paper, we initiate the study of privacy in DAS schemes: we ask whether the DAS guarantees can be upheld while keeping the committed data secret. We define a natural notion of zero-knowledge for DAS and show that no secure DAS scheme can satisfy this notion. In doing so, we motivate the need to study privacy-preserving variants of DAS.
We define eDAS as DAS schemes over encrypted data and introduce the necessary security notions; namely, completeness, soundness and consistency. We additionally define a notion of privacy (zero-knowledge) and compliance (policy-enforcing). We then present two constructions of eDAS schemes. The first uses a DAS scheme as a black box and can be deployed on top of existing production-ready DAS systems with only minor modifications. The second is a more efficient construction that relies on the same principles but removes various layers of abstraction.
FLiPD: Privacy-Preserving Federated Learning via Multi-Party Computation and Differential Privacy
Federated Learning (FL) is a collaborative Machine Learning (ML) process where clients locally train an ML model on their private inputs, and send it to a server that aggregates the local model updates to obtain a global model update. FL is widely used in applications where the training data is distributed among several clients, e.g., for next word prediction in Google keyboard (Gboard). Nevertheless, FL faces several challenges concerning privacy and security. 1) Client privacy needs to be preserved by employing defenses against inference attacks using Secure Aggregation (SA) protocols. 2) The security of the model has to be defended against poisoning and backdoor attacks, e.g., by using clustering or filtering algorithms.
In this work, we present FLiPD, an optimised SA protocol for FL that protects against several attacks via a combination of Multi-Party Computation (MPC) and Differential Privacy (DP) mechanisms. We provide defenses against both inference and backdoor attacks. Moreover, by applying distributed DP noise generation, we show that our protocol is secure even when the majority of the clients collude with a server.
As opposed to existing solutions, in FLiPD the client-server communication cost is essentially the same as in unprotected FL, which sends plaintext updates. Furthermore, the server-server communication cost is slightly lower (by 11%) than that of the state-of-the-art Prio+ (Addanki et al., SCN'22). In addition, we examine the accuracy of FLiPD both in the presence and absence of attacks. We achieve 87% accuracy for a Linear Regression model trained on the HAR dataset, and 90% for a Convolutional Neural Network trained on the MNIST dataset.
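The general secure-aggregation mechanism underlying such protocols can be sketched with classic pairwise masking (an illustrative textbook construction, not FLiPD's actual MPC-plus-DP design, and with the distributed DP noise generation omitted): each pair of clients shares one-time masks that cancel when the server sums all masked updates, so individual updates stay hidden.

```python
import random

random.seed(7)
n_clients = 5
MOD = 1 << 32                             # updates aggregated modulo 2^32

updates = [random.randrange(1000) for _ in range(n_clients)]

# Pairwise one-time masks (stand-ins for PRG outputs expanded from
# pairwise shared seeds): client i adds m[i][j] and subtracts m[j][i],
# so every mask cancels once all masked updates are summed.
m = [[random.randrange(MOD) for _ in range(n_clients)]
     for _ in range(n_clients)]

def masked_update(i):
    x = updates[i]
    for j in range(n_clients):
        if j != i:
            x = (x + m[i][j] - m[j][i]) % MOD
    return x

total = sum(masked_update(i) for i in range(n_clients)) % MOD
assert total == sum(updates) % MOD        # server learns only the aggregate
```

In a DP-augmented protocol each client would additionally add a share of calibrated noise before masking, so that the revealed aggregate is itself differentially private.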
Cryptokinetics
Cryptographic operations are momentary. Typically, the verification of a digital signature validates the signer's intervention at a specific moment in the past whereas a successful $\Sigma$-protocol round validates a prover's present existence. While cryptography handles very well the notions of "before" and "after" (typically in the blockchain), it remains blind to physical time.
In many practical situations it is important to assess the probability that the legitimate user is still present as time elapses. The situation is even more complex when the system needs to provide an assessment based on both cryptographic and non-cryptographic (e.g. biometric) inputs.
This paper draws an analogy between Continuous User Authentication (CUA) and pharmacokinetics, a branch of pharmacology that studies how drugs interact with the body over time using differential equations.
We relate CUA events to two modes of drug administration: injection and infusion. We compare password logins or digital signatures to injection, where trust is immediately established, while $\Sigma$-protocols or facial recognition are analogized to intravenous infusion, where trust is maintained continuously as long as rounds succeed or the user remains in view of the camera.
We introduce mathematical models blending data from heterogeneous security inputs (that we call "sensors") to estimate the overall level of trust at any time. The models take into account the presence of the legitimate user, attackers and the absence of incoming information.
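The pharmacokinetic analogy admits a compact one-compartment sketch. Everything below is illustrative (the rate constants, bolus size, and Euler discretization are hypothetical choices, not the paper's models): a login injects a bolus of trust that decays exponentially, while ongoing protocol rounds act as an infusion replenishing it, per dT/dt = R(t) - k*T(t).

```python
import math

k, R, dt = 0.1, 0.5, 0.01                 # hypothetical rates, per minute

def simulate(minutes, bolus, infusion_until):
    """Explicit Euler integration of dT/dt = R(t) - k*T(t), with an
    initial bolus (injection) and an infusion active until a cutoff."""
    T, trace = bolus, []
    for step in range(int(minutes / dt)):
        t = step * dt
        rate_in = R if t < infusion_until else 0.0
        T += (rate_in - k * T) * dt
        trace.append(T)
    return trace

# Login at t=0 (bolus), protocol rounds keep succeeding for 30 minutes,
# then all sensors go silent for another 30 minutes.
trace = simulate(minutes=60, bolus=10.0, infusion_until=30.0)
with_infusion = trace[int(30 / dt) - 1]   # trust just before rounds stop
final = trace[-1]                         # trust after 30 min of silence
assert final < with_infusion              # trust decays once input ceases
assert abs(final - with_infusion * math.exp(-k * 30)) < 0.1
```

The second assertion checks the injection-like behavior: once the infusion stops, trust follows pure exponential elimination, exactly the regime in which a verifier should discount the user's continued presence.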
Multi-key Fully Homomorphic Encryption with Non-Interactive Setup in the Plain Model
Multi-key fully homomorphic encryption (MKFHE) enables computation over encrypted data under multiple different keys. Constructing MKFHE without any trusted or interactive setup remains an open problem. In the context of MKFHE, a trusted setup is often assumed to mean the use of a common random string (CRS).
In this paper, we present the first MKFHE scheme in the plain model (i.e., without any trusted or interactive setup) based solely on the RLWE assumption. Our design yields a 2-round multi-party computation (MPC) in the plain model against semi-malicious adversaries. Moreover, it can be applied to transform existing FHE schemes that rely on RGSW in their construction into a multi-key variant. We also provide concrete conversions for widely-used FHE schemes, including BGV, BFV, CKKS, FHEW, TFHE, and Carousel.
Finally, we implement our scheme and present experimental results for the expansion algorithm from a single-key ciphertext to a multi-key ciphertext and the multi-key homomorphic multiplication algorithm.
Sliced Rényi Pufferfish Privacy: Tractable Privatization Mechanism and Private Learning with Gradient Clipping
We study the design of a privatization mechanism and privacy accounting in the Pufferfish Privacy (PP) family. Specifically, motivated by the curse of dimensionality and lack of practical composition tools for iterative learning in the recent Rényi Pufferfish Privacy (RPP) framework, we propose Sliced Rényi Pufferfish Privacy (SRPP). SRPP preserves PP/RPP semantics (customizable secrets with probability-aware secret–dataset relationships) while replacing high-dimensional Rényi divergence with projection-based quantification via two sliced measures, Average SRPP and Joint SRPP. We develop sliced Wasserstein mechanisms, yielding sound SRPP certificates and closed-form Gaussian noise calibration. For iterative learning systems, we introduce an SRPP-SGD scheme with gradient clipping and new accountants based on History-Uniform Caps (HUC) and a subsampling-aware variant (sa-HUC), enabling decompose-then-compose privatization and additive composition under a common slicing geometry. Experiments on static and iterative privatization show that the proposed framework exhibits favorable privacy–utility trade-offs, as well as practical scalability.
Statistically Secure Asynchronous MPC with Linear Communication and $\mathcal{O}(n^5)$ Additive Overhead
Secure multiparty computation (MPC) allows mutually distrustful parties to jointly evaluate a common function while keeping their inputs private. In this work, we focus on improving the communication complexity of information-theoretic asynchronous MPC with optimal resilience ($t < n/3$). The current state-of-the-art work in this setting by Goyal, Liu-Zhang, and Song [Crypto’24] achieves amortized linear communication per gate with $\Omega(n^{14})$ additive overhead. In comparison, the best-known result in the synchronous setting by Goyal, Song, and Zhu [Crypto’20] achieves amortized linear communication per gate with only $\mathcal{O}(n^{6})$ additive overhead, revealing a significant gap that we aim to close.
This work closes this gap and goes further. We present the first statistically secure asynchronous MPC protocol achieving amortized linear communication per gate with only $\mathcal{O}(n^{5})$ additive overhead assuming a functionality for Agreement on Common Set (ACS). With the best-known instantiation for ACS, we obtain an asynchronous MPC protocol in the plain model with additive overhead $\mathcal{O}((\kappa+n)n^4\log\kappa)$ in expectation, where $\kappa$ is the security parameter. In the setting of statistical security with optimal resilience, this even surpasses the best synchronous result when $n\geq\sqrt{\kappa\log\kappa}$.
Technically, our contributions include (i) a new protocol for generating Beaver triples with linear communication per triple and $\mathcal{O}(n^3)$ additive overhead assuming functionalities for generating random degree-$t$ Shamir sharings and ACS, and (ii) a new protocol for generating random degree-$t$ Shamir sharings with linear communication per sharing and $\mathcal{O}(n^{5})$ additive overhead assuming a functionality for ACS.
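The role of the Beaver triples generated in contribution (i) is the classic multiplication trick, sketched below. The sketch is a textbook illustration only: it uses additive shares for brevity where the protocol uses degree-$t$ Shamir sharings, and it does not reflect the paper's triple-generation procedure.

```python
import random

random.seed(3)
P = 2**31 - 1                             # prime modulus for the toy field

def share(x, n=4):
    """Additive secret sharing mod P among n parties."""
    s = [random.randrange(P) for _ in range(n - 1)]
    return s + [(x - sum(s)) % P]

def reveal(shares):
    return sum(shares) % P

# Preprocessed Beaver triple: sharings of random a, b and of c = a*b.
a, b = random.randrange(P), random.randrange(P)
A, B, C = share(a), share(b), share(a * b % P)

# Online phase: multiply secret-shared x and y using one triple.
x, y = 1234, 5678
X, Y = share(x), share(y)
d = reveal([(xi - ai) % P for xi, ai in zip(X, A)])  # open d = x - a
e = reveal([(yi - bi) % P for yi, bi in zip(Y, B)])  # open e = y - b
# x*y = c + d*b + e*a + d*e, computed locally on the shares:
Z = [(ci + d * bi + e * ai) % P for ci, ai, bi in zip(C, A, B)]
Z[0] = (Z[0] + d * e) % P                 # public term, added by one party
assert reveal(Z) == x * y % P
```

The opened values d and e leak nothing about x and y because a and b are uniformly random, which is why cheap triple generation directly drives the per-gate communication cost the paper optimizes.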
New Techniques for Information-Theoretic Asynchronous MPC with Abort
We study the communication complexity of information-theoretic asynchronous multiparty computation (AMPC) with optimal resilience $n=3t+1$ and malicious security. In this setting, the only known result with linear communication per gate is due to Goyal, Liu-Zhang, and Song [CRYPTO ’24]. However, their construction incurs a large communication overhead of $\Omega(n^{14})$ elements that is independent of the circuit size, rendering their result only of theoretical interest. By additionally assuming a random oracle, Bandarupalli et al. [CCS ’25] reduce the communication overhead to $\mathcal{O}(n^3)$ while maintaining the linear communication per gate, at the cost of only achieving malicious security with fairness.
In this work, we remove the random oracle assumption and design an information-theoretic AMPC protocol that achieves malicious security with abort. The communication complexity of our construction is $\mathcal{O}(|C|n + Dn^2 + n^3)$ field elements for an arithmetic circuit of size $|C|$ and depth $D$, assuming a functionality for Agreement on Common Set (ACS).
Our main technical contribution is a novel verification mechanism with the following guarantee: whenever verification succeeds, there exists a subset of at least $t+1$ honest parties whose local computations are mutually consistent, and the final output is correct with respect to their computation. In contrast to prior approaches that require all honest parties to hold consistent states and execute identical computations before verification, our mechanism tolerates inconsistencies among honest parties while still ensuring that the verified computation is correct for at least one such honest subset, if the verification succeeds.
Distributed Monotone-Policy Encryption for DNFs from Lattices
Distributed monotone-policy encryption augments public-key encryption with fine-grained decryption capabilities in a trustless manner. In this scheme, users independently generate a public/private key-pair and post their public key to a public-key directory. Thereafter, anyone can encrypt a message to a set of public keys together with an access policy. Any set of users that satisfies the access policy can decrypt the ciphertext while the message should remain computationally hidden to any unsatisfying set of users. The primary efficiency requirement is succinctness: namely, the size of the ciphertext should be sublinear (or polylogarithmic) in the description length of the policy. Distributed monotone-policy encryption directly generalizes recent trustless cryptographic notions like threshold encryption with silent setup and distributed broadcast encryption.
In this work, we show how to construct distributed monotone-policy encryption for Boolean formulas in disjunctive normal form (DNF formulas) that supports an unbounded number of users. Security relies on the decomposed learning with errors (LWE) assumption, a simple and falsifiable lattice assumption, in the random oracle model. Previously, such a scheme was only known from plain witness encryption in the random oracle model. Our scheme has a transparent setup and the ciphertext size is $\mathsf{poly}(\lambda, \log N)$, where $N$ is the number of variables in the DNF formula.
Two-Factor Authentication Can Harden Servers Against Offline Password Search
We propose a novel notion of Two-Factor Authenticated Key Exchange (TFA-KE), defined in the universal composability (UC) model, which extends asymmetric PAKE (aPAKE) with a second authentication factor in the form of a $t$-bit one-time code computed by a personal device based on a clock or counter. Our notion strengthens the security of the standard integration of aPAKE with short authentication codes by additionally slowing down offline brute-force password search in case of server compromise by a factor of $2^t$. In other words, our TFA-KE notion uses $t$-bit authentication codes not only to improve the online security of password authentication, as is current practice, but also to strengthen password security upon server corruption, whilst retaining the ability of aPAKE to avoid the common but deplorable practice of relying on "secure-channel" encryption for password protection.
We show a generic framework for implementing TFA-KE, with two efficient instantiations. Our key enabling tool is a tight one-way function (TOWF) with an algebraic structure that allows for its evaluation on a secret-shared input. We initiate the study of such functions, and we provide two proposals which we show to be tightly one-way in the Generic Group Model. Tightness means that a function evaluation on an input sampled from domain $\mathcal{X}$ takes $\Omega(|\mathcal{X}|)$ time to invert, which in our application implies that offline password search attacks are slowed to $\Omega(|D|\cdot 2^t)$ for passwords sampled from dictionary $D$.
GG-GSW: Chosen-Ciphertext Secure Leveled FHE From Gadget Trapdoors
We build a leveled fully homomorphic encryption (FHE) scheme that achieves IND-CCA1 security under the learning with errors (LWE) assumption in the standard model. It is the first scheme of this kind that does not rely on succinct non-interactive arguments of knowledge (SNARKs) to obtain security against active adversaries. Instead, we use the gadget lattice trapdoors introduced by Micciancio and Peikert [Eurocrypt 2012] in combination with a dual version of the GSW FHE scheme [Gentry, Sahai, Waters, Crypto 2013]. Rather than proving the integrity of a ciphertext with a SNARK, we use the gadget trapdoor to recover the LWE noise of a ciphertext. This ensures IND-CCA1 security because it allows us to determine whether a ciphertext queried to the decryption oracle would reveal information about the secret key to an adversary.
Our scheme is fully compact, multi-hop and requires very few changes to the original GSW scheme beyond the key generation and decryption algorithm. In particular, the homomorphic operations remain unchanged. We also follow ideas from Bourse et al. [Crypto 2016] to obtain IND-CPA-D security almost for free, without requiring correctness.
Proving Knowledge of Syndrome Decoding Problems with Soundness
This paper introduces a novel set of code-based protocols to demonstrate algebraic relationships in zero knowledge. Specifically, we present a comprehensive collection of secure arguments of knowledge for verifying additive and multiplicative relationships between syndrome-committed secrets, including matrix products, which enable us to construct a generic arithmetic-circuit framework. We leverage these arguments to build a rich variety of privacy-oriented primitives, such as a post-quantum range proof, a lookup argument, and verifiable shuffle protocols. These contributions provide the necessary ingredients to transition advanced confidential designs, such as cryptocurrencies, into the post-quantum code-based setting.
Understanding Multi-Query Attacks on Key-Then-Hash Functions
We present multi-query attacks on key-then-hash (KTH) functions in the blinded keyed hash model that achieve an advantage growing quadratically in the number of queries, to within a small constant factor of the information-theoretic upper bound.
We introduce three families of attacks.
Catch attacks exploit the group structure of the digest space and achieve deterministic success with $2\sqrt{\varepsilon^{-1}}$ queries.
Group attacks embed high-probability differentials into subgroups of the message space, achieving quadratic advantage.
Translation attacks exploit offset-invariance to linearly scale any existing attack.
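The stated query budgets are concrete enough to compute. The sketch below evaluates the catch-attack bound of $2\sqrt{\varepsilon^{-1}}$ queries for a given differential probability $\varepsilon$; the function name and the worked numbers are our reading of the abstract, not code from the paper.

```python
import math

# Illustrative query budget for the catch attacks described above:
# deterministic success with 2 * sqrt(1/eps) queries, where eps is the
# universality bound / differential probability. Names are illustrative.
def catch_queries(eps: float) -> int:
    return math.ceil(2 * math.sqrt(1 / eps))

# Example: eps = 2^-20 gives 2 * 2^10 = 2048 queries.
assert catch_queries(2**-20) == 2**11
```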
Our attacks apply in two concrete settings: with $\Delta$ fixed to $0$, they target the compression phase of farfalle-based primitives such as Xoofff, and with $\Delta$ as a free parameter, they target deck-based wide block cipher constructions such as the double-decker.
We connect optimal query set construction to results in additive combinatorics and generalize our results to concatenated KTH functions.
Experiments on NH and Xoodoo[3] show our attacks reach an advantage within a factor $2^{4}$ of the theoretical bound.
Our analysis reveals that for bit-sliced permutations with degree-2 round functions, solution set overlap is inherent, limiting but not preventing the attacker from approaching the bound. Our experiments highlight that trail cores with a large number of active columns in the last round are particularly dangerous for KTH functions, introducing a new criterion for the design of permutations used in such constructions.
On the Equivalence of Forgery and Key Recovery in Key-Then-Hash Functions
For any key-then-hash function, there is no security gap between key recovery and forgery.
The expected cost of recovering the key given differential-based forgery, in the information-theoretic setting, is logarithmic in the number of solutions to the underlying differential equation.
The notion of weak-key classes as defined by Handschuh and Preneel in their CRYPTO 2008 paper does not apply to key-then-hash functions.
Every key is equally vulnerable, and the attack complexity is entirely determined by the universality bound.
This applies to four out of six keyed hash function families studied in their paper, namely, NH, NMH, WH and Square Hash.
In this paper, we revisit the 2008 analysis of NH through the lens of the key-then-hash framework.
We are able to prove that the properties attributed to the class of weak keys in NH are actually intrinsic to the whole key space.
Furthermore, this result can be generalized to any key-then-hash function.
We demonstrate this generality by applying our framework to key-then-hash constructions instantiated with Xoodoo[3] and Square Hash, and show that an efficient key recovery is possible.
RISQrypt: Fast, Secure and Agile Hardware-Software Co-Design for Post-Quantum Cryptography
In this paper, we present RISQrypt, the first unified architecture in the literature that implements Kyber (ML-KEM) and Dilithium (ML-DSA), standardized lattice-based Post-Quantum Cryptography (PQC) algorithms, with masking. RISQrypt is a hardware–software co-design framework that integrates dedicated cryptographic accelerators to speed up polynomial arithmetic, hashing, and mask-conversion operations, the latter being one of the primary bottlenecks in masked implementations of lattice-based PQC. Our design achieves low latency while providing both theoretical and practical side-channel security, as validated through experimental evaluation. Specifically, the masked decapsulation of Kyber768 requires 109K clock cycles, while masked signing of Dilithium3 requires 1230K clock cycles on average. These results demonstrate 11.3x time-performance improvement over existing masked implementations. Our performance results for unprotected functions also outperform the existing work by up to an order of magnitude. In addition, prior designs are more limited in scope, generally supporting only a single scheme and lacking the unified, crypto-agile framework that enables support for both Kyber and Dilithium as in our architecture. Leveraging the HW/SW co-design approach, our proposed architecture can be readily extended to other PQC standards such as Falcon and SPHINCS+, as well as to algorithms sharing similar computational building blocks, through firmware reprogramming.
Security of the Fischlin Transform in Quantum Random Oracle Model
The Fischlin transform yields non-interactive zero-knowledge proofs with straight-line extractability in the classical random oracle model. This is done by forcing a prover to generate multiple accepting transcripts through a proof-of-work mechanism. Whether the Fischlin transform is straight-line extractable against quantum adversaries has remained open due to the difficulty of reasoning about the likelihood of query transcripts in the quantum-accessible random oracle model (QROM), even when using the compressed oracle methodology. In this work, we prove that the Fischlin transform remains straight-line extractable in the QROM, via an extractor based on the compressed oracle. This establishes the post-quantum security of the Fischlin transform, providing a post-quantum straight-line extractable NIZK alternative to Pass’ transform with smaller proof size. Our techniques include tail bounds for sums of independent random variables and for martingales as well as symmetrization, query amplitude and quantum union bound arguments.
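The proof-of-work mechanism underlying the Fischlin transform can be sketched with a toy grinding loop: for each repetition the prover searches for a challenge whose hashed transcript ends in $b$ zero bits. This is a mock-up over random commitments (no actual Sigma protocol), meant only to illustrate the mechanism the abstract refers to.

```python
import hashlib
import os

# Toy Fischlin-style proof-of-work: for each of r repetitions the prover
# grinds over challenges until the hash of the transcript has b = 8 low
# zero bits. Commitments are random bytes here, not a real Sigma protocol.
b, r = 8, 4

def grind(commitment: bytes) -> int:
    """Find a challenge making the transcript hash end in a zero byte."""
    for ch in range(2**16):
        h = hashlib.sha256(commitment + ch.to_bytes(4, "big")).digest()
        if h[-1] == 0:  # last byte zero <=> b = 8 low bits are zero
            return ch
    raise RuntimeError("no challenge found in search budget")

proof = [(c, grind(c)) for c in (os.urandom(16) for _ in range(r))]
assert len(proof) == r
```

Each repetition costs the honest prover about $2^b$ hash evaluations in expectation, while a verifier checking all $r$ transcripts gets roughly $2^{-rb}$ soundness error from the grinding condition alone.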
Bolt: Faster SNARKs from Sketched Codes
We introduce Bolt, a new Multilinear Polynomial Commitment Scheme (MLPCS) designed for high-performance SNARKs over binary fields. Bolt is geared towards SNARKs for large computations, in which prover speed is paramount but one can afford slightly larger proofs. The construction is based on the code-switching paradigm; our core technical contribution is a new "proof-system friendly" error-correcting code with extremely efficient encoding both asymptotically and concretely. Bolt offers a significantly faster prover than prior works, while maintaining a moderately larger, yet still reasonable, proof size.
Theoretically, Bolt achieves a commitment time of approximately $(3+\varepsilon) \cdot N$ field additions plus a Merkle Tree hash computation of size $(1+\varepsilon) \cdot N$ field elements, where $N$ is the size of the multilinear polynomial and $\varepsilon>0$ is arbitrarily small. The prior state-of-the-art, Blaze (Brehm et al., Eurocrypt 2025) used more than $8N$ field ops and a $4N$ size Merkle hash.
Concretely, our implementation demonstrates that these asymptotic gains translate into substantial real-world speedups. Our benchmarks show that for $N=2^{30}$ over a $32$-bit field, Bolt achieves a commitment time roughly $3 \times$ faster than Reed-Solomon based schemes, albeit with a moderately larger proof. Bolt also offers better commitment time and proof size than recent linear-time schemes. For example, its commitment time is about $1.34 \times$ faster than Brakedown (Golovnev et al., Crypto 2023), with a $2 \times$ shorter proof.
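The leading-term costs quoted above admit a back-of-envelope comparison. The sketch below takes $\varepsilon = 0$ and ignores lower-order terms, so it is an order-of-magnitude illustration of the abstract's numbers, not a benchmark.

```python
# Leading-term commitment costs per the abstract:
#   Bolt:  ~(3+eps)*N field additions + Merkle hash over (1+eps)*N elements
#   Blaze: >8N field ops             + Merkle hash over 4N elements
# eps is taken as 0; constants below are the stated leading terms only.
N = 2**30
bolt_adds, bolt_hash = 3 * N, 1 * N
blaze_ops, blaze_hash = 8 * N, 4 * N

assert blaze_ops / bolt_adds > 2.6   # ~2.67x fewer field operations
assert blaze_hash // bolt_hash == 4  # 4x smaller Merkle hashing workload
```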
Hash Function Constructions from Lightweight Block Ciphers for Fully Homomorphic Encryption
This paper investigates hash-function constructions derived from lightweight block ciphers that are suitable for evaluation in fully homomorphic encryption (FHE) settings. We focus on PRINCEv2, a 64-bit lightweight block cipher with 128-bit keys and low algebraic complexity, which is particularly amenable to FHE evaluation. However, the small block size of such ciphers limits the applicability of standard hash-function transforms. Indeed, achieving 128-bit collision resistance in the (n, 2n) setting, i.e., with 64-bit blocks, requires a quadruple-block-length (QBL) compression function, for which no generic construction is known.
In this work, we propose a concrete QBL compression construction tailored to PRINCEv2 and analyze its collision resistance. Candidate QBL designs inspired by existing double-block-length constructions are also outlined. As a further contribution, we describe a carefully optimized homomorphic circuit design for PRINCEv2. The resulting implementation outperforms previous works in both operation count and computational depth. Experimental timings demonstrate the practical feasibility of evaluating the corresponding hash constructions under FHE with low latency, while providing a cryptographically small failure probability.
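The block-length arithmetic behind the QBL requirement is simple to verify: by the birthday bound, 128-bit collision resistance needs a digest of at least 256 bits, which a 64-bit block cipher must deliver as four blocks.

```python
# Why quadruple block length: 128-bit collision resistance requires a
# digest of >= 2 * 128 = 256 bits (birthday bound), and with a 64-bit
# block cipher such as PRINCEv2 that is 256 / 64 = 4 output blocks.
security_bits = 128
block_bits = 64
digest_bits = 2 * security_bits

assert digest_bits == 256
assert digest_bits // block_bits == 4  # "quadruple-block-length"
```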
Anamorphic E-Voting: Coercion-Resistant Through Fake and Real Votes
The paper addresses the challenging and timely issue of vote buying in electronic voting. Electronic voting is a well-established process in modern democracies; however, problems such as vote buying continue to pose a significant challenge. This paper addresses the issue by leveraging the novel concept of Anamorphic Encryption to enable voters to cast their original vote together with a hidden ``fake'' one. In this way, voters can pursue their true preferences unobstructed while providing a compliance proof to potential vote buyers. The security of our construction is proved through a formal security analysis that considers powerful adversaries aiming to breach user privacy and influence votes. We believe that this innovative e-voting approach can enhance the overall security of e-voting systems and pave the way toward preventing electoral manipulation.
Composition Theorems for Zero-Knowledge IOPs
Interactive Oracle Proofs (IOPs) enable a probabilistic verifier interacting with a prover to verify NP statements while reading only a few bits from the prover messages. Zero-Knowledge IOPs (ZK-IOPs) have the additional guarantee that a query-bounded (possibly malicious) verifier learns nothing about the NP witness.
We initiate a systematic study of ZK preservation under IOP composition, and prove general composition theorems for ZK-IOPs in the 2- and multi-IOP setting. Our main result shows that ZK is preserved in the setting of perfect, black-box, straight-line ZK (the standard setting for ZK-IOPs), if the outer IOP has an additional mild property that is satisfied by existing ZK-IOPs. Contrary to common belief, this does not follow from composition theorems for multiparty protocols (Kushilevitz, Lindell and Rabin, STOC'06).
Our composition theorems show that ZK-IOPs can be modularly designed by composing sub-protocols, and ZK of the composed system follows seamlessly from the ZK guarantees of its building blocks. Using our composition theorems, we easily derive both new and known results on ZK-IOPs in various settings, including ZK preservation under parallel/sequential composition, ZK of IOPs for sumcheck and codeswitching, ZK of IOPs for NP using arithmetization and sumcheck, and ZK preservation under IOP proof composition (reproving a result of Bootle, Chiesa and Liu, EC'22).
Skipping Class: Algebraic Attacks exploiting weak matrices and operation modes of Poseidon2(b)
We present new algebraic attacks on Poseidon2 and Poseidon2b. We exploit the specific structure of the matrices that define the linear layers of the hash function, which allows us to improve round-skipping for the constrained-input constrained-output (CICO) problem. The security of many circuit-friendly hash functions has been measured by their resistance against attacks on the CICO problem. However, we show how to boost our round-skipping attack when directly modelling algebraic preimage attacks on Poseidon2(b) in compression and sponge mode. To the best of our knowledge, our attack provides the first examples where finding preimages is easier than solving the corresponding CICO problem in Poseidon2(b). Furthermore, we describe the first algebraic collision attack that outperforms its algebraic preimage counterpart. We improve over state-of-the-art algebraic attacks for a range of parameters; e.g., for one recommended $128$-bit parameter set we improve over the previous state-of-the-art algebraic collision attack by a factor of $2^{106}$. However, due to the algebraic security margin, this does not mean the primitive falls short of its claimed security level. Finally, we discuss how our attacks can be mitigated without affecting the efficiency of Poseidon2(b).
Quantum Truncated Differential Attacks using Convolutions
This paper focuses on quantum key-recovery attacks on block ciphers. Previous works on quantum differential and truncated differential attacks like [Kaplan et al., ToSC 2016] have shown that classical algorithms for key-recovery, typically based on generating differential pairs and sieving them, can be accelerated by up to a quadratic speedup using variants of quantum search, quantum amplitude amplification, and quantum collision-finding.
In this paper, we introduce a new quantum truncated differential key-recovery attack, which leverages the quantum convolution algorithm introduced in [Schrottenloher, CRYPTO 2022] and previously used in linear cryptanalysis. We adapt this algorithm to the case of differential cryptanalysis, by rewriting the probability of a differential of an $n$-bit cipher as a convolution of functions with $2n$-bit input. We then construct a quantum state whose amplitudes encode the probability of the differential for different key guesses, and use this as the starting point of a quantum search. In some cases (although not on practical ciphers so far), the speedup is better than quadratic compared to classical attacks. We also extend the framework to related-key differential attacks.
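The convolution view of differential probability has a simple classical warm-up: the probability of a differential $(\delta_{in} \to \delta_{out})$ of $f$ is $2^{-n} \cdot \#\{x : f(x \oplus \delta_{in}) \oplus f(x) = \delta_{out}\}$, a cross-correlation of indicator functions over $\mathbb{F}_2^n$. The sketch below computes these counts (DDT entries) for a standard 4-bit S-box; it illustrates only the classical quantity being rewritten, not the quantum algorithm.

```python
# Classical warm-up for the convolution view: DDT entry of a toy 4-bit
# S-box (the PRESENT S-box). ddt_entry(din, dout) counts the x with
# f(x ^ din) ^ f(x) == dout; dividing by 16 gives the differential
# probability that the quantum state's amplitudes would encode.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt_entry(din: int, dout: int, f=SBOX) -> int:
    return sum(f[x ^ din] ^ f[x] == dout for x in range(16))

assert ddt_entry(0, 0) == 16                              # trivial differential
assert sum(ddt_entry(0x1, d) for d in range(16)) == 16    # each row sums to 2^n
```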
We give applications to a 9-round attack on QARMAv2-64 adapted from [Ahmadian et al., DCC 2024] and a 12-round related-key attack on AES-256 from [Boura et al., CRYPTO 2023], which show improvement over classical attacks and over Kaplan et al.'s strategy when taking into account the amount of memory and the type of quantum memory used (as our attack requires only quantum-accessible classical memory).
Linear-Communication ACSS with Guaranteed Termination and Lower Amortized Bound
An Asynchronous Complete Secret Sharing (ACSS) protocol enables a dealer to distribute $N$ Shamir shares such that all parties eventually receive their shares over an asynchronous network. It serves as a fundamental building block for asynchronous Secure Multiparty Computation (MPC) and Byzantine Agreement.
In this work, we focus on statistically secure ACSS with optimal resilience ($t < n/3$). Recently, Ji, Li, and Song [CRYPTO'24] proposed the first ACSS protocol with amortized linear communication. However, their scheme lacks guaranteed termination, and they identified the construction of a linear-communication ACSS with guaranteed termination as an open problem. Furthermore, their protocol requires a large amortized bound of $N = \Omega(n^{11} \kappa)$, where $n$ is the number of parties and $\kappa$ is the size of the secret. In this work, we resolve the open problem and significantly reduce the amortized bound by presenting a linear-communication ACSS protocol with guaranteed termination and a lower amortized bound of $N = \Omega(n^{4}+n\kappa)$. Our ACSS protocol can be directly applied to asynchronous MPC protocols, ensuring both guaranteed termination and improved communication per multiplication gate, as well as to asynchronous Byzantine Agreement.
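The gap between the two amortized thresholds is easy to make concrete. The sketch below ignores hidden constants and plugs in example parameters, so it is only an order-of-magnitude illustration of the bounds stated in the abstract.

```python
# Rough comparison of the amortized-batch thresholds from the abstract:
# prior work (Ji-Li-Song) needs N = Omega(n^11 * kappa) secrets before the
# linear-communication rate kicks in; this work needs N = Omega(n^4 + n*kappa).
# Hidden constants are ignored; parameters below are illustrative.
n, kappa = 100, 128          # example party count and secret size in bits

prior = n**11 * kappa        # ~1.3e24 secrets
new = n**4 + n * kappa       # ~1.0e8 secrets

assert prior > new
assert prior // new > 10**12  # many orders of magnitude smaller batches
```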