Papers updated in the last 31 days (490 results)

Last updated:  2026-03-01
PANCAKE: A SNARK with Plonkish Constraints, Almost-Free Additions, No Permutation Check, and a Linear-Time Prover
Yuxi Xue, Peimin Gao, Xingye Lu, and Man Ho Au
We present \(\mathsf{Pancake}\), a linear-time SNARK with a circuit-specific setup that eliminates the explicit representation and separate verification of addition gates in Plonkish constraint systems. Specifically, we consolidate wiring constraints and addition-gate constraints into a single family of general linear constraints, which can be enforced efficiently via a single sumcheck protocol. As a result, \(\mathsf{Pancake}\) achieves ``almost-free'' addition gates, which significantly reduces the witness size and directly improves prover efficiency while preserving full support for high-degree custom gates. Our implementation shows that \(\mathsf{Pancake}\) outperforms the state-of-the-art Plonkish SNARK \(\mathsf{HyperPlonk}\) (Chen et al., EUROCRYPT 2023) in terms of prover efficiency. For a circuit size of $2^{24}$ where half the gates are additions, \(\mathsf{Pancake}\) achieves prover speedups of $1.67\times$ (single-threaded) and $2.43\times$ (32-threaded), while also generating smaller proofs and maintaining comparable verification time.
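The batching idea behind a "single family of general linear constraints" can be illustrated with a toy random-linear-combination check. This is a sketch of the standard folding trick, not \(\mathsf{Pancake}\)'s actual sumcheck; the field size, constraint matrix, and function names are our own:

```python
import random

# Toy illustration (not the paper's protocol): a set of general linear
# constraints C*w = 0 over a prime field can be batch-checked with one
# random linear combination, the same idea that lets a single sumcheck
# absorb both wiring constraints and addition gates at once.
P = 2**31 - 1  # a Mersenne prime, standing in for the proof field

def check_linear_constraints(C, w, trials=2):
    """Probabilistically check C*w = 0 (mod P) via random row combinations."""
    for _ in range(trials):
        r = [random.randrange(P) for _ in C]          # random row weights
        # fold the rows: (sum_i r_i C_i) . w  ==  sum_i r_i (C_i . w)
        acc = sum(ri * sum(c * x for c, x in zip(row, w))
                  for ri, row in zip(r, C)) % P
        if acc != 0:
            return False
    return True

# a wiring constraint w0 = w1 and an addition gate w0 + w1 = w2,
# expressed as one family of linear constraints
C = [[1, -1, 0],
     [1, 1, -1]]
assert check_linear_constraints(C, [3, 3, 6])       # satisfying witness
assert not check_linear_constraints(C, [3, 4, 6])   # broken wiring caught
```

A false claim survives each trial only with probability 1/P, so two trials already make the batched check overwhelmingly sound.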
Last updated:  2026-02-28
Block-Accumulate Codes: Accelerated Linear Codes for PCGs and ZK
Vladimir Kolesnikov, Stanislav Peceny, Rahul Rachuri, Srinivasan Raghuraman, Peter Rindal, and Harshal Shah
Linear error-correcting codes with fast encoding and high minimum distance are a central primitive across modern cryptography. They appear prominently in at least two domains: (1) pseudorandom correlation generators (PCGs), which enable sublinear-communication generation of correlations such as oblivious transfer and vector oblivious linear evaluation, and (2) zero-knowledge proof systems, where linear-time encoders underpin proof soundness and scalability. In both settings, the prover or sender must multiply by a large generator matrix $\mathbf{G}$, often with dimensions in the millions, making computational efficiency the dominant bottleneck. We propose a generalized paradigm for building crypto-friendly codes with provable minimum distance. Roughly speaking, these codes are based on the idea of randomized turbo codes such as repeat accumulate codes. We prove that their asymptotic minimum distance is linear and compute the exact expected weight spectrum of the codes for concrete sizes. We observe that our codes achieve full Gilbert-Varshamov minimum distance and outperform all prior constructions. We construct several novel codes, the most promising of which we call Block-Accumulate codes. Our codes are the fastest on both GPUs and CPUs. If we restrict our attention to codes with provable distance, our code is $8\times$ faster than state of the art on a CPU and $50\times$ faster on a GPU. Even if we use aggressive parameters with conjectured distance, our code is $3\times$ and $20\times$ faster, respectively. Under these parameters, this yields overall PCG speedups of $2.5\times$ on the CPU and $15\times$ on the GPU, achieving about 200 million OTs or binary Beaver triples per second on the GPU (excluding the one-time 10 ms GGM seed expansion). We also observe a $3\times$ speedup and half the peak memory consumption of the Blaze zero-knowledge (PCS) scheme of Brehm et al. (Eurocrypt 25).
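The "repeat accumulate" starting point that the authors generalize can be sketched in a few lines. This is a toy encoder with illustrative parameters of our own choosing; the paper's Block-Accumulate codes refine this structure, and the interleaver here is just a seeded shuffle:

```python
import random

def ra_encode(msg_bits, rep=3, seed=0):
    """Toy repeat-accumulate encoder: repeat, interleave, then accumulate.
    The accumulator is a running XOR (a prefix sum over GF(2)), which is
    what makes encoding linear-time in the codeword length."""
    repeated = [b for b in msg_bits for _ in range(rep)]   # repeat each bit
    perm = list(range(len(repeated)))
    random.Random(seed).shuffle(perm)                      # fixed interleaver
    interleaved = [repeated[i] for i in perm]
    out, acc = [], 0
    for b in interleaved:                                  # accumulate
        acc ^= b
        out.append(acc)
    return out

msg = [1, 0, 1, 1]
cw = ra_encode(msg)
assert len(cw) == 3 * len(msg)
# linearity over GF(2): enc(x) XOR enc(y) == enc(x XOR y)
x, y = [1, 0, 0, 1], [0, 1, 1, 1]
xy = [a ^ b for a, b in zip(x, y)]
assert [a ^ b for a, b in zip(ra_encode(x), ra_encode(y))] == ra_encode(xy)
```

All three stages are linear over GF(2), so the encoder is multiplication by a (sparse, structured) generator matrix, which is exactly the operation whose cost dominates in PCG and ZK applications.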
Last updated:  2026-02-28
Module Learning With Errors and Structured Extrapolated Dihedral Cosets
Weiqiang Wen and Jinwei Zheng
The Module Learning With Errors (MLWE) problem is the fundamental hardness assumption underlying the key encapsulation and signature schemes ML-KEM and ML-DSA, which have been selected by NIST for post-quantum cryptography standardization. Understanding its quantum hardness is crucial for assessing the security of these standardized schemes. Inspired by the equivalence between LWE and Extrapolated Dihedral Cosets Problem (EDCP) in [Brakerski, Kirshanova, Stehlé and Wen, PKC 2018], we show that the MLWE problem is as hard as a variant of the EDCP, which we refer to as the structured EDCP (stEDCP). This extension from EDCP to stEDCP relies crucially on the algebraic structure of the ring underlying MLWE: the extrapolation depends not only on the noise rate, but also on the ring’s degree. In fact, an stEDCP state forms a superposition over an exponential (in ring degree) number of possibilities. Our equivalence result holds for MLWE defined over power-of-two cyclotomic rings with constant module rank, a setting of particular relevance in cryptographic applications. Moreover, we present a reduction from stEDCP to EDCP. Therefore, to analyze the quantum hardness of MLWE, it may be advantageous to study stEDCP, which might be easier than EDCP.
Last updated:  2026-02-28
High-Precision Functional Bootstrapping for CKKS from Fourier Extension
Song Bian, Yunhao Fu, Ruiyu Shen, Haowen Pan, Anyu Wang, and Zhenyu Guan
We introduce a new (amortized) functional bootstrapping framework over the CKKS homomorphic encryption (HE) scheme based on Fourier extension. While approximating the modular reduction function in CKKS bootstrapping through Fourier series is a well-known technique, how such a method can be efficiently generalized to functional bootstrapping is less well understood. In this work, we show that, by constructing proper Fourier extensions, any function with a bounded domain in the smoothness class $C^{\kappa}$ can be approximated by a degree-$n$ Fourier series with errors of order $O(n^{-\kappa-2})$ (except at the singularities), improving on the previous global error bound of $O(n^{-1})$ [AKP2025]. To achieve this bound, we propose a new way of constructing Fourier extensions, such that the extended functions appear as smooth as possible in the sense of a Fourier approximation. By implementing our functional bootstrapping over OpenFHE, we demonstrate that we can improve the data precision by $10$-$27$ bits and reduce the amortized FBS latency by $1.1$-$2\times$ over a variety of benchmarking functions.
Last updated:  2026-02-27
Accelerating FALCON: Speed Records for FALCON on Xilinx FPGAs
Sharath Pendyala, Rahul Magesh, Elif Bilge Kavun, and Aydin Aysu
FALCON is a NIST-selected post-quantum digital signature scheme whose performance bottleneck lies in the SamplerZ subroutine for discrete Gaussian sampling. We present a throughput-optimized, custom hardware implementation of SamplerZ that introduces several architectural and algorithmic innovations to significantly accelerate signature generation. Our design incorporates a datapath-aware floating-point arithmetic pipeline that strategically balances latency and resource utilization. Our novel algorithmic innovations include an Estrin's Scheme-based polynomial evaluator and a constant-latency BerExp routine using floating-point exponentiation IP to eliminate fixed-point decomposition critical paths. Additionally, we optimize rejection handling through parallel sampling loops and propose a speed-optimized flooring circuit. These advancements lower the sampling time by 55%-81% and overall FALCON signature generation time by 36%-53% compared to the state-of-the-art FPGA implementation. In a landmark result, our work is the first to demonstrate a Xilinx FPGA SamplerZ design that outperforms state-of-the-art software (by 15%) and ASIC (by 16%) designs, advancing the practical deployment of post-quantum signatures on reconfigurable hardware.
Last updated:  2026-02-27
Bird of Prey: Practical Signature Combiners Preserving Strong Unforgeability
Jonas Janneck
Following the announcement of the first winners of the NIST post-quantum cryptography standardization process in 2022, cryptographic protocols are now undergoing migration to the newly standardized schemes. In most cases, this transition is realized through a hybrid approach, in which algorithms based on classical hardness assumptions, such as the discrete logarithm problem, are combined with post-quantum algorithms that rely on quantum-resistant assumptions, such as the Short Integer Solution (SIS) problem. A combiner for signature schemes can be obtained by simply concatenating the signatures of both schemes. This construction preserves unforgeability of the underlying schemes; however, it does not extend to stronger notions, such as strong unforgeability. Several applications, including authenticated key exchange and secure messaging, inherently require strong unforgeability, yet no existing combiner is known to achieve this property. This work introduces three practical combiners that preserve strong unforgeability and all BUFF (beyond unforgeability features) properties. Each combiner is tailored to a specific class of classical signature schemes capturing all broadly used schemes that are strongly unforgeable. Remarkably, all combiners can be instantiated with any post-quantum signature scheme in a black-box way, making deployment practical and significantly less error-prone. The proposed solutions are furthermore highly efficient and have signatures that are at most the size of those produced by the (insecure) concatenation combiner. For instance, our most efficient combiner enables the combination of EdDSA with ML-DSA, yielding a signature size that is smaller than the sum of an individual EdDSA signature and an individual ML-DSA signature. Additionally, we identify a novel signature property that we call random-message validity and show that it can be used to replace the BUFF transform with the more efficient Pornin-Stern transform. The notion may be of independent interest.
Last updated:  2026-02-27
Sum-check protocol for approximate computations
Dor Bitan, Zachary DeStefano, Shafi Goldwasser, Yuval Ishai, Yael Tauman Kalai, and Justin Thaler
Motivated by the mismatch between floating-point arithmetic, which is intrinsically approximate, and verifiable computing protocols for exact computations, we develop a generalization of the sum-check protocol. Our generalization proves claims of the form $\sum_{x \in \{0,1\}^v} g(x) \approx H$, where $g$ is a low-degree $v$-variate polynomial over an integral domain $\mathbb{U}$. The verifier performs its check in each round of the protocol using a tunable error parameter $\delta$. If $\Delta$ is the error in the prover's initial claim, then the soundness error of our protocols degrades gracefully with $\delta/\Delta$. In other words, if the initial error $\Delta$ is large relative to $\delta$, then the soundness error is small, meaning the verifier is very likely to reject. Unlike the classical sum-check protocol, which is fundamentally algebraic, our generalization exploits the metric structure of low-degree polynomials. The protocol can be instantiated over various domains, but is most natural over the complex numbers, where the analysis draws on the behavior of polynomials over the unit circle. We also analyze the protocol under the Fiat-Shamir transform, revealing a new intermediate security phenomenon that appears intrinsic to approximation. Prior work on verifiable computing for numerical tasks typically verifies that a prover exactly executed a computation that only approximates the desired function. In contrast, our protocols treat approximation as a first-class citizen: the verifier's checks are relaxed to accept prover messages that are only approximately consistent with the claimed result. This establishes the first black-box feasibility result for approximate arithmetic proof systems: the protocol compiler is independent of how arithmetic operations are implemented, requiring only that they satisfy error bounds. 
This opens a path to verifying approximate computations while sidestepping much of the prover overhead imposed by existing techniques that require encoding real-valued data into finite field arithmetic.
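The relaxed per-round check can be sketched for the simplest (multilinear) case. This is a toy sketch under our own simplifications: degree 1 per variable, uniform real challenges, and an illustrative tolerance `delta`; it mirrors the shape of the verifier's relaxed check, not the paper's protocol or its soundness analysis:

```python
import random

# Toy approximate sum-check over the reals for a multilinear g:
# each round the prover's univariate polynomial is checked only up to
# a tolerance delta, and the claim is updated at a random challenge.
def approx_sumcheck(g, v, claim, delta=1e-9, rng=random.Random(0)):
    def partial_sum(prefix, x_i):
        # sum g over boolean suffixes, with prefix + [x_i] fixed
        total, rest = 0.0, v - len(prefix) - 1
        for m in range(1 << rest):
            suffix = [(m >> j) & 1 for j in range(rest)]
            total += g(prefix + [x_i] + suffix)
        return total

    prefix = []
    for _ in range(v):
        s0, s1 = partial_sum(prefix, 0.0), partial_sum(prefix, 1.0)
        if abs(s0 + s1 - claim) > delta:      # relaxed round check
            return False
        r = rng.uniform(0.0, 1.0)
        claim = s0 + r * (s1 - s0)            # round polynomial at r
        prefix.append(r)
    return abs(g(prefix) - claim) <= delta    # final oracle check

g = lambda x: x[0] * x[1] + 2.0 * x[2]        # multilinear, 3 variables
H = sum(g([(m >> j) & 1 for j in range(3)]) for m in range(8))
assert approx_sumcheck(g, 3, H)               # honest claim accepted
assert not approx_sumcheck(g, 3, H + 1.0)     # claim off by 1 rejected
```

With floating-point arithmetic the honest prover's messages match the verifier's recomputation only approximately, which is precisely the gap the tolerance `delta` absorbs.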
Last updated:  2026-02-27
Lattice HD Wallets: Post-Quantum BIP32 Hierarchical Deterministic Wallets from Lattice Assumptions
Conor Deegan, James Fitzwater, Kamil Doruk Gur, and David Nugent
Hierarchical deterministic (HD) wallets, standardized as BIP32, allow users to manage a tree of cryptographic key pairs from a single master seed. A defining feature is non-hardened derivation: child public keys can be derived from a parent public key alone, enabling watch-only wallets where a server generates fresh receiving addresses while the signing key remains offline. Existing constructions rely on the algebraic structure of elliptic curve public keys, and recovering this functionality in the post-quantum setting is an open problem. We present two post-quantum HD wallet constructions. The first uses ML-DSA and supports hardened (private-key-dependent) derivation with proofs of unlinkability and unforgeability. The second, our primary contribution, uses Raccoon-G, a variant of the Raccoon signature scheme with Gaussian-distributed secrets. We modify Raccoon-G to publish full, unrounded public keys, preserving linearity. Since the sum of Gaussians is again Gaussian with predictably wider variance, derived keys remain statistically close to freshly generated ones, enabling non-hardened public key derivation. We prove unlinkability and unforgeability under standard lattice assumptions, and introduce a method for generating rerandomizable Raccoon-G key pairs from fixed randomness with provable existential unforgeability. Both constructions are implemented in Rust. To our knowledge, this is the first post-quantum HD wallet construction that recovers BIP32's full public key derivation functionality with provable security under standard assumptions.
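The linearity argument behind non-hardened derivation can be sketched concretely. This is a toy LWE-style example with unrealistically small, made-up parameters; the seeded sampler merely stands in for the paper's Gaussian machinery, and nothing here reflects Raccoon-G's actual parameters:

```python
import hashlib
import random

# Toy sketch of non-hardened lattice key derivation (illustrative only).
# The public key pk = A*s + e is published unrounded, so anyone holding
# (A, pk) can derive a child public key without knowing the secret s.
Q, N = 12289, 8
rng = random.Random(1)
A = [[rng.randrange(Q) for _ in range(N)] for _ in range(N)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % Q for row in M]

def tweak(pk, index):
    """Derive (d, f) deterministically from the parent pk and child index."""
    seed = hashlib.sha256(repr((pk, index)).encode()).digest()
    r = random.Random(seed)
    gauss = lambda: round(r.gauss(0, 2)) % Q    # stand-in for a sampler
    return [gauss() for _ in range(N)], [gauss() for _ in range(N)]

s = [rng.randrange(3) for _ in range(N)]        # small secret
e = [rng.randrange(3) for _ in range(N)]        # small error
pk = [(a + b) % Q for a, b in zip(matvec(A, s), e)]

d, f = tweak(pk, index=0)
# watch-only server: child public key from the parent public key alone
pk_child = [(p + ad + fi) % Q for p, ad, fi in zip(pk, matvec(A, d), f)]
# offline signer: the matching child secret key
s_child = [(si + di) % Q for si, di in zip(s, d)]
e_child = [(ei + fi) % Q for ei, fi in zip(e, f)]
assert pk_child == [(a + b) % Q for a, b in zip(matvec(A, s_child), e_child)]
```

The identity being checked is $A(s+d) + (e+f) = (As+e) + (Ad+f)$: the tweak derived from the public key alone lands on exactly the key pair the offline signer can reconstruct.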
Last updated:  2026-02-27
Wedges, oil, and vinegar -- An analysis of UOV in the exterior algebra
Lars Ran
The Unbalanced Oil and Vinegar (UOV) construction has been a central framework in multivariate cryptography since its appearance in 1999. In fact, four schemes in the second round of NIST's additional call for signatures are UOV-based. For efficiency considerations, most of these schemes are defined over a field of characteristic 2. As a side effect, the polar forms of the UOV public maps are not only symmetric, but also alternating. In this work, we propose a new key-recovery attack on UOV over fields of characteristic 2 that exploits this alternating property. We interpret the polar forms of the UOV public map as elements of the exterior algebra. Moreover, we show that these forms exhibit a structure that is dependent on the secret oil space. Using the Plücker embedding, we also express the dual of the secret oil space as an element of the exterior algebra. Utilizing the structure of the public maps, we can formulate relations on this secret element in this algebra. Finally, we demonstrate that the secret oil space can be recovered using sparse linear algebra techniques. This new attack has a lower time complexity than previous methods and reduces the security of $\texttt{uov-Ip}$, $\texttt{uov-III}$, and $\texttt{uov-V}$ by 4, 11, and 20 bits, respectively. In addition, the attack is applicable to MAYO$_2$ and reduces its security by 28 bits.
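The characteristic-2 fact the attack builds on, that polar forms are alternating as well as symmetric, is easy to check directly. Below is our own toy sanity check with a small random quadratic form over GF(2), not anything from the paper:

```python
import itertools
import random

# Over GF(2), the polar form B(x, y) = Q(x + y) + Q(x) + Q(y) of a
# quadratic form Q is not just symmetric but alternating (B(x, x) = 0),
# the structural property the exterior-algebra attack exploits.
n = 5
rng = random.Random(0)
# random strictly upper-triangular matrix defining Q
M = [[rng.randrange(2) if j > i else 0 for j in range(n)] for i in range(n)]

def Q(x):
    """Quadratic form x -> sum_{i<j} M[i][j] x_i x_j over GF(2)."""
    return sum(M[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % 2

def B(x, y):
    """Polar form; in characteristic 2, addition and subtraction coincide."""
    xy = [(a + b) % 2 for a, b in zip(x, y)]
    return (Q(xy) + Q(x) + Q(y)) % 2

for x in itertools.product([0, 1], repeat=n):
    assert B(x, x) == 0                      # alternating
    y = tuple(rng.randrange(2) for _ in range(n))
    assert B(x, y) == B(y, x)                # symmetric
```

Alternating bilinear forms are exactly the elements of the second exterior power, which is what licenses reading the public polar forms as elements of the exterior algebra.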
Last updated:  2026-02-27
From $\textsf{TS-SUF-2}$ to $\textsf{TS-SUF-4}$: Practical Security Enhancements for $\textsf{FROST2}$ Threshold Signatures
Syh-Yuan Tan, Will Wang, and Ryan Chow
Threshold signature schemes play a vital role in securing digital assets within blockchain and distributed systems. $\textsf{FROST2}$ stands out as a practical threshold Schnorr signature scheme, noted for its efficiency and compatibility with standard verification processes. However, under the one-more discrete logarithm assumption, with static corruption and centralized key generation settings, $\textsf{FROST2}$ has been shown by Bellare et al. (in CRYPTO 2022) to achieve only $\textsf{TS-SUF-2}$ security, which is a consequence of its vulnerability to $\textsf{TS-UF-3}$ attacks. In this paper, we address this security limitation by presenting an enhanced variant of $\textsf{FROST2}$, namely, $\textsf{FROST2}\texttt{+}$ which achieves the $\textsf{TS-SUF-4}$ security level under the same computational assumptions as the original $\textsf{FROST2}$. $\textsf{FROST2}\texttt{+}$ strengthens $\textsf{FROST2}$ by integrating additional pre-processing token verifications that help mitigate $\textsf{TS-UF-3}$ and $\textsf{TS-UF-4}$ vulnerabilities while maintaining practical efficiency. We show that $\textsf{FROST2}\texttt{+}$ can achieve $\textsf{TS-SUF-4}$ security not only under the same conditions as the original $\textsf{FROST2}$ analysis, but also when initialized with a distributed key generation protocol such as $\textsf{PedPoP}$. Our benchmark using Zcash's $\textsf{FROST}$ library shows that the performance of $\textsf{FROST2}\texttt{+}$ is comparable to $\textsf{FROST2}$.
Last updated:  2026-02-27
Structure-Preserving Compressing Primitives: Vector Commitments and Accumulators and Applications
Stephan Krenn, Omid Mir, and Daniel Slamanig
Compressing primitives, such as accumulators and vector commitments, allow one to represent large data sets with some compact, ideally constant-size value. Moreover, they support operations like proving membership or non-membership with minimal, ideally also constant-size, storage and communication overhead. In recent years, these primitives have found numerous practical applications, with many constructions based on various hardness assumptions. So far, however, it has been elusive to construct these primitives in a strictly structure-preserving setting, i.e., in a bilinear group in a way that messages, commitments and openings are all elements of the two source groups. Interestingly, backed by existing impossibility results, not even conventional commitments with such constraints are known in this setting. However, in many practical applications it would be convenient or even required to be structure-preserving, e.g., to commit to or accumulate group elements. In this paper we investigate whether strictly structure-preserving compressing primitives can be realized. We close this gap by presenting the first strictly structure-preserving commitment that is shrinking (and in particular constant-size). We circumvent existing impossibility results by employing a more structured message space, i.e., a variant of the Diffie-Hellman message space. Our main results are constructions of structure-preserving vector commitments as well as structure-preserving accumulators. We first discuss generic constructions and then present concrete constructions under the Diffie-Hellman Exponent assumption. To demonstrate the usefulness of our constructions, we discuss various applications. Most notably, we present the first entirely practical constant-size ring signature scheme in bilinear groups (i.e., the discrete logarithm setting). Concretely, using the popular BLS12-381 pairing-friendly curve, our ring signatures achieve a size of roughly 8500 bits.
Last updated:  2026-02-27
A Framework for Advanced Signature Notions
Patrick Struck and Maximiliane Weishäupl
The beyond unforgeability features formalize additional security properties for signature schemes. We develop a general framework of binding properties for signature schemes that encompasses existing beyond unforgeability features and reveals new notions. Furthermore, we give new results regarding various transforms: We show that the transform by Cremers et al. (SP’21) achieves all of our security notions; we give a modified definition of so-called weak keys, and show that the Pornin–Stern transform (ACNS’05) also achieves all notions if there are no weak keys. Finally, we connect our framework to unforgeability notions.
Last updated:  2026-02-27
Batched & Non-interactive Blind Signatures from Lattices
Foteini Baldimtsi, Rishab Goyal, and Aayush Yadav
Non-interactive blind signatures (NIBS; Eurocrypt '23) allow a signer to asynchronously generate presignatures for a recipient, ensuring that only the intended recipient can extract a "blinded" signature for a random message. We introduce a new generalization called non-interactive batched blind signatures (NIBBS). Our goal is to reduce the computation and communication costs for signers and receivers by batching multiple blind signature queries. More precisely, we define the property of 'succinct communication' which requires that the communication cost from signer to receiver be independent of the batch size. NIBBS is well suited to large-scale deployments, requiring only minimal signer-side effort. We design a NIBBS scheme and prove its security based on the hardness of lattice assumptions (in the random oracle model). When instantiated with the low-depth PRF candidate "Crypto Dark Matter" (TCC '18) and the succinct lattice-based proof system for rank-1 constraint systems (Crypto '23), our final signature size is 308 KB with <1 KB communication.
Last updated:  2026-02-27
Improved Plantard Arithmetic for Lattice-based Cryptography
Junhao Huang, Jipeng Zhang, Haosong Zhao, Zhe Liu, Ray C. C. Cheung, Çetin Kaya Koç, and Donglong Chen
This paper presents an improved Plantard's modular arithmetic (Plantard arithmetic) tailored for Lattice-Based Cryptography (LBC). Based on the improved Plantard arithmetic, we present faster implementations of two LBC schemes, Kyber and NTTRU, running on Cortex-M4. The intrinsic advantage of Plantard arithmetic is that one multiplication can be saved when performing modular multiplication by a constant. However, the original Plantard arithmetic is not very practical in LBC schemes because of the limitation on the unsigned input range. In this paper, we improve the Plantard arithmetic and customize it for the existing LBC schemes with theoretical proof. The improved Plantard arithmetic not only inherits its aforementioned advantage but also accepts signed inputs, produces signed output, and enlarges its input range compared with the original design. Moreover, compared with the state-of-the-art Montgomery arithmetic, the improved Plantard arithmetic has a larger input range and smaller output range, which allows better lazy reduction strategies during the NTT/INTT implementation in current LBC schemes. All these merits make it possible to replace the Montgomery arithmetic with the improved Plantard arithmetic in LBC schemes on some platforms. After applying this novel method to Kyber and NTTRU schemes using 16-bit NTT on Cortex-M4 devices, we show that the proposed design outperforms the known fastest implementation that uses Montgomery and Barrett arithmetic. Specifically, compared with the state-of-the-art Kyber implementation, applying the improved Plantard arithmetic in Kyber results in a speedup of 25.02% and 18.56% for NTT and INTT, respectively. Compared with the reference implementation of NTTRU, our NTT and INTT achieve speedups of 83.21% and 78.64%, respectively. As for the LBC KEM schemes, we set new speed records for Kyber and NTTRU running on Cortex-M4.
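The saved multiplication can be seen in the original (unsigned) Plantard multiplication, which is the starting point this paper improves: the constant $b$ is folded into the precomputed value $b\,q^{-1} \bmod 2^{2l}$, so only one wide multiplication remains at runtime. The following is our own toy sketch with illustrative parameters, checked against the defining congruence $c \equiv -ab\,2^{-2l} \pmod q$:

```python
# Toy sketch of original Plantard multiplication by a constant.
# The output is a*b*(-2^{-2l}) mod q; the word size l and modulus q
# below are illustrative, not a real scheme's parameters.
l, q = 8, 17                       # half word size and a small odd modulus
R2 = 1 << (2 * l)                  # 2^{2l}
q_inv = pow(q, -1, R2)             # q^{-1} mod 2^{2l}, precomputed once

def plantard_mul(a, b_qinv):
    """Multiply a by a constant b given b_qinv = b * q^{-1} mod 2^{2l}."""
    u = (a * b_qinv) % R2          # the single runtime multiplication
    return (((u >> l) + 1) * q) >> l

# exhaustive check of the defining congruence for all inputs below q
two2l_inv = pow(R2, -1, q)
for a in range(q):
    for b in range(q):
        c = plantard_mul(a, (b * q_inv) % R2)
        assert c % q == (-a * b * two2l_inv) % q
        assert 0 <= c <= q
```

As in Montgomery arithmetic, the extra factor $-2^{-2l}$ is absorbed by premultiplying constants (e.g., NTT twiddle factors), so it costs nothing in an NTT-based implementation.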
Last updated:  2026-02-27
Key Updatable Hash Based VRF
Suman Ghosh, Ratna Dutta, and Sourav Mukhopadhyay
Unbiased, unpredictable, and publicly verifiable randomness is essential for a wide range of blockchain-based Web3 applications. Verifiable Random Functions (VRFs) naturally satisfy these requirements. For practical deployment, however, a VRF scheme must support efficient key generation and allow multiple evaluations across different blockchain rounds. In this work, we present a post-quantum secure, key-updatable VRF construction built from symmetric cryptographic primitives, including hash functions and pseudorandom generators (PRGs). The core of our design is a quantum-secure Extended Merkle Signature Scheme (XMSS) structured over multiple layers. We reorganize the XMSS framework in a systematic way to integrate it seamlessly into our VRF construction. Compared to existing approaches, our scheme offers improved key generation efficiency while enabling multiple evaluations from a single secret–verification key pair.
Last updated:  2026-02-27
On the Need for (Quantum) Memory with Short Outputs
Zihan Hao, Zikuan Huang, and Qipeng Liu
In this work, we establish the first separation between computation with bounded and unbounded space, for problems with short outputs (i.e., working memory can be exponentially larger than output size), both in the classical and the quantum setting. Towards that, we introduce a problem called nested collision finding, and show that optimal query complexity cannot be achieved without exponential memory. Our result is based on a novel ``two-oracle recording'' technique, where one oracle ``records'' the computation's long outputs under the other oracle, effectively reducing the time-space trade-off for short-output problems to that of long-output problems. We believe this technique will be of independent interest for establishing time-space tradeoffs in other short-output settings.
Last updated:  2026-02-27
Bridging Privacy and Utility: A Verifiable Framework for Data Valuation via Zero-Knowledge Proofs
Ruibang Liu, Minyu Chen, Dengji Ma, and Guoqiang Li
Deep learning's hunger for high-quality data has catalyzed a burgeoning economy of decentralized data marketplaces. However, a fundamental trust deficit stifles this ecosystem: buyers fear data poisoning, while sellers fear data leakage. Although the Shapley value offers a rigorous economic framework for fair compensation, its calculation traditionally requires a Trusted Third Party (TTP) to access raw data, creating a single point of failure for privacy. Verifying data valuation without compromising confidentiality remains an open challenge. In this paper, we present ZK-DV, the first Zero-Knowledge Proof (ZKP) system designed for verifiable, privacy-preserving data valuation. ZK-DV enables a seller to prove that a claimed valuation score (based on Gradient Shapley) is mathematically consistent with the underlying private data and the buyer's model, without revealing either. Our key technical insight is the architectural coupling of model training and valuation: we construct a specialized arithmetic circuit that folds the valuation logic into the backpropagation, extracting marginal utility scores from intermediate gradients. This design, implemented via the GKR protocol with a hybrid commitment strategy, amortizes the heavy cryptographic overhead through batched processing. Extensive experiments on the MNIST dataset demonstrate that ZK-DV is practical: by optimizing batch sizes, we achieve a $2.7\times$ speedup in proof generation, while maintaining a quantization fidelity of $\rho=1.0$ and a low verification cost ($< 0.2$s). ZK-DV thus bridges the gap between cryptographic integrity and economic fairness, paving the way for trustless data exchange.
Last updated:  2026-02-27
Improved Radix-based Approximate Homomorphic Encryption for Large Integers via Lightweight Bootstrapped Digit Carry
Gyeongwon Cha, Dongjin Park, and Joon-Woo Lee
Homomorphic encryption (HE) for high-precision integers has been steadily researched through various schemes; however, these approaches incurred severe overhead as the bit-width grew, requiring larger parameters to support integers of several hundred to a thousand bits. A significant breakthrough was recently made by Boneh and Kim (Crypto'25). Their scheme constructs a residue number system from the different slots of a single CKKS ciphertext. This enables arithmetic on thousand-bit integers without increasing parameters. However, the RNS approach of Boneh and Kim, which performs approximate reduction, fundamentally cannot support non-arithmetic operations. Alternatively, the radix-based approach proposed by Kim (CHES'25) can perform non-arithmetic operations, but requires $O(k)$ bootstraps for a bit-width $k$. This makes it highly inefficient, and thus impractical, for non-arithmetic operations requiring thousand-bit precision. This paper proposes an improved radix-based CKKS scheme, centered on a 2-step algorithm that reduces the number of bootstraps required for the digit carry operation to $O(\log k)$. The proposed scheme requires only 3-6 bootstraps to restore the result of a 32-2048 bit integer multiplication to its unique representation, which enables the efficient implementation of non-arithmetic operations such as comparison. Furthermore, our scheme extends the radix-based system, previously limited to prime-power moduli, to support an efficient homomorphic reduction algorithm for arbitrary moduli. Finally, our experiments demonstrate substantial efficiency gains compared to Boneh and Kim. For example, for moduli used in homomorphic signatures (Curve25519, P-384, and 2048-bit RSA), our scheme can process up to 4$\times$ more integers in a single ciphertext. Specifically for Curve25519, we also reduce the latency by 1.4$\times$, shortening the amortized time by 5.6$\times$ compared to Boneh and Kim and achieving a final processing time of 1.34 seconds per data point.
Last updated:  2026-02-27
AgentCrypt: Advancing Privacy and (Secure) Computation in AI Agent Collaboration
Harish Karthikeyan, Yue Guo, Leo de Castro, Antigoni Polychroniadou, Leo Ardon, Udari Madhushani Sehwag, Sumitra Ganesh, and Manuela Veloso
As AI agents increasingly operate in real-world, multi-agent environments, ensuring reliable and context-aware privacy in agent communication is critical, especially to comply with evolving regulatory requirements. Traditional access controls are insufficient, as privacy risks in agentic applications often arise after access is granted; agents may use information in ways that compromise privacy, such as messaging humans, sharing context with other agents, making tool calls, persisting data, or generating arbitrary functions of private information. Existing approaches often treat privacy as a binary constraint, whether data is shareable or not, overlooking nuanced, role-specific, and computation-dependent privacy needs essential for regulatory compliance. Agents, including those based on large language models, are inherently probabilistic and heuristic. There is no formal guarantee of how an agent will behave for any query, making them ill-suited for operations critical to security. To address this, we introduce AgentCrypt, a three-tiered framework for fine-grained, secure agent communication that adds a protection layer atop any AI agent platform. AgentCrypt spans unrestricted data exchange (Level 1) to fully encrypted computation using techniques such as homomorphic encryption (Level 3). Unlike much of the recent work, our approach guarantees that the privacy of tagged data is always preserved, even when the underlying AI model makes errors. AgentCrypt ensures privacy across diverse interactions and enables computation on otherwise inaccessible data, overcoming barriers such as data silos. We implemented and tested it with Langgraph and Google ADK, demonstrating versatility across platforms. We also introduce a benchmark dataset simulating privacy-critical tasks at all privacy levels, enabling systematic evaluation and fostering the development of regulatable machine learning systems for secure agent communication and computation.
Last updated:  2026-02-27
Optimal Threshold Traitor Tracing
Sourav Das, Pratish Datta, Aditi Partap, Swagata Sasmal, and Mark Zhandry
Threshold encryption distributes decryption capability across $n$ parties such that any $t$ of them can jointly decrypt a ciphertext, while smaller coalitions learn nothing. However, once $t$ or more parties collude, traditional threshold schemes provide no accountability: a coalition of $t$ or more parties can pool its keys into a pirate decoder that enables unrestricted decryption, all without any risk of being exposed. To address this, Boneh, Partap, and Rotem [CRYPTO '24] introduced threshold traitor tracing (TTT), which equips threshold encryption with traceability. Yet, all known TTT schemes either suffer from parameter sizes growing with at least $n^{1/3}$, or rely on indistinguishability obfuscation to achieve optimal parameters. In this paper, we present the first TTT schemes with optimal parameters, where public keys, secret keys, and ciphertexts are all bounded by ${\sf poly}(\lambda,\log n)$, built solely from standard cryptographic tools and assumptions. Our first construction relies on the decisional Bilinear Diffie–Hellman (DBDH) assumption in prime order bilinear groups. Our second scheme is a candidate construction based on the Learning with Errors (LWE) assumption, which relies on the existence of secret sharing schemes with certain properties. This construction is plausibly post-quantum secure, and supports ramp-thresholds where decryption requires a larger coalition than those tolerated by security. Both of our constructions provide traceability against coalitions of arbitrary sizes. To achieve these results, we introduce a new primitive, Attribute-Based Threshold Encryption (ABTE), which generalizes both threshold and attribute-based encryption. We then combine ABTE with Mixed Functional Encryption through a new compiler to obtain our TTT schemes. We believe ABTE is a powerful primitive that may have independent applications beyond optimal TTT.
Last updated:  2026-02-26
Conditionally Linkable Attribute-Based Signatures
Minh Pham, Khoa Nguyen, Slim Bettaieb, Mukul Kulkarni, and Willy Susilo
Attribute-Based Signatures (ABS) enable users to authenticate messages under expressive attribute policies while remaining anonymous. Existing ABS variants, however, treat linkability as a static, system-wide property: signatures are either always unlinkable, as in standard ABS, or globally linkable, as in traceable or accountable extensions. This rigid dichotomy fails to capture scenarios where correlation should arise only under explicitly declared conditions. This work introduces Conditionally Linkable Attribute-Based Signatures (CLABS), a framework extending ABS with programmable, context-dependent linkability. Each certified user with attribute $x$ is associated with a linking set $L_x$ over a public context space $\mathcal{T}$. For each context $\tau\in\mathcal{T}$, a public function $f_\tau$ specifies how attributes are compared. Two signatures are publicly linkable if and only if $\tau\in L_x\cap L_{x'}$ and $f_\tau(x)=f_\tau(x')$; otherwise they remain unlinkable. This enables selective, verifiable correlation without central trust and with leakage limited to the opt-in bit. We formalize the syntax and security notions of CLABS, capturing conditional linkability and context-aware anonymity, thereby ensuring privacy and verifiable linkage under voluntary participation. CLABS unifies global unlinkability and fine-grained, context-specific linkage within a single formal framework. We realize CLABS generically using three modular components: a pseudorandom function for deterministic tag generation, a conventional signature for attribute certification, and a signature of knowledge (SoK) proving correct tag computation and Boolean policy satisfaction without revealing $x$. Finally, we instantiate CLABS under standard lattice assumptions in the quantum random oracle model (QROM), achieving post-quantum security while supporting arbitrary Boolean policies. The techniques we employ to prove circuit satisfiability and tag correctness may be of independent interest.
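The public linking predicate described above can be illustrated with a small sketch. This is a toy model, not the paper's construction: the names `tag` and `publicly_linkable` and the signature tuple layout are hypothetical, and the real scheme derives tags with a PRF whose correct computation is proven inside the SoK.

```python
import hashlib

def tag(tau: str, projected_attr: str) -> str:
    # Stand-in for the PRF-based tag: equal values of f_tau(x) yield
    # equal tags, so tag equality reveals exactly the opt-in linkage bit.
    return hashlib.sha256(f"{tau}|{projected_attr}".encode()).hexdigest()

def publicly_linkable(sig1, sig2) -> bool:
    # sig = (tau, tag, linking_set): two signatures link iff they share a
    # context tau that both signers opted into and their tags coincide,
    # i.e. tau in L_x ∩ L_x' and f_tau(x) = f_tau(x').
    tau1, t1, L1 = sig1
    tau2, t2, L2 = sig2
    return tau1 == tau2 and tau1 in L1 and tau1 in L2 and t1 == t2
```

Under this toy model, signatures issued in a context outside either signer's linking set, or with differing projected attributes, remain unlinkable to the public verifier.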
Last updated:  2026-02-26
NIROPoK-Based Post-Quantum Sidechain Design on Ethereum
Hassan Khodaiemehr, Khadijeh Bagheri, Saeid Yazdinejad, and Chen Feng
This paper presents a pioneering approach to constructing sidechains on the Ethereum network with a focus on post-quantum security. Our framework integrates a novel quantum-resistant version of a non-interactive random oracle proof of knowledge (NIROPoK) scheme, alongside a quantum-resistant proof-of-stake mechanism and a post-quantum bridge based on the Dilithium digital signature scheme. By harnessing these advanced cryptographic techniques, we establish a robust defense against quantum computing threats while ensuring enhanced privacy for blockchain transactions. By conducting a thorough analysis and implementing our approach, we demonstrate the feasibility and effectiveness of creating quantum-resistant sidechains within the Ethereum ecosystem. Our proposed sidechain is also capable of securing Ethereum transactions from quantum threats using Ethereum’s current architecture and security measures.
Last updated:  2026-02-26
Non-interactive Blind Signatures with Threshold Issuance
Foteini Baldimtsi, Lucjan Hanzlik, and Aayush Yadav
Non-interactive blind signatures (NIBS) capture the minimal setting of blind signatures where the message space is restricted to unstructured random strings. They enable a signer to pre-compute presignatures without prior interaction, while ensuring that only the intended recipient can derive the corresponding blind signature. In this work, we consider the problem of threshold issuance of NIBS. Specifically, we introduce the notion of non-interactive threshold blind signatures (NITBS), where a user obtains partial presignatures from a threshold of signers and locally combines them into a valid blind signature. We provide a formal treatment of this primitive by defining the corresponding security notions of blindness and one-more unforgeability. We then present the first concrete construction of NITBS, obtained by adapting the Pointcheval-Sanders (PS) signature scheme, and establish its security in the algebraic group model. Our micro-benchmarking results show that our construction attains the smallest presignature and signature sizes and the fastest issuance among all existing NIBS schemes.
Last updated:  2026-02-26
What a Wonderful World: zkSNARKs in the Algebraic Group Model are Universally Composable
Gaspard Anthoine, Dario Fiore, and Mahak Pancholi
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) are important cryptographic primitives critical in many real-world applications. zkSNARKs are not used in isolation but are deployed within a broader context in which other cryptographic protocols may be concurrently executed. Universal Composability (UC) allows rigorous analysis of cryptographic primitives being used in such arbitrary contexts. A UC analysis is even more desirable for popular, well-audited, and heavily deployed zkSNARKs already being used in practice. Prior works that study the UC security of existing zkSNARKs (without modifications) are either not modular, hence requiring case-by-case analysis for new proof systems, or have largely focused on zkSNARKs in the Random Oracle Model (ROM). The latter includes zkSNARKs with logarithmic proof sizes compiled from Interactive Oracle Proofs. This state of the art leaves out a large family of very efficient, often constant-size, zkSNARKs that rely on the Algebraic Group Model (AGM) and optionally on the ROM. This includes zkSNARKs compiled from Polynomial Interactive Oracle Proofs, such as Plonk and Marlin, among others. In this work, we address the UC security for unmodified zkSNARKs that are proven secure in the AGM (+ROM). Our approach is modular: we identify simple, and mostly standard, properties on the underlying zkSNARK that imply UC security. We observe that checking these properties for existing zkSNARKs is a surprisingly simple task using the rigorous formulation of the AGM from Jaeger and Mohan (CRYPTO'24). The simplicity and modularity of our framework make it easy to use for concluding UC security of several zkSNARKs in the same setting. Concretely, using our framework we establish that Plonk and Marlin are UC secure without any overhead.
Last updated:  2026-02-26
Orthus: Practical Sublinear Batch-Verification of Lattice Relations from Standard Assumptions
Madalina Bolboceanu, Jonathan Bootle, Vadim Lyubashevsky, Antonio Merino-Gallardo, and Gregor Seiler
The past several years have seen a rapid rise in practical lattice-based proof systems, with linear-sized zero-knowledge proofs forming the foundation of many of the most efficient quantum-safe privacy protocols, and succinct proofs rapidly catching up and surpassing other quantum-safe alternatives in many metrics. A recent comparison of lattice-based aggregate signatures (Ethereum Foundation, 2025) involving the hash-based aggregate signature scheme Plonky3 and the instantiation of aggregate signatures from Falcon from the LaZer lattice library (Lyubashevsky, Seiler, Steuer, CCS 2024) using LaBRADOR (Beullens, Seiler, Crypto 2023), showed that lattice-based constructions have an advantage in terms of proof size and prover time, but are around an order of magnitude slower with regard to verification time. In general, it appears that slower verification times are the main obstacle to the adoption of succinct lattice-based proof systems. In this work, we introduce and implement Orthus, a proof system with sub-linear verification designed for relations that naturally arise in lattice-based constructions. Asymptotically, the verification time grows with the square root of the witness size, and for a concrete example of aggregating Falcon signatures our implementation reduces the verifier running time by a factor of $9\times$ when aggregating $2^{17}$ signatures.
Last updated:  2026-02-26
Bittersweet Signatures: Bringing LWR to a Picnic for Hardware-Friendly MPC-in-the-Head
Brieuc Balon, Gianluca Brian, Sebastian Faust, Carmit Hazay, Elena Micheli, and François-Xavier Standaert
Post-quantum signature schemes are becoming increasingly important due to the threat of quantum computers to classical cryptographic schemes. Among the approaches considered in the literature, the MPC-in-the-head paradigm introduced by Ishai et al. (STOC'07) provides an innovative solution for constructing zero-knowledge proofs by exploiting Multi-Party Computation (MPC). This technique has proven to be a versatile tool for designing efficient cryptographic schemes, including post-quantum signatures. Building on the MPC-in-the-head paradigm, we introduce Bittersweet signatures, a new class of signature schemes based on the Learning With Rounding (LWR) assumption. Their main advantage is conceptual simplicity: by exploiting (almost) key-homomorphic pseudorandom functions (PRFs), a cryptographic object that preserves pseudorandomness while allowing linear operations on keys, we obtain a very regular design offering nice opportunities for parallel implementations. Theoretically, analyzing Bittersweet signatures requires addressing significant challenges related to the (carry) leakage that almost key-homomorphic operations lead to. Concretely, Bittersweet signatures natively lead to competitive signature sizes, trading moderate software performance overheads for hardware performance gains when compared to state-of-the-art MPC-in-the-head schemes (e.g., relying on code-based assumptions), while admittedly lagging a bit behind recent algorithms based on the VOLE-in-the-head or Threshold-Computation-in-the-head frameworks. Besides, their scalability and algebraic structure make them promising candidates for leakage-resilient implementations. The new abstractions we introduce additionally suggest interesting research directions towards further optimization and generalization.
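The "almost" key-homomorphism of an LWR-style PRF, and the carry it leaks, can be seen in a toy sketch. The parameters and the expansion function `a_of_x` below are ours (illustrative only), not the scheme's: the point is only that $F_{k_1}(x) + F_{k_2}(x)$ equals $F_{k_1+k_2}(x)$ up to a rounding carry of at most one.

```python
import hashlib

# Toy LWR parameters: round from Z_q down to Z_p (not the paper's values)
q, p, n = 2**16, 2**8, 8

def a_of_x(x: bytes):
    # Public vector a(x) in Z_q^n, derived from x by hashing (hypothetical)
    return [int.from_bytes(hashlib.sha256(x + bytes([i])).digest()[:2], "big") % q
            for i in range(n)]

def prf(k, x: bytes):
    # F_k(x) = round_p(<a(x), k> mod q): pseudorandom, and linear in k
    # up to the error introduced by the deterministic rounding step
    s = sum(ai * ki for ai, ki in zip(a_of_x(x), k)) % q
    return (s * p) // q
```

Because `floor(u*p/q) + floor(v*p/q)` differs from `floor((u+v)*p/q)` by at most 1, evaluating with the summed key matches the sum of evaluations up to a one-unit carry modulo $p$; this is the leakage the abstract's analysis must handle.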
Last updated:  2026-02-26
Anonymity of X-Wing and its Variants
Jiawei Bao and Jiaxin Pan
X-Wing (Barbosa et al., CiC Volume 1, Issue 1) is a hybrid key encapsulation mechanism (KEM) currently considered for standardization by IETF and deployed by major companies such as Google to ensure a secure transition to post-quantum cryptography. It combines a classical X25519 KEM with the post-quantum ML-KEM-768. In this paper, we propose the first analysis of the anonymity of X-Wing. We are interested in tight and memory-tight reductions that offer stronger security guarantees. We first establish in the standard model that for any IND-CCA secure KEM, weak anonymity implies full anonymity, and our reduction is tight not only in success probability and time but also in memory consumption. We then prove in the random oracle model that X-Wing achieves weak anonymity if both X25519 and ML-KEM-768 are weakly anonymous. The former can even be proven without a hardness assumption. Our proof on the weak anonymity of X-Wing does not preserve the memory-tightness of the underlying KEMs. To improve it, we propose a slight variant of X-Wing that preserves memory-tightness. Finally, we improve the existing IND-CCA proof of the original X-Wing by Barbosa et al. using our new memory-tight analysis.
Last updated:  2026-02-26
How To Make Delegated Payments on Bitcoin: A Question for the AI Agentic Future
Jay Taylor, Paul Gerhart, and Sri AravindaKrishnan Thyagarajan
AI agents and custodial services are increasingly being entrusted as intermediaries to conduct transactions on behalf of institutions. The stakes are high: The digital asset market is projected to exceed \$16 trillion by 2030, where exchanges often involve proprietary, time-sensitive goods. Although industry efforts like Google’s Agent-to-Payments (AP2) protocol standardize how agents authorize payments, they leave open the core challenge of fair exchange: ensuring that a buyer obtains the asset if and only if the seller is compensated without exposing sensitive information. We introduce proxy adaptor signatures (PAS), a new cryptographic primitive that enables fair exchange through delegation without sacrificing atomicity or privacy. A stateless buyer issues a single request and does not need to manage long-term cryptographic secrets while proxies complete the exchange with a seller. The seller is guaranteed payment if the buyer can later reconstruct the purchased witness; meanwhile, the proxies remain oblivious to the witness throughout the protocol. We formalize PAS under a threshold model that tolerates the collusion of up to $t-1$ proxies. We also present an efficient construction from standard primitives that is compatible with Bitcoin, Cardano, and Ethereum. Finally, we evaluate a Rust implementation that supports up to 30 proxies. Our prototype is concretely efficient: buyer and seller computations take place in microseconds, proxy operations in milliseconds, and on-chain costs are equivalent to those of a standard transaction without fair exchange.
Last updated:  2026-02-26
On the construction of Barnes-Wall lattices and their application in cryptography
Artyom Kuninets, Anton Leevik, Ekaterina Malygina, Evgeniy Melnichuk, and Denis Nabokov
In this work, we investigate the application of Barnes-Wall lattices in post-quantum cryptographic schemes. We survey and analyze several constructions of Barnes-Wall lattices, including subgroup chains, the generalized $k$-ing construction, and connections with Reed-Muller codes, highlighting their equivalence over both $\mathbb{Z}[i]$ and $\mathbb{Z}$. Building on these structural insights, we introduce a new algorithm for efficient sampling from the discrete Gaussian distribution on $BW_{2^n}$ lattices. Our approach exploits the $k$-ing and squaring constructions to achieve low-variance sampling, which is particularly relevant for cryptographic applications such as digital signature schemes. We further examine the cryptographic hardness of the Lattice Isomorphism Problem (LIP), showing that Barnes-Wall lattices provide inherent resistance to hull-based and other known attacks. Our results on sampling algorithms, combined with existing advances in the cryptanalysis of LIP, indicate that Barnes-Wall lattices hold strong potential for the design of post-quantum schemes based on LIP.
Last updated:  2026-02-26
Oracle-Based Multistep Strategy for Solving Polynomial Systems Over Finite Fields and Algebraic Cryptanalysis of the Aradi Cipher
Roberto La Scala and Sharwan K. Tiwari
The multistep solving strategy is a divide-and-conquer approach: when a multivariate polynomial system is computationally infeasible to solve directly, one variable is assigned over the elements of the base finite field, and the procedure is recursively applied to the resulting simplified systems. In a previous work by the same authors (among others), this approach proved effective in the algebraic cryptanalysis of the Trivium cipher. In this paper, we present a new formulation of the corresponding algorithm based on a Depth-First Search strategy, along with a novel complexity analysis leveraging tree structures. We also introduce the notion of an ``oracle function'', which is intended to determine whether evaluating a new variable is required to simplify the current polynomial system. This notion allows us to unify all previously proposed variants of the multistep strategy, including the classical hybrid approach, by appropriately selecting the oracle function. Finally, we employ the multistep solving strategy in the cryptanalysis of the NSA's recently introduced low-latency block cipher Aradi, achieving a first full-round algebraic attack that exposes structural features in its symbolic model.
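The branch-and-simplify skeleton of the multistep strategy can be sketched in miniature over $\mathbb{F}_2$. This toy (all names ours) always branches, i.e. its "oracle function" is trivial; real implementations instead hand sufficiently simplified systems to a direct solver such as a Gröbner-basis computation.

```python
# A polynomial over GF(2) is a set of monomials; a monomial is a frozenset
# of variable indices (frozenset() is the constant monomial 1).

def substitute(poly, var, bit):
    # Assign x_var := bit and reduce coefficients mod 2 via XOR
    out = set()
    for mono in poly:
        if var in mono:
            if bit == 0:
                continue            # monomial vanishes
            mono = mono - {var}     # x_var = 1: drop the variable
        out ^= {mono}               # symmetric difference = addition in GF(2)
    return out

def dfs_solve(system, variables, assignment=None):
    assignment = assignment or {}
    if any(p == {frozenset()} for p in system):
        return None                 # some polynomial reduced to 1: prune
    if not variables:
        return assignment if all(not p for p in system) else None
    v, rest = variables[0], variables[1:]
    for bit in (0, 1):              # branch over the base field GF(2)
        reduced = [substitute(p, v, bit) for p in system]
        sol = dfs_solve(reduced, rest, {**assignment, v: bit})
        if sol is not None:
            return sol
    return None
```

Each assignment shrinks every polynomial, and contradictions prune whole subtrees, which is the tree-structure view the complexity analysis formalizes.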
Last updated:  2026-02-26
Locally Recoverable Data Availability Sampling
Seunghyun Cho, Eunyoung Seo, and Young-Sik Kim
Data Availability Sampling (DAS) enables light clients to probabilistically verify that a large published object remains retrievable by sampling a small number of coded symbols and checking consistency. Most existing DAS designs are predicated on MDS-style global recovery thresholds and therefore yield an essentially binary availability signal: either sufficient evidence accumulates to enable global reconstruction, or it does not. In this work we introduce Locally Recoverable Data Availability Sampling (LR-DAS), which leverages locally recoverable codes with disjoint recovery groups to provide graded availability evidence. A verifier sequentially certifies groups: once it has verified $r$ distinct openings in a group, it can reconstruct the entire group locally and monotonically increase a certified availability fraction. The central cryptographic challenge is preventing splicing attacks, where an adversary mixes locally consistent views that do not arise from a single global codeword. We define locality-aware erasure-code commitments that formalize position binding, local binding, and an explicit local-global consistency requirement, and we give a construction that enforces local-global consistency using a single out-of-domain algebraic linking check per certified group. We prove graded soundness via overshoot bounds that decompose into a sampling term and a negligible cryptographic failure term, covering both missing-groups and missing-fraction withholding patterns. We provide two concrete instantiations: a succinct pairing-based realization and a transparent realization based on low-degree/proximity testing, and we evaluate the resulting trade-offs against representative DAS and repair-oriented approaches.
Last updated:  2026-02-26
Statistically Secure Asynchronous MPC with Linear Communication and $\mathcal{O}(n^5)$ Additive Overhead
Xiaoyu Ji and Yifan Song
Secure multiparty computation (MPC) allows mutually distrustful parties to jointly evaluate a common function while keeping their inputs private. In this work, we focus on improving the communication complexity of information-theoretic asynchronous MPC with optimal resilience ($t < n/3$). The current state-of-the-art work in this setting by Goyal, Liu-Zhang, and Song [Crypto’24] achieves amortized linear communication per gate with $\Omega(n^{14})$ additive overhead. In comparison, the best-known result in the synchronous setting by Goyal, Song, and Zhu [Crypto’20] achieves amortized linear communication per gate with only $\mathcal{O}(n^{6})$ additive overhead, revealing a significant gap that we aim to close. This work closes this gap and goes further. We present the first statistically secure asynchronous MPC protocol achieving amortized linear communication per gate with only $\mathcal{O}(n^{5})$ additive overhead assuming a functionality for Agreement on Common Set (ACS). With the best-known instantiation for ACS, we obtain an asynchronous MPC protocol in the plain model with additive overhead $\mathcal{O}((\kappa+n)n^4\log\kappa)$ in expectation, where $\kappa$ is the security parameter. In the setting of statistical security with optimal resilience, this even surpasses the best synchronous result when $n\geq\sqrt{\kappa\log\kappa}$. Technically, our contributions include (i) a new protocol for generating Beaver triples with linear communication per triple and $\mathcal{O}(n^3)$ additive overhead assuming functionalities for generating random degree-$t$ Shamir sharings and ACS, and (ii) a new protocol for generating random degree-$t$ Shamir sharings with linear communication per sharing and $\mathcal{O}(n^{5})$ additive overhead assuming a functionality for ACS.
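For context, the classic Beaver trick shows how the triples generated by protocols like the above are consumed: given shares of $x$, $y$ and a preprocessed triple $(a, b, c)$ with $c = ab$, one secure multiplication costs only the opening of two values. A minimal two-party additive-sharing sketch (field size and names ours; openings are simulated by reconstruction):

```python
import random

P = 2**61 - 1  # illustrative prime field for additive secret sharing

def share(v):
    r = random.randrange(P)
    return [r, (v - r) % P]

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    # Parties open d = x - a and e = y - b (here simulated by
    # reconstructing), then locally compute shares of x*y as
    # z = c + d*b + e*a + d*e.
    d = (reconstruct(x_sh) - reconstruct(a_sh)) % P
    e = (reconstruct(y_sh) - reconstruct(b_sh)) % P
    z_sh = []
    for i in range(2):
        zi = (c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P
        if i == 0:
            zi = (zi + d * e) % P  # the public d*e term is added once
        z_sh.append(zi)
    return z_sh
```

Since $d$ and $e$ are masked by the uniformly random $a$ and $b$, opening them leaks nothing about $x$ and $y$, which is why cheap triple generation directly drives down the per-gate communication.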
Last updated:  2026-02-26
The Quantum Decoherence Model: Everlasting Composable Secure Computation and More
Nico Döttling, Alexander Koch, Sven Maier, Jeremias Mechler, Anne Müller, Jörn Müller-Quade, and Marcel Tiepelt
Quantum cryptography makes it possible to achieve security goals which are unobtainable using classical cryptography alone: it offers the promise of everlasting privacy. That is, an adversary trying to attack a protocol must succeed during the run of the protocol. After the protocol has terminated, security holds unconditionally. In this work, we initiate the study of a new model which we call the quantum decoherence model (QDM). In a nutshell, this model captures adversaries that are computationally bounded during the run of a protocol (and some time after), but become computationally unbounded long after the protocol terminates. Importantly, once the adversary becomes computationally unbounded, he can only remember a bounded number of qubits from before the computational bound was lifted. We provide a variant of the Universal Composability framework which captures the new notion of quantum decoherence and augment it with quantum random oracles. As our main contribution, we construct a non-interactive commitment scheme achieving unconditional and statistical security against malicious senders and everlasting security against malicious receivers under our new security notion. Such commitments imply general secure multiparty computation with everlasting security. Furthermore, we show that our core technique can be applied to a broader spectrum of problems. We show that it gives rise to everlasting public key encryption and OT in the QDM. Finally, we also consider the weaker notion of incompressible encryption in the setting of quantum decoherence, and show that post-quantum IND-CPA secure public key encryption is sufficient to realize this notion without resorting to random oracles. At the technical core of our constructions is a new, conceptually simple yet powerful reverse entropic uncertainty relation.
Last updated:  2026-02-26
Collusion-Safe Proxy Re-Encryption
Haotian Yin, Jie Zhang, Wanxin Li, Yuji Dong, Eng Gee Lim, and Dominik Wojtczak
Proxy re-encryption (PRE) is a cryptographic scheme enabling a delegator (user $i$) to delegate its decryption right to a valid delegatee (user $j$) through a proxy, who cannot extract any information about the message during the procedure. An important security notion is security against collusion between the proxy and the delegatee. In this case, the adversary has the secret key of the delegatee, $\mathsf{sk}_j$, and the re-encryption key, $\mathsf{rk}_{i\to j}$. Master secret security was first formalised by Ateniese et al. (NDSS'05) to capture the secrecy of $i$'s secret key during collusion. This notion was further formalised by Zhou et al. (ASIACRYPT'23) as the indistinguishability of re-encrypted ciphertext against chosen-message attacks, called collusion safety, which implies master secret security. In this paper, we show that an existing PRE scheme is not master secret secure as claimed, and that many other schemes are not master secret secure either. We then propose a generic construction that achieves collusion safety from any IND-CPA secure PRE at the cost of doubling the key size, with much better generality and efficiency than the existing secret-sharing-based technique.
Last updated:  2026-02-26
SQISign on ARM
Luca De Feo, Li-Jie Jian, Ting-Yuan Wang, and Bo-Yin Yang
We present the first vectorized implementation of SQIsign for high-performance Arm architectures. SQIsign is a promising candidate in the NIST On-Ramp Digital Signatures Call Round 2 due to its most compact key and signature sizes. However, its signing performance remains a primary bottleneck, particularly the ideal-to-isogeny conversion. The conversion requires a large number of operations on elliptic curves and Abelian varieties, which depend on finite field arithmetic. Despite recent algorithmic improvements, high-performance implementations and efficient vectorized finite field arithmetic for SQIsign remain largely unexplored. Our main contribution is the first demonstration of non-trivial vectorization speedups for SQIsign. By leveraging the NEON instruction set, we implement highly efficient finite field arithmetic and batched elliptic curve operations tailored for 2-dimensional isogeny chain computations. This accelerates the subroutine by 2.24$\times$ over the state-of-the-art. Moreover, our improvements are completely orthogonal to the recent algorithmic improvement Qlapoti (Asiacrypt 2025), offering similar performance gains in the SQIsign signing algorithm. When combined with Qlapoti, our implementation achieves a speedup of more than 2.24$\times$ in signing at NIST security level I. We expect our work to inspire further SQIsign optimization from a vectorization perspective, especially for quaternion computations with precise bounds.
Last updated:  2026-02-26
Cryptanalysis of ChiLow with Cube-Like Attacks
Shuo Peng, Jiahui He, Kai Hu, Zhongfeng Niu, Shahram Rasoolzadeh, and Meiqin Wang
Proposed at EUROCRYPT 2025, ChiLow is a family of tweakable block ciphers and a related PRF built on the novel nonlinear $\chi\chi$ function, designed to enable efficient and secure embedded code encryption. The only key-recovery results on ChiLow are due to its designers and reach at most 4 out of 8 rounds, which is not enough for a low-latency cipher like ChiLow: more cryptanalysis effort is needed. Exploiting the low degree of the $\chi\chi$ function, we present three kinds of cube-like attacks on ChiLow-32 under both single-tweak and multi-tweak settings:
- a conditional cube attack in the multi-tweak setting, which enables full key recovery for 5-round and 6-round instances with time complexities $2^{32}$ and $2^{120}$, data complexities $2^{23.58}$ and $2^{40}$, and negligible memory requirements, respectively;
- a borderline cube attack in the multi-tweak setting, which recovers the full key of 5-round ChiLow-32 with time, data, and memory complexities of $2^{32}$, $2^{18.58}$, and $2^{33.56}$, respectively. For 6-round ChiLow-32, it achieves full key recovery with time, data, and memory complexities of $2^{34}$, $2^{33.58}$, and $2^{54.28}$, respectively. Both attacks are practical;
- an integral attack on 7-round ChiLow-32 in the single-tweak setting. By combining a 4-round borderline cube with three additional rounds, we reduce the round-key search space from $2^{96}$ to $2^{73}$. Moreover, we present a method to recover the master key from round-key information, allowing us to recover the master key of 7-round ChiLow-32 with a time complexity of $2^{127.78}$.
All of our attacks respect the security claims made by the designers. Although our analysis does not compromise the security of the full 8-round ChiLow, we hope that our results offer valuable insights into its security properties.
Last updated:  2026-02-26
End-to-End Encrypted Git Services
Ya-Nan Li, Yaqing Song, Qiang Tang, and Moti Yung
Git services, such as GitHub, have been widely used to manage projects and enable collaborations among multiple entities. Just as in messaging and cloud storage, where end-to-end security has been gaining increased attention, such a level of security is also demanded for Git services. Content in the repositories (and the data/code supply-chain facilitated by Git services) could be highly valuable, whereas the threat of system breaches has become routine nowadays. However, existing studies of Git security to date (mostly open source projects) suffer in two ways: they provide only very weak security, and they have a large overhead. In this paper, we initiate the needed study of efficient end-to-end encrypted Git services. Specifically, we formally define the syntax and critical security properties, and then propose two constructions that provably meet those properties. Moreover, our constructions have the important property of platform-compatibility: They are compatible with current Git servers and preserve all basic Git operations, thus can be directly tested and deployed on top of existing platforms. Furthermore, the overhead we achieve is only proportional to the actual difference caused by each edit, instead of the whole file (or even the whole repository) as is the case with existing works. We implemented both constructions and tested them directly on several public GitHub repositories. Our evaluations show (1) the effectiveness of platform-compatibility, and (2) the significant efficiency improvement we got (while provably providing much stronger security than prior ad-hoc treatments).
Last updated:  2026-02-25
Time-Space Trade-Offs for Sumcheck
Anubhav Baweja, Alessandro Chiesa, Elisabetta Fedele, Giacomo Fenzi, Pratyush Mishra, Tushar Mopuri, and Andrew Zitek-Estrada
The sumcheck protocol is a fundamental building block in the design of probabilistic proof systems, and has become a key component of recent work on efficient succinct arguments. We study time-space tradeoffs for the prover of the sumcheck protocol in the streaming model, and provide upper and lower bounds that tightly characterize the efficiency achievable by the prover.
- For sumcheck claims about a single multilinear polynomial, we demonstrate an algorithm that runs in time $O(kN)$ and uses space $O(N^{1/k})$ for any $k \geq 1$. For non-adaptive provers (a class which contains all known sumcheck prover algorithms) we show that this tradeoff is optimal.
- For sumcheck claims about products of multilinear polynomials, we describe a prover algorithm that runs in time $O(N(\log \log N + k))$ and uses space $O(N^{1/k})$ for any $k \geq 1$. We show that, conditioned on the hardness of a natural problem about multiplication of multilinear polynomials, any ``natural'' prover algorithm that uses space $O(N^{1/2 - \varepsilon})$ for some $\varepsilon > 0$ must run in time $\Omega(N(\log \log N + \log \varepsilon))$.
We implement and evaluate the prover algorithm for products of multilinear polynomials. We show that our algorithm consumes up to $120\times$ less memory compared to the linear-time prover algorithm, while incurring a time overhead of less than $2\times$. The foregoing algorithms and lower bounds apply in the interactive proof model. We show that in the polynomial interactive oracle proof model one can in fact design a new protocol that achieves a better time-space tradeoff of $O(N^{1/k})$ space and $O(N(\log^* N + k))$ time for any $k \geq 1$.
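As background, the baseline these tradeoffs improve on is the textbook linear-time, linear-space sumcheck for a single multilinear polynomial given by its evaluation table: each round the prover sends the degree-1 restriction $g_i$, the verifier checks $g_i(0)+g_i(1)$ against the running claim, and the table is folded at the challenge. A toy sketch (field and names ours), with prover and verifier interleaved for brevity:

```python
import random

P = 2**61 - 1  # illustrative prime field

def sumcheck(table):
    # table: evaluations of a multilinear f over {0,1}^n, N = 2^n entries.
    # Returns (claimed_sum, accepted). This prover keeps the full table in
    # memory; the paper's streaming provers achieve O(N^{1/k}) space.
    claim = sum(table) % P
    total, cur = claim, list(table)
    while len(cur) > 1:
        half = len(cur) // 2
        g0, g1 = sum(cur[:half]) % P, sum(cur[half:]) % P   # g_i(0), g_i(1)
        if (g0 + g1) % P != claim:
            return total, False                   # verifier's round check
        r = random.randrange(P)                   # verifier's challenge
        claim = ((1 - r) * g0 + r * g1) % P       # new claim: g_i(r)
        # fold: fix the current variable to r in the evaluation table
        cur = [((1 - r) * cur[j] + r * cur[half + j]) % P for j in range(half)]
    return total, cur[0] % P == claim             # final oracle check f(r)
```

The fold halves the table each round, so an honest run performs $O(N)$ field operations in total, matching the $k = 1$ point of the tradeoff curve.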
Last updated:  2026-02-25
VROOM: Accelerating (Almost All) Number-Theoretic Cryptography Using Vectorization and the Residue Number System
Simon Langowski, Kaiwen He, and Srini Devadas
Modular arithmetic with a large prime modulus is a dominant computational cost in number-theoretic cryptography. Modular operations are especially challenging to parallelize efficiently on CPUs using vector instructions; standard CPU implementations rely on costly carry operations and permutation instructions to align with the multiplication datapath, negating the benefits of vectorization. We develop vectorized algorithms for modular addition and multiplication, and present a new, constant-time modular multiplication algorithm suitable for general moduli, prime or otherwise. Our method uses a Residue Number System (RNS) representation to align the arithmetic naturally with wide vector units, and strategically eliminates extraneous instructions. Existing works either require the use of customized hardware or fail to show latency improvements. Reducing the latency of modular arithmetic results in speedups for cryptographic applications. We accelerate RSA-4096 signatures by $4.0\times$ (verify) and $1.3\times$ (sign) over OpenSSL, and speed up BLS signature verifications by $3.43\times$ over the assembly-optimized BLST library. To facilitate broad practical adoption, we plan to upstream our implementation into BoringSSL, where it will directly benefit real-world TLS and cryptographic deployments.
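The RNS idea itself can be shown in a few lines: a number is held as residues modulo pairwise-coprime moduli, arithmetic is purely lane-wise with no carries crossing lanes (which is what maps well onto vector units), and CRT reconstruction is needed only to leave the representation. A toy sketch with moduli of our choosing, not the paper's:

```python
from math import prod

# Pairwise coprime moduli (three Mersenne primes); illustrative only
MODULI = (2**13 - 1, 2**17 - 1, 2**19 - 1)
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Each lane is independent: no carries propagate between moduli,
    # so every lane can occupy one slot of a wide vector register
    return tuple(ai * bi % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    # Chinese Remainder Theorem reconstruction back to Z_M
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x = (x + ri * Mi * pow(Mi, -1, m)) % M
    return x
```

Results are exact as long as the true integer value stays below $M$; the hard part the paper addresses is performing the final reduction modulo the cryptographic prime without leaving the RNS domain.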
Last updated:  2026-02-25
WOTS-Tree: Merkle-Optimized Winternitz Signatures for Post-Quantum Bitcoin
Javier Mateos
We present WOTS-Tree, a stateful hash-based signature scheme for Bitcoin that combines WOTS+ one-time signatures with a binary Merkle tree, supporting up to $2^{21}$ independent signatures per address. The construction instantiates XMSS with parameters specifically optimized for Bitcoin's UTXO model, using a dual hash function design: SHA-256 truncated to 128 bits ($n=16$, $w=256$) for WOTS+ chain evaluations, and full 256-bit SHA-256 for Merkle tree compression. Deployed as dual leaves within BIP-341 Taproot (compatible with the proposed BIP-360 Pay-to-Merkle-Root), the default (hardened) mode provides: (i) a fast-path for single-use UTXOs at 353 bytes, and (ii) a fallback tree for Replace-By-Fee and address reuse at 675 bytes ($K=1{,}024$). For Lightning channels, $K=2^{21}$ yields 1,028-byte witnesses with a 19-second one-time setup. An opt-in compact mode using truncated 128-bit Merkle nodes reduces witnesses to 515--692 bytes at the cost of reduced Merkle binding security ($\approx 60$ bits). WOTS-Tree achieves strict L1 verification bounds of 4,601 hashes ($\approx 0.009$ ms on SHA-NI hardware). The default parameterization provides 115.8-bit classical and 57.9-bit quantum forgery resistance, with the Merkle binding at $\approx 124$ bits, exceeding the WOTS+ forgery bound in both classical and quantum settings. We provide a complete security reduction with concrete bounds, a dual hash instantiation analysis, and a reference implementation with comprehensive test coverage. Default-mode witnesses are $4$--$7\times$ smaller than hypertree variants while aligning naturally with Bitcoin's spend-once transaction model.
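The core Winternitz mechanism with the paper's fast-path parameters ($n = 16$-byte truncated SHA-256, $w = 256$) can be sketched for a single digit. This toy (function names ours) omits the checksum chains, domain-separation tweaks, and Merkle authentication of the full scheme: signing reveals an intermediate chain node, and verification simply finishes the chain.

```python
import hashlib, os

N, W = 16, 256  # n = 16-byte hash outputs, Winternitz parameter w = 256

def chain(x: bytes, steps: int) -> bytes:
    # Iterate SHA-256 truncated to N bytes; real WOTS+ also mixes in
    # per-step tweaks/bitmasks for tighter security reductions
    for _ in range(steps):
        x = hashlib.sha256(x).digest()[:N]
    return x

def keygen():
    sk = os.urandom(N)
    return sk, chain(sk, W - 1)   # public key = end of the chain

def sign_digit(sk: bytes, d: int) -> bytes:
    return chain(sk, d)           # reveal the d-th chain node

def verify_digit(pk: bytes, d: int, sig: bytes) -> bool:
    return chain(sig, W - 1 - d) == pk  # finish the remaining w-1-d steps
```

One-wayness of the hash prevents forging a smaller digit (which would require walking the chain backwards); the checksum chains omitted here are what prevent forging a larger one.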
Last updated:  2026-02-25
Fast cube roots in Fp2 via the algebraic torus
Youssef El Housni
Computing cube roots in quadratic extensions of finite fields is a subroutine that arises in elliptic-curve point decompression, hash-to-curve and isogeny-based protocols. We present a new algorithm that, for $p \equiv 1 \pmod{3}$ (which holds in all settings where $\mathbb{F}_{p^2}$ cube roots arise in practice), reduces the $\mathbb{F}_{p^2}$ cube root to operations entirely in the base field $\mathbb{F}_p$ via the algebraic torus $\mathbb{T}_2(\mathbb{F}_p)$ and Lucas sequences. We prove correctness in all residuosity cases and implement the algorithm using the $\texttt{gnark-crypto}$ open-source library. Benchmarks on six primes spanning pairing-based and isogeny-based cryptography show $1.6$–$2.3\times$ speed-ups over direct (addition chain) exponentiations in $\mathbb{F}_{p^2}$. We also extend the approach to $p \equiv 2 \pmod{3}$ and, more generally, to any odd $n$-th roots in quadratic towers $\mathbb{F}_{p^{2^k}}$ with $\gcd(n, p+1) = 1$.
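As a baseline for comparison, the direct-exponentiation approach the paper speeds up is easiest to state in the simpler case $p \equiv 2 \pmod{3}$, where cubing is a bijection on $\mathbb{F}_p$; a minimal sketch (ours, for intuition only, not the paper's torus/Lucas-sequence method):

```python
# Baseline cube root in F_p when p ≡ 2 (mod 3): cubing is then a bijection
# and the root is a single exponentiation. (Ours, for intuition; the paper's
# contribution is avoiding such full-size exponentiations in F_{p^2}.)
def cube_root_mod_p(c: int, p: int) -> int:
    assert p % 3 == 2
    # 3 * ((2p - 1) // 3) = 2(p - 1) + 1 ≡ 1 (mod p - 1), so this inverts x -> x^3.
    return pow(c, (2 * p - 1) // 3, p)
```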
Last updated:  2026-02-25
Zero-Knowledge IOPPs for Constrained Interleaved Codes
Alessandro Chiesa, Giacomo Fenzi, and Guy Weissenberg
Succinct arguments based on interactive oracle proofs (IOPs) have achieved remarkable efficiency improvements and are now widely adopted in applications. State-of-the-art IOPs involve protocols for testing proximity to constrained interleaved linear codes, and enjoy essentially optimal parameters. However, recent IOP constructions provide no privacy guarantees, which remain a must for many applications. We present an IOP of proximity for testing constrained interleaved linear codes that achieves (honest-verifier) zero-knowledge, while incurring a negligible overhead compared to the (non-zero-knowledge) state of the art. In line with recent constructions, our construction satisfies round-by-round knowledge soundness with a straightline extractor and negligible error. We propose a definition of (honest-verifier) zero-knowledge for interactive oracle reductions (IORs) that we prove is compatible with composition, and then obtain our result by constructing and modularly composing several lightweight zero-knowledge IORs. Our key technical contributions are a zero-knowledge sumcheck IOR and a zero-knowledge code-switching IOR that fit the strict efficiency requirements of our setting; these contributions and other technical complications entailed overcoming several challenges with new notions and protocols. Finally, along the way, we highlight the efficiency benefits of high-distance codes obtained from dispersers, which may be of independent interest.
Last updated:  2026-02-25
Succinct Arguments for BatchQMA and Friends under 6 Rounds
Rishab Goyal, Aditya Jain, and Shashwatha Mitra GB
We study the problem of minimizing round complexity in the context of succinct classical argument systems for quantum computation. All prior works either require at least 8 rounds of interaction between the quantum prover and classical verifier, or rely on the idealized quantum random oracle model (QROM). We design: 1. A 4-round public-coin (except for the first message) argument system for batchQMA languages. Our results come in two flavors: (a) Under the post-quantum hardness of functional encryption and learning with errors (LWE), we achieve optimal communication complexity (i.e., all message sizes are independent of batch size). (b) Under the post-quantum hardness of LWE, we achieve optimal communication complexity except for the verifier’s first message. 2. A 6-round private-coin argument system for monotone policy batchQMA languages, under the post-quantum hardness of LWE. The communication complexity is independent of the batch size as well as the monotone circuit size. One of our main technical contributions is a new approach to prove soundness without rewinding cheating provers. We bring the notion of straight-line partial extractability to argument systems for quantum computation. All previous works crucially relied on rewinding cheating provers, thus needed “state-preserving” succinct arguments of knowledge (AoKs) for NP to prove soundness.
Last updated:  2026-02-25
Non-Complete Set Coverings for Higher Order Threshold Implementations
Oriol Farràs, Óscar Fidalgo, and Carlos Andres Lara-Nino
Side-channel attacks (SCAs) represent an important threat for the implementation of cryptographic algorithms. These attacks exploit the information leakage found in the physical magnitudes of hardware devices (e.g. current draw, electromagnetic emanation). Threshold Implementations (TIs) aim to mitigate SCAs by implementing a modified version of the algorithm that operates over randomized shares of its input and intermediate values. This strategy relies on the possibility of splitting the algorithm to be protected into sub-functions that satisfy certain properties about their dependence structure on the randomized shares. Non-complete set coverings (NCSCs) are combinatorial objects that can provide this dependence structure and guide the design of TIs. Given the desired order of protection $d$ and the algebraic degree $t$ of the functions to be implemented, for an NCSC to be useful, its cardinality $r$ should be small and similar to the number of input shares $s$. This work contributes to the study of NCSCs for efficient TIs by finding smaller coverings and proving novel theoretical bounds on their cardinality. We present a new NCSC for the case $t=3,d=2$ that is optimal and NCSCs for the cases $t=3,d=3$ and $t=4,d=2$ whose sizes are close to the lower bounds. We also present new combinatorial properties of these coverings and an algorithm for the search of small NCSCs.
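To make the non-completeness property concrete, here is the classic three-share, first-order threshold implementation of an AND gate (a standard textbook example, not taken from this paper): each output share is computed without touching one of the share indices, yet the output shares recombine to the correct AND.

```python
# Classic 3-share first-order TI of AND (textbook example, ours to include).
# Non-completeness: z_i never depends on input shares with index i.
from itertools import product

def ti_and(x1, x2, x3, y1, y2, y3):
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)  # never touches index 1
    z2 = (x3 & y3) ^ (x3 & y1) ^ (x1 & y3)  # never touches index 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)  # never touches index 3
    return z1, z2, z3

def check_correctness():
    # Over all 64 sharings, the output shares must XOR to x AND y.
    for bits in product((0, 1), repeat=6):
        x1, x2, x3, y1, y2, y3 = bits
        z1, z2, z3 = ti_and(*bits)
        assert z1 ^ z2 ^ z3 == (x1 ^ x2 ^ x3) & (y1 ^ y2 ^ y3)
    return True
```

The nine cross-terms $x_i y_j$ are partitioned into three groups so that each group avoids one index; an NCSC generalizes exactly this covering for higher degrees $t$ and orders $d$.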
Last updated:  2026-02-25
Have Your CKAKE and Eat it, Too: Efficient, Composable KEM-Authenticated Key Exchange
Myrto Arapinis, Christopher Battarbee, and Mina Doosti
We report on a novel authenticated key-exchange (AKE) protocol where the authentication is achieved entirely by key-encapsulation mechanisms (KEMs). Techniques to achieve AKE with KEMs have been known for some time, but have received renewed attention in a post-quantum world; in contrast to classical cryptography, the data exchanged under the NIST post-quantum KEM standard is significantly smaller than that of the signature standard. Previous KEM-authenticated AKE protocols are not known to be composable; our protocol offers similar security guarantees, plus composability, while being more efficient in terms of bandwidth than both non-composable KEM-based AKE protocols and composable signature-based AKE protocols. Our protocol features a modular design, and a full security proof in the Constructive Cryptography (CC) framework, one of the major composable security frameworks. We also prove the forward secrecy of our protocol, and introduce generic techniques to prove forward secrecy in CC, which may be of independent interest.
Last updated:  2026-02-25
Towards Accountability for Anonymous Credentials
Shailesh Mishra and Martin Burkhart
Anonymous Credentials (or ACs) enable users to prove claims with strong privacy guarantees, protecting credential holders from being tracked by issuers and verifiers. However, these privacy guarantees imply that a credential holder cannot be held accountable for misuse (e.g., selling credential checks online for proving age > 18). The lack of accountability may raise questions about the adoption of ACs into national identity systems (e.g., European EUDI or Swiss e-ID), which might lead to the issuing authorities resorting to credential systems with weaker privacy guarantees (e.g., batch issuance of one-show credentials). This shows that the lack of accountability can adversely impact the levels of privacy enjoyed by users. Hence, in this paper, we discuss transferability attacks on ACs and introduce a framework for providing accountability in AC systems. In addition to issuers, holders and verifiers, it assumes the existence of: (i) a law enforcement body (the police) and a judicial body (the judge) that work together to find information on credential misuse; and (ii) one or more digital privacy advocates, called the NGO(s), that ensure the system is not used for tracking people. We introduce the cryptographic forensic trail (CFT), which is attached to each credential show. The CFT can be used for obtaining more information about an individual if and only if the police have probable cause and can convince the judge to issue a corresponding search warrant. Then, the police, the judge, and the NGO(s) run a multiparty protocol for decrypting relevant trails only. The protocol mimics checks and balances of a healthy democracy, in which neither law enforcement nor justice can track people as they will. Even if both branches colluded, the NGO(s) can detect the misuse and block further use. In addition to possible extensions, we discuss performance constraints on mobile phones and argue that practical feasibility of the CFT is within reach.
Last updated:  2026-02-25
Concretely Efficient Fluid MPC with Linear Communication
Yubo Zeng, Kang Yang, Dengguo Feng, and Min Zhang
Traditional Secure Multi-Party Computation (MPC) requires parties to stay online through the whole computation, which compromises scalability when dealing with large-scale and complex tasks. The notion of fluid MPC, introduced by Choudhuri et al. (Crypto 2021), aims to address this challenge by presenting a dynamic participation model where parties have the flexibility to join and leave as needed. The best-known honest-majority MPC protocol by Bienstock et al. (Crypto 2023) in the fluid setting achieves linear communication complexity, but still incurs a substantially higher communication overhead than MPC in the classical setting. In this paper, we present two concretely efficient fluid MPC protocols in the honest-majority setting. The first, Velora, is an unconditionally secure maximal-fluid MPC protocol, which achieves the lowest communication cost among maximally fluid MPC protocols by introducing a new approach of transferring the output sharings held by the current committee to the next committee. To eliminate the inherent overhead of Velora, we also propose a separation-generation approach for random double sharings and integrate it into the second protocol as Ion. As a trade-off, Ion relaxes the fluidity requirement to the submaximal fluidity, allowing an extra internal communication round for each committee. Both protocols Velora and Ion enable us to extend the ATLAS technique from the classical setting to the fluid setting for further lowering communication overhead. Compared to the best-known fluid MPC protocol, our protocols reduce the communication cost per multiplication gate by a factor of 5.4–7.5× (resp., 20.7–28×) for semi-honest security (resp., malicious security). Compared to the state-of-the-art ATLAS protocol by Goyal et al. (Crypto 2021) in the classical setting, our semi-honest protocols only introduce a 1–1.5× larger communication overhead for securely computing multiplication gates.
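For background, here is a minimal sketch (ours, not the paper's protocol) of the random double sharings mentioned above: the same random value shared once under a degree-$t$ and once under a degree-$2t$ Shamir polynomial, a standard building block for honest-majority multiplication.

```python
# Toy double sharing over a prime field (illustrative; field size, share
# generation, and reconstruction are textbook Shamir, not the paper's scheme).
import random

P = 2**31 - 1  # a Mersenne prime field

def shamir_share(secret, degree, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(shares, xs):
    # Lagrange interpolation at 0.
    secret = 0
    for j, (xj, yj) in enumerate(zip(xs, shares)):
        num, den = 1, 1
        for m, xm in enumerate(xs):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

def double_sharing(n, t):
    # The same random r under degree t (needs t+1 shares) and degree 2t.
    r = random.randrange(P)
    return r, shamir_share(r, t, n), shamir_share(r, 2 * t, n)
```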
Last updated:  2026-02-25
Lightweight Sorting in Approximate Homomorphic Encryption
Lorenzo Rovida, Alberto Leporati, and Simone Basile
Sorting encrypted values is an open research problem that plays a crucial role in the broader objective of providing efficient and practical privacy-preserving online services. The current state-of-the-art work by Mazzone, Everts, Hahn and Peter (USENIX Security '25) proposes efficient algorithms for indexing and sorting based on the CKKS scheme, which deviates from the compare-and-swap paradigm typically used by sorting networks, adopting a permutation-based approach instead. In this work, we build on their work and explore different approaches to approximating the nonlinear functions required by the circuit. We propose simpler and concrete solutions that allow for faster computations, smaller memory requirements and higher precision. For example, our framework can sort 128 real elements in roughly 22 seconds, while maintaining a precision of $0.001$ and requiring 3 GB of memory. Furthermore, we propose an implementation of a swap-based bitonic network that is not based on approximations of the $\text{sgn}(x)$ function and scales linearly with the number of values, which is useful when the number of available slots is small.
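To illustrate the swap-based bitonic approach on plaintext data, here is a sketch (ours, not the paper's CKKS circuit): a bitonic sorting network whose only nonlinear primitive is compare-and-swap, written via the algebraic identities min(a,b) = (a+b-|a-b|)/2 and max(a,b) = (a+b+|a-b|)/2, the kind of expression homomorphic schemes approximate with polynomials.

```python
# Plaintext bitonic sorting network (ours, illustrative). Under CKKS, the
# abs() below would be replaced by a polynomial approximation.
def cas(v, i, j, up):
    a, b = v[i], v[j]
    lo = (a + b - abs(a - b)) / 2   # min(a, b)
    hi = (a + b + abs(a - b)) / 2   # max(a, b)
    v[i], v[j] = (lo, hi) if up else (hi, lo)

def bitonic_sort(v):
    n = len(v)                      # n must be a power of two
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    cas(v, i, partner, (i & k) == 0)
            j //= 2
        k *= 2
    return v
```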
Last updated:  2026-02-25
Learning with Errors with Output Dependencies: LWE, LWR, and Physical Learning Problems under the Same Umbrella
Clément Hoffmann, Pierrick Méaux, Mélissa Rossi, and François-Xavier Standaert
Learning problems have become a foundational element for constructing quantum-resistant cryptographic schemes, finding broad application even beyond, such as in Fully Homomorphic Encryption. The increasing complexity of this field, marked by the rise of physical learning problems due to research into side-channel leakage and secure hardware implementations, underscores the urgent need for a more comprehensive analytical framework capable of encompassing these diverse variants. In response, we introduce Learning With Errors with Output Dependencies (LWE-OD), a novel learning problem defined by an error distribution that depends on the inner product value and therefore on the key. LWE-OD instances are remarkably versatile, generalizing both established theoretical problems like Learning With Errors (LWE) or Learning With Rounding (LWR), and emerging physical problems such as Learning With Physical Rounding (LWPR). Our core contribution is establishing a reduction from LWE-OD to LWE. This is accomplished by leveraging an intermediate problem, denoted qLWE. Our reduction follows a two-step, simulator-based approach, yielding explicit conditions that guarantee LWE-OD is at least as computationally hard as LWE. While this theorem provides a valuable reduction, it also highlights a crucial distinction among reductions: those that allow explicit calculation of target distributions versus weaker ones with conditional results. To further demonstrate the utility of our framework, we offer new proofs for existing results, specifically the reduction from LWR to LWE and from Learning Parity with Noise with Output Dependencies (LPN-OD) to LPN. This new reduction opens the door for a potential reduction from LWPR to LWE.
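A small numeric illustration (ours) of why LWR fits the output-dependent-error view described above: the implicit "error" of an LWR sample is a deterministic function of the inner product itself, rather than independently sampled noise as in LWE.

```python
# LWR sample generation (illustrative parameters, ours): the rounding error
# depends only on <a, s> mod q -- an output-dependent error in the LWE-OD sense.
import random

q, p_mod, n = 4096, 16, 8  # LWR rounds Z_q down to Z_{p_mod}; p_mod | q

def lwr_sample(s):
    a = [random.randrange(q) for _ in range(n)]
    ip = sum(x * y for x, y in zip(a, s)) % q
    b = (ip * p_mod) // q              # rounded inner product, in Z_{p_mod}
    err = ip - (b * q) // p_mod        # implicit error: a function of ip alone
    return a, b, ip, err
```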
Last updated:  2026-02-25
Hardness of M-LWE with General Distributions and Applications to Leaky Variants
Katharina Boudgoust, Corentin Jeudy, Erkan Tairi, and Weiqiang Wen
The Module Learning With Errors (M-LWE) problem has become a fundamental hardness assumption for lattice-based cryptography. It offers an attractive trade-off between strong robustness guarantees, sometimes directly based on worst-case lattice problems, and efficiency of the subsequent cryptographic primitives. Different flavors of M-LWE have then been introduced towards improving performance. Such variants look at different secret-error distributions and might allow for additional hints on the secret-error vector. Existing hardness results however only cover restricted classes of said distributions, or are tailored to specific leakage models. This lack of generality hinders the design of efficient and versatile cryptographic schemes, as each new distribution or leakage model requires a separate and nontrivial hardness evaluation. In this work, we address this limitation by establishing the hardness of M-LWE under general distributions. As a first step, we show that M-LWE remains hard when the error vector follows an arbitrary bounded distribution with sufficient entropy, with some restriction on the number of samples. Building on this, we then reduce to the Hermite Normal Form (HNF) where the secret-error vector follows said arbitrary distribution. Overall, our result shows the actual shape of the distribution does not matter, as long as it keeps sufficient entropy. To demonstrate the versatility of our framework, we further analyze a range of leakage scenarios. By examining the residual entropy given the leakage, we show that our results of M-LWE with general distributions encompass various types of leakage. More precisely, we cover exact and approximate linear hints which are widely used in recent cryptographic designs, as well as quadratic, and even non-algebraic forms, some of which were not yet covered by any theoretical hardness guarantees. The generality of our results aims at facilitating future cryptographic designs and security analyses.
Last updated:  2026-02-25
Necessary and Sufficient Conditions for the Existence of Ideal Linear Secret Sharing Schemes for Arbitrary Access Structures
Zheng Chen, Qiuxia Xu, and Chunming Tang
Determining whether an arbitrary access structure can be realized by an ideal linear secret sharing scheme is an important research topic. We use linear codes as the main tool to construct matrices $H$ and $G$ over a finite field $\mathbb{F}_q$ for a given access structure $\Gamma_{\min}$, and show that a necessary and sufficient condition for the existence of an ideal linear secret sharing scheme realizing $\Gamma_{\min}$ is that the equation $GH^{\mathsf{T}}=0$ has a solution. If this equation has a solution, then $H$ serves as the parity-check matrix of a linear code that realizes $\Gamma_{\min}$, and $G$ is the corresponding generator matrix. Furthermore, we prove that this result is equivalent to the following statement: there exists an ideal linear code realizing $\Gamma_{\min}$ if and only if $\Gamma_{\min}$ is the port of a matroid that is representable over $\mathbb{F}_q$.
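A concrete toy instance (ours) of the condition $GH^{\mathsf{T}}=0$: for a systematic generator matrix $G=[I\,|\,A]$ over $\mathbb{F}_q$, the standard parity-check matrix $H=[-A^{\mathsf{T}}\,|\,I]$ is a solution, as a few lines of Python confirm.

```python
# Toy [4,2] code over F_5 (ours, illustrative): verify G H^T = 0.
Q = 5
A = [[1, 1],
     [1, 2]]
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]                                 # G = [I_2 | A]
H = [[(-A[0][0]) % Q, (-A[1][0]) % Q, 1, 0],       # H = [-A^T | I_2]
     [(-A[0][1]) % Q, (-A[1][1]) % Q, 0, 1]]

def matmul_T(X, Y, q):
    # Compute X * Y^T over F_q.
    return [[sum(xi * yi for xi, yi in zip(xr, yr)) % q for yr in Y]
            for xr in X]
```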
Last updated:  2026-02-25
Embedding belief propagation within a multi-task learning model: An example on Kyber's NTT
Thomas Marquet and Elisabeth Oswald
Domain-informed deep learning integrates specialized knowledge into deep neural networks through modifications to inputs, architectures, loss functions, or training processes. Such an approach is usually employed to improve model performance in low/noisy data regimes via domain-specific priors. Side-channel analysis is such a domain: even though plain deep learning has become a powerful tool for profiled side-channel attacks, it still struggles on more complex datasets because of weak signals distributed across many datapoints. Initial attempts at integrating specific knowledge into models have been extremely successful at pushing the boundaries of what can be achieved with deep learning. For example, integrating knowledge about the masking scheme, as in Masure et al., provides significant improvements in decreasing the profiling complexity. Marquet and Oswald went further by leveraging redundant features across multiple tasks, such as shared randomness or a common masking flow. In this work, we continue this line of work by using belief propagation on a larger graph to guide the learning. We introduce a multi-task learning model that explicitly integrates a factor graph reflecting the algebraic dependencies among intermediates in the computation of Kyber's inverse Number Theoretic Transform (iNTT). Such a framework allows the model to learn a joint representation of the related tasks that is mutually beneficial. For the first time, we show that one can perform belief propagation during training even without access to the internal randomness on the masked shares, potentially improving the performance of the attack greatly.
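For readers unfamiliar with the transform, here is a toy cyclic NTT (ours, far smaller than Kyber's negacyclic transform over $\mathbb{Z}_{3329}$). Fast implementations decompose this sum into layers of butterflies, and it is the algebraic dependencies among those intermediate values that a factor graph can encode.

```python
# Toy cyclic NTT and inverse over Z_17 with n = 8 (ours, illustrative).
# 2 has multiplicative order 8 mod 17, so it is a primitive 8th root of unity.
Q, N, W = 17, 8, 2

def ntt(a):
    return [sum(a[j] * pow(W, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def intt(b):
    w_inv = pow(W, -1, Q)   # inverse root of unity
    n_inv = pow(N, -1, Q)   # 1/n mod Q
    return [(n_inv * sum(b[j] * pow(w_inv, i * j, Q) for j in range(N))) % Q
            for i in range(N)]
```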
Last updated:  2026-02-25
Interstellar: Efficient GKR-based IVC Scheme with Privacy-Preserving Collaborative Folding
Jieyi Long
In this paper, we present Interstellar, a novel folding and IVC framework built on a technique we call circuit interpolation, tailored for circuit satisfiability. By incorporating the GKR protocol, our approach avoids commitments to full computation traces and cross-term vectors, requiring instead only commitments to the actual circuit witness and optionally a small subset of intermediate gate values. This substantially reduces the size of the vectors to be committed to in each folding round, which is highly advantageous compared to existing schemes, as vector commitments involve expensive multi-scalar multiplications. We evaluate Interstellar using two representative workloads: products of matrices, which model highly parallel computation, and MiMC hash chains, which capture highly serialized computation. Our analysis shows that Interstellar achieves 1.59x to 6.74x prover speedup per folding round for matrix multiplication and up to 2.93x speedups for MiMC chains compared to alternative methods. Beyond efficiency, Interstellar is highly flexible. It can be extended to support high-degree and lookup gates, multi-instance folding, and non-uniform IVC, making it well-suited for practical applications such as zkML and zkVM. Finally, we formalize, for the first time, a new notion called collaborative folding/IVC, which allows folding/IVC proof generation with multiple provers holding disjoint private witnesses for the same public statement. This new primitive enables distributed IVC proof generation while preserving witness privacy across folding rounds, making Interstellar a compelling fit for distributed and privacy-sensitive applications.
Last updated:  2026-02-25
On the Complexity of Interactive Arguments
Idan Baril and Iftach Haitner
We study the minimal hardness assumptions required for constructing interactive arguments for NP, focusing on succinct arguments, where the prover’s total communication is smaller than the witness size, and on relaxed forms of zero knowledge, such as witness indistinguishability. Known constructions of succinct arguments rely on various types of collision-resistant hash functions, indistinguishability obfuscation, hardness of discrete logarithm, and lattice-based assumptions, while known constructions of witness-indistinguishable arguments require one-way functions (OWFs). This suggests that succinct witness-indistinguishable interactive arguments require OWFs, or equivalently, that the existence of such arguments implies that of OWFs. Nevertheless, we prove that, at least as far as fully black-box reductions are concerned, interactive arguments do not imply OWFs. Specifically, we consider assumption-dependent fully black-box reductions from OWFs to succinct witness-indistinguishable interactive arguments and an additional hardness assumption $G$ (e.g., $\mathsf{NP} \not\subseteq \mathsf{P}/\mathrm{poly}$). Such a reduction is a pair $(f, R)$ of an (oracle-aided) function $f$ and algorithm $R$ such that, for any witness-indistinguishable interactive argument $\Pi = (P, V)$ and any inverter $\mathsf{Inv}$ of $f^{\Pi}$, the algorithm $R^{\Pi,\mathsf{Inv}}$ breaks either the soundness of $\Pi$ or the assumption $G$. We prove that the existence of such a reduction implies a black-box reduction from OWFs to $G$ alone. Namely, beyond what is implied by $G$, succinct witness-indistinguishable interactive arguments have no black-box implications for the existence of OWFs.
Last updated:  2026-02-25
A Comprehensive Break of the Tropical Matrix-Based Signature Scheme
Sopan Chavhan and Shrikant Chaudhari
The recent digital signature scheme proposed by Grigoriev, Monico, and Shpilrain (GMS26) attempts to leverage the NP-hardness of tropical matrix factorization as its security foundation. We present a systematic cryptanalysis revealing multiple structural vulnerabilities. First, we demonstrate an existential forgery attack in the chosen-hash model requiring only polynomial-time operations. Second, we establish that the scheme is fundamentally malleable, allowing an adversary to generate arbitrarily many distinct valid signatures for any observed message—thereby breaking strong unforgeability. Third, we show that collecting a polynomial number of honest signatures enables partial key recovery through probabilistic leakage of private key entries. Finally, we present an SMT-based key recovery attack that, given only two valid signatures, recovers the full private key in seconds for the recommended parameters. Our attacks do not rely on solving the claimed NP-hard problem but exploit algebraic properties of the signing equations. Combined, they demonstrate that the scheme fails to achieve the standard security notions of existential unforgeability, strong unforgeability, and key confidentiality.
Last updated:  2026-02-25
Investigating the Wedge Map on SNOVA
Po-En Tseng, Lih-Chung Wang, Peigen Li, and Yen-Liang Kuan
This paper investigates the algebraic structure of SNOVA, a NIST PQC Round 2 candidate, with a specific focus on the kernel dimension of the wedge map. We employ lifting techniques to transform the public key from the matrix ring $\mathbb{F}_q^{l \times l}$ to an equivalent representation over the extension field $\mathbb{F}_{q^l}$, establishing that the rank of the wedge map is a structural invariant. A key contribution of this work is the derivation of a generating function that explicitly characterizes the wedge map's kernel dimension. This algebraic analysis provides a rigorous understanding of SNOVA's geometry, verifying the safety of specific parameter sets and enabling future refined adjustments to guarantee structural security.
Last updated:  2026-02-25
Improved Integral Attack on ChiLow-32 Exploiting the Inverse of the ChiChi Function
Akram Khalesi, Zahra Ahmadian, and Hosein Hadipour
The protection of executable code in embedded systems requires efficient mechanisms that ensure confidentiality and integrity. Belkheyar et al. recently proposed the Authenticated Code Encryption (ACE) framework, with ChiLow-(32 + $\tau$) as the first instantiation of ACE2 at EUROCRYPT 2025. The design of ChiLow-(32 + $\tau$) is based on a 32-bit tweakable block cipher with a quadratic nonlinear layer, known as ChiChi (denoted by $\chi\!\!\chi$), and a nested tweak key schedule optimized for secure code execution under strict query limits. In this work, we study the resistance of ChiLow to integral cryptanalysis. We identify new integral distinguishers in both the single-tweak and related-tweak models. Using these results and a nested strategy to recover all round tweaks, we present a key-recovery attack on 7 out of 8 rounds of ChiLow. The central contribution of our work is that it resolves the challenge of deriving the master key from the recovered round tweaks, an open problem highlighted by the designers and in a recent cryptanalysis by Peng et al. The attack on 7 rounds requires $2^{6.32}$ chosen ciphertexts, has a time complexity of about $2^{121.75}$ encryptions, and requires negligible memory.
Last updated:  2026-02-25
Neural-Inspired Advances in Integral Cryptanalysis
Liu Zhang, Yiran Yao, Danping Shi, Dongchen Chai, Jian Guo, and Zilong Wang
The studies by Gohr et al. at Crypto 2019 and subsequent related works have demonstrated that neural networks can offer new insights for cryptanalysis. Building on this insight, we leverage neural networks to learn features closely associated with integral properties and use neural network-derived distinguishers as a benchmark for the automatic search. This approach not only confirms the value of deep learning in feature discovery but also shows that neural-guided insights can improve the performance of classical cryptanalytic methods. Neural networks motivate the development of a more effective framework for identifying integral distinguishers. Comparative results show that existing automated methods, limited by search efficiency and the large search space, often fail to locate optimal distinguishers. In contrast, neural networks can directly identify high-quality integral distinguishers and reveal associated non-random structural features that are typically missed by traditional approaches. Based on these neural-network results, we refine the meet-in-the-middle search framework, thereby improving the trade-off between accuracy and computational cost. Notably, under the assumption of full-state key XOR with independent round keys, the refined framework achieves the known theoretical upper bound for key-independent integral distinguishers on non-standard SKINNY. Integral distinguishers identified by neural networks, whose underlying non-random features are successfully interpreted through Boolean function analysis, are translated into classical forms, enhancing integral key-recovery attacks. In particular, we propose a 16-round key-recovery attack on SKINNY-n-n based on a general integral key-recovery approach, improving the best-known result by two rounds under the single-tweakey setting. Furthermore, we present key-recovery attacks on 18-round SKINNY-n-2n and 20-round SKINNY-n-3n using integral distinguishers under the single-tweakey setting for the first time.
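As a minimal reminder of what an integral distinguisher asserts (a textbook example, ours, not taken from this paper): running all 16 values of an "active" nibble through a keyed bijective 4-bit S-box yields outputs whose XOR-sum is zero, a balanced position that distinguisher-search tools look for after many rounds.

```python
# One keyed S-box layer (PRESENT's 4-bit S-box) over an active nibble:
# since x ^ k ranges over all 16 values and the S-box is a permutation,
# the XOR-sum of outputs is the XOR of 0..15, i.e. 0.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def xor_sum_over_active_nibble(key_nibble):
    acc = 0
    for x in range(16):
        acc ^= SBOX[x ^ key_nibble]
    return acc
```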
Last updated:  2026-02-25
Determining those Boolean functions whose restrictions to affine spaces are plateaued
Claude Carlet and Darrion Thornburgh
Quadratic Boolean functions (that is, Boolean functions of algebraic degree at most 2), bent Boolean functions (i.e. maximally nonlinear Boolean functions in even numbers of variables) and, as we prove in this paper, partially-bent Boolean functions (i.e. affine extensions of bent functions to linear super-spaces), share a strong property: all their restrictions to affine hyperplanes are plateaued (i.e. have a Walsh transform valued in a set of the form $\{0,\pm \lambda\}$, where $\lambda$ is a positive integer called the amplitude). In this paper we determine for any $n$ and $k<n$ the class $C^n_k$ of those $n$-variable Boolean functions whose restrictions to all $k$-dimensional affine subspaces of $\mathbb F_2^n$ are plateaued (of any amplitude). We characterize partially-bent (resp., quadratic) Boolean functions as those functions that are plateaued on any affine hyperplane (resp., any affine subspace of dimension $k$, where $3 \leq k \leq n-2$, while these are all Boolean functions for $0\leq k\leq 2$). For $n\geq 5$, each of the following classes of Boolean functions happens then to be strictly included in the next one: quadratic functions, partially-bent functions, the restrictions of partially-bent functions to affine hyperplanes, plateaued functions, the restrictions of plateaued functions to affine hyperplanes, and all Boolean functions. We leave open the two problems of determining exactly what are the third and fifth of these classes (we begin the study of the first of these two classes by giving a non-trivial characterization). Our characterization of partially-bent (resp., quadratic) functions extends to strongly plateaued vectorial functions. We state an open question on vectorial functions that happens to be related to an important one on crooked functions.
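A quick numeric check (ours) of the plateaued property for a quadratic function: the Walsh spectrum of $x_0x_1 \oplus x_2x_3$ in four variables takes values in $\{0,\pm\lambda\}$ (this particular function is bent, so $|W_f| = 4$ at every point).

```python
# Walsh spectrum of the quadratic (indeed bent) function x0x1 + x2x3 over F_2^4.
N = 4  # number of variables

def f(x):
    b = [(x >> i) & 1 for i in range(N)]
    return (b[0] & b[1]) ^ (b[2] & b[3])

def walsh(u):
    # W_f(u) = sum_x (-1)^(f(x) + <u, x>)
    return sum((-1) ** (f(x) ^ bin(x & u).count("1") % 2)
               for x in range(2 ** N))

spectrum = {abs(walsh(u)) for u in range(2 ** N)}
```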
Last updated:  2026-02-25
$\mathsf{Spectra}$: Interval-Agnostic Vector Range Argument for Unstructured Range Assertions
Hao Gao, Qianhong Wu, Bo Qin, Fudong Wu, Zhenyang Ding, and Zhiguo Wan
A structured vector range argument proves that a committed vector $\mathbf{v}$ lies in a well-structured range of the form $[0,2^d-1]$. This structure makes the protocol extremely efficient, although it cannot handle more sophisticated range assertions, such as those arising from non-membership attestations. To address this gap, we study a more general setting not captured by prior constructions. In this setting, for each $i$, the admissible integer set for $v_i$ is a union of $k$ intervals $\mathsf{R}_i \overset{\text{def}}{=} \bigcup_{j=0}^{k-1}\left[l_{i,j},r_{i,j}\right]$. In this work, we present novel techniques to prove that $\mathbf{v} \in \mathbb{Z}^n_p$ lies within $\mathsf{R}_0 \times \mathsf{R}_1 \times \cdots \times \mathsf{R}_{n-1}$. We first introduce $\mathsf{RangeLift}$, a generic compiler that lifts a structured vector range argument to support such unstructured range assertions. Then we present $\mathsf{Spectra}$, a realization of $\mathsf{RangeLift}$ over the $\mathsf{KZG}$-based vector commitment scheme. $\mathsf{Spectra}$ achieves succinct communication and verifier time; its prover complexity is $O(n\,\tfrac{\log N}{\log\log N}\cdot \log(n\tfrac{\log N}{\log\log N}))$, where $N$ upper bounds the maximum interval size across all $\mathsf{R}_i$. Notably, $\mathsf{Spectra}$ is interval-agnostic, meaning its prover complexity is independent of the number of intervals $k$; therefore, its prover cost matches the single-interval case even when each $\mathsf{R}_i$ is composed of hundreds of thousands of intervals. We also obtain two new structured vector range arguments and a batching-friendly variant of the $\mathsf{Cq}^{+}$ lookup argument (PKC'24), which are also of independent interest. Experiments show that $\mathsf{Spectra}$ outperforms well-known curve-based vector range arguments on standard metrics while supporting strictly more expressive range assertions.
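For contrast with the unstructured setting the paper targets, the arithmetic identity behind a structured range argument is simple (plain-Python sketch, ours): $v \in [0, 2^d-1]$ iff it admits a $d$-digit decomposition with boolean digits, each satisfying $b(b-1)=0$, that recomposes to $v$.

```python
# The witness relation behind a structured [0, 2^d - 1] range argument
# (illustrative; real arguments prove these identities over commitments).
def range_witness(v, d):
    bits = [(v >> i) & 1 for i in range(d)]
    ok = all(b * (b - 1) == 0 for b in bits)             # booleanity
    ok = ok and sum(b << i for i, b in enumerate(bits)) == v  # recomposition
    return ok and 0 <= v < 2 ** d
```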
Last updated:  2026-02-25
On the Security of Linear Secret Sharing with General Noisy Side-Channel Leakage
Utkarsh Gupta and Hessam Mahdavifar
Secret sharing is a foundational cryptographic primitive for sharing keys in distributed systems. In a classical $(n,t)$-threshold setting, it involves a dealer who has a secret, a set of $n$ users to whom shares of the secret are sent, and a threshold $t$ which is the minimum number of shares required to recover the secret. These schemes offer an all-or-nothing security approach where fewer than $t$ shares reveal no information about the secret. But these guarantees are threatened by side-channel attacks which can leak partial information from each share. Initiated by Benhamouda et al. (Crypto'18), the security of linear secret sharing schemes has been studied for bounded leakage attack models, which assume that the adversary can leak bounded functions of each share. However, this model does not reflect real-world attacks, as physical side-channels are inherently noisy. The $\delta$-noisy channel model, proposed by Prouff and Rivain (Eurocrypt'13), is a general leakage framework which captures the noisy behavior of side-channels. In this work, we study the security of linear secret sharing schemes with $\delta$-noisy leakage, and show bounds on the mutual information (MI) and statistical bias ($\Delta^{\mathrm{TV}}$) security metrics. Our results are based on the Fourier analytical framework, first used by Benhamouda et al. (Crypto'18), adapted to the $\delta$-noisy leakage model. To give security bounds, we introduce a security parameter $\eta\le \min\{1,2\delta\}$, determined by the Fourier linear biases of the posterior distributions of the leaked secret shares. Then, the Poisson summation formula enables us to bound the ratio between the observed leakage for some given secret, and leakage under independence as $(1\pm \eta^t)$.
This is then used to show a) $(n,t \ge \tau (n+1))$-threshold schemes over $\mathbb{F}_q$ have at most $\mathcal{O}(q^{-t(\gamma+1-1/\tau)})$ leakage, given $\eta \le q^{-\gamma}$; and consequently b) for $(n,n)$-threshold schemes the guessing advantage is at most $(q-1) \cdot \eta^n \le (q-1)\cdot (2 \delta)^n$. This work can be viewed as a next step towards closing the gap between theory and practice in leakage resilient cryptography.
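The final guessing-advantage bound $(q-1)\cdot(2\delta)^n$ is simple arithmetic; a minimal numeric sketch (the parameter values below are our own, purely illustrative choices, not from the paper):

```python
# Numeric sketch of the (n,n)-threshold guessing-advantage bound
# (q - 1) * (2*delta)^n; parameter values are illustrative only.

def guessing_advantage_bound(q, delta, n):
    """Upper bound on the adversary's guessing advantage for an
    (n,n)-threshold scheme over F_q with noise parameter delta."""
    return (q - 1) * (2 * delta) ** n

# The bound decays exponentially in the number of shares n:
for n in (4, 8, 16):
    print(n, guessing_advantage_bound(q=256, delta=0.1, n=n))
```

This makes the exponential decay in the number of shares visible at a glance: doubling $n$ squares the $(2\delta)^n$ factor.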
Last updated:  2026-02-24
Distributed Monotone-Policy Encryption for DNFs from Lattices
Jeffrey Champion and David J. Wu
Distributed monotone-policy encryption augments public-key encryption with fine-grained decryption capabilities in a trustless manner. In this scheme, users independently generate a public/private key-pair and post their public key to a public-key directory. Thereafter, anyone can encrypt a message to a set of public keys together with an access policy. Any set of users that satisfies the access policy can decrypt the ciphertext while the message should remain computationally hidden to any unsatisfying set of users. The primary efficiency requirement is succinctness: namely, the size of the ciphertext should be sublinear (or polylogarithmic) in the description length of the policy. Distributed monotone-policy encryption directly generalizes recent trustless cryptographic notions like threshold encryption with silent setup and distributed broadcast encryption. In this work, we show how to construct distributed monotone-policy encryption for Boolean formulas in disjunctive normal form (DNF formulas) that supports an unbounded number of users. Security relies on the decomposed learning with errors (LWE) assumption, a simple and falsifiable lattice assumption, in the random oracle model. Previously, such a scheme was only known from plain witness encryption in the random oracle model. Our scheme has a transparent setup and the ciphertext size is $\mathsf{poly}(\lambda, \log N)$, where $N$ is the number of variables in the DNF formula.
Last updated:  2026-02-24
Migrating Bitcoin and Ethereum Addresses to the Quantum Blockchain Era
Mehmet Sabir Kiraz and Suleyman Kardas
Recent advances in quantum computing threaten the cryptographic foundations of blockchain systems, including Bitcoin and Ethereum, which rely on elliptic-curve cryptography (ECC) for security. Algorithms such as Shor's algorithm can efficiently solve the discrete logarithm problem (DLP), enabling recovery of private keys from public keys. Existing funds, especially those tied to long-lived addresses or unspent coinbase outputs (such as Satoshi Nakamoto's bitcoins), and Ethereum externally owned accounts become vulnerable once large-scale quantum computers become available. While previous work has suggested post-quantum signature schemes and migration strategies, no widely deployed, end-to-end, backward-compatible, and privacy-preserving migration mechanism has been presented for migrating legacy funds without revealing the corresponding classical public keys on-chain. In this paper, we present a complete framework for secure migration of both spent and unspent Bitcoin and Ethereum assets to a post-quantum (PQ) security model, using a hybrid approach based on post-quantum signatures and quantum-resistant zero-knowledge proofs (ZKPs). We design zkSTARK circuits that prove knowledge of a witness linking a legacy Bitcoin or Ethereum address to a fresh PQ public key without disclosing the legacy elliptic-curve public key on-chain. We also formalize a one-way post-quantum transition model for migrated assets: legacy authorization is used only at enrollment, while future authorization semantics are governed by post-quantum credentials and migrated-state registry semantics. We further show why hash-security margins must be re-evaluated in the quantum setting by distinguishing collision resistance (BHT-style attacks, approximately $2^{n/3}$) from preimage resistance (Grover-style attacks, approximately $2^{n/2}$), and we motivate hash agility for migration-era commitments and registries. 
To enable verifiable on-chain transitions, we propose new primitives (\texttt{OP\_CHECKQUANTUMSIG}, \texttt{OP\_CHECKSTARKPROOF}), enabling verification of quantum-safe proofs and signatures. Our work and implementation\footnote{\href{https://github.com/skardas/pq_bitcoin}{\texttt{github.com/skardas/pq\_bitcoin}}} provide a practical framework for securing legacy blockchain assets against quantum-era threats while preserving backward compatibility and operational continuity.
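The hash-margin re-evaluation the authors call for amounts to comparing the $2^{n/3}$ (BHT-style collision) and $2^{n/2}$ (Grover-style preimage) exponents for each digest size. A back-of-the-envelope sketch (our own, with a hypothetical helper name):

```python
# Quantum security margins for an n-bit hash, per the distinction above:
# collision resistance ~2^(n/3) (BHT-style), preimage ~2^(n/2) (Grover-style).

def quantum_margins(n_bits):
    """Effective security exponents (in bits) for an n-bit hash output."""
    return {"collision_bits": n_bits / 3, "preimage_bits": n_bits / 2}

for n in (256, 384, 512):
    m = quantum_margins(n)
    print(f"{n}-bit hash: collision ~2^{m['collision_bits']:.0f}, "
          f"preimage ~2^{m['preimage_bits']:.0f}")
```

For example, a 256-bit digest retains only about 85 bits of collision resistance in this model, which motivates the hash agility the paper argues for in migration-era commitments.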
Last updated:  2026-02-24
Conditionally Input-Revealing 2PC and Fuzzy Password-Authenticated Key Exchange
David Richardson, Mike Rosulek, and Jiayu Xu
Yao's famous protocol for secure 2-party computation, based on garbled circuits, is well-known to be insecure against an actively corrupt garbler. We introduce a new and extremely simple variant of Yao's protocol that is fully secure against active adversaries, for a certain class of functions that we call conditionally input-revealing. We then show how to use this new protocol as the basis for fuzzy password authenticated key exchange (fuzzy PAKE). In fuzzy PAKE, two parties each hold a low-entropy secret (e.g., a password), and they interact to obtain a secure high-entropy key if and only if the passwords are sufficiently ``close.'' Our new fuzzy PAKE protocol supports completely arbitrary predicates for password ``closeness''. Compared to prior fuzzy PAKE protocols, ours is roughly $2\times$ cheaper in communication, computation, and round complexity.
Last updated:  2026-02-24
The Structured Generic-Group Model
Henry Corrigan-Gibbs, Alexandra Henzinger, and David J. Wu
This paper introduces the structured generic-group model, an extension of Shoup’s generic-group model (from Eurocrypt 1997) to capture algorithms that take advantage of some non-generic structure of the group. We show that any discrete-log algorithm in a group of prime order $q$ that exploits the structure of at most a $\delta$ fraction of group elements, in a way that we precisely define, must run in time $\Omega(\min(\sqrt{q},1/\delta))$. As an application, we prove a tight subexponential-time lower bound against discrete-log algorithms that exploit the multiplicative structure of smooth integers, but that are otherwise generic. This lower bound applies to a broad class of index-calculus algorithms. We prove similar lower bounds against algorithms that exploit the structure of small integers, smooth polynomials, and elliptic-curve points.
Last updated:  2026-02-24
Deep Neural Cryptography
David Gerault, Anna Hambitzer, Eyal Ronen, and Adi Shamir
The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or to embed a secret backdoor which alters the DNN's normal operation). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises two fundamental questions: how we can implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose ``bits'' are arbitrary real numbers. In this paper we lay the foundations of this new area of research, defining the meaning of correctness and security for implementations of cryptographic primitives as ReLU-based DNNs. We then show that the natural implementations of block ciphers as DNNs can be broken in linear time by using such nonstandard inputs (this attack was experimentally tested in the case of full-round AES-128, and had a 100% success rate). Finally, we develop a new method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way. Our novel protective technique has very low overhead (a constant number of additional layers and a linear number of additional neurons), and is completely practical.
Last updated:  2026-02-24
Distributed Monotone-Policy Encryption with Silent Setup from Lattices
Abtin Afshar, Rishab Goyal, and Saikumar Yadugiri
Distributed cryptography serves as a cornerstone for building trustless systems, yet existing solutions for distributed policy encryption typically require either a trusted dealer or complex interactive setup protocols. Recent advances have introduced the concept of ``silent setup'', where users independently generate keys and a joint public key is derived deterministically. However, all current silent-setup constructions rely on bilinear pairings, leaving them vulnerable to quantum adversaries. In this work, we present the first distributed monotone-policy encryption scheme with silent setup based on lattices. Our construction supports arbitrary monotone policies represented as disjunctive normal form (DNF) formulas and provides a robust, post-quantum foundation for decentralized access control.
Last updated:  2026-02-24
HCTR$^{++}$ : A Beyond Birthday Bound Secure HCTR2 Variant
Gülnihal Öztürk, Onur Koçak, and Oğuz Yayla
Current industry-standard block cipher modes of operation, such as CBC and GCM, are fundamentally limited by the birthday bound $O(2^{n/2})$, a constraint that has evolved from a theoretical concern into a practical security bottleneck in contemporary high-throughput, high-data-volume environments. To address this, the cryptographic community and NIST are prioritizing Beyond Birthday Bound (BBB) security to extend the operational security margin toward the full block size $O(2^n)$. Achieving BBB security requires a departure from traditional constructions, primarily utilizing three methodologies: XOR of Permutations (XORP), Tweakable Block Ciphers (TBCs), and Fresh Rekeying. While none of these innovative BBB modes have been formally standardized, NIST has initiated the Accordion Mode project, defining a new primitive class: the Tweakable Variable-Input-Length Strong Pseudorandom Permutation (VIL-SPRP). This primitive treats the entire message as a single, indivisible block and expects the submission of BBB-secure variants. To contribute to this standardization effort, we propose a simple BBB-secure variant of the HCTR2 algorithm. We first explain the core BBB methodologies, then discuss the operational mechanism of HCTR2, and finally present our proposed BBB-secure construction.
Last updated:  2026-02-24
Full Anonymity in the Asynchronous Setting from Peony Onion Encryption
Megumi Ando, Miranda Christ, Kashvi Gupta, Tal Malkin, and Dane Smith
Onion routing is a popular and practical approach to anonymous communication, and the subject of a growing body of work on designing efficient schemes with provable anonymity, the strongest notion being full anonymity. However, all prior schemes achieving full anonymity assume a synchronous communication setting, which is unrealistic since real networks suffer from message loss and timing attacks. Recently, Ando, Lysyanskaya, and Upfal (TCC'24) made progress toward the asynchronous setting by introducing bruisable onion encryption and constructing an efficient protocol with the weaker guarantee of differential privacy. In this paper, we construct the first efficient fully anonymous onion routing protocol in the asynchronous setting. To do so, we address two main challenges: (1) we develop the first bruisable onion encryption scheme that does not leak information about the onion's position on the routing path, and (2) we design an onion routing protocol using this primitive to achieve full anonymity, rather than only differential privacy. Along the way, we also give a new fully anonymous protocol in the synchronous setting, improving the state-of-the-art in both communication and round complexity. Both protocols are secure against an active adversary that corrupts a constant fraction of nodes (any constant fraction $< 1$ in the synchronous case, and $< 1/2$ in the asynchronous case), and rely on standard cryptographic assumptions (CCA-secure public-key encryption and collision-resistant hash functions).
Last updated:  2026-02-24
Putting Multi into Multi-Signatures: Tight Security for Multiple Signers
Anja Lehmann and Cavit Özbay
Multi-signatures enable multiple parties to create a joint signature on the same message. Such schemes aggregate several individual signatures and public keys into a short signature and aggregated public key, and verification is performed on these combined values. Interestingly, all existing notions of unforgeability for multi-signatures are designed with a single honest user in mind, overlooking the multi-user setting that multi-signatures are built for. While multi-user security can be bootstrapped from any single-user secure scheme, the straightforward adoption implies a security loss that is linear in the number of signers $n$. In this work we therefore start the investigation of multi-signatures with tight multi-user security. We show that none of the existing multi-signatures with tight single-user security seems amenable to the multi-user setting, as all their proofs and design choices exploit the fact that there is only a single honest user. Based on this insight, we then propose two new constructions built from scratch with multi-user security in mind: Skewer-NI, a non-interactive and pairing-based scheme, and Skewer-PF, a pairing-free and two-round construction. We prove both schemes tightly secure under the DDH assumption in the ROM. Both schemes also improve the state-of-the-art in another aspect: they support the feature of key aggregation. Skewer-NI is the first non-interactive tightly secure multi-signature with this feature. In the pairing-free two-round setting, Skewer-PF is the first to combine tight multi-user security with key aggregation, where the only prior result, due to Bacho and Wagner (CRYPTO'25), achieved aggregation but only in the single-user case.
Last updated:  2026-02-24
Multi-key Security in the Quantum World: Revisiting Tweakable Even-Mansour and FX
Rentaro Shiba and Tetsu Iwata
In this paper, we prove the security of symmetric-key constructions in an adversary model called the Q1MK model, which combines the Q1 model, where the adversary makes classical online queries and quantum offline queries, and the multi-key (multi-user) setting. Specifically, under this model, we prove the security of two symmetric-key constructions: the tweakable Even-Mansour cipher (TEM) and the FX construction (FX), as starting points for understanding the post-quantum security of symmetric-key constructions in this adversary model. Our security proofs are based on the hybrid argument technique introduced by Alagic et al. at EUROCRYPT 2022. First, we prove that in order to break TEM in the Q1MK model, $\Omega(2^{\kappa/3})$ classical and quantum queries are needed, regardless of the number of target $\kappa$-bit keys. Then, before turning to the Q1MK security analysis of FX, we revisit the security proof of FX in the standard Q1 model proposed in version 20230317:200508 of ePrint 2022/1097 and tighten it. By the modified proof, we show that in order to break FX with $(\kappa + n)$-bit secret key in the Q1 model, $\Omega(2^{(\kappa+n)/3})$ classical and quantum queries are needed. We then apply this analysis to the Q1MK setting, and we show that in order to break FX in the Q1MK model, $\Omega(2^{(\kappa + n - u)/3})$ classical and quantum queries are needed, when $2^u$ ($\le 2^{\kappa}$) independent keys are in use.
Last updated:  2026-02-24
Tropical cryptography IV: Digital signatures and secret sharing with arbitrary access structure
Dima Grigoriev, Chris Monico, and Vladimir Shpilrain
We use tropical algebras as platforms for a very efficient digital signature protocol. Security relies on the computational hardness of factoring a given tropical matrix into a product of two matrices of given dimensions; this problem is known to be NP-complete. We also offer a secret sharing scheme with an arbitrary access structure, where security of the shared secret is based on the computational hardness of the same problem.
Last updated:  2026-02-24
On the Adaptive Security of Key-Unique Threshold Signatures
Michele Ciampi, Elizabeth Crites, Chelsea Komlo, and Mary Maller
In this work, we investigate the security assumptions required to prove the adaptive security of threshold signatures. Adaptive security is a strong notion of security that allows an adversary to corrupt parties at any point during the execution of the protocol, and is of practical interest due to recent standardization efforts for threshold schemes. Towards this end, we give two different impossibility results. We begin by formalizing the notion of a key-unique threshold signature scheme, where public keys have a unique correspondence to secret keys and there is an efficient algorithm for checking that public keys are well-formed. Key-uniqueness occurs in many threshold schemes that are compatible with standard, single-party signatures used in practice, such as BLS, ECDSA, and Schnorr signatures. Our first impossibility result demonstrates that it is impossible to prove the adaptive security of any key-unique threshold signature scheme under any non-interactive computational or decisional assumption for a broad class of reductions, in the range $\lfloor t/\ell \rfloor < t_c \le t$, where $t+1$ is the threshold out of $n$ parties, $t_c$ is the number of corrupted parties (polynomially related to the security parameter), and $\ell$ is a constant. Such assumptions include, but are not limited to, the discrete logarithm (DL), recently introduced circular DL (CDL), computational Diffie-Hellman (CDH), decisional DH (DDH), and $q$-Strong DH ($q$-SDH) assumptions. Our second impossibility result applies specifically to key-unique threshold Schnorr signatures, currently an active area of research. We demonstrate that, even under the interactive computational assumptions one-more DL (OMDL), algebraic OMDL (AOMDL), and algebraic one-more CDH (AOMCDH), it is impossible to prove adaptive security for $\lfloor t/2 \rfloor < t_c \le t$ in the ROM for a natural class of rewinding reductions.
Taken together, our results underscore the difficulty of achieving adaptive security for key-unique threshold signatures, but at the same time, they open a new line of research, by indicating assumptions and properties to aim for when constructing adaptively secure threshold schemes, or informing new attacks.
Last updated:  2026-02-24
On the conversion of module representations for higher dimensional supersingular isogenies
Aurel Page, Damien Robert, and Julien Soumier
We expand the well-developed toolbox between quaternionic ideals and supersingular elliptic curves into its higher dimensional version, namely (Hermitian) modules and maximal supersingular principally polarized abelian varieties. One of our main results is an efficient algorithm to compute an unpolarized isomorphism $A \simeq E_0^g$ given the abstract module representation of $A$. This algorithm relies on a subroutine that solves the Principal Ideal Problem in matrix rings over quaternion orders, combined with a higher dimensional generalisation of the Clapotis algorithm. To illustrate the flexibility of our framework, we also use it to reduce the degree of the output of the KLPT$^2$ algorithm, from $O(p^{25})$ to $O(p^{15.5})$.
Last updated:  2026-02-24
Crossing with Confidence: Formal Analysis and Model Checking of Blockchain Bridges
Pyrros Chaidos, Pooya Farshim, Denis Firsov, Dimitar Jetchev, Aggelos Kiayias, Markulf Kohlweiss, and Anca Nitulescu
We develop formal code-based security definitions for blockchain bridges and apply them to several bridge architectures deployed in practice. We derive, on the one hand, traditional pen-and-paper proofs and, on the other, automated guarantees against bounded counterexamples. The latter is achieved via bounded model checking of our formally specified properties, implemented in Quint, a specification language and model checker closely related to TLA+. Our definitions are expressed in a precise, code-based variant of the Universal Composition (UC) framework. This enables the modular use of hybrid functionalities, even for property-based definitions, and is essential for bounded model checking, where underlying primitives must be idealized. Accordingly, we idealize and model-check all building blocks used in our protocols. Notably, we formulate a novel UC ideal functionality for Advanced Threshold Signatures (ATS) and model-check it for attacks to ensure its robustness.
Last updated:  2026-02-24
Post-Quantum Adaptor Signatures with Strong Security from Cryptographic Group Actions
Ryann Cartor, Nathan Daly, Giulia Gaggero, Jason T. LeGrow, Andrea Sanguineti, and Silvia Sconza
We present One Round ``Cheating'' Adaptor Signatures (OR-CAS): a novel and efficient construction of adaptor signature schemes from CSI-FiSh. Our protocol improves substantially on existing group action-based schemes: unlike IAS (Tairi et al., FC 2021), our scheme does not require expensive non-interactive zero-knowledge proofs, and unlike adaptor MCSI-FiSh (Jana et al., CANS 2024), our construction does not require any modification to the underlying digital signature scheme. We prove the protocol's security under the strong security notions of Dai et al. (Indocrypt 2022) and Gerhart et al. (Eurocrypt 2024).
Last updated:  2026-02-24
Device-Bound Anonymous Credentials With(out) Trusted Hardware
Karla Friedrichs, Franklin Harding, Anja Lehmann, and Anna Lysyanskaya
Anonymous credentials enable unlinkable and privacy-preserving user authentication. To ensure non-transferability of credentials among corrupt users, they can additionally be device-bound. Therein, a credential is tied to a key protected by a secure element (SE), usually a hardware component, and any presentation of the credential requires a fresh contribution of the SE. Interestingly, despite being a fundamental aspect of user credentials, device binding for anonymous credentials is relatively unstudied. Existing constructions either require multiple calls to the SE, or need the SE to keep a credential-specific state -- violating core design principles of shielded SEs. Further, constructions that are compatible with the most mature credential scheme BBS rely on the honesty of the SE for privacy, which is hard to vet given that SEs are black-box components. In this work, we thoroughly study and solve the problem of device-bound anonymous credentials (DBACs). We model DBACs to ensure the unforgeability and non-transferability of credentials, and to guarantee user privacy at the same time. Our definitions cover a range of SE trust levels, including the case of a subverted or fully corrupted SE. We also define blind DBACs, in which the SE learns nothing about the credential presentations it helped compute. This targets the design of a remote, cloud-based SE which is a deployment model considered for the EU Digital Identity (EUDI) wallet to address the fact that most user phones are not equipped with a sufficiently secure SE. Finally, we present three simple and round-optimal constructions for device binding of BBS credentials, and prove their security in the AGM+ROM and privacy unconditionally. The SE therein is extremely lightweight: it only has to compute a BLS or Schnorr signature in a single call. 
We also give the BLS-based construction in a blind variant, yielding the first protocol that enables privacy-preserving device binding for anonymous credentials when being used with a remote SE.
Last updated:  2026-02-24
Multi-Committee MPC: From Unanimous to Identifiable Abort
Lichun Li, Hongqing Liu, Jiawei Ni, Chaoping Xing, and Chen Yuan
In this work, we consider dishonest majority MPC protocols with $(1-\epsilon)n$ corrupted parties for some constant $\epsilon\in (0,1/2)$. In this setting, there exist MPC protocols with unanimous abort that achieve constant communication in both the online and offline phases via a packed secret sharing scheme. Departing from these approaches, we revisit the ``committee-based'' approach to design an efficient MPC protocol with constant online and offline communication complexity. To balance the communication load of each party, our protocol adopts multiple committees, each of constant size. The computation of the circuit $C$ is then divided into layers, each assigned to one committee. To securely transmit messages between committees, we introduce handoff gates, incurring only a slight communication overhead. Furthermore, we leverage circuit-dependent preprocessing and incremental checking to improve the online efficiency. Compared to other MPC protocols in the same corruption setting, our protocol achieves the smallest concrete total communication complexity. Building upon our multi-committee unanimous-abort protocol, we upgrade it to identifiable abort by adapting a technique of Rivinius (EUROCRYPT 2025). To integrate this technique into our setting, we adjust the verification timing and introduce a king party to reduce the communication complexity of openings. This yields the first identifiable-abort MPC protocol with constant communication complexity in the sub-optimal dishonest majority setting.
Last updated:  2026-02-24
A Gaussian Leftover Hash Lemma for Modules over Number Fields
Martin R. Albrecht, Joël Felderhoff, Russell W. F. Lai, Oleksandra Lapiha, and Ivy K. Y. Woo
The Leftover Hash Lemma (LHL) states that \(\mathbf{X} \cdot \mathbf{v}\) for a Gaussian \(\mathbf{v}\) is an essentially independent Gaussian sample. It has seen numerous applications in cryptography for hiding sensitive distributions of \(\mathbf{v}\). We generalise the Gaussian LHL, initially stated over \(\mathbb{Z}\) by Agrawal, Gentry, Halevi, and Sahai (2013), to modules over number fields. Our results have a sub-linear dependency on the degree of the number field and require only polynomial norm growth \(\lVert\mathbf{v}\rVert/\lVert\mathbf{X}\rVert\). To this end, we also characterise when \(\mathbf{X}\) is surjective (assuming the Generalised Riemann Hypothesis) and give bounds on the smoothing parameter of the kernel of \(\mathbf{X}\). We also establish when the resulting distribution is independent of the geometry of \(\mathbf{X}\), and establish the hardness of the \(k\)-SIS and \(k\)-LWE problems over modules (\(k\)-MSIS/\(k\)-MLWE) based on the hardness of SIS and LWE over modules (MSIS/MLWE), respectively, which was assumed without proof in prior works.
Last updated:  2026-02-24
Low-cost anonymous reputation update for IoT applications
Alex Shafarenko
This paper presents a novel approach to zero-trust anonymous reputation update in crowd sensing IoT applications. We use a suite of cryptographic functions to achieve anonymity, including unlinkability of sensing reports to the principals that submit them and to one another, while enabling the infrastructure to reliably quantify the degree of trust expressed as a reputation level. The protocol is low-cost for the anonymous participant due to the use of cheap standard algorithms: low-exponent modular exponentiation and cryptographic hashing, which makes it well suited to IoT.
Last updated:  2026-02-24
Pairing-based Functional Commitments for Circuits with Shorter Parameters
David Balbás, Dario Fiore, and Russell W. F. Lai
Functional commitments (FCs) enable a prover to commit to a message and later produce a succinct proof of its image under any given admissible function. Unlike succinct non-interactive arguments (SNARGs), secure FCs can be realised under falsifiable assumptions in the standard model, making them attractive alternatives when fully algebraic constructions are desired. All known algebraic constructions of FC are either lattice-based or pairing-based. While a lattice-based FC for circuits with almost optimal complexities has been recently achieved [Wee, CRYPTO'25], state-of-the-art pairing-based FCs for bounded-width $w$ [Balbás-Catalano-Fiore-Lai (BCFL), TCC'23] and bounded-size $s$ circuits [Wee-Wu, EUROCRYPT'24] require prohibitive public parameter sizes $O(\lambda w^5)$ and $O(\lambda s^5)$, respectively. In this work, we present a new algebraic pairing-based FC which achieves $O(\lambda w^3)$ public parameter size for bounded-width $w$ and unbounded depth $d$ circuits. The construction preserves all nice properties of BCFL: $O(\lambda)$ commitment size, $O(\lambda d^2)$ proof size, additive homomorphism, efficient verification, and chainability. For bounded-size $s$ circuits, we alternatively obtain $O(\lambda s^3)$ public parameters with $O(\lambda d)$ proofs. At the core of our scheme lies a new chainable FC for quadratic functions with commitments computed with respect to a power basis, as well as techniques for switching between different commitment bases.
Last updated:  2026-02-24
Information-Theoretic Network-Agnostic MPC with Polynomial Communication
Xiaoyu Ji, Chen-Da Liu-Zhang, Daniel Pöllmann, and Yifan Song
Network-agnostic MPC protocols tolerate simultaneously a higher number of corruptions $t_s < n/2$ when the network is synchronous, and a lower number $t_a < n/3$ when the network is asynchronous. As such, they provide strong resilience, irrespective of the type of underlying communication network. We focus on improving the communication complexity of network-agnostic MPC with optimal resilience $2t_s + t_a < n$. In this regime, there are no polynomial-time information-theoretic solutions and current computational protocols (without fully-homomorphic encryption) communicate $O(n^2)$ elements per multiplication gate. In this work, we significantly advance the landscape by introducing the first information-theoretic protocol with quadratic communication per multiplication gate and the first computational protocol with linear communication per multiplication gate based solely on signatures and symmetric-key encryption.
Last updated:  2026-02-24
Perfectly Secure Network-Agnostic MPC Comes for Free
Xiaoyu Ji, Chen-Da Liu-Zhang, and Yifan Song
Secure multiparty computation (MPC) allows a set of parties to jointly compute a function while keeping their inputs private. Classical MPC protocols assume either a synchronous or an asynchronous network. Synchronous protocols tolerate more corrupted parties but rely on a timing bound, while asynchronous protocols make no timing assumptions but handle fewer corruptions. The network-agnostic model aims to combine the advantages of both. It requires security without knowing in advance whether the network is synchronous or asynchronous, guaranteeing resilience against up to $t_s$ corruptions in the synchronous case and $t_a$ corruptions in the asynchronous case. The optimal corruption threshold for perfect security has been established as $n = 2\max(t_s, t_a) + \max(2t_a, t_s) + 1$, but prior work either falls short of this threshold or requires exponential local computation. In this work, we present the first perfectly secure network-agnostic MPC protocol with polynomial communication and computation complexity under the optimal threshold. Our protocol achieves expected communication complexity $\mathcal{O}((|C|n + (D+C_I)n^2 + n^6)\log n)$ bits for a circuit of size $|C|$ over a finite field $\mathbb{F}$ of size $\mathcal{O}(n)$, depth $D$, and input size $C_I$. Our main technical contribution is a compiler that generates Beaver triples in the network-agnostic setting using synchronous and asynchronous triple-generation protocols in a black-box way. Beyond the cost of the underlying protocols, it only requires $\mathcal{O}(n^2)$ instances of network-agnostic Byzantine agreement.
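The Beaver triples generated by the compiler above are used for the classic multiplication trick: with a preprocessed triple $(a, b, c = ab)$, parties open $d = x - a$ and $e = y - b$ and obtain shares of $xy = c + db + ea + de$ locally. A minimal, non-distributed Python sketch of this arithmetic (trusted dealer, additive sharing modulo an illustrative prime — this is the generic technique, not the paper's network-agnostic protocol):

```python
import random

P = 2**61 - 1  # illustrative prime modulus, not a parameter from the paper

def share(x, n=3):
    """Additively secret-share x mod P among n parties."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply shared x and y using a Beaver triple (a, b, c = a*b).
    Parties open d = x - a and e = y - b, then locally compute shares of
    c + d*b + e*a; the public term d*e is added by one party."""
    d = reconstruct([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    e = reconstruct([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    z_sh = [(ci + d * bi + e * ai) % P for ai, bi, ci in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

# Demo: a trusted dealer hands out a triple; the parties multiply 7 * 9.
a, b = random.randrange(P), random.randrange(P)
x_sh, y_sh = share(7), share(9)
z_sh = beaver_multiply(x_sh, y_sh, share(a), share(b), share(a * b % P))
print(reconstruct(z_sh))  # 63
```

The identity holds because $(a+d)(b+e) = c + db + ea + de$; the whole cost of a secure multiplication is thus two openings plus local arithmetic, which is why triple generation dominates the communication complexity.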
Last updated:  2026-02-24
Traceable Secret Sharing Schemes for General Access Structures
Oriol Farràs and Miquel Guiot
Traceable secret sharing complements traditional schemes by enabling the identification of parties who sell their shares. In the model introduced by Boneh, Partap, and Rotem [CRYPTO’24], a group of corrupted parties generates a reconstruction box $R$ that, given enough valid shares as input, reconstructs the secret. The goal is to trace $R$ back to at least one of the corrupted parties using only black-box access to it. While their work provides efficient constructions for threshold access structures, it does not apply to the general case. In this work, we extend their framework to general access structures and present an information-theoretic traceable scheme supporting them. In the course of our construction, we also contribute to the study of anonymous secret sharing, a notion recently introduced by Bishop et al. [CRYPTO’25], which strengthens classical secret sharing by requiring that shares do not reveal the identities of the parties holding them. We further advance this area by proposing new and stronger definitions, and presenting an anonymous scheme for general access structures that satisfies them.
Last updated:  2026-02-24
LeOPaRd: Towards Practical Post-Quantum Oblivious PRFs via 2HashDH Paradigm
Muhammed F. Esgin, Ron Steinfeld, Erkan Tairi, and Jie Xu
In this work, we introduce a more efficient post-quantum oblivious PRF (OPRF) design, called LeOPaRd. Our proposal is round-optimal and supports verifiability and partial obliviousness, all of which are important for practical applications. The main technical novelty of our work is a new method for computing samples of MLWE (Module Learning With Errors) in a two-party setting. To do this, we introduce a new family of (interactive) lattice problems, called MLWE-PRF with re-use (MLWE-PRF-RU). Here, the adversary is given a mix of MLWE and PRF samples where each PRF error is dependent on an adversarially-chosen matrix and the MLWE error. We rigorously study the hardness of MLWE-PRF-RU and provide a reduction from the standard MLWE to MLWE-PRF-RU, establishing a strong security foundation. We believe the MLWE-PRF-RU problem family and the intermediate security reductions we introduce along the way can be of independent interest for other interactive protocols. LeOPaRd exploits this MLWE-PRF-RU assumption to realize a lattice-based OPRF design without relying on heavy machinery such as noise flooding and fully homomorphic encryption used in earlier works. LeOPaRd can feature around 137 KB of total communication, compared to 300+ KB in earlier works, while also achieving about a $2\times$ reduction in online communication compared to the prior state-of-the-art result. We also identify gaps in some existing constructions and models, and propose appropriate fixes.
Last updated:  2026-02-24
Weak Tweak-Key Analysis Of Blink Via Superbox
Shiyao Chen, Jian Guo, and Tianyu Zhang
This work presents the first third-party cryptanalysis of \textsf{Blink}, a recent tweakable block cipher built on the Three-Hash Framework with a long-key design. Leveraging the idea of Superbox, we develop a lightweight theoretical model capturing the value correlations, weak-key conditions, fixed-key probabilities, and the local clustering behaviors inside a \textsf{Blink} Superbox when the value spaces are affine. This model is intended as a concrete, easy-to-follow specialization of existing generic frameworks adapted to the structure of \textsf{Blink}. We then build a simple pattern-based automatic tool to search for weak tweak-key differentials. The main results are as follows: For \textsf{Blink-64} with 448-bit key size, we obtain a 10-round distinguisher with probability $2^{-50.42}$ for up to $2^{442}$ weak keys (equivalently $2^{-6}$ of the full key space); For \textsf{Blink-128} with 1024-bit key size, we obtain a 10-round distinguisher with probability $2^{-68.83}$ for up to $2^{1010}$ weak keys (equivalently $2^{-14}$ of the full key space), which can be extended to a 12-round multiple differential distinguisher with probability $2^{-96.83}$. Based on the weak tweak-key distinguisher, we further mount a 13-round key recovery attack on \textsf{Blink-64}, recovering the full 448-bit master key with time complexity $2^{114.02}$ and $2^{56}$ chosen plaintexts. All these distinguishers and key recovery attacks work within the data and time bounds claimed by the designers. Our analysis remains consistent with the security of the full-round design of \textsf{Blink}, while offering additional insight into the edge-case behaviors arising from the tweak-key interaction in \textsf{Blink}, and could be potentially informative for future refinements of tweakable block cipher constructions.
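Differential searches of the kind described above start from per-S-box difference statistics, summarized in a difference distribution table (DDT): entry $(\delta_{\mathrm{in}}, \delta_{\mathrm{out}})$ counts the inputs $x$ with $S(x) \oplus S(x \oplus \delta_{\mathrm{in}}) = \delta_{\mathrm{out}}$. A generic sketch, computed here for the well-known PRESENT 4-bit S-box purely as an illustration (these are not \textsf{Blink}'s components):

```python
# PRESENT's 4-bit S-box, used only as a familiar example
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """Difference distribution table: table[din][dout] counts inputs x
    with sbox[x] ^ sbox[x ^ din] == dout."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

t = ddt(SBOX)
# best nonzero-input-difference entry -> best single-S-box differential probability
best = max(t[d][o] for d in range(1, 16) for o in range(16))
print(best, best / 16)  # 4, i.e. probability 2^-2
```

Automatic tools chain such per-S-box probabilities (with clustering and key-dependence corrections, as the paper's model does for Superboxes) into multi-round differential estimates.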
Last updated:  2026-02-24
Is PSI Really Faster Than PSU? Achieving Efficient PSU with Invertible Bloom Filters
Lucas Piske and Ni Trieu
Private Set Union (PSU) enables two parties to compute the union of their private sets without revealing anything beyond the union itself. Existing PSU protocols remain much slower than private set intersection (PSI), often by a factor of around $30\times$. In this work, we present the first PSU protocol based on Invertible Bloom Lookup Tables (IBLTs), introducing a fundamentally new framework that departs from traditional, inefficient approaches. Our protocol exploits structural invariants between each party’s IBLTs and their union to compute the union efficiently without explicitly constructing a combined IBLT. Central to our approach is the notion of union peelability, which allows union elements to be recovered directly from the original IBLTs. We securely implement this functionality using only Oblivious Transfer (OT) and Oblivious Pseudorandom Function (OPRF) for equality checks, ensuring no information beyond the union is leaked. As a result, for set sizes ranging from $2^{14}$ to $2^{20}$, our protocol achieves a runtime of $0.08$ to $2.95$ seconds in the LAN setting, which is comparable to state-of-the-art PSI. We also show substantial speedups over prior PSU work—up to $10\times$ faster in LAN settings and consistently faster in WAN scenarios—while maintaining linear computation and communication complexity with small constants.
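The IBLT peeling that the protocol above builds on can be illustrated in a few lines: each element is XORed into several cells, and decoding repeatedly extracts "pure" cells (count one, fingerprint match) and deletes their element. This is a generic textbook IBLT sketch — cell count, hash count, and fingerprints are illustrative choices, not the paper's parameters:

```python
import hashlib

CELLS, HASHES = 40, 3  # illustrative sizing

def idx(x, i):
    """i-th cell index for element x (hash-derived)."""
    h = hashlib.sha256(f"{i}:{x}".encode()).digest()
    return int.from_bytes(h[:8], "big") % CELLS

def fp(x):
    """64-bit fingerprint used to detect pure cells."""
    return int.from_bytes(hashlib.sha256(f"fp:{x}".encode()).digest()[:8], "big")

def new_iblt():
    return [[0, 0, 0] for _ in range(CELLS)]  # (count, keySum, fpSum)

def insert(t, x):
    for i in range(HASHES):
        c = t[idx(x, i)]
        c[0] += 1; c[1] ^= x; c[2] ^= fp(x)

def peel(t):
    """Recover the inserted set by repeatedly extracting pure cells
    (count 1 with matching fingerprint) and deleting their element."""
    out, progress = set(), True
    while progress:
        progress = False
        for c in t:
            if c[0] == 1 and fp(c[1]) == c[2]:
                x = c[1]
                out.add(x)
                for i in range(HASHES):  # delete x from all its cells
                    d = t[idx(x, i)]
                    d[0] -= 1; d[1] ^= x; d[2] ^= fp(x)
                progress = True
    return out

t = new_iblt()
for x in [3, 14, 15, 92, 65]:
    insert(t, x)
recovered = peel(t)
print(sorted(recovered))
```

Peeling succeeds with high probability when the table is sufficiently larger than the inserted set, which is the structural property ("union peelability" in the paper's terminology) that makes recovering union elements directly from the parties' IBLTs possible.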
Last updated:  2026-02-24
Liquid Democracy With Two Opposing Factions
Krishnendu Chatterjee, Seth Gilbert, Stefan Schmid, Jakub Svoboda, and Michelle Yeo
Liquid democracy is a transitive vote delegation process. Previously, the advantages of liquid democracy over direct voting have been studied in settings where there is a ground truth “good” voting outcome. In this work, we analyse liquid democracy in a realistic setting with two opposing factions without a ground truth and under uncertainty. Formally, we consider $n$ voters who want to decide on some binary issue by voting. Each voter has a preference in $\{0, 1\}$ that represents the opinion of the voter on this issue. That is, a voter with preference 0 prefers to vote for option 0. We refer to voters with the same preference as being in the same faction. The goal is for voters in the same faction to cooperatively decide on vote delegation strategies that maximise their probability of winning the election. In this setting, we present a practical distributed algorithm under realistic assumptions to decide on an approximately optimal vote delegation strategy that involves minimal interaction and communication, and under incomplete information about the opposing faction. We then provide a complete analytical characterisation of optimal vote delegation strategies under complete information about the opposing faction. Finally, we show that finding optimal delegation strategies in the general setting is PSPACE-complete.
Last updated:  2026-02-24
Distributed Broadcast Encryption for Confidential Interoperability across Private Blockchains
Angelo De Caro, Kaoutar Elkhiyaoui, Sandeep Nishad, Sikhar Patranabis, and Venkatraman Ramakrishna
Interoperation across distributed ledger technology (DLT) networks hinges upon the secure transmission of ledger state from one network to another. This is especially challenging for private networks whose ledger access is limited to enrolled members. Existing approaches rely on a trusted centralized proxy that receives encrypted ledger state of a network, decrypts it, and sends it to members of another network. Though effective, this approach goes against the founding principle of DLT, namely avoiding single points of failure (or single sources of trust). In this paper, we leverage fully-distributed broadcast encryption (FDBE in short) to build a fully decentralized protocol for confidential information-sharing across private networks. Compared to traditional broadcast encryption (BE), FDBE is characterized by distributed setup and key generation, where mutually distrusting parties agree on a BE’s public key without a trusted setup, and securely derive their decryption keys. Given any FDBE, two private networks can securely share information as follows: a sender in one network uses the other network’s FDBE public key to encrypt a message for its members. The resulting construction is secure in the simplified universal composability framework. To further demonstrate the practicality of our approach, we present the first instantiation of an FDBE that enjoys constant-sized decryption keys and ciphertexts, and evaluate the resulting performances through a reference implementation that considers two private Hyperledger Fabric networks within the Hyperledger Cacti interoperation framework.
Last updated:  2026-02-24
When Simple Permutations Mix Poorly: Limited Independence Does Not Imply Pseudorandomness
Jesko Dujmovic, Angelos Pelecanos, and Stefano Tessaro
Over the past two decades, several works have used (almost) $k$-wise independence as a proxy for pseudorandomness in block ciphers, since it guarantees resistance against broad classes of statistical attacks. For example, even the case $k = 2$ already implies security against differential and linear cryptanalysis. Hoory, Magen, Myers, and Rackoff (ICALP ’04; TCS ’05) formulated an appealing conjecture: if the sequential composition of $T$ independent local randomized permutations is (close to) four-wise independent, then it should also be a pseudorandom permutation. Here, "local" means that each output bit depends on only a constant number of input bits. This conjecture offers a potential strong justification for analyses of block ciphers that establish (almost) $k$-wise independence of this type of constructions. In this work, we disprove the conjecture in full generality by presenting an explicit local randomized permutation whose sequential composition is four-wise independent, but not a pseudorandom permutation. Our counterexample in fact extends to $k$-wise independence for any constant $k$.
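The $k$-wise independence property at the heart of the conjecture can be checked exhaustively for the textbook example of affine permutations $x \mapsto ax + b$ over a small prime field, which form an exactly 2-wise independent family. This is an illustration of the definition only, unrelated to the paper's counterexample:

```python
P = 5  # small prime field, chosen so exhaustive checking is instant

# the family of affine permutations x -> a*x + b mod P (a != 0)
family = [(a, b) for a in range(1, P) for b in range(P)]

# 2-wise independence: every pair of distinct inputs (x1, x2) is mapped to
# every pair of distinct outputs (y1, y2) by the same number of members.
counts = set()
for x1 in range(P):
    for x2 in range(P):
        if x1 == x2:
            continue
        for y1 in range(P):
            for y2 in range(P):
                if y1 == y2:
                    continue
                n = sum(1 for a, b in family
                        if (a * x1 + b) % P == y1 and (a * x2 + b) % P == y2)
                counts.add(n)
print(counts)  # {1}: each distinct pair is hit by exactly one map
```

Exactly one map hits each pair because $a = (y_1 - y_2)/(x_1 - x_2)$ and $b$ are forced; the paper's point is that even the stronger four-wise version of this statistical guarantee does not, on its own, imply pseudorandomness.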
Last updated:  2026-02-24
Xiezhi: Toward Succinct Proofs of Solvency
Youwei Deng and Jeremy Clark
A proof of solvency (or proof of reserves) is a zero-knowledge proof conducted by a centralized cryptocurrency exchange to offer evidence that the exchange owns enough cryptocurrency to settle each of its users' balances. The proof seeks to reveal nothing about the finances of the exchange or its users, only the fact that it is solvent. The literature has already started to explore how to make proof size and verifier time independent of the number of (i) users on the exchange, and (ii) addresses used by the exchange. We argue there are a few areas of improvement. First, we propose and implement a full end-to-end argument that is fast for the exchange to prove (minutes), small in size (KBs), and fast to verify (seconds). Second, we deal with the natural conflict between Bitcoin and Ethereum's cryptographic setting (secp256k1) and more ideal settings for succinctness (e.g., pairing-based cryptography) with a novel mapping approach. Finally, we discuss how to adapt the protocol to the concrete parameters of BLS12-381 (which is relevant because the bit-decomposition of all user balances will exceed the largest root of unity of the curve for even moderately-sized exchanges).
Last updated:  2026-02-24
Partially Non-Interactive Two-Round Threshold and Multi-Signatures with Tighter and Adaptive Security
Yanbo Chen
Threshold and multi-signature schemes enable a group of signers to jointly sign messages. In the pairing-free discrete-logarithm setting, there have been extensive research efforts on two-round threshold and multi-signature schemes. A prominent category is partially non-interactive schemes, where the first round is message-independent and can be preprocessed offline. However, there exist gaps in security properties between partially non-interactive schemes and fully online schemes. While there are fully online schemes that achieve rewinding-free or fully adaptive security based on standard assumptions, existing partially non-interactive schemes require the adversary to act algebraically or rely on non-standard assumptions for these properties. We bridge this gap by proposing a family of new constructions of partially non-interactive threshold and multi-signature schemes. Central to our constructions is a technique that renders HBMS (Bellare and Dai, Asiacrypt 2021) and its variants partially non-interactive. Specifically, we present the following instances: - The first partially non-interactive two-round multi-signature scheme with a rewinding-free reduction under standard assumptions. This scheme provides a tighter security guarantee against non-algebraic adversaries than all prior partially non-interactive schemes. - The first partially non-interactive two-round threshold signature scheme achieving fully adaptive security against non-algebraic adversaries under standard assumptions.
Last updated:  2026-02-24
Round-Based Approximation of (Higher-Order) Differential-Linear Correlation
Kai Hu, Zhongfeng Niu, and Meiqin Wang
This paper presents a new method for approximating the correlations of differential-linear distinguishers. From the perspective of Beyne's geometric approach, the differential-linear correlation is a corresponding coordinate of the \textit{correlation vector} associated with the ciphertext multiset, which can be obtained by using the correlation matrix of the \textit{2-wise form} of the cipher. The composite nature of the correlation matrix leads to a round-based approach to approximate the correlation vector. This simple approximation is remarkably precise, yielding the most accurate estimation for differential-linear correlations in \textsf{Ascon} thus far and the first DL distinguishers for 6-round \textsf{Ascon}-128a initialization. For \textsf{PRESENT}, we present 17-round DL distinguishers, 4 rounds longer than the current record. To apply the round-based approach to ciphers with the large Chi ($\chi$) function as nonlinear functions, we derive theorems to handle the correlation propagation for $\chi$ and its 2-wise form. Strong DL distinguishers for up to 6 rounds of \textsf{Subterranean} and \textsf{Koala}-$p$ are provided, 2 rounds longer than the previous differential and linear distinguishers. Furthermore, the round-based approximation idea is also extended to the higher-order differential-linear distinguishers. We give the first second-order DL distinguisher for 6-round \textsf{Ascon}-128 initialization and the first second-order DL distinguishers for up to 7 rounds of \textsf{Subterranean} and \textsf{Koala}-$p$.
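The quantity being approximated round by round is, for a single small component, exactly computable by enumeration: the differential-linear correlation of $(\delta, \lambda)$ is $\mathbb{E}_x[(-1)^{\lambda \cdot (S(x) \oplus S(x \oplus \delta))}]$. A generic sketch on a classic 4-bit S-box (the first row of DES S1 — an illustration of the definition, not Ascon's $\chi$):

```python
# a classic 4-bit S-box: the first row of DES S1, used only as an example
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def dl_correlation(delta, lam, sbox=SBOX):
    """Exact DL correlation E_x[(-1)^{lam . (S(x) XOR S(x XOR delta))}]
    for input difference delta and output mask lam, by full enumeration."""
    total = 0
    for x in range(len(sbox)):
        parity = bin(lam & (sbox[x] ^ sbox[x ^ delta])).count("1") & 1
        total += -1 if parity else 1
    return total / len(sbox)

print(dl_correlation(0, 0xF))  # 1.0: a zero difference is always preserved
```

For a full cipher this expectation cannot be enumerated, which is where round-based approximations of the correlation vector come in: the per-round correlation matrices are composed instead.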
Last updated:  2026-02-23
qt-Pegasis: Simpler and Faster Effective Class Group Actions
Pierrick Dartois, Jonathan Komada Eriksen, Riccardo Invernizzi, and Frederik Vercauteren
In this paper, we revisit the recent Pegasis algorithm that computes an effective group action of the class group of any imaginary quadratic order $R$ on a set of supersingular elliptic curves primitively oriented by $R$. Although Pegasis was the first algorithm showing the practicality of computing unrestricted class group actions at higher security levels, it is complicated and prone to failures, which leads to many rerandomizations. We present a new algorithm, qt-Pegasis, which is much simpler, but at the same time faster and removes the need for rerandomization of the ideal we want to act with, since it never fails. It leverages the main technique of the recent Qlapoti approach. However, Qlapoti solves a norm equation in a quaternion algebra, which corresponds to the full endomorphism ring of a supersingular elliptic curve. We show that the algorithm still applies in the quadratic setting, by embedding the quadratic ideal into a quaternion ideal using a technique similar to the one applied in KLaPoTi. This way, we can reinterpret the output of Qlapoti as four equivalent quadratic ideals, instead of two equivalent quaternion ideals. We then show how to construct a Clapoti-like diagram in dimension $2$, which embeds the action of the ideal in a $4$-dimensional isogeny. We implemented our qt-Pegasis algorithm in SageMath for the CSURF group action, and we achieve a speedup over Pegasis of $1.8\times$ for the 500-bit parameters and $2.6\times$ for the 4000-bit parameters.
Last updated:  2026-02-23
WaterSQI and PRISMO: Quaternion Signatures for Supersingular Isogeny Group Actions
Tako Boris Fouotsa
Isogeny group action based signatures are obtained from a sigma protocol with high soundness error, say $\frac{1}{2}$ for its most basic variant. One needs to independently repeat the sigma protocol $O(\lambda)$ times to reduce the soundness error to negligible (with $\lambda$ being the security parameter). These repetitions come with a considerable efficiency and size overhead. On the other hand, quaternion isogeny-based signatures such as SQIsign and PRISM are directly obtained from a sigma protocol with a negligible soundness error. The secret key in SQIsign and PRISM is a random supersingular isogeny, and both schemes are insecure when the secret isogeny arises from the supersingular isogeny group action setting. In this paper, we propose WaterSQI and PRISMO, variants of SQIsign and PRISM respectively, suited for secret isogenies that arise from the supersingular isogeny group action setting. They use a sigma protocol whose soundness error is negligible without requiring parallel repetitions. They are hence more compact and $O(\lambda)$ times more efficient compared to Generalised CSI-FiSh (the generalisation of CSI-FiSh to large parameters using generic isogeny group action evaluation algorithms such as Clapotis/KLaPoTi/PEGASIS). For example, for our proof of concept implementation with a 2000-bit prime in SageMath, PRISMO, when compared to Generalised CSI-FiSh with the same public key size, is about $80\times$ faster for signing and $1457\times$ faster for verification, while also being $29\times$ more compact (signature size). Moreover, we prove that WaterSQI is a proof of knowledge of the full endomorphism ring.
Last updated:  2026-02-23
Two-Server Private Information Retrieval in Sublinear Time and Quasilinear Space
Alexandra Henzinger and Seyoon Ragavan
We build two-server private information retrieval (PIR) that achieves information-theoretic security and strong double-efficiency guarantees. On a database of $n > 10^6$ bits, the servers store a preprocessed data structure of size $1.5 \sqrt{\log_2 n} \cdot n$ bits and then answer each PIR query by probing $12 \cdot n^{0.82}$ bits in this data structure. To our knowledge, this is the first information-theoretic PIR with any constant number of servers that has quasilinear server storage $n^{1+o(1)}$ and polynomially sublinear server time $n^{1-\Omega(1)}$. Our work builds on the PIR-with-preprocessing protocol of Beimel, Ishai, and Malkin (CRYPTO 2000). The insight driving our improvement is a compact data structure for evaluating a multivariate polynomial and its derivatives. Our data structure and PIR protocol leverage the fact that Hasse derivatives can be efficiently computed on-the-fly by taking finite differences between the polynomial's evaluations. We further extend our techniques to improve the state-of-the-art in PIR with three or more servers, building on recent work by Ghoshal, Li, Ma, Dai, and Shi (TCC 2025). On an 11 GB database with 1-byte records, our two-server PIR encodes the database into a 1 TB data structure – which is 4,500,000$\times$ smaller than that of prior two-server PIR-with-preprocessing schemes, while maintaining the same communication and time per query. To answer a PIR query, the servers fetch and send back 4.4 MB from this data structure, requiring 2,560$\times$ fewer memory accesses than linear-time PIR. The main limitation of our protocol is its large communication complexity, which we show how to shrink to $n^{0.31} \cdot \mathsf{poly}(\lambda)$ using compact linearly homomorphic encryption.
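The on-the-fly derivative trick mentioned above can be illustrated in one variable: the Hasse derivatives $H_k f(a)$ are the coefficients of $g(t) = f(a+t)$, and they can be recovered from the evaluations $f(a), f(a+1), \ldots, f(a+d)$ via forward differences and Newton-basis conversion. A minimal Python sketch (univariate, unit-step differences, an illustrative prime modulus — the paper's multivariate data structure is considerably more involved):

```python
from math import comb

P = 1_000_003  # illustrative prime modulus, larger than all values used below

def hasse_from_evals(evals):
    """Given evals[j] = f(a + j) for j = 0..d, return the Hasse derivatives
    [H_0 f(a), ..., H_d f(a)], i.e. the coefficients of g(t) = f(a + t).
    Forward differences express g in the Newton basis binom(t, k); each
    basis polynomial is then converted to the monomial basis."""
    d = len(evals) - 1
    diffs, row = [], [e % P for e in evals]
    for _ in range(d + 1):                  # diffs[k] = Delta^k g(0)
        diffs.append(row[0])
        row = [(row[j + 1] - row[j]) % P for j in range(len(row) - 1)]
    coeffs = [0] * (d + 1)
    basis = [1]                             # coefficients of binom(t, k)
    for k in range(d + 1):
        for i, b in enumerate(basis):
            coeffs[i] = (coeffs[i] + diffs[k] * b) % P
        # binom(t, k+1) = binom(t, k) * (t - k) / (k + 1)
        inv = pow(k + 1, P - 2, P)
        new = [0] * (len(basis) + 1)
        for i, b in enumerate(basis):
            new[i + 1] = (new[i + 1] + b) % P
            new[i] = (new[i] - k * b) % P
        basis = [(v * inv) % P for v in new]
    return coeffs

# check against the closed form H_k f(a) = sum_i binom(i, k) c_i a^(i-k)
c, a = [5, 0, 7, 1], 10                     # f(x) = 5 + 7x^2 + x^3
evals = [sum(ci * (a + j) ** i for i, ci in enumerate(c)) % P
         for j in range(len(c))]
H = hasse_from_evals(evals)
direct = [sum(comb(i, k) * c[i] * a ** (i - k) for i in range(k, len(c))) % P
          for k in range(len(c))]
print(H == direct)  # True
```

The point exploited by the PIR data structure is the direction of this computation: the derivatives never need to be stored, since evaluations alone suffice to reproduce them when needed.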
Last updated:  2026-02-23
Multivalued Broadcast with Optimal Length
Gabriel Dettling, Martin Hirt, and Chen-Da Liu-Zhang
A multi-valued broadcast protocol allows a sender $P_s$ to broadcast an $\ell$-bit message to $n$ recipients. For all relevant models, multi-valued broadcast protocols with asymptotically optimal communication complexity $\mathcal{O}(\ell n)+\mathrm{Poly}(n)$ have been published. Despite their low communication complexity, these protocols perform poorly in modern networks. Even if the network allows all $n$ parties to send messages at the same time, the execution time of the protocols is proportional to $\ell n$ (instead of $\ell$). Even if the network allows to use all bilateral channels at the same time, the execution time is still proportional to $\ell$ (instead of $\ell/n$). We ask the natural question whether multi-valued broadcast protocols exist which take time proportional to $\ell$ if parties can simultaneously send messages, and even take time proportional to $\ell/n$ if the bilateral channels can be used simultaneously. We provide a complete characterization of multi-valued broadcast with a two-fold answer: On the negative side, we prove that for $t<n$, multi-valued broadcast tolerating up to $t$ corrupt parties requires time proportional to $\ell n$ if parties can simultaneously send messages, and time proportional to $\ell$ if bilateral channels can be used simultaneously. On the positive side, we prove that for any constant $\epsilon > 0$, multi-valued broadcast tolerating up to $ t < (1- \epsilon)n$ corruptions can be achieved in time proportional to $\ell$ (when parties can send messages simultaneously), and in time proportional to $\ell/n$ (when bilateral channels can be used simultaneously). We provide such protocols both with cryptographic security as well as with statistical security.
Last updated:  2026-02-23
A Modular Approach to Succinct Arguments for QMA
James Bartusek, Jiahui Liu, and Giulio Malavolta
Succinct argument systems are of central importance to modern cryptography, enabling the efficient verification of computational claims. In the classical setting, Kilian (STOC 92) established that any probabilistically checkable proof for NP can be transformed into a succinct argument system for NP using only collision-resistant hash functions. In the quantum setting, recent works have established the feasibility of (classically-verifiable) succinct arguments for QMA, capturing statements that require \emph{quantum} proofs. However, known constructions all rely on the highly structured assumption of learning with errors (LWE), which stands in stark contrast with the unstructured assumptions that suffice for NP. In this work, we develop a new framework that broadens the cryptographic foundations of succinct arguments for QMA. We assume the existence of (i) an oblivious state preparation (OSP) protocol, which in turn can be constructed from \emph{plain} trapdoor claw-free functions, and (ii) collapsing hash functions, the quantum analogue of collision-resistance. In particular, we obtain the first succinct, classically-verifiable argument system for QMA which does not rely on the hardness of LWE. Our construction proceeds in two steps. First, we design a \emph{round-efficient} classically-verifiable argument system for QMA based only on the assumption of OSP. Second, we introduce a \emph{generalized communication compression compiler}, which, assuming collapsing hash functions, transforms any $T$-round interactive protocol with a classical verifier into one in which the communication size is bounded by $T \cdot \mathrm{poly}(\lambda)$ for some fixed polynomial independent of the original size of each message. Our compiler extends a quantum rigidity-based communication compression technique of Zhang (QCrypt 25), and may be of independent interest.
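Collision-resistant hashing enters Kilian-style compilers through Merkle-tree commitments: a long string is committed to by a short root, and individual positions are later opened with logarithmic-size authentication paths. A generic sketch (SHA-256, duplicate-last-node padding — illustrative conventions, not the paper's construction):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _leaf_level(leaves):
    return [H(b"leaf:" + x) for x in leaves]

def merkle_root(leaves):
    level = _leaf_level(leaves)
    while len(level) > 1:
        if len(level) % 2:            # pad odd levels by duplicating the last node
            level.append(level[-1])
        level = [H(b"node:" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_open(leaves, i):
    """Authentication path (sibling hashes, bottom-up) for leaf i."""
    path, level = [], _leaf_level(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[i ^ 1])     # sibling at this level
        level = [H(b"node:" + level[j] + level[j + 1])
                 for j in range(0, len(level), 2)]
        i //= 2
    return path

def merkle_verify(root, leaf, i, path):
    h = H(b"leaf:" + leaf)
    for sib in path:
        h = H(b"node:" + (h + sib if i % 2 == 0 else sib + h))
        i //= 2
    return h == root

msgs = [b"a", b"b", b"c", b"d"]
r = merkle_root(msgs)
print(merkle_verify(r, b"c", 2, merkle_open(msgs, 2)))  # True
```

The commitment is binding as long as the hash is collision-resistant; the collapsing hash functions assumed by the paper play the analogous binding role against quantum provers.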
Last updated:  2026-02-23
Round-Optimal Byzantine Agreement without Trusted Setup
Diana Ghinea, Ivana Klasovitá, and Chen-Da Liu-Zhang
Byzantine Agreement is a fundamental primitive in cryptography and distributed computing, and minimizing its round complexity is of paramount importance. The seminal works of Karlin and Yao [Manuscript'84] and Chor, Merritt and Shmoys [JACM'89] showed that any randomized $r$-round protocol must fail with probability at least $(c\cdot r)^{-r}$, for some constant $c$, when the number of corruptions is linear in the number of parties, $t = \Theta(n)$. The work of Ghinea, Goyal and Liu-Zhang [Eurocrypt'22] introduced the first \emph{round-optimal BA} protocol matching this lower bound. However, the protocol requires a trusted setup for unique threshold signatures and random oracles. In this work, we present the first round-optimal BA protocols without trusted setup: a protocol for $t<n/3$ with statistical security, and a protocol for $t<(1-\epsilon)n/2$ with any constant $\epsilon > 0$, assuming a bulletin-board PKI for signatures.
Last updated:  2026-02-23
Issuer-Hiding for BBS Anonymous Credentials via Randomizable Keys
Andrea Flamini, Karla Friedrichs, and Anja Lehmann
Anonymous credentials (AC) equip users with credentials on attested attributes, which enable them to prove verifiable yet data-minimizing statements over their attributes. However, in standard ACs, each credential presentation reveals the credential issuer, which could be more information than intended and necessary, e.g., when merely proving age or personhood. Issuer-Hiding Anonymous Credentials (IHAC) address this limitation and hide the issuer in the presentation. That is, they only reveal that the user has a credential from an issuer within a certain trust set, referred to as the policy. Recent works by Sanders and Traoré, and Katz and Sefranek show how to add issuer-hiding to PS- and BBS-based credentials while keeping presentations compact, i.e., not scaling in the number of issuers. However, both constructions require the verifier to generate dedicated policy key pairs, turning verification into a secret key operation. Managing these verifier-specific keys introduces additional complexity and affects the resulting practical privacy and security guarantees. In this work, we propose two IHAC schemes from BBS signatures that achieve compact presentations without the need of such verifier-specific keys. At the core of our schemes is a technique to randomize BBS public keys and adapt the signatures accordingly, which we believe to be of independent interest.