IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
01 March 2026
Zihan Hao, Zikuan Huang, Qipeng Liu
In this work, we establish the first separation between computation with bounded and unbounded space for problems with short outputs (i.e., the working memory can be exponentially larger than the output size), in both the classical and the quantum setting. Towards this, we introduce a problem called nested collision finding, and show that its optimal query complexity cannot be achieved without exponential memory.
Our result is based on a novel ``two-oracle recording'' technique, where one oracle ``records'' the computation's long outputs under the other oracle, effectively reducing the time-space trade-off for short-output problems to that of long-output problems. We believe this technique will be of independent interest for establishing time-space tradeoffs in other short-output settings.
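The time-space contrast described above can be felt on a toy collision problem. This hedged sketch uses an illustrative hash-derived function of our own (names `f`, `collision_with_table`, `collision_constant_memory` are not from the paper, and this is ordinary collision finding, not the paper's nested variant), contrasting a memory-heavy table attack with Floyd's constant-memory cycle finding:

```python
import hashlib

N_RANGE = 2**16

def f(x: int) -> int:
    # Toy compressing function on [0, N_RANGE): collisions exist by pigeonhole.
    h = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:8], "big") % N_RANGE

def collision_with_table():
    # Memory-heavy baseline: store every output seen; ~sqrt(N_RANGE) expected steps.
    seen = {}
    x = 0
    while True:
        y = f(x)
        if y in seen:
            return seen[y], x
        seen[y] = x
        x += 1

def collision_constant_memory(start: int = 1):
    # Floyd cycle finding on iterates of f: O(1) memory, more evaluations of f.
    tort, hare = f(start), f(f(start))
    while tort != hare:                      # phase 1: detect the cycle
        tort, hare = f(tort), f(f(hare))
    a, b = start, tort                       # phase 2: walk to the cycle entry
    prev = None
    while a != b:
        prev = (a, b)
        a, b = f(a), f(b)
    if prev is None:                         # start was already on the cycle:
        return collision_with_table()        # rare; fall back to the table method
    return prev                              # f(prev[0]) == f(prev[1]), prev[0] != prev[1]
```

Both routines find a colliding pair; the constant-memory one trades extra evaluations of `f` for needing no table, which is exactly the kind of trade-off the paper's lower bound constrains.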
Minh Pham, Khoa Nguyen, Slim Bettaieb, Mukul Kulkarni, Willy Susilo
Attribute-Based Signatures (ABS) enable users to authenticate messages under expressive attribute policies while remaining anonymous. Existing ABS variants, however, treat linkability as a static, system-wide property: signatures are either always unlinkable, as in standard ABS, or globally linkable, as in traceable or accountable extensions. This rigid dichotomy fails to capture scenarios where correlation should arise only under explicitly declared conditions.
This work introduces Conditionally Linkable Attribute-Based Signatures (CLABS), a framework extending ABS with programmable, context-dependent linkability. Each certified user with attribute $x$ is associated with a linking set $L_x$ over a public context space $\mathcal{T}$. For each context $\tau\in\mathcal{T}$, a public function $f_\tau$ specifies how attributes are compared. Two signatures are publicly linkable if and only if $\tau\in L_x\cap L_{x'}$ and $f_\tau(x)=f_\tau(x')$; otherwise they remain unlinkable. This enables selective, verifiable correlation without central trust and with leakage limited to the opt-in bit.
We formalize the syntax and security notions of CLABS, capturing conditional linkability and context-aware anonymity, thereby ensuring privacy and verifiable linkage under voluntary participation. CLABS unifies global unlinkability and fine-grained, context-specific linkage within a single formal framework.
We realize CLABS generically using three modular components: a pseudorandom function for deterministic tag generation, a conventional signature for attribute certification, and a signature of knowledge (SoK) proving correct tag computation and Boolean policy satisfaction without revealing $x$. Finally, we instantiate CLABS under standard lattice assumptions in the quantum random oracle model (QROM), achieving post-quantum security while supporting arbitrary Boolean policies. The techniques we employ to prove circuit satisfiability and tag correctness may be of independent interest.
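The public linking rule above can be sketched as a small predicate. This is a hypothetical toy model of the condition $\tau\in L_x\cap L_{x'}$ and $f_\tau(x)=f_\tau(x')$ only; real CLABS signatures carry PRF-derived tags and a signature of knowledge, which are omitted here:

```python
# Toy model of the CLABS linking rule (names L_x, f_tau follow the abstract;
# everything else is illustrative, not the paper's construction).

def linkable(tau, L_x, L_x2, f_by_context, x, x2) -> bool:
    """Signatures by users with attributes x, x2 link in context tau iff tau lies
    in both users' linking sets and f_tau maps both attributes to the same value."""
    if tau not in L_x or tau not in L_x2:
        return False  # at least one user did not opt in to this context
    f_tau = f_by_context[tau]
    return f_tau(x) == f_tau(x2)

# Example contexts: compare ages by decade, or exactly.
f_by_context = {"age-bracket": lambda age: age // 10, "exact": lambda age: age}
assert linkable("age-bracket", {"age-bracket"}, {"age-bracket", "exact"},
                f_by_context, 34, 38)
# Not linkable: the first user never opted in to the "exact" context.
assert not linkable("exact", {"age-bracket"}, {"exact"}, f_by_context, 34, 34)
```

Note how the second call leaks nothing beyond the opt-in bit: the attributes are equal, yet the predicate is false because one linking set excludes the context.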
Hassan Khodaiemehr, Khadijeh Bagheri, Saeid Yazdinejad, Chen Feng
This paper presents a pioneering approach to constructing sidechains on the Ethereum network with a focus on post-quantum security. Our framework integrates a novel quantum-resistant version of a non-interactive random oracle proof of knowledge (NIROPoK) scheme, alongside a quantum-resistant proof-of-stake mechanism and a post-quantum bridge based on the Dilithium digital signature scheme. By harnessing these advanced cryptographic techniques, we establish a robust defense against quantum computing threats while ensuring enhanced privacy for blockchain transactions. By conducting a thorough analysis and implementing our approach, we demonstrate the feasibility and effectiveness of creating quantum-resistant sidechains within the Ethereum ecosystem. Our proposed sidechain is also capable of securing Ethereum transactions from quantum threats using Ethereum’s current architecture and security measures.
Foteini Baldimtsi, Lucjan Hanzlik, Aayush Yadav
Non-interactive blind signatures (NIBS) capture the minimal setting of blind signatures where the message space is restricted to unstructured random strings. They enable a signer to pre-compute presignatures without prior interaction, while ensuring that only the intended recipient can derive the corresponding blind signature.
In this work, we consider the problem of threshold issuance of NIBS. Specifically, we introduce the notion of non-interactive threshold blind signatures (NITBS), where a user obtains partial presignatures from a threshold of signers and locally combines them into a valid blind signature. We provide a formal treatment of this primitive by defining the corresponding security notions of blindness and one-more unforgeability. We then present the first concrete construction of NITBS, obtained by adapting the Pointcheval-Sanders (PS) signature scheme, and establish its security in the algebraic group model. Our micro-benchmarking results show that our construction attains the smallest presignature and signature sizes and the fastest issuance among all existing NIBS schemes.
Gaspard Anthoine, Dario Fiore, Mahak Pancholi
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) are cryptographic primitives critical to many real-world applications. zkSNARKs are not used in isolation but are deployed within a broader context in which other cryptographic protocols may be concurrently executed. Universal Composability (UC) allows rigorous analysis of cryptographic primitives being used in such arbitrary contexts. A UC analysis is even more desirable for popular, well-audited, and heavily deployed zkSNARKs already being used in practice.
Prior works that study the UC security of existing zkSNARKs (without modifications) are either not modular, hence requiring case-by-case analysis for new proof systems, or have largely focused on zkSNARKs in the Random Oracle Model (ROM). The latter includes zkSNARKs with logarithmic proof sizes compiled from Interactive Oracle Proofs. This state of the art leaves out a large family of very efficient, often constant-size, zkSNARKs that rely on the Algebraic Group Model (AGM) and optionally on the ROM. This includes zkSNARKs compiled from Polynomial Interactive Oracle Proofs, such as Plonk and Marlin, among others.
In this work, we address the UC security for unmodified zkSNARKs that are proven secure in AGM (+ROM). Our approach is modular: we identify simple, and mostly standard properties on the underlying zkSNARK that imply UC security. We observe that checking these properties for existing zkSNARKs is a surprisingly simple task using the rigorous formulation of AGM from Jaeger and Mohan (CRYPTO'24). The simplicity and modularity of our framework makes it easy-to-use for concluding UC security of several zkSNARKs in the same setting. Concretely, using our framework we establish that Plonk and Marlin are UC secure without any overhead.
Madalina Bolboceanu, Jonathan Bootle, Vadim Lyubashevsky, Antonio Merino-Gallardo, Gregor Seiler
The past several years have seen a rapid rise in practical lattice-based proof systems: linear-sized zero-knowledge proofs form the foundation of many of the most efficient quantum-safe privacy protocols, while succinct proofs are rapidly catching up to, and in many metrics surpassing, other quantum-safe alternatives. A recent comparison of aggregate signatures (Ethereum Foundation, 2025), pitting the hash-based aggregate signature scheme Plonky3 against the Falcon-based aggregate signatures of the LaZer lattice library (Lyubashevsky, Seiler, Steuer, CCS 2024) built on LaBRADOR (Beullens, Seiler, Crypto 2023), showed that lattice-based constructions have an advantage in proof size and prover time, but are around an order of magnitude slower in verification time. In general, slower verification appears to be the main obstacle to the adoption of succinct lattice-based proof systems.
In this work, we introduce and implement Orthus, a proof system with sub-linear verification designed for relations that naturally arise in lattice-based constructions. Asymptotically, the verification time grows with the square root of the witness size, and for a concrete example of aggregating Falcon signatures our implementation reduces the verifier running time by a factor of $9\times$ when aggregating $2^{17}$ signatures.
Brieuc Balon, Gianluca Brian, Sebastian Faust, Carmit Hazay, Elena Micheli, François-Xavier Standaert
Post-quantum signature schemes are becoming increasingly important due to the threat of quantum computers to classical cryptographic schemes. Among the approaches considered in the literature, the MPC-in-the-head paradigm introduced by Ishai et al. (STOC'07) provides an innovative solution for constructing zero-knowledge proofs by exploiting Multi-Party Computation (MPC). This technique has proven to be a versatile tool in order to design efficient cryptographic schemes, including post-quantum signatures.
Building on the MPC-in-the-head paradigm, we introduce Bittersweet signatures, a new class of signature schemes based on the Learning With Rounding (LWR) assumption. Their main advantage is conceptual simplicity: by exploiting (almost) key-homomorphic pseudorandom functions (PRFs), a cryptographic object that preserves pseudorandomness while allowing linear operations on keys, we obtain a very regular design offering nice opportunities for parallel implementations.
Theoretically, analyzing Bittersweet signatures requires addressing significant challenges related to the (carry) leakage that almost key-homomorphic operations lead to.
Concretely, Bittersweet signatures natively lead to competitive signature sizes, trading moderate software performance overheads for hardware performance gains when compared to state-of-the-art MPC-in-the-head schemes (e.g., relying on code-based assumptions), while admittedly lagging a bit behind recent algorithms based on the VOLE-in-the-head or Threshold-Computation-in-the-head frameworks.
Moreover, their scalability and algebraic structure make them promising candidates for leakage-resilient implementations. The new abstractions we introduce additionally suggest interesting research directions towards further optimization and generalization.
Jiawei Bao, Jiaxin Pan
X-Wing (Barbosa et al., CiC Volume 1, Issue 1) is a hybrid key encapsulation mechanism (KEM) currently considered for standardization by IETF and deployed by major companies such as Google to ensure a secure transition to post-quantum cryptography. It combines a classical X25519 KEM with the post-quantum ML-KEM-768.
In this paper, we propose the first analysis of the anonymity of X-Wing. We are interested in tight and memory-tight reductions that offer stronger security guarantees. We first establish in the standard model that for any IND-CCA secure KEM, weak anonymity implies full anonymity, and our reduction is tight not only in success probability and time but also in memory consumption. We then prove in the random oracle model that X-Wing achieves weak anonymity if both X25519 and ML-KEM-768 are weakly anonymous. The former can even be proven without a hardness assumption.
Our proof on the weak anonymity of X-Wing does not preserve the memory-tightness of the underlying KEMs. To improve it, we propose a slight variant of X-Wing that preserves memory-tightness. Finally, we improve the existing IND-CCA proof of the original X-Wing by Barbosa et al. using our new memory-tight analysis.
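As background to the abstracts above, the general shape of a hybrid KEM combiner of the kind X-Wing instantiates can be sketched as "concatenate and hash". This is an illustrative sketch only: the function name, label, and input ordering are ours and do not reproduce X-Wing's exact combiner specification.

```python
import hashlib

def combine(ss_pq: bytes, ss_ecdh: bytes, ct_ecdh: bytes, pk_ecdh: bytes,
            label: bytes = b"hybrid-kem-demo") -> bytes:
    # Generic hybrid-KEM combiner sketch: the final shared secret binds the
    # post-quantum secret, the classical secret, and the classical ciphertext
    # and public key, so it stays secure if either component KEM is secure.
    # (X-Wing's actual label and ordering differ; see the specification.)
    return hashlib.sha3_256(ss_pq + ss_ecdh + ct_ecdh + pk_ecdh + label).digest()

k = combine(b"\x00" * 32, b"\x01" * 32, b"ct-bytes", b"pk-bytes")
assert len(k) == 32
```

Binding the classical ciphertext and public key into the hash is what makes such combiners robust; it is also the sort of structural detail that the anonymity and memory-tightness analyses above must track.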
Jay Taylor, Paul Gerhart, Sri AravindaKrishnan Thyagarajan
AI agents and custodial services are increasingly being entrusted as intermediaries to conduct transactions on behalf of institutions. The stakes are high: the digital asset market is projected to exceed \$16 trillion by 2030, where exchanges often involve proprietary, time-sensitive goods. Although industry efforts like Google’s Agent Payments Protocol (AP2) standardize how agents authorize payments, they leave open the core challenge of fair exchange: ensuring that a buyer obtains the asset if and only if the seller is compensated, without exposing sensitive information.
We introduce proxy adaptor signatures (PAS), a new cryptographic primitive that enables fair exchange through delegation without sacrificing atomicity or privacy. A stateless buyer issues a single request and does not need to manage long-term cryptographic secrets while proxies complete the exchange with a seller. The seller is guaranteed payment if the buyer can later reconstruct the purchased witness; meanwhile, the proxies remain oblivious to the witness throughout the protocol. We formalize PAS under a threshold model that tolerates the collusion of up to $t-1$ proxies. We also present an efficient construction from standard primitives that is compatible with Bitcoin, Cardano, and Ethereum. Finally, we evaluate a Rust implementation that supports up to 30 proxies. Our prototype is concretely efficient: buyer and seller computations take place in microseconds, proxy operations in milliseconds, and on-chain costs are equivalent to those of a standard transaction without fair exchange.
28 February 2026
Luca De Feo, Li-Jie Jian, Ting-Yuan Wang, Bo-Yin Yang
We present the first vectorized implementation of SQIsign for high-performance Arm architectures. SQIsign is a promising candidate in Round 2 of the NIST On-Ramp Digital Signatures Call, owing to its highly compact key and signature sizes. However, its signing performance remains a primary bottleneck, particularly the ideal-to-isogeny conversion. The conversion requires a large number of operations on elliptic curves and Abelian varieties, which depend on finite field arithmetic. Despite recent algorithmic improvements, high-performance implementations and efficient vectorized finite field arithmetic for SQIsign remain largely unexplored.
Our main contribution is the first demonstration of non-trivial vectorization speedups for SQIsign. By leveraging the NEON instruction set, we implement highly efficient finite field arithmetic and batched elliptic curve operations tailored for 2-dimensional isogeny chain computations. This accelerates the subroutine by 2.24$\times$ over the state-of-the-art. Moreover, our improvements are completely orthogonal to the recent algorithmic improvement Qlapoti (Asiacrypt 2025), offering similar performance gains in the SQIsign signing algorithm. When combined with Qlapoti, our implementation achieves a speedup of more than 2.24$\times$ in signing at NIST security level I. We expect our work to inspire further SQIsign optimization from a vectorization perspective, especially for quaternion computations with precise bounds.
Simon Langowski, Kaiwen He, Srini Devadas
Modular arithmetic with a large prime modulus is a dominant computational cost in number-theoretic cryptography. Modular operations are especially challenging to parallelize efficiently on CPUs using vector instructions; standard CPU implementations rely on costly carry operations and permutation instructions to align with the multiplication datapath, negating the benefits of vectorization.
We develop vectorized algorithms for modular addition and multiplication, and present a new, constant-time modular multiplication algorithm suitable for general moduli, prime or otherwise. Our method uses a Residue Number System (RNS) representation to align the arithmetic naturally with wide vector units, and strategically eliminates extraneous instructions. Existing works either require customized hardware or fail to show latency improvements.
Reducing the latency of modular arithmetic results in speedups for cryptographic applications. We accelerate RSA-4096 signatures by $4.0\times$ (verify) and $1.3\times$ (sign) over OpenSSL, and speed up BLS signature verifications by $3.43\times$ over the assembly-optimized BLST library. To facilitate broad practical adoption, we plan to upstream our implementation into BoringSSL, where it will directly benefit real-world TLS and cryptographic deployments.
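The core RNS idea the abstract relies on can be sketched in a few lines: represent an integer by its residues modulo pairwise-coprime limbs, so that addition and multiplication become independent per-limb operations, the property that maps well onto wide vector units. The limb choices below are illustrative, and the paper's constant-time reduction tricks are not shown.

```python
from math import prod

# Three pairwise-coprime Mersenne primes as illustrative RNS limbs.
LIMBS = (2**13 - 1, 2**17 - 1, 2**19 - 1)
M = prod(LIMBS)  # dynamic range: values are represented mod M

def to_rns(x):   return tuple(x % m for m in LIMBS)
def add(a, b):   return tuple((ai + bi) % m for ai, bi, m in zip(a, b, LIMBS))
def mul(a, b):   return tuple((ai * bi) % m for ai, bi, m in zip(a, b, LIMBS))

def from_rns(r):
    # Chinese remainder theorem reconstruction.
    x = 0
    for ri, m in zip(r, LIMBS):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return x % M

x, y = 123456789, 987654321
assert from_rns(mul(to_rns(x), to_rns(y))) == (x * y) % M
assert from_rns(add(to_rns(x), to_rns(y))) == (x + y) % M
```

The per-limb independence is the point: each limb's arithmetic can occupy one vector lane with no cross-lane carries, which is exactly what standard multi-precision representations lack.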
26 February 2026
Youssef El Housni
Computing cube roots in quadratic extensions of finite fields is a subroutine that arises in elliptic-curve point decompression, hash-to-curve and isogeny-based protocols. We present a new algorithm that, for $p \equiv 1 \pmod{3}$ (which holds in all settings where $\mathbb{F}_{p^2}$ cube roots arise in practice), reduces the $\mathbb{F}_{p^2}$ cube root to operations entirely in the base field $\mathbb{F}_p$ via the algebraic torus $\mathbb{T}_2(\mathbb{F}_p)$ and Lucas sequences. We prove correctness in all residuosity cases and implement the algorithm using the $\texttt{gnark-crypto}$ open-source library. Benchmarks on six primes spanning pairing-based and isogeny-based cryptography show $1.6$–$2.3\times$ speed-ups over direct (addition chain) exponentiations in $\mathbb{F}_{p^2}$. We also extend the approach to $p \equiv 2 \pmod{3}$ and, more generally, to any odd $n$-th roots in quadratic towers $\mathbb{F}_{p^{2^k}}$ with $\gcd(n, p+1) = 1$.
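As elementary background to the residuosity case analysis, the base-field picture can be sketched as follows. This is a hedged toy sketch of standard facts over $\mathbb{F}_p$ only; the paper's torus and Lucas-sequence machinery for $\mathbb{F}_{p^2}$ is not shown.

```python
def is_cube(x: int, p: int) -> bool:
    # For p ≡ 1 (mod 3): nonzero x is a cube iff x^((p-1)/3) == 1
    # (the cube-residuosity analogue of Euler's criterion).
    assert p % 3 == 1
    return x % p == 0 or pow(x, (p - 1) // 3, p) == 1

def cube_root_p2mod3(x: int, p: int) -> int:
    # For p ≡ 2 (mod 3), cubing is a bijection on F_p and the unique cube
    # root is a single exponentiation: (x^((2p-1)/3))^3 = x^(2p-1) = x.
    assert p % 3 == 2
    return pow(x, (2 * p - 1) // 3, p)

p = 103  # 103 ≡ 1 (mod 3): only a third of nonzero elements are cubes
cubes = {pow(a, 3, p) for a in range(p)}
assert all(is_cube(x, p) == (x in cubes) for x in range(p))

q = 101  # 101 ≡ 2 (mod 3): every element has a unique cube root
assert all(pow(cube_root_p2mod3(x, q), 3, q) == x for x in range(q))
```

The $p \equiv 1 \pmod{3}$ case is the hard one precisely because a simple exponentiation like `cube_root_p2mod3` does not exist there, which is where the paper's algorithm comes in.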
Alessandro Chiesa, Giacomo Fenzi, Guy Weissenberg
Succinct arguments based on interactive oracle proofs (IOPs) have achieved remarkable efficiency improvements and are now widely adopted in applications. State-of-the-art IOPs involve protocols for testing proximity to constrained interleaved linear codes, and enjoy essentially optimal parameters. However, recent IOP constructions provide no privacy guarantees, which remain a must for many applications.
We present an IOP of proximity for testing constrained interleaved linear codes that achieves (honest-verifier) zero-knowledge, while incurring a negligible overhead compared to the (non-zero-knowledge) state of the art. In line with recent constructions, our construction satisfies round-by-round knowledge soundness with a straightline extractor and negligible error.
We propose a definition of (honest-verifier) zero-knowledge for interactive oracle reductions (IORs) that we prove is compatible with composition, and then obtain our result by constructing and modularly composing several lightweight zero-knowledge IORs. Our key technical contributions are a zero-knowledge sumcheck IOR and a zero-knowledge code-switching IOR that fit the strict efficiency requirements of our setting; these contributions and other technical complications entailed overcoming several challenges with new notions and protocols. Finally, along the way, we highlight the efficiency benefits of high-distance codes obtained from dispersers, which may be of independent interest.
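For orientation, the baseline (non-zero-knowledge) sumcheck interaction that a zero-knowledge sumcheck must emulate can be sketched as follows. The field, example polynomial, and degree bound are illustrative choices of ours, and prover and verifier are collapsed into one function for brevity.

```python
import random
from itertools import product

P = 2**31 - 1  # prime field modulus (illustrative)
D = 2          # per-variable degree bound of f
N = 3          # number of variables

def f(xs):  # example polynomial: f(x1,x2,x3) = x1*x2 + 2*x3^2 + 5
    return (xs[0] * xs[1] + 2 * xs[2] * xs[2] + 5) % P

TRUE_SUM = sum(f(xs) for xs in product((0, 1), repeat=N)) % P

def interp_eval(ys, r):
    # Evaluate at r the degree-<=D polynomial taking values ys at 0..D (Lagrange).
    total = 0
    for i, yi in enumerate(ys):
        num = den = 1
        for j in range(D + 1):
            if j != i:
                num = num * (r - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def sumcheck(claim):
    fixed, current = [], claim
    for rnd in range(N):
        # Prover: send g(t) = sum of f(fixed, t, *) over the remaining boolean
        # variables, represented by its evaluations at t = 0..D.
        g = [sum(f(fixed + [t] + list(rest))
                 for rest in product((0, 1), repeat=N - rnd - 1)) % P
             for t in range(D + 1)]
        # Verifier: consistency check, then a random challenge.
        if (g[0] + g[1]) % P != current:
            return False
        r = random.randrange(P)
        current = interp_eval(g, r)
        fixed.append(r)
    return current == f(fixed) % P  # final oracle query to f

assert sumcheck(TRUE_SUM)
assert not sumcheck((TRUE_SUM + 1) % P)
```

In a zero-knowledge variant, the round polynomials `g` would leak evaluations of `f`, so they must be masked; designing such masking at IOR-level cost is the technical heart of the work above.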
Rishab Goyal, Aditya Jain, Shashwatha Mitra GB
We study the problem of minimizing round complexity in the context of succinct classical argument systems for quantum computation. All prior works either require at least 8 rounds of interaction between the quantum prover and classical verifier, or rely on the idealized quantum random oracle model (QROM). We design:
1. A 4-round public-coin (except for the first message) argument system for batchQMA languages. Our results come in two flavors:
(a) Under the post-quantum hardness of functional encryption and learning with errors (LWE), we achieve optimal communication complexity (i.e., all message sizes are independent of batch size).
(b) Under the post-quantum hardness of LWE, we achieve optimal communication complexity except for the verifier’s first message.
2. A 6-round private-coin argument system for monotone policy batchQMA languages, under the post-quantum hardness of LWE. The communication complexity is independent of the batch size as well as the monotone circuit size.
One of our main technical contributions is a new approach to prove soundness without rewinding cheating provers. We bring the notion of straight-line partial extractability to argument systems for quantum computation. All previous works crucially relied on rewinding cheating provers, and thus needed “state-preserving” succinct arguments of knowledge (AoKs) for NP to prove soundness.
Shailesh Mishra, Martin Burkhart
Anonymous Credentials (or ACs) enable users to prove claims with strong privacy guarantees, protecting credential holders from being tracked by issuers and verifiers. However, these privacy guarantees imply that a credential holder cannot be held accountable for misuse (e.g., selling credential checks online for proving age > 18). The lack of accountability may raise questions about the adoption of ACs into national identity systems (e.g., European EUDI or Swiss e-ID), which might lead to the issuing authorities resorting to credential systems with weaker privacy guarantees (e.g., batch issuance of one-show credentials). This shows that the lack of accountability can adversely impact the levels of privacy enjoyed by users.
Hence, in this paper, we discuss transferability attacks on ACs and introduce a framework for providing accountability in AC systems. In addition to issuers, holders and verifiers, it assumes the existence of: (i) a law enforcement body (the police) and a judicial body (the judge) that work together to find information on credential misuse; and (ii) one or more digital privacy advocates, called the NGO(s), that ensure the system is not used for tracking people.
We introduce the cryptographic forensic trail (CFT), which is attached to each credential show. The CFT can be used for obtaining more information about an individual if and only if the police have probable cause and can convince the judge to issue a corresponding search warrant. Then, the police, the judge, and the NGO(s) run a multiparty protocol for decrypting relevant trails only. The protocol mimics the checks and balances of a healthy democracy, in which neither law enforcement nor justice can track people at will. Even if both branches colluded, the NGO(s) can detect the misuse and block further use.
In addition to possible extensions, we discuss performance constraints on mobile phones and argue that practical feasibility of the CFT is within reach.
Zheng Chen, Qiuxia Xu, Chunming Tang
Determining whether an arbitrary access structure can be realized by an ideal linear secret sharing scheme is an important research topic. We use linear codes as the main tool to construct matrices $H$ and $G$ over a finite field $\mathbb{F}_q$ for a given access structure $\Gamma_{\min}$, and show that a necessary and sufficient condition for the existence of an ideal linear secret sharing scheme realizing $\Gamma_{\min}$ is that the equation $GH^{\mathsf{T}}=0$ has a solution. If this equation has a solution, then $H$ serves as the parity-check matrix of a linear code that realizes $\Gamma_{\min}$, and $G$ is the corresponding generator matrix. Furthermore, we prove that this result is equivalent to the following statement: an ideal linear code realizing $\Gamma_{\min}$ exists if and only if $\Gamma_{\min}$ is the port of a matroid that is representable over $\mathbb{F}_q$.
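The matrix condition can be checked mechanically. The following toy sketch (matrices chosen for illustration, not derived from any specific $\Gamma_{\min}$) verifies $GH^{\mathsf{T}}=0$ over $\mathbb{F}_7$ for a generator matrix $G=[I\,|\,P]$ and parity-check matrix $H=[-P^{\mathsf{T}}\,|\,I]$, which says exactly that the rows of $G$ are codewords of the code checked by $H$:

```python
q = 7  # illustrative field size

def matmul_mod(A, B, q):
    # Plain matrix product of A and B with entries reduced mod q.
    return [[sum(a * b for a, b in zip(row, col)) % q
             for col in zip(*B)] for row in A]

# A small [4,2] code over F_7: G = [I | P], H = [-P^T | I].
P = [[1, 1], [1, 2]]
G = [[1, 0] + P[0], [0, 1] + P[1]]
H = [[(-P[0][0]) % q, (-P[1][0]) % q, 1, 0],
     [(-P[0][1]) % q, (-P[1][1]) % q, 0, 1]]

Ht = [list(col) for col in zip(*H)]          # transpose of H
GHt = matmul_mod(G, Ht, q)
assert all(v == 0 for row in GHt for v in row)  # G H^T = 0 over F_7
```

The paper's point is the converse direction: given $\Gamma_{\min}$, one constructs candidate $G$ and $H$ and asks whether this equation is solvable at all.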
Sopan Chavhan, Shrikant Chaudhari
The recent digital signature scheme proposed by Grigoriev, Monico, and Shpilrain (GMS26) attempts to leverage the NP-hardness of tropical matrix factorization as its security foundation. We present a systematic cryptanalysis revealing multiple structural vulnerabilities. First, we demonstrate an existential forgery attack in the chosen-hash model requiring only polynomial-time operations. Second, we establish that the scheme is fundamentally malleable, allowing an adversary to generate arbitrarily many distinct valid signatures for any observed message—thereby breaking strong unforgeability. Third, we show that collecting a polynomial number of honest signatures enables partial key recovery through probabilistic leakage of private key entries. Finally, we present an SMT-based key recovery attack that, given only two valid signatures, recovers the full private key in seconds for the recommended parameters. Our attacks do not rely on solving the claimed NP-hard problem but exploit algebraic properties of the signing equations. Combined, they demonstrate that the scheme fails to achieve the standard security notions of existential unforgeability, strong unforgeability, and key confidentiality.
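For readers unfamiliar with the underlying algebra, tropical (min-plus) matrix multiplication, over which GMS26 poses its factorization problem, replaces addition with `min` and multiplication with `+`. A minimal toy routine (our illustration, not code from either paper):

```python
def tropical_matmul(A, B):
    # Min-plus matrix product: (A ⊗ B)[i][j] = min_t (A[i][t] + B[t][j]).
    n, k, m = len(A), len(B), len(B[0])
    return [[min(A[i][t] + B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A = [[0, 3], [2, 1]]
B = [[1, 5], [0, 2]]
# entry (0,0) = min(0+1, 3+0) = 1; entry (1,1) = min(2+5, 1+2) = 3
assert tropical_matmul(A, B) == [[1, 5], [1, 3]]
```

The semiring has no subtraction and `min` is idempotent; such structural features are the kind of algebraic slack that the attacks above exploit without ever solving the NP-hard factorization problem.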
Claude Carlet, Darrion Thornburgh
Quadratic Boolean functions (that is, Boolean functions of algebraic degree at most 2), bent Boolean functions (i.e. maximally nonlinear Boolean functions in even numbers of variables) and, as we prove in this paper, partially-bent Boolean functions (i.e. affine extensions of bent functions to linear super-spaces), share a strong property: all their restrictions to affine hyperplanes are plateaued (i.e. have a Walsh transform valued in a set of the form $\{0,\pm \lambda\}$, where $\lambda$ is a positive integer called the amplitude). In this paper we determine this for any $n$ and $k$.
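The Walsh-transform notions above can be made concrete with a small computation. This sketch (function choice ours) computes the Walsh spectrum $W_f(a)=\sum_x (-1)^{f(x)+a\cdot x}$ and verifies that the classic bent function $f(x_1,\dots,x_4)=x_1x_2+x_3x_4$ is plateaued with amplitude $2^{n/2}=4$:

```python
from itertools import product

def walsh_spectrum(f, n):
    # W_f(a) = sum over x in {0,1}^n of (-1)^(f(x) XOR (a . x mod 2)).
    pts = list(product((0, 1), repeat=n))
    return [sum((-1) ** (f(x) ^ (sum(ai * xi for ai, xi in zip(a, x)) & 1))
                for x in pts) for a in pts]

def is_plateaued(spec):
    # Plateaued: all Walsh values lie in {0, +lam, -lam} for one amplitude lam.
    amps = {abs(w) for w in spec} - {0}
    return len(amps) <= 1

bent = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])  # degree-2 bent function, n = 4
spec = walsh_spectrum(bent, 4)
assert is_plateaued(spec) and {abs(w) for w in spec} == {4}
```

For a bent function the spectrum is flat (every value is $\pm 2^{n/2}$); the paper concerns what happens to such spectra under restriction to affine hyperplanes.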
Ruibang Liu, Minyu Chen, Dengji Ma, Guoqiang Li
Deep learning's hunger for high-quality data has catalyzed a burgeoning economy of decentralized data marketplaces. However, a fundamental trust deficit stifles this ecosystem: buyers fear data poisoning, while sellers fear data leakage. Although the Shapley value offers a rigorous economic framework for fair compensation, its calculation traditionally requires a Trusted Third Party (TTP) to access raw data, creating a single point of failure for privacy. Verifying data valuation without compromising confidentiality remains an open challenge.
In this paper, we present ZK-DV, the first Zero-Knowledge Proof (ZKP) system designed for verifiable, privacy-preserving data valuation. ZK-DV enables a seller to prove that a claimed valuation score (based on Gradient Shapley) is mathematically consistent with the underlying private data and the buyer's model, without revealing either. Our key technical insight is the architectural coupling of model training and valuation: we construct a specialized arithmetic circuit that combines the valuation logic into the backpropagation, extracting marginal utility scores from intermediate gradients. This design, implemented via the GKR protocol with a hybrid commitment strategy, amortizes the heavy cryptographic overhead through batched processing. Extensive experiments on the MNIST dataset demonstrate that ZK-DV is practical: by optimizing batch sizes, we achieve a $2.7\times$ speedup in proof generation, while maintaining a quantization fidelity of $\rho=1.0$ and a low verification cost ($< 0.2$s). ZK-DV thus bridges the gap between cryptographic integrity and economic fairness, paving the way for trustless data exchange.
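The Shapley machinery that ZK-DV proves statements about can be sketched outside any circuit. This is a hypothetical Monte Carlo sketch with a toy additive utility, not the paper's Gradient Shapley: the Shapley value $\phi_i$ averages player $i$'s marginal contribution to the utility $v$ over random orderings.

```python
import random

def monte_carlo_shapley(players, v, samples=2000, seed=0):
    # phi_i = average over random orderings of v(S u {i}) - v(S),
    # where S is the set of players preceding i in the ordering.
    rng = random.Random(seed)
    phi = {i: 0.0 for i in players}
    for _ in range(samples):
        order = list(players)
        rng.shuffle(order)
        coalition, prev = set(), v(set())
        for i in order:
            coalition.add(i)
            cur = v(coalition)
            phi[i] += cur - prev
            prev = cur
    return {i: s / samples for i, s in phi.items()}

# Additive toy utility: the exact Shapley value of each seller equals its weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
phi = monte_carlo_shapley(weights, lambda S: sum(weights[i] for i in S))
assert all(abs(phi[i] - weights[i]) < 1e-9 for i in weights)
```

ZK-DV's contribution is proving, in zero knowledge, that such scores were computed consistently with private data and gradients; the combinatorics above is the public part of the story.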
Henry Corrigan-Gibbs, Alexandra Henzinger, David J. Wu
This paper introduces the structured generic-group model, an extension of Shoup’s generic-group model (from Eurocrypt 1997) to capture algorithms that take advantage of some non-generic structure of the group. We show that any discrete-log algorithm in a group of prime order $q$ that exploits the structure of at most a $\delta$ fraction of group elements, in a way that we precisely define, must run in time $\Omega(\min(\sqrt{q},1/\delta))$. As an application, we prove a tight subexponential-time lower bound against discrete-log algorithms that exploit the multiplicative structure of smooth integers, but that are otherwise generic. This lower bound applies to a broad class of index-calculus algorithms. We prove similar lower bounds against algorithms that exploit the structure of small integers, smooth polynomials, and elliptic-curve points.