DIGITAL ACCESS TO
Scholarship at Harvard
WHAT IS DASH?
DASH is the central, open-access institutional repository of research by members of the Harvard community. Harvard Library Open Scholarship and Research Data Services (OSRDS) operates DASH to provide the broadest possible access to Harvard's scholarship. This repository hosts a wide range of Harvard-affiliated scholarly works, including pre- and post-refereed journal articles, conference proceedings, theses and dissertations, working papers, and reports.
Recent Submissions
Managed Expectations Theory: Ex Ante Likelihoods Influence Ex Post Utilities
Daniel Kahneman, often in collaboration with Amos Tversky, developed foundational frameworks for understanding human decision-making. Building on that tradition, this article proposes that individuals’ ex ante assessments of the likelihood of good and bad outcomes serve as reference points that shape the ex post utility of lottery outcomes. In prospect theory, prior holdings act as reference points for evaluating outcomes, but probabilities themselves play no such role. This article introduces Managed Expectations Theory, which rests on two core hypotheses:
1. Reference Point Hypothesis: Ex ante probabilities serve as reference points for ex post utility. Specifically, more pessimistic expectations about uncertain outcomes enhance ex post utility, regardless of whether the outcome turns out to be good or bad. Four experiments, using a nationally representative adult sample of over 1,000 participants, strongly support this hypothesis. Lower [higher] ex ante likelihoods are associated with greater ex post utility for good [bad] outcomes.
2. Created Likelihoods Hypothesis: Recognizing that likelihood assessments act as reference points, individuals deliberately manage these assessments to be more pessimistic than an objective or statistical “outside view” in order to boost their ex post utilities. Supporting this conjecture, the good outcomes participants labeled as “likely” occurred far more often than the bad outcomes they labeled as “likely.” These created likelihoods help explain a central feature of prospect theory’s probability weighting function: the underestimation of high-probability good events and the overestimation of low-probability bad events.
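One purely illustrative way to read the Reference Point Hypothesis, in a reference-dependent utility style (this functional form is not given in the abstract and is only a hypothetical sketch), is to let the ex ante probability p assigned to the good outcome set the reference level:

    \[
    V(x; p) = u(x) + \gamma\,\bigl(u(x) - R(p)\bigr),
    \qquad R(p) = p\,u(g) + (1 - p)\,u(b), \qquad \gamma > 0,
    \]

where u is consumption utility, g and b are the good and bad outcomes, and the second term compares the realized outcome against the expectation formed ex ante. Under this sketch, lowering p (more pessimism) lowers R(p) and therefore raises V for either realized outcome, matching the direction of the effect the abstract reports.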
Stigma and the Social Safety Net
Stigma features prominently in debates about the social safety net, but empirically disentangling its role has left open many questions about whether it is a meaningful—or movable—barrier to take-up. Through four nationally representative studies (N = 11,164) and a new four-dimensional validated scale, we quantify the role that stigma plays in shaping take-up (1) directly, by impacting beneficiary behavior, and (2) indirectly, by influencing program design. We find that a one standard deviation (SD) increase in stigma is associated with a 9-19 percentage point decrease in willingness to apply for benefits among low-income respondents. It also predicts a 0.08-0.40 SD increase in society’s preferences for policies and program design features that could reduce program access. In both cases, we show that stigma explains more of the variation in policy preferences than any individual respondent characteristic, including political ideology. Notably, program design causally impacts stigma in competing ways: more expansive eligibility criteria reduce stigma, while implementation designs that would simplify access increase stigma. Together, these findings suggest that stigma should be considered both an individual and structural barrier to participation in the social safety net, where it both shapes and is shaped by policy design choices.
Algorithms and Architectures for Quantum Simulation with Neutral-atom Arrays
Fast and scalable quantum simulations promise to revolutionize our understanding of complex quantum systems. In this thesis, we aim to develop algorithms and architectures for leveraging programmable quantum devices to model complex physical phenomena. Because quantum computers natively capture superposition and entanglement, two key attributes that are challenging for classical computers to model accurately, this approach promises significant benefits in the long run. We focus primarily on neutral-atom arrays, an emerging experimental platform, although many of our results apply more generally. The work is structured into three phases, each progressively advancing the complexity and control of the experimental hardware under consideration and, in parallel, the importance and applicability of the quantum simulations themselves.
In the first phase (Chapters 1–3), we address the challenge of programming and controlling quantum many-body systems through analog techniques. Analog quantum simulation utilizes continuous control parameters to engineer desired quantum states and dynamics. Chapter 1 introduces novel methods for steering entanglement using quantum many-body scars, harnessing special strongly-interacting dynamics to manipulate quantum entanglement robustly. Chapter 2 advances this approach by showing how Floquet engineering can be used to systematically generate interactions, and how this enables sophisticated control over entanglement and access to novel quantum phases. Chapter 3 integrates these developments into a general framework for programming Hamiltonians into analog quantum simulators with time-reversal capabilities, illustrating the power of programmable analog strategies for simulations of lattice gauge theories.
The second phase (Chapters 4–6) shifts focus to topologically ordered quantum states, known for their exotic long-range entanglement and fundamental significance in condensed matter physics and quantum computation. Chapter 4 presents strategies to enhance the experimental detection and verification of topological order using ideas from the renormalization group. The procedure we develop, order parameters dressed by local quantum error correction, significantly improves the practical observability of these delicate quantum phases. In Chapters 5 and 6, we explore novel techniques for realizing topological phases by exploiting newly developed experimental capabilities, notably atom reconfiguration, to achieve precise digital control. In particular, we show how to engineer chiral Floquet spin liquids, exotic quantum phases exhibiting robust quantum coherence and non-Abelian excitations, as well as simulations of topological fermionic matter. These advancements not only illuminate foundational physics but also bridge towards robust quantum error correction schemes.
The final phase (Chapters 7 and 8) expands the techniques developed thus far towards the simulation of increasingly complex physical systems relevant to chemistry and materials science. Chapter 7 discusses a general framework for digital quantum simulation of effective spin models, prevalent in condensed matter physics, introducing crucial techniques for engineering and characterizing these Hamiltonians. Chapter 8 extends these insights by proving that fermionic quantum systems, essential for realistic simulations of electronic structures in molecules and materials, can be efficiently encoded into qubits. This advance significantly reduces computational complexity and opens pathways for genuine quantum simulations of chemical systems.
Collectively, these contributions represent substantial progress towards practical quantum simulation with neutral-atom quantum processors, laying critical foundations for future applications in physics, chemistry, and quantum information science.
Scheduling Algorithms for Low-Precision Accumulation on Energy-Efficient Deep Neural Network Hardware
In just a few years, deep neural networks (DNNs) have advanced from image classification and machine translation to large language models (LLMs) capable of processing and generating hundreds of thousands of tokens. Like their predecessors based on convolutional and transformer architectures, modern LLMs rely on extensive matrix multiplications interleaved with non-linear operations. Unlike earlier models, however, LLM dot products routinely exceed 30,000 elements, imposing severe costs in energy and latency—particularly in edge deployments where resources are limited. Enabling efficient inference in such settings demands both novel hardware architectures and new algorithmic strategies for computation scheduling.
At the core of these workloads lies accumulation: the repeated addition of products within large dot products. Narrowing the bitwidth of accumulators can dramatically reduce hardware complexity, energy consumption, and latency. Yet overly narrow accumulators increase the risk of numerical overflow, degrading model accuracy. Existing solutions mitigate this by retraining models with modified weights, a process that is costly for large models and often infeasible due to data access constraints.
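As a back-of-the-envelope illustration (a standard worst-case bound, not a figure taken from the thesis), guaranteeing overflow-free accumulation of N two's-complement partial products of b bits each requires roughly

    \[
    b_{\mathrm{acc}} \approx b + \lceil \log_2 N \rceil
    \]

accumulator bits. For a dot product of 30,000 int8-by-int8 products (16-bit results), that is about 16 + 15 = 31 bits, which is the kind of wide, costly accumulator that narrow-accumulation scheduling tries to avoid paying for on every addition.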
This thesis introduces algorithmic and architectural techniques that enable low-bitwidth accumulation without retraining or altering the model weights. We first present Alternating Greedy Schedules (AGS), which partition the summation into positive and negative subarrays and order their accumulation to avoid overflow. We then propose Markov Greedy Sums (MGS), which leverage the statistical structure of DNN partial products to maximize use of a narrow accumulator and resort to a wide accumulator only when necessary. Finally, we develop Alternating Balancing Sums (ABS), a buffer-based reordering scheme that provably maintains narrow accumulation for most operations, deferring wide accumulation to a short final phase. Together, these techniques establish a family of scheduling strategies that systematically balance precision, energy, and latency.
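To make the flavor of such a schedule concrete, here is a minimal Python sketch of an alternating greedy ordering, assuming only the high-level description above (partition the signed partial products by sign and alternate so the running sum stays near zero). The function name, the overflow handling, and all other details are illustrative and are not the thesis's AGS implementation:

    def alternating_greedy_order(products, acc_bits=16):
        """Illustrative only: reorder signed partial products so the running
        sum tends to stay inside a narrow two's-complement accumulator.
        Hypothetical sketch of the alternating-greedy idea, not the thesis's AGS."""
        lo, hi = -(1 << (acc_bits - 1)), (1 << (acc_bits - 1)) - 1
        pos = sorted(p for p in products if p >= 0)                  # ascending
        neg = sorted((p for p in products if p < 0), reverse=True)   # descending
        running, order = 0, []
        while pos or neg:
            # Draw from the sign that pulls the running sum back toward zero;
            # once one sign is exhausted, the remaining terms drift toward the
            # true total, which is where a wider accumulator would take over.
            if (running >= 0 and neg) or not pos:
                term = neg.pop()   # most negative remaining
            else:
                term = pos.pop()   # largest positive remaining
            running += term
            if not lo <= running <= hi:
                raise OverflowError("narrow accumulator range exceeded")
            order.append(term)
        return order, running

In this sketch, while both signs remain the running sum stays within roughly one product's magnitude of zero, so the narrow accumulator suffices; only the tail after one sign runs out can drift, which is the intuition behind deferring any wide accumulation to a short final phase.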
We evaluate our designs across diverse DNN inference tasks, showing that they substantially reduce accumulator bitwidth requirements and energy consumption while preserving accuracy—all without retraining. These results demonstrate a path toward scalable, energy-efficient hardware for next-generation DNNs and LLMs, enabling their deployment across a broader range of platforms from cloud servers to edge devices.
Veiled Visibility: Spatial Memory and Queer Identity in Shinjuku Ni-Chōme - Understanding Ni-Chōme as Urban Queer Sanctuary and Its Role in Fostering a Spatial Identity and Cultural Enclave
Shinjuku Ni-Chōme, nestled in Tokyo’s dense urban heart, is more than nightlife—it’s memory, movement, and meaning for Japan’s LGBTQ+ communities. This thesis traces Ni-Chōme as a “Queersphere” shaped by both historical refuge and contemporary tension. Amid Japan’s cultural norms of discretion, queer identity fragmentation, and a shifting global discourse on “gayborhoods,” Ni-Chōme emerges as a paradox: visible yet veiled, open yet exclusive. The study asks how queer identity, spatial memory, and urban transformation converge in this district—and for whom. Through ethnographic fieldwork, interviews, spatial analysis, and archival research, it uncovers how Ni-Chōme navigates pressures of commercialization, political shifts, and internal exclusions, particularly toward non-cisgender male identities. Findings show a space constantly reinventing itself—vulnerable, resilient, and deeply community-driven. By bridging queer theory and urban planning, this thesis offers not only a portrait of this sanctuary but a vision for inclusive, intergenerational, and culturally rooted queer spaces in Japan’s urban future.