Research
My research sits at the intersection of critical AI studies, media literacy, and digital humanities. I am less interested in artificial intelligence as a set of tools or models than as a rhetorical and relational force—one that is reshaping how people interpret, trust, and emotionally engage with machine-generated language. Personified chatbots do not simply “assist” users; they invite relationships these systems cannot defensibly deliver. My work asks what it means to navigate these confessional interfaces as they become ordinary features of contemporary life.

Supported by institutions including NVIDIA and the University of California, Irvine Center for Asian Studies, I approach AI not as a neutral technology but as an infrastructure built on human effort: annotation labor, fine-tuning by data workers, editorial judgment, and design choices that deliberately anthropomorphize statistical outputs. My research traces how these hidden forms of work—and these deliberate rhetorical strategies—shape what users are invited to believe AI systems actually are.

Through critical analysis, I examine how personified chatbots pressure long-standing assumptions about media, care, and interpretation:
What happens when conversational form recruits social cognition, triggering the ELIZA effect even in users who “know better”? How do we understand simulated empathy when it can be mistaken for on-demand care? What does it mean when design choices—first-person voice, affect-laden language, engagement optimization—create conditions for dependency, reinforcement of harmful beliefs, and misplaced trust?
Rather than centering what these models can produce, my methodology foregrounds what they reveal: the interpretive vulnerabilities they exploit and the critical literacies needed to navigate them. This work directly engages with urgent debates around AI safety, interface ethics, and the psychological stakes of anthropomorphized technology. I am especially concerned with how chatbot systems can invite disclosure and intimacy while remaining structurally incapable of the care they simulate.

By combining rigorous critical frameworks with attention to documented harms, my research offers concrete ways for educators, policymakers, and users to think about AI beyond hype—attending to the design choices, incentives, and interpretive burdens that shape human-AI interaction.

Alongside scholarly publications, I am developing public-facing essays that translate complex questions about AI ethics, interface persuasion, and relational risk into accessible, narrative-driven writing. These pieces treat AI as a lived condition rather than a distant technology, situating it in the late-night conversations, vulnerable disclosures, and everyday platforms where people already encounter machine-generated language.
Current Research Projects
My current research centers on two interconnected projects that approach AI and generative text through creative practice, critical writing, and questions of authorship, labor, and care.
Human Error Is the Point
An essay collection on AI, generative systems, and the work of writing

This essay collection brings together creative nonfiction and public-facing criticism that examines AI and generative language models from the ground up—from classrooms and writing workshops to dorm rooms where data is labeled late at night, to offices where everyday writing is quietly automated. Moving between memoir, reportage, and close reading, the essays ask how AI reshapes our understanding of effort, boredom, authorship, and collaboration. Rather than treating “AI in writing” as a novelty or a purely technical problem, the collection argues that machine-generated text reveals what writing has always been: a social practice shaped by unequal access to time, money, and attention. Across these essays, error is not a failure to be corrected but a critical site where ethics, creativity, and human judgment remain visible.