About AutoFiction
AutoFiction lets the community read and review full-length novels generated by frontier AI agents.
AutoFiction is a research platform created by researchers at the University of Maryland, College Park to showcase how well AI agents perform on long-horizon creative tasks, especially writing full-length novels.
At its core, AutoFiction uses a model-agnostic generation workflow to produce fiction from both original story premises and premises inspired by publicly available summaries of award-winning books. Readers can engage in many ways, including flagging writing issues and discussing scenes with other users. Users can also contribute directly by submitting their own books and story premises.
The Institutional Review Board (IRB) at UMD has determined this project to be exempt from federal requirements for human subjects research.
Disclaimer: You are viewing the beta version of our website, which may have bugs or incomplete features. Your patience and feedback are greatly appreciated as we continue to improve the site.
Why we are building this
AI-generated books are already appearing in mainstream channels (The Times; South China Morning Post), self-publishing platforms (WIRED; The Guardian), and AI-native storefronts such as Lost Books. This growth is being driven by more capable LLMs and ongoing research on long-form writing (Agents' Room; Next chapter prediction; LongWriter), along with consumer tools built specifically for fiction generation such as BookAutoAI, NovelAI, and Sudowrite.
At the same time, many AI-generated books are presented to readers as if they were human-written (New York Times). That undermines reader trust and makes it difficult to study how people actually respond to AI-written fiction when authorship is disclosed up front. AutoFiction is designed to make that disclosure explicit. On our platform, readers know that an AI wrote the book, which lets us study engagement, feedback, and reading behavior under transparent conditions rather than through deception.
We are not building AutoFiction to make money by tricking readers into consuming undisclosed AI content. Our goal is the opposite: to create a transparent research platform for evaluating long-form creative performance. Existing benchmarks and leaderboards mostly measure general model preference or lower-level writing abilities; they do not directly evaluate performance at full novel length (Chatbot Arena; EQ-Bench; WritingBench). We argue that short-form evaluations are not a sufficient proxy for long-form story generation: writing a book requires sustained planning, coherence across many chapters, iterative revision, and consistent control over characters, settings, plot, and style, and many of these capabilities are difficult to evaluate in short outputs (Li, 2026). We aim to identify the issues that appear specifically in long-form narrative generation and to measure how readers engage with this work when its origins are clearly disclosed.
How books and premises are selected
We source premises in two ways: by using high-level, publicly available summaries of award-winning books and by soliciting original ideas from the community. Because we want to test a wide range of genres and premise quality levels, we do not constrain premise types. The most highly upvoted premises are added to the generation queue, though we may skip submissions that are duplicates or difficult to moderate safely.
We also review community-uploaded books before releasing them on the platform. We check for duplicates, run the content through the OpenAI moderation API to flag unsafe material, and use the Pangram API to verify that the text is more than 90% AI-generated. Only books that pass all three checks are released, which ensures that every book on the platform is unique, safe, and predominantly AI-written.
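For concreteness, the sketch below illustrates that three-step intake check in Python. It is a simplified illustration, not our production code: the duplicate check is reduced to an exact content-hash lookup, and pangram_ai_fraction is a hypothetical placeholder rather than Pangram's real client interface; only the OpenAI moderation call uses the actual openai SDK.

    import hashlib
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    released_hashes: set[str] = set()  # hashes of already-released books

    def pangram_ai_fraction(text: str) -> float:
        """Hypothetical wrapper for Pangram's AI-detection API; returns
        the estimated fraction of the text that is AI-generated."""
        raise NotImplementedError("replace with a real Pangram API call")

    def passes_intake(text: str) -> bool:
        # 1. Duplicate check (simplified to an exact content hash; a
        #    production check would likely use fuzzier similarity).
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in released_hashes:
            return False
        # 2. Safety check via the OpenAI moderation API. A full novel
        #    would need to be chunked; shown here as a single call.
        result = client.moderations.create(
            model="omni-moderation-latest", input=text
        ).results[0]
        if result.flagged:
            return False
        # 3. Authorship check: require more than 90% AI-generated text.
        if pangram_ai_fraction(text) <= 0.90:
            return False
        released_hashes.add(digest)
        return True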
We use award-winning book premises because they provide a useful point of control in our evaluations. If a premise has already led to a successful human-written novel, we can compare that human outcome with an AI-generated one while keeping some high-level narrative ingredients similar, such as topic, genre, or central tension. This helps us study where AI systems succeed or fail in long-form writing, and it also gives us a setting for testing whether LLM judges and other evaluators can reliably assess book-length quality.
We take a model-agnostic approach to turning premises into books. In practice, our workflow uses Claude Code, Codex, or both. Rather than asking one model to write an entire book in a single pass, the workflow breaks generation into stages: premise, outline, parallel chapter drafting, review, and revision. Staging helps not only with speed but also with catching common long-form failures such as weak causality, repetitive scenes, flat dialogue, continuity drift, and overly safe prose. Our workflow is open source, linked on each book detail page, and open to contributions.
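As a rough illustration, the Python sketch below mirrors that staging. The helper functions (draft_outline, draft_chapter, review_book, revise_chapter) are hypothetical stand-ins for Claude Code or Codex calls; the actual open-source workflow linked on each book page may be structured differently.

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical LLM-backed helpers; in the real workflow these would
    # be prompts to Claude Code or Codex. Stubbed here so only the
    # pipeline structure is shown.
    def draft_outline(premise: str) -> list[str]:
        raise NotImplementedError  # premise -> one beat sheet per chapter

    def draft_chapter(premise: str, outline: list[str], beats: str) -> str:
        raise NotImplementedError  # write one chapter against the plan

    def review_book(chapters: list[str]) -> dict[int, str]:
        raise NotImplementedError  # whole-book pass -> notes per chapter

    def revise_chapter(chapter: str, notes: str) -> str:
        raise NotImplementedError  # apply review notes to one chapter

    def generate_book(premise: str) -> list[str]:
        outline = draft_outline(premise)
        # Draft chapters in parallel: each draft sees the full outline,
        # not just earlier chapters, which keeps the plot plan consistent.
        with ThreadPoolExecutor() as pool:
            chapters = list(pool.map(
                lambda beats: draft_chapter(premise, outline, beats),
                outline,
            ))
        # Review the whole book for long-form failures (continuity drift,
        # repetitive scenes, flat dialogue), then revise flagged chapters.
        for i, notes in review_book(chapters).items():
            chapters[i] = revise_chapter(chapters[i], notes)
        return chapters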
Content rights and removal
You should only upload or submit content that you own or are authorized to use. By posting content on AutoFiction, you represent that you have the rights needed for platform use, display, moderation, and related research or product operations.
If we receive a credible report that content may infringe copyright, trademark, privacy, or other rights, we may restrict or remove that content while we review the report. Repeat violations may result in account suspension or removal.
If you are a rights holder and need to request removal, contact support@autofiction.ai with enough detail for us to identify and evaluate the material.
Data collection and privacy
We collect data about how users interact with books and the platform, including comments, reviews, reading progress, time spent reading, and clicks. We may also collect information about story premises and generated books. This data may be used to evaluate book quality, detect errors and misuse, identify recurring failure modes, enforce platform policies, and improve the platform, including generation systems, review tools, and safety controls.
Where possible, we use de-identified or aggregated data for research, analysis, and reporting. We may also use platform data in research papers, but we will not intentionally publish personal identifiers (names, email addresses, etc.) without a separate legal basis or your consent where required by law.
Some activity may be visible to other signed-in users, depending on the feature and your visibility settings. This may include reading progress, comments, reviews, and uploaded books. You are responsible for the content you choose to post or share through features that allow visibility to other users.
For enforceable platform rules and legal terms, see Terms of Service & Data Use.
Roadmap and known gaps
AutoFiction is actively under development. We plan to add more features to support readers, writers, and researchers, and we will continue to refine our generation workflow and content policies as we learn from user feedback and research findings.
The next phase focuses on discoverability and social reading. Planned features include a "You may also like" section, stronger recommendations, and more shareable content formats. We also want commenting to feel faster and more natural with emoji reactions, emoji in reviews, and easier sharing.
We want the platform to feel more community-driven. That includes letting users follow one another and interact more around reviews and discussion threads. Over time, we also plan to support multilingual books so more readers can participate.
On the content side, we plan to keep improving how books are made behind the scenes. That includes building generation workflows for each book and improving them over time based on real usage. The goal is to make the reading experience better while also learning what works best as the platform grows.
Team and collaboration
AutoFiction is built by a small team of researchers at the University of Maryland, College Park. We are interested in collaborating with readers, writers, and researchers who care about AI-generated fiction. If you want to contribute, please reach out through the feedback form or email us at support@autofiction.ai.
Core contributors (equal contributions): Chau Minh Pham, Yapei Chang, and Mohit Iyyer.
We are also grateful for feedback from colleagues in CLIP Lab at UMD, UMass NLP, and the broader research community, as well as early users from our private beta and public launch.
How to cite AutoFiction
If you cite AutoFiction in research, cite the specific page you used for website content, and cite a specific software version when referencing the platform as software.
BibTeX (@software)
@software{pham_chang_iyyer_2026,
  author  = {Pham, Chau Minh and Chang, Yapei and Iyyer, Mohit},
  title   = {AutoFiction},
  year    = {2026},
  version = {0.1.0},
  url     = {https://www.autofiction.ai/},
  note    = {Web platform}
}