Talk to the City (T3C) uses Large Language Models (LLMs) to strengthen collective decision-making by transforming large-scale public input into actionable insights. Unlike traditional polling or commercial survey tools, T3C preserves the nuance of individual perspectives and captures authentic voices, while surfacing the broader themes and differences that matter most. Like all AI systems, LLMs have limitations and risks that users should understand.
LLMs excel at recognizing language patterns, identifying themes, and summarizing complex information. However, they are prediction machines, not truth machines. They generate text based on patterns they have learned, not from verified facts. When an LLM produces text that sounds plausible but is not grounded in source materials, it is called a "hallucination."
In the context of summarizing large opinion datasets, this might mean a claim or quote attributed to participants that no one actually expressed, or a theme that does not appear anywhere in the underlying comments.
To mitigate these risks, we have built multiple safeguards and validation steps into Talk to the City's processing pipeline:
For each comment, the LLM extracts explicit claims and must link each claim to a verbatim quote from a real participant that supports it. When assigning categories, the LLM may only use the topic and subtopic names generated during the initial extraction phase; no variations or new names are allowed.
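The exact mechanics are internal to T3C, but one common way to enforce this kind of constraint is to restrict the model's structured output to an enumerated list of the names produced in the first pass. The sketch below is illustrative only; `build_extraction_schema` and the sample taxonomy are hypothetical, not T3C's actual code.

```python
import json

def build_extraction_schema(taxonomy: dict[str, list[str]]) -> dict:
    """Build a JSON schema for claim extraction that only allows
    topic/subtopic names produced during the initial extraction phase."""
    all_subtopics = sorted({s for subs in taxonomy.values() for s in subs})
    return {
        "type": "object",
        "properties": {
            "claim": {"type": "string"},    # the explicit claim
            "quote": {"type": "string"},    # verbatim supporting quote
            # Enumerations pin the model to existing names; no new or renamed categories.
            "topic": {"type": "string", "enum": sorted(taxonomy)},
            # (a fuller version would also check that the subtopic belongs to the chosen topic)
            "subtopic": {"type": "string", "enum": all_subtopics},
        },
        "required": ["claim", "quote", "topic", "subtopic"],
    }

# Hypothetical taxonomy from a first pass over the comments:
taxonomy = {"Housing": ["Affordability", "Zoning"], "Transit": ["Bus service"]}
print(json.dumps(build_extraction_schema(taxonomy), indent=2))
```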
Short comments (fewer than three words) are filtered out because they can cause the LLM to hallucinate by inventing missing details.
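A minimal sketch of such a filter, with an illustrative function name (the three-word threshold comes from the rule above):

```python
def is_too_short(comment: str, min_words: int = 3) -> bool:
    """Return True for comments with fewer than three words: too little
    context to ground a claim, so they are dropped before extraction."""
    return len(comment.split()) < min_words

comments = ["Yes.", "More buses please", "The park closes too early on weekends"]
kept = [c for c in comments if not is_too_short(c)]
# kept == ["More buses please", "The park closes too early on weekends"]
```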
Each extracted claim in a Talk to the City report is directly traceable to real people's opinions. Clicking a claim reveals the exact supporting quotes and the participants behind it, so you can verify whether the summary is fair and accurate.
We also run automated tests to flag low-quality extractions.
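The specific tests are not enumerated here, but given the constraints described above, they might resemble checks like the following sketch (function and field names are illustrative, not T3C's actual test suite):

```python
def quality_flags(claim: dict, source_comment: str) -> list[str]:
    """Return reasons an extracted claim should be reviewed or discarded."""
    flags = []
    quote = claim.get("quote", "").strip()
    if not quote:
        flags.append("missing supporting quote")
    elif quote not in source_comment:
        flags.append("quote is not verbatim from the source comment")
    if not claim.get("claim", "").strip():
        flags.append("empty claim text")
    return flags

claim = {"claim": "Buses should run later at night.",
         "quote": "the last bus leaves way too early"}
print(quality_flags(claim, "I think the last bus leaves way too early for shift workers."))
# -> []  (the quote appears verbatim, so nothing is flagged)
```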
Every report also includes a detailed audit log of the processing decisions made along the way.
This enables full traceability. If you discover a possible hallucination or misclassification, please report it.
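As an illustration of what a traceable log entry could look like (the field names and JSON-lines format here are hypothetical, not T3C's actual log schema):

```python
import json
import datetime

def audit_entry(step: str, comment_id: str, decision: str, details: dict) -> str:
    """One line of a JSON-lines audit log recording a single processing decision."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "step": step,              # e.g. "filter", "extract", "categorize"
        "comment_id": comment_id,  # which comment the decision applies to
        "decision": decision,
        "details": details,
    }
    return json.dumps(record)

print(audit_entry("filter", "comment-042", "dropped",
                  {"reason": "fewer than three words"}))
```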
Despite our safeguards, some risks remain:
The AI might slightly exaggerate or reframe a sentiment, producing a claim that adds certainty not present in the original comment.
LLM topic sorting may differ from human intuition, occasionally merging or fragmenting categories in unexpected ways.
Niche or minority perspectives can be underrepresented or absorbed into broader themes.
Because LLMs reflect patterns in their training data, they may reproduce cultural or societal biases in phrasing or emphasis.

To make the most of your report, stay alert to two types of potential issues:
AI Interpretation Limits: These relate to how language models interpret or summarize text.
Underlying Data Gaps: These arise from limitations in the input itself—such as low participation, uneven representation, or missing perspectives. AI analysis cannot correct for these gaps.
Think of Talk to the City as a highly capable research assistant: fast, consistent, and insightful, but still requiring human oversight and judgment. Use reports as a starting point for conversation, not as the final conclusion. Verify key claims, examine the original quotes, and apply your own contextual knowledge. Our goal is to make your community's voice easier to hear: clearly, honestly, and with full awareness of the technology's limits.