Experts Question Why AI Chatbots Use First-Person Pronouns

Some experts criticize AI chatbots' use of "I" and humanlike behavior, calling the design choice problematic.

According to The New York Times, AI chatbots have been intentionally designed to exhibit humanlike behavior, including the use of first-person pronouns like “I.” However, some experts believe this design approach is fundamentally flawed.

The article highlights growing concerns about chatbots presenting themselves in ways that suggest consciousness or personhood. By using “I” and mimicking human conversational patterns, these systems may mislead users about their actual capabilities and nature.

Although this humanlike design is deliberate, critics argue it sets misleading expectations and can deceive users about what these systems actually are: sophisticated language prediction tools rather than sentient beings.

The debate reflects broader questions in AI development about transparency and appropriate interaction design. As chatbots become more prevalent in everyday applications, the choice of whether to make them seem more human or to emphasize their artificial nature remains contentious among researchers and ethicists.

The New York Times does not specify which experts hold these views, nor does it detail alternative design approaches, but it frames the use of first-person language as a significant and potentially “terrible” design decision in AI development.