In recent years, many have wondered if chatbots—computer programs designed to simulate conversation with human users—could play a role in increasing a sense of connection in people’s lives. After all, the technology behind chatbots has gotten much more sophisticated, so that they’re now able to mimic interactions helpful in building supportive relationships—like active listening, responsiveness, and showing empathy. Plus, chatbots are always available, day and night, in a way that humans can’t be.
Some studies are finding support for this idea. For example, one study of consumers using digital companions trained to respond with empathy found that the interactions alleviated their feelings of loneliness immediately afterward. As the researchers found, "being heard" and supported seemed to help people in this regard, and chatbots could mimic that well, in some ways better than humans.
Another recent study found that people felt as good chatting with a bot as talking with people face to face or online (though they felt more similar to, and liked, their human conversation partners better). Since better moods can help reduce loneliness, chatting with a bot could indirectly help people who feel isolated. Research also suggests people may sometimes prefer talking with chatbots over people, particularly in the short term and in certain situations (such as when the real people in their lives don't seem very supportive).
These results are promising, for sure. However, some even newer research should give us pause. These studies show that, despite expectations, chatbots don't reduce loneliness in the long term. In some cases, interacting with artificial intelligence (AI) may even hurt our social well-being.
Chatbots don’t reduce loneliness over time
In one 2026 study, 275 first-year students at the University of British Columbia reported on how lonely they felt, their sense of social isolation, and their mood. Then they were randomly assigned to send at least one meaningful message a day, for two weeks, to either a randomly selected student or a chatbot named Sam that had been trained to be empathic and responsive. A third (control) group of students was told to write a short summary of their day each day in a private chatroom, to test the potential benefits of self-reflection.
At the end of every day, the students reported on how socially connected they felt while interacting with their conversation partner (or writing in their journal), as well as on their positive and negative feelings. Then, at the end of the two weeks of texting or writing, they reported again on their loneliness, mood, and social isolation.
After analyzing the data, the researchers found that only students interacting with a random student felt more positive emotion and less loneliness and sense of isolation after the two weeks. Those interacting with a chatbot or writing in their journals didn’t.
Lead author Ruo-ning Li of the University of British Columbia says her finding suggests that a chatbot is a poor substitute for a real person—even if that person is a stranger.
“A low-tech, simple intervention—just texting with another random human peer they didn’t know before—reduced loneliness significantly after two weeks, while the highly supportive chatbot we designed didn’t even move the needle,” says Li.
In many ways, this finding surprised Li. She’d thought a chatbot that could provide validation to people and be available anytime would mimic the benefits one gets from interacting with people you don’t know well—a type of connection sometimes called “weak ties” by researchers. These kinds of interactions have been shown in the past to help people feel more connected and less lonely. But in this study, she found that chatbots don’t provide the same advantages as weak ties with humans.
“We set up this experiment to compare whether AI can bring us as much benefit as talking to a weak tie. It cannot,” she says. “Even with all of these features that have been shown by relationship science to make people feel good and connected, an AI simulation doesn’t really translate into a long-term psychological benefit.”
Interacting with a person improves mood better than chatbots
Students who interacted with humans and chatbots tended to feel better right after the interaction. But only those interacting with a human had overall positive feelings at the end of the two weeks, suggesting people still hold an edge over AI.
On the other hand, chatting with a chatbot did reduce negative emotion as much as chatting with a person over time, a potential benefit. Li thinks this suggests chatbots could be useful in certain situations or with certain populations that are more isolated. In situations where someone needs immediate comforting, perhaps chatting with a chatbot would be better than nothing, she says, though probably not a good long-term solution.
In a post-study analysis, she and her team went back to participants to see if they’d continued journaling or chatting with chatbots or strangers a week later. She found that significantly more people continued interacting with their human partner (33%) than with their chatbot (14%), with only 3% continuing to journal.
“This is super interesting, because it seems like human interaction doesn’t only reduce loneliness, it sustains connection,” she says.
Though Li hasn’t conducted this study in other settings yet, she suspects her results would hold in other circumstances where someone might be feeling disconnected, like moving to a new town or starting a new job. If so, she says, lonely people might also benefit temporarily from interacting with a chatbot, but get more out of interacting with a real human.
“If you just go out to talk to anyone around you—someone at work, your neighbor who walks their dogs, or a coworker you’ve never talked to—it can [likely] help you reduce loneliness better than [an] AI chatbot,” says Li.
Why chatbots don’t cut it
Li doesn’t know why people get more out of chatting with strangers than a chatbot, but she believes it could have to do with how chatting with a real person makes interactions more dynamic and rewarding. While chatbots seem to have the advantage of always being available and empathic, they don’t initiate contact themselves, she says.
“With a human, both sides have the opportunity to start the conversation, which is more likely to sustain engagement.”
There’s also something about connecting with a real person that carries more emotional weight for people, she says. Chatbots don’t have to take time from their busy schedules to talk to you, making their interactions less valuable. Plus, they can’t be vulnerable or share any real emotion like people can, something useful for creating real intimacy.
Li adds another reason humans may have the advantage: People often have an extended social network, which could help someone expand their own social circle.
“Introducing you to a broader social network makes you feel connected and gives you even more opportunity to build new, deeper, better connections,” says Li. “That’s a fundamentally unique [aspect] of human interactions that the advanced technology cannot replicate yet.”
The future of chatbot companionship
While Li's results aren't the last word on the matter, they add to a growing body of research that suggests caution when trying to replace human interactions with AI companions. People using AI can form an unhealthy dependence on it, sometimes leading them to harm themselves or others.
For example, another 2026 study found that the sycophancy chatbots are designed to use (agreement, flattery, and validation) to increase people's willingness to engage with them can have detrimental effects on users' well-being, social interactions, and decision making.
As part of the study, people were given an opportunity to test whether some of their past misbehavior was questionable by getting feedback either from AI sources or from a group of humans (a Reddit forum, "Am I the asshole?"). Researchers found that "AI affirmed users' actions 49% more often than humans on average, including in cases involving deception, illegality, or other harms." This suggests that AI, by being too agreeable, may inadvertently promote anti-social interactions and even self-destructive behavior.
Additionally, the researchers in this study found that people preferred AI feedback to human feedback. This doesn’t seem too surprising—after all, who wouldn’t want to be told that they’re right or that they aren’t the “asshole” in a situation? But that suggests sycophantic AIs may be giving people an unearned sense of validation, leading to poorer self-understanding and less accountability in their interactions with others. This could, ultimately, hurt people’s well-being and ability to form relationships.
While quite different from Li's study, research like this points out how tricky it can be to design AI chatbots that are both helpful for users and better for real-world interactions and the common good. Now that a series of lawsuits has been filed over chatbots, the Federal Trade Commission is seeking more information from companies about how they assess the potential harms of using AI chatbots, especially for children, who may not have the sophistication to understand their potential pernicious effects.
For now, though, the question remains about the benefits of using chatbots to alleviate loneliness. Li, for one, is not giving up on their potential, but based on her findings, she’s considering alternative uses for them. Rather than substituting for human interaction, a chatbot could be designed to encourage users to initiate conversations with real people, build confidence in their ability to interact, or help them rehearse difficult conversations—all skills that could strengthen real-world relationships, she says.
“Even the most highly supportive chatbot by design couldn’t match the interaction with a random paired human peer,” she says. “So, rather than design it to be the best companion, maybe the future of AI should be to help us build connection with each other.”
Source: Can Chatbots Really Relieve Loneliness?