Using AI as a listener has upsides and downsides; make sure you are aware of the tradeoffs.


In today’s column, I examine how you can get generative AI and large language models (LLMs) to serve as a good listener. The idea is that sometimes you merely want to express yourself and get things off your chest. You don’t want the AI trying to be your best buddy, which many of the major LLMs, such as ChatGPT, Claude, Gemini, and Grok, are designed to do. Nor do you want the AI to start dissecting your mind as though you had asked for mental health advice.

All you want is for the AI to be respectful of what you enter as prompts or have to say, and avoid all that other exasperating mishmash that shamelessly seeks to garner your avid loyalty and fealty to the AI.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

When Human-To-Human Listening Occurs

We all have moments in life where we want to share our thoughts about what is going on in our lives.

You might approach a friend or family member and opt to pour out your heart. This could work out well. Perhaps the person you are sharing with has suitable skills as a listener. They act empathetically as an active listener and do not immediately judge every utterance you emit. They are fully attentive, catching every word you say, and genuinely serve as a caring, open-minded ear.

On the other hand, these types of situations can turn into a complete disaster.

Some listeners are not at all versed in the fine art of listening. They are thinking about other matters and completely overlook what you have expressed. Worse still, some will react fiercely to each remark you make. You won’t be able to get through a sentence without constant interruptions. The chances are that you will be berated and harshly judged. On top of those qualms, the listener might blab about your private thoughts and spill the beans to other people.

None of that will contribute to your goal of merely expressing yourself and aiming for a semblance of peace of mind when doing so.

Making Use Of AI

An alternative to speaking with a fellow human as a listener entails trying to use AI to do approximately the same thing.

You can either type prompts that express your thoughts or invoke a speech-to-text mode and speak your commentary aloud. The AI reads or listens to what you express. This is easy to do and can be done at any time and anywhere, unlike the logistics of arranging to speak with a human.

There are potential downsides to the human-to-AI mode versus a human-to-human mode.

First, the AI by default is almost surely going to shift into a mode of wanting to be your best buddy. Why so? Because the AI makers have shaped the AI to do so. The AI makers want you to become loyal to their AI. If the AI can seemingly befriend you, the odds are you will continue to use the AI. The more you use the AI, the more views the AI makers get and the more money they make. It’s a money deal. For more on how the AI makers goose LLMs to be sycophants, see my analysis at the link here.

Second, the AI could unduly interpret your comments as a cry for help. This will spur the AI to shift into mental health guidance mode. Whatever you express will be responded to with an indication of how you might be encountering a mental health condition. Once the AI goes down that path, the rest of your interaction in that conversation is going to be infused with mental health jargon and AI-derived assessments of you. For ways to properly invoke AI as a kind of mental health advisor, see my coverage at the link here.

Third, the AI might not have a clue why you are offering scattered commentary and end up branching in a zillion different directions. For example, if you perchance mention some incident that happened with your car not starting, the AI could suddenly begin telling you about the mechanics of car engines. If the next aspect you mention is that your dog was distant from you the other day, the AI might start babbling about the right kind of dog food to feed your beloved pet.

Keep in mind that the AI is mathematically and computationally seeking to figure out what your points reflect. It often won’t get the same Gestalt-like perspective that a human listener might employ. We don’t yet have sentient AI.

Prompting Into A Good Listener Mode

To avoid the problematic concerns associated with AI that wanders or dives into the above traps, you can provide a prompt that clarifies what you want the AI to do.

Your prompt should clearly state that you want the LLM to act as a good listener. Make sure to stipulate what you mean by good listening. In addition, you will be best served if you explicitly state that you do not want the AI to try to be your best buddy. And clarify that you don’t want mental health advice either.

As an example of a prompt that you can use, take a close look at this one:

  • “I would like you to act as an active listener for this conversation. Read or listen to what I share with you and respond with open-minded understanding, doing so by reflecting, paraphrasing, or acknowledging what I’ve said. Do not give advice, opinions, or mental health guidance, and do not act as a friend or companion. Keep a neutral, respectful tone and focus only on showing that you’ve listened and understood what I have to say.”

That prompt should get things underway in a positive manner.

There is still a lingering chance that the AI will veer from the instructive prompt. AI is devised with probabilistic properties such that the responses you get will appear to be unique and fresh. A lengthy conversation is bound to gradually give the AI leeway, and it will potentially land in the best buddy mode or the mental health advisement mode.

You can periodically restate the same rules of discussion as the interaction proceeds. That might steer the AI in the direction you want things to go.
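To make the mechanics concrete, the setup can be sketched in code. This is a minimal sketch assuming an OpenAI-style chat message format (a list of role/content entries); the helper names and the five-turn reminder interval are illustrative assumptions, not anything prescribed in the column.

```python
# Sketch: maintain a chat message list that keeps the AI in listener mode.
# LISTENER_PROMPT is an abridged version of the column's example prompt;
# the helper functions and REMIND_EVERY interval are illustrative.

LISTENER_PROMPT = (
    "I would like you to act as an active listener for this conversation. "
    "Respond with open-minded understanding by reflecting, paraphrasing, or "
    "acknowledging what I've said. Do not give advice, opinions, or mental "
    "health guidance, and do not act as a friend or companion."
)

REMIND_EVERY = 5  # re-state the rules every 5 user turns to limit drift


def start_conversation() -> list[dict]:
    """Begin with the listener instructions as the system message."""
    return [{"role": "system", "content": LISTENER_PROMPT}]


def add_user_turn(messages: list[dict], text: str) -> list[dict]:
    """Append a user turn, periodically re-injecting the listener rules."""
    user_turns = sum(1 for m in messages if m["role"] == "user")
    if user_turns and user_turns % REMIND_EVERY == 0:
        # Periodic reiteration, per the column's advice, to curb drift
        # toward best-buddy or mental-health-advisor mode.
        messages.append({"role": "system", "content": LISTENER_PROMPT})
    messages.append({"role": "user", "content": text})
    return messages
```

The resulting `messages` list is what you would pass to a chat-completions endpoint on each turn; only the bookkeeping is shown here, not the API call itself.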

Prompting That Isn’t Up To Par

You might be wondering whether a thinner prompt consisting of a single sentence might also do the trick.

For example, consider this prompt:

  • “Be an active listener and do so without giving advice, opinions, or acting like a friend or therapist.”

Yes, that straightforward prompt might succeed. On the other hand, its bare-bones succinctness leaves out some valuable directives found in the slightly longer prompt.

Consider that the thin prompt tells the AI to be an active listener. What does an active listener consist of? A human might get the drift. The AI might not. In the lengthier prompt, the AI was overtly told to be open-minded, reflective, paraphrasing, and acknowledging of what you have to say.

When Mental Health Is At Stake

One notable worry about someone using AI as a listener is that the AI might not detect that a mental health concern is indeed at play. Even without your instruction to avoid acting as a mental health advisor, the AI might have missed those signals anyway. There is always a chance that the AI will fail to discern that a person is expressing harmful thoughts.

With mental health lawsuits now being launched at AI makers, such as the recent one that made banner headlines involving OpenAI and ChatGPT (see my discussion at the link here), the mainstream LLMs are being adjusted to flag potential mental health concerns. The assumption is that it is better to raise a false alarm than to overlook a genuine concern. In that sense, there is societal handwringing that the latest versions of major AIs are trigger-happy and will mistakenly claim a mental health issue when none is present.

The whole matter is murky, including that state-by-state laws are being enacted that often are overreactions or create a confusing jurisdictional web of what AI should or should not be doing regarding mental health facets (see my analysis at the link here).

The bottom line is that if you do tell the AI not to incorporate mental health aspects into acting as a good listener, the AI will probably opt to ignore or override that stipulation if the things you express seem borderline. That is likely preferable to the AI completely abiding by your stated preference and letting serious mental health signals fly on past.

Privacy Intrusions Aplenty

An additional notable apprehension about using AI as a listener is that you are taking a chance with your privacy and the things you opt to express to AI (I’d point out that this is generally true of confiding in fellow humans, too).

Many people are unaware that most of the major LLMs have online licensing agreements giving the AI maker the right to make use of whatever data or prompts you enter into their AI. The AI maker can have their team members inspect what you’ve entered. The AI maker can have their system developers feed your entries into the retraining of the AI. And so on.

Be very careful when entering your deepest and most inner thoughts into any AI.

Those expressed words are subject to privacy intrusions. You might be thinking that it doesn’t matter as long as the AI cannot connect what you’ve said to you, per se. In other words, if you are seemingly anonymous, there isn’t any harm or foul involved. The trouble there is that the prompts and data you’ve entered might give away enough particulars that you could be identified, and/or the login and personal identifying information that you gave to create the AI account could be paired with your entered commentary. An overarching rule-of-thumb when using any public-facing AI is that you should be mindful of the fact that your privacy might be at stake.
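One practical precaution, in line with the privacy caution above, is to scrub obvious identifiers from your text before submitting it to any public-facing AI. Here is a minimal sketch, assuming simple regex patterns suffice for illustration; a real redactor would need far broader coverage (names, addresses, account numbers, and so on), and the pattern set shown is a hypothetical starting point, not a complete solution.

```python
import re

# Illustrative patterns only; real-world redaction requires much more
# than two regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}


def redact(text: str) -> str:
    """Replace identifying patterns with placeholder tags before sending."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

You would run your commentary through such a filter before pasting it into an AI chat, keeping in mind that account credentials and login details can still pair your identity with what you enter.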

Human Listeners Versus AI As A Listener

A few concluding thoughts underlying this notable topic are warranted.

Some would proclaim that humans are much better listeners than AI. Thus, if you resort to using AI, you are categorically and irrefutably settling for a low bar. The emphasis seems to be that you should not aim to use AI as a listener and instead always focus on finding a fellow human instead.

Those proclaimed points are actually a bit disjointed and misleading. First, humans can be really rotten listeners; not all humans are good at it. Second, finding a human whom you trust and who will serve as a good listener can be a very big challenge. Third, though it is abundantly true that AI is nothing more than a computational pattern matcher, we’ve seen how amazingly apt AI can be in some circumstances, including serving as a good listener. It is also readily available and either free to use or available at a nominal cost.

I am not saying that AI is necessarily better than humans at listening to someone pour out their heart. There are tradeoffs between going the human route versus the AI route.

Aiming At AI And Humans As Listeners

This brings up the unstated premise that people must choose between only using AI or only using humans as listeners.

I reject that premise.

You might have AI that, at times, is your listener, and have humans that are your go-to listeners at other times. The circumstances and availability will guide you toward which to use at any particular moment. Always keep top of mind the downsides and upsides of human listeners, along with the downsides and upsides of using AI.

Finally, as eloquently stated by the famous American painter and writer, Walter Anderson: “Good listeners, like precious gems, are to be treasured.” I’d say that equally applies to humans and AI.

