Published on: Friday, June 06, 2025
There is an episode of The Simpsons where Homer is working at his computer, responding to queries with “yes” or “no”. Homer quickly determines that the proper response is always “yes”. And he then realizes that he doesn’t have to type the whole word but just the “Y”. So he sets up one of those bobbing, perpetual motion drinking birds to keep hitting the Y, and he goes off and enjoys his day.
Artificial Intelligence is coming on fast. The pundits are telling us that we should have it do this and that for us. One can't help but feel like Homer sometimes as the technology becomes available to do more and more of our work.
If you have used apps like ChatGPT, you’ll know how mind-blowing the capabilities can be. And you’ve probably seen people get carried away with it, too. A friend who was looking to hire received a resume that began with “Dear [.]” and ended with “Would you like me to suggest changes or create a PDF?” Oh dear.
In my world of wealth management, we’re being shown ways we can hand off tasks to AI to increase our productivity. I’m open to that. But will it improve the client experience?
They say I can get it to write a reply to my emails. Yes, that could save me time. But what if my clients can tell that it was not written by me? They didn’t fill out a web form looking for information from some big, faceless organization. They asked a question of their trusted advisor to whom they pay fees. Will they feel short-changed?
It’s also been suggested that I use it to listen in on client meetings and take notes – more thorough notes than I ever could. Then I will be able to focus more on the clients. Again, that could be helpful. But two problems (at least) come to mind. One is privacy. Now the clients’ private information is on yet another cloud server somewhere. I try to limit the number of places information appears (and the depth of it) to protect against breaches. The opportunity set for something to go wrong just got bigger.
The second thing is that the conversation changes when someone knows they’re being recorded. I worry that a technology like this will cause us to act differently. Right now, I have heart-to-heart conversations with clients, and sometimes, they delve into difficult topics such as their insecurities or family dynamics. I have a feeling that they won’t speak as freely if a device is listening in. I will probably not be exactly the same either. If the idea is to let me focus more on the client, but the conversation changes for the worse, how is that helping anyone?
By no means am I shooting down AI. It is an amazing tool, and I am finding ways to use it to save time. I get a lot of emails from investment media publications, and there is a lot to stay on top of. I will often hit the “summarize” button to help me do triage and find what needs to be read more closely. With the way things are going, the capabilities will quickly expand and improve. We have to get used to it. This is not a fad.
Maybe we’ll all eventually get used to the fact that this is the way the world now works.
Maybe we’ll come to expect replies that were written by a machine, or we’ll stop thinking about or noticing that we’re being recorded.
Either way, I think it behooves us to approach it with a bit of skepticism and a great deal of caution. Giving financial advice is an extremely personal endeavour. We don’t want to let technology dehumanize deeply human interactions. I’m going to use the technology in very select ways and with a great deal of discretion. The last thing we need is something that causes us to say “D’oh!”.
I wrote this myself.