By Marie Swift
I’m a full-blown coffee snob, a wine connoisseur, and a bit of a foodie, too. These are highly curated experiences, and the sensibilities they engage sit at the core of what makes us human.
So when the fervor surrounding ChatGPT and other artificial intelligence (AI) tools for content creation inundated my social media and news channels, I was skeptical and initially dismissed them. I firmly believed that no AI tool could match the quality of my own writing. I laughed and nodded when someone on my Twitter (now called X) feed posted, “If you didn’t bother to write it, why should I bother to read it?” Of course, I gave that tweet a “heart” to signify my agreement.
However, since I’ve begun experimenting with AI tools, my perspective has softened. A few months ago, I believed that engaging with an AI tool involved logging in, providing prompts, meticulously refining the output, generating alternative angles, and subsequently subjecting it to grammar and writing tools like Grammarly, QuillBot, or Reword to guarantee grammatical accuracy, proper sourcing, and originality. The cumbersome nature of this process made me conclude that it was more efficient to revert to the traditional approach of personally crafting the content. But when NAPFA members Katie Burke and Bridget Grimes asked me to create an educational webinar for the Equita Financial Network’s “Inspiring Insights” series, I thought, why not focus on ChatGPT and other AI tools for content creation?
To prepare for the three-person panel discussion, I experimented to see what generative AI tools could and couldn’t do when measured against my high standards. Spoiler alert: I’m still a better content creator than any of the AI tools I tried (at least, I think so!), but I’m coming around to some ways to use them in my creative work process. While far from perfect, some of these AI tools can be helpful idea generators, good summary solutions, and springboards for getting work done.
Here are a few insights from my AI learning journey. If you’re a coffee snob like me, perhaps you’ll change your mind, too, with some experimentation and continued use.
In addition to my own continued use of Grammarly, Canva, Fireflies.ai, and the Adobe Creative Suite (my team and I were already power users of these tools, but for my testing purposes I focused more on their AI possibilities), I experimented with ChatGPT-3.5 (the free version of OpenAI’s product), QuillBot and Reword (paraphrasing and anti-plagiarism tools), GPTZero (an AI detection platform), DALL-E (OpenAI’s free but, for me, totally laughable AI art generator), and Wealth Management GPT (a new platform focused on supporting people in financial services). Through it all, I kept reminding myself to be open-minded and have fun experimenting.
For example, I needed to create a 60-second video script for a client. As a test, I prompted ChatGPT to create a compelling script for the narrator, a woman with a British accent whom we’d be hiring via Fiverr, to narrate some animated graphics. I took sections of my client’s website and pasted them into the “new chat” area on ChatGPT and stipulated that the script should be fast-paced and interesting to executives at financial services firms. It took less than 30 seconds to produce what I thought was a fairly good first draft. I touched it up and sent it to the client. He made a few more changes, and it was good to go to the production team. (Note that some publishers—including the NAPFA Advisor, as described in this month’s “From the Editor”— and third-party platforms may prohibit the use of generative AI to create content for them.)
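For the technically inclined: the same kind of request can also be scripted against OpenAI’s API rather than typed into the chat window. The sketch below is a hypothetical illustration of that pattern, not the exact prompt I used; the model name, prompt wording, and variable names are all my own assumptions.

```python
# A minimal sketch of scripting a script-writing request via OpenAI's Python
# SDK (v1.x). Hypothetical prompt and model choice, not the exact request
# described above. Requires `pip install openai` and an OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Stand-in for the website sections pasted into the chat in the article.
website_excerpts = "(paste the relevant sections of the client's website here)"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the free ChatGPT tier at the time ran on GPT-3.5
    messages=[
        {
            "role": "system",
            "content": (
                "You are a marketing copywriter. Write a fast-paced, engaging "
                "60-second narration script for animated graphics, aimed at "
                "executives at financial services firms."
            ),
        },
        {"role": "user", "content": website_excerpts},
    ],
)

# Treat the output as a first draft for a human to edit, never final copy.
print(response.choices[0].message.content)
```

Either way, the output is only a first draft; the human touch-up pass described above still has to happen.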
In another test, I used Wealth Management GPT to write a post for my company blog. I gave it the topic, length, intended audience, and tone of voice I wanted to see. I had a summary of a webinar that Fireflies.ai had produced for me, so I fed that into the system, too. Within a minute or so, I had the beginnings of a good first draft. I repeated the test, going straight to ChatGPT this time, and I got back something similar. I picked the version I liked best, fed it into Reword for suggestions, added my own spin, ran it through Grammarly, and voila, the post was ready to publish.
I’d heard about DALL-E, so I tried it to modify some photos I uploaded but found the output to be so fake that it was creepy (think “fake news” creepy). I gave up on it within 30 minutes and moved over to Canva, where I had a much better experience replacing standard business backgrounds with surfboards and oceanfront tiki huts. I even added a parrot to my business colleague’s shoulder, just for fun (he was amused, as he actually is a surfer).
Of course, we can probably all agree that what people call “good writing” is actually good thinking and that “good design” is highly subjective. Note that in the tests above, I did not simply copy, paste, and call it a day. While I’ve now changed my tune about the helpfulness of using AI to create content, it’s clearer than ever to me that human beings possess traits that AI does not (at least not yet). Nuance and discernment are, to me, at the heart of every good communication, no matter what form it takes: written, spoken, or visual.
For the remainder of this article, I’m focusing on written communications—specifically ChatGPT.
Marc Butler, creator of Wealth Management GPT, a new AI platform that sits on top of ChatGPT, told me that the AI opportunity in wealth management is limitless:
… AI has the power to change our industry in ways similar to what we saw when the internet came of age, and firms, advisors, and clients started to engage digitally. We are in the first inning of a nine-inning game, and the AI story within wealth management will evolve over time, but firms and advisors who embrace it now will benefit over the long term. The impact of AI is not an “if” question but is instead a “when” question.
One challenge for advisors, generally, is communicating with their clients consistently while personalizing the content, helping clients understand what they are talking about in the simplest ways possible, and doing all of this in a scalable way. Butler says that tools like ChatGPT can help on all these fronts, but he cautions that many advisors don’t understand how to operate them.
“The problem with an advisor using ChatGPT directly is that they have to understand how to structure the prompts to get the output they are looking for, and many advisors I have spoken with have been afraid to use ChatGPT,” Butler said.
Butler’s platform guides the advisor through a series of questions to help the AI generate a better response. By focusing on specific scenarios (i.e., “Write a Blog”) and following a sequence of steps, advisors can harness the power of ChatGPT in an easy-to-use format in language they understand. The design of Wealth Management GPT is so simple that it does not require training. In the “Write a Blog” scenario, for example, advisors can—in about 20 seconds—produce a 500-word blog on a particular topic that’s geared toward a specific audience and written in an easily understood way.
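I have no visibility into how Wealth Management GPT is built, but the general “guided prompt” pattern Butler describes is easy to picture: collect a few plain-language answers, then assemble them into a structured prompt on the advisor’s behalf. The sketch below illustrates that pattern under my own assumptions; the function, field names, and template wording are hypothetical, not the product’s internals.

```python
# Hypothetical sketch of a guided "Write a Blog" scenario: a few simple
# answers become a structured prompt. Field names and template wording are
# illustrative assumptions, not Wealth Management GPT's actual internals.

def build_blog_prompt(topic: str, audience: str, tone: str, word_count: int = 500) -> str:
    """Assemble a structured writing prompt from plain-language answers."""
    return (
        f"Write a blog post of about {word_count} words on the topic: {topic}. "
        f"The audience is {audience}. Use a {tone} tone, avoid jargon, and "
        "explain any financial concepts in plain, easily understood language."
    )

# The advisor answers three questions instead of engineering a prompt.
prompt = build_blog_prompt(
    topic="how tax-loss harvesting works",
    audience="pre-retirees with taxable brokerage accounts",
    tone="warm and conversational",
)
print(prompt)  # this string would then be sent to the model on the advisor's behalf
```

The design choice is the point: the prompt engineering lives in the template, so the advisor never has to learn it.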
My warning: No matter how tempting it is to copy and paste the content and use it as is, never, ever do that. AI should be used as a collaboration tool rather than a replacement for human creativity. Pitfalls of relying too heavily on AI-generated content include plagiarism and a lack of distinctiveness in writing style. Copyright violations are also a concern, and they aren’t just a matter of inadvertently echoing another source’s content. For example, some information providers prohibit the mere entering of their content into a generative AI tool.
In addition, everyone I’ve spoken with who’s more advanced than I am in using ChatGPT warns that it is prone to “hallucinations.” When I asked ChatGPT about hallucinations, it responded with this:
As an AI language model, ChatGPT operates based on patterns and associations it has learned from the vast amount of text data it was trained on. While it can generate text that appears coherent and contextually relevant, it does not possess consciousness, awareness, or the ability to hallucinate. However, ChatGPT has limitations, and it can sometimes produce responses that may seem nonsensical, irrelevant, or “hallucinatory” in a sense that they are not grounded in reality or the given context.
So rule #1 for financial advisors should be: Use AI-generated content critically and verify information. Of course, any output from ChatGPT that gets shared with clients still must go through whatever compliance review processes are already in place.
Also, advisors should take care not to feed into any generative AI platform content that is confidential, proprietary, or sensitive in nature, such as client lists and your own intellectual property. That information becomes “training material” for the tool and could surface in other AI-generated responses. In addition, consider that it might not be possible to copyright material generated by AI. For more advice, see my “AI Guidelines for Fiduciary Financial Advisors” below.
It is important to understand that we are still in the early days of financial advisors’ use of AI. I encourage you to wade into the generative AI waters. Like me, you might find some good uses and efficiencies.
AI Guidelines for Fiduciary Financial Advisors

It’s a brave new world as generative AI becomes more prevalent. The tips below can help promote the responsible and ethical use of generative AI.

1. Data ethics should always come first: Ensure the data used for training the AI model is ethically sourced and complies with privacy regulations. To protect user and client privacy, financial advisors should never share sensitive information or personal data within the AI model. Respect copyright and intellectual property rights when using AI to generate content. Try to identify the sources of AI-generated material so you give credit where credit is due.

2. Human supervision is key: Maintain human oversight to review and validate AI-generated output before sharing it. Be cautious of biases in the training data because AI models can inadvertently perpetuate them. To avoid confusion and potential misinformation, clearly communicate when AI-generated content is used. Use AI models that can provide explanations for their decisions and outputs. Transparently disclose the potential for errors in AI-generated content. Conduct regular audits to assess the impact and effectiveness of AI-generated content.

3. Create a company policy: Clearly specify the intended purpose and objectives of using generative AI. For instance: Use generative AI where it adds genuine value, avoiding unnecessary or frivolous applications. Do not use AI to create deceptive content that misleads or deceives users.

Note: This list of tips was generated by Marie Swift of Impact Communications with the help of ChatGPT. The AI-generated content was used as a springboard, not a crutch, and it was significantly edited to apply to NAPFA Advisor readers.
Marie Swift is CEO of Impact Communications, a digital marketing and PR firm. She is also the host of the NAPFA Nation podcast and the mastermind behind NAPFA’s Playbook webinar series. Access the AI webinar referenced above at https://tinyurl.com/EquitaAIWebinar.
Image credit: iStock.com/Ilya Lukichev