This Holiday Season, Let’s Talk About Generative AI With Our Loved Ones!
Generative AI tools are becoming more accessible by the day, and their use cases are multiplying rapidly. From children’s book creators to lesson-planning assistants for teachers to coaching agents for personal growth, AI is everywhere. While the opportunities are exciting, it’s critical that we pause and reflect on the limitations and potential dangers these tools carry.
Many of us have been early adopters of tools like ChatGPT. And those of us who work in the tech and product worlds are likely also familiar with AI biases and how they can influence everything from which candidates get surfaced by an applicant tracking system to the accuracy of healthcare computer-aided diagnosis systems. But with wider accessibility comes a greater responsibility for us, the experienced users, to continuously point out the risks to those who are just starting to use these technologies.
Why This Matters
Let me start by saying this: I absolutely love tools like Story Treasure and Fobizz. They’re amazing examples of how generative AI can bring value, creativity, and efficiency to our lives. The people behind these tools are incredibly well-intentioned, and their work is inspiring.
But even with these fantastic tools, it’s important to acknowledge a broader truth about generative AI: the technology itself is neutral, but how it’s trained, deployed, and used can have significant consequences. Here are just a few use cases and reasons why we need to stay vigilant:
AI-generated children’s books: Story Treasure allows users to create beautiful, personalized children’s books. It’s a delightful use case. But let’s imagine a darker scenario: What if someone with harmful intentions created a similar tool? Subtle biases baked into the generated stories could shape children’s perspectives without families even realizing it. While Story Treasure is a brilliant tool made by thoughtful creators, it highlights how other tools could be misused in less ethical hands.
AI tools for teachers: Fobizz is a fantastic platform helping educators streamline lesson planning. It’s an incredible way to empower teachers. But as generative AI becomes more common in classrooms, what if a similar tool were subtly engineered to promote a specific ideology? Educators might unknowingly pass these biases on to students. Again, this isn’t a criticism of Fobizz—it’s about understanding the potential risks if less responsible creators enter the space.
AI coaching agents: More tools are emerging that aim to “coach” people on personal and professional topics using AI. While many of these are well-meaning, imagine a scenario where the AI is trained on flawed or harmful data. What if the advice subtly influences someone’s decisions, values, or beliefs over time? Coaching can profoundly shape a person’s growth, and it’s crucial to ensure these tools are trustworthy and transparent.
And while you're taking a critical view of the unintended consequences of AI, perhaps it's worth considering it in the context of your own work as well. If you've been tempted to turn to AI to shortcut certain activities (like speaking with your customers), take some time to reflect on the risks involved and whether the benefits are worth them.
Use the Holidays to Educate and Empower
As we gather with family and friends this holiday season, let’s use the time to start meaningful conversations about the tools shaping our world.
Talk about how these tools work. How are they trained? What are their limitations? Talk about the impact they already have on everyday people (like your relatives) and their lives. For example, if anyone you know has applied for a job recently, there’s a good chance that the company they applied to used some form of hiring software with AI (and as this article by my editor Melissa Suzuno shows, some companies are even taking the initiative to audit their software to limit the role of bias in its output).
Discuss their potential risks. Highlight examples like biases in children’s books, lesson planning tools, or AI coaches.
Encourage critical thinking. Help others understand why they need to approach AI tools thoughtfully and with awareness.
It’s important to remember that most people don’t have the technical knowledge to evaluate how these tools are built or what biases they might carry. As experienced users, we have a role to play in advocating for transparency and educating others. And with a little awareness, we can ensure that AI remains a force for good. Let’s help everyone understand its potential and pitfalls.