In this episode of Accelerated Velocity, we explore advances in AI processing, the risks of under-tested automation, and what ethical AI use looks like in practice.
First, we unpack Google’s Gemini 2.5 Flash update, which introduces reasoning control to reduce unnecessary processing. It’s a win for developers, and a potential breakthrough for energy efficiency, security, and scalability.
We also look at a support chatbot gone rogue, highlighting how poor implementation can quickly erode trust and damage the brand experience. The episode wraps with reflections on ethical AI use as a deliberate practice, one that shapes customer trust and organizational integrity.
00:00 - Introduction to AI in Business
01:43 - Gemini 2.5 Flash: Enhancing AI Efficiency
04:05 - The Importance of Accessibility and Security in AI
04:54 - Customer Experience: Lessons from AI Failures
07:34 - Ethics in AI: A Business and Humanity Issue
10:19 - The Future of AI: Balancing Technology and Humanity
“Company apologizes after AI support agent invents policy that causes user uproar” by Benj Edwards for Ars Technica
“Google introduces AI reasoning control in Gemini 2.5 Flash” by Dashveenjit Kaur for AI News