With releases like Codex 5.5 and Claude Opus 4.7 in the past month, the debate over which AI tools “win” and which combinations work best in real workflows has flared up yet again. We could argue about which models “win” for different use cases all day, but that is not the most pressing question for most businesses right now. The question I want to pose instead: how do you decide which LLMs deserve a real place in your firm’s day‑to‑day workflows?
First, you need a clear plan for which general‑purpose LLMs your team will use and for which types of tasks. Next, you need to consider how those choices connect to the systems that already run your business. Different industries will have their own niche tools to factor in, but every organization should be thinking ahead about where their CRM, marketing automation, and other core platforms fit into the overall AI picture.
Maybe this is a familiar trend at your firm: One person prefers ChatGPT. Another uses Claude. Someone else has a Gemini tab open all day. A few people lean on Perplexity for research and content ideas.
These tools might all be useful on their own, but together they can create a messy, ungoverned landscape. This usually happens when teams lack clear guardrails about which AI tools should be used for which tasks.
When there is no shared plan, you get scattered work, higher subscription costs, and uneven quality.
A better approach is to define clear roles for your core AI tools. For example, you might decide that one LLM is your team’s default for drafting and client‑facing language, while another is reserved for data-heavy analysis or technical automation work.
Instead of a free‑for‑all between Claude, ChatGPT, Perplexity, Gemini, and others, aim for a short, intentional list of tools and documented use cases. You can keep this simple. In practice, your guide might look something like this (examples only; adjust to your reality):

- One general‑purpose LLM, such as ChatGPT or Claude, as the team default for drafting and client‑facing language
- A second model reserved for data‑heavy analysis and technical automation work
- Perplexity (or a similar tool) for research and content ideas
The goal is to remove ambiguity so people know which AI tool fits which job.
The conversation should not end at “Which LLM do we like most?” You should also consider how these choices support the systems you already rely on. For most service businesses, that means your CRM and your marketing automation platform.
Your CRM, especially if you use HubSpot, already holds the structure of your business: contacts, companies, deals or matters, timelines, and activities. When your chosen LLMs connect thoughtfully to that data, they can support work your team actually cares about, such as summarizing a matter’s timeline before a client call or drafting outreach informed by deal history.
This is where innovation with popular LLMs gets more interesting. Models that can handle longer contexts and more complex instructions are better candidates for behind‑the‑scenes work inside systems like HubSpot, not just as standalone chatbots in a browser.
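To make “behind‑the‑scenes work” concrete, here is a minimal sketch of the pattern: assembling CRM‑style records into a prompt an LLM could then summarize. The field names, the `build_summary_prompt` helper, and the sample records are all illustrative assumptions for this example, not HubSpot’s actual API; a real integration would pull live data from your CRM and pass the prompt to your chosen model.

```python
# Illustrative sketch: turning CRM-style records into an LLM prompt.
# The field names and sample records are made up for the example;
# a real integration would fetch them from your CRM's API.

def build_summary_prompt(contact, activities):
    """Format a contact and its recent activities into a single prompt string."""
    lines = [f"Summarize the relationship with {contact['name']} ({contact['company']})."]
    lines.append("Recent activity:")
    for act in activities:
        lines.append(f"- {act['date']}: {act['type']}: {act['note']}")
    lines.append("Highlight open questions and suggested next steps.")
    return "\n".join(lines)

contact = {"name": "Dana Lee", "company": "Acme Services"}
activities = [
    {"date": "2026-01-10", "type": "call", "note": "Discussed renewal timeline"},
    {"date": "2026-01-17", "type": "email", "note": "Sent updated proposal"},
]

prompt = build_summary_prompt(contact, activities)
print(prompt)
```

The point of the pattern is that the model never starts from a blank page: the CRM supplies the structure, and the LLM works within it.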
General LLMs are only one piece of a healthy AI stack. Real leverage comes from the workflows that surround them. For a service‑based firm, that often means AI routines built into how you intake, serve, and follow up with clients.
These workflows rely on more than just “a good model.” They depend on your CRM configuration, your automation design, and the specific AI agents and integrations that tie everything together. This is the layer where a partner like InboundAV focuses: connecting your chosen AI workflows with HubSpot, designing automations that align with your business model, and ensuring AI supports how you actually serve clients.
One emerging theme in 2026 is that most enterprises are moving beyond experimenting with AI to deploying agents and workflows in production. Many leaders now cite AI sprawl as a major risk if these systems are not governed centrally.
When you align your LLM choices with your CRM and your automation, you move beyond everyone testing AI in their own silo. You start to build a unified, firm‑wide approach that supports your vertical and your way of working.
You do not need to rebuild your entire tech stack to make progress here. A realistic starting point: pick a short list of approved tools, document which tool owns which type of task, and connect one or two high‑value workflows to your CRM.
New releases like Codex 5.5 and Opus 4.7 will continue to show up on your radar. The real opportunity is to use these milestones as prompts to refine your overall AI strategy, reduce sprawl, and make sure every new capability fits into a clear picture of how your firm uses AI from top to bottom.