In this article series, we evaluate recent rapid technology changes using the operational framework of tools, people, and processes.
Part 1 introduced three recent changes in Tools that affect small and mid-sized businesses: Copilot Agents, Plan Designer, and Azure AI Foundry. We discussed each tool’s capabilities and some considerations for moving forward.
In Part 2, “People: Empowering and Educating Your Team,” we explore the context of organizational change and empowering your team to adapt to these changes.
II. People: Empowering and Educating Your Team
All the cutting-edge tech in the world won’t help if your team isn’t enabled to use it or, worse, if they misuse it. “People” is about training, culture, and roles.
1. The New Skillset: AI-Augmented Work
We need to encourage a culture where employees see AI as a tool to collaborate with, not a threat or a magic box. That means training in two areas:
- Tool Proficiency: Make sure employees know how to use the features of the tools they already have. Many Microsoft 365 users still don’t know about the simplest time-savers, let alone Copilot. If you turn on Copilot for your staff, schedule a demo or curate a short video of it in action that is relevant to their jobs. For example, show salespeople how Copilot can pull CRM data into an email draft; show HR how it can summarize lengthy policy docs. Microsoft often provides training content, and partners like us can run custom workshops. We’ve developed training specifically suited to users in small organizations who need quick hits with practical day-one benefits. The ROI on training is high: an hour could unlock dozens of hours saved monthly through feature usage.
- AI Literacy: We also need to train soft skills around AI, what some call “media literacy” or “AI literacy.” Teach employees that AI outputs may sound confident but can be wrong. Encourage critical thinking, and ensure they have been trained not only on the principles but also on practical techniques for separating symptoms from problems and solutions from opinions. For instance, if Copilot writes an email reply, they should read it thoroughly before sending to ensure it matches the intent and has no factual errors about the client or product. We’ve seen cases where an unchecked AI draft could have confused a client (“Oops, it mentioned the wrong product name because it grabbed an old reference”). Another aspect is ethical use: using AI responsibly, not to, say, generate inappropriate content or violate compliance (for example, don’t ask it to do something that breaks confidentiality). Engage with your cybersecurity provider to ensure your AI models are protected, along with the data they read and the data you send to them. Many companies now include a short “AI usage” section in their employee handbook and SOPs.
And, importantly, AI is probabilistic, not deterministic. It can be wrong because it generates a “probably correct” answer, not a tested-to-be-correct one.
A positive way to frame it: AI is your teammate, but you’re the supervisor. It will do a lot of legwork, but you provide direction and oversight. If you do not have the subject matter expertise to assess the validity of the information, find someone or a reputable source that does. This framing resonates with many people; they realize they’re not competing with AI but managing it. The downside is that it can humanize the tool, creating a sense of intent, trust, or empathy that is not there. Be sure to balance mitigating fear against creating overconfidence.
Our team has helped many organizations work through this balance so they can guide the use of AI with confidence. Often, a consulting advisory session to work through the adoption opportunities and challenges in your organization is a great first step.
2. Role of the IT Partner (or Internal IT) in Enablement
Most SMBs outsource all or some of their IT. Your technology partner becomes crucial in people enablement. Here’s how:
- Configuration and Governance: Ensure the tools are configured to be easy to use. For example, if employees try to use Copilot but it can’t access any SharePoint files because no one set that up, they’ll find it useless; an IT admin or partner needs to adjust those settings. Another example is setting up single sign-on and integrations so users aren’t juggling logins. If Copilot can integrate with your CRM, make sure it’s connected. Employees then get a seamless experience, such as when Copilot for Sales pulls CRM notes into Teams calls. We use this powerful feature frequently, but it’s only possible if you configure the tool to do so.
- Ongoing Support and Questions: Expect that as people start using these new tools, they’ll have questions or hiccups. Maybe an employee tells you that Copilot returned a strange description of a client’s recent history. There could be many reasons beyond the probabilistic nature of AI: the employee may lack access to the latest data, or the question may have been phrased ambiguously. An experienced AI IT partner can troubleshoot and guide (“Next time, ask it this way…” or “We’ve now connected that data source, try again.”). Consider creating a channel in Teams where users can seek help or share AI tips. We often run open “office hours” Teams meetings after deploying something like Copilot to coach people through the early learning curve.
- Human Oversight in Process: Create guidelines to help your team independently decide which processes require human review or sign-off, and when. You might create a policy that says, “A manager must review all AI-generated client communications until we’re comfortable.” That’s not to stifle use, but to ensure quality initially. Over time, as trust builds, you might relax the rules or target oversight only at high-risk outputs. We encourage clients to document these interim rules so employees don’t feel in the dark. It might be as simple as a checklist: e.g., when using AI to draft content, check these three things.
3. Organizational Changes – Small but Significant
Even without adding headcount, you can tweak your organizational roles and responsibilities to leverage these tools better:
- AI/Automation “Champions”: Identify a few early adopters or enthusiastic employees and unofficially dub them champions. Give them extra training or access and let them be the go-to resource for their peers. For example, someone in finance who’s quick to learn Power BI’s AI features can be the point person when others have questions about them, reducing the load on IT and fostering peer learning.
- AI Objectives in Goals: Set team goals around productivity improvement and include AI usage. For our small business customers, this often means making the most of Copilot productivity features like summarization, ideation, and research. As companies grow and need to streamline process flows, the use cases tend to become multi-user or multi-system. For example, “Reduce manual report prep time by 50% by the end of Q4 (leveraging tools like Copilot or Power Automate).” This signals that the company values smart use of new tools and incentivizes teams to integrate them rather than treat them as novelties.
- Job Descriptions & Hiring: When hiring new people, mention that your company uses AI tools as part of the work. It can attract talent who want to work at a forward-thinking firm and filter out those entirely resistant to tech. You might also update descriptions for existing roles to include proficiency with relevant software. Notably, some companies are adding “familiarity with AI tools (e.g., Microsoft Copilot)” under desired qualifications.
4. Continued Need for Human in the Loop (Avoid Over-Reliance)
I have heard several concerns about letting AI in: “Will AI make employees lazy, or worse, will they trust it too much?” “Will they rely on inaccurate information?” Addressing these concerns comes down to a culture set from the top. Ensure your organization’s leaders model using AI to provide starting points, not final decisions. AI can eliminate the blank-page starting point; it does not eliminate final review and decision-making.
To help the team learn what over-reliance is and how to avoid it, share stories about AI failures as learning moments. For instance, if your AI tool missed a key point from a meeting, instead of shaming the tool or the user, treat it as a case study: “Copilot missed X because it happened during a breakout room that wasn’t recorded. Let’s keep that in mind, it only knows what it’s given.” The lesson: AI has limitations. You instill a healthy respect for AI’s capabilities and limits by openly discussing such instances.
From a security perspective, reinforce “Don’t blindly click or implement something just because AI said so.” There have been cases where AI suggested enabling a setting or clicking a link that wasn’t safe. User training should include asking for help when in doubt. It’s not very different from phishing training: if something looks off, pause and consult IT.
Lastly, teach media literacy in the context of AI-generated content. There has been a lot in the news about deepfakes and malicious or harmful AI-generated text, and employees should be skeptical of such content. For example, an employee might get a voice message that sounds like their CEO but is a deepfake asking for a money transfer. Yes, criminals are attempting such stunts. Training sessions on recognizing these, plus verification processes like “always double-check unusual requests via known channels,” will protect your company. This blends cybersecurity with AI awareness. In a small business, every person must be a line of defense.
Continue reading Part 3.
