Part 4: AI as Collaborator: Guardrails, Ethics and Responsibilities
Having an AI model available for prompt-and-response interaction is like having another team member to brainstorm with. With more perspectives, you increase the probability of solving the problem, especially if that second person brings specialized knowledge and understanding.
But because the model is informed by digitized, non-tested data from the internet, it can provide flawed decisions and guidance.
With humans, we assume some of the information we receive is flawed. We assume that the data a person is working from could be incomplete and that their problem-solving and decision-making processes could be imperfect.
We know that humans are not system-tested. What they say may or may not be accurate, so we combine our experience and our own observations of the real world with what the person is saying, and try to discern which information to act on. Is what they are saying entirely correct? Partially correct?
In the past, we could assume that if a computer provided an answer, it was accurate; if it was not, the programmers had not tested it sufficiently and it needed to be fixed. The bar for well-run software was accuracy. We would even use a computer to check a human's work, such as the solution to a complex math problem.
In this new world, we need to be aware that chatbots built on deep-learning AI and large language models use probability to produce an answer. Unlike traditional software, and even unlike classical machine learning such as our OCR invoice example above, these deep-learning chatbots are not system-tested. And yet many people are treating these tools as if they were. We often hear our customers ask a chatbot a question, or ask it to create content, and then use that content as is, as though the results had been system-tested somewhere for accuracy; as though, if they just ask the question in the right way, they will get the answer. But it will always be a prompt with a response, and that response is based on probability, not system-tested accuracy.
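To make that concrete, here is a minimal sketch of how a language model chooses each next word by sampling from a probability distribution. The words and probabilities below are made up for illustration, not taken from any real model, but the mechanism is the point: the same prompt can yield different responses from one run to the next.

```python
import random

# Hypothetical next-word probabilities a model might assign after the
# prompt "Our Q3 revenue was" -- illustrative numbers, not real model output.
next_word_probs = {
    "strong": 0.40,
    "flat": 0.25,
    "up": 0.20,
    "disappointing": 0.15,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one word at random, weighted by its probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt can produce different continuations on different runs:
for _ in range(3):
    print("Our Q3 revenue was", sample_next_word(next_word_probs))
```

A real model scores tens of thousands of candidate tokens at every step, but the principle is the same: the output is a weighted draw, not a verified fact.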
This brings us to another critical area for our strategy: ethics, responsibility, and potential liability.
If the model is trained on digitized internet data, what biases and errors are reflected in that information? How will it help or harm our business and our decisions if we rely on responses derived from biased information? And what happens to decisions that would change in light of information discovered in 2024, by science or elsewhere, if the foundation model was last updated in 2022?
How about legal risks? In August 2023, the U.S. Equal Employment Opportunity Commission (EEOC) settled its first AI hiring discrimination lawsuit. The three companies involved violated the Age Discrimination in Employment Act of 1967 because their AI hiring program “automatically reject[ed] female applicants age 55 or older and male applicants age 60 or over.” A class action lawsuit involving Workday is in ongoing litigation. Amazon stopped using its AI hiring tool because, having been trained on a database of primarily male applicants, it preferred resumes using words that men more commonly use in their resumes, such as “executed” and “captured.” See “Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies” (americanbar.org).
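One practical guardrail is to audit an AI screening tool's outcomes before relying on it. The sketch below is a minimal, illustrative example, using hypothetical counts rather than real hiring data, of the EEOC's "four-fifths" rule of thumb: if any group's selection rate falls below 80% of the highest group's rate, the tool may be producing adverse impact and deserves scrutiny.

```python
# Hypothetical audit of an AI screening tool's outcomes by age group.
# All counts are illustrative, not real hiring data.
screened = {"under_40": 200, "40_and_over": 180}   # applicants screened
advanced = {"under_40": 90,  "40_and_over": 36}    # applicants the tool advanced

# Selection rate per group, and the highest rate as the benchmark.
selection_rates = {g: advanced[g] / screened[g] for g in screened}
highest = max(selection_rates.values())

for group, rate in selection_rates.items():
    ratio = rate / highest
    # Below 0.8 of the highest rate is the EEOC four-fifths red flag.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, {ratio:.0%} of highest -> {flag}")
```

Running this on the sample numbers flags the 40-and-over group (a 20% selection rate against 45%, or about 44% of the highest rate), exactly the kind of signal that should trigger human review before the tool is used in production.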
Most AI and standards organizations have identified the necessity of trustworthy and responsible AI. The National Institute of Standards and Technology (NIST.gov), the same body that decides an inch is an inch, has defined these key “building blocks” of trustworthy and responsible AI (Trustworthy and Responsible AI | NIST):
- Validity and Reliability
- Safety
- Security and Resiliency
- Accountability and Transparency
- Explainability and Interpretability
- Privacy
- Fairness with Mitigation of Harmful Bias
Microsoft requires that all its customers commit to responsible and ethical AI. Its landing page can be found here: Empowering responsible AI practices | Microsoft AI. Its 2025 Responsible AI (RAI) Transparency Report can be found here: Responsible AI Transparency Report (microsoft.com)
Don’t Go It Alone: Some ways TechHouse can help
- Free webinars to stay aware: Contact us for the upcoming schedule.
- Check out Kathy’s AI panel on BrightTALK on June 21, 2024, at 1 p.m. Eastern.
- Our CoPilot Aware™ solution contains curated assessments, sample policies, communications, and guides for your AI adoption journey.
- Training and mentoring for you and your team: From cybersecurity to critical-thinking workshops, our team is dedicated to transferring the skills that will help your team thrive in this new world.
- Technical Preparedness and Tools: Engage us for AI preparation, data governance, cybersecurity, or a CoPilot/AI rollout in your organization.