As technology continues to rapidly evolve, businesses are increasingly turning to Artificial Intelligence (AI) to streamline operations, enhance efficiency, and gain a competitive edge.
The benefits of integrating AI into business processes are clear, but legitimate concerns accompany this technological leap.
One primary concern is the ethical implications of AI implementation. As AI systems such as ChatGPT, ClickUp, Copy.ai, or Kickresume become more sophisticated, they often require access to vast amounts of data to function effectively. This raises questions about privacy and the responsible use of sensitive information, as well as legal concerns surrounding the use of intellectual property.
“The question is fair use or is it a violation of copyright,” says Maura Grossman, a Research Professor in the School of Computer Science at the University of Waterloo, whose expertise centres on AI policy and ethics.
She notes that an AI user can reference a particular article, book, or poem, despite it being copyrighted. “It shouldn’t be able to do that because that’s a copyright infraction, but it can. The law hasn’t caught up with that yet but there are a number of legal cases now pending.”
Algorithms a concern
As well, Professor Grossman says bias in AI algorithms is another major concern. AI systems learn from historical data, and if that data contains biases, the algorithms can sustain and amplify them, resulting in discriminatory outcomes and reinforcing existing social disparities.
“You’re going to find that in the language as well as the images. OpenAI has spent a lot of time trying to remove toxic language from the system, so you get a little bit less of that with ChatGPT,” she says, referring to the problems Microsoft experienced when it released its Tay bot in March 2016. The bot, which posted as TayTweets under the handle @TayandYou, was fed politically incorrect phrases and inflammatory messages by Twitter (now known as ‘X’) users and began releasing racist and sexually charged messages in response. Microsoft suspended the account after 16 hours, erased the inflammatory tweets, and took the bot offline two days later.
“Most systems, like ChatGPT, are trained on the internet and that has its pluses and minuses,” says Professor Grossman, adding ‘hallucinations’ pose another big problem for AI users. “ChatGPT for example is trained to generate new content and to sound very conversational, so it uses what it has learned on the internet to predict the next most likely word. But that doesn’t mean it’s telling you the truth.”
Official policy needed
She says there have been instances of people using AI to conduct legal research and submitting bogus case citations in court. “I think the first case happened recently in B.C., but it has also happened all over the U.S.,” says Professor Grossman.
For businesses utilizing AI, she recommends drafting an official policy to outline usage.
“First they need to have a policy, and then they need to train whoever in the business is going to use AI, because people need to understand what it does well and doesn’t do well,” she says. “Your policy needs to say what permissible uses are and what impermissible uses are.”
Impermissible uses could include creating a deep fake video in the workplace.
“Even if it’s a joke, you don’t want employees creating deep fakes,” she says, noting the policy should also outline what workplace devices can be used for AI. “If you need to save something because you’re involved in a lawsuit, then you don’t want it to be on an employee’s personal device because you won’t have access to it.”
Employees require training
Professor Grossman also recommends ensuring employees clearly know which AI tools are approved and which are not, and that they are fully trained.
“You don’t want them violating intellectual property rules or other privacy rights. You also don’t want them putting into a public tool any confidential or proprietary information,” she says. “Some companies have turned off the ability to use these AI tools because they are terrified employees will put proprietary information out there while asking a question about a problem they are working on. If you’re using one of these open-source tools, it’s like Google or anything else; it’s free rein.”
Professor Grossman says rules and regulations around AI will be gradually strengthened, noting a new regulation coming into play in B.C. pertaining to issues surrounding intimate imagery is just one example.
“As soon as this starts making its way more into politics, we will start to see more effort into creating regulations,” she says, referring to a recent ‘deep fake’ image that surfaced of U.S. President Joe Biden.
Despite these issues, Professor Grossman says AI is something more businesses will become comfortable using, and she encourages them to embrace the new technology.
“It will save on efficiency,” she says, noting AI can greatly assist in the creation of marketing material. “Companies need to explore it and learn about it but learn about it in safe ways and understand where it can be beneficial and not just let people experiment on their own because that’s going to lead to a lot of trouble.”