
When integrating AI into business, it’s okay to start small

Executives should understand the fundamentals of AI before taking a company on the great leap forward, says Adelaide Business School’s Ralf Zurbrugg.

The start-up scene is awash with companies using artificial intelligence, as AI transforms everything from healthcare to logistics to the funeral industry.

For business leaders looking to innovate and grow, AI can appear as a wave of opportunity – or as an unrideable tsunami they feel compelled to harness because everyone else is doing it.

“But just because it’s a buzz term, it doesn’t mean your business requires AI,” cautioned Professor Ralf Zurbrugg from Adelaide Business School.

He advised business leaders to start with an understanding of AI and its potential applications.

“The first step is becoming acquainted with the technology and its applications,” he said.

“Most small to medium sized businesses could find some usefulness for analytics, but they need to consider whether there is a role for AI.

“That falls on the CEO to basically educate themselves and decide whether there’s a place for it or not.”

Zurbrugg is the associate head of research at Adelaide Business School at the University of Adelaide and the discipline leader for business analytics.

His colleague at the school, MBA director Gary Bowman, maintains that “getting one’s head around AI” is non-negotiable for leadership, even as the technology rapidly evolves.

“Business leaders need to understand the opportunities and threats that accompany AI, so it’s becoming a more significant part of our Executive MBA curriculum,” Bowman said.

“While we provide some fundamentals around how AI actually works, particularly in a data context, we focus on the leadership requirements of the changing world so that our graduates can harness AI in a responsible manner.”

The Productivity Commission reported in February 2023 that “Australia needs to keep pace with technological developments to underpin our future economic prosperity”.

It found that Australian businesses trail those in other OECD countries in their use of data analytics and AI: only 6 per cent of companies were using data analytics, and even fewer were using AI.

However, before integrating AI into their business workflows, Zurbrugg said management and leadership should weigh up the risks around security, ownership and the appropriate use of data.

“Any time there’s an interaction that uses, for example, OpenAI technology through an API, that information does get carried through to OpenAI,” Zurbrugg said.

For many companies, custom-built chatbots have provided an early introduction to AI integration: they are relatively easy to design and build, and they allow data to remain in-house.

A watershed moment in chatbot use came with the worldwide launch of OpenAI’s ChatGPT – just three months before the Productivity Commission report was released – which gave the public a widespread, hands-on introduction to AI’s potential.

Now, it feels like everyone who uses a computer at work also uses ChatGPT as an assistant, a research tool or a way to put down the bones of a report.

Zurbrugg believes data ownership and ownership of ideas will be challenged as AI use becomes widespread.

“Absolutely, it’s going to be a major legal issue in the future,” he said.

“If we look at images generated with AI, generally speaking they’ve come up with a co-ownership structure so that you can use it for your own purposes, but so can the AI creator.

“But when it comes to something trickier, for example, you’re running a business and feeding information into that large language model – and that model is not owned by yourself – then it does cause issues because [their terms and conditions allow them to retain] that data and use it for whatever they like.

“It raises the important point that you need to ensure that whatever the communication is, it’s not something commercial in confidence or confidential for the client or user.”

Customers’ ownership of their data could also prove to be a challenge for businesses, he said, not just because of consumer rights, but also regarding what data is being collected.

Woolworths uses its chatbot Olive, developed in-house, to answer customer queries and resolve service issues, including providing refunds for missing or damaged items.

“If you’re talking about bananas going rotten, it’s of an ilk that it doesn’t matter where the data ends up,” Zurbrugg said.

“But if you’re providing sensitive information through this mechanism, that’s something you’re going to have to think twice about.”


Meanwhile, Coles has introduced AI-managed cameras to track customers as they move through its supermarkets.

Ostensibly, it is to combat theft, but Zurbrugg said the system could also collect data.

“The cameras are looking at what you’re focusing on or picking out to browse or purchase … and that is basic data as well,” Zurbrugg said.

“Have you agreed implicitly to share the data of you browsing around the shop?

“They’re sort of open-ended questions at this stage that people haven’t really thought too much about, but it is an issue to think about.”

Zurbrugg said in the short-term AI will be rolled out as a replacement for repetitive, automated tasks where the information inputs and outputs are “easily pigeonholed” using narrow artificial intelligence.

“We will move to a stage where you go to a drive-thru food outlet and, rather than having a human take your order, it could be done through an AI routine.

“But in the medium term, I think more complicated strategic decision-making will start to be offered by AI.

“For example, a top management team of a company deciding whether they should go ahead with a product or not, and weighing many different factors that need to be considered at that stage.

“It will still be a human decision-making process, but it wouldn’t surprise me if AI agents were to be part of that top management team discussion, providing insights and ideas for how to market that product, or whether to invest in a particular project or not.

“It wouldn’t surprise me if that type of AI use occurs within the next five years or so.”

Taking full advantage of AI would require investment in data infrastructure, data cleaning, data integration processes and data security. Underscoring the importance of data cleaning, he said biased data could lead to algorithmic biases that then colour decision-making.

He cited Amazon and Google as high-profile examples, both of which managed to avoid long-term reputational damage.

In 2018, it was reported that Amazon had developed an AI algorithm to assist with its hiring of software developers and other tech staff. However, the algorithm was trained on data primarily consisting of resumes the company had received from male applicants, reflecting the male-dominated tech industry.

Amazon’s hiring tool reportedly downgraded resumes containing the word “women’s” or graduates from women’s colleges.

Google also hit the spotlight with the release of its AI tool Gemini earlier this year. In trying to ensure gender and racial representation in generated images, it showed unexpected bias, reportedly refusing to produce images of all-white groups and producing historically inaccurate images, such as a Caucasian female US president.

“These are all due to biases in the data, not necessarily conscious biases, but ones that feed into the decision making that it churns out,” Zurbrugg said.

“Not only do you have to train the algorithm, but you have to make sure it behaves because AI is known to sometimes hallucinate and not give the answers you’re hoping it will – so there’s a risk management element to that.”

Considering the bigger picture again, he said building a governance framework that addresses the use of AI is important, particularly given its rapid rate of advancement.

While this is a regulatory issue, the need for companies to also be proactive can leave leaders feeling underprepared. Bowman said this is where an Executive MBA can bridge the gap between recognising a looming problem and knowing what action to take.

“There is no substitute for face-to-face learning, particularly for something like the Executive MBA, where so much of the value comes from the interaction of senior leaders and world-leading academics,” he said.

“With such an engaged and diverse cohort, we offer a learning experience that you’d struggle to find anywhere in the world … and it’s something that you just can’t replicate online.”

Learn more at the Adelaide Business School’s Executive MBA Information Session on June 13.

Copyright © 2024 InDaily.
All rights reserved.