Five pitfalls to avoid when using AI in the workplace

Leaders need to be more open about the use of AI at work and any potential future regulations that may require certification and declaration of usage

Let’s look at some common pitfalls for leaders to avoid when using AI in the workplace:

Pitfall #1: Data privacy

Last year, more than a thousand technology leaders, including Elon Musk and Steve Wozniak, called for a moratorium on AI development to give the regulatory landscape time to catch up. More recently, AI developers have also faced several key legal challenges over alleged intellectual property infringements in the assembly of their training datasets.

As a result, we are in an interesting position: users are still caught up in the hype around AI applications while, at the same time, becoming more mature in their concerns about its ethical use.

Leaders need to be more open about the use of AI at work and about any future regulations that may require certification and declaration of usage. At The Talent Enterprise, we have received client requests to ensure that no AI tools will be used in our assessment protocols at all, while other clients are keen for us to lead the application of AI to talent in their organisations.

Building greater trust with the team during a period of transformation is key. Investing time to communicate openly, weighing plans against potential risks, will minimise the chances of a backlash.

Pitfall #2: Prone to mistakes

Currently, AI applications are, at best, limited, mediocre and prone to mistakes. These mistakes can range from minor inconveniences to significant operational disruptions, financial losses, or even harm to an organisation’s reputation. At this stage of the hype-cycle, the rush to deploy AI has arguably reduced the quality of outputs.

To address this, organisations must first make use of comprehensive, high-quality data that is regularly updated to reflect new information. Second, they should build in a human oversight mechanism, so that AI-made decisions can be reviewed and overridden where necessary. In this way, AI acts as a reliable tool that supports workplace efficiency and effectiveness.
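As a rough illustration of what such a human oversight mechanism could look like, here is a minimal Python sketch of a review loop in which low-confidence AI decisions are escalated to a person rather than applied automatically. The `AIDecision` structure, the confidence threshold and the review queue are hypothetical assumptions for the example, not a description of any particular product.

```python
from dataclasses import dataclass

# Hypothetical structure for a decision produced by an AI system.
@dataclass
class AIDecision:
    case_id: str
    recommendation: str   # e.g. "approve" or "reject"
    confidence: float     # the model's own confidence score, 0.0-1.0

# Decisions below this (assumed) threshold are escalated to a human reviewer.
REVIEW_THRESHOLD = 0.85

def route_decision(decision: AIDecision, review_queue: list) -> str:
    """Apply the AI recommendation only when confidence is high;
    otherwise queue the case for human review and possible override."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return decision.recommendation       # applied automatically, but still logged
    review_queue.append(decision)            # escalated to a person
    return "pending_human_review"

queue: list = []
print(route_decision(AIDecision("case-001", "approve", 0.93), queue))  # approve
print(route_decision(AIDecision("case-002", "reject", 0.61), queue))   # pending_human_review
print(f"{len(queue)} case(s) awaiting human review")
```

The design choice here is deliberate: the AI never has the final word on borderline cases, which keeps people accountable for the decisions that matter most.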

Pitfall #3: Furthering bias and discrimination

Ask ChatGPT to create an image of a CEO, CFO, lawyer, doctor, president or prime minister, and you will be delivered a distinctly male, pale and stale image, because the data it was “trained” on is based on a fixed, outdated sample.

As we continue to develop more workplace applications, we must avoid the risk of preserving our organisations in the current imperfect state in which we find them. Organisations need to diversify and de-bias their data, possibly by enriching existing datasets or reworking algorithms to remove bias.

Additionally, companies can conduct regular audits of AI applications to identify and address any sources of bias. These measures will enable businesses to harness AI effectively, while promoting a more inclusive and fairer workplace.
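To make the idea of a regular audit more concrete, the sketch below shows one simple check an organisation might run: comparing selection rates across demographic groups in an AI-assisted screening step and flagging any group that falls below the widely used “four-fifths” rule of thumb. The records, group labels and threshold are illustrative assumptions only, not real data.

```python
from collections import defaultdict

# Illustrative records of AI-assisted screening outcomes (hypothetical data).
decisions = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]

def selection_rates(records):
    """Selection rate per group: number selected divided by group size."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        selected[r["group"]] += r["selected"]
    return {g: selected[g] / totals[g] for g in totals}

def audit(records, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (the 'four-fifths' rule of thumb)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

print(selection_rates(decisions))  # roughly {'A': 0.67, 'B': 0.33}
print(audit(decisions))            # {'A': False, 'B': True} -> group B flagged
```

A check like this is only a starting point: flagged results still need human investigation into why the disparity exists and how the underlying data or algorithm should be reworked.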

Pitfall #4: Loss of human touch

As we introduce AI into our work, there is a risk that we dehumanise our workplaces to the detriment of our colleagues and customers. For instance, the widespread implementation of AI chatbots in the consumer banking industry has reduced costs, increased profitability and given customers access to services 24/7.

However, a decade ago, local bank branches had staff who knew us personally and guided us through complex bureaucracy to solve our financial issues, without us needing to undertake additional “shadow labour” as customers.

Just like generative AI, modern workplaces are human constructs, and our experience of work remains a key part of each of our identities. Rushing into automation can accelerate the potential for alienation, disengagement, frustration and withdrawal. To counteract this loss of human touch, businesses must strike a balance by incorporating AI as a tool to assist rather than replace human decision-making altogether.

Pitfall #5: Job displacement for entry-level roles

Looking ahead, the risk of negative impacts on career paths and advancement opportunities is concerning. Let’s stick with the banking industry as an example, where people traditionally started as tellers and gained essential skills before progressing into more technical and senior roles.

With automation poised to eliminate many entry-level, routine jobs – not only in banking but across all sectors – young and inexperienced professionals face the potential loss of critical career launchpads.

I am confident the “lump of labour” fallacy will once again prove to be just that, a fallacy: as overall productivity increases, so will the total amount of employment and economic activity. However, it is difficult to find research that indicates the size and shape of new job creation.

For those entering the workforce, organisations can consider creating new roles that leverage human capabilities in areas where AI falls short, such as emotional intelligence, creativity and strategic thinking. Leaders can equip individuals with the skills needed to work alongside AI, which can mitigate the impact of job displacement.

It is too soon to determine how AI will affect the future of work, other than to say with confidence that it will have a fundamental and far-reaching impact: this genie will not be going back into the bottle.

Just as the choices we made (or didn’t make!) about the regulation of the internet determined whether it stayed an “open” resource or became a privatised corporate domain, our choices about the use and misuse of AI will shape lives, both personally and professionally, going forward.