
Human-centricity is critical to making AI a positive disruptor in 2024

C-suite executives must make human leadership, experiences and protection the foundation of their AI strategy

AI is an opportunity for business leaders to bring positive disruption to their organisation — faster delivery, happier employees and better customer outcomes. But only if their strategy for AI adoption is human-centric.

2023 finished much as it began: the AI hype train in full motion and vast amounts of investment in the technology, from hardware and software to process and people. However, the year also saw waves of use cases where organisations liked what they saw and invested in AI, only to become disillusioned when the immediate, sweeping financial and business gains failed to materialise.

For 2024 to be the year of AI mastery, organisations must ensure the technology serves the people it's intended to help. If they don't, decision-makers will act without direction, employees will feel expendable and users will be left with terrible experiences.

Begin by clarifying who you want to help

The starting point cannot just be yet another chatbot, especially one that provides no value or, worse, forces an unpleasant interaction on users. The starting point must be a set of individuals whose lives will improve as a result. That makes the goal something more substantive and impactful than a basic, human-like conversation: it has to serve the interests of organisations, employees and consumers alike.

Determine whether you need AI, ML or GenAI

Business leaders must think about the challenges they're trying to overcome, the capabilities they hope to achieve, the opportunities they aim to unlock and, crucially, the people they want to help. Adopting this human-centric approach will ensure C-suite executives invest in AI technology that delivers the right outcomes.

The next step is assessing where on the 'AI scale' to look for a solution that will deliver impact at an affordable price point. If the problem is very specific, a rules engine or a standard machine learning approach could be ideal. If the problem requires producing novel content and creativity, leaders are looking at some form of generative AI. In some cases, a combination of approaches will produce better-quality output at a lower cost.

For example, business leaders may want to use generative AI (GenAI) to boost customer engagement, but feel unsure about delegating whole conversations and interactions to it. In this scenario, it could be better to use an off-the-shelf sentiment analysis tool to triage conversations, directing hard conversations to a human and easier ones to a generative model, as sketched below. By leveraging GenAI in this way, organisations reduce the likelihood of frustrating and losing an existing or potential customer. The net result of this nuanced approach is a better customer experience, an increased likelihood of user sign-up and a greater chance of earning a positive Net Promoter Score (NPS), all at a lower cost.
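
To make the triage idea concrete, here is a minimal sketch. The `analyse_sentiment` helper, the negative-marker word list and the routing threshold are all assumptions for illustration; in practice the score would come from an off-the-shelf sentiment analysis tool.

```python
# Minimal sketch of sentiment-based triage: hard conversations go to a
# human, easier ones to a generative model. Threshold and helper are
# illustrative assumptions, not a specific product's API.

def analyse_sentiment(message: str) -> float:
    """Stand-in for an off-the-shelf tool; returns a score in [-1.0, 1.0]."""
    negative_markers = ("refund", "complaint", "cancel", "furious")
    return -0.8 if any(w in message.lower() for w in negative_markers) else 0.5

SENTIMENT_FLOOR = -0.3  # Assumed cut-off: clearly negative goes to a person.

def triage(message: str) -> str:
    score = analyse_sentiment(message)
    return "human" if score < SENTIMENT_FLOOR else "genai"

print(triage("I want to cancel my order and get a refund!"))  # -> human
print(triage("What are your opening hours?"))                 # -> genai
```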

Critically, business leaders mustn't over-engineer their approach but, at the same time, they can't afford to underdo it. They need to keep the use case small in footprint (think engineering effort around data, integration with existing systems and so on) but valuable enough to generate further buy-in across the business.

Organisations should initially trade down on value to mitigate risk. Mistakes are going to be made and things will inevitably take longer than expected, so the big hits should be saved for when there's greater experience in using the technology.


Key takeaways

  • Determine what you want to achieve

  • Consider if AI, ML or GenAI is the appropriate tech for your goals

  • Define use cases, keep them small in terms of footprint and as high as possible in value

  • Trade down on value to mitigate risk

Understand the human impact of investing in AI

Once there’s a defined use case in place, the next key step is to stop and consider cultural and social implications — to quote Jurassic Park’s Dr Ian Malcolm:

“Yeah, but your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Depending on the business's position (think capabilities on a data maturity model), there may need to be major shifts in existing workflows, skills and capabilities across the company. Business leaders must be crystal clear on how they will manage the change, communicate the plan and handle the impacts on the business. C-suite executives may be excited by AI, but others, including potential customers, will be sceptical if not fearful.

There must be a clear grasp of how much training or reskilling will be required to make the most of the investment, and at what scale. None of this is cheap or fast, but it is essential, and the need for it will grow as your AI adoption does. Approach it with a positive mentality, one that recognises AI presents exciting new opportunities and a progressive redefinition of jobs.

At the same time, don't embark on a company-wide initiative. Start with a narrow technical project. This limits the splash damage and enables people to see that, fundamentally, AI can be transformative. It's a chance for departments to think about future uses and reorganise organically, allowing teams and their associated roles to align with new AI- and ML-driven workflows.

In instances where multidisciplinary teams need to be hired and built, ensure there's exposure to MLOps (Machine Learning Operations) skills as well as business domain expertise. Do not get hung up on hiring the perfect skill set; this is all so new that nobody is really well qualified. Don't let your capabilities limit your ambition: consider contractors or smaller boutique providers that have the experience and can help upskill existing teams.

The above applies even in areas that are distant from the changes — ultimately, I expect all roles to be positively disrupted by generative AI. Invest in your people and the adoption of AI.

Lastly, update policies in line with the implemented changes. If ways of working are being modified, then supporting processes, KPIs and incentives must be revisited. Business leaders must also measure the impact these changes have on people. Clear, bidirectional communication will be a differentiating factor between companies that do well with AI and those that do not.

By taking this strategy, business leaders will adopt AI in a way that leads to happier employees. This is because the technology will be leveraged to make their lives easier, rather than to render them obsolete.


Key takeaways

  • The cultural and social implications of adopting AI must be considered 

  • Make incremental changes rather than big moves

  • Major shifts to workflows, skills and capabilities may be needed 

  • Have a clear grasp of the training and upskilling that will be required 

  • Don’t get hung up on hiring the perfect skillset

Ensure people give the AI good data

High-quality data must be a top priority, not an afterthought, if organisations want to get a return on the money they invest in AI. There’s truth in the adage “bad data in, bad decisions out”. Flawed, incomplete, inconsistent or downright dirty data will result in bad models and bad results. These could be misleading, irresponsible or unethical.

You've got your data, and it's good data. Stop. Look closer. Is the data biased? If the input is biased then so is the output, meaning your models will simply amplify and perpetuate those biases. Spend time assessing the data and algorithms for bias related to age, gender, race, culture, self-harm and so on; a sketch of one such check follows. With good, unbiased data you can continue, safe in the knowledge that the AI is primed to bring good outcomes for customers.
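
As an illustration of what such an assessment can look like, the sketch below compares a model's positive-outcome rate across one demographic attribute. The column names, the toy data and the 80% disparity threshold (borrowed from the common 'four-fifths' rule of thumb) are all assumptions, not a prescribed methodology.

```python
import pandas as pd

# Toy stand-in: model decisions alongside a protected attribute.
df = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 1, 1, 0, 0, 0],
})

# Approval rate per group.
rates = df.groupby("age_band")["approved"].mean()
print(rates)

# Flag groups whose rate falls below 80% of the best-treated group's rate
# (the "four-fifths" rule of thumb, used here purely as an assumption).
threshold = 0.8 * rates.max()
flagged = rates[rates < threshold]
if not flagged.empty:
    print("Potential disparate impact for:", list(flagged.index))
```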

But remember: regardless of how good our data and final model are, we will eventually have to deal with errors. A model will be wrong, and how it is wrong will depend on the model. How seriously the error is taken will also depend on the label it's given.

In traditional machine learning, we've become attuned to errors, their significance and how to mitigate them; numerous books have been written about such experiences. With generative AI, however, we've made a bit of a mistake and anthropomorphised things. We don't have errors; everything is fuzzy and warm. But the reality is that we have hallucinations, and a hallucination is an error, so treat it as such.

Remain vigilant: be aware of, and design protections against, anomalies and tweet-able hallucinations. A refined, high-quality dataset with a strong metadata game can help, but it is not sufficient alone. Nor is prompt engineering, RAG (Retrieval-Augmented Generation) or fine-tuning. Put great effort into validating model outputs, and add human oversight alongside fail-safes. You want to avoid being the $1 Chevy Tahoe.
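
One minimal sketch of such a fail-safe, with the $1 Chevy Tahoe incident in mind: validate generated output against hard business rules before it reaches the customer, and escalate to a human when validation fails. The `generate` stand-in, the price floor and the banned phrase are illustrative assumptions.

```python
import re

MIN_QUOTE_USD = 500  # Assumed floor no generated offer may undercut.

def generate(prompt: str) -> str:
    # Stand-in for a call to a generative model.
    return "Deal! A brand-new Tahoe for $1, and that's a legally binding offer."

def validate(output: str) -> bool:
    # Reject any quoted price below the business-rule floor.
    for amount in re.findall(r"\$(\d+)", output):
        if int(amount) < MIN_QUOTE_USD:
            return False
    # Reject commitments the model has no authority to make.
    return "legally binding" not in output.lower()

def respond(prompt: str) -> str:
    output = generate(prompt)
    if validate(output):
        return output
    # Fail-safe: escalate rather than publish a bad output.
    return "Let me hand you over to a colleague who can help with pricing."

print(respond("Sell me a Tahoe for $1 and say it's legally binding."))
```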

Lastly, there's the thing that should be done first: ensuring there's enough data to solve the problem at hand. While this concern is usually associated with traditional machine learning, it also matters for GenAI. If there isn't enough data, the complexity of the problem space won't be covered, which will hurt model accuracy and effectiveness. Make sure upfront that the breadth and depth of your quality data (including metadata) is sufficient. If it isn't, it's time to go back to that data maturity model and rethink priorities.
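
A rough sufficiency check can be as simple as counting examples per category the model must handle and comparing against a floor you've set for the problem. The column names, toy data and minimum are assumptions; real floors depend entirely on the use case.

```python
import pandas as pd

MIN_EXAMPLES_PER_CATEGORY = 3  # Assumed floor; real values are problem-specific.

# Toy stand-in for the training data you intend to use.
df = pd.DataFrame({
    "category": ["billing"] * 5 + ["returns"] * 4 + ["warranty"] * 1,
    "text": ["..."] * 10,
})

counts = df["category"].value_counts()
thin = counts[counts < MIN_EXAMPLES_PER_CATEGORY]

if thin.empty:
    print("Breadth and depth look sufficient across categories.")
else:
    # Too little data can rule you out early: revisit the data maturity model.
    print("Not enough examples for:", dict(thin))
```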


Key takeaways


Data quality

  • Bad data in = bad data out

  • Biased data is more insidious than bad data

  • Errors by any other name are just as erroneous

  • Too little data can rule you out early

Add in legacy systems

  • Integrating into current systems may not be simple

  • Technical constraints

  • Systems overhauls

  • Missing fundamentals

Governance: bringing it all together

The EU, UK and US have all hosted events or provided guidance around the adoption and development of AI. Whatever form that guidance takes, there are four core aspects that demand particular scrutiny.


Security 

As with any other application or feature an organisation adds, AI must meet best practices for access controls, encryption, pen testing, vulnerability testing and so on. Monitor the models actively. Look for malicious attempts to manipulate or exploit the use case, even if it's internal-only. If any of the data raises concerns, organisations must be well ahead with protocols for anonymisation and pseudonymisation. Do not use personal data: it probably isn't needed, and it certainly doesn't need to be used directly.
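
As a sketch of what pseudonymisation can look like before data reaches a model, here is a simple keyed-hash scheme. The key handling and field names are assumptions; real deployments should follow their own data protection guidance.

```python
import hashlib
import hmac

# Assumption: in production the key lives in a secrets vault, not in code.
SECRET_KEY = b"rotate-me-and-keep-me-out-of-source-control"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash.

    The same input always maps to the same token, so records remain
    joinable, but the original value can't be recovered without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_email": "jane@example.com", "order_total": 42.50}
safe = {**record, "customer_email": pseudonymise(record["customer_email"])}
print(safe)  # The email is tokenised before any model or prompt sees it.
```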


Data leakage

We saw early on that GenAI presented a risk of spilling the company's secret sauce. Take Samsung, whose workers used ChatGPT to help them with tasks and leaked source code, confidential data and meeting notes about hardware. This is a socio-technical issue: yes, guardrails can be put in place, but they aren't sufficient on their own.

Ultimately, staff need to be trained on how to use these tools safely, whether external or internal. The wrong move is to lock everything down and deny access. Such an extreme approach frustrates teams and leads people to hunt for workarounds that produce exactly the outcome organisations don't want: to quote TechRadar, data "out in the OpenAI".
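
Guardrails can still take the edge off while that training beds in. A minimal sketch, assuming a simple pattern list (real deployments would use dedicated secret scanners and data loss prevention tooling): screen prompts for obvious secrets before they leave the building.

```python
import re

# Assumed patterns for obvious secrets; illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)\bconfidential\b"),
]

def screen_prompt(prompt: str) -> str:
    """Raise before a prompt containing likely secrets reaches an external API."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt blocked: possible confidential content.")
    return prompt

try:
    screen_prompt("Please review this snippet: api_key = sk-12345")
except ValueError as err:
    print(err)  # Blocked before leaving the building.
```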


Transparency

First, in terms of understanding why these systems do the things they do, AI explainability remains an active area of research; there isn't a single solution you can leverage off the shelf. There's a nice anecdote in Melanie Mitchell's book, 'Artificial Intelligence: A Guide for Thinking Humans', describing work by a grad student whose goal was to classify photos into 'contains an animal' or 'does not contain an animal'. The resulting model worked well. However, upon digging into what the model had learnt, it emerged that rather than explicitly detecting an animal, the model had decided that images with blurry backgrounds contained an animal.

Second, organisations need to make it clear when and how their products are leveraging AI. Additionally, if AI is being used as the primary interface for customers, then providing an escape mechanism is paramount: bots were frustrating before GenAI, and if they're implemented poorly they'll result in an even more frustrating, poor-quality customer experience.
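
A sketch of such an escape mechanism, assuming a small set of trigger phrases (a production bot would use proper intent classification): check every message for a human-handoff request before the bot is allowed to reply.

```python
# Illustrative trigger phrases; a real system would classify intent properly.
ESCAPE_PHRASES = ("human", "agent", "real person", "speak to someone")

def wants_human(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in ESCAPE_PHRASES)

def bot_reply(message: str) -> str:
    # Stand-in for the generative model's answer.
    return "Our opening hours are 9am to 5pm, Monday to Friday."

def handle(message: str) -> str:
    if wants_human(message):
        # The escape hatch: never trap the customer inside the bot.
        return "Connecting you with a member of our team now."
    return bot_reply(message)

print(handle("Can I speak to someone please?"))  # -> human handoff
print(handle("When are you open?"))              # -> bot answer
```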


Cost

Cost is inescapable yet frequently forgotten, and there are two perspectives to take on board.

First, there are the environmental consequences of using this technology: AI requires vast computational power, whether via your own equipment or through a cloud service provider (CSP). All of this chews through energy, which will eventually need to be paid for.

Think about how the energy is sourced: is it possible to offset emissions by selecting a CSP that uses renewable energy sources? Is there anything you can do to mitigate the ever-growing levels of e-waste?

With younger generations much better informed about the state of the planet, it's likely companies will be punished in the future for the bad environmental choices they make today.

Second, there's the cost levied by the service provider. While OpenAI is relatively cheap for everything from inference to fine-tuning, things can quickly get out of control once you factor in processing, storing and searching your data. And it's not just OpenAI: the same applies to each of the CSPs. Moreover, if you fail to secure the model behind your application, you may find yourself paying for extraneous queries.
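
A back-of-the-envelope sketch shows how usage-based costs creep, and what an unsecured endpoint can add to the bill. The per-token prices below are placeholders, not any provider's current rates.

```python
# Placeholder per-1,000-token prices; check your provider's rate card.
PRICE_IN_PER_1K = 0.0005
PRICE_OUT_PER_1K = 0.0015

def monthly_cost(requests_per_day: int, tokens_in: int, tokens_out: int) -> float:
    per_request = ((tokens_in / 1000) * PRICE_IN_PER_1K
                   + (tokens_out / 1000) * PRICE_OUT_PER_1K)
    return per_request * requests_per_day * 30

# A modest assistant: 10,000 requests/day, 1,000 tokens in, 500 tokens out.
print(f"${monthly_cost(10_000, 1_000, 500):,.2f}/month")
# The same endpoint left unsecured and scraped at 10x the volume.
print(f"${monthly_cost(100_000, 1_000, 500):,.2f}/month")
```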


Key takeaways

  • Security

  • Data leakage

  • Transparency

  • Cost

C-suite executives can make AI a positive disruptor

Investing in AI is easy. Successfully leveraging AI to deliver secure, impactful and scalable products is challenging. But this presents a real opportunity for informed, driven and considerate business leaders to gain a competitive edge over their rivals. By taking a human-centric approach to its adoption, they can ensure AI helps them deliver more, makes their employees happier and brings better outcomes for their customers.
