The rapid growth in AI has produced some incredible successes: Nvidia’s market capitalisation recently reached approximately $3tn, OpenAI was valued at $80bn in February, and Mistral AI’s latest fundraise valued the company at almost €6bn (not bad for an organisation founded in April 2023).
Away from the financials and hype, the impact and adoption of AI tell a different story. A recent BCG study of 21 countries found that over 75% of respondents were aware of ChatGPT. Copilot and other GenAI tools are in widespread personal and enterprise use, but at significant cost – Microsoft’s CO2 emissions in 2023 were 30% higher than its 2020 baseline, and it plans to triple the rate at which it adds datacentre capacity in the first half of 2025.
Less widely reported is the success/failure rate of AI implementations, and unsurprisingly there are a number of hurdles to overcome. New skillsets are needed to navigate 1) the AI vendor ecosystem, 2) prompts and 3) models, all of which requires resources (e.g. product owners, prompt engineers, traditional developers, third-party support), for which there needs to be a solid business case. The result is that many companies struggle to move beyond the experimentation stage, for a number of reasons:
ROI – benefits are difficult to prove, while experimentation carries real costs.
Resource limitations – people, time and money are all needed to build infrastructure, and the pool of AI talent is small, albeit growing.
Security – encryption, access control over users in application services, traceability and accountability are all required.
Test scope – business and historic data are required, and both need to be clean and specific to the use case.
Tooling – the build-vs-buy decision is complicated by a fragmented landscape of tools and frameworks, and any depth of capability requires investment.
Difficulty managing multiple AI providers – each provider offers different models, multiple versions of each model, and different input and output formats.
Too much choice?
The availability of such a wide range of models adds a degree of complexity, with newer versions constantly being released and a huge array already available, through dedicated vendors and platforms or open source – Hugging Face currently lists over 700,000 models and over 160,000 datasets, all accessible to individuals for free. Making sense of the landscape is not straightforward and any longer-term use of AI requires teams to refine, test and re-deploy services, so the flexibility to change providers is important. Technology organisations that struggle with software testing won’t be able to effectively test their AI systems either.
Connecting to an LLM environment that offers the latest models is preferable, and avoiding models with very specific syntax and language requirements gives a greater degree of control and reduces the likelihood of vendor lock-in. The UK government recently admitted it lacked the leverage to negotiate better prices on cloud services because of its dependence on a single vendor.
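One common way to reduce lock-in of this kind is to put a thin abstraction layer between your application and any individual vendor's SDK, so switching providers becomes a configuration change rather than a rewrite. The sketch below is a minimal, hypothetical Python illustration – the provider names, model identifiers and adapter classes are invented stubs for demonstration, not real vendor APIs:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Completion:
    """Normalised response shape, regardless of which vendor produced it."""
    text: str
    model: str
    provider: str


class LLMProvider(ABC):
    """Provider-agnostic interface the rest of the codebase depends on."""

    @abstractmethod
    def complete(self, prompt: str) -> Completion: ...


class StubProviderA(LLMProvider):
    """Stand-in for one vendor's SDK (a real adapter would call its API)."""

    def complete(self, prompt: str) -> Completion:
        return Completion(text=f"A answers: {prompt}", model="a-model-v1", provider="A")


class StubProviderB(LLMProvider):
    """Stand-in for a second vendor with a different underlying API shape."""

    def complete(self, prompt: str) -> Completion:
        return Completion(text=f"B answers: {prompt}", model="b-model-v2", provider="B")


def get_provider(name: str) -> LLMProvider:
    """Resolve a provider from configuration rather than hard-coded imports."""
    registry = {"A": StubProviderA, "B": StubProviderB}
    return registry[name]()


# Switching vendors is now a one-line configuration change:
client = get_provider("A")
result = client.complete("Summarise our Q3 sales data")
```

Because every adapter returns the same `Completion` shape, downstream code (logging, evaluation, testing) is written once and reused as models and vendors change.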
AI services are still nascent, and while the shortage of experienced talent is unlikely to cause problems in the short term, forward-thinking organisations will already be considering how to build capability internally as adoption of, and reliance on, AI increases.
If you would like to discuss any of these areas in greater detail, please get in touch. At Tenon we help organisations find the right talent, and work closely with a network of experienced technology leaders who are equipped to help you understand the relevance and potential impact of AI on your own organisation.