In short: regulation, responsibility, application and adaptation. But before answering the question more fully, it is useful to take a step back and consider how such a conference in 2024 might differ from one held in 2023. A year ago, the AI community was still grappling with the enormous interest in ChatGPT that had followed its release in November 2022. There was plenty of excitement about this new technology, with the public suddenly becoming aware of previously niche technologies such as (large) language models, but even within the community the depth of understanding was often limited.
One of the striking things about this latest conference, compared to similar events held last year, was therefore the sense of consolidated understanding, combined with a willingness to evaluate AI (and in particular generative AI) technologies in a more measured and cool-headed manner.
What is it good for? How does it perform? Where is it going? What are the challenges? And, most importantly, what are the risks? This is where the different perspectives proved to be particularly useful.
Responsibility
A related topic, covered by both technology practitioners and policy experts, was the responsibility of organisations for the AI models they use in their systems, even when those models were developed by a third party such as OpenAI or Google. Their point was that the business remains fully liable for any behaviour (expected, unanticipated, or otherwise), and that it is the business's responsibility to check and validate any behaviour in production.
After noting the difference in approaches to regulation in the UK/EU (standards enforced by regulation) versus the USA (standards enforced by litigation or class action lawsuits), the possibility was raised of organisations being held responsible for any shortcomings of a third-party model that their solution uses (e.g. it having been trained on unauthorised material, or the employment conditions of its data annotators).
Class actions have been brought against companies in other contexts, and so may well be applied to generative AI technologies, which was certainly a sobering prospect for anyone engaged in the development of applications using them. For more detail on this topic, take a look at this Point-of-View from GFT's Simon Thompson. A similar point was made by the UK advertising regulator: ultimately, an advert is the responsibility of the agency publishing it, regardless of whether or not it was generated by AI.