AI is not an existential threat to humanity; it will be a transformative force for good if we get critical decisions about its development and use right.
The UK can help lead the way in setting professional and technical standards in AI roles, supported by a robust code of conduct, international collaboration and fully resourced regulation.
By doing so, “Coded in Britain” can become a global byword for high-quality, ethical, inclusive AI.
The first sentence of the open letter is one I especially agree with. The operative word is “if” … if we can prevent dangerous deployments and misuse by individuals, groups, corporations and governments.
The other two sentences are a little UK-parochial for my taste; international attention is needed to address some of the ethical and practical issues now presented.
I did not sign the recent open letter organised by the Future of Life Institute (though I was happy to sign their earlier call for a ban on lethal autonomous weapons). That letter called for a temporary six-month pause on the development of improved AI Large Language Models (LLMs such as ChatGPT). I think such a pause would simply reinforce the market position of those currently deploying these systems, seemingly with little concern for their impact, and with little attention to the misuse of internet-accessible (but still owned and licensed) materials, creator rights, data biases, and the poorly explained, non-transparent presentation of results (made-up information – e.g. see this blog post). I feel the dangers come more from companies seeking to capitalise on AI and LLM systems by deploying them carelessly.