Originally published in Arab News.
It is impossible for anyone to have missed the excitement generated by ChatGPT. Countless articles on the subject have been written, including many by ChatGPT.
While underlying technologies, such as deep learning, are not new, ChatGPT’s rich conversational interface has captured the popular imagination around artificial intelligence in the same way Netscape made the World Wide Web real for millions worldwide when the browser first appeared in the 1990s.
ChatGPT is built on something called a large language model, or LLM.
LLMs are models trained on huge corpora of text using something called “unsupervised learning”: rather than being explicitly taught, they are fed vast quantities of text from which they learn the relationships between words and the underlying concepts, essentially developing a statistical model of which words are likely to follow other words given a particular prompt or starting point. In some sense, they are “autocomplete on steroids.”
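The next-word statistics at the heart of this idea can be illustrated with a toy sketch: a tiny bigram model that “autocompletes” by picking the word most often seen after the current one. The corpus and names below are invented for illustration; real LLMs learn with neural networks over billions of words, not raw counts.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the huge text corpora real LLMs train on.
corpus = (
    "the model predicts the next word "
    "the model learns the statistics of language "
    "the statistics of language guide the next word"
).split()

# "Unsupervised" counting: for each word, tally which words follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the statistically most likely word to follow `word`."""
    return following[word].most_common(1)[0][0]

print(autocomplete("of"))  # "language" always follows "of" in this corpus
```

Scaled up by many orders of magnitude, and with counts replaced by learned neural representations, this is the intuition behind “autocomplete on steroids.”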
Therefore, they are remarkably effective at answering questions, summarizing texts and producing large amounts of text content from a prompt. For example, we are seeing global law firms explore how these models can automatically create the skeletons of contracts without requiring lawyers or paralegals to draft them.
We see articles in publications authored by AI and panicked university officials wondering about the high-tech plagiarism these generative AI tools will enable.
However, it is also important to remember their current shortcomings: these models “understand” the statistics of language and, through this, the relationship between words, but they do not have knowledge of the world, common sense or the ability to reason.
Hence, they struggle with riddles and complex mathematics, and they are prone to “hallucinations,” generating text that, while superficially plausible, might be completely false, offensive, or misleading.
For example, a model could generate a paper that looks and feels like genuine scientific research but is based entirely on nonsensical arguments and content. In a more nefarious example, these models could enable the mass production of highly plausible misinformation that poisons search engine results or misleads people in destructive or harmful ways.
As we look to the future, these models will continue to evolve rapidly. But they will need to be augmented by systems that, like humans, have common sense, an understanding of the world, some sense of ethics and the ability to reason. Such augmentation would bring them closer to how human minds operate. Humans have a fast, intuitive mode of thinking that makes near-instantaneous decisions, such as identifying an object in our field of vision or reading a sentence.
We also have a second, slower type of thinking that requires more effort and is both conscious and logical. While the former closely resembles what we see today in LLMs’ ability to recognize words without a deep understanding of context or semantics, the latter corresponds to an emerging trajectory of AI research focused heavily on learning rules, such as the rules of physics or of ethical behavior.
We are also seeing the emergence of foundation models, such as generative pre-trained transformers, which can be trained once and then extended and reused broadly at minimal marginal cost; downstream users do not need the vast computational capability and power required to train GPT or similar models from scratch.
These AI foundations are similar to the web, mobile and social platforms that preceded them. They are the next platform: a foundation on which new value will be created through new applications made possible by this general-purpose technology.
Models, such as those underlying ChatGPT, can be enriched and extended with domain-specific or licensed data and embedded in applications to provide a new way of engaging with a business or product.
For example, one could take today’s LLMs, train them further on the corpus of consultancy and research reports across an entire government, and allow employees to ask questions in natural language or generate presentations and materials without the need to re-engage a consultant. This accelerated adoption of AI comes at a critical inflection point, when much of the world faces inflationary pressures and rapidly rising labor costs.
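Beyond further training, one common pattern for this kind of extension is to retrieve relevant in-house documents and embed them in the prompt sent to a general-purpose model. The sketch below uses invented names throughout (`DOCUMENTS`, `retrieve`, `call_llm` are hypothetical, not a real API) and naive keyword matching in place of a real search index.

```python
# Hypothetical mini-corpus of an organization's own reports.
DOCUMENTS = {
    "report-2021": "The 2021 study recommended consolidating data platforms.",
    "report-2022": "The 2022 review proposed a national AI skills program.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Naive keyword retrieval: rank documents by words shared with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Embed the retrieved context in the prompt sent to the model."""
    context = "\n".join(retrieve(question))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

# In a real application the assembled prompt would go to an LLM API, e.g.:
# answer = call_llm(build_prompt("What did the 2022 review propose?"))
print(build_prompt("What did the 2022 review propose?"))
```

The design choice is the point: the general-purpose model stays untouched, while the organization’s private data is supplied at query time, which is far cheaper than retraining.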
AI will enable systems and machines to learn how to perform tasks currently performed by humans so that firms can be more productive and reduce the reliance on increasingly expensive human labor.
AI, like automation, can also have a deflationary effect, making it a vital productivity lever during these challenging economic times. In the long term, many developed countries, unlike Saudi Arabia, face a demographic challenge: a rapidly aging population and a rapidly shrinking workforce.
It is easy to see how the widespread proliferation of AI can ensure these economies’ future sustainability and prosperity.
In a Saudi context, the broad recognition of the value of data and AI, as exemplified by organizations such as the Saudi Data and AI Authority, the extensive multi-decade efforts to train a cadre of Saudi engineers, scientists, and technologists, and the investments and programs launched by national champions such as Aramco to develop local AI capabilities, make the Kingdom exceptionally well placed to capture this opportunity.
For example, the Kingdom can lead in the localization and extension of LLMs to the languages and dialects of the region or explore how the knowledge embedded in domains in which the country is a natural leader, such as energy, can be used to build foundation models that can then be made available broadly.
If the lessons of the internet age are any guide, we are at a pivotal point in the evolution of AI. The best time to engage with AI was yesterday; the next best time is today. Therefore, all Saudi public and private sector entities must be encouraged to explore how this technology can create new value in their respective fields and industries.