AI & the Future
A foreword: What follows is quite a meandering set of thoughts that reflect my understanding of areas others are certainly much better informed on. To that end, if I make clear and obvious errors, please don't hold it against me; I'm a human after all, and not an AI (winking face).
Intro
As I sit here and consider what the last 18 months have taught us, the recurring thought in my mind is just how fortunate we are to be living through this period.
If we were to plot the leaps we have made throughout human history on a graph, the acceleration of progress in engineering would look exponential. From the end of the Second World War until today, we have taken huge leaps forward across the domains of physics, chemistry, and biology, mastering them to the extent that the world we have created would be completely unrecognisable to someone born at the start of the 1900s. Over the longer horizon of the last 10,000 years of human development, we could go as far as to say we are living through a knowledge bubble, and with that analogy comes a question: could our bubble be about to burst?
As fast as this growth has been, we are at the foot of a mountain that we cannot see the top of. This analogy, I think, perfectly captures the endeavours of the human spirit, to summit that which cannot be seen, to reach further, and to quote Star Trek, to boldly go where no one has gone before.
The age of artificial intelligence stretches back more than a decade, into the early 2000s, and academically long before then, to breakthroughs in the use of perceptrons in multi-layered neural networks. Since 2017 and the publication of the transformer paper, which solved for various linguistic pattern problems, developments in this field have accelerated at a rate that I believe people still don't fully appreciate.
I was in San Francisco in December of last year, and while it has been a long time since I'd been to the States, and many things on the ground have changed, I have never felt the sense that a place was alive with possibility quite so tangibly.
As an aside: many of us like to comment on the United States and its place in the world, and while it is a country facing immense challenges, it still has the infrastructure, the capability, and fundamentally the attitude toward the future that lend themselves to producing the majority of the major breakthroughs of our time.
Aside over...
I left San Francisco having tried to meet several people from Anthropic, but by the day I left, I still couldn't locate their office building. I was keen to talk about how languages with fundamentally different characteristics can lead to more effective use of the context window. For example, Arabic is a phenomenally specific language; while I don't speak it, I'm led to believe there are several hundred words describing camels with specific qualities. You can therefore pack an immense amount of meaning into very few words, which, in the context of trying to increase the amount of information you can fit into the context window supplied to an LLM, seemed to me at the time to be an interesting way of expanding the capabilities of a system at the language level.
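To make that idea concrete, here's a toy sketch. The word counts below are a crude stand-in for real LLM tokenization, and the "dense" single word is entirely hypothetical, invented for illustration:

```python
# Toy illustration of information density per "token", where a whitespace
# word count stands in for a real tokenizer's token count.

def word_count(text: str) -> int:
    """Crude stand-in for an LLM tokenizer's token count."""
    return len(text.split())

# The same concept expressed verbosely in English vs. as a single dense word.
verbose = "a young female camel that has not yet given birth"
dense = "heqqa"  # hypothetical dense term, for illustration only

print(word_count(verbose))  # 10
print(word_count(dense))    # 1
print(f"compression ratio: {word_count(verbose) / word_count(dense):.0f}x")
```

In a real system the effect would depend on how the tokenizer actually segments each language, but the principle is the same: fewer tokens per concept means more concepts per context window.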
I was also keen to discuss what efforts are being made to democratise the training process, as training these large models relies on huge amounts of compute. The idea I wanted to explore was whether you could notionally train a model like ChatGPT on a sufficiently large number of people's smartphones. I went through and did the maths on this and came up with what I thought to be a reasonable number; if you want to know what that number is, shoot me an email, it's truly fascinating.
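For a flavour of the shape of that calculation, here's a back-of-envelope sketch. Every number in it is my own illustrative assumption, not the figure mentioned above:

```python
# Back-of-envelope sketch: how many smartphones would it take to match the
# raw compute of a large training run? All constants are assumptions chosen
# purely for illustration.

TRAINING_FLOPS = 1e25        # assumed total FLOPs for a GPT-class training run
PHONE_FLOPS_PER_S = 1e12     # assumed sustained throughput per phone (1 TFLOP/s)
HOURS_PER_DAY = 4            # assumed hours/day each phone contributes while idle/charging
TRAINING_DAYS = 90           # assumed wall-clock budget for the run

flops_per_phone = PHONE_FLOPS_PER_S * HOURS_PER_DAY * 3600 * TRAINING_DAYS
phones_needed = TRAINING_FLOPS / flops_per_phone

print(f"{phones_needed:,.0f} phones")  # roughly 7.7 million under these assumptions
```

Note this counts raw FLOPs only; in practice the communication and synchronisation overhead of coordinating millions of devices would dominate, which is exactly why this is an open problem rather than an engineering exercise.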
With all this being said, and with no discernible office building in sight, I left with my tail between my legs. The impression it left me with, though, as I reflect above, is that so much is happening right now that a lot of people are yet to fully appreciate it.
I wanted to put together this piece to document a few things that I think are likely to happen across a number of different domains as we edge ever closer to AGI.
AGI, artificial general intelligence, represents the point at which a computer system can perform any human task at expert level or above. We've already seen, across successive updates, how powerful models like OpenAI's ChatGPT have been at solving a large number of human problems - condensing information, identifying flaws and challenges, all the way through to writing software.
The practical applications of AI, even at the level that we have it today, are endless and are hugely disruptive to various elements of our social and economic model.
I'd like to set out this model, because some of the other points I want to make rely on us agreeing on the below.
For the most part, the global model promotes the idea that through education, you will earn yourself a better life. This is such a central tenet of the way the world works that we are perhaps not able to see how that pillar stands to be toppled in the decades ahead.
We are walking towards a future where the value of human cognition is declining.
Being successful in the world, and living a happy and prosperous life, will I think become increasingly challenging, with so many jobs being affected by AI. Or, put more positively, the way we extract meaning and purpose from our lives is sure to change.
With that in mind, I've tried to aggregate my thoughts across a few key areas of effect (a sort of jumbled-up PESTLE analysis, minus a couple of letters).
The economy
The political
The social model
Effects on the Economy
The economy is going to undergo a significant shift in the immediate term, with more and more services being created that replace particular elements of people's roles. In the short term, this will give a huge productivity boost to developed economies and expedite the growth of developing ones.
In the slightly longer term, my prediction is that, with AGI fast approaching (2024-25), we will see a new taxonomy for businesses, where they can be categorised under the following headings.
Non-AI: These are the businesses that rely on human-to-human interaction as part of the value exchange. Some of these companies will ultimately leverage robotics when the cost of ownership drops below a certain threshold.
Aside No 2: I can envisage an Uber-like model for robots where, in effect, the robot is your surrogate for labour tasks. Owning a robot (like those created by Tesla and others) would thereby be an economic asset, in much the same way that having children once was in Victorian/Edwardian Britain. Aside No 2 over.
For the most part, the non-AI businesses' day-to-day operations are likely to remain unchanged, while the environment around them undergoes significant shifts (which will ultimately affect the demand side of some of those businesses). For context, I'm imagining businesses such as a vineyard, a cheese merchant, or even Tesco, where the bulk of the operating expenditure is on human labour.
Pre-AGI: You could class all businesses as pre-AI, but here I want to call out businesses founded before the advent of AGI (whenever that epoch arrives). These businesses will have to contend with the challenge of integrating AI services into their day-to-day operations. There are several billion-dollar ideas in this area, as many businesses in this class will have to undergo painful internal transformation projects as they reorganise to remain competitive against the third and final category.
Post-AGI: These are the businesses created after the advent of AGI, and they represent an existential threat to all businesses in the pre-AGI category. These businesses will have small headcounts but access to the cognitive power of thousands of agents. For example, you could found a law firm as an individual and take on a hugely expansive case that currently could be undertaken only by the largest of the magic circle firms. Similarly, consider the likes of KPMG and EY: some of these firms audit the world's largest businesses, and a big piece of their P&L comes from these activities. Notionally, these business models will be impacted hugely by new entrants that can do 60% of the same job at a fraction of the price.
The political and the path to UBI
I believe AI will be a dominant thread in global politics, as many government initiatives will be impacted in some way by the extent to which AI services are leveraged in delivering those initiatives and policies. The political parties that succeed will be those that advocate for strong governance around AI, and those that consider the glide path towards universal basic income (UBI).
It is my personal belief that this will be required in our time to keep certain areas of the economy functioning.
In future, I think there will be increasing controls placed on interactions between businesses and AI service providers, to audit the amount of value being created through those channels and, ergo, how much tax should be paid above a base level.
This will be legislated for not only because it will be a big tax windfall, but also because existing businesses in the pre-AGI category will lobby for it; in short, these businesses need the clock to run slower, not faster.
Zooming out slightly to fiscal income and tax law as implemented today: several loopholes that exist are deliberately not closed, because governments are worried about the impact on their businesses (businesses employing large numbers of their nationals). That is to say, if the US government demanded Apple pay the “correct” amount in corporation tax, Apple's ability to compete with Samsung in the smartphone market (as an example) would be diminished, eroding Apple's market share over time (and with it, its ability to employ Americans).
I think the difference with AI legislation is that all countries would have an interest in enforcing these levies, as the economic disruption caused by a country not playing by the rules would be so damaging that it would be ostracised from the global community. By way of example, if a tariff were imposed on the use of AI services to provide the income needed to support UBI, and a country failed to adhere to these levies, the social safety net afforded by UBI would become stressed, as less fiscal income would ultimately be captured.
That being said, there may be significant opportunities for Middle Eastern and Gulf states, whose expat-driven economies (with low numbers of nationals) will continue their trend of low regulation to attract more inward investment; as such, they are perhaps the most likely to sidestep these levies were they to be introduced. As is being proven right now with the Ukraine and Gaza conflicts raging, Gulf states are uniquely placed, given their natural resources, to play both East and West (in some cases off against each other) to achieve their strategic objectives regionally and globally. This further underscores that the bargaining power of oil will remain the counterweight until we crack fusion.
The role of central banks and CBDCs
I think we will see central banks introduce CBDCs (central bank digital currencies) which in some way support the efforts of the global system to monitor, measure, and respond to the impact of AI services on our value-creation model. CBDCs could ultimately be the enforcers of the levies described above: when businesses leverage AI services, the currency they use to pay for those services would allow central banks to automatically track and enforce those levies (collecting fiscal income as the levies are defined).
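To show the mechanism I have in mind, here's a toy sketch of a payment rail that withholds a levy at settlement time. The rate, the category labels, and the whole mechanism are hypothetical:

```python
# Toy sketch of a CBDC payment rail that deducts an AI-service levy at
# settlement time. The 15% rate and the payee categories are invented
# purely for illustration.

AI_SERVICE_LEVY = 0.15  # hypothetical levy rate on payments to AI services

def settle_payment(amount: float, payee_category: str) -> dict:
    """Settle a CBDC payment, withholding the levy if the payee is an AI service."""
    levy = amount * AI_SERVICE_LEVY if payee_category == "ai_service" else 0.0
    return {"to_payee": amount - levy, "to_treasury": levy}

print(settle_payment(1000.0, "ai_service"))  # {'to_payee': 850.0, 'to_treasury': 150.0}
print(settle_payment(1000.0, "non_ai"))      # {'to_payee': 1000.0, 'to_treasury': 0.0}
```

The point of running this at the currency layer, rather than through annual tax returns, is that collection happens automatically at the moment of the transaction, with no reporting step for the business to game.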
I should note that in my thinking, central banks around the world would establish a set of ranges for these levies that preserve the competitiveness of other markets (e.g. levies introduced on AI services in Argentina would be lower than those imposed in the US).
The social and beyond
Socially, AI services are going to have broad effects on how we ascribe value to people. I think it will ultimately continue the trend towards increasingly transactional and functional lives, where we pay for the experiences we want and pay to avoid the experiences we don't. We are already seeing the early stages of this with products such as the Apple Vision Pro and the Metaverse.
The effect of AI on our ability to look at our fellow man and say “that person is brilliant” cannot be overstated. We have never experienced what it's like to be a chimpanzee looking at a human and thinking, “I wish my brain was just that little bit bigger.” More aptly, we are perhaps for the first time going to experience what it would be like to have an extraterrestrial species come to the planet and show us things we've never seen before, based in logic and reasoning we do not possess. I believe we could see the axiom that “humans know best” confined, as a narrative, to the history books. I think it is indisputable that in the years ahead people will have a greater tendency to trust the outputs and opinions of the AI ecosystem than those of their fellow man.
Other points on governance
I think more and more government policy will be directed towards applications of AI that solve social and municipal challenges, with the differences between right and left becoming increasingly small as, simply put, the number of things political parties can realistically expect to change is reduced. That is to say, AI services will be pervasive in all areas of a citizen's interaction with government, and to that extent our model of governance can be optimised around citizens' needs in very specific geographic areas.
As a result, I can imagine the role of our elected officials becoming increasingly diminished, as AI services will largely be responsible for delivering the “best” solution to a given social or economic challenge.
Other points on personal resources
Our society could become much more like players in a game, whose day-to-day considerations differ largely on the basis of which AI resources “you” as an individual control. We could see society split into two distinct groups: those who have access to robotic resources that earn them a passive income, and those who don't, who take their place alongside robots in our value-creation system.
Broadly speaking, this worldview is somewhat dystopian and gives me shades of Blade Runner. To put things in a more positive light: AI, when combined with robotics and correctly delivered, could provide a very valuable guard against human-to-human conflict.
AI as the mediator
The vast majority of conflict in the world arises from the inability of two individuals or groups to trust one another. AI and robotics represent the possibility of creating an independent intermediary whose objective is to resolve differences of opinion. I'm not sure what comes to your mind, but for me this takes the form of an AI-powered version of the United Nations that discharges the UN's mandate in a truly impartial way (disbanding the idea of certain nations having more weight in global affairs than others).
To this extent we could expect our society to become increasingly utilitarian, largely as a result of that being the way AI services are likely to develop in the years ahead.
Jeremy Bentham, the founder of utilitarianism, described utility as:
That property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness ... [or] to prevent the happening of mischief, pain, evil, or unhappiness to the party whose interest is considered.
I only hope that the AI models of the future effectively capture the compassion and love humans are able to demonstrate towards one another, and that these tenets fall into the category of emergent properties (qualities of AI systems that come about as a result of training and iteration over thousands of years' worth of human development, noted down in the form of web pages).
If the AI we end up creating does not exhibit these characteristics, it could be said that what we have made is truly an indictment of us as a species, as it would prove that the sum of all we have achieved has given birth to a system that does not exhibit the characteristics we aspire to see in ourselves (this is part of what research papers refer to as “AI alignment”).
If anyone wants to talk about any of these topics in more detail, hit me up and we can put something together. It's an interesting time, for sure.