
Can Britain resist AI communism? Chatbots pave the way for a surveillance state

China has limitless data to feed AI Chatbots. Kevin Frayer/Getty

March 6, 2023   5 mins

Can anyone compete with China’s Artificial Intelligence super-system? Sleepy government bureaucracies the world over are finally waking up to the hard reality that they have virtually no chance. China is galloping ahead. Only last month, it unveiled its latest rival to San Francisco’s ChatGPT: the Moss bot, and this month it plans to release another. The UK lags far behind.

Tony Blair thinks Britain should put itself on an economic war footing and pour national resources into the creation of an AI framework that might compete with China’s. But it’s hard to see how that is possible — or even desirable.

In large part, this is because AI needs data to work. Lots and lots of data. By feeding huge amounts of information to AIs, deep learning trains them to find correlations between data points that can produce a desired outcome. As deep learning improves the AI, it requires more data, which creates more learning, which requires more data.
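The feedback loop described above — more data, better correlations, which invite still more data — can be caricatured with a toy statistics sketch (hypothetical illustration only, not any production system): an estimate of a correlation between two quantities becomes sharply more reliable as the number of samples grows.

```python
import random

def estimate_correlation(n_samples: int, seed: int = 0) -> float:
    """Estimate the correlation between a signal and a noisy copy of it
    from n_samples observations (a toy stand-in for 'training' on data)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(n_samples)]
    ys = [x + rng.gauss(0, 1) for x in xs]  # y is x plus independent noise

    mx = sum(xs) / n_samples
    my = sum(ys) / n_samples
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n_samples
    vx = sum((x - mx) ** 2 for x in xs) / n_samples
    vy = sum((y - my) ** 2 for y in ys) / n_samples
    return cov / (vx * vy) ** 0.5

# The true correlation here is 1/sqrt(2) ~ 0.707; a small sample gives a
# noisy estimate, a large one converges on it.
small_sample = estimate_correlation(50)
large_sample = estimate_correlation(100_000)
```

The same scaling logic, applied to billions of parameters instead of one correlation, is why data volume matters so much to deep learning.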


While many nations might struggle to cope with AI’s insatiable demand for data, China has no such shortage. Since the nation fully came online around the turn of the millennium, it has been steadily carving out a surveillance state by acquiring endless amounts of data on its population. This initiative has roots in China’s One Child Policy: the impetus for controlling the population in aggregate, at the demographic level, devolved into a need to control it at the individual level.

This became fully apparent in 1997, when China introduced its first laws addressing “cyber crimes”, and continued into the early 2000s as the CCP began building the Great Firewall to control what its citizens could access online. Its guiding principle was expressed in an aphorism of former Chinese leader Deng Xiaoping: “If you open the window for fresh air, you have to expect some flies to blow in.” The Great Firewall was a way of keeping out the flies.

China has always had a broad definition of “flies”. In 2017, the Chinese region of Xinjiang, home to the Uighur minority, rolled out the country’s first iris database, containing the biometric identification of 30 million people. This was part of the wider Strike Hard Campaign, an effort to bring the Uighur population under control using anti-terror tactics, rhetoric and surveillance.

This was a major step in the development of the Chinese surveillance state. But even that great leap forward pales in comparison to the CCP’s Zero Covid strategy, which involved the government swabbing and tracking every single one of its 1.4 billion citizens. When you consider that this population-wide genetic database was tied through QR codes to the locus of people’s digital lives — their smartphones — what China has come into possession of in the past three years is a data super-ocean, the likes of which humanity has never seen.

Nevertheless, this “overabundance of data”, as former Google China president Kai-Fu Lee describes it in his book AI Superpowers, does not fully account for the extent of China’s data edge. While the US might enjoy a similar mass of data, there are stark differences. The first is that American data is owned by private companies that maintain proprietary fences around it, keeping the data fragmented. While America’s own surveillance state is vast and deep, the need to maintain data protections from both a privacy and a national security perspective means that much of that data is unavailable for use in AI development. By contrast, China’s blurring of the Western line between the state and private companies means it can access limitless, largely centralised data.

The second difference is just as important. As Lee points out, American data is derived from the online world — from apps and websites that voraciously hoover it up. This data deals mostly with online behaviours, such as how a user “travels” around the web. Through the ubiquity of the surveillance state, however, Chinese data is derived from the real world. It’s about where you go physically, what you do, with whom you speak, work, date, argue and socialise. As AI melds the real world into a hybrid digital-physical realm, China’s data presents a qualitative edge. This is what makes it, in Lee’s words, the “Saudi Arabia of data”.

Does Britain have any hope of catching up with such a colossus? Tony Blair believes that by mobilising British ingenuity in the service of a national goal, the UK can become a globally competitive force in AI. This idea is well grounded in historical fact, given that the UK’s contribution to AI development has been nothing less than fundamental. From DeepMind, the UK-based AI company acquired by Google parent Alphabet in 2014, to towering figures like Cambridge-trained AI pioneer Geoffrey Hinton, the UK serves as one pillar in the UK-US-Canada triumvirate of AI research and development.

This is all very well. But it sidesteps an inconvenient truth: AI technology is already here. It’s the data that is missing. It’s as if the world has an unpatented design for a powerful new rocket but an enormous scarcity of fuel. Anyone can make the rocket, but only those with access to enough fuel can press the launch button.

The temptation to rely on the government to achieve this mission might be strong, but it’s also based on a model of government that, in the West, may no longer exist. While the British government once had the know-how and political will to pursue massive projects, like the engineering marvel of the Channel Tunnel, it seems those days are past. London’s Elizabeth Line took 20 years to bring to almost-completion, while the HS2 high-speed rail is now £50 billion over budget and years behind schedule.

Ironically, the path forward for Britain might be found in China’s own economic playbook. In 2010, China transformed an ailing “Electronics Street” in Beijing called Zhongguancun into a central hub for venture-backed technology growth. With cheap rent and generous government funding, it took a mere decade for Zhongguancun to become the birthplace of tens of thousands of startups including some, like TikTok, that would eventually grow into the world’s biggest tech companies. The UK has the economic sophistication, the research and development experience, and an international draw — all of which can be turned to its advantage in creating fertile soil for AI-driven growth. The question is whether it has the political will to get it done.

Even if the UK government could find the will to compete seriously in the “fourth industrial revolution”, one question would remain: Do its citizens really want it? A frequent refrain in the tech community is that “AI is communist”. The monopolistic nature of AI requires the kind of massive data and computational power that only huge companies like Microsoft and Google can support. With dominant players like those two increasingly cooperating with governments (including China’s) to censor speech, monitor behaviour and engineer societies, the AI-is-communist sentiment echoes a well-warranted fear that it will be used for government-like top-down control.

In the hands of an actual government, AI will inevitably encourage greater state involvement in the lives of ordinary citizens. Sovereign AIs require national data, and national data tends to require more top-down control. In a country that has resisted ID cards and national identity registers (wisely, though not without costs), this approach seems unlikely at best. Despite the tremendous potential AI holds for the betterment of humanity, it also presents equally enormous risks.


17 Comments
Andrew Dalton
1 year ago

At risk of doxing myself, I work in the realm of the fourth industrial revolution, although overwhelmingly in the industrial application of this technology and not its social and political applications. The consequences of the emergent technologies are enormous, and although I’d rather not put timelines to the development and consequences of deployed technologies, it is clear their impacts will be significant across all domains of life.

AI is not my computing and software specialty but it is beginning to surprise: both in terms of its development but also how stupid it can be at times (unless it’s lulling us into a false sense of security). What bothers me is how innovation typically works, which is often about how existing technologies are merged to create something new.

It isn’t necessarily difficult to predict that advanced machinery/robotics + AI renders vast quantities of jobs redundant. As soon as this is cheaper than the cheapest labour in a particular location, it will become the chosen approach for corporate profiteering over globalisation. It could also have benefits for supply chains (and therefore environmental impacts) by allowing factories and plant to be near the source of raw materials. Science fiction writers and futurologists have been predicting this for decades.

The use of AI as a social control system is a little less discussed but not unheard of. It is, however, clearly the innovation of merging social media, public records and AI. Introduce other 4th IR concepts like digital currencies (central bank or otherwise) and it isn’t exactly hard to see where social credit systems come into play. Considering the prior point regarding robots taking people’s jobs, it becomes essential (at least from a political point of view).

The welfare state, which was vastly expanded after skilled jobs were off-shored, would need to expand again. In a world where an individual citizen/consumer has become decoupled from production, how exactly will their spending power be governed? As such, I see a certain inevitability here, as the consequence of automating jobs away will demand this form of response.

Yes, my outlook is dystopian; Huxley would be proud. The more “utopian” vision for an automated society, such as Jacque Fresco’s, removes too much power from government, supranational and corporate interests, and I see no evidence that those groups would ever surrender control. We will once again see a merger of corporate and political interest: one of Tony Blair’s favourite things.


Norman Powers
1 year ago

Yeah, nice essay, too bad it’s totally wrong. I got nothing against journalists and writers opining on technical subjects, but they should at least try to find someone with knowledge to check their thesis before they embark on writing it. No, Kai-Fu Lee doesn’t count, as his agenda these days is to make China look powerful first and be correct second.
To train LLMs like ChatGPT or “MossBot” you need lots of data, but it’s the sort of data you get from the public internet, including possibly books/magazines/newspapers/etc. It isn’t surveillance data of the sort this article is talking about. You got a billion iris images in a database? Good for you, that’ll be super helpful if you want to train an AI that can generate perfect looking random iris images and not much else. You got a billion swabs? Great, now you can generate fake swabs.
Getting the picture here? The reason you need a lot of data scraped from the internet to train a modern AI is because you want that AI to generate the sorts of things you find on the internet: answers to questions, news articles, photos, poetry, code.
So data isn’t the new oil, it’s not like anything even close to oil. Oil is fungible, one barrel is much like another. Data is not, which is why forced analogies like “The Saudi Arabia of data” are the mark of punditry, not expertise (not that you need much to know this stuff!).
Could the British government train a giant LLM if it wanted to? Uh, yes? DeepMind is based in London and has done exactly that, it’s just a matter of offering those people 2x what they currently get paid and then giving them the time and money they need to collect lots of web crawls, book scans and so on + a few tens of millions of dollars worth of hardware from NVIDIA. But why would it? Maybe people like Blair think there’s something strategic about all this but there isn’t. What does he even mean by an AI Framework? A British TensorFlow? Surely not.

A frequent refrain in the tech community is that “AI is communist”.

I’m a member of the tech community, read AI research papers regularly, take part in AI discussions regularly and have never heard anyone say this. What would that even mean? AI development is the opposite of communist, the most advanced AIs are all being trained by private corporations and the most talked-about AI right now (ChatGPT) is by a well funded startup. There’s nothing I can think of that’s less communist than a startup.

Last edited 1 year ago by Norman Powers
Robbie K
1 year ago
Reply to Norman Powers

Yeah great observation, I was thinking the same thing as I was reading the article. The Chinese data referred to is constrained, controlled and influenced by policy. MossBot probably gives great answers to questions posed by the CCP.

Peter Kwasi-Modo
1 year ago
Reply to Norman Powers

I totally concur with the points you make, but I would not be quite so dismissive of surveillance data. On its own, surveillance data such as iris recognition may be of limited value beyond those directly involved in surveillance, but when combined with other data about the individuals, its potential uses (and of course abuses) are much greater in scope.

Norman Powers
1 year ago

Sure, but then you’re using AI to process surveillance data, not using that data to create AI. LLMs and other modern AI tech can certainly be used to build a dystopia, no doubt about it, but China doesn’t seem to have any particular advantage in building such tech beyond a desire to do so and lots of smart tech-savvy citizens.

Peter Kwasi-Modo
1 year ago
Reply to Norman Powers

Fair point!


Andrew Dalton
1 year ago
Reply to Norman Powers

Maybe people like Blair think there’s something strategic about all this but there isn’t

Blair was a big proponent of the knowledge based economy when he was PM. He wasn’t quite so big on explaining what it was though.

Peter Kwasi-Modo
1 year ago

Feminists ridicule the verbiage of us males as “mansplanation”. I wonder how long it will be before “botsplanation” enters the vocabulary. Sometimes ChatGPT raises its hands in the air and admits it’s just a piece of software, but more often its responses are a mixture of fact and bu115h1t. As with any interlocutor, human or otherwise, who breezily answers a question when they don’t really know what they are talking about, one quickly learns to ignore them.
So I instructed ChatGPT to “Explain why President Xi’s Zero Covid Policy was such a monumental disaster.” In about two seconds I got a very good, balanced response with four headings: Economics, Human Rights, Long-term harm and Lack of transparency. Great! What do you suppose Moss would do if I asked it for the same information? The two tools have distinct purposes. And god help us if a latter-day Dominic Cummings bases policy on the strength of a chat with a bot.
An ex-colleague, a professor of Artificial Intelligence, once told me: researchers only call it artificial intelligence when we don’t really understand why it works. As soon as we DO understand why it works, we start calling it software.

Last edited 1 year ago by Peter Kwasi-Modo
Andrew Dalton
1 year ago

When I was a student, in the immediate run-up to the Iraq war, our Artificial Intelligence and Artificial Neural Networks professor announced he wouldn’t be available for a couple of weeks due to meetings with the government. A friend of mine quipped they’d developed an expert system to decide whether to invade or not. In the next lecture after his return, he made a point about expert systems being used in the decision process for invading Iraq. I do wonder if he heard my spit take.

Rocky Martiano
1 year ago

Social credit scores coming to a postcode near you? Seriously though, the UK government can’t even produce a database of NHS medical records. How on earth would they manage a project like this? It won’t stop them trying though. Management consultants are already rubbing their hands at the prospect of the coming bonanza.

Steve Murray
1 year ago
Reply to Rocky Martiano

An excellent observation about the pitiful use of IT within the NHS. I had dealings with IT people drafted into our health service over the last two decades of my NHS career (up to 2016) and let’s just say the type of people recruited were pretty third-rate. Why? Quite simply because anyone with any real IT talent could earn far more money in the private sector.
On the national level, I attended conferences in the early 2000s about the introduction of a UK database for medical records. I expect such conferences are still being attended two decades later, with the same blather and the same costs of attending.
To try to bridge the talent gap, the NHS employs IT consultants at eye-watering rates. They come in, do their thing (with the tech available at the time) and disappear, leaving the system they’ve helped introduce to become a “legacy” within the space of a few short years, since no-one remaining in the organisation understands it, nor can they change it. More efficient tech overtakes the legacy system and none of the disparate systems “talk” to each other. Repeat ad infinitum.
If this were to be the type of template for AI at the state level, it’d be a huge waste of time and money. However… I think what’s being suggested is something of a different order. In the NHS, the systems rely on overstretched staff inputting data (e.g. drug regime, changes to regime, drug administered, if not why not, etc.) which simply can’t happen automatically. Errors creep in all the time, which stymies the system. In an environment where data is routinely collated via an automation process (as with smartphones), there’s a different type of potential. There’s no reason in theory why the UK couldn’t replicate what’s happening in China, except that state control and citizen consent are of a different order. If this were to happen by stealth, i.e. without citizen consent (and it may already be happening), then we’re in a completely new ballpark. This article is very welcome as a warning of the double-edged sword that’s hanging over us all. We’re all Damocles now.

Last edited 1 year ago by Steve Murray
Rocky Martiano
1 year ago
Reply to Steve Murray

You are correct, it is already happening by stealth e.g. centralised ID database for access to all government services, BOE preparing to introduce a CBDC, handover of power to the WHO to control the UK population in the event of a new pandemic (see Unherd’s excellent piece today on this subject). None of this to my knowledge has even been debated in Parliament, let alone put to the British people.
Back on the subject of the NHS, my brother-in-law had a sinecure for many years working for one of the major consultancies, going round the country managing IT projects for NHS trusts. Exactly as you described, finish project, move on, leave the poorly equipped trusts to deal with the fallout, system falls into oblivion within a few years.


Michael Coleman
1 year ago

Not all data has the same value. China certainly is monitoring and recording more conversations than any other entity (except maybe the NSA?) and this provides better training for AI based on LLMs like ChatGPT. Similarly for the number of images of people, and thus people- and image-recognising AIs.
But as numerous others more knowledgeable than me have demonstrated, the current deep learning models have little true understanding and are far from a general AI. See Gary Marcus’ excellent Substack on AI:
https://garymarcus.substack.com/p/smells-a-little-bit-like-ai-winter
It is not clear that the current dominant models will be the path towards AGI, or that achieving AGI is just about more processors and more data.

Nicky Samengo-Turner
1 year ago

I have no idea what chatbot CBT is?

John Solomon
1 year ago

Just pray you never need to find out.

Peter Kwasi-Modo
1 year ago

With an ordinary browser, you type in some keywords and it gives you a list of web pages that contain the keywords. With a chatbot, such as ChatGPT, you can (a) ask a question and get an essay-style answer, or (b) engage in a dialogue with the chatbot. The essay-style responses are produced remarkably quickly and are reasonably well structured. But for me, the big problem is that the chatbot only occasionally admits that it is out of its depth. It can produce nonsensical answers, sometimes because it has not been trained on the appropriate data.

Stephen Quilley
1 year ago

No
