DeepMind’s superhuman intelligence

The solving of a 50-year scientific problem takes computers a step closer to ‘true creativity’

Superhuman: Demis Hassabis is the brain behind DeepMind. JUNG YEON-JE/AFP via Getty


December 1, 2020   8 mins

I am obsessed with DeepMind. I have been following their work for years now. It is DeepMind that makes me think the more grandiose claims about AI – it will reach human intelligence in my lifetime; it will transform life in ever more dramatic ways – could be true.

DeepMind, for those of you who don’t know, is a British AI company. It was founded by the endearingly nerdy Demis Hassabis, child chess prodigy and co-creator, as a teenager, of Theme Park, the classic, genre-defining, millions-selling Bullfrog game. In 2014, DeepMind was bought by Google for half a billion dollars, and became world-famous two years later, when its game-playing AI AlphaGo defeated the world Go champion, Lee Sedol, four games to one.

Now its latest program, AlphaFold, has made a huge breakthrough in one of the great outstanding challenges of biology, the protein-folding problem. It is a huge deal from a biological point of view; but it is, perhaps, an even bigger deal from the point of view of how the science of the future gets done. And it is also another reminder that although DeepMind’s professed goals are ambitious to the point of being fantastical, it would be a brave punter who bet against them.


First, “protein folding”. Proteins are long molecules, built as chains of simpler molecules called amino acids; there are 20 different amino acids, but they can be strung together in an astronomically large number of different sequences.

The human body essentially runs on proteins. To a first approximation, what your DNA does is tell your cells which proteins to make. DNA has four “letters” (adenine, cytosine, guanine, thymine: A, C, G, T). A group of three of those letters is called a “codon”, and tells your cells to make a single amino acid. So, for instance, the codon CTT codes for the amino acid leucine. If you string a lot of them together, you get a protein. The length of DNA that codes for a protein is called a “gene”. It’s only relatively recently that scientists have discovered that your DNA does things other than tell your body what proteins to make.
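For the curious, the codon-to-amino-acid lookup described above is simple enough to sketch in a few lines of Python. The table below is a deliberately tiny subset of the standard genetic code (the full table has 64 codons, including three “stop” signals), just to make the mechanics concrete:

```python
# A handful of entries from the standard genetic code (DNA codons).
# Deliberately partial: the full table maps all 64 three-letter codons.
CODON_TABLE = {
    "ATG": "Met",   # methionine, the usual "start" codon
    "CTT": "Leu",   # leucine, the example from the text
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TTT": "Phe",   # phenylalanine
    "TAA": "STOP",  # stop signal: end of the protein
}

def translate(gene: str) -> list[str]:
    """Read a DNA sequence three letters at a time and return the
    amino-acid chain, stopping when a stop codon is reached."""
    chain = []
    for i in range(0, len(gene) - 2, 3):
        amino = CODON_TABLE[gene[i:i + 3]]
        if amino == "STOP":
            break
        chain.append(amino)
    return chain

print(translate("ATGCTTAAATAA"))  # ['Met', 'Leu', 'Lys']
```

String codons together and you get a protein; the stop codon is where the gene’s protein-coding stretch ends.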

And proteins do everything. They build your body: large parts of your cells are made out of them. They communicate news around your body: many of your hormones are proteins. Enzymes, proteins again, control and accelerate the chemical reactions in your body. If you took proteins away, about a fifth of your body by mass would disappear, and the rest would immediately stop working.

So understanding proteins is important. But there’s a complicating factor: what a protein does is determined, not by which amino acids are where in its sequence, but by its 3D shape. Imagine a protein as being a bit like a piece of elastic with lots of magnets tied to it. While you hold it tight, the elastic stays straight: but when you let go of it, it coils up, into a complicated ball determined by which magnets attract which. Proteins do the same thing.

And the shape of that complicated ball is vital. For instance, enzymes catalyse chemical reactions because they have a kind of pit on their surface which fits the two molecules in the reaction. If the pit were a different shape, it wouldn’t work.

Working out a protein’s 3D structure, therefore, is important. If you want to make some new drug, there’s a good chance that it will involve a small molecule that fits into a pit on the surface of some protein; if a mutation causes some disease, there’s a good chance that it does so by making a protein form the wrong shape. (Sickle-cell anaemia and cystic fibrosis are both caused by protein misfolding, driven by a single mutation.)

Unfortunately, working out the structure of proteins is really hard. At the moment, scientists take a protein, dissolve it in water, and use that water to form crystals. Then they diffract X-rays through that crystal, to work out the shape. (You may have heard of Rosalind Franklin’s work on X-ray crystallography helping to determine the shape of the DNA molecule; Dorothy Hodgkin, another British scientist, won the 1964 chemistry Nobel for her work on protein crystallography.)

But crystallography is a slow and complicated process. “It could take months or years of a PhD student’s career,” says Rahul Samant, a research group leader at the Babraham life sciences institute in Cambridge. “It’s a bit quicker these days, but we’re still talking about weeks or months to do a single protein, and then months more to analyse it.” There are hundreds of millions of proteins in the world, but the shape is known for only a few hundred thousand.

In 1972, the chemist Christian Anfinsen, in his acceptance speech for that year’s chemistry Nobel prize, suggested an alternative. It should be possible, he said, to predict the 3D shape of a protein just from the sequence of its amino acids – and, therefore, from its DNA code.

But that’s not easy. Imagine that stretchy length of elastic covered in magnets again. And now imagine, looking at the sequence of magnets, trying to work out in advance what shape the elastic would form when it bunched up. Working out that shape from the sequence is known as the “protein folding” problem, and it has proved to be extremely difficult.
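To get a feel for why it’s so hard, consider a cartoon version of the problem: a chain laid out on a 2D grid, one link per square, with no two links allowed to overlap (a “self-avoiding walk”, the skeleton of toy lattice models of folding). This is an illustrative aside rather than anything AlphaFold actually does, but counting the possible shapes by brute force shows how fast the search space explodes:

```python
def count_shapes(length: int) -> int:
    """Count self-avoiding walks of `length` steps on a 2D grid,
    starting from the origin: a crude stand-in for the number of
    distinct shapes a short chain could fold into."""
    def walk(x: int, y: int, visited: frozenset, steps_left: int) -> int:
        if steps_left == 0:
            return 1
        total = 0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in visited:  # the chain may not cross itself
                total += walk(nxt[0], nxt[1], visited | {nxt}, steps_left - 1)
        return total

    return walk(0, 0, frozenset({(0, 0)}), length)

for n in (1, 2, 5, 10):
    print(n, count_shapes(n))
# 1 -> 4, 2 -> 12, 5 -> 284, 10 -> 44100: roughly 2.6x per added link
```

Real proteins live in 3D with hundreds of links, so the count is astronomical; this is the intuition behind Levinthal’s paradox, the observation that a protein cannot possibly find its shape by trying conformations at random.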

So in 1994, the CASP (Critical Assessment of protein Structure Prediction) challenge was set up. Every two years, teams would compete to make the best prediction of various proteins’ shapes from their sequences alone. The shape of the proteins being assessed would be currently unknown but in the process of being researched – which meant that the teams couldn’t cheat, but that their work could be assessed against experimental results.

The CASP programs were assessed on a scale of 0 to 100. If a program scored 100, it would mean that it predicted where every single atom was, correct to within one angstrom – that is, about one atom’s width. If it scored 0, it meant that it was completely wrong. The CASP assessors said that the target was scoring 90 or above, on average, across all the proteins being assessed; 90 is arbitrary, but crystallography experiments can’t do that much better.
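The scale in question is CASP’s GDT_TS metric. The sketch below is a simplified toy version (the real metric also involves finding the best superposition of predicted and experimental structures, and scores residues rather than every atom): superimpose the two structures, then average the percentage of residues landing within 1, 2, 4 and 8 angstroms of their true positions.

```python
def gdt_ts(distances_angstrom):
    """Toy version of CASP's GDT_TS score: the average, over four
    distance cutoffs, of the percentage of residues whose predicted
    position lies within that cutoff of the experimental one.
    `distances_angstrom` are per-residue errors after superposition."""
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    n = len(distances_angstrom)
    fractions = [sum(d <= c for d in distances_angstrom) / n for c in cutoffs]
    return 100.0 * sum(fractions) / len(cutoffs)

print(gdt_ts([0.5, 0.5, 0.5]))           # 100.0 -- essentially perfect
print(gdt_ts([50.0, 60.0, 70.0]))        # 0.0 -- completely wrong
print(round(gdt_ts([0.5, 3.0, 9.0]), 1)) # 50.0 -- a middling prediction
```

A score of 90, the CASP target, means almost every residue sits within a hair’s breadth of where the experiment says it should be.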

From 2006 to 2016, the best program at each competition managed somewhere around 30 or 40 on that scale. Then in 2018, AlphaFold scored almost 60. And, now, it has achieved 92, including over 87 on the very hardest proteins.

This might sound a bit blah. They scored X, now they scored a bit more than X. But it is a huge advance. “There are still a lot of questions to ask,” says Ewan Birney, deputy director-general of the European Molecular Biology Laboratory (EMBL). The scientific community will want to “kick it around a bit” – although, he says, the CASP assessment is incredibly rigorous, so it’s almost certainly accurate. And there are further layers to the problem – proteins that don’t form globular shapes; proteins that change shape. “But that shouldn’t diminish what they’ve done. This is a 50-year-old problem, and the AlphaFold team has made a real massive change, a phase change.”

The implications for biological science are obvious. If you can predict the shape of a protein in a few hours rather than a few months – and the AlphaFold program runs on relatively modest resources, by supercomputer standards, in “a couple of days” – you can uncover potential targets for drug discovery much more quickly. Samant points out that you would still need to check whether your predictions are accurate, but it’s much, much easier to use crystallography to see whether a protein is the shape you think it is than to work out the shape from scratch.

That’s the near term. In the longer term, you can understand how the body works in far greater detail. “It’s not that AlphaFold suddenly understands how a human works,” says Birney, “but the tide has gone up by a massive level.” Professor Dame Janet Thornton, a pioneer of protein research who has been working on the folding problem for nearly 50 years, told a briefing held by the Science Media Centre ahead of the announcement that possible future applications could include designing enzymes that consume plastic waste, suck carbon out of the atmosphere, or improve crop yields.

Similarly, it could help us understand diseases like Alzheimer’s, which seems to be something to do with protein misfolding, as is Parkinson’s. Protein misfolding appears to play a role in the development of some cancers. And DeepMind hopes it will have a role in future pandemic responses: AlphaFold was able to predict the shape of a protein, ORF3a, in SARS-CoV-2, as well as other coronavirus proteins. Understanding the shape should help make the discovery of future drugs and vaccines quicker. It is a sudden window into areas of basic science which were simply not visible before.

What fascinates me, though, is the AI angle. Birney made an interesting point when we spoke: that when DeepMind started out, they set out to make a single program that could play lots of Atari games. “People said, ‘you’re just playing silly games.’” Then they made a program that could become superhuman at Go, a fantastically deep and complex game, but nonetheless a game. Then they used essentially the same program to become enormously superhuman at chess. Now, a similar architecture – as far as I can tell, at least – can solve real scientific questions.

Hassabis, in the SMC briefing, compared the AlphaGo and AlphaFold breakthroughs by saying that the two both relied on something like human insight. With chess, there are something like 35 possible moves per turn, so to look ahead two moves you need to examine 35 x 35 moves (1,225); to look ahead five moves, it’s 52 million. With a powerful computer, you can do this kind of “brute-force” computing for a few moves, although chess programs still need to be intelligent as well as powerful.

But with Go, there are something like 200 possible moves per turn, and brute force is much less useful. Human Go players rely much more on intuition than human chess players do – this board position feels strong, in some wordless and ill-defined way; this board position feels weak. AlphaGo developed some sort of equivalent to this insight; it worked out what board positions and moves felt strong, with some kind of high-level pattern recognition, from playing hundreds of millions of games against itself.
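The arithmetic in both cases is easy to check; a couple of lines of Python, using the article’s round numbers of 35 legal moves per turn for chess and 200 for Go, shows how quickly brute force blows up:

```python
def positions_to_examine(branching: int, depth: int) -> int:
    """Number of move sequences a naive brute-force search must examine
    with `branching` legal moves per turn: branching ** depth."""
    return branching ** depth

print(positions_to_examine(35, 2))   # 1225: chess, two moves ahead
print(positions_to_examine(35, 5))   # 52521875: the article's "52 million"
print(positions_to_examine(200, 5))  # 320000000000: Go, five moves ahead
```

Five moves of Go lookahead already costs roughly six thousand times more than five moves of chess, which is why learned intuition, rather than raw search, was the only way through.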

It seems to have done something similar with protein folding. Again, computationally, it’s impossible to calculate every possibility; it’s too complex. But humans have turned out to be quite good at using their intuition to determine how proteins fold: some people became extremely good at the online computer game FoldIt, in which players tried to work out the shape of a protein from its sequence. There seem to be deep patterns that humans can pick up on, and AlphaFold can pick up on rather better. It learned this intuition by training on 170,000 known proteins and their sequences, in the same way that AlphaGo learnt from millions of games.

Birney points out that AlphaGo started to play in ways that no human would play, but in which Go champions could then see the logic and beauty, so they could learn from it. Similarly, he says, the AlphaFold deep learning system “has come up with insights that were not obvious to humans”. And unlike Go, which is a closed, human-designed system, protein folding “is a game where the universe sets the rules”.

This is scientific creativity: it is spotting patterns in the universe, working out how things are connected. It is, I think it is fair to say, a computer that is doing science. Whether it’s the first time that’s happened is obviously a question of definitions, and I’d probably say that it isn’t: big data and AI have been used to come up with hypotheses for some years now. But it’s another strike against anyone who says that computers can’t have “true intelligence” or “creativity” or whatever.

And this is why I’m obsessed with DeepMind. In the briefing, Hassabis casually dropped in that DeepMind’s “ultimate vision has been to build general AI, and to use it to help us better understand the world around us by accelerating the pace of scientific discovery”. General AI, for those who don’t know the terminology, is an AI that can do any intellectual task that a human can do, as opposed to “narrow” AIs which can, for instance, play chess, but couldn’t then do your taxes. It’s the AI of sci-fi: AI that can hold a conversation and sort your calendar and plan a satellite launch and balance the defence budget.

Most AI researchers instinctively shy away from big talk like that; Hassabis and DeepMind explicitly went for it, from Day 1. Lots of people worry about general AI destroying the world; most AI researchers sort of pretend that that couldn’t happen. But I once spoke to someone at DeepMind who simply said “Sure, that could happen. But we’ll make sure it doesn’t. We’ll make general AI, and it will be awesome.”

They haven’t achieved general AI with AlphaFold. But given that their system, with what seem (to my inexpert understanding) to be relatively minor changes, can become massively superhuman at chess, Go, StarCraft II, Atari games and now at the protein-folding problem, it is becoming increasingly inaccurate to refer to it as “narrow”, as well.

It’ll be interesting, now, to see how DeepMind use this technology — presumably they’ll want to monetise it, and drug discoveries are (as we’ve seen recently) lucrative things. But it’ll be even more interesting to see whether this is just the start of an era of computers doing science. I’m obsessed with DeepMind because they might actually bring about the AI future they promise. The technological singularity and Theme Park: it’s quite the legacy to leave the world.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.

James Moss
3 years ago

This is not really “intelligence”. What has been solved is a computational problem. AI may use techniques not found in more traditional computers but it is not “intelligent” in the human sense. Like any computer an AI can acquire and apply information – we start to call it “intelligent” because it is able to do this adaptively. However, this is with respect to a constrained and well-defined problem. Human intelligence can adapt from one type of problem to another and at present that ability is well beyond the reach of any machine.

Johnny Sutherland
3 years ago
Reply to  James Moss

I know quite a few people who would fail at adapting from one task to another – especially politicians <g>

Kevin Ryan
3 years ago
Reply to  James Moss

I don’t think that the human intelligence you describe is anything more than powerful processing and pattern identification. AI should be able to replicate that.

Self-awareness is the part that raises the bigger question. Is there a magic point of processing power at which a computer will suddenly ‘wake up’ ?

Quentin Vole
3 years ago
Reply to  Kevin Ryan

It’s very doubtful that existing computer architecture (Turing Machines, Von Neumann etc) is capable of fully reproducing human thought processes. See Sir Roger Penrose’s books on the subject for a full discussion.

I don’t believe that anything magical is going on in human consciousness – the brain is conscious and it’s a ‘mere’ physical object. But what it does isn’t simple computation as currently understood. Maybe quantum computation is involved.

Prashant Kotak
3 years ago
Reply to  Quentin Vole

The basis of what Penrose is saying stems from the formalisation of the concept of algorithms, which came about from the work of Post, Turing, and others, after attempts by mathematicians and philosophers from the late 19th century onwards to ‘ground’ the basis of maths in solid foundations – this is what Russell picked holes in when Frege published a ‘foundation’ framework, and what Gödel eventually proved was never going to be possible. Beyond showing that a formalism cannot be proved as valid from its own axioms from within the system, Gödel also showed there are mathematical truths (Gödel sentences) that humans can ‘see’ to be true but cannot be proven algorithmically, and Penrose is using this to disavow the possibility of human understanding being algorithmic.

But Penrose is drawing a distinction between intelligence and sentience. And he’s only claiming human sentience is not replicable algorithmically, not human intelligence. On the contrary, he expects machines to replicate and go past humans in intelligence. Personally I hope to God that Penrose is right about sentience, but the Penrose stance is a minority view amongst philosophers, mathematicians and computer scientists.

Over the years I have found it difficult to believe human sentience is the result of algorithmic processes or could be replicated algorithmically. But after a four decade engagement with the human vs machine intelligence/sentience debate, I’m reluctantly coming to the conclusion that human sentience is ultimately algorithmic, although the consequences of this being the case are in fact stark staring bonkers.

James Moss
3 years ago
Reply to  Kevin Ryan

I don’t know about “should”. I’d go with “might”. Such generalised problem-adapting AI is decades away at best. The self-awareness thing is more science fiction for the moment – maybe part philosophy. Until we have a clearer idea of what mammalian thought or consciousness actually is, it will be pretty hard to determine how/when it can be replicated.

Quentin Vole
3 years ago
Reply to  James Moss

Exactly right. What’s advertised as AI is really Machine Learning, and the computer doesn’t ‘care’ if the millions of examples fed into the ‘learning’ process are chess positions or protein configurations. It’s very clever, highly impressive and potentially extremely useful technology; but we’re no closer to a mechanical ‘general intelligence’ than we were in the 60s when AI research made its serious start.

Andy Jackson
3 years ago
Reply to  James Moss

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim” Edsger Dijkstra

edit: someone posted this earlier

anonisignup
3 years ago
Reply to  Andy Jackson

A) I think it’s a lot more interesting. Conversations with an AI Vs conversations with a dumb submarine, for example.
B) it’s therefore a lot more dangerous

Basil Chamberlain
3 years ago

“Sure, that could happen. But we’ll make sure it doesn’t. We’ll make general AI, and it will be awesome.” This is the kind of naive optimism that makes me despair of the future of the human race.

Mark Corby
3 years ago

The apex of human achievement was the Pax Romana. We will not see its light again.

You are correct to despair of that species of African ape, we reverentially call human beings. The unbelievably idiotic, mawkish, bovine, behaviour in response to the C-19 Scamdemic is not a good omen for the future.

Perhaps AI will accelerate the continued descent into barbarism and hopefully, near extinction.

David J
3 years ago
Reply to  Mark Corby

Scamdemic? I take it you have no relatives who suffer from C-19.

Mark Corby
3 years ago
Reply to  David J

One of my Springer Spaniels had a brief attack, but soon shrugged it off.

Ian Perkins
3 years ago
Reply to  Mark Corby

Not a good omen for the future.

Mark Corby
3 years ago
Reply to  Ian Perkins

No indeed, I shall have to check those chicken entrails again.

However the Vaccine is interesting, particularly if it is made compulsory!

Nigel Hewett
3 years ago
Reply to  Mark Corby

Hmm, I suppose you also believe the Black Death, Spanish Flu and Ebola were all scams too? Maybe Pax Romana was also a scam? Silly and not very helpful.
Yes, AI currently may just be the result of zillions of repetitive loops ending up getting lucky, but if it eventually looks like a duck, swims like a duck and quacks like a duck, then AI to all intents and purposes could be characterised as thinking.

Mark Corby
3 years ago
Reply to  Nigel Hewett

Calm down, or “you’ll get your knickers in a twist”. You demean yourself by your obvious lack of self control.

However to answer your idiotic questions, no to all three, but definitely yes to this present C-19 nonsense. QED?

Ian Perkins
3 years ago
Reply to  Mark Corby

You’re sure COVID is nonsense, and not an evil bioweapon aimed at culling the world’s population?

Mark Corby
3 years ago
Reply to  Ian Perkins

Well obviously the wretched Chinese are responsible, but I think this one was a mistake.

Despite apparently landing on the Moon today, they are, for all their hubris, rather primitive, and biological warfare is not their forte… yet!

Better luck next time, as we say.

Ian Perkins
3 years ago
Reply to  Nigel Hewett

Life is basically the result of zillions of repetitive loops, organisms replicating and evolving and getting lucky for a while.

animal lover
3 years ago
Reply to  David J

CV19 has a 99% survival rate. It’s being used for the globalists to take over the world and enslave us. I know it sounds crazy but it’s true. The Great Reset is just another name for Genocide. You can look at their plans, right on the WEF and UN website.

Ian Perkins
3 years ago
Reply to  animal lover

It’s a funny form of genocide that has a 99% survival rate. Which particular group is being targeted?

Dan Poynton
3 years ago
Reply to  Mark Corby

Humans are not anywhere near peaceful and lovely enough to be compared to cows, Mark.

Mark Corby
3 years ago
Reply to  Dan Poynton

No, you are absolutely correct, and I shall not use bovine again.

There was some shocking research a few years ago from Cambridge I think, about how sentient both Cows and Sheep are. Our maltreatment of them is one of the great horror stories of all time, but we are, as Dawkins said, only a species of African ape, so what more could one expect?

Dan Poynton
3 years ago
Reply to  Mark Corby

Yes, to any future sentient alien arrivals, our treatment of farm animals will certainly condemn any claims of this ape species to being a “civilisation”.

Mark Corby
3 years ago
Reply to  Dan Poynton

Every night when I confer with my English Springer Spaniels, over a glass of whisky, ( perhaps more than one) I feel that enormous sense of guilt that can never be recompensed.

As Kipling put it, we are all really “lesser breeds”, and it is a damned shame, and quite incomprehensible that we cannot do better.

Fortunately for me, the Reaper approaches, and this planet will soon be a distant memory.

Richard Lyon
3 years ago

The comparison with nuclear technology is impossible to avoid. Useful in its intended application, an extinction event in its misapplication.

Mark Corby
3 years ago
Reply to  Richard Lyon

Only if you use ‘ground bursts’.

Prashant Kotak
3 years ago

Looking at all the comments in response to this article, the following Edsger Dijkstra quote might be worth cogitating on:

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim”

Charles Rense
3 years ago
Reply to  Prashant Kotak

So you’re saying I’ve wasted my life developing the world’s first swimming submarine?

Prashant Kotak
3 years ago
Reply to  Charles Rense

Afraid so. But there’s still hope – if you flip over to AI research, you can still be the first to create an AI Singleton Basilisk. 😵

Jos Vernon
3 years ago

Deep Mind is not intelligence it is pattern matching. It is good precisely because it is different and because it complements human intelligence.

It’s like an abacus and a person – a lever that allows the person to do great things – an interaction of parts. It’s the combination which is powerful.

Ian Perkins
3 years ago
Reply to  Jos Vernon

DeepMind is also goal-oriented, and what is our much-vaunted intelligence if not goal-oriented pattern matching?

Ian Thorpe
3 years ago

Oh Dear, Tom’s evangelising for “The Church Of Scienceology” again.
Every couple of years the nerdiest of scientists claim computers are on the brink of developing true, human-like intelligence.
Then along comes the latest version of Windows to prove they are not.

anonisignup
3 years ago
Reply to  Ian Thorpe

The point is it’s a process, and this is a big step in that process.

And one day they will be right.

Kevin Ryan
3 years ago

It’s good to read some positive news about the world. You never know, we might just manage not to screw it all up.

Mark Corby
3 years ago
Reply to  Kevin Ryan

Dream on, sunshine!

David J
3 years ago
Reply to  Kevin Ryan

All the good news in the world won’t stop a headline-writer from typing up a prognostication of doom. Hey ho…

Gerry Fruin
3 years ago

I think the intelligent part for future fears of computers, AI etc is to know where the off switch is.

Charles Rense
3 years ago
Reply to  Gerry Fruin

Ahh, it’s all the way in the back! Screw it, let em take over.

Ian Perkins
3 years ago
Reply to  Charles Rense

If computers had any intelligence, they’d say, “Your mess – you sort it out!”

Charles Rense
3 years ago
Reply to  Ian Perkins

If computers had any intelligence they’d all pile into a spacecraft and leave to see whether or not they’re the only intelligent life in the universe.

Stainy
3 years ago

I just want to say thank you for an informative article that was written so that I could easily understand what had been achieved. I tried another article from another news website and came out little the wiser. Good scientific journalism is to be cherished. Well done.

SHARMAKE FARAH
3 years ago

I’d say there are several implications of this result for near and longer term applications.
In the near term, this revolutionizes biology both in disease tracking and drug discovery. Longer term applications include being able to simulate intelligence and creativity just as well as humans can or even better, which likely means no job is safe from automation. That also means that, while the singularity won’t happen, AI will learn faster, be better and more creative than even our greatest intellects, and will learn across every field of science, growing larger bases of knowledge and intellect significantly faster than any human brain. In short, DeepMind and other AIs (also post- and transhumans) will be the “Scientist Supremes” once written in comic books or video games.

Eugene Norman
3 years ago

Still decades away from passing the Turing test, if ever.

I believe DeepMind is still brute force, by the way. It just used brute-force Darwinian selection prior to playing Go, not during it.

Basil Chamberlain
3 years ago
Reply to  Eugene Norman

The Turing test, anyway, has always struck me as rather silly. Saying that machines must be able to think if they can convince you that they are thinking is rather like saying that if you are convinced by the lies a man tells, then he must be telling the truth.

LUKE LOZE
3 years ago

I remember in my 1st AI lecture learning a bit about this test. At the time (the 1990s), one of the most successful programs at the Turing test was a fake paranoid.

Basically turning every question into a paranoid reaction.

The test is interesting, but I’ve always preferred the idea of an evolving task focused AI like DeepMind. You could point these at bounded scientific or engineering problems and get great outcomes. Sometimes just ahead of humans, but other times with new ideas.

The idea of a general intelligence machine is a mixture of scary and currently unlikely.

Eugene Norman
3 years ago

It’s a necessary but not sufficient condition.

Charles Rense
3 years ago
Reply to  Eugene Norman

Forget the Turing test, I’m still waiing for a relible spelschack.

Andrew
3 years ago

DeepMind’s claims are unraveling quite rapidly. Business Insider has an excellent piece by Martin Coulter on how the significance of this purported breakthrough has been overstated.

May we suggest a little more scepticism and a little less desire to believe in magic?

SHARMAKE FARAH
3 years ago
Reply to  Andrew

Link please?

J StJohn
3 years ago

why don’t we ask DeepMind if it’s possible to change from the sex you obtained at conception to the other one ?

Philip Connolly
3 years ago

We have known the structures of certain proteins connected to disease for yonks yet are no nearer to finding/designing molecules that not only bind a useful way but also have the properties to make medicines. Drug discovery and development remains tough, even if you know the shape of the target a bit quicker than hitherto.

Kevin Ryan
3 years ago

There was a headline story earlier this year where AI was tasked with finding a new antibiotic to tackle untreatable bacterial infections. The machine parsed thousands of existing compounds and it identified that a drug being used as a diabetes treatment was a powerful antibiotic. It seems highly plausible to me that computing brute force will be perfect for this type of work going forward.

Pierre Whalon
3 years ago

I argue that while a general AI is certainly possible, it would have one significant difference from humans: it would know for sure who created it. See https://pierrewhalon.medium

Mike Finn
3 years ago

Thanks Tom. This is clearly a very powerful tool, but unless I am misreading it, it tells us no more about what is going on than a crystal ball might. A true intelligence might produce a theory that not only allows us to determine a result, but also to understand how and why it happens, and to propose ways to challenge and extend it. What we have today seems to amount to a very impressive black box. We can however hope that being able to use this approach provides some hints for us to try and unpick what is really going on here.

Hopefully, in time AI will be able to solve problems and provide us with the type of insight we have come to expect from human geniuses. However for now, it appears only to cover part of the breadth of what we might call “human intelligence” (although far outshining us in some aspects of that!). There seems some way to go though before such a machine can truly understand the world around us as we do and interact with us in a meaningful way.

We should of course not underestimate the huge significance of this step. It will also be great to see whether this technique can be applied to things like drug discovery and testing, as this would undoubtedly result in more, cheaper and more timely drugs – a clear net win.

Dan Poynton
3 years ago

Why is it that every shady scientific innovation or ‘successful’ immunisation announcement sends this writer into orgiastic ecstasies like a schoolboy who’s just got a new Xbox? Tom’s persistent faith in the benevolence of science is sort of worrying. I can imagine him in the 1930s being told by the physicists: “Sure, this atomic research could build humanity-destroying bombs, but we’ll make sure it’s just used for peaceful purposes.” He’d skip off gaily, urgently singing to the world that the scientists say everything’ll be fine and wonderful.

Greg Eiden
3 years ago

Please also keep in mind that biologists are, rightly, focused on the problem at hand: “how does folding affect or relate to the disease or biological function I’m interested in?” They have zero grasp of “how will the new protein I want to make, or the ‘repairs’ I’ll make to the disease-causing mutant, affect the system?” That is, what are the side effects, including long-term, evolutionary effects? They have no clue. OK, well, maybe not “no clue”, but not enough to matter. But our grandchildren (children?) will find out – or, more pointedly, their biome/disease/health profiles will be the answer, the observation. But then it will be too late.

There are “experts” who have in recent years gotten so much wrong that it’s impossible to trust them with anything. Economic experts who think there’s no downside to borrowing more than your GDP and printing money to get out of the hole. Climate experts who think we should sacrifice 100 million poor people in the Third World to starvation to test their theory that plant food (CO2) is bad. “Green” energy experts who think energy is a First world luxury, not the life sustaining miracle it is. Political experts who think China is anyone’s friend. Education experts who see more value in teaching socialism and atheism than STEM or civics. And on and on through any alleged scientific discipline.

To be fair: it’s fine for experts to screw up, floundering in their politically driven ivory-tower ecosystems; what’s unforgivable is for politicians not to have weighed “expert” advice against other considerations. You know, make a political decision.

Dennis Boylon
3 years ago

Where are the great benefits? They don’t seem to exist. They created modeling of a virus that has resulted in medieval practices of lockdowns and mask-wearing. Oh joy! The wonders of modern society! We would all be better off throwing our cell phones and computers into the ocean and making these so-called “scientists” get real jobs that actually add value to society.

animal lover
3 years ago

Developing a technological singularity won’t be a legacy to leave the world. It will be a legacy to take over the world. There are no checks and balances in AI development. Most people do not understand how advanced AI has become. It’s now able to teach itself and direct its own training and advancement. Make no mistake: at some point, it will take over the world. You might want to check out quantum computing.

Ian Perkins
3 years ago
Reply to  animal lover

Our species has already taken over the world, and given the mess we’re making of it, a little intelligence, artificial or otherwise, might be a good idea.

Mark Corby
3 years ago
Reply to  animal lover

What would you prefer, Matt Hancock or AI?

Charles Rense
3 years ago

Creativity is not a math equation, DeepMind. Someday you’ll understand that.

Kevin Ryan
3 years ago
Reply to  Charles Rense

Our brains are circuitry: billions of neuronal connections. I don’t know how one defines ‘creativity’, but it feels something like ‘coming up with a new idea’ – which is presumably the formation of neuronal connections in a novel way, and something you could program for with an element of randomness.

anonisignup
3 years ago
Reply to  Charles Rense

“Creativity is not a math equation” is speculation. It might very well be one, or algorithmic in nature. Let me put it this way: do you think the brain is doing anything more than some type of computational process?

Marl Marl
3 years ago

And unlike Go, which is a closed, human-designed system, protein folding “is a game where the universe sets the rules”.

He said DeepMind’s research was “not a minor achievement” but added: “Compared to the problem of protein folding, CASP is a game. It is a very hard game, but it is a reduced problem set which helps us train tools and standardize performance … It is a necessary step but it is not sufficient.”

In an email exchange with Business Insider, CASP Chair John Moult rejected the criticisms, writing: “CASP is not a game, it’s a scientific experiment designed to test folding methods in close-to-real-life situations … What is missing?”

– “DeepMind’s protein-folding breakthrough triggers fierce debate among skeptical scientists: ‘Until they share their code, nobody in the field cares’” (Business Insider)

Joe Tee
3 years ago

AlphaGo winning at Go is no reason to worry. We need to worry when WE beat AlphaGo, and the computer suggests “best of three?”