AI vs IA: How Will Humans and Technologies Interplay in the Future of Scholarly Communication?
https://asa1cadmoremedia.blob.core.windows.net/asset-24929ef8-abfe-454d-8100-b3758de8b3da/3 - AI versus IA.mp4
DANIEL HOOK: It's a great pleasure to be here. This topic is a little offbeat for me. While we do quite a lot of AI-type work at Digital Science, in fact, I am sadly under-qualified to talk on this area. And so, this is going to be something of an interesting journey for me, as well. It is, however, something that I have been quite interested in in the last few years. And so, I'm quite interested in how we think of the space in which we all work, in terms of AI, and in terms of IA.
DANIEL HOOK: And by IA here, I'm really thinking about Intelligent Augmentation. So, to start: most of us in this room will have looked at companies we might want to invest in or might want to work with. And I think the enduring thing that all of us know is that .AI at the end of a company's URL basically adds $10 million to the valuation.
DANIEL HOOK: This is not the type of AI I really want to think about today. I want to think slightly more broadly. And I figure that actually, I really don't know the level of people in the room. Who has any kind of background in AI? A little bit. A few people have some background in AI.
DANIEL HOOK: Excellent. So I can say lots of things and get away with them. Tyler, I've got my eye on you. Good. So I should be able to get away with some things, but I thought actually it might be quite interesting for some people who have a little bit of a background in the history of AI because I think it is a fascinating field and something that is now starting to impinge on the world in which we live.
DANIEL HOOK: I think the most important event in AI that we have seen in recent years is actually Go-- to do with Go. The game of Go is a complicated game. In some sense, the rules are simple, but in fact, how you handle this game in computer terms-- so that you can get a computer to play you and even beat you-- has been really challenging. In fact, significantly more challenging than chess.
DANIEL HOOK: We saw the first chess computers probably in the '50s and '60s. We saw them become mainstream in the '70s and '80s. And by 1997, Garry Kasparov was beaten by a chess computer. However, really interestingly, Garry Kasparov was not beaten comprehensively. In the match they played, the chess computer won by 3 1/2 games to Kasparov's 2 1/2. So the computer, which was IBM's Deep Blue, only won by a single game.
DANIEL HOOK: And in some sense, if you know something about that background, it was, in fact, just by brute force. Arguably, the IBM team programmed the computer not to play chess, not to beat a chess player, but to beat Kasparov, specifically. They studied how Kasparov played and they designed an engine that would beat Kasparov. And this is not what people are trying to do with AI. The Holy Grail of AI is generalized artificial intelligence, this idea that you can get a machine that is going to be able to interact with a human and to be as intelligent as a human in its perception and its interaction, its ability to integrate with society, in some sense, its ability to be able to perceive and understand what we understand and perceive, and potentially to do that better than we can.
DANIEL HOOK: That's a pretty high bar. But move forward a few years, and in 2016, a monumental event happens. Lee Sedol, the world champion of Go, was beaten by the first Go computer, AlphaGo. The DeepMind company, who is affiliated with Google, came up with a computer that was actually able not to simply beat Lee Sedol, but to actually beat Go players in general.
DANIEL HOOK: And this beating wasn't by a single game. It was a five game match, and the computer won four out of the five games. So it was a comprehensive beating. It was a significant step forward, something that we haven't seen before. But this is actually where the story of AI kind of takes off. AI, as we think of it these days, people will know terms, perhaps, like machine learning and things like this.
DANIEL HOOK: These ideas have been around for 30 or 40 years. But what happened in 2016 is that the computing power and the amount of data available finally caught up with the promise of the algorithms we've had for 40 years. And that's really the sea change. It's much the same as the way the Space Race started in the late 1950s and 1960s, when the Russians put their spacecraft into orbit around the world.
DANIEL HOOK: And you see things like Sputnik and Gagarin going up. This caused the Americans to be concerned and to be worried about how the world was developing and it signaled the beginning of the Space Race. With the advent of AlphaGo, the Chinese had a similar fire lit in their bellies. They were worried about how computing would move forward, and this AlphaGo game beating their esteemed champion-- and Go is something that isn't just a game in China, it's very much part of the social fabric-- was something for them that they really needed to be worried about.
DANIEL HOOK: And so if you look at Chinese investment into research in AI, you see a complete sea change in 2016. The amount of research funding that is now going into AI and into innovation around the translation of AI is actually scary. We've done some analysis with our data at Digital Science, and I can tell you that if you look at the top 10 universities in the world doing work on AI, nine of them are in China.
DANIEL HOOK: One of them is in the US. MIT, on the scale of these universities in China, looks like a small island in comparison to continents, in terms of the amount of work and the amount of funding that's going on. The collaborative landscape is shifting towards China in a massive way around this particular research area. So, this is really fruit of the-- I would argue fruits of what happened in 2016.
DANIEL HOOK: The original work by Geoffrey Hinton has been around for a while. I think that when Hinton wrote that paper 30 years ago, he could not have guessed that it would now be sitting there with 8,000 citations, or that it would be probably the seminal work powering the deep learning revolution that we're seeing going forward now. And effectively, data has become the new oil of our age.
DANIEL HOOK: I think everybody knows that at some level, but there are different levels of interfacing with that comment. I think we all know that in running our businesses, we want to be more data-supported. We want to think more about the data that's actually helping us to move forward, and we try to use those data to create metrics and create insights. But with the data that's available in AI-- the data that AI needs to create its ecosystem and to create these deep learning environments-- we're seeing an exponential speedup in what data means to this space.
DANIEL HOOK: And, in fact, China has a very interesting position in this because China has so much data through things like WeChat and through their various ecosystems where researchers can get access to significant amounts of data to work with. It provides them a significant commercial advantage. They have 1.3 billion people in the country, all of whom are throwing off data. All of that data is effectively and potentially available to researchers inside China, but not outside China.
DANIEL HOOK: So, this is a significant competitive advantage and one that we need to consider carefully going forwards. Open data--I've always been a big fan of open data and open science, but of course, one has to be quite careful about this. I remember running a user forum back when I was CEO of Symplectic--this is a decade ago. I did one of those exercises where you put a statement up on the board and say to everybody in the room: if you agree with the statement, go to this side of the room; if you disagree, go to the other side--and anywhere in between is fine.
DANIEL HOOK: Then I get someone with a microphone to go and attack people--get comments from them. The statement that I put on the board was "my institution believes firmly that open data is for the good of the ecosystem, and that we should make our data open." One person went to one end of the room--the "yes, we should definitely share our data" end--then quickly moved to the complete opposite end, and consequently marked himself up for attack.
DANIEL HOOK: So I gave the microphone to my assistant. She went over and talked to him and said, why did you move? And he said, well, I thought about it. And my institution firmly believes that every other institution should make their data available. [LAUGHTER] So-- and I still think that many institutions are there. And in some sense, although I think open data and open science is a great thing, it is something that we need to try and do together.
DANIEL HOOK: It's something where I think we're still building trust. And I think in how China is moving forward, this is something where they're going to continue to need to build trust because they're going to need to make their data open at some level to make sure that it's an equal partnership. So, if you think about data as a new system of or a new raw material that we're working with, then it's quite interesting to look at the economics of industrial revolutions.
DANIEL HOOK: And if you look at industrial revolutions from the past, you can see that there's a time difference between when the initial work of the Industrial Revolution takes place-- the initial things that trigger the revolution-- and then many years after you see the effects. Now, just because we are used to things happening quickly, it does not mean that they actually happen quickly. Most of what we experience today in the world is actually fruit of what was happening in the late 1960s.
DANIEL HOOK: In the late 1960s, the US was spending around 25% of its national disposable income, at governmental level, on research. That's nowhere near the percentage being spent today. As a result of spending that money on research and research infrastructure, we gained things like the internet, the world wide web, and the microcomputer revolution.
DANIEL HOOK: These things took 20 years to come about, and we're still reaping the benefits today. So, in thinking about artificial intelligence, although we're seeing a frightening pace of things moving forward, it is important to realize that the actual effects of what we're seeing today are probably 20, 30 years off. So the world that we're building today, the conceptualization that we're putting together, is something that actually will pay back over a period of time.
DANIEL HOOK: The other interesting thing about looking at things like this is that, in fact, it is fascinating that scientists and researchers are the ones that come up with these revolutions quite often, but we're remarkably impervious to accepting them ourselves. A lot of researchers are now kind of having the internet done to them. They invented the internet originally, but they didn't really think about what it was at the time.
DANIEL HOOK: It was an intellectual exercise. They moved on from that. They're doing other things. But, in fact, the professionalization of research, how institutions are now moving forward, the types of thoughts that people are having around how their research is managed, and the technologies that we have to help manage and publish and integrate with research are all things that are now being done to us.
DANIEL HOOK: They're not things where we're holding the reins and moving ahead of the curve. The fact that the paper hasn't really changed in 350 years shows you how ensconced the people we serve--not actually the people in this room--are in the idea of the printed thing that they hold. Tangibility and physicality give us a tremendously powerful relationship with papers.
DANIEL HOOK: And actually getting over those relationships is really challenging if you're a technologist and you want to move things forward. So the way I think about where we are right now is that we are effectively tool providers. We are trying to get people to move from this state--where they are gradually using our tools and, hopefully, augmenting their intelligence by tool usage--to the stage where we move into the next phase, and we start seeing tools move in another direction.
DANIEL HOOK: The question is, when will we get to this, and is this where we want to be? I would suggest not necessarily. But it is important to remember that we have choices in how this works. A lot of people feel, as I was saying, the internet is done to them. Internet technology is done to them. But actually, I think in our community, almost exclusively, we have the ability to have an intelligent conversation with our stakeholders and work out where the best place for us to end up is.
DANIEL HOOK: This is not a globular cluster or a galactic simulation, this is actually a data dump out of Dimensions where we have looked at AI research. And we have taken all of the AI research in the world, around a million papers and their connections, and we have produced this on a very large scale here. And we've colored it associated with different research areas that happen inside AI. AI is many things to many people.
DANIEL HOOK: And we can break that down a little bit. So, we've done some topic modeling to try to understand what's going on in different fields. And we've come up with 15 high-level fields and we've plotted the activity of research for each country in the world. And you can see quite interesting things come out even at this level. You notice that China is perhaps a little more interested in facial recognition than the rest of us.
DANIEL HOOK: That's a thing to take home and think about. It's actually interesting that lots of researchers around the world are interested in facial recognition, not just the Chinese. They just seem to be the leading ones. If you look at these footprints that I've printed--if you saw them up close, you would see that the scale differs on each of the axes for each of the countries.
DANIEL HOOK: So even though they're broadly similarly shaped, the scale of the research in each case is different. And what you would notice if I plotted this on a rank basis is that in 13 out of 15 areas, China is the largest producer of AI research in any of those fields. So, China is definitely where the locus of information and the work is. The US is number two.
DANIEL HOOK: But we also see other countries--if you take the EU as a whole, it's certainly up there with America as a significant power in this space. But no other single country has the kind of capability and research that the US and China do in this space right now. So how is this actually going to disrupt our space?
DANIEL HOOK: There's a great report from PWC that actually looks, in global terms, at how things move forward, and I really recommend it to you because it's a wonderful summary of the whole area in one go, and it's very applicable. They talk about AI happening to us in three waves: the algorithmic wave, the augmentation wave, and the autonomy wave.
DANIEL HOOK: Now, broadly speaking, algorithms-- the algorithmic wave is the thing that's kind of here right now. It's hitting us right now. This is where we're seeing recommender algorithms do things for us, which I think are tremendously dangerous, and maybe we'll have time to come back to that a little bit later. Augmentation is where you're starting to see artificial intelligence tools placed in people's hands.
DANIEL HOOK: And then autonomy is where you see robots and robotics starting to take more of a hold. And I think all of them have some applicability in our space, but in fact, some more than others. And I think-- I'll tell you the punchline first. The punchline I think, is, for us, that augmentation is probably where we will sit for a long time in this space.
DANIEL HOOK: I think that autonomy is not so relevant to where most of us are. I think robots have limited purchase in most of what we're doing because we're in a more creative industry. But I think augmentation is something that can really bring significant gains to all of us, especially with cost pressures in this space which we all inhabit.
DANIEL HOOK: We need to become more efficient. We need to broaden and diversify the things that we bring to people, and a lot of that can be done through augmentation. If you look at where PWC come out on some of these things, you can see that in Wave 1--the early 2020s--the algorithmic revolution doesn't actually lead to too many job losses. These are really tiny-level effects.
DANIEL HOOK: This is a few percent. Wave 2, in the late 2020s-- this is when we start getting augmentation in place. So this is for the workforce, overall. They think there's going to be a much more significant hit, and you could see 15% to 20% of jobs disappearing as we know them currently. And in Wave 3, in the mid 2030s, you can see now this kind of long time scale that I was talking about coming into play.
DANIEL HOOK: You can see much higher disruption in the job market. And the question is, does that actually apply to us in our area? And I would say in a limited way it does, but I don't think that we're going to see anywhere near the Wave 3 kind of disruption that you'll see in the overall job market. If you look at particular industries, you can see that there are different profiles, and I would argue that, in fact, we're much closer to financial services than we are transport.
DANIEL HOOK: If you think about the types of things we're involved in in our space--lots of data, lots of creativity, lots of analytics, lots of things that are tricky for other people to intuit, or that you could potentially make algorithmic but would certainly want a person looking at before they get committed anywhere--then you can see why I think the yellow line here is the one that's more appropriate to us.
DANIEL HOOK: So I don't think it's the end of the world yet--I always preface with that. But you can also see here, looking at it on a slightly different axis, that manual tasks are going to go down significantly. So anybody who wants to do a postdoc is going to have a very different relationship with their work, in various subject areas, in the world to come.
DANIEL HOOK: Routine tasks--again, weighing mostly on the postdoc here, or the PhD student. And various things here in management--academics don't really do management, as you might have noticed. It's something that you have to grow into; you don't get trained for it. So there are all sorts of different aspects to this which are actually quite marginal in the type of place where we're working.
DANIEL HOOK: And so I show these really to show you the kind of places where most people are doing analysis and where most people are thinking about things going-- actually, places which don't hit us centrally as a market to work in. The one final piece of this that I thought was completely fascinating was which countries are going to be hit most badly, given the analysis that PWC did.
DANIEL HOOK: And you can see, in fact, that countries like the US are actually fairly close to the top of the table. The UK is fairly middling. We're seeing countries like New Zealand, Japan, and Russia being the ones which most benefit from the AI revolution. So you might consider that over the next few years, in the way that we're seeing research move towards China, as I've indicated before, we might also see other parts of what we do shifting to other loci in the world, simply because they're better set up to take advantage of the tools that we in this room are going to be building.
DANIEL HOOK: So, our standard markets may not be our standard markets unless they can retool and move themselves quite quickly. Given that we're in New York, I thought it would be good to call out this research institute. This is at NYU, the AI Now Institute. Again, I strongly recommend reading their material and looking at what they are doing. I think one of the really fascinating things when you start to look at artificial intelligence is this real concern that as we get towards machine learning and deep learning, we start to lose our understanding of what's happening inside the box.
DANIEL HOOK: I remember when I was at school, I had to-- I was that age group where I had to learn to both use a calculator and do the calculation in my head. And if I couldn't do the calculation in my head then I wasn't allowed to use the calculator. And so, it was only by understanding how the calculation worked that one got to play with the calculator.
DANIEL HOOK: And I think we as a society are a little bit in this place right now with AI. When you think about deep learning, and you start to understand deep learning technologies, the very structure of the deep learning ecosystem seems to be that there is a black box into which one cannot look. You put in the data, the machine then teaches itself from the data, and it creates a mechanism by which you can put in an input and get an output.
DANIEL HOOK: But that black box in the middle is something that we lack the technical capability, right now, to understand. It's a closed world to us. And that has led to some really interesting developments. AI Now wrote a report within the last 12 months called "Discriminating Systems: Gender, Race and Power in AI." This is a fantastic report. It was referred to in "The Guardian" in connection with some of the people at Google going out on strike over some of the work that Google was doing.
DANIEL HOOK: I think the most interesting of these cases is actually that Google were trying to create an AI to help them with their hiring practices. They put all of their hiring data into an artificial intelligence, and the artificial intelligence would only allow them to hire white males, which arguably isn't terribly PC or really great for the business. And so, Google obviously realized that this was a big issue.
DANIEL HOOK: The walkout of their staff was actually to do with other projects they're working on. But because you have this black box, there's a lot that's hidden, including the fact that your AI is making ethical judgments on your behalf--judgments which are now completely hidden and from which you're now completely divorced. And I would argue that in the work we do in research, and in supporting research, this is actually the biggest danger of what we do.
DANIEL HOOK: Already, this type of technology has been shown to give poor results in hiring. It's giving biased results in facial recognition. It already shows that there are errors or biases in the financial algorithms that the financial sector uses to manage and play with our money. So these are already things that are highly concerning, because they're starting to affect the society in which we live.
DANIEL HOOK: Who knows? There may be issues in voter systems that are based on this type of work as well. So, it's actually quite heartening to see that there are now articles about this--this is Been Kim, a research scientist at Google. She is starting to work on algorithms at Google Brain which allow us to take that black box apart--so that even if the data pulled into the system tells the model that only white males are doctors, which is exactly what this algorithm did, we can start challenging it, taking it apart, and holding it to some kind of ethical standard.
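As a very loose sketch of the idea--learn a direction in a model's activation space that represents a human concept, then test how often the model's output is sensitive to that direction--here is a toy numpy version. Everything in it (the fake layer, the quadratic head, the least-squares fit standing in for the published method's logistic regression) is an invented stand-in, not the real implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a trained network: a linear "hidden layer" and a
# quadratic output head whose gradient varies with the input.
W_layer = rng.standard_normal((4, 8))
w_head = rng.standard_normal(8)

def layer(x):                          # fake hidden-layer activations
    return x @ W_layer

def output_grad(a):                    # gradient of f(a) = (a . w)^2
    return 2 * (a @ w_head) * w_head

# 1. Gather activations for concept examples vs. random counterexamples.
concept_acts = layer(rng.standard_normal((50, 4)) + 1.0)  # shifted cluster
random_acts = layer(rng.standard_normal((50, 4)))

# 2. Fit a linear separator between the two sets; its normal vector is
#    the learned "concept direction" in activation space.
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(50), -np.ones(50)])
cav, *_ = np.linalg.lstsq(X, y, rcond=None)
cav /= np.linalg.norm(cav)

# 3. Score: on test inputs, how often does nudging the activations in
#    the concept direction increase the model's output?
test_acts = layer(rng.standard_normal((100, 4)))
score = np.mean([float(output_grad(a) @ cav > 0) for a in test_acts])
print(f"concept sensitivity score: {score:.2f}")
```

The score is the fraction of inputs whose output would rise if the activations moved toward the concept--a crude check on what the black box is actually attending to.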
DANIEL HOOK: And the technical term for this is TCAV--Testing with Concept Activation Vectors. This is a technology that I think you will start to see become much more pervasive in the things that we're doing, so that we have checks and balances on this black box that's being introduced. If we think of our own systems, and trace back to way back when: in some sense, the impact factor is an algorithm for us to shut off our brains and trust something we don't completely understand--a bit of a black box.
DANIEL HOOK: If we move forward, and we start thinking about the revolution that happened in the '90s and '00s, recommender systems are another thing where we've been taught to shut off our brain and take the thing the computer gives us. Google, as a search mechanism, is crafting all of its search results for each one of you individually. It knows who you are. You log into it.
DANIEL HOOK: Even if you don't log into it, it knows what your IP address is. It will craft its search results to favor the world that you want to see. And this is where social bubbles come from. So these are all AI technologies that are unwittingly affecting our everyday lives right now. Imagine a world in which you have a research assessment exercise and your government decides to rank your universities and give them money based on how good they are.
DANIEL HOOK: Then you have a world--which I think of as a kind of high-evaluation, high-touch world--which is now using metrics to decide how to rate your institutions. But those metrics are intrinsically based on a feedback loop, which means that you tend to fund popular research rather than good research. The argument being: there is so much research now being produced--I think in Dimensions this year, we'll have something like 5 million publications, 5 million things with a DOI that pertain to research.
DANIEL HOOK: Those 5 million things, they're far too much to read, so we won't read them. We'll let the computer read them for us and we'll then make an informed decision based on what the computer says. But, of course, we've taught the computer that a good thing is a popular thing, and so you get a feedback mechanism where you only then fund popular things.
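A toy simulation makes the feedback loop concrete. All the numbers, and the winner-take-most allocation rule, are invented for illustration--the point is only that a metric-driven loop amplifies small initial differences in popularity:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: each round, only the most-cited groups get funded, and
# funded groups produce more-cited work, feeding back into the metric.
n_groups = 20
citations = rng.uniform(1.0, 2.0, n_groups)   # groups start nearly equal

initial_top_share = np.sort(citations)[-5:].sum() / citations.sum()

for _ in range(30):
    funded = np.argsort(citations)[-5:]   # metric-driven allocation: top 5 only
    citations[funded] *= 1.1              # popularity begets popularity

final_top_share = np.sort(citations)[-5:].sum() / citations.sum()
print(f"top-5 share of citations: {initial_top_share:.0%} -> {final_top_share:.0%}")
```

Small initial gaps become a near-monopoly of the citation pool: the system ends up funding what is popular, not necessarily what is good.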
DANIEL HOOK: And that's not just in a system with a national exercise. It's a weaker effect in a system without one, but funders like the NIH and the NSF will still be looking at things in metrics-driven ways, and they will be pushing their results towards a popular outcome, whether they realize it or not. And this is why I talk about data-supported decisions, rather than data-driven decisions.
DANIEL HOOK: The idea that data should drive the decision should, I think, be an abhorrent one to all of us in this room. We want to make our own decisions, which means we need to look critically at the data, and we also need to understand what's driving the outcomes of those data. As for the places where we're starting to see AIs come up, beyond just the metrics space in research--I just wanted to touch on a few things that either we're doing at Digital Science or that I've seen happening in the space, which I think are quite interesting.
DANIEL HOOK: So in actually carrying out research, I have seen several technologies in the last 12 months where people in labs have a personal assistant, like a smart speaker, and they can give the smart speaker instructions to start a timer, or to tell them what the next point in the protocol is that they're executing, or to track something which might give us more insight into reproducibility of an experiment.
DANIEL HOOK: So this could be a very positive technology, but it's certainly one where having a blind thing that completely believes the researcher, and whatever they say they're doing, raises ethical questions where we don't completely see what's going on. And if we trust it too much, we think that because it's on the scene, because it's happening in real time, it must be true.
DANIEL HOOK: And actually, there are a lot of things to worry about with this. There's a company that we made a small investment into a few years ago called Tetrascience. These are some of Digital Science's sexier properties, if you will. Tetrascience is a company that is Internet of Things for the lab. So it is trying to make your lab equipment more internet aware, more aware of their surroundings.
DANIEL HOOK: So with this type of technology, you could imagine coupling the lab-assistant technology--telling it which timer you're starting and which protocol you're executing--with a data stream from Tetrascience telling you what's actually happening in the lab: what's switching on, what's switching off, what's at what temperature. Then you have a more complete view of what's happening in the universe of that lab at any moment in time.
DANIEL HOOK: Transcriptic is probably our company with the most buzzwords. It is a robotic lab in the cloud using machine learning to optimize experimental outcomes. Transcriptic is kind of like AWS for a lab: you take your experiment, you send all your materials to this place in Menlo Park, they load it up in what looks like a shipping container with a robotic arm and equipment, and they actually run your experiment for you inside this confined environment.
DANIEL HOOK: And you can actually automate the experiments and get them to do a complete scan of the parameter space, with an AI that then zooms in on the most promising features of your experiment. So these are all technologies which move us significantly further forward in terms of getting rid of postdocs. Another postdoc-killing technology--going back to HAL again--is 1715labs.
DANIEL HOOK: 1715labs--we don't have an investment in this, by the way, just to be clear--comes out of the University of Oxford and the Zooniverse project, Galaxy Zoo. It's extremely good at marking up data sets, using machine learning to enhance the speed and quality with which you can tag them, give them different properties, and track them for machine learning purposes.
DANIEL HOOK: So all of these, you can think of as sitting in different parts of the experimental cycle that could, again, stop us needing postdocs. Writefull is something that we're very proud to be invested in. We invested in this company when it was just two guys looking at machine learning approaches to helping us write better papers. They have a complete machine learning model based on the academic literature, so that when you write in the Writefull environment, it will not only tell you when your grammar is wrong and when your spelling is wrong, it will also tell you when you've used the wrong adjective and what a more natural English adjective would be, to make your writing more palatable to the reader.
DANIEL HOOK: But not only that: because it's trained on an academic corpus, it understands what should be written academically. So, in fact, it can bring the quality of your writing down to the level of other academics, or it can raise it, depending on how well you understand the research area in which you're working. There are lots of examples of this. In theoretical physics, there's a thing called measurement theory--a really bad term to have to work with if you're using Grammarly or any generic engine.
DANIEL HOOK: They'll say: what are you talking about, measurement theory? These are words that make no sense together. But because Writefull has scanned that literature, it understands that measurement theory is a thing that exists, and it can use that to improve your writing. Ripeta is something that's quite interesting right now, because we have a lot of discussions about peer review.
DANIEL HOOK: When I was doing my research, I was in a wonderful position where if I wanted to look at the reproducibility of a paper because I was a theoretician, I would simply reproduce the paper. I would sit down and redo the calculation. And that's how we review papers in theoretical physics, if we're being good. We actually sit down and we go through each line of work and say, does this make sense?
DANIEL HOOK: Can we follow the logical argument? Now, obviously, in a lot of disciplines, that's just completely impractical and you can no longer do that. And so, Ripeta is an AI-based technology that is trying to pick up on the hallmarks of reproducibility. If you mentioned a data set, have you made that data set available? If you mentioned an analysis approach, have you been specific about the code that you used?
DANIEL HOOK: Have you made the code openly available? Did you talk about the right version of the code? So, this in broad sense, is a technology which could automate certain parts of the peer review process. And I'm not going to say anything about postdocs at this point. Clearly postdocs aren't the only peer reviewers. But I'll let you leave you to complete that for yourselves.
DANIEL HOOK: One of the things that we've been doing, again, at big scale effort at Digital Science is search and classification. A lot of the AI work we do is in trying to automatically classify papers into specific research areas. And this is a devil of a task. This is really complicated stuff to do well and to do consistently. But it's something where we've had some success, and we think we have fairly good algorithms now.
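[A toy sketch of one common approach to this classification task: represent each research area by a word-count profile and assign a paper to the nearest profile by cosine similarity. The areas, vocabularies, and abstract below are invented for illustration; production classifiers train on millions of labelled papers.]

```python
from collections import Counter
import math

# Tiny labelled vocabulary per research area (invented for illustration).
training = {
    "physics": "quantum field lattice boson energy spectrum",
    "biology": "gene protein cell expression enzyme pathway",
}

def cosine(a, b):
    """Cosine similarity between two word-count vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a)
    return dot / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())))

profiles = {area: Counter(text.split()) for area, text in training.items()}

def classify(abstract):
    """Assign an abstract to the research area whose profile it most resembles."""
    words = Counter(abstract.lower().split())
    return max(profiles, key=lambda area: cosine(words, profiles[area]))

print(classify("Lattice simulations of the boson energy spectrum"))
```

The hard part the talk alludes to is not this mechanism but doing it consistently at scale, across overlapping fields and shifting vocabulary.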
DANIEL HOOK: But, again, in this world, you're very focused on delivering a product to market, and you're less focused, quite often, on the issues of bias in AI. And so, one of the things that we always need to think about when we're producing systems that classify content or that recommend content is, what's the real bias going on there? Similarly with Ripeta-- when Ripeta's putting its algorithms together, what is it actually biasing? We don't necessarily know what's going on in that black box.
DANIEL HOOK: Is it biased towards papers that are written in a certain style? Is it biased more towards male style of writing to female style of writing? Is it biased towards people who formulate their topic sentences in a particular way? If you put your topic sentences at the end of your paragraph, as some people do, will it give you a lower score or bias the result in some way that suggests your work is less reproducible because it hasn't picked up on how you should have structured your data?
DANIEL HOOK: So these are all the kind of problems that we face in doing what we do. But there are-- you know, things continue to move on, and one of the things that I've been saying for some time is that I believe that we are only so far from books and papers being written themselves by computer. And I think that in future, we may well spend less time writing and editing books and papers, and in fact that there are systems that will write these for us.
DANIEL HOOK: And this is really the first case in point. Beta Writer was a collaboration with the University of Heidelberg and Springer Nature and they've created a book called Lithium-Ion Batteries. It's a fairly dry read, but it is completely computer-generated, and it was sent off to external reviewers and the external reviewers came back and said, it's a little bit of a dry read. But they said it was accurate and it passed peer review independently without them knowing that it was computer generated.
DANIEL HOOK: So this is a [INAUDIBLE] of work or review of work is already a technology that we possess. So that's actually quite scary if you think from an academic perspective. Part of my job, which I spent a lot of time doing, is thinking about how to present a set of results, how to make sure that it's communicated well, how people are going to consume that. And we're starting to see technologies emerge now which actually take that away from me.
DANIEL HOOK: So what is my job as a researcher? Where does the creativity part of my job lie? Is it in thinking up the experiment? Is it in doing the experiment? Is it in the concept behind the experiment and working out what we should test? Or is it, more generally, in bringing the right groups of people together so that we're doing the right experiments overall?
DANIEL HOOK: These are really challenging questions, I think, and certainly ones that go beyond the remit of this audience, overall. But they are touch points in which you will all have insights, and you will all have clients and people you work with who are going to start being worried by these questions over the next few years. Creativity, is for me, the one thing that I think unlikely that will be reproduced or superseded by an AI.
DANIEL HOOK: I think there's something about that fundamental leap of logic that one has to make in order to get to a scientific discovery or research discovery of any nature. Where, actually, the computer can't do that bit. And that was very definitely my thinking. I've spoken to a number of people this year who are looking at trying to do large-scale scans of the research space to try to understand where there are missing pieces, where we could potentially work on particular pieces of the puzzle, where it's kind of, there looks like there's a gap-- shouldn't we do something here?
DANIEL HOOK: So very much going back to the original paper in "Nature" on the strength of weak ties. So there are people doing that kind of work, but I still think that's merely a scanning mechanism that helps us identify things that we should be thinking about. And it still doesn't provide that creative leap to the people like Einstein made when they came up with general relativity. That's not something I believe that could have been thought of by a computer.
DANIEL HOOK: And then I encountered the company Iprova, which was interesting to me because I was thinking, what is the limit of this kind of creative jump. And Iprova's is a great one because they are a company based in Switzerland who help you load in lots of information about a particular space that you're interested in and they add in scientific papers and things that are in the regulatory media.
DANIEL HOOK: And they then try and come up with spaces that you should look at to try and innovate. And they are quite successful. They have these wonderful ideas. Like, for example, they happen to know that your mobile phone has an induction coil in it, and that increasingly people are starting to charge their mobile phones by placing them on an induction coil.
DANIEL HOOK: Now, it turns out that your mobile phone is getting thinner and thinner and consequently harder and harder to pick up. One of the things that this phone also has in it is a motion sensor, so it knows when I'm close to it. So they've invented a technology completely from a set of suggestions that came from a computer and then were taken forward by researchers whereby they can actually reverse the polarity in the induction coil as your hand gets close to the phone and the phone jumps that you can hold it in your hand.
DANIEL HOOK: And I think that's just genius. [LAUGHTER] So, I'm really challenged about actually whether there isn't some level of creativity that a computer can come up with on its own at the moment it's still partnering with humans. But to come up with suggestions like that, actually, that's not an obvious thought, right? And that's actually quite an interesting move forward.
DANIEL HOOK: So, at one level, I could be saying to you we're going into an age where intelligent augmentation is going to help us do significantly more. At another level I could be saying to you no one's ever going to be a postdoc again, consequently we're not going to get them to be research scientists because nobody ever makes postdoc.
DANIEL HOOK: But I think the reality is probably a little bit more nuanced. And so, my parting thought to you is, really that if you think about the different areas in which we all work across this space, and you look at the algorithm, the Wave 1, the augmentation, Wave 2, and automation, Wave 3, there are areas where I can already kind of put in tools that I can see meeting that need or going in that direction.
DANIEL HOOK: There are further areas which I've labeled in yellow, where I can see a tool and I can see that potentially that's going to change the job market because of the nature of the tool. There are areas that I've placed in red where I don't see a tool, but I think actually it would be really dangerous for us to go there. And maybe we collectively, as an industry, make a choice that's not somewhere we're willing to go because we think it's dangerous.
DANIEL HOOK: We think, actually, this takes us to a point where we have ethical issues or where, you know, it's unacceptable that we should be going into that space for a variety of different reasons. And then I've put white spaces in, which are essentially places where I don't think we can play. I think there's a technological limit. So this is a very high level mnemonic for some of the things that I'm kind of musing on recently.
DANIEL HOOK: What I do think is that we will see further effects, though. I think we will see a lot of things where jobs are changing. So, if you talked to people 100 years ago and you talked about a computer programmer, they would not know what that was. It would not make any sense to them. And in 50 years time, given that things are moving faster, there will be job titles which we have no conception of today, where people will have moved into those different spaces.
DANIEL HOOK: Right now, "blogger" is a job title which, you know, probably didn't exist 20 years ago, and I think which probably won't exist in another 20 years. I hope nobody here is a significant blogger. However, I do think that there are fashions in how job titles will move and how our job market will move, and I think everybody in this room is responsible for, has accountability for, and has an ability to change that landscape in a positive way for our space going forwards.
DANIEL HOOK: So, I'd invite you to think what the really game-changing tools are that are coming out right now and that you can imagine coming out in the future, which are intelligent augmentation devices for us, rather than either replacements for us or things that we should be worried about. And so that's where I'll finish. Thank you. [APPLAUSE]
HOST: Thank you very much, Daniel. That was an excellent talk. We have time for one question and-- [LAUGHTER] --sorry. No, that's quite OK. And we do have a break later, and I'm sure we can ask you questions then. But Mr. Harington, my colleague Sarah has the microphone.
ROBERT HARINGTON: Yes, hi. Is this working? I'm Robert Harington, and I work at a math organization, the American Mathematical Society, and I thought it was fascinating what you were saying about where we're going on the job [INAUDIBLE] management of useful postdocs. But mathematics is very different to, say, synthetic biology.
DANIEL HOOK: Yes.
ROBERT HARINGTON: And so maths, for example, has theoretical physics as--
DANIEL HOOK: We're very similar.
ROBERT HARINGTON: [INAUDIBLE] sport and have some uses products, published, perhaps, in a different way. Do you see-- what's your view, maybe, coming from your point of view as a theoretical physicist, on how different disciplines may culturally evolve-- in terms of, you know, who is going to go in and do that work, and will students become more likely to go into the humanities and math and [INAUDIBLE] disciplines?
ROBERT HARINGTON: What's your view on the evolution of disciplines?
DANIEL HOOK: Oh, goodness. This is a tricky one because there are all sorts of, kind of, forces at work here. So, I would say right now, if we look at the economics of the situation, there is less and less funding for arts, humanities, and social sciences. And I think that's a big concern for everybody, or at least it should be, because I think to have a healthy research environment we need those topics, and they should be significantly funded, not tapered off.
DANIEL HOOK: So, you know, even as a kind of a hard scientist with a background, perhaps in a slightly more comfortable, safe area I can definitely see significant value in having challenge and having research thinking from the arts, humanities, and social science being brought in closer to my own area. And I think there is a significant value in that. So, what I think is the problem that we are facing at the moment is interdisciplinarity.
DANIEL HOOK: Everybody pays lip service to interdisciplinarity and talks about how we want more interdisciplinary research or multidisciplinary research. But then when you get to the evaluation stage, no one evaluates it properly. Nobody knows where to send the paper. I recently wrote a paper, which appeared in PLOS ONE-- thank you, Anne-- on perception prestige and page rank. So it was looking at ranking algorithms in the context of the genealogy of academic networks and how esteem flows on academic networks.
DANIEL HOOK: And that's quite an interdisciplinary thing there's some quite heavy physics in it. There's some reasonable amount of maths. There's also-- I was collaborating with a colleague who's a social anthropologist at Oxford. And so, getting that paper reviewed took a year because actually finding the right people to review it was a complete nightmare. And I think we increasingly have the tools to go through these different areas and actually connect things which are only peripherally connected.
DANIEL HOOK: But then we lack the understanding how to use those tools or how to take on those areas as human beings. So, I think there are really big challenges in that space. So I guess to answer your question more directly, if the funding holds up, I can see a massive diversification of what people specialize in and the types of approach that we take. In some sense, I think that every researcher, regardless of their discipline, if they go to do a PhD, they should have a data science component.
DANIEL HOOK: Arts, humanities, social sciences, everyone should do a basic data science component. And if you take that to it's, kind of, infinite limit, then you're thinking more about-- some of you may know of the Melbourne Degree-- it's quite famous in Australia-- where they take you in to do a degree and you don't specialize until much later in your curriculum. You have a general understanding as an undergraduate and then you specialize in graduate studies.
DANIEL HOOK: And I wonder, actually, if we aren't moving to a world in which graduate studies start to cover the same model where, in fact, you do a PhD, but you do quite a general PhD in order to equip you to understand how to do research. And then you specialize beyond the PhD into a more specific area. So I can see some kind of stratification like that coming up where, in fact, maybe you need to do PhDs or there's a thing above a PhD which takes you more directly into a field.
DANIEL HOOK: So I think that may be-- just thinking generally about the area-- that may be an approach which we see emerge over the next few years. It will replace all those postdocs. [LAUGHTER] Well, thank you very much. [APPLAUSE]