Name:
Platform Strategies 2023: AI In Scholarly Publishing Keynote
Description:
Platform Strategies 2023: AI In Scholarly Publishing Keynote
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/bcb58818-4634-4653-941e-44db09c1a8ea/thumbnails/bcb58818-4634-4653-941e-44db09c1a8ea.png
Duration:
T00H40M29S
Embed URL:
https://stream.cadmore.media/player/bcb58818-4634-4653-941e-44db09c1a8ea
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/bcb58818-4634-4653-941e-44db09c1a8ea/Silverchair_Platform_2023_Part_1-AI Keynote.mov?sv=2019-02-02&sr=c&sig=TunyD3lNabo0uQNj6H8F2eRNrsZSt3bNPOG6TsdcjIQ%3D&st=2024-11-21T09%3A42%3A56Z&se=2024-11-21T11%3A47%3A56Z&sp=r
Upload Date:
2023-10-09T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
SPEAKER: So smart people disagree. Some people see this as the latest fad, with a peak of inflated expectations in the hype cycle. Others see something far more profound coming. Likewise, there are a range of views on how good or bad this will be for humanity. Here is an illustration of the spectrum of some key opinion leaders on these two axes.
SPEAKER: And these are smart people with sophisticated opinions. And it's a really involved exercise to empirically quantify their perspectives with exactitude. But given that I didn't think it was critical for this presentation to get it exactly right, I asked ChatGPT to force-rank their perspectives on these two axes. So take it for what it's worth. But the key point here is that smart people disagree, and all perspectives here are heavily opinionated.
SPEAKER: Now, my view is that the technology really is tipping into hyper acceleration, and I'll make the case for that today. I see a lot of the utopian promise here. And I'm also pretty worried about our ability to regulate this, so I'll walk through a bit of that with you today. My view is that we're living through an unprecedented period, where artificial intelligence has tipped from highly specialized models into very general ones.
SPEAKER: The very complex architectures have actually been settling out into extremely simple ones. And the domains in which AI can now operate have gone from very specialized, narrow, almost pinpoint problem sets into very broad domains. And AI applications are no longer limited to discrimination: recognition, classification, pattern recognition. They now create.
SPEAKER: So Andrej Karpathy makes a great distinction between Software 1.0 and 2.0. He was the head of AI at Tesla for a while, and one of the OpenAI founders. And he talks about the classic stack of Software 1.0. Software 1.0 is written in languages like JavaScript, C#, Java, languages that humans understand.
SPEAKER: And with that language, what we're doing is we're controlling the execution point within the code. And we're walking this through in loops, in constructs that we understand, and we're able to debug it. We're able to very deeply understand it. Software 2.0, on the other hand, is written in a language that we don't actually really understand: it's the weights in a neural network.
SPEAKER: It's just basically this vast matrix of numbers. And those numbers are really vast. We're now talking about hundreds of billions, up to a trillion weights in these networks. And rather than being specific about how the program should operate, what we do is we basically tell the program what characteristics we're seeking to have the system emerge with. And then we throw enormous amounts of computational power at this and feed in data.
SPEAKER: And we teach these programs how to measure the loss between what we would like and what they're actually producing. And it's just this rapid kind of iteration cycle, where it's just basically learning how to exhibit the behaviors that we're rewarding it for. We're not able to debug these. We don't actually really understand how it's working.
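[Editor's note: here is a minimal sketch, in Python with NumPy, of the Software 2.0 loop the speaker is describing: instead of writing the program's logic, we define a loss that measures the gap between desired and actual behavior and let gradient descent adjust the weights. The toy model, data, and learning rate are illustrative assumptions, not anything from the talk.]

```python
import numpy as np

# Hypothetical toy "Software 2.0" program: a single linear layer.
# We never write the logic; we only define what "good" means (the loss)
# and let gradient descent adjust the weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                      # toy input data
true_w = rng.normal(size=(8, 1))
y = X @ true_w + 0.1 * rng.normal(size=(256, 1))   # the behavior we reward

w = np.zeros((8, 1))                               # the "program" is just these weights
lr = 0.05
for step in range(500):
    pred = X @ w
    loss = np.mean((pred - y) ** 2)                # gap between desired and actual output
    grad = 2 * X.T @ (pred - y) / len(X)           # how to nudge the weights to shrink the gap
    w -= lr * grad                                 # iterate; the rewarded behavior emerges
print(f"final loss: {loss:.4f}")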
SPEAKER: And nature is filled with emergent properties. And this is where a system's collective components, interacting with one another, manifest novel behaviors or characteristics not evident in the individual components alone. In regular software development, features are explicitly planned. In LLMs, the features that arise are much more like nature.
SPEAKER: They're emergent. So a good example is writing code. We all now know that these large language models have not just learned how to write English, they've learned how to write software code. But what's interesting is that in the early days, they didn't train them on writing software at all. It just so happened that less than 1% of the internet data that was sucked up and trained on contained code.
SPEAKER: And it was simply an observation they made that these things had actually learned to code. So it's really amazing what can be learned just from getting a system to self-learn by minimizing loss. And this is really critical to understand: AI is much more like nature than traditional programming, at least in terms of the latest generative versions of it. And this is a Google Research illustration.
SPEAKER: And with this part, I'm really trying to hit on the emergent properties, because you can see where this is going. As Google -- and this is true for all of the major players -- increased the parameter count of their models, the number of weights in the system, and the corresponding compute and the size of the data they fed in, new properties emerged.
SPEAKER: So, like, pattern recognition -- things that simply weren't present at the previous order of magnitude. And with each new order of magnitude, capabilities emerge, some of them unexpected. It's hard for them to know exactly what's going to be there. In the early days, when they were experimenting with the current technology, training on raw text just to predict the next word,
SPEAKER: they actually weren't expecting these models to produce coherent, novel, and even creative sentences, far less expecting that they would be able to compose music, generate art -- and a lot of the art in this presentation is AI-generated -- or solve scientific problems, as some now can. The scaling laws are consistently observed, but they remain mysterious.
SPEAKER: As models scale, they acquire new skills unevenly, but overall very predictably. And this is an explicit view of these capabilities that have emerged with each order of magnitude. And I won't go into it, but the point is, as these models scale up, they get significantly smarter. And the leaders in the space have confidence, but not certainty, that the capabilities will continue to emerge, that the scaling laws will hold.
SPEAKER: None of them claims to know with certainty what will emerge next, although there are certainly a lot of predictions around long-horizon planning and persuasion emerging. But what's really crazy is, we're about to find out. Two of the four largest players have stated goals to train massive models with 100 times the compute of the current frontier foundation models.
SPEAKER: And the other two are almost certainly doing the same; they're just not talking about it. We're rapidly seeing AI teams build the world's largest supercomputers. Now, in terms of the current state, we've all found plenty of areas where AI just falls flat on its face, where it can't compare to an expert human in their field. It's also getting rapidly better than the average human on a large number of standardized tests.
SPEAKER: And it's good to know about the weaknesses, but don't get blinded by them. Expect weaknesses to continue to drop out of these systems as they scale up. Now, this is essentially an OpenAI-presented slide on various standardized tests, so it's vendor-supplied. But the folks at Mensa, the high-IQ society, have also really been probing GPT-4, feeding it a bunch of tests, including a whole bunch that they're pretty confident weren't in the training data.
SPEAKER: And they're generally also concluding that GPT-4 outperforms average humans in most measurable activities that lend themselves to a chat interface. Now, where really smart folks tend to get hung up is that they find that this isn't as smart as they are, and they get anchored on that. GPT writes far better than the average person, but it can't hold a candle to our best authors -- yet.
SPEAKER: As an aside, the unevenness of the capabilities, and the way they are emerging, appears to be blowing up our existing theories of the nature of intelligence. Now, another thing people will talk about, in terms of these scaling laws, is the limitation of quality training data: the speculation that these things have largely already sucked up the internet, and that outside of getting into black pools of information or corporate information,
SPEAKER: there are limits in that regard. And while the key players have stopped talking about their training data, for legal reasons and also competitive reasons, they will allude to simulations as a likely pathway to rapidly moving beyond the limitations of available human data. And I'll give you a glimpse into how this can work with gaming engines.
SPEAKER: So traditional chess engines, developed back in the '70s, '80s, '90s -- how they worked was that they had handcrafted algorithms. They had human-generated heuristics that basically encoded all of the human understanding of what a good move is. And that was coupled with raw computational power to brute-force explore moves ahead, and ultimately to look more moves ahead than a human could strategize around.
SPEAKER: And that's how Deep Blue beat Kasparov in 1997. This worked well for chess, but it fell apart with the ancient game of Go, which is far more complex and has more permutations than there are atoms in the universe. So you literally can't use a raw brute-force computational approach, and this became one of the gold standards for machine learning: could they really get a system using neural nets to learn this?
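[Editor's note: to make the "handcrafted heuristic plus brute-force lookahead" recipe concrete, here is a hedged toy sketch in Python. The game, a race to 21 by adding 1, 2, or 3, is a deliberately tiny stand-in for chess; the heuristic encodes the human knowledge that leaving your opponent on certain totals is strong.]

```python
# Toy stand-in for a Deep Blue-era engine: a hand-written heuristic plus
# depth-limited minimax (negamax) search. The game is "race to 21": players
# alternately add 1, 2, or 3, and whoever reaches 21 wins.
WIN_TOTAL = 21

def legal_moves(total):
    return [m for m in (1, 2, 3) if total + m <= WIN_TOTAL]

def heuristic(total):
    # Hand-crafted human knowledge: the player to move is losing if the
    # distance to 21 is a multiple of 4 (the opponent can mirror to keep it so).
    return -1.0 if (WIN_TOTAL - total) % 4 == 0 else 1.0

def negamax(total, depth):
    """Value of the position for the player about to move."""
    if total == WIN_TOTAL:
        return -1.0              # the previous player just won
    if depth == 0:
        return heuristic(total)  # fall back on the heuristic at the search horizon
    return max(-negamax(total + m, depth - 1) for m in legal_moves(total))

def best_move(total, depth=6):
    # Brute-force lookahead: pick the move that leaves the opponent worst off.
    return max(legal_moves(total), key=lambda m: -negamax(total + m, depth - 1))

print(best_move(12))  # -> 1, leaving the opponent on 13, a losing total
```

[Deep Blue applied the same recipe at vastly greater scale: a chess-specific evaluation function built from human expertise, plus specialized hardware searching on the order of hundreds of millions of positions per second.]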
SPEAKER: And AlphaGo was able to use a neural net to basically suck up the sum total of the games that had been recorded by human master players. And it was eventually able to learn and beat all human players. But the logical conclusion was to see if these neural nets could self-learn any two-player board game without any prior human knowledge.
SPEAKER: And AlphaZero was that system: it was fed just the rules of the game and nothing else, and then it played against itself to learn. And as you can see here, starting from nothing, where it was just playing randomly, it surpassed human-level performance in 72 hours and became the strongest Go player in the world -- including all other AI systems -- in 40 days.
SPEAKER: And then they turned it to chess. And within just four hours of self-learning, of playing against itself, AlphaZero mastered chess and outperformed the reigning champion, Stockfish 9. So this is an example of something in a simulation being able, in a very, very short period of time, like four hours, to outperform everything up to that point.
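[Editor's note: to contrast with the classic recipe above, here is a hedged sketch of the self-play idea on the same toy game: the agent gets only the rules, plays against itself, and learns position values from outcomes alone. A simple value table stands in for AlphaZero's neural network, so this illustrates the idea rather than the actual algorithm.]

```python
# Self-play sketch on the same race-to-21 game: no human heuristic, just the
# rules plus learning from game outcomes.
import random

WIN_TOTAL = 21
values = {}  # position -> learned value for the player about to move

def legal_moves(total):
    return [m for m in (1, 2, 3) if total + m <= WIN_TOTAL]

def choose_move(total, explore=0.2):
    moves = legal_moves(total)
    if random.random() < explore:
        return random.choice(moves)            # occasional exploration
    # Otherwise leave the opponent in the worst learned position.
    return min(moves, key=lambda m: values.get(total + m, 0.0))

def self_play_game(lr=0.1):
    history, total = [], 0
    while total < WIN_TOTAL:
        history.append(total)
        total += choose_move(total)
    value = 1.0                                 # the player who moved last won
    for pos in reversed(history):
        old = values.get(pos, 0.0)
        values[pos] = old + lr * (value - old)  # nudge toward the observed outcome
        value = -value                          # alternate perspective each ply back

for _ in range(20000):
    self_play_game()
# A losing total like 13 should have learned a clearly negative value,
# and a winning total like 12 a clearly positive one.
print(round(values.get(13, 0.0), 2), round(values.get(12, 0.0), 2))
```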
SPEAKER: So, being self-taught, these Alpha programs are not constrained by the conventional wisdom of human players. I think we have a force upon us which is as significant as fire, electricity, and the internet. The internet reduced the cost of transmitting information; the marginal cost is now effectively zero.
SPEAKER: Generative AI is reducing the cost of cognition. There's still a lot of friction to use these systems. That friction will be rapidly reduced, and we will then see a proliferation of capabilities. Now, what's interesting is this is an AI-generated slide. I was kind of talking to it and trying to get it to-- electricity, it kind of came up with the light bulb idea, a bit of fire in there, a hearth.
SPEAKER: It gave me some variants. And I thought this was pretty interesting: a heart kind of emerged within this, and a double heart. And it's really interesting because it actually captures what we need most. AI will be ferociously intelligent. What we really need is for it to have a heart.
SPEAKER: Now, I consider it extremely likely that we're tipping into hyper acceleration with generative AI. And this is kind of known as Kurzweil's law of accelerating returns. And these are exponential factors that are compounding on top of each other. And the first of these is algorithm efficiency. What we've seen over the years is basically a steady exponential rise in the efficiency of the algorithms used to train and to generate the inference in these models.
SPEAKER: It's generally thought that the current efficiency is just the smallest of small fractions of what's possible. It's likely that these models will continue to get more and more efficient. The second is hardware. This is Jensen Huang from NVIDIA in a recent presentation, and they're representing that in the last five years they have seen a 1,000X improvement in AI computational power.
SPEAKER: So that's significantly beyond Moore's law. And they're confidently predicting that they will be able to do the same in the next five years. And given that chip fabrication has a pretty long lead time, they should have pretty good visibility into that. So that's a million-X improvement in 10 years. That's something we very rarely see. And the last factor is these models being used within larger systems.
SPEAKER: It's models being reflected back upon themselves. Like, if you've used ChatGPT, we all know that it hallucinates. Well, if you ask it to reflect on itself -- you ask it to grade itself: is there anything you said that is untrue? -- it will catch a lot of that stuff. And if you get two instances of ChatGPT going and you feed them off each other, you can dramatically improve the results.
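[Editor's note: a minimal sketch of that reflection pattern, assuming a generic chat-completion client. `complete` is a hypothetical placeholder, not a real library call, and the prompts are illustrative only.]

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for whatever LLM provider you use.
    raise NotImplementedError("wire this to your chat-completion client of choice")

def answer_with_reflection(question: str) -> str:
    # Pass 1: draft an answer.
    draft = complete(f"Answer the following question:\n{question}")
    # Pass 2: ask the model to grade itself and flag anything unsupported.
    critique = complete(
        "Grade the answer below. List any claims that are untrue, "
        "unsupported, or likely hallucinated.\n\n"
        f"Question: {question}\n\nAnswer: {draft}"
    )
    # Pass 3: revise in light of the self-critique.
    revised = complete(
        "Rewrite the answer, correcting every problem the critique raises "
        "and removing anything you cannot support.\n\n"
        f"Question: {question}\n\nAnswer: {draft}\n\nCritique: {critique}"
    )
    return revised
```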
SPEAKER: So we're likely to see intelligence really amplified, and to see the impact really grow, as we're able to chain these models together: to be able to remember, to dynamically execute generated code, and to be able to act directly on the outside world. It's similar to how no human can just stream-of-consciousness a doctoral dissertation.
SPEAKER: It's an iterative process that involves reflection, a branching of thoughts, the exploration of paths, the step back, the evaluation, the feedback from others, validation against sources, comparison to historical trends, comparison to other branches of knowledge. And there's a huge amount of activity going into building these agents. There are open source projects that are just-- people are just piling into these things because people see what is possible with this.
SPEAKER: These agents are not yet stable, but there's huge excitement about the promise. And there's a logical extension to this, too, which is that we struggle to understand how these models are working -- we understand them about as well as we understand this. It's still a somewhat dismal science. But one of the things they've started to do is to turn LLMs onto examining LLMs.
SPEAKER: And that seems to be where we're actually learning the most about how these systems are working. And what's interesting is they're now starting to make suggestions on how to directly improve their own systems. So this is where we're likely to get into this feedback loop. So I see these three factors, amongst others, as very strong evidence to support Kurzweil's law of accelerating returns.
SPEAKER: And what would an AI talk be without a reference to the singularity? So AI researchers generally agree that AGI will come-- or that AI will come to surpass human intelligence in most areas. OpenAI and DeepMind were both explicitly formed with the specific mission to create superintelligent AGI. The singularity refers to these systems recursively improving themselves so rapidly that intelligence will explode beyond the ability of the human mind to comprehend, let alone predict.
SPEAKER: If so, we're likely to be visited by aliens -- aliens of our own creation, birthed in silicon. But let's get back to more practical matters. [LAUGHTER] The McKinsey Global Institute-- [LAUGHTER CONTINUES] The McKinsey Global Institute put this out in July.
SPEAKER: And what they're doing here is they assembled a bunch of experts to basically try to bracket, in terms of white-collar work, what percentage of it, in a given time frame, they thought could be automated by AI. They first did this exercise in 2017, and they came up with early and late scenarios. And you'll notice there's a pretty big gap between the two -- it's about a 20-year difference.
SPEAKER: They're all generally saying, yeah, this is pretty much inevitable -- the march of progress here is almost certain. But what's interesting is that they went and redid this exercise in July of this year. And what's really interesting is that the range has dramatically narrowed, and the late scenario is now ahead of the prior early scenario. So it shows a rapidly converging consensus, with timelines being pulled forward.
SPEAKER: Disruption is coming much faster and with much greater certainty. They also look at which business functions are up first to get disrupted. And you can't help but notice that Silverchair's core function of software engineering is right up there in the top right, because it turns out that LLMs are not only now highly proficient in English, they're very quickly learning how to write software.
SPEAKER: And it gets even more intense when you intersect business function with industry. And the standout here is the function of software development in the high-tech industry. And when McKinsey forecasts that what you do will be the most impacted by the explosion of generative AI, you need to pay attention. It has my undivided attention. And we are turning towards this with curiosity and excitement and a little bit of fear.
SPEAKER: So I mentioned earlier that the existing friction to use these models will rapidly decline. This is an article from last week. The title was originally a little bit different -- it said something along the lines of "before they're ready." But they seemed to correct that in a later publication.
SPEAKER: But the next wave is upon us. These tools promise to be very powerful and crude. They're going to have very rough edges. They are being rushed to market; nobody's disputing that. The underlying AI behind this immediate wave that we're going to experience in the next two months is the same: it's basically GPT-4-based, or Gemini-based, or Anthropic Claude 2-based.
SPEAKER: This wave is about removing friction from the usage of LLMs as they're integrated into existing tools and workflows, because GPT-4 is available now. But to really-- I mean, it's great to ask it Wikipedia-style questions. It's easy to just ask it questions and get that content out. But that's a pretty naive use.
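[Editor's note: as a concrete contrast with that naive use, and a preview of the context-shaping the speaker turns to next, here is an illustrative pair of prompts. The scenario, constraints, and wording are invented for illustration, not a prescribed template.]

```python
# Naive, Wikipedia-style use: a bare question with no context.
naive_prompt = "What are best practices for journal peer review?"

# Context-shaped use: situation, constraints, and the kind of reasoning you
# want back, including a request for self-critique.
shaped_prompt = """You are advising a mid-sized society publisher.
Context:
- We publish 12 journals on a hosted platform and are piloting AI-assisted screening.
- Reviewer recruitment time has doubled over two years.
- Our board is skeptical of fully automated review.

Task: Propose three changes to our peer-review workflow. For each, state the
assumption it rests on, the risk if that assumption is wrong, and a metric we
could track. Then critique your own proposal from the board's perspective.
"""
```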
SPEAKER: Where the real power comes in is where you give it the context about the problem that you're dealing with, and you really shape it, and you unlock it, and you bring those capabilities out. That takes a fair bit of work today. But this next wave is basically about creating all the hooks into our existing workflows. And there'll be widespread frustration with these tools, and you'll have companies banning their use.
SPEAKER: But they will also be heavily used. And their utility will significantly outweigh the frustration with them, but we're all going to have to learn really fast how to work around some really glaring issues. These things just make stuff up. We're using Copilot to write code, so writing method-level instructions is rapidly becoming a thing of the past.
SPEAKER: We're using Zoom AI Companion by default for internal meetings. So it's basically capturing a real-time transcription that it's feeding, in real time, to an LLM. So you can be there in the meeting, and somebody's making this really sophisticated point, and you can ask the LLM to compare that to what industry leaders would say about it.
SPEAKER: What's the other side of this argument? And you can do all the usual stuff -- there's a little button there that says, catch me up, because you just spaced out for the last five minutes, and it'll give you the last little bit. [LAUGHTER] Or, how many times was I mentioned in this meeting, or what are my action items?
SPEAKER: Or, just summarize this meeting. Microsoft with their Copilot suite and Google with Duet are rolling out their products, hooking the most powerful LLMs directly into their productivity software. And even Amazon recently overhauled Alexa to have an LLM behind it. So those conversations are going to get a lot more interesting real quick. And it's a real arms race.
SPEAKER: All these announcements were made within a week of each other. November 1 is the rollout for Microsoft 365 Copilot, and it's going to create all kinds of boosts within almost all their applications. This is within Word just being able to extract some data, rewrite things, paraphrase. You can basically start with bullet points and essentially say, OK, expand this into prose.
SPEAKER: There'll be fascinating boosts, like the ability, within PowerPoint, to essentially say, hey, create me a presentation based on this Word document. And, poof -- OK, great starter document. You probably don't want to just use it without looking at it, but-- [LAUGHTER] --it's going to get you a long way there. Likewise with crafting emails. And this is going to be pretty weird, because you're going to start getting all these relatively generic, very politically correct emails, where you basically start off saying, OK, here's the intent.
SPEAKER: This is what I want you to communicate; just craft a communication around this. Ultimately, we'll see more sophisticated usages as people learn to craft and shape the language so that it's actually in your voice and in your style. And likewise, being able to ask Excel to do analysis for you. Now, obviously, you need to check everything, as it's going to make some stuff up -- probably about 20% of the stuff
SPEAKER: it says is just dead wrong. But it's really, really useful in terms of very rapidly giving you things to react to. Now, there's so much happening in this space that even somebody like me, who just loves this stuff -- I just drink it up -- even I'm a little overwhelmed by it. And many people are going to have a tendency to put their head in the sand. That's likely to be a dysfunctional strategy.
SPEAKER: And as an aside, when I was looking for a photo for this, I discovered that ostriches don't actually stick their heads in the sand. But I didn't have a problem generating an image of that, so. [LAUGHTER] So this is really a message for all of you here, all of you personally.
SPEAKER: These models are very sophisticated, and they can do great party tricks. And many of you will have had some sort of naive usage of them. The real power of these is actually learning to reach up. And it's a frustrating and difficult process, and it's so foreign. It's like relearning: when you were first a manager and you were put in charge of some people, you thought, oh, great, now I can just get them to do all this stuff.
SPEAKER: And you discovered that just telling them what to do didn't actually really lead to the best results. And there's this kind of really messy, awkward process of learning how to lead people. This is going to be similar. It's going to be different as well. The skills don't necessarily fully translate. But being able to actually reach up, learning how to actually get these things to take on different perspectives, learning how to use these models to amplify your own intelligence, getting them to actually reflect and point out your blind spots, critique your arguments.
SPEAKER: You can invoke it as some of the preeminent researchers in the world and get them to expand upon your thoughts. These are all non-linear things. These are things that actually take some skill. And this skill of being able to reach up into these models is difficult, and we're going to see people that are extremely adept at it. And it's likely to be a real power-law distribution, where some people are really able to do this, and other people are just going to go, urg.
SPEAKER: And those people aren't going to do so well in this transition. It's different from anything we've experienced before. And you really need to expand your mind with these and get creative and put the time into the learning. But as leaders of your companies, you need to be thinking about AI applications in terms of technology S-curves. Who's familiar with the technology S-curve? OK, not many people.
SPEAKER: All right, well, the basic idea is that if you look at basically cost or value produced, down here you're having to invest a lot, and you're not really capturing the results. And then at some point, the technologies mature, and there's this kind of-- they go vertical. And you're able to, at much lower cost, implement these things and get a lot of value out of them. And ultimately, they mature and the benefits taper off.
SPEAKER: The landscape around AI is very, very dynamic, and there's constant leapfrogging happening. And the costs are rapidly dropping out of these systems. So you invest too early into a particular application, and you're likely to waste a lot of money. You get in too late, and you've likely given your competitors a huge advantage. Regardless, you should be seeking to have a hypothesis for the S-curve for applications to your business.
SPEAKER: And LLM applications are fractal the whole way through. So, ultimately, I want us, as leaders, to get out there and start riding the wave. It's never fun to be caught in the wash behind the wave. It's actually pretty dangerous. That's where you drown. But there's also some tough stuff coming. It's not just about intellectual preparation. White-collar workers are likely to be more heavily disrupted than anyone else in this wave of technology.
SPEAKER: It's obvious that some jobs will completely go away. For most of the rest of us, large swaths of what we do today will change. And change is hard. It's coming faster than ever before. Most people have significant parts of their core identities built up around things that they are competent in. AI is going to rapidly eat into that competence.
SPEAKER: Many people are going to feel left behind. Not everybody is going to adapt equally to the AI interface. You need to get prepared to emotionally lead your organizations and help them skill up through a period of great disruption. And basic principles of human psychology apply now more than ever before. And organizations take their cue from the top. Be very explicit about doing the things that support your mental health, and find your center, and help others do the same.
SPEAKER: And you'll need to do that, in part, because it's quite likely that science itself is going to get transformed. This is a recent article by Eric Schmidt, predicting that AI will reshape every stage of the research process, including literature review, hypothesis development, et cetera. So I'm being told I ran a little bit over time here, so I'm going to jump through a few slides here.
SPEAKER: But I'm going to really get to the punchline here, which is that society's about to have massive contact with generative AI. We've seen the first broad-scale societal contact with AI in terms of content curation for maximizing screen time, where AI rapidly developed capabilities to hijack attention through addictive manipulation of the dopamine circuits in the brain, with a side effect of amplifying societal polarization through filter bubbles and echo chambers.
SPEAKER: Peter Thiel asserts that the current narrow AI has deranged, drained, and intensified contradictions in society, stoking tensions to fuel engagement. And while the focus of politicians on generative AI is currently somewhat bipartisan, expect this to sharply change in the coming years with the dawning realization that as most people come to rely on generative AI for education, for comprehension, for interacting, and just intelligence amplification, human belief systems will be subject to unprecedented manipulation and hijack.
SPEAKER: Belief systems determine voting. Expect both political parties to gear up for open conflict in the name of eliminating AI bias, while nakedly seeking to win the race to indoctrinate the populace with their own political ideology, the kind that galvanizes voters through polarization around hot-button issues. We may look back on gerrymandering as quaint.
SPEAKER: Effective regulation is going to be a very hard problem, and it's likely to become partisan very fast, unfortunately. And that's a shame, given the stakes. Virtually all the major leaders agree it's a potentially existential issue. And these models are rapidly exploding onto the world. How it plays out is far from certain. Ahead lies great danger, but also great promise.
SPEAKER: Fantastical new tools are on the horizon. And they'll be rough at first, but they'll likely improve very quickly. And humanity has developed an amazing tree of knowledge. And the majority of you in this room have critical roles in growing that tree of knowledge and disseminating it into the world. AI will almost certainly transform how that information is consumed.
SPEAKER: And LLMs have the potential to be an audience and impact multiplier for your respective missions. They have the potential to amplify the accumulation and the refinement of this tree of knowledge, and you get to play a pivotal role in that. One way of looking at this is that we're birthing extremely gifted entities. The challenge is that, as is often the case with gifted children, the talents are very unevenly distributed, and there are glaring flaws.
SPEAKER: The talents are also very strong. If the scaling laws hold, we're going to see them really turbocharged. But there's a difference between intelligence and wisdom. The vast majority of pundits agree that these systems are going to be extremely intelligent. What's of concern is whether these systems will be wise. And what we see on Reddit, Twitter, 4chan -- they're really the poster children for Daniel Kahneman's System 1 style of thinking, dominated by reactive hot takes and flared emotions.
SPEAKER: The premium will actually be on the more deliberative, disciplined, System 2 style of thinking, for which academia is society's primary institution, with publishers as the gatekeepers. And the frontier foundation models are widely assumed to have been trained on a vast amount of paywalled content. And there are ferocious legal efforts underway to get this content out of the training sets.
SPEAKER: But the provocative question I ask is: what happens when we have models growing ferociously in capability, but we decline to train them on the very best sources of human knowledge, and instead have them learn on the longer tail of less rigorously curated information, or information that is out of date? If we treat this as a gifted child that may take over the world, we owe it to humanity to give it the best education possible and to ground it in the best of human wisdom.
SPEAKER: It, for sure, knows about Nietzsche, Sun Tzu, Clausewitz, Machiavelli. We want it grounded in the most current wisdom that we have. So my provocative thought here is that rather than trying to get paywalled premium scholarly information out of these training sets, I argue that we should be fighting to get it in there on terms that are economically sustainable. And that's in part because our children are very shortly going to be in direct contact with these models.
SPEAKER: Now, I'm not necessarily right about anything that I've said. But I think there is far more at stake than just preserving our business models. This is the time to look deeply into the missions underlying our organizations and consider what role we may play in nudging the outcomes. This transition is simultaneously fraught with existential risk and holds the promise to solve many of humanity's greatest challenges.
SPEAKER: Food for thought. Thank you. [APPLAUSE]