Name:
NISO Plus 2024 - Closing Keynote Panel - 2034 AI Futures
Description:
NISO Plus 2024 - Closing Keynote Panel - 2034 AI Futures
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/68f6afb4-38b7-45d0-805f-825a63d930bd/videoscrubberimages/Scrubber_1.jpg
Duration:
T01H03M27S
Embed URL:
https://stream.cadmore.media/player/68f6afb4-38b7-45d0-805f-825a63d930bd
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/68f6afb4-38b7-45d0-805f-825a63d930bd/NISO Plus 2024 - Closing Keynote Panel - 2034 AI Futures.mp4?sv=2019-02-02&sr=c&sig=tbMfP5Bx54zonBkX4mqJLk%2BuzB87TrqueRg4iv37uQA%3D&st=2024-10-16T01%3A58%3A47Z&se=2024-10-16T04%3A03%3A47Z&sp=r
Upload Date:
2024-03-05T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
How's everyone? Good? Great, you're here. OK, almost the last session. This is the closing keynote. My name is Karim, dean of university libraries.

We have esteemed colleagues here as panelists, so please introduce yourselves.

Hello, everyone. I'm Christine, and I'm on the AI platform team, meaning that I'm focusing really on AI.

Hi, I'm Cynthia Hudson Vitale. I'm the director of science policy and scholarship for the Association of Research Libraries.

I'm the associate dean for digital infrastructure and director of the open source program office at Carnegie Mellon.

So we're going to play a bit of a game. And 2034 is not a typo for 2024; it's going to be really, really hard to imagine a scenario 10 years out. This panel was actually inspired by the work of Cynthia and the team from the Association of Research Libraries on imagining scenarios to help with scenario planning.

We'll hear early findings and also get a sense of where we're going. We heard from many of you, I went to the AI sessions here, and you said: we want to go beyond the really short term and try to reimagine, even reimagine how we work. So we'll try to do that. I'm going to ask each panelist to make a statement of five minutes or so and then ask a question of each other, and then we're going to open the floor.
OK. Thank you, Karim, and thank you for that wonderful context setting. I'm delighted to be here on behalf of the ARL/CNI Joint Task Force on AI Futures. The task force was charged with developing a set of plausible scenarios for an AI-influenced future in the research environment; as Karim said, within the next 10 years, what will our research environment look like with regard to AI?

Scenario planning is a very powerful tool in the strategic planning toolbox, used in times of, or in service of topics with, high degrees of uncertainty. In the case of AI, as we've heard over the last few days, there are significant amounts of instability and uncertainty: societal acceptance, policy and regulation, intellectual property, trust and the veracity of AI results, AI workloads and technical development; the list goes on and on. With so much uncertainty, it's very difficult to ascertain where to make strategic investments or shifts in resourcing. How do we plan for this if we don't know what's going to happen?

So through the scenario planning process, the task force, which again is jointly led with CNI, will develop four scenarios of plausible AI-influenced futures along a continuum. You can imagine one future that looks more like the Jetsons, if you're familiar with that cartoon cultural reference from the 1960s, with a robot AI personal assistant, or AI as a force for peace: very optimistic. If everything goes right, what will that future look like? And on the other side of the spectrum, an AI future that's perhaps more dystopian in its outlook, in which there's either too much regulation or not enough regulation, or low societal acceptance, or high societal acceptance with low technical development.

Just for context: in the end, our future will never be captured accurately or comprehensively by any one scenario. And we don't, of course, choose a scenario or apply probabilities to them. Instead, the belief is that the future will be made up of components of each of the scenarios that are developed. The goal is for organizations to leverage these scenarios to conduct SWOT analyses, to evaluate ways to mitigate risk, and to think through their local implications.
As organizations work through each scenario, a number of robust strategies will bubble to the surface that apply across all of them; those will be important for organizations to put in place to be proactive going forward. And then there will, of course, be scenario-specific strategies.

For some background on where we are and where we're headed: this task force just kicked off in November of 2023. As far as our organizations are concerned, that is light speed; this is as fast as we go. We had an amazing amount of interest from the community in working on this gnarly topic, which is really exciting. In December and January we held a number of focus groups and individual interviews with community members and thought leaders to surface the critical uncertainties, many of which have also been raised here; there are common threads. We also worked to understand the core question we're seeking to address in the research environment overall. Over 150 individuals participated in this work and shared their expertise, and there's much more to do, so I'd encourage everybody to take part; there will be more community engagement events going forward.

In January, we also conducted six interviews with thought leaders, AI researchers and others on the future of AI in 10 years. Again: what is ideal, what is dystopian, and what is the impact of AI on libraries and society overall? We will be publishing those provocateur interviews in the next couple of weeks, and they are incredibly thought provoking. I'd encourage everyone to take a look at them and think about them in their local context.

Just last week we had our task force workshop in DC, where we framed the four scenarios that will be developed. We hope to have the draft set of scenarios published right before the Coalition for Networked Information meeting in San Diego in March.
So again, lightning speed. We will then have another round of community listening sessions, both virtual and in person, taking in that information and getting feedback. We hope to have the final scenarios published by the spring meeting in May of 2024. And I just want to pause and say: the scenarios are just the scenarios. We as a community really need to think, and be proactive, about their possible impact. So for the remainder of the year, after the scenarios are published, we will host a series of workshops to help organizations assess their local strategic implications and how they might leverage the scenarios for strategic planning within their own organizations. Stay tuned for that as well.

I'll end with a little nugget of insight from the provocateur interviews we conducted in January. They were fascinating and varied. One provocateur suggested that in 10 years, if everything goes well, AI will create superhumans, and brain-computer interactions will be widespread. Another suggested a future in which the UN Sustainable Development Goals are exceeded and are even shifted to focus more locally. So there's very optimistic thinking coming from a lot of these provocateurs, who are on the leading edge of AI research. There was also a striking call for us to consider something called responsible thinking, which pivots the traditional framing around computational or critical thinking skills, which we discuss very often in the research environment, toward things like responsible systems, diversity and inclusion, and responsible innovation. As this panel moves forward, I would welcome the opportunity to discuss this more with you.
I'm going to leave it at that and pass it over.

OK. Thank you. There are really two things I would like to talk about in relation to where we're going to be in 2034, and it's a little bit the corporate perspective; I'm the corporate representative here. The first topic is increased productivity. I think in 2034 it will simply be normal to use AI tools to increase productivity in many of the processes we use. We're not necessarily fundamentally changing what we are doing, but we're doing it in a different way.
To give a few examples, think of metadata creation. Metadata creation from scratch, for example, for digital objects: let's say you load an image into a system, and the system automatically recognizes the image and creates a metadata record. We can also talk about records for e-books, for example; we know that today a lot of them are missing abstracts. A lot of these records are rather thin, so we can automatically enrich them: upload the entire book, and a tool automatically creates a summary, so we have an abstract. There are lots of other things we can do here too, such as correction of metadata.
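That enrichment step is straightforward to sketch in code. A minimal example, assuming the OpenAI Python client; the model name, prompt, and record fields are illustrative placeholders, not any particular vendor's product:

```python
# Sketch: enrich a thin e-book record with a machine-generated abstract.
# Assumes the OpenAI Python client; model and field names are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def enrich_record(record: dict, full_text: str) -> dict:
    """Add an abstract to a catalog record that lacks one."""
    if record.get("abstract"):
        return record  # already has one; nothing to do
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Write a neutral 150-word abstract for the book excerpt provided."},
            {"role": "user",
             "content": f"Title: {record['title']}\n\n{full_text[:20000]}"},
        ],
    )
    record["abstract"] = response.choices[0].message.content
    record["abstract_source"] = "machine-generated"  # flag it for human review
    return record
```

Flagging the output as machine-generated matters here: the point in the talk is that the human stays in the loop on enriched records.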
Think also about disambiguation: currently we have metadata records with differing author names, although it might be the same author behind them. There again, we might be able to use AI tools. So metadata is one example. The other example is really about research integrity. We are already indexing and publishing masses of data, masses of articles and papers. Paper mills come to mind.
How do we preserve quality, how do we set guardrails around what we publish and what we index? Today we're already using some AI tools for that. There are tools, for example, that raise red flags if there are anomalies somewhere in the abstract or somewhere in the text, and then a human looks at the red flags and checks: is this real research or not, et cetera.
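To make the red-flag idea concrete, here is a toy screening pass; the heuristics (tortured phrases of the kind documented by Cabanac and colleagues, plus a length check) are illustrative examples, not a real integrity product:

```python
# Toy red-flag screening: cheap automated checks surface anomalies, and a
# human reviews whatever gets flagged. The heuristics are illustrative only.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "bosom peril": "breast cancer",
}

def red_flags(title: str, abstract: str) -> list[str]:
    flags = []
    text = f"{title} {abstract}".lower()
    for phrase, likely_original in TORTURED_PHRASES.items():
        if phrase in text:
            flags.append(f"tortured phrase {phrase!r} (likely {likely_original!r})")
    if len(abstract.split()) < 40:
        flags.append("abstract unusually short")
    return flags  # an empty list means nothing for a human to look at

print(red_flags("A study of counterfeit consciousness", "We apply methods."))
```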
So that's going to be extended. The problem, of course, is that the paper mills also use AI. I like my Harry Potter quotes, and there's a nice one where two people discuss the fight against evil. One says: you can do magic, surely you can sort out anything. And the other answers: the trouble is, the other side can do magic too. So this is a bit of a catch-up race; everybody can use AI. It's not something we can solve purely with AI, but AI is definitely going to help, and I heard a lot of discussion here about networks that help us identify fraudulent research. There are lots of other things that can be done, for example recommender systems for collection development, which take masses of data into account (costs, usage, et cetera), come up with recommendations, and can be built into our library systems software.
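A toy version of that recommender idea, with a made-up scoring rule over cost and usage; a real system would fold in far more signals:

```python
# Toy collection-development recommender: rank candidate titles by
# predicted usage per unit cost, with a bonus for filling a subject gap.
from dataclasses import dataclass

@dataclass
class Title:
    name: str
    annual_cost: float      # subscription or purchase cost
    predicted_uses: int     # e.g., from historical usage of similar titles
    fills_subject_gap: bool

def score(t: Title) -> float:
    cost_per_use = t.annual_cost / max(t.predicted_uses, 1)
    return 1.0 / (1.0 + cost_per_use) + (0.25 if t.fills_subject_gap else 0.0)

candidates = [
    Title("Journal A", annual_cost=2400.0, predicted_uses=1200, fills_subject_gap=False),
    Title("Journal B", annual_cost=900.0, predicted_uses=150, fills_subject_gap=True),
]
for t in sorted(candidates, key=score, reverse=True):
    print(f"{t.name}: {score(t):.2f}")
```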
Writing is another area where we can increase productivity: automated drafting, with a human who then sits there and checks that the result really does what it's supposed to do. And coming to that point, it starts with the human and ends with the human. I don't think this means the jobs are going away; they're just going to change. Human expertise is really important and will increase in value in 2034. That doesn't mean the jobs aren't changing, but I think AI aspects will become the norm in job descriptions in one way or another. So education is shifting a little bit: knowledge needs to be built up in how to use AI. That doesn't mean everybody is going to write large language models or that sort of thing. It's more about being able to use an application that uses AI responsibly, complying with policies, ethical rules, et cetera; that can only be done by a human.

My second point is that AI changes the way we find, access and consume information. For example, by 2034 this kind of AI-based discovery is basically going to become the norm.
I don't think we are going to invest in training models. There was a lot of talk here about training large language models; we as a company are not training large language models, and I don't see us doing that in 10 years' time. I might be wrong, but I don't think so. What we are heavily investing in is grounding. What that means is that although the fluency and semantics of the answer come from the training data, the actual facts in the answer are based on documents that we feed the system. We're telling the system: answer this question, give us a nice narrative drawing on what was learned from the training data, but the facts come from journal articles X, Y and Z, and those are also referenced. That, I think, is more where we're going to be in terms of investment, and it's already starting today.
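In code, the grounding pattern described here looks roughly like the sketch below; the toy retrieval step, corpus, model name, and citation format are assumptions for illustration, not the speaker's actual system:

```python
# Minimal grounding sketch: retrieve documents first, then instruct the
# model to answer only from them, citing each source it uses.
from openai import OpenAI

client = OpenAI()

CORPUS = [  # stand-ins for indexed journal articles
    {"id": "doi:10.1000/x", "text": "Study X found that reading aids improve comprehension."},
    {"id": "doi:10.1000/y", "text": "Study Y measured metadata quality in e-book records."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    # Toy keyword-overlap ranking; a real system would use a search
    # index or a vector store.
    q = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda d: -len(q & set(d["text"].lower().split())))
    return ranked[:k]

def grounded_answer(question: str) -> str:
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the sources below, citing their "
                        "bracketed ids. If they lack the answer, say so.\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("Do reading aids help comprehension?"))
```

The design point matches the talk: the model supplies the narrative, the fed documents supply the facts, and every fact can be traced to a referenced article.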
I think we're also going to see a lot more personal assistants popping up, either system-specific or maybe institutional, that help users, students, researchers and so on do their everyday work. There are already examples of systems or tools that flip the script: it's not the student asking the questions; the system guides the student through the coursework and asks the questions the student needs to have answered. So it's a somewhat different way to approach material and approach a course.

And then we have reading aids. I think by 2034 everybody is going to use a reading aid for longer articles or longer documents. Today we already have systems where you can upload a PDF or a link and the tool will summarize the article for you. That's going to become the norm whether we want it or not, and however we write about it, because users don't really think about copyright, and I doubt they read the small print. So we'll have to see how this works out for us. And mentioning copyright: I somehow suspect we'll still be sitting here in 10 years discussing copyright. You know, maybe AI is going to solve it for us.
Easy, easy. No pressure. Yeah, that's almost everything I want to say; just one more point. I absolutely love the fact that many national libraries and other libraries are digitizing their national heritage, and that material is becoming much more usable through AI tools. This is the historian in me: you have a huge digitized collection, and you can run tools like ChatGPT over those collections to answer questions that before would have cost me hours and hours and days of work. That's absolutely fantastic, and I think in 2034 we'll see a lot of that. But this is just a beginning, and there are dangers here: it always favors institutions in countries with a really good digital footprint, and others don't have that. That's something we need to think about. On the other hand, we're going to have lots of language translation tools, which opens up research in other languages, translated on the fly.
So there's also something good in that. Thank you.

Thank you. So before we talk about 2034: just a few minutes ago, during the break, two people I know came up to me. One of them said, are you going to be the doom-and-gloom guy? The other said, are you going to depress everyone? That's not the plan, I assure you, but I am going to provoke, and I'm going to pick up on these ideas of uncertainty and how they eventually feed into planning; Thomas mentioned it, and I would add the sense of culture and how we approach things. I know there's a relationship between uncertainty and anxiety, but I will also tell you that one of the best leadership coaches I ever had in my career told me there's an optimal level of anxiety, and it's not zero. Too much anxiety is paralyzing; with no anxiety, there's no urgency. If I don't hit that optimal spot of anxiety for you, I apologize, but I'm trying to find the sweet spot.

And I'll turn the mirror on myself. A little while ago, at the Coalition for Networked Information meeting in December 2021, I gave a presentation where I made 10-year predictions about AI.
At the time I did it because I said: nobody's going to remember, so what's the harm? We're not going to talk about it. Two years later, people did remember. The predictions were very much of the technology-and-workflow kind: that all truck fleets would be autonomous, without drivers; that human resources decisions would be made by AI; that the transactional and documentation aspects of project management would be done by AI; that the majority of experiments would be designed by AI; and that most software, 80%, would be written by AI, the only reason it wasn't 100% being that you still need humans to generate original code. That was just over two years ago. So let's revisit these.

This January, one of the largest autonomous truck startups announced they're getting rid of their human safety drivers. I suspect many human resources decisions in large companies, simple hiring and firing decisions, are already being made by AI. In terms of the transactional documentation of project management, I think lots of transactional documentation is being generated by AI, and not just in project management. At Carnegie Mellon, in terms of experiments, a researcher named Gabe Gomes built what he described as a non-organic intelligence system (it's a great name) that designed, planned and executed a chemistry experiment for the first time. A program officer at the National Science Foundation commented on this, so you don't think this is just spin and PR: he said they put all the pieces together, and the end result is far more than the sum of its parts. It can be used for genuinely scientific purposes. And in terms of code, I don't have statistics, but I am confident there are many contexts where more than 50% of the code is generated by AI; I see it at Carnegie Mellon as well. So, roughly two years after I made those predictions, I think these things are going to happen far faster than 10 years.
So for 2034, I'll focus on academic publishing and research. Whatever we think of big tech, one thing I think they're exceptionally good at is operating at speed and scale at the same time. I spoke with someone at Microsoft recently who said: every day at Microsoft we have to make thousands of reviews in the software development process: pull requests, documentation, license choices, feature requests, bug fixes, and so on. In your mind, pick a number for how many they deal with in any given single day. He told me it runs as high as 600,000 per day. At that scale, you cannot look at all of them. You cannot have someone review each one. You cannot have anyone say, I'm going to look at all of this. You have to make very difficult trade-offs and choices about what risks you accept and how you move forward.

And I'll tell you that this mindset exists with AI. This was at a meeting under the Chatham House rule, so I can share this, but I can't tell you any more than that. I will tell you, from highly credible sources: every company building a large language model is using copyrighted material in training, and they have legal defense funds. So they will get sued, and they will fight it out.
I don't say this is a good response to speed and scale, but it is a response to speed and scale. I went to a couple of workshops on automated science over the last few months, one at Carnegie Mellon, one at NC State, talking about how the work of science will be done in the future. A provocative question came up: what do you do in a world where you have a billion scientists? If you had a billion people working on a problem together, not all at the same level of rigor or expertise, but literally a billion people working on something together, how do you operate in that kind of environment? One researcher put it in a very interesting way. He said: we used to build machines to do chemistry; we now need to ask what chemistry machines can do. He said: in my lifetime I could imagine many thousands of molecules, but now I can literally sift through billions with these tools. So why would I design the experiment the way I was classically trained, instead of taking advantage of the systems we have?

So this is speed and scale that is unprecedented. And if I think about an interface where you can have a billion people working on something together, with varying levels of expertise and input, but all working toward some sort of common goal: it's a game, a massively multiplayer game. That's the kind of environment I think we're going to move toward in how we do our science and our research, where we engage people who have no connection to our institution whatsoever and no formal professional training, but who frankly just work really hard to figure out how the system works and operate within it. How do we find ways to bring them in, rather than saying, you can't be part of this because you don't have a degree? These systems are fundamentally pattern recognizers.
That's what they do: they look through solution spaces to find patterns, using math and probability to solve problems. What aspects of our work are pattern recognition? What aspects of our work benefit from interacting with these systems, from us training and tuning them? That's going to be a key question going forward.

If there's a word I think about for what publishing, collection development and research will be like in 2034, it's 'relentless'. These systems will produce so much content, continuously, that we will have to look through it in completely new and novel ways. We're not going to be able to use the existing paradigms of purchase and sale. We'll have to be prepared for continuous, adaptive learning in this kind of relentless environment. And if you look at the so-called synthetic data produced in the intermediate stages of these systems, it's bizarre: things like cat paws all mashed together into one image. Is it important to maintain that? Should we keep it for reproducibility? You wouldn't preserve it unless you really like cats, or unless it's something important. And what's the scientific equivalent of that? If you have hundreds of thousands of molecules mashed together, do you need to keep that?
So this is the kind of environment I think we'll see in about 10 years, and I do think it has a profound impact in terms of workforce and jobs. Primarily because, and I said this in the previous session, if you look at the other two big cases I think were transformative, the internet and the web, what has fundamentally shifted is the role of the private sector within the triad of government, industry and universities working together. Back then, industry arguably had the least influence; if you read the book "Where Wizards Stay Up Late", it talks about how the government would regularly push around AT&T and DEC and say, you can lose your government contract if you don't do this. That is not the case today. In the last five years, the majority of AI development has come out of the private sector; they hired away the talent from the universities and told them: you're not working on research anymore, you're doing product development. So that is a big change. How do we get ready for a world like this?

Thank you. Before we open it up, if I may, I want to get your reactions on the jobs question.
We have the dystopian and the utopian. You were saying that we just retrain, and we may not lose much. But in terms of scale, how much, especially in a world like ours, the information industry, publishing and libraries, are we talking about? Like, half of the people?

It's impossible for me to say what percentage of jobs are going to go away. What I can say is: look at the private sector, because they are motivated by the bottom line, no question. It's a simple statement, but they're not very focused on the development of staff, and there's plenty of evidence that they will use and cultivate people only when it suits them. As was said in a previous session, in 2023 the tech sector laid off about a quarter of a million people, which is 50% more than the year before and the highest since the Great Recession. And the irony is that last year was not a recession. In the month of January alone, some 30,000 more people were laid off. Not all of them are software developers or IT people; there are project managers and other roles as well. But this is looking like a trend, and the pattern seems to be that there's no big announcement, no "we're laying off this many people because our shareholders have told us to". It seems to just be happening continuously. And that leads me to think they're making continuous improvements from a productivity perspective and shedding roles as they do it.

So I think the fundamental message is that there has to be continuous adaptation. I can't say: six months from now I'm going to be ready to jump in and learn more about AI. I have to do it continuously and think about what it means for my job as the requirements change and as the capabilities change, so that when change happens I can say, I'm already up to speed on that, or I've already been investigating ways to deal with it. I don't think we can react to a new system only after it lands on us.

So do you think, if we don't do anything, we would die?
Can I come in? Yeah. I just want to add: our provocateur interviews got into this extensively. What is the impact going to be on research libraries? And we heard, unfortunately from more than one of them, that there is a real concern that AI will make libraries obsolete. So being proactive about these technologies as they come out, and about planning and strategic planning, is going to be really important for our sector going forward. One line that stuck with me, and this was said in an interview, it's not entirely accurate, but: you see libraries now measuring their footfall and traffic, where people used to come for the Wi-Fi. That's not what the library is about. The library needs to have more purpose than footfall and things like that. Libraries need to provide infrastructure for using these technologies, specialized to particular needs. And there was also a call for greater work with the iSchools and others in the sector to prepare the next generation of librarians for this.

Can I bring up something that was discussed yesterday?
One of the speakers said that "artificial general intelligence" is the worst misnomer. There's a big hype around it, and big tech companies are making money off of it. But the scientific community is starting to have doubts, including the so-called godfather of AI, Geoffrey Hinton, one of the co-creators of this technology, who has said that within 20 years we may reach the point of being close to human intelligence. It was mentioned here half-jokingly, but also kind of dismissed: it's not going to happen in ten or 20 years. Maybe you'll have it in 100 years, because we are basically biological machines, and at some point we may have a machine mind. But ten, 20 years?
OK, I'd like to pick your brains on whether you want to come back on this.

In the provocateur interviews we did, this came up in the context of a mechanism, a way for humans to stay relevant in a world in which AI is smarter, is more efficient, is the better worker than we are. So when you think about superhumans, it's a way to ensure that relevance. I will say, the provocateur interviews were meant to be stretch thinking. These are people who are on the leading edge, who are out there in their thoughts, and they were meant to really push that in their thinking. It's not saying this is the world that will come true. So just have that as context, and I'm doing a little doom and gloom, like you said, about what this future may look like. But the brain-computer-interaction, superhuman stuff was really a response to this idea that AI becomes, you know, just sort of a large supercomputer that can answer all questions, and maybe wins the Nobel Prize for medicine, or
Best Director for a film.

I'll make some comments, but I want to turn to something that Christina said. There's a great article in Rolling Stone, I just pulled it up, called "These Women Tried to Warn Us About AI"; I'll put it in the notes. And they make a really good point that this conversation about sentient artificial superintelligence is exactly what big tech wants you to have, because it's a frightening prospect, and the more energy that's spent on that, the less energy is spent on the very real risks we have today with the systems we are all testing for them, for free. So I think that's a really important point to keep in mind.

Current AI systems don't think. They do mathematical inferencing on the data and training that you provide them, and they're very good at coming up with probabilities and ways to navigate a solution space. That's not, in my opinion, thinking. And there are all sorts of questions around creativity. What does it mean to be creative? Was it creativity when, after lightning hit a tree and caused several terrible accidents, somebody finally managed to go over, put a stick in it, and harness fire? If creativity is nothing but recognizing patterns, then yeah, I think we have some issues. But if creativity is more than that, then the kind of human expertise Christine described will be really incredible, and it will be in the places where people say: OK, I can't compete with this machine at doing this
particular kind of thing, but I have creativity, I have social and personal skills; I can offer that kind of expertise.

On personal assistants: some stretch this and use the word "agent". Basically, you're going to have an agent, more intelligent than you, that any researcher or user could create and set to work for them: do this research for me, this is my profile. That would be super. Do you believe that's one possibility for the research life cycle?

I think the point is really that an AI assistant can come in for the more process-like tasks, everything that is "superhuman" in the sense that you have to analyze a large amount of data, at scale and fast. That's really what AI is good at, and that can be used by researchers too. So if you're looking at a research project, there are tasks which are better and faster done by an AI assistant, and there are other tasks where creativity is needed, really human thinking. And I really believe that. I mean, I'm not talking about 100 years; I don't know what happens in 100 years. But in 10 years, yes, these systems will have a certain intelligence, but it's a very specific type of intelligence that comes from AI; it's not every type of intelligence. We're talking about things like sentiment, and I think sentiment is very important in the research process too. There are all sorts of things which are very human, and they play a part here; you can't just eliminate that. So to answer the question: maybe I'm really a positive thinker, but I do believe AI can really help us, bring us forward, without really threatening anyone. No question that jobs are going to change.
No question that we need to understand and know how to use AI, that's for sure. But I think that's possible.

I just want to add a story from my former institution, Johns Hopkins, which apparently did research where they had patients interact with a human physician and an AI "physician", and found that the AI did as well as or better than the human on the pattern-recognition side of diagnosis. Apparently the human physicians said, well, OK, but what about from a bedside-manner perspective? And people thought the AI had a better bedside manner too. So my sense is that the human physician needs to raise their game: if this bot is doing as well as you are on the pattern-recognition side, you can certainly do better at engaging your patients, caring about them as human beings, and giving that empathetic side that an AI is not going to be able to provide. It may seem somewhat facetious, but to me this is an example of where what we think of as human skills are going to be really important and not easily replaced by AI.

One last question, and then we'll open it to the floor. Let me put on my NISO hat here: we're working on something like an AI standard, and part of this is gathering ideas from you.
So, 10 years from now, is this solved? Do we have some kind of standard that facilitates the interaction, or is it all chaotic? I know it's a hard question.

It is a hard question, and let me share something. As some of you may know, Lawrence Lessig used to be, I don't know if he still is, at Creative Commons. He gave a talk where he mentioned that when airplanes were becoming commercialized and you started to see passenger flight, they obviously flew over farms, over vast tracts of the United States. Until then, farmers owned the airspace above their own land, and when that happened, the courts essentially said: not anymore. So there are goalposts that move drastically when, take your pick, commerce or innovation arrives. I think we're going to see a lot of that kind of thing happening in the next few years. I think it's important to have clear lighthouses that affirm what we think is important in terms of standards: how you interact with each other, how you share, maybe even the values associated with how you share. But the idea that we're going to have a standard that will persist for a long period of time? I'd think of gold standards as work to do in a more philosophical sense rather than a transactional sense.

OK, thank you, Sayeed. Now, questions or comments?
Yeah, you knew it was coming. Thank you. OK, please grab the mic.

So, you got me fired up. I've been told that I stir stuff up with my questions; I promise I'm not trying to hog the mic or anything. I'm just thinking about these things, and maybe you won't agree with me, but: I still write code every day, and 50% of my code today is being generated for me. It's integrated into my tools. I write a comment, and anywhere from 50 lines to entire pages of code are generated on the fly, every day. I don't see that train stopping. I'm a CTO; that means I have people who work for me, and my job includes taking care of those people and their professional career development. So now I'm faced with a reality
where, like many folks here who have people reporting to them and are in some way, shape or form responsible for their career advancement, I'm in a situation where I'm using these tools and I'm seeing the impact on my day-to-day work: I'm generating 50%, 100% more code than I used to. And while there are concerns around AI and copyright, all kinds of real concerns and issues, I can't tell my engineers not to use it just because the company still doesn't quite know where it stands on derivative works that come out of large language models, et cetera. I can't, out of fear that we might get sued because we end up shipping code that a model recommended, because it was trained on it, licensed or not, Creative Commons or not, stop my people from using the technology when their peers out in industry are using it to advance their careers and move ahead. That's the stuff I take to bed: I'm afraid for my job. I know my job in 2024; I know my job is to write code. And it's not that I'm planning to retire; hopefully I'll die with my fingers on the keyboard, that's what I love to do, I'm an engineer. But I know my job is going to change. So how do I protect my staff? Do I say, well, our institution or our company doesn't have a stance on this stuff yet, so don't use it in your job, because we don't know where this is going to settle? The dust can settle five years from now, 10 years from now; 2034 can come, and we say, oh, now it's OK. No: by that time, your staff has already moved on to another workplace that allows them to use these tools.
So how do we, as people managers, prepare our staff for what's coming, regardless of the larger policies, whatever our experiences?

I'll begin with a meta observation: we used to have a lot more farmers. We don't have that many now; they became industrial workers. That's the kind of transition that happens, and change is happening. I don't know whether what happens in 10 years will be on that kind of scale, but we will have fewer people doing certain tasks, and people doing different tasks.

To be more specific and short-term: having managed software development teams, there are lots of things we always say we're going to get around to, and we don't. We don't test enough. We don't write enough documentation. Right? Maybe you yourself don't want to do those things; maybe those aren't the most exciting things that attract a software engineer to a particular job. But they're important and they're necessary, and I would say even more so today. The security of that code is incredibly important. And licensing: companies are now spending enormous amounts looking at the attribution and licensing of what comes out of Copilot, because those matter, and the stakes are very large at scale. So these are old tasks that take on new importance, or new tasks that arise within this context. That's the kind of adaptation I'm getting at: I used to think this is what my job was as a software engineer, and maybe some of it is now being done by an assistant, but I've got to do all this other stuff too, and now we have the time to do the things we didn't get to before.
And I have to learn new skills to take on the new and necessary work. So that's a specific kind of response, but I think that's the type of thing you'll see.

Just to add: we have a lot of software development teams, and they're all jumping at it: oh, we want to develop with AI. It's very hard to tell them, well, you know, we have a lot of other stuff to do, can you please do your job? And there's a lot of fascinating stuff out there. One problem is that there is certainly a legal question with, for example, using AI to write code, namely whether we can even use that code, and at this time, not being a lawyer, I would say better not. But there are lots of other tasks which are really interesting, and I think there also needs to be a shift to see what new tasks there are and divide them fairly among the developers. That's a challenge we have as well: I have a lot of AI work now because I'm working on an AI platform, but I don't want to leave the other developers behind, because they also want to do interesting stuff. It's a balance you need to strike: there are lots of interesting tasks, they just need to be shared out, giving people chances to develop and educate themselves, and giving them the opportunity, and maybe also the time, to learn about those new tools.
Questions? Yeah, go ahead.

Hi, Athena Decker from the University of Central Florida. On the comment about an AI designing and conducting an experiment, and the capacity for AI to do that at a much larger scale: I can imagine that the AIs are going to start generating more and more experiments for each other, and then also writing up the results of those experiments, or, even with human authors writing up the results, writing them in a way meant for ingestion into AIs. So it becomes this situation where significant scientific and commercial discoveries are being produced mostly by AIs, or, in your scenario, by a gamified group of people all contributing in some small way to a very large outcome. So then the question is: where does that fit into advancing society? Because so much is commercially tied up in who owns it and who gets to go forward with it. What does it mean for how knowledge goes out and becomes useful in society? I know that's a very big question, but I'm wondering if you have any thoughts on it.

I'm wondering if this is really a new question. I mean, is that not something that was always part of our philosophy?
It certainly relates to how things have been done before, but the scale, I think, is just so different. I guess a lot of these questions are coming up now because of scale.

I'll offer a somewhat different perspective. There's a movement in the global South called environmentalism from below. It fundamentally acknowledges that the COP conferences have become useless: the same people make promises every year, go to the conference the next year and say, oh, we'll do it this time, we promise. And there's a recognition that many people in the global South are actually responding to climate change in really interesting, very small-scale ways. In essence they said: using the same global-North approach of building policies and legal frameworks and engineering solutions is not going to work. And that is very much the situation here, too. Even Europe, which tends to be more regulation-focused than the US, ends up saying, here's a framework of what you can do, and people game the framework. As a community, I think we need to do a lot more of this sort of AI from below, which is actually testing the systems, holding them accountable, bringing people in, getting access to the models, building on top of them. Building a foundation model costs hundreds of millions of dollars, but it's not out of the question: if we galvanized the National Science Foundation, NIH, whoever, and came together with the dollars, it could be done.
The Foundation is still available. So I think. Pretty much the question, are we going to die? So I would respond, who's the leader? Is it the institution you're talking about, or is it the people in the institution or is the people outside of the institution? If you're trying to say, take your pick the Library of the University Press.
Sitting right and invested in people within the institution. I empowering people. Yeah and Kenny Rogers from project muse. And my question is for all of you, although. A reference. You said. You said that. 34 published in.
You can. Out there that if I put all my alchemists hat. I can see that as an opportunity because it will require a lot of curation, which is something that traditional. Publishers and librarians are perfectly positioned to do.
Do you think that well, do you agree with that assessment. And Perhaps more. We what do you think that publishers and. To get themselves in a position where they're considered to be the curators that people respect and. Information Yeah.
So yes, I agree with your statement that it is a curation problem, that it will continue to be a curation problem, and that people can play a role in that. One of the provocateurs made the point that we're going to see a shift from affirming, recognizing and celebrating individual creative contributions toward recognizing patterns and aggregations; that's going to be essential. And as you mentioned, your title list is itself an aggregation, so in some sense you're very well positioned. As publishers and libraries think about the next 10 years: what does it mean to have the skills to work through those aggregations and help the AI systems with the curation?

People still matter a lot here. At the workshops I mentioned, the scientists used the term "human in the loop": who is the human who intervenes, who trains the system, or, perhaps more importantly, who bridges between different scientific disciplines? One exchange I remember: somebody gave a presentation about using these systems in organic chemistry. Somebody came up and asked, how well do you think your model will work with inorganic chemistry? And the presenter said, which kind of inorganic chemistry? They named one, and he said, don't worry about that one, it won't work; I can tell you that. So even within chemistry, the models are so fine-tuned for a particular subdiscipline that they don't easily transfer to another, but people can help bridge that. Those are the kinds of gaps, the intersections, where humans have a role.
From the curation side, in terms of agency: I don't believe we'll have enough people to do that alone. We should be mindful of the beginning of the internet, when our reaction was to catalog the internet, and then we realized we couldn't do it. So we will need a different approach, and I agree we will be in the loop, more in an overseeing, management role.

I just want to add quickly, from the teaching and learning side: libraries have a very long and respected history of providing information literacy education, and there are strong roles for libraries going forward in teaching and learning, in literacy, digital literacy, around these tools. We know our institutions of higher education are coming out with policies on appropriate and inappropriate use, and it's really important for librarians to be at the forefront of educating students on reviewing outputs, checking their veracity, and making sure hallucinations and other sorts of misinformation are caught and evaluated.

Just to add: I put in the chat notes a course, Elements of AI, which is produced by the University of Helsinki. I am completely for using AI tools experimentally;
anything can and should be tried. But the foundation of the work, how these models function, I think it's important to understand that as well. Think about the browser: sure, you can consume a lot of information with it, but we built on it because lots of people understood the tools we have. If you understand how the bike works, you ride it better. So people should be able to understand the foundations of how these LLMs work, even without getting into the math.
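For a feel of those foundations, here is a purely illustrative toy of the next-token loop at the heart of a language model; the five-word vocabulary and the scoring function, a stand-in for the real neural network, are invented for the example:

```python
# Toy next-token loop: a language model repeatedly scores every candidate
# next token, converts the scores to probabilities (softmax), samples one,
# and appends it. The scorer below stands in for the neural network.
import math
import random

VOCAB = ["libraries", "curate", "AI", "metadata", "."]

def next_token_probs(context: list[str]) -> list[float]:
    # Stand-in scorer: mildly penalize tokens that just appeared.
    logits = [-1.0 if tok in context[-2:] else 1.0 for tok in VOCAB]
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]  # softmax: scores -> probabilities

def generate(prompt: list[str], n_tokens: int = 6) -> list[str]:
    out = list(prompt)
    for _ in range(n_tokens):
        probs = next_token_probs(out)
        out.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return out

print(" ".join(generate(["AI"])))
```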
OK, I guess we're out of time, but we can do one more question. One more question. Use the mic, please.

Hello? Thank you. So I'm just thinking, AI can do a lot. I think in 10 years everybody will have an AI assistant device, a wearable, and you can ask it any question, you can do anything with it. I think there will be flying cars, and AI will control all those flying cars. I think there will be AI vacation planners: you say, this is my budget, tell me the cheapest routes, where to stay, and all that. A lot of good things are coming out of AI, planning-wise and routing-wise, and with quantum computing and AI merged together, unlimited things can happen.
Planetary-scale things can happen. But my question about all of this, whether it's half-joking or it's really going to happen and really is the plan, is that energy is the biggest problem, I guess. Where do you get all that energy? Supercomputers consume a lot of energy, and there will be petabytes of data generated probably every hour. Where is it going to be stored, and how is it going to be accessed? So I think there are more challenges coming with AI as we progress; it will be very challenging to handle that data and process that data, and obviously money-wise too, though money is apparently no obstacle, because they're talking about $700 billion for OpenAI. So I think that could be challenging. What are your suggestions, what is your thinking on this?
I mean, these are two major hurdles, I think: energy, and processing petabytes of data. What do you think?

It was mentioned yesterday in one of the conversations, the impact on climate change. And OpenAI is raising money; they reportedly want to raise $7 trillion. But there are companies working on combining quantum computing with energy savings. So probably, I hope, though we don't know, they will find a solution in six, seven years. But meanwhile we don't have that kind of global self-restraint.

Quantum computing is basically going to conserve energy? In that sense? Is that what you're saying?

No, but I mean, they're trying to build prototypes so you can run data-intensive workloads more efficiently. We don't know whether it's going to succeed or not, but some people are working on it. Meanwhile, we don't talk much about the impact of these data centers, and it's huge.

This will be addressed from both directions, right? It's true that, beyond efficiency gains, the move to more unstructured data may also be profound in terms of processing: you don't need to have XML-structured data and so on. And Google is apparently looking at building nuclear-powered data centers over the ocean. So this will be addressed from both directions; there are very smart people in these companies, and they are nothing if not creative.
All right. Hey, thank you. Well, that's the end. Thank you, everyone. Thank you all.