The Future of Machine Learning and AI: A Visionary Panel Discussion
Speakers: Josh Mesout, Matt Dupree, Dr. Morten Middelfart & Kumesh Aroomoogan
Summary
Follow industry experts Josh Mesout, Matt Dupree, Dr. Morten Middelfart, and Kumesh Aroomoogan on a journey into the future of AI and machine learning. They forecast major AI disruptions on the horizon, including the rise of causal inference, generative AI, and no-code platforms, then delve into disruptive machine learning technologies like ChatGPT, GitHub Copilot, and synthetic voices in customer service, and how they impact our daily lives. They address concerns around AI adoption, job replacement, and regulation, while also touching on the democratization of data, the challenges startups face when scaling, and the potential of AI-driven advancements across industries. The conversation explores innovative ideas like AI taxation and the right to know if you're talking to an AI, highlighting the dynamic and evolving nature of the AI landscape.
Transcription
Josh: We're coming through. Fantastic, thank you very much for everyone joining today. I'm really excited to have a really wide range of machine learning expertise on the stage today, talking about some invigorating concepts within the space.
The introductions here, I think, are a little bit shallow, but I think we have a really wide range of people and a really broad range of machine learning.
One of our goals here is actually trying to make this really understandable for people, breaking this down to a level where we can talk about the concepts in a really effective way.
I think one of the things I'm really excited for is the broad range of domains and industries we represent here as well.
To kickstart us here and start talking about what we're really boiling down to, which is the future of machine learning, I think if we talk a little bit about potential futures. I don't want to jump into singularity, but there's a lot of conversations going on right now about ChatGPT and applying machine learning towards human intelligence, trying to take that next frontier.
Guys, what do you think is the next big interesting space that machine learning will disrupt in how we live our day-to-day lives?
Matt Dupree: Yeah, so, sorry, it's a little loud in here. Just to confirm the question: what's the next big area you think machine learning will disrupt?
Yeah, so I have a take on this. There's an area of machine learning that's not as hotly discussed, called causal inference. Causal inference, one way of talking about it is, we all believe that smoking is not good for your health, but we've never actually conducted an experiment to verify that, right? Because such an experiment would be unethical. And so, experiments are typically how we isolate causation. If we believe that smoking is not good for our health and we haven't done an experiment, how did we figure that out? Well, the answer is causal inference. There's this other way of isolating the causes of something with data. This is, I think, an underexplored aspect of machine learning. When you understand the causes of things with a lot of data, it can be very disruptive and transformational. To make that concrete, I worked in product analytics, that's my domain where I really started working with machine learning. We were really interested in causal inference because of the possibility of understanding why users were doing certain things on certain websites. If you understand why they're doing those things, then you can optimize your digital experiences. But I think causal inference is a big deal, it's underrated, and it's going to be disruptive here pretty soon.
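To make Matt's smoking example concrete, here is a minimal sketch of one causal-inference technique, backdoor adjustment by stratification. The simulated data and variable names are invented for illustration, not anything from the panel: a confounder inflates the naive treated-vs-untreated comparison, and averaging the effect within strata of the confounder recovers the true effect.

```python
# Illustrative backdoor adjustment on invented simulated data.
import random

random.seed(0)

# Simulate a confounder Z (say, an age group), a treatment T that is
# more likely when Z is true, and an outcome Y driven by both T and Z.
population = []
for _ in range(100_000):
    z = random.random() < 0.5                         # confounder
    t = random.random() < (0.7 if z else 0.3)         # treatment depends on z
    y = random.random() < (0.2 + 0.3 * t + 0.4 * z)   # true effect of t is 0.3
    population.append((z, t, y))

def mean_y(rows):
    return sum(y for _, _, y in rows) / len(rows)

# Naive (confounded) estimate: compare treated vs untreated directly.
treated = [r for r in population if r[1]]
untreated = [r for r in population if not r[1]]
naive = mean_y(treated) - mean_y(untreated)

# Backdoor adjustment: estimate the effect within each stratum of Z,
# then average the strata weighted by how common each one is.
adjusted = 0.0
for z_val in (True, False):
    stratum = [r for r in population if r[0] == z_val]
    t1 = [r for r in stratum if r[1]]
    t0 = [r for r in stratum if not r[1]]
    weight = len(stratum) / len(population)
    adjusted += weight * (mean_y(t1) - mean_y(t0))

print(f"naive estimate:    {naive:.3f}")     # inflated by the confounder
print(f"adjusted estimate: {adjusted:.3f}")  # close to the true effect, 0.3
```

This is the data-only route to causation Matt alludes to: no experiment is run, yet conditioning on the right variables isolates the treatment's effect.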
Josh: I think it's actually really interesting, AI trying to understand the things humans can't quite wrap their head around. I think it's something that's been really true in finance. I know that one of the areas you're looking at at the moment is how you can start to take some of those human sentiment and NLP problems and start to get an understanding that can be advantageous in financial situations.
Kumesh Aroomoogan: Yeah, exactly. So there are two things we see right now in the space that are super hot. One is generative AI, which every single VC is basically scrambling to invest in across all these companies. Generative AI has been around for quite a while, but because of ChatGPT, it blew up. It's essentially generating additional data on top of existing data.
So what we do specifically is we add categories to unstructured text documents and then we add scores on top of these documents. So we can let you know if the document is very positive or negative and so forth. These are additional scores you can now use in different models.
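A toy sketch of the document-scoring idea Kumesh describes: tag unstructured text with a category and a sentiment score that downstream models can consume as features. The tiny lexicons here are invented placeholders, not a production vocabulary.

```python
# Invented mini-lexicons standing in for a real NLP pipeline.
POSITIVE = {"growth", "beat", "strong", "record", "upgrade"}
NEGATIVE = {"loss", "miss", "weak", "lawsuit", "downgrade"}
CATEGORIES = {
    "earnings": {"revenue", "profit", "earnings", "beat", "miss"},
    "legal": {"lawsuit", "settlement", "regulator"},
}

def score_document(text: str) -> dict:
    """Return a sentiment score in [-1, 1] and matched categories."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    total = pos + neg
    sentiment = (pos - neg) / total if total else 0.0
    cats = [c for c, vocab in CATEGORIES.items() if words & vocab]
    return {"sentiment": sentiment, "categories": cats}

doc = "Quarterly revenue beat expectations on strong growth"
print(score_document(doc))  # {'sentiment': 1.0, 'categories': ['earnings']}
```

The resulting numeric scores are exactly the kind of "additional scores" that can be fed into other models as features.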
The other area that's also very hot is explainability in predictive modeling. If you generate a predictive score, say you build a machine learning model to predict loan underwriting at a bank, and there's no explainability behind the prediction, you're going to have a lot of issues when it comes to regulation. So a lot of companies are now trying to develop ways to explain these types of models in a simpler way, so top management can actually understand them.
So those are the two areas we're seeing right now.
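For the explainability point, here is a hedged sketch of the simplest case. With a linear loan-scoring model, each feature's contribution to the score is just weight times value, which is what makes such models easy to explain to management; the weights and feature names below are invented for illustration, and real systems often need SHAP-style attributions over richer models.

```python
import math

# Invented weights for an illustrative linear loan-scoring model.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.2, "years_employed": 0.4}
BIAS = -0.1

def explain(applicant: dict) -> tuple[float, dict]:
    # For a linear model, weight * feature IS that feature's
    # contribution to the score, so the explanation falls out for free.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    prob = 1 / (1 + math.exp(-logit))  # logistic link to a probability
    return prob, contributions

prob, contrib = explain({"income": 1.5, "debt_ratio": 0.9, "years_employed": 2.0})
print(f"approval score: {prob:.2f}")
for feature, c in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature:>15}: {c:+.2f}")
```

The regulatory appeal is that every score decomposes into named, signed contributions a non-specialist can audit.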
Matt Dupree: I don't know, wasn't there a company that got in trouble for not having explainable models recently? I want to say Apple got in trouble when they originally put out their credit card, and they couldn't really explain why the model made the decisions it did.
Dr Morten Middelfart: I can offer a suggestion here too. I come from the 70s, and there was nothing meant by that, but I've seen the waves of technology move for quite a while, since the 80s. One thing that comes to mind when I think about machine learning is that it started out as something available to only a few super big companies. But then, through innovation in usability and explainability, people need to understand that it is fundamentally just a big spreadsheet. It has no mind, and the sooner we tell everybody that, the sooner it becomes demystified in a way such that people will actually start exploring different areas. So, all the way down to the individual: if we combine that with what we saw in analytics, machinery that can run on smaller and smaller hardware and is easier to use, suddenly we get the potential for mass adoption and individualization at a completely different level. Because I think how we talk about these things is what creates a situation where it's not for everybody. I think it is for everybody, and I think everybody should care, but not because we don't want to see a Terminator. We should care because it can hugely accelerate whatever it is we seek to do in life. It can take care of the stuff that takes time away from what we really want to do. It can empower us as individuals, and it can do a lot of beautiful things. Back in the 90s, nobody knew how to operate a spreadsheet. Let's fast forward that 30 years.
Kumesh Aroomoogan: Yeah, and just to add to that, a lot of machine learning platforms now have a no-code element to them. A lot of companies are creating no-code platforms, and essentially, you don't even have to be a super experienced data scientist anymore. Large organizations can have their business analysts or data analysts build sophisticated machine learning pipelines and use cases on a no-code platform, and that increases adoption within the enterprise itself. So we're seeing a lot of those.
Josh: I think the common theme we see there, right, is democratization of AI. It's about bringing the barrier to entry down. Whether it's finance professionals and executives being able to see trends they haven't seen before because the data is too deep, or something like augmenting an x-ray and being able to detect that kind of information, I think the really powerful part of machine learning is how we put it into our hands. I've got a phone that, if it thinks I'm going to be late for work, recommends me a different train. A simple application, but we're slowly seeing these types of use cases in consumer markets. In the enterprise, you very much see that type of no-code setting.
What's an example of a disruptive machine learning model that you think consumers don't realize is there?
Dr Morten Middelfart: I can offer one: try to do what you can do with deep learning on a different platform.
Josh: That's interesting
Dr Morten Middelfart: Because everybody is so married to the neural network. But for a lot of types of information, it's absolutely the wrong platform. Just take the idea that one-hot encoding is the best way to interact with a dataset. If you came to somebody in the 70s or 80s, and I have not tried that in the 70s, I was born there, but if you came and said, well, I'm just gonna build a bunch of neurons and then throw all this data at it. In other words, you're excessively overusing computation, and you're throwing away the smart principles that were known to hackers from when a hacker actually meant somebody optimizing stuff. You're taking all that away, and we put it into a machine that now we think has a mind, even those that work with it, which is even more interesting. Optimization is an art, something that makes stuff go cheaper, faster, and get to everybody. I think it's gonna come back, and that's thinking about AI where you break free. Just because somebody else did it with deep learning doesn't mean that you have to do it. What if there's a better way, right? I think that would disrupt. I mean, I guess that's disruption by definition, right? So, I think that would disrupt.
Josh: I think a really good example that we see in the engineering world, one that's been heavily adopted in the cloud-native space, is things like GitHub Copilot. I mean, that's challenging some of the value propositions that ChatGPT presents. You know, you can throw code at ChatGPT and say, convert that from Python to R.
I think, how do we help describe to the audience the impact that has on almost AI building AI?
Matt Dupree: Yeah, ChatGPT and GitHub Copilot are both really interesting. You kind of want us to quantify the impact, and I think it's really early; it's hard to say how much more productive programmers will be as a result of these things. And actually, it's interesting if we go back in history a little bit. I know I'm young or whatever, but you know, it's in books, that's fine. When programmers were first moving from writing in assembly to higher-level programming languages, there was a lot of resistance and pushback. A lot of programmers said, oh, this isn't real programming, you guys aren't really doing the same thing, or this is for the yuppie programmers, they're not really serious. And I think there's a little bit of that reaction happening to ChatGPT now. There's a lot of strong reaction against it as threatening. Yeah, it's disruptive, and so it's very early in being able to see what the impact will be, because there's a lot of resistance to the technology. But even just using it myself, it's extremely useful. I can't quantify how much time it has saved me, but instead of trying to do a thing by getting on Google and searching through a bunch of different links to figure out what code to write, you kind of start typing the code that you need, and it completes it for you.
There's actually, I'll end on this, there's a really funny YouTube video, and it's like, you know, what my parents think I do versus what I actually do, and it's from the perspective of a software engineer. Some of you guys might have seen this. The first part of it is, this is brutal, you don't want to listen to me narrate this YouTube video, go check it out. It's pretty funny, but it kind of captures what programming is really like a lot of the time, which is just looking stuff up on Google. And ChatGPT really changes that.
Josh: Interesting.
Kumesh Aroomoogan: Yeah, and then going back to your previous question in terms of figuring out what's real AI or not. So, within a lot of the banks now, there is synthetic voice when it comes to customer success or customer service. So sometimes when you pick up the phone, you call a bank, you're on the phone, you might not even know if that's a real person or not because the voice is so real. And so it's a little scary that it's kind of driving that way because you don't really know if you're talking to a real person or not, and it's becoming more and more adaptive based on more customers chatting with it, so it's becoming more and more like a real person. And then the other one that we're seeing is chatbots, of course, like ChatGPT, but these chatbots are trained for answering customer service or customer support tickets at banks and insurance companies. So when you're chatting, you think you might be talking to a real person, but in reality, you're actually talking to chatbots.
So, there are these kinds of technologies that are just driven by AI, and the more you chat with them, the more they know you at the end of the day.
Josh: We've got to dig into that, right? I think one of the interesting parts about visual intelligence or machine learning chatbots right now is replacing that human experience. A couple of examples of that: instead of writing an email yourself, you say, "ChatGPT, write me an email to this customer. Use this amount of candor and hopefully close with this type of outcome." And it almost replaces the human in the loop.
What are your perspectives on where that can take us? Is it something our industry needs? Is it something we should be worried about, almost cautiously treating as a black-box problem?
Kumesh Aroomoogan: We think of it as basically merging humans with computers and just making humans superhuman and making all of us more efficient at the end of the day. Because, at the end of the day, you don't want to spend your time doing a lot of manual grunt work. If AI can help with it, then we're all for it. But when it comes to a lot of our customers, the way they look at it is from an efficiency standpoint. So, a lot of the banks and insurance companies are using AI platforms now to figure out how they can generate more revenues faster than their competitors, how they can be more efficient in terms of cost savings and so forth. One of our customers essentially came to us, they were like, "Hey, we have five analysts right now covering about 10 companies, and we don't want to hire five more analysts to scale up and cover 20 or 30 companies. How can you help automate this entire process?" And so, using AI, you can have the same number of analysts cover four times more companies than previously. So, we're seeing a lot of these efficiency plays within organizations.
Dr Morten Middelfart: No, I somewhat agree. Solving real problems, you know, there are a lot of cool problems that can be solved, and they may be very meaningful to some. I like the simpler problems that can be solved extremely efficiently, thereby getting the technology, let's say, into more hands. Those are better because, at the end of the day, that's how change starts, how we evolve beyond where we're at. It's simply more users. I like that idea.
Josh: I think the theme we're covering across there is that effective machine learning isn't replacing the human; it's augmenting the human and supercharging that personality. One of the perspectives people keep talking about, right, is how that levels the playing field for startups. Companies that don't have 30-person sales teams can now have a single AI write 500 different sales approaches, all customized to the outcome you're trying to achieve.
How do you guys see that disrupting that type of AI startup space?
Dr Morten Middelfart: I can speak to that, because I've started AI companies, social media companies, and security-and-AI companies. The impact of the individual has never been bigger, because of the digitizing of the entire world with the internet. So, everything is reachable. At the same time, the cost of computing has gone down, and the availability of resources and all of that has gone up. So, you can compress an extremely impactful organization into your pocket, or you could be five guys and change the world. That power is with everybody, and I always say there has never been a better time to start a company. I started an analytics company back in the 90s, and I couldn't even afford the computer that my software would eventually run on. So, how do you debug that? I solved it by using the human principle of saying, "Yeah, that can be done," and then really working hard. But you know, it's probably better to have more dynamic, incremental scaling of resources in your organization. And I think AI, at its finest, should facilitate things like that, solving real problems at a very broad scale. Well, I'm on the side of the entrepreneurs, so I like that.
But not everybody agrees, I think.
Matt Dupree: It's a bit more of a mixed bag than that. I don't know if we disagree; maybe you would agree with this, but you know, it's kind of like when people talk about just setting AI aside for a second, people talk about just the cloud, right? And how that lowered barriers to entry for starting SaaS companies, and one of the consequences of that is that there's a lot more SaaS companies. And that can kind of lead to noise and a reaction against all of these vendors and solutions that are coming onto the market, right? Yeah, and so there could be something similar happening if ChatGPT can write 500 personalized sales messages for all of the startups that are coming up. Then people are just going to tune out the personalized sales messages, right? That's a potential danger there.
So, I think there's still, even though there's some democratization happening, there's still some kind of dog-eat-dog dynamic in business where you're looking for some edge over here.
Dr Morten Middelfart: Yeah, no, business is business. Hopefully, I didn't come across as meaning otherwise. I'm talking about how your ability to navigate your resources with real maneuverability has drastically changed since, say, the 1950s, and I did not live there, right? So, it's multiple revolutions. It's not just the world at your fingertips; now your resources are also elastic. And by the way, now I can make myself elastic in time by having AI as my superhuman ability. If that's not pretty cool, I don't know what is. So, I like that. But eventually, things have to make sense, right? You should not start doing something if it's stupid to begin with. I encourage everybody to just stop yourself before doing so. But if it is meaningful, and you can apply those forces, I think it's a beautiful time.
Josh: Interestingly, from an NLP perspective, right, that frontier has been there since the 1980s; however, we're still seeing new use cases enabled by new types of technology. It must be really interesting to see the scale and how these things are coming along.
What's your viewpoint on the current state of disruption there?
Kumesh Aroomoogan: Yeah, I mean, one of the things we're always looking at, even for startups in general, is product-market fit, making sure that there's a market and the market is large enough. There could be a lot of startups coming out, mainly because a lot of these AI tools are popping up, but then a startup realizes there's really no product-market fit, and it goes bust. For our space itself, NLP is pretty old, and now it's getting super hot, mainly because of ChatGPT and a few other things going on. But when people look at NLP, they're looking at real-life applications: how can this make a big impact on their business? Just one very quick example: we work with a very large bank, Standard Bank in South Africa. Before they even looked at NLP, they wanted to underwrite loans to small businesses within South Africa, and they just didn't have enough data points to actually underwrite those loans. These small businesses had only been around for six or nine months or so. And so, Standard Bank said, "Hey, can we start tracking all the reviews on these small businesses, trying to figure out whether people are saying negative things or positive things?" Think of Yelp reviews, which they're now tracking in South Africa to figure out, "Is this business good enough to underwrite?" Using NLP, you can start parsing those reviews, identifying whether the sentiment is good or not, and then they're literally plugging that data into their underwriting process in South Africa to underwrite loans to small businesses. It's things like that that are super innovative, and it's able to help small businesses that could never get a loan before; now AI is actually powering those things. So, it's kind of cool.
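A minimal sketch of the review-based underwriting signal as described: aggregate sentiment across a business's reviews into one feature for the underwriting pipeline. The scoring function, word lists, and review threshold below are invented placeholders, not Standard Bank's actual system.

```python
def review_sentiment(text: str) -> float:
    # Placeholder sentiment: a real system would use a trained NLP model.
    pos = sum(w in text.lower() for w in ("great", "friendly", "reliable"))
    neg = sum(w in text.lower() for w in ("rude", "late", "scam"))
    total = pos + neg
    return (pos - neg) / total if total else 0.0

def underwriting_signal(reviews: list[str], min_reviews: int = 5) -> dict:
    """Collapse a pile of free-text reviews into underwriting features."""
    scores = [review_sentiment(r) for r in reviews]
    avg = sum(scores) / len(scores) if scores else 0.0
    return {
        "avg_sentiment": avg,
        # Flag thin evidence so a human underwriter takes a closer look.
        "sufficient_data": len(reviews) >= min_reviews,
    }

reviews = [
    "Great service, very friendly staff",
    "Reliable and great prices",
    "Delivery was late once but they fixed it",
]
print(underwriting_signal(reviews))
```

The point of the sketch is the shape of the pipeline: unstructured public text in, a small set of numeric features out, which then plug into a conventional underwriting model.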
Josh: That's crazy! I'm gonna think twice before writing on TripAdvisor next time.
I think the point there was really interesting: data still seems to be king. One of the challenges that blocks those types of small startups from being able to compete is access to data, how much data you have. Looking for that type of real-world evidence in the public domain, things like Google reviews, I think scraping those types of datasets has become really powerful. I know, Matt, you're working a lot in the space of helping people wrangle that data and lower those kinds of complexities. Particularly across the medical and scientific fields, that's always going to be a challenge because of the domain knowledge required.
What other blockers do you guys think startups are going to hit when they try to scale against enterprises?
Matt Dupree: Yeah, it's a good question. You already touched on the collection of data being a bottleneck in building quality models, and a disadvantage that startups have versus incumbents is that they don't have a product that's already being used. Andrew Ng has talked about how larger enterprises will launch a digital product and leverage their distribution just to get data for a machine learning model. They already have a large presence in the marketplace; they can use that to collect data and feed it into a model. If you're a startup, you don't have that, so there's this chicken-or-the-egg problem: if you want to build a machine learning-powered product to offer something unique, you need people using it first to get the data. And so, you know, getting out of that Catch-22 is...
Dr Morten Middelfart: It's not that I disagree, but I also want to stress that there are two sides to this, right? Yes, some businesses will find that it would be nice to have more data, and maybe somebody could give it to them, but a lot of data has also become free and publicly available. A great example is one of our recent projects, where we wanted to fine-tune and optimize classification of biopsy images. We had free data in abundance available to prove a point with that algorithm and then benchmark it against DenseNet, and we could say, well, this is what we can do, this is what they can do, and, by the way, ours is also explainable: this is why it said malignant, right? All of that is possible with free, downloadable data. A few years ago, you had to own a hospital system, or be a hospital system collaborating with other hospital systems, to have the same amount of data.
Matt Dupree: Absolutely, yeah.
Dr Morten Middelfart: So again, I feel the world is more open than it is closed. I feel there's more data available than unavailable. In fact, if you're an entrepreneur and you're building a business where you start out by saying, "I need money, I need data," then I would encourage you to go back and work on something else.
Matt Dupree: No, I don't disagree with that; it's a good point. Just thinking about roadblocks, right, that's the topic. The other thing, and you kind of touched on this: once you have the data, there's the process of cleaning it up and putting it into a form that's actually useful for a machine learning model to learn from. That can be a barrier, you know, just finding the people who can do that. There have been a few talks on Data-Centric AI, and Andrew Ng has this quote about how, right now, because there aren't a lot of tools to help with creating these models, you're relying, to some extent, on serendipity or the skill of an individual engineer to get the data to a point where a model can actually be performant. Until we get the tools to a point where that stuff is automatic or feels more like no-code, it's difficult to get a top-performing model. I think that's another one.
Kumesh Aroomoogan: Yeah, exactly. And another thing: once your company or your AI model starts making decisions for the enterprise, that becomes a roadblock, mainly because there's just so much regulation you have to go through. If you start making decisions on an insurance policy or a car loan or a bank loan and so forth using just AI, there are going to be a lot of issues. That's why a lot of explainability startups are being built, but even those are getting scrutinized by regulation. So there's still a bit of a way to go before you can actually make full decisions using AI. Right now, humans and computers need to come together and basically execute.
Josh: Just for the audience, I think some of the key themes are democratization of data, data access being incredibly important to model quality, and something we all probably already knew: no machine learning model is always right; you've always got to account for the gap. One of the pieces we keep thinking about, right, is this replacement of jobs, replacement of roles in the industry. Lots of people discuss concerns that their job might be automated early. A really interesting quote from McKinsey, I believe, was that they think 50% of all roles will eventually be automated by machine learning. Now, that's heavily dependent on where you sit. If you're a doctor, I personally wouldn't want ChatGPT seeing me right now.
But what do you guys think about low hanging fruit for jobs to be automated? What do you think about that 50% target being automated?
Kumesh Aroomoogan: Yeah, I think more people are going to be needed to operate the systems, even the no-code systems and AI systems and so forth. When I think about it, in a call center you've got people picking up the phone, answering customers and so forth, and those roles are being automated with chatbots and synthetic voice. So, yes, that's happening, but there need to be people working the tools. I think there are going to be more jobs on that side and fewer jobs in the manual, mundane tasks.
Dr Morten Middelfart: The other thing is, I just think it's all going to cancel itself out; it's going to balance itself. We don't need to plan how the world is going to be if we allow these technologies to slowly but surely penetrate, and, by the way, maybe even create AI without having to go through all of the encoding you guys are talking about. That ease of use, getting it out there, is much more important. And we've all got to eat, right? Somebody's going to find a way.
Yes, yes, well, I don't worry, because that guy's also got to eat, so think about him, right? Somebody having that opinion and putting a doomsday prophecy out, that's also a job. I think the person who does whatever ChatGPT can do is way better off finding another job somewhere else and getting some food. So I think the discussion of what technology is going to mean is always going to be negatively overrated. If you want to think negative, fine, but it could also be very positive; it could be liberating. And I'm much more for that direction than anybody saying, "Oh no, we're gonna have to figure out what to do." We will figure it out, McKinsey. Thank you.
Matt Dupree: I think it's really hard to make these sorts of predictions. Again, setting AI aside and just looking back: there was an economist, I think in the '20s, who predicted that in a hundred years we would be working 20 hours a week, that we wouldn't work 40 hours a week anymore because of all the advancements in technology. Setting aside AI, computers, any of that stuff, he said, "Look, we've got factories, we've got all this crazy stuff; there's no need for people to work 40 hours a week in 100 years." And obviously, that hasn't exactly happened.
Dr Morten Middelfart: We will figure out something to do.
Matt Dupree: Well, yeah, there's this term, I think it's like a hedonic treadmill, this idea in psychology that you're never quite satisfied with what you have; there's always something a little bit more, and that desire may kind of keep us going in terms of working even though we've automated a lot of what we do. So, this is a long-winded way of saying I don't know, and people have been predicting the end of jobs for a while.
Dr Morten Middelfart: May I offer a perspective here? One thing we can rely on is the moment right next to this one. Somebody's gonna say, "I don't like that, now I'm gonna do this," right? And that's what I mean by balance; things are gonna balance to an advantage. It may not be something you feel right now, but it will be later, and you can rely on that part of nature, of human nature, to play its role. It's not that somebody is going to sit there, lose their job, wait for Elon Musk to come and carry a sink into their office or something, and then say, "Well, now I don't know what to do." No, they will make decisions every day individually, as everybody else will. The world will unfold the way it's supposed to, and it is impossible to predict. And I do think it's going to be good.
Matt Dupree: Yeah, I think we can maybe leave that as a counterpoint that's kind of interesting, but I'll leave it alone. Yeah, ask me about it after.
Dr Morten Middelfart: I can support that with the quantum answer. If you want to find something negative, you will find it.
Matt Dupree: Well, sure, yeah, fair enough.
Dr Morten Middelfart: That's more real than you think.
Matt Dupree: The other part of your question, though, is about that 50% target, right? From McKinsey. To me, that feels pretty optimistic. One thing that's kind of interesting, if you look at the history of AI, is that we've been promising self-driving cars for decades; that's not new. We're in this second age of AI. The first age was in the '80s, when we were really focused on symbolic AI and expert systems: a lot of basically writing code, right? If-then statements. Now we have this more statistical approach; that's the second age we're in. And I think that until we find a way to marry the structured, symbolic approach with the statistical approach, we're not going to get anywhere close to that 50% number. Steve was up here just talking about how AI is still missing a lot of things, and what it's missing is that structural, symbolic side. A really good book on this is "Rebooting AI," and also "The Book of Why," which is on causal inference, written by Judea Pearl, who won a Turing Award. He basically shows that there are fundamental mathematical differences between statistical approaches to the world and causal approaches, or structured knowledge.
If we figure out a way to marry the statistical approach and the structural approach, then maybe we can get to that 50%, but we're not there yet. We're starting to see it, and I'll end on this. We're starting to see it a little bit with chat systems. I was at a conference a month ago where people were training these AI models to respond to customer requests. But sometimes the models say things that are silly. Somebody asks, "What time is the store open?" and it gives an answer that's just flat out wrong. So machine learning engineers are layering rules on top of these models to block those kinds of erroneous responses. We're starting to see the marriage of these statistical and symbolic approaches, but we're not really close. And until we get there, I don't see that happening.
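The pattern Matt describes, a symbolic rule layer intercepting a statistical model's answer, can be sketched in a few lines. This is a minimal illustration, not any specific product: `fake_model_reply`, `guarded_reply`, and the store hours are all made-up names and values for the example.

```python
# A hypothetical guardrail: for questions with a known-correct answer,
# a deterministic rule bypasses the statistical model entirely.

STORE_HOURS = "We're open 9am to 9pm, Monday through Saturday."

def fake_model_reply(message: str) -> str:
    """Stand-in for a statistical chat model; it may answer confidently but wrongly."""
    return "The store is open 24 hours a day."  # a plausible-sounding wrong guess

def guarded_reply(message: str) -> str:
    """Symbolic rule layer wrapped around the model."""
    text = message.lower()
    # Rule: store-hours questions get the known fact, never the model's guess.
    if "open" in text and ("time" in text or "hour" in text):
        return STORE_HOURS
    return fake_model_reply(message)

print(guarded_reply("What time is the store open?"))  # the rule answers
print(guarded_reply("Do you sell socks?"))            # falls through to the model
```

Real deployments use far richer rule sets (intent classifiers, response validators, blocklists), but the shape is the same: the symbolic layer gets the last word on cases where being wrong is unacceptable.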
Josh: Anything to add?
Matt Dupree: No, that's good
Matt Dupree: Sorry
Dr Morten Middelfart: I do think that innovation is really a combination of things. It's not just one certain thing that solves everything. Typically with any innovation, and this is what I mean by the world finding balance, you solve the problem right next to you, and once you're there, you solve the next one next to you, and that's how progress happens. Don't be religious about the way you solve it in terms of technology, as long as you're not hurting anybody or, I guess, evading taxes. As long as you're not doing that, you're fine. Just solve the next problem, make a business.
Josh: Just to summarize for the audience, because there were a couple of really good points there: I think there are a lot of strong expectations versus harsh realities in AI right now. One of my favorite quotes is, "The Jetsons still have to drive their own cars," and it's a really good point. There's an interesting paradigm shift here, where people will move from working hard to working smart as an ideal, which I think combines with concepts like four-day work weeks and with where careers and technology are moving. I think one of the first jobs that will end up being automated is probably mine, sitting here on a panel asking people questions.
I really want to dig into that tax point, because I think one of the big gaps around artificial intelligence adoption is: what do we do if everyone stops working? An interesting fact to consider is that something like 6% to % of people in the US drive a vehicle for a living. Self-driving cars obviously present a concern for those people. We talk about retraining and enhanced education as a pathway. But the really interesting point is taxes and filling the fiscal gap: if people stop working on hard manual problems because those have been automated, how do we replace the GDP drop, to take it from a financial perspective first?
Kumesh Aroomoogan: Yeah, in terms of that, I'm not too sure. Right now in financial services we're seeing a lot of jobs being automated, a lot of manual stuff being automated, but also a lot more hiring on the technology side at firms like JP Morgan or Goldman Sachs. They laid off a lot of their workers, but they're increasing their workforce in data science and technology, mainly to operate a lot of these tools they're investing in. They acquire a lot of technology companies now as well, and they need people to actually operate them. So in terms of the GDP gap, I'm not too sure, mainly because it's just going to be a balancing act at the end of the day.
Matt Dupree: It's a hard question. I, yeah, I don't know, it's a bummer. You get on a panel and you want somebody to have a hot take or try to predict the future in a bold way, but it's tough to say what's going to happen. I guess I'll keep hitting on the point that it could happen any way. We're at a conference where we're excited about innovation, we're optimistic, we're tech optimists, but it could unfold any way.
There's, I really like, I don't know if people like sci-fi here, of course you do. And the sci-fi that I really like is what you might call realistic sci-fi. So Star Trek, everything's great, right? It's kind of utopian. There's conflict, but it's like there's a nice little bow on it at the end of the show. But there are other kinds of sci-fi that can be instructive as we think about what the future could be like. So an example is The Expanse. I don't know if anybody likes The Expanse, but there's a huge yes, all right, people are hyped about The Expanse. But there's a whole subplot in The Expanse on this exact question about what do we do when people stop working? How do we ensure equity, you know, stuff like that?
I think that it's useful to think about those alternative possibilities as we work. I don't disagree with you, I think it's great to be optimistic and to work on the next problem, but I think at the same time, thinking about the different ways that it can unfold is useful.
Dr Morten Middelfart: Yeah, I'm not going to say anything better than that. Again, what's important to remember, and I'm most certainly a capitalist, is understanding how pivotal you are when you're in full balance. Just imagine a lot of people who understand everything about themselves, having the time on their hands to not be held accountable to a clock. Suddenly they understand things; maybe they can navigate themselves in ways we've never seen. We're looking through the lens of where we are right now, and then we assume that they need this amount of money, right? Who says that? What if they have time to grow a nice garden? A lot of things could happen. But if we allow balance, that will allow for greater mobility to change direction, if that makes sense. I didn't mean to get philosophical here either, but I think these are like natural laws that will play themselves out, and quite honestly, that's why we're here, right?
Josh: I think it's really interesting, actually. One of the conversations Mark has quite a lot is: how do you tax an AI? I believe the idea was originally proposed by Bill Gates. He said that if you take a worker out of the workforce, the AI you replace them with should pay the same rate of tax. That's quite an interesting concept, and I think there are loads of dynamic solutions for how we replace people with automation. I don't know what your take on that is, guys.
Matt Dupree: Yeah, I think that's really clever. You mentioned education as important: there are a lot of people driving, and if we're going to replace them, they need a way to be reintegrated into the workforce. I see the taxation play as the same genre of solution, easing the transition to a more AI-powered world in a way that isn't so shocking. Setting aside AI and just looking at history, the shift of the American economy from manufacturing to a service-based economy was traumatizing for a lot of people and, in some ways, led to the political dynamics we see today. If we could avoid repeating that kind of trauma as we retool the economy, by leveraging these clever solutions, that's great. So yeah, I think it's a neat idea. I hadn't heard about it.
Kumesh Aroomoogan: There are a lot of businesses that are not going to like that.
Matt Dupree: Now we're getting a little philosophical here, talking about capitalism a bit, but what Bill Gates is proposing is not exactly a free-market solution, right? No surprise there. But I think there is space for intervention for those sorts of things.
Josh: I think that's actually a really good point to turn over to the audience if you have any questions. And if anyone wants to, oh right away, right there in the front.
Audience: Hey, uh, oh, that was loud. All right. So when you think about some of the more popular examples right now, like ChatGPT, GitHub Copilot, DALL-E, things like that, the data source, the training set, is crowdsourced from what's publicly available. But I don't know that the people creating that content, whether it's a blog where they're intending to draw an audience, or an open-source piece of code where they're expecting attribution or a contribution to some larger project, realize that their data is being mined for these commercial efforts. So I'm curious what your take is on how to address the controversy around permissioning and copyright for that content.
Matt Dupree: Great question! You told me something interesting about this, like the differences between the art world and the pharma world in terms of intellectual property. I don't know if.
Josh: Yeah, I think it's inconsistent, right? A lot of the training data we're using to build machine learning models is very industry-specific, and every industry looks at trademark and copyright differently. One of the biggest inconsistencies I see, coming from a healthcare, pharma, and drug discovery background, is that US intellectual-property rulings have determined that art generated by things like DALL-E can't be trademarked or copyrighted. Yet if you look at pharmaceutical and healthcare companies who are literally investing billions of dollars into creating drugs, compounds, and diagnostics, their intention is entirely to patent, copyright, and trademark those large-scale investments. So again, I think there are strong expectations and harsh realities coming for AI. To be honest, I don't think it's bad actors; I think it's the result of scaling too quickly. We're not thinking about the steps we're taking as we grow; people are deciding to move a massive amount of capital into these types of investments as alternative ventures overnight.
I don't suppose you guys have anything to add to those types of situations?
Kumesh Aroomoogan: Yeah, this has been going on for a few years. We work with some of the quantitative hedge funds that do algo trading, and essentially what they're doing is scraping a bunch of different news sites and blogs, quantifying those articles, and plugging that analytics into their models so they can do auto trading, and they're making a ton of cash off content that you're actually writing. So maybe you've discovered some really cool news event and published it on your little blog, and they're able to pick that up and make a bunch of money off it, and you're not getting anything in return. It's a little difficult to figure out how you can get compensated for those kinds of things. One thing some of the hedge funds look at is the robots.txt file of the website, which sets the rules for whether you can actually crawl that content and monetize it. If it's disallowed in the robots.txt file, then it's off-limits. But a lot of people don't realize that, and a lot of people are just monetizing everything you put out there.
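The robots.txt check Kumesh mentions can be done with Python's standard-library `urllib.robotparser`. The robots.txt content, crawler name, and URLs below are invented for illustration; a real crawler would fetch the file from the site instead of parsing an inline string.

```python
# Minimal sketch: decide whether a crawler may fetch a page,
# according to the site's robots.txt rules.
from urllib.robotparser import RobotFileParser

# A made-up robots.txt: everything is crawlable except /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("MyCrawler", "https://example.com/blog/post-1"))   # True
print(rp.can_fetch("MyCrawler", "https://example.com/private/data"))  # False
```

Worth noting: robots.txt expresses crawling permissions, not copyright. A page being crawlable says nothing about whether its content may be monetized, which is exactly the gap the audience question is pointing at.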
Josh: All right, one more question right here.
Audience: So, I have two questions: one's pretty simple, one's more philosophical.
The simple one is, obviously with ChatGPT and others, you can see a clear replacement of call centers and those types of positions, but the next step would be maybe financial advisors or medical professionals. So, the simple question is, do you think the consumer has a right to know if they're talking to an AI or to a person who's an expert?
My more complex question is this: I think it was John Maynard Keynes, the economist you were thinking of, who famously got it wrong that we'd only be working 15 hours a week. Do you think there's a real risk that, yes, we may not lose as many jobs, but the jobs people have will become more meaningless, where they'll just be watching the AI do the job? Is that really a fulfilling job to have?
Dr Morten Middelfart: For the first question, I would say, no. For the second one, I believe again, things will, people will not, they will fundamentally take what's available and then improve. I don't think we'll just be feeding a computer. It's kind of like what they said also about the first computers being there, "Oh no, now it ends. Now we're just going to be slaves to the machines." But I think it will free up time, and that's something that I feel is completely overlooked. We think about it, "Oh, but then I'm not gonna make so much money," but what if you just doubled your time? Time is worth more than money, not just because of exchanging time for money, which is actually entrepreneurially a bad idea, but if you exchange the freedom that comes with time to expand your knowledge, then that will be much more valuable long-term. So, I'm not so concerned with people getting bored, people not doing the right things, people being slaves. I'm much more concerned with those people who actually think it's their job to regulate and try to force things into boxes because I think there will be less of that in the future.
Kumesh Aroomoogan: For question one, which is the only one I'll tackle: you have to disclose whether someone is talking to a robot or not, because it's a big issue. Right now we are working with a few wealth managers to figure out how you can get ChatGPT to better answer some of these financial questions, because ChatGPT is very general and it's not trained for financial services. You need a lot of training data to train it for financial services, by plugging in a bunch of 10-K and 10-Q filings and so forth. That's being worked on right now, and wealth managers are looking into it at the moment and are going to propose it to their clients, but they have to disclose it, saying, "Hey, you're talking to a bot," or else there are just going to be a lot of regulations there.
Matt Dupree: I think, more generally, the right to know whether you're talking to a bot or a person is undergirded by the difference between the performance of the model and a real human being. If they perform the same, who cares, right? But what we're worried about is: is this company basically being cheap, and instead of hiring a person to give me the good stuff, they're going to have a bot do it? Think about the Ford Pinto. Ford knew there was a problem with the Pinto, and they said, "Well, it's going to blow up on some people, but we'll settle; it'll work out in the average." That's the thing you want to guard against with some of these machine learning models: companies deploying models that, in individual cases, are disastrous, but on average save them money. So I think you do need some sort of intervention to guard against that, and the intervention is for people to have the right to know what they're speaking to. As time goes on, that right may be less important, because the performance gap may not be there as much.
Second question, oh, we're out of time. Okay, for the second question, I like sci-fi, so I'll mention a few other fun things. As far as whether the work will be boring: there's a short story, I think it was on Radiolab, the NPR show, about a kind of semi-dystopian future where there's only one job left. It's called "The Last Job." This guy's job is to make sure that nobody works, because humans will mess it up. There's a group of humans who rebel against this; they feel that to work is to be human, that you have to be able to do it. So they have an underground operation, and the story ends with him shutting them down. Then his job is done, and he's like, "Great, there's no more work to be done," and then he finds himself working; he joins the rebellious movement unwittingly, because it doesn't go away. I thought that was provocative. The only other thing I'll add about boring jobs is that it reminds me of "Brave New World," where, if you recall, they unfortunately genetically engineered humans to enjoy certain boring jobs. I'm not saying I think that will happen, but looking at the different permutations of the future through the lens of sci-fi is interesting and can help us be prudent as we march into the future.
Josh: I think we'll call it there. I just want to say thank you very much to the panel. I think a huge round of applause.