Episode Transcript
[00:00:00] Speaker A: Welcome to the Victory Show.
[00:00:14] Speaker B: Hey Victors. Welcome to this episode of the Victory Show. If this is your first time joining us, I'm Travis Cody, bestselling author of 19 books and the creator of Bestseller By Design.
I've had the privilege of helping hundreds of business consultants, founders, and entrepreneurs write and publish their own bestselling books. And through that journey, I've discovered a really fascinating pattern: most businesses really struggle to break past that elusive seven figures per year in revenue. So on this show I sit down with some of the world's most successful founders, CEOs, leaders, and business owners to uncover the strategies they used to scale way past that mark, so you can do the same. So get ready for some deep insights and actionable takeaways that you can implement in your life and your business.
Starting now. Today's guest has spent his life at the intersection of math, machine learning, and strategic innovation. Naz Qadri is the CTO and a partner at Princeton Equity Group, where he leads the firm's AI, data, and technology strategy, bringing the power of intelligent systems to private equity at scale. Here's the thing: he's a mathematician by training, with a master's degree from the University of Warwick. Naz began his journey as the co-founder and CTO of Back Chat, an early pioneer in continuous voice recognition tech.
A full decade ahead of today's conversational AI. So he's at the cutting edge, and we're going to have a fantastic conversation about that. He then went on to build an extraordinary career across global finance and fintech, holding leadership roles at Goldman Sachs, UBS, Morgan Stanley, and Bloomberg, where he led the creation of enterprise data science infrastructure. At J.P. Morgan, as a managing director, he architected the data and AI foundation for Fusion, the firm's flagship buy-side analytics platform.
And despite all these executive roles, Naz remains a hands-on technologist, still writing code and teaching pragmatic AI and machine learning as an adjunct professor at Northwestern University.
His lifelong pursuit has centered on a powerful theme, using intelligent systems to amplify human decision making.
Naz, thank you for being here. I gotta say, it's rare, it's rare that we get a founder that's still got his hands on the code.
[00:02:25] Speaker A: I think it's critical, and we'll probably go more into this as to why, but the technical distance between the keyboard and the PowerPoint presentations in the boardroom is one of the fundamental issues I think people are going to face.
[00:02:39] Speaker B: Oh, that's funny. Now see, I'm creative, right? So I made it through math, you know, I can balance my budget and my checkbook, but I think of mathematicians and I think Good Will Hunting, I think A Beautiful Mind. And I'm like, my brain just goes, does not compute.
[00:02:55] Speaker A: You know the funny thing about the movie Good Will Hunting? My wife won't watch it with me, because my complaint is always that the whiteboard is not big enough for him to have finished that.
[00:03:06] Speaker B: Like, he needed a whole room for that equation.
It's movie math. It works different.
So when did you realize you were good at math? I guess, when did it occur to you that math was a viable path to getting you into a better lifestyle?
[00:03:26] Speaker A: And I'll be completely honest, through this entire thing, I don't think that was ever the realization.
I think what it was is that I once was, and you once were as well, a young man, a boy, and we are preternaturally lazy.
I decided to take on the things that I found easiest and were the least amount of work for me. I just happened to be lucky that those were actually good choices career-wise.
[00:03:56] Speaker B: That's what I did with writing. I'm like, this is so easy. I'm going to do that.
That's fantastic. Now, you grew up in London?
[00:04:06] Speaker A: Actually, a town about 100 miles north of London. It's called Leicester. I think it's actually very cruel, because making a child spell that is the first intellectual test you have to pass.
But yes, I spent the formative years of my life in that town.
[00:04:22] Speaker B: I was going to say you look a little more tan than the pasty Londoners. So did that also make you stand out a little bit as well?
[00:04:30] Speaker A: It's interesting. So my family actually emigrated from East Africa in the 70s.
[00:04:36] Speaker B: Oh, wow.
[00:04:36] Speaker A: I don't know if you've ever seen the movie The Last King of Scotland, with Idi Amin, who kind of loses his mind in Uganda. Well, they came from Uganda and Kenya in the 70s. Started from scratch again.
And that's where I was born.
[00:04:52] Speaker B: Oh, that's amazing. That's fantastic. All right, so we were having a conversation earlier where you were saying, and I didn't know this about the UK, that it used to be sort of a meritocracy-based education system. So it didn't matter your background; depending on where you were at, you could end up in the best school. And that ended up happening for you, right? You said you felt like you came from the wrong side of the tracks, but because you were good with math, it gave you the opportunity to go to, I'm assuming it was Warwick that you ended up going to?
[00:05:22] Speaker A: Yeah, I went to Warwick, and it is a meritocratic system: you can get from the poorest neighborhoods to the best schools. You do need a little luck along the way. You do need some good teachers who understand that system and can guide you and direct you in that right direction. But, yes, the UK does make it possible for virtually anyone to get to virtually any school if they have the appropriate background. And those great teachers are the ones that find you out and point you in that direction.
[00:05:55] Speaker B: So, you know, what's so interesting is I grew up in a really small town in Utah, 2,000 people, and I don't know why, but somewhere in my young age, Oxford was where I wanted to go. But, you know, I never applied, because I convinced myself that Oxford would never want a farm boy from America.
And then later...
[00:06:24] Speaker A: I didn't think about Warwick at first. Warwick turned out eventually to become one of the best schools for mathematics in the uk.
[00:06:30] Speaker B: Yeah.
[00:06:31] Speaker A: But originally I was thinking about Oxford and Cambridge, but didn't think that I could do it. They have an entrance paper just for those two universities, and one of my teachers actually gave me the entrance examination but hid what it was, and I managed to pass it.
It's more that mental block. Had I seen Cambridge at the top of the paper, I wouldn't have thought I could do it. But not seeing that, it was just questions.
[00:06:57] Speaker B: Yeah, well, for me, I think, when I got into my 20s, I ended up having a teacher in college who was from England, and we had a good laugh, because he was like, oh my God, when did you want to go? And I was telling him, and he was teaching at a university in England at the time, and they were actually doing an outreach, like, how can we bring some more Americans into the school? But again, right, it's the mental thing.
[00:07:20] Speaker A: So.
[00:07:21] Speaker B: And I want to talk about that, because, you know, obviously we're talking about this from an educational standpoint, but mindset also comes hugely into play in business, and startups specifically. So you did school, and then you got out of school, and you chose to go into the Marine Corps.
[00:07:38] Speaker A: I didn't choose to do that, exactly. So whilst I was at university, I had to make some choices about what the next direction was, and so on.
Now, I couldn't really think it through. I didn't have enough knowledge or network to figure out the broad set of career paths. I had a lot of buddies joining the Marines at that point.
And I was like, well, this looks pretty cool. We all worked out together.
They were going in as non-officers. I was like, well, I'm doing a degree, I could join as an officer. So in my mind, I was training for it, and that's what I was going to do.
And then I accidentally got an internship at Goldman Sachs. I honestly didn't know what the company did, and it blew my mind and opened my mind and that was a pivot. And I literally moved away from a choice of one potential career path to an industry, potentially multiple industries.
[00:08:37] Speaker B: Yeah.
[00:08:37] Speaker A: Wow.
[00:08:38] Speaker B: So what were a couple of the things that were shocking for you when you went to work at Goldman Sachs?
[00:08:45] Speaker A: Firstly, I had never seen buildings like this.
The equipment that they had in them, the number of whiteboards in different rooms, with a whole bunch of different math on each of them. There were people exploring the space of ideas, and other people implementing what came out of that space of ideas, and other people sitting there taking on risk and making money from it. And all very quickly done. It just felt like a playground.
[00:09:17] Speaker B: Wow.
Across a dozen different industries at the same time. Right.
[00:09:23] Speaker A: So they were in every single market you can imagine. This was the year 1997.
So I remember hearing for the first time that there was going to be a crash in Russia, and we were going to later have the Asian market crash. And I'm thinking, that's so terrible. But for them it was: no, we just need to figure out what you do with the knowledge of that being the situation.
It didn't really matter to them what was going to happen in the world. The question was how you deploy investors' money appropriately, use intellect behind it, and then elbow grease to actually make that a reality.
[00:09:58] Speaker B: So I'm assuming, coming from your background, the idea of, wait a minute, there's guys that are making money on the way up, and there's guys that are making money on the way down, and there's guys making money when it's going sideways. For them that was just a reality, where normal people are taught, like, buy in low, sell high, right? But then you realize, oh no, it doesn't matter which way it's going; there's ways to make money if you're smart with it. So how did that pivot your career, and how do you end up going from Goldman Sachs into the tech space?
[00:10:33] Speaker A: Yeah. So Goldman Sachs is a financial company. But you'd be surprised to realize, prior to all of the West Coast tech companies coming up in the 2000s, it was the companies like a Goldman Sachs, a Morgan Stanley, a J.P. Morgan, all those I worked for later, who were the tech companies.
Tech wasn't their goal, but it was the means by which they met the goals that they wanted to achieve.
[00:11:00] Speaker B: Sure.
[00:11:00] Speaker A: So being in those firms, you were learning multiple skills, you were understanding the domain of finance, but you were implementing it using the domain of mathematics and computing.
I actually didn't, at first. I was probably one of the worst programmers in the world when I was at university.
But there's something about being in the real life situation of having to do it, having a mathematical background and being a constant learner.
I went and learned the technology aspect of this.
So I'd always been interested in artificial intelligence.
Prior to this being a thing, we used to call it machine learning, advanced statistics; you can put whatever label you want on it. And we'd been playing with neural networks when I was at university, back in the mid-90s.
[00:11:47] Speaker B: Wow.
[00:11:47] Speaker A: So coming into the year 2000, I had a guy I knew reach out to me and he had this idea.
If you can remember what phones were like in 1999, they were nothing like the phones we have today. There were no apps. It was a flip phone, and you pressed buttons.
[00:12:04] Speaker B: Yep.
It wasn't even a flip phone. It was a BlackBerry with a keyboard on it.
[00:12:08] Speaker A: Yeah. It was a Nokia. Right. I tested everything on a Nokia 9210 or whatever it was.
And he had this idea, could we build something where someone phones a computer, talks to it, and gets connected to goods and services?
I said, well, that's impossible.
Let me think about this. What you would have to do is analyze the wave frequencies and come up with models which map them to language. And I said, okay, maybe we can do something here. And so that turned into about six months of basically living in the back of a telephone exchange with my own server, trying to build this thing out. And we largely got it working by the middle of 2000.
[00:12:50] Speaker B: Wow.
This is before all the voice-automated stuff, because everybody thought it was impossible at that point in time.
[00:12:58] Speaker A: What was interesting back then, and folks will probably remember if you're a little older like myself, you would have dictation programs on Windows 95. And before you could use them, you had to sit there and train it by reading some text that it had, in your voice, a number of times.
And then, maybe or maybe not, you know, it would try and recognize some dictation from you. What this did was different.
It was just trying to figure out what anybody was saying. So I don't know how well you know the English accents; we have many different accents. And I have buddies from all over, so I would have buddies from Newcastle, which is one of the hardest dialects to understand, from East London, from Birmingham. And I would have each of them on my phone in the pub, trying to talk to the machine to see if it would understand. Then the next day, going in and looking at what it couldn't do, and trying to figure out how we get somewhat consistent across different people without them having to do any training, so anybody could phone in.
[00:14:03] Speaker B: And how was that?
[00:14:06] Speaker A: It actually worked pretty well. Yeah.
And there were a few tricks here. We obviously couldn't do what the ChatGPT equivalents do today, where they listen to the whole thing and they can transcribe the whole thing.
[00:14:20] Speaker B: Sure.
[00:14:21] Speaker A: We would be listening for something like, "I'm looking for a Chinese restaurant in Islington."
And once we kind of pick up "looking for," we're asking: what is the descriptor, what is the noun, and what is the geographic location?
So we're not trying to understand the whole thing. We're trying to pick out enough that we can match through some heuristics to figure out what the intent was. And then we would say back to them, I think this is what you're looking for. If we get an affirmative, then we move forward. The technology and the math were simply not there back then to do what's easy peasy today.
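[To make the heuristic concrete, here is a toy sketch of that slot-filling idea. This is my reconstruction for illustration, not the actual Back Chat system; the vocabularies and the `parse_intent` function are invented.]

```python
# Toy sketch of heuristic slot-filling: instead of transcribing the whole
# utterance, pick out the trigger phrase, the descriptor, the noun, and
# the location, then confirm the guess back to the caller.
# The vocabularies below are invented for illustration.

KNOWN_NOUNS = {"restaurant", "taxi", "plumber"}
KNOWN_PLACES = {"islington", "camden", "soho"}

def parse_intent(utterance):
    words = utterance.lower().replace(",", "").split()
    if "looking" not in words:
        return None  # trigger phrase not heard; ask the caller to repeat
    noun = next((w for w in words if w in KNOWN_NOUNS), None)
    place = next((w for w in words if w in KNOWN_PLACES), None)
    descriptor = None
    if noun:
        i = words.index(noun)
        # Descriptor: the word just before the noun, skipping articles.
        if i > 0 and words[i - 1] not in ("a", "an", "the"):
            descriptor = words[i - 1]
    return {"descriptor": descriptor, "noun": noun, "place": place}

print(parse_intent("I'm looking for a Chinese restaurant in Islington"))
# → {'descriptor': 'chinese', 'noun': 'restaurant', 'place': 'islington'}
```

[The point of the sketch is the design choice described in the episode: match a few keyword slots and confirm, rather than attempt full transcription.]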
[00:14:59] Speaker B: All right, so what ended up happening? So was that with the first company, was that the Back Chat? Am I saying that right?
[00:15:06] Speaker A: It was Back Chat. The original name was going to be Digame, which is Spanish for "tell me, talk to me." We just didn't think that was going to work very well in England.
People would hear "bigamy." What is that? It was not going to work out too well.
We made a few mistakes and then we had some really bad timing. So this is the year 2000.
Everyone's getting investment for everything.
We didn't take any investment, because we didn't want to give up too much of the company. And when I reflect back, that was a mistake. We should have been willing to give up something to get something back, something serious, to get a partner. And I think that was, for me personally, one of our fundamental mistakes.
And it compounded: because we weren't doing that, I had to build the entire technology foundation alone.
Had we done that, we could have had a small group, had more people bought in. It probably wouldn't have worked out, given the timing and the markets. But I think we would have increased our probability of success had we done so.
[00:16:16] Speaker B: Did you have people seeking you out to offer funding or did you have people you knew saying, hey, you should go talk to these guys about that?
[00:16:25] Speaker A: We had one primary angel funder, but there was a limit that he could put in and he did all the seed investment.
We then had other folks who came in, but honestly, they were more vultures than they were partners.
I think it was kind of right to push them off. But I think our mistake was not figuring out how to go and find more partners similar to that angel, ones that we could have used. So, for example, he was great.
He worked at a marketing agency. We didn't have an office. I got to share the marketing agency office with all these people coming in, doing surveys and all the other stuff. So we had a place to work from.
We should have figured out how to expand in a partnership based manner. We didn't need to work that job.
[00:17:15] Speaker B: Yeah.
So then how did that play out over 2000? What ended up happening? Because you said you made some mistakes, and then you had really bad timing. I'm assuming the dot-com crash kind of crushed you.
[00:17:26] Speaker A: Well, we could see that this was the wrong time to be doing this. As soon as we started seeing that market not just going down, but like day after day, we're seeing 5% after 5% limit down.
It's like, even if we're successful, do we make any money at this point?
And at the same time, I'm seeing all my buddies in finance getting great offers to do this and that. At some point there comes a pivot point. Yeah, like, I liked the idea, I wished this would happen, but I just felt this was a good time to cut my losses.
I had intellectually succeeded at something I wanted to do. I had not business-wise succeeded at the thing I wanted to do.
But I think it was a good time to cut my losses and get back for a short time into finance before I just kind of disappeared for a while.
[00:18:18] Speaker B: So what did that look like then? So you went back into finance for a little bit. And then
[00:18:23] Speaker A: I did. So, back into finance, and I discovered contracting, which was great.
They pay you multiples of what you would normally earn to do exactly the same work.
But the thing that I did see, and there's a story behind this: I was on the subway one day with my newspaper. And I looked across from me, and it was a guy with the same newspaper on the same page. And he looked about, let's say, 30 years older than me. And he looked miserable. He looked like life had beaten him up.
And I just saw myself for a moment, looking in the mirror. A bunch of my buddies, as I mentioned, in the Marines, were coming back from Afghanistan soon, and they were all like, mate, let's go and meet up in Thailand. And I was thinking, I'm going to go around the world.
So I actually quit my job later that day, and two weeks later got on a plane to actually go around the world. But I ended up staying in Thailand for a whole year, starting up a business there and learning a lot about it.
[00:19:27] Speaker B: Wow. Isn't it interesting, like, just those moments where you have the insight, where you look at someone ahead of you and go, oh, that's me. Is that where I'm headed? So when I was younger, my mom worked in a pharmacy, and I wanted to be a pharmacist, because coming from a small farming town, the pharmacist was literally the richest guy in our county.
I was like, that is it. That's the job for me. And there was a guy who owned a local pharmacy, and he let me essentially come in and intern for him on the weekends. On Saturdays and Sundays I went in, and he paid me a pretty good salary to, you know, do stuff in the back office. And I had a very similar experience. After about eight or nine months of working there, there was one day where it was just him and me in there, and I was doing my thing, and I just remember seeing him sitting at the counter. He was just sitting there with all these bottles of pills, and he just had this little knife, and he was counting pills, putting pills from this bottle into this bottle and this bottle into this bottle. And that's what he did all day. And I just remember looking at that going, that's the next 40 years of my life.
And, like, there's no thinking.
[00:20:43] Speaker A: Having an image of Scrooge and the Ghost of Christmas Future.
[00:20:48] Speaker B: And I realized, you know. And he even said, like, when he got started, it was more fun, because the pharmacists compounded their own medicine. So you'd have all these bottles of chemicals, and you'd have to mix them up yourself, right? But then, you know, it's become so conglomerated now. Everything just comes pre-done, and that's literally all they do now: just count from one bottle to another. And so it was the same thing. Up to that point, I was getting ready to graduate high school, and all the applications that I had sent out had been to, like, top pharmacy schools.
And so now I'm starting to get these acceptance letters coming in, and I'm going, like, that's not what I want to do anymore.
And so same. I ended up having a pivot where I was just like, oh, yeah, I think I need to go this way.
[00:21:34] Speaker A: So, really good point that you bring up. I remember, as I said, having no direction but having vague feelings about what I wanted to do. And it coalesced into: I'm going to join the Marines. For you, it was pharmacy. So I do a lot of coaching with young folks, where I try to extract it out: I get that you want to do fill-in-the-blank, but let's just move back and figure out what it is that has drawn you to this thing, and then expand out that funnel, and off you go and see what's going to work.
[00:22:08] Speaker B: Yeah, that's fascinating. Well, the one thing I find fascinating here is that you just sort of stumbled into machine learning and language networks, you know, two decades before they became mainstream. And of course, now everybody's tripping over themselves about this stuff. So let's talk a little bit about your experience, just from, like, 2000 to 2020, being involved in this, seeing the potential, seeing what's coming along, and then watching, I guess, an industry stupor where everybody's just kind of like, yeah, no, that's all right, we don't need to pay attention to that.
[00:22:49] Speaker A: So let's think about this a little.
The problem, to my mind, is kind of about language and kind of about understanding.
So I never called it artificial intelligence. Way back when, it was just math. And then we take that math, we compute it out, and we see if it does the things that we want it to do. These are stochastic machines. Normally we have deterministic machines: your Outlook, normally, is deterministic. You have an email, you send it, it gets sent, and you're good. These are random machines. So what we're always trying to do is figure out how we can control that randomness and get good results.
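[An illustration of my own, not from the episode, of that deterministic-vs-stochastic distinction: deterministic code gives the same output every run, a stochastic machine samples, and "controlling the randomness" can be as simple as fixing the seed so runs become reproducible.]

```python
import random

def deterministic(x):
    return x * 2                        # same input, same output, always

def stochastic(x, seed=None):
    rng = random.Random(seed)           # seeded generator = controlled noise
    return x * 2 + rng.uniform(-1, 1)   # same input, varying output

assert deterministic(21) == 42
# With a fixed seed, the "random" machine becomes repeatable:
assert stochastic(21, seed=7) == stochastic(21, seed=7)
```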
The fundamental problem was these things were slow.
We couldn't quite get the math right. And we needed smarter people than me to make good progress in this. And it wasn't really until around, let's say, 2010, when people like Geoffrey Hinton did the ImageNet example and really got neural networks working.
We then had this geeky kid called Jensen Huang, who back in the 90s was making cards so you could play video games. We didn't realize that what those allowed was a lot of parallel computation, and parallel computation is what you need for these networks. So there were a whole bunch of things missing, and they kind of evolved in parallel until there was this point where everything came together.
So those of us who'd been kind of following this, and I was a skeptic but at the same time a user. So I would tell people, I think we can do this with, I don't know, some kind of evolutionary genetic algorithm, but it's going to be slow and it won't be great; or we can do this other thing. So we'd been watching this thing evolve, and to my mind the real time to jump on it was around 2017, 2018-ish.
I know everybody thinks about the ChatGPTs, where you talk, but there's a lot going on behind the machines. So it was around that time I was at Bloomberg, building out a machine learning practice, because I felt we were on the precipice of this starting to take off. We were building all the models behind it.
This was before the GPTs actually came out. We were doing the mathematical modeling, not the language modeling. My feeling was that language modeling was too hard to deal with; let it happen.
It would happen at some point. And it did. And then we had all the math side of it: you apply the language, and now you start to have tools that the regular user can really use.
[00:25:31] Speaker B: You know, it's interesting, because I have somebody I'm acquainted with, and, you know, everybody thinks, oh, ChatGPT in 2023, but they forget that OpenAI was 2016 or 2017.
So they had their playground for six or seven years before they really honed in and made it user-friendly.
And I just know that because I knew a marketer that was using the language stuff in 2018 and 2019. And, you know, writers are looking at him going, like, you're doing what?
[00:26:01] Speaker A: It was a little bit more than this. So OpenAI spent a lot of time on reinforcement learning. The first time I discovered them, maybe 2016, 2017, they had the reinforcement learning playground. I was like, fantastic, it's such a pain to put that together yourself. What has happened is everybody hates reinforcement learning today.
I actually still quite like it, but that's by the by.
And they realized this is not working out. Let's pivot to something else. And they did. They started doing all this transformer work.
Would it surprise you to know that what they're famous for, Google actually solved first?
Google actually came up with the transformer.
Do you know why they didn't use it?
Have a guess. Their business was made from search.
If they started to monetize this thing: hold on, is that going to cannibalize our search revenue?
[00:26:59] Speaker B: That's so interesting, because it's kind of like Kodak. Right?
[00:27:03] Speaker A: Right.
[00:27:03] Speaker B: Because Kodak was the one that had the first patents on the digital camera, and they didn't do anything with it: oh, this is going to cannibalize our film business.
[00:27:11] Speaker A: And then you've got a very similar pattern.
And then what's really interesting is the OpenAI's of the world. I mean, I've been using this tooling since 2018, 2019.
And it was the day they decided, hey, why don't we just put a web front end on this and give it to Average Joe?
It works. I mean, it will tell...
[00:27:35] Speaker B: And everybody's like, the programmers are going, that's not gonna work.
Then, 100 million users 30 days later, they're like, whoa.
Oh, that's so funny. All right, so one of the things you talk about is pragmatic AI. What do you mean by that, exactly?
[00:27:53] Speaker A: Yes.
And let me give you the pyramid I always talk to people about.
So I think at the bottom of the pyramid, you have simple deterministic algorithms.
So if I need to figure out: I've got a bag of colored balls, and I can't see into the bag, and I know each of them is a different color; how long will it take before I pull out the color that I want? You do not need an LLM for that. There are algorithms already today that go and solve that.
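[A worked version of the colored-balls question, under an assumed setup: n balls, all different colors, drawn without replacement until the wanted color appears. The target color is equally likely to sit in any of the n positions, so the expected number of draws is (n + 1) / 2, a one-line deterministic answer, no LLM required. The simulation is just a sanity check.]

```python
import random

def expected_draws(n):
    # Closed-form expectation: target equally likely in each position.
    return (n + 1) / 2

def simulate(n, trials=100_000, seed=0):
    # Monte Carlo check of the closed form.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        balls = list(range(n))
        rng.shuffle(balls)
        total += balls.index(0) + 1   # draws until ball 0 comes out
    return total / trials

# For n = 10 the closed form gives 5.5; the simulation agrees closely.
```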
Then there's the next level, where you have simple machine learning models, and we have been using these for decades. These are things like random forests and decision trees; we understand them very, very well. And then at the top, we have complex stochastic models, like a ChatGPT or whatever else it is. When I talk about pragmatism, what I'm trying to help my students do is to separate the wheat from the chaff.
You will see people today for whom, no matter what the problem is, LLM is the answer. It will always be: given a problem, LLM is the answer. Well, if I want to calculate some equation, an LLM is not the greatest answer; we already have those answers. So when I say pragmatic, I want them to understand this history, what actually exists out there that we're already forgetting about, and then to choose the appropriate tools. To give you an example from one of the things I've worked on in the past: you already told me in our pre-chat that you're not the greatest mathematician in the world, and so on.
Now there are folks out there today using LLMs to allow you to become a data scientist.
And I can tell you most of those solutions are pretty terrible because what they've not done is the basics. What you really want to do is build the machine learning solutions and then teach the LLM to be a data scientist properly so that you can go and do what you need to do. If you go to ChatGPT today and ask it to do a proper data science problem for you, it's going to have a go at a few different things.
You'll never have the confidence that this is really doing what a professional data scientist would do for you, and the same goes for a whole bunch of other fields. So that pragmatism is about knowing what we should already know: knowing our background, knowing our history, knowing the knowledge that we were built on.
[00:30:34] Speaker B: And, you know, it's compounded, because you can go to ChatGPT with the exact same question in three different threads, and it gives you a different answer every time. And then your confidence goes even lower, because you're like, no, wait a minute, shouldn't it all be the same every time? So again, right, it's the tools, right?
[00:30:54] Speaker A: If you're using, what's the consistent thing about all the answers you got?
There's only one consistent thing about and I can already tell you what it is.
[00:31:05] Speaker B: You tell me because I'm drawing a blank.
[00:31:06] Speaker A: They are super compelling, and if you do not know the domain, you'll believe them. So the way I explain this, normally to management or folks who are learning the space, is: there was a guy that you probably went to college with; let's call him Chad.
And Chad never studied a day in his life, but he just had this ability to sound like he knew what he was talking about. He could BS with the best, and that is part of what these models are able to do. No matter what you ask, the answer will sound compelling. And this actually creates a fundamental challenge for people who need to make critical decisions.
The AI will always sound like it's right. And if it's just outside of your domain, you're going to start to trust it.
[00:31:58] Speaker B: So how do we fight that? Let's talk, then, about what tools are available to the public now, and which ones are good for what. Right? What is, like, ChatGPT good for versus some of the other ones? Because it's interesting: I've had this conversation before, where everybody's trying to make ChatGPT the end-all, be-all of everything. But that's not what it was designed to be originally, right? So coming from your perspective, I would love to know: how does somebody like me, as a writer, know which tool to use for which task?
[00:32:35] Speaker A: So, and I'm going to talk about average Joe here, who is on the right hand side of the machine, let's say.
So today there are, let's say, five foundational AI companies: your OpenAI, Anthropic, Grok, Gemini and so on. It's a small handful. Yep.
All of those folks are going to be using those tools in their personal capacity because of where they are. Setting up their own local AI is a bit of a pain. It's a bit of a lift. And only folks like myself or corporations will really do that.
So given they're using those, I'm just going to say generally they are all basically the same.
They're all basically the same. I can tell you that the Nano Banana image thing is better on whichever one has that today, and there are certain things that are a little bit better here or there. But given that they're not fundamentally different from the rest, they're all, you know, much of a muchness.
The way I explain it to my wife is: you're driving to the supermarket, it doesn't really matter which car you have. It's a mile down the road; you go and get your stuff and come back. So they're all effectively the same. What's really critical for people is how they use them.
These agents do badly if you're unclear and you want them to impute your meaning. So what I will often do, and I currently have about five different agents running for me here, is I will first pick an agent that has no context about what I'm working on and say, hey, I need a plan to figure out the following thing. Don't solve it, write me the plan.
I will then look at the plan and go through it: let's change this, let's change that. Now I like the plan. I'm going to take the plan and give it to another agent, show it my documents, and say: this is what I really want to do.
I will get that agent going and running, but I don't trust that agent. I will then set up another agent, say, hey, can you keep an eye on this guy?
Here are the guidelines. So what it comes down to is we can have as many of these as we want and generally the AI companies are running at a loss day on day.
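The plan-then-execute-then-review workflow Naz describes can be sketched as a tiny orchestration loop. This is a hypothetical sketch, not his actual setup: `llm` stands in for any chat-completion call, and the prompt wording is illustrative.

```python
from typing import Callable

# Hypothetical stand-in for any chat-completion API:
# takes a prompt string, returns the model's reply.
LLM = Callable[[str], str]

def make_plan(llm: LLM, goal: str) -> str:
    # Agent 1 gets no project context: it only writes the plan.
    return llm(f"I need a plan to figure out: {goal}. Don't solve it, write me the plan.")

def execute_plan(llm: LLM, approved_plan: str, documents: str) -> str:
    # Agent 2 sees the documents and carries out the human-approved plan.
    return llm(f"Here are my documents:\n{documents}\n\nThis is what I really want to do:\n{approved_plan}")

def review_result(llm: LLM, guidelines: str, result: str) -> str:
    # Agent 3 keeps an eye on agent 2 against the guidelines.
    return llm(f"Guidelines:\n{guidelines}\n\nReview this work and flag any problems:\n{result}")

def run_pipeline(llm: LLM, goal: str, documents: str, guidelines: str) -> dict:
    plan = make_plan(llm, goal)
    # In practice, a human edits and approves the plan at this point.
    result = execute_plan(llm, plan, documents)
    verdict = review_result(llm, guidelines, result)
    return {"plan": plan, "result": result, "review": verdict}
```

The point of the structure is that no single agent is trusted: the planner never executes, and the reviewer sees both the guidelines and the output.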
So use as many interactions as you want, because most people are on a plan where I pay this much, I get pretty much whatever I want, and I never hit my limit. There are all of these strategies. One of the other ones I've found really useful is: if you have a question, do not ask for the answer.
Ask for five options, each with its probability of being correct.
This has been proven to make the LLM really reflect. You could ask me something like, which way is the market going to go tomorrow? But if you asked me and said, whatever you tell me, I need you to put $100,000 on it tomorrow and then tell me how it goes, I'm going to spend a lot more time on that second flavor of request than the first.
So LLMs don't have consequences, but you can almost give them consequences by the way you are framing the question.
[00:35:50] Speaker B: Can you give me an example of that besides the money one?
[00:35:54] Speaker A: Yeah. So let's pick.
Why did the Roman Empire fall?
And everyone's got an answer. It was the Goths. It was... you know, we've got a number of...
[00:36:10] Speaker B: Different ones: overextended, too big of an army, there's the currency.
[00:36:13] Speaker A: We can go through a whole bunch of stuff.
Go and do this later today: go into each of the top three LLMs and ask that question. Then open up a brand new tab, ask the same question, and add: list your answers with probabilities for how correct you think you are. You'll probably see that most of them now agree on the answer.
The probabilities will be somewhere in the bounds of each other, because you're now asking them to show their work, and it triggers a whole bunch of different neurons, if you like, in the way they are processing that question. They take it more seriously, basically.
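As a sketch, the "ask for options with probabilities" reframing is just a prompt transformation. The exact wording below is illustrative, not a fixed recipe:

```python
def options_prompt(question: str, n: int = 5) -> str:
    """Reframe a question so the model shows its work: ask for n
    candidate answers, each with a self-assessed probability of
    being correct."""
    return (
        f"{question}\n"
        f"Do not give a single answer. List {n} candidate answers, "
        "each with your probability (0-100%) that it is correct, "
        "and explain briefly why."
    )

# The Roman Empire example from the conversation:
prompt = options_prompt("Why did the Roman Empire fall?", n=3)
```

Running the same reframed prompt across several models, as suggested above, is a cheap way to see whether their answers and self-assessed probabilities converge.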
[00:36:58] Speaker B: That's so fascinating to me. So now I'm asking the question and saying, okay, I want three answers, but I want you to list the answers by your probability of which one you think is the most accurate.
[00:37:10] Speaker A: Yes.
[00:37:10] Speaker B: That to me is such a fascinating little tweak.
[00:37:16] Speaker A: But it's not dissimilar to, and I don't know if you have any children, but if you ask a child, especially a boy, to go and clean their room, they're just going to move one thing and then they're done. Because as far as they're concerned, they answered the question, they answered the request that you had.
But if you're like, I'm going to take pictures, Christmas presents are on the line, then it's: I gotta think about this. What's the minimum I can do to satisfy that request with those constraints?
[00:37:45] Speaker B: That's such an interesting constraint to put on it.
So what is the work you're doing with Princeton Equity Group, especially involving this technology?
I'm assuming you guys are seeing the trends before everybody else. So where do you personally feel like this technology is taking us? If we're having a follow-up conversation in 2030, what do you think the world looks like for us then?
[00:38:17] Speaker A: Obviously I can't talk about anything proprietary.
[00:38:20] Speaker B: Sure, sure, of course.
[00:38:21] Speaker A: But I can give you the worldview.
I work with some of the smartest people in private equity. These guys have a ton of experience, and honestly, half the time when they're speaking, even I can't fully grok the way they're thinking.
However, they are not data scientists. They are not abstract mathematicians. They are not... I could list a whole bunch of things out.
I want to imbue them with those skills without them really having to learn them.
The next thing is, there's a whole bunch of information out there. There is publicly available information, and then there is private information. We have amassed a lot of this.
I want to be able to connect their wants with those skills they don't have, over data seen and unseen, through a group of agents which really just solve the problems for them. So what do we do? It's kind of simple: we go and find the best companies to invest in that are going to make great profits for our investors. It's those people on the ground, not me, who go and do this. My responsibility is to empower them with tools that make them bigger, faster, smarter in a shorter space of time.
[00:39:35] Speaker B: Wow.
So from your seat, what are some of the most disruptive elements of AI, and how is it impacting business right now?
[00:39:53] Speaker A: This is a big subject, and we may not have time to get through it.
[00:39:57] Speaker B: Give me the 30,000 foot view in a few minutes.
[00:40:01] Speaker A: History doesn't repeat, but it rhymes. So if we go back to 2000: the reason for that jump up and then the crash was that the Internet was a big thing, and people were selling furniture online. We had Pets.com, we had all this stuff.
Those were the companies trying to make something out of the Internet at the time.
Honestly, most of those companies, even in that moment, didn't matter. Even Amazon in that moment did not matter. Amazon was one of the few that survived, but they went all the way up and all the way down. They just made it later.
What was really important was how real companies that do real business actually took on this Internet thing that was coming along. What could they do? Okay, we can do online sales. We can connect up different hubs, our internal people can have visibility into what's going on in the factories, and so on.
So that impulse in the year 2000 was somewhat irrelevant. It was more of a market thing, with new participants trying to make money by selling shovels or getting funded.
I think we're in a similar position here. We're seeing all of this hype and everyone's looking at the Nvidia stock price. Ignore the Nvidia stock price, doesn't really matter. What matters is the 50,000 companies or whatever the number may be in the US who are going to decide how to use AI. It's going to take them a while.
In the same way it took them a while to go from these mainframe systems to Internet based systems.
[00:41:42] Speaker B: Yeah.
[00:41:42] Speaker A: So I think it's going to be over the next, let's say to the end of this decade, that we're going to see them start to get to grips with this thing. There is no magic turn-AI-on switch that then makes money.
[00:41:56] Speaker B: No! The YouTubers are lying to me.
[00:42:00] Speaker A: Yeah, you've got an Englishman telling you that. And everything we say must be trusted.
So we're going to see this gradual thing where the companies learn how to use the tool. And what they will do is they will fail. Then they will fail less, then they will succeed a little bit, then they will succeed some more.
And this is new to them, so we have to give them time to evolve. There will be new companies coming in who are selling them the moon and the earth and the sun. It doesn't really matter. It's about implementation and it's about human adoption. And then companies will have to make tough decisions: huh, this really does work well, maybe we should slow down hiring.
Or: this does work well, and actually we need a whole new cohort of people to do that next thing we never thought we could get to, because we were too focused on screw the widget in and move it down the assembly line.
So there will be a transformation. It'll be a lot slower than people think.
And it's got to be at that mid-tier corporate level. It kind of doesn't matter what the big guys are doing; they have all the money, they have all the people. But what does that mid tier do? That's what's going to be important.
[00:43:12] Speaker B: That's so fascinating. You know, there was an article maybe three weeks ago where some company, was it McKinsey, did like an 18-month survey. I think they surveyed the Fortune 1000 companies about AI, and everyone's like, AI is critical and we need to do it. But then, with the way they've structured everything, something like 97% of the companies are not going to do it, because everyone's so overwhelmed they don't know where to start.
[00:43:44] Speaker A: And it's actually a little bit more than this. I'm not going to name any previous companies, but I've been in boardroom and near-boardroom meetings where the challenge is that senior management are older, they're older than me, and they are not technologists. So they are definitely not AI people. There'll be a board meeting or a senior management meeting, and someone will say, we're doing blah and it's not working well. Then the senior manager will say, well, why can't ChatGPT just do it? And the guy will say, I don't know. He'll go and ask his people, and if they're afraid, they'll be like, okay, we'll just get ChatGPT to do it. But remember my pyramid: they're going to start to use the wrong tools for the problem.
So people just need to get to grips with this.
If I were to say to you, your PC is not working, I have a hammer, I'm going to send it to you, fix the PC.
Well, even you would say: maybe a screwdriver, but a hammer is not going to fix it. Until people get to understand what is the family of tools we now have available to us, what is the applicability, and then the efficacy and the cost. Cost is going to become critical.
And we can talk some more about that if you want, but they've got to look at it as a business. Once they understand this new suite of tools they have available to them, once they really understand those, we could see a hockey-stick-type inflection. This is why I say it's going to take time: today they simply do not understand what they have.
[00:45:25] Speaker B: You know, it's so interesting you say that, because when I was in Hollywood, my mentor was in his 60s, and me being the young guy, I was just like, man, streaming, streaming, streaming. This is 2006, 2007, 2008, even 2010. He wrote a book about the film industry, and one of the things he said was, streaming is never going to go anywhere, because he didn't have the technology background. And in 2010, streaming sucked: spinning forever, you couldn't upload, load times were awful. He had Netflix, and it would take 10 minutes for something to load up and then keep freezing.
[00:46:08] Speaker A: Right.
[00:46:09] Speaker B: But what's interesting is, he was sort of the old guard of Hollywood, the guys that were at the top in the 80s and the 90s. And what happened? All of those guys ignored streaming because they didn't think it was going anywhere. But the technology kept getting better and better, and suddenly in 2015, 2016, everybody's like, wait, why is Netflix making more than all of Hollywood? Right? So it's interesting that that's happening in AI too.
So
[00:46:35] Speaker A: just a quick anecdote. There's a great book. It's one that I reread about every 18 months. It's called Good Strategy, Bad Strategy by Professor Richard Rumelt.
And he gives an anecdote in there about Steve Jobs. He interviews Steve Jobs and asks him what his strategy is, and I'll skip most of it: get rid of this, get rid of that. And then Steve Jobs says, and I'm just going to wait for this next big thing. The professor is like, you're just gonna wait? Yeah, everything is set up, and I'm just gonna wait for the next big thing. The next big thing was the iPod.
All the technology had come together. His strategy had put them in just the right place.
They could create the iPod.
[00:47:19] Speaker B: That's so fascinating. All right, so how does a layman, let's say a small to medium-sized business doing 5 to 30 million a year in annual revenue...
They know they should be adopting AI. Now they're hearing this, reading this chapter, and going, oh my God, we're using the wrong tools.
Where does someone start, so they can develop the knowledge to understand which tools to use for which problems?
[00:47:49] Speaker A: I'm going to give a very disappointing answer.
It's really, really, really hard. And they are going to be at the whims of the vultures who are going to come in. Some of those vultures are actually good firms and good people, but it's hard to separate them, because they're all going to sound very similar.
So what I would say to them is allow these companies to come in because you're not going to be able to do it yourself unless you have the background, the skills, the capabilities within your firm. You're going to need new DNA. So bring those companies in.
But don't buy into the hype. Be a skeptic.
Find small things you can do that can prove out their capability. A lot of these firms, I mean, it's so basic what they're doing. They're just rinsing and repeating from one firm to the next.
Find something which is specific to your firm.
That will be a small challenge to the company. Not trivial, but small. It has to be a stretch.
And make sure you don't buy piecemeal solutions. I'll give you an example. Imagine a call center and they want to AI-ify it (we need a better word for this): the people calling in speak to a machine or to chatbots to do a bunch of things. What you do not want is multiple piecemeal solutions doing 15 different things, where you pay whatever cost for each of these.
You want to think about how is something shareable, foundational.
So you almost want to think about, do I need to fix my data layer? Because once you fix your data layer, your AI layer becomes simpler.
Your AI layer can become composable. I have an agent now that knows how to resolve a customer query. I've connected it to nothing. We can internally see that it knows how to do that. Okay, so now I need a chat agent that can speak to this agent. Does a chat agent do the right thing? And it's literally about becoming an engineer or an architect. How would you build a house? I would design it first. I would put down foundations, I would put up the walls.
I don't think about the fixtures in the kitchen until much, much later on. And we can also change those. So it's really about thinking about how do I foundationally build up where I want to go and find a partner to help me do this.
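The composable, foundations-first approach described here (an agent that resolves a query and is testable on its own, plus a chat agent that merely delegates to it) can be sketched like this. The class names and data shapes are hypothetical, for illustration only:

```python
class ResolverAgent:
    """Knows how to resolve a customer query against a clean data
    layer. Connected to nothing else, so it can be verified on its own."""
    def __init__(self, data_layer: dict):
        self.data = data_layer

    def resolve(self, customer_id: str) -> str:
        record = self.data.get(customer_id)
        if record is None:
            return "No record found."
        return f"Order {record['order']} is {record['status']}."


class ChatAgent:
    """Talks to the customer and delegates to the resolver instead of
    reimplementing its logic as another piecemeal solution."""
    def __init__(self, resolver: ResolverAgent):
        self.resolver = resolver

    def reply(self, customer_id: str, message: str) -> str:
        answer = self.resolver.resolve(customer_id)
        return f"Thanks for reaching out. {answer}"
```

Because the resolver stands alone, you can test it internally before any chat front end is wired to it, which is the foundations-before-fixtures order the house analogy describes.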
[00:50:30] Speaker B: That is such a good analogy, because it explains, at least in the marketing space with smaller businesses making under a million a year, that everybody's selling them the fully built house with all of the flashy lights, but nobody's asking about the foundation.
No, no, you don't need that. Just look at the flashy lights.
[00:50:51] Speaker A: Let's just quickly talk about cost because this is going to bite everybody.
The problem with the foundational LLM companies is their business model.
So what they do is they make money on what's called a token. If I say, tell me about the history of Rome, it's going to break that up, not into words, but into pieces a little bit shorter than words. These are tokens, and you're going to pay a fraction of a cent for each of them. Then it comes back and gives me an answer. Okay, cost me nothing, it was very, very cheap. Now these companies are going to come and build you a solution. And let's say you have a database.
If this company is not very smart, what they will do is load up your entire database, send it off to the LLM, ask a question like, what is the first record?
They will send the whole thing across and get the answer back. And let's not worry about privacy for a minute. They only needed the first row in order to ask the question, but you've now paid for sending the entire thing.
So the LLM companies are losing money because their prices are so low on this stuff. But the problem is, if you don't have great controls around this, or smartness about deciding what to send and what not to send, that's where your bills start to go up. And if this is not your domain, you're like, I don't know what to do here. So you don't want to get yourself snookered.
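A rough sketch of the cost trap: using a crude whitespace token estimate (real tokenizers split finer, but the ratio makes the point), compare shipping a whole made-up table versus only the row the question actually needs.

```python
def rough_tokens(text: str) -> int:
    # Crude estimate: real tokenizers (BPE) split a bit finer than
    # whitespace-separated words, but the ratio below makes the point.
    return len(text.split())

# A made-up 10,000-row table.
database = [f"row {i}: customer-{i}, total ${i * 10}" for i in range(10_000)]

# Naive integration: ship the whole table to answer "what is the first record?"
naive_payload = "\n".join(database)

# Smarter integration: decide locally what the question needs, send only that.
smart_payload = database[0]

print(rough_tokens(naive_payload), "vs", rough_tokens(smart_payload))  # → 50000 vs 5
```

You pay per token either way; the only difference is whether something in front of the model decides what is actually worth sending.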
That's the key.
[00:52:18] Speaker B: That is.
Yeah, I didn't know that that's how it worked. But that's so interesting, right? Because, and correct me if I'm wrong here, just from my conversations with friends who are not tech savvy, ordinary people using AI, most of them are using it like Google. They're doing exactly that. They just type in their question, and they're not realizing that, if you're writing a book, it reads the whole book and then comes back with the answer, instead of just going to the specific piece.
[00:52:54] Speaker A: And this is a problem because it gives people a mental bias on what these tools do.
So this weekend, maybe me and the wife want to go and see a movie and I'll just ask whichever tool, hey, what's on in my local area? And give me the ratings. And it's great. It'll literally show me. And that's fine. Now, if you are new to this space and you run a business and you did exactly the same thing, you'd be like, oh, I love this. We should use this at my company.
But the difference between asking which movies are on locally and starting to automate, I don't know, a paint factory is very big. It's huge.
[00:53:39] Speaker B: All right. I feel like I could talk to you for hours, so a couple of final questions, and I want to respect your time here. This has been so fascinating. Most of the public-facing AI is the LLMs, right? The big five you mentioned. So in your opinion, what are those tools actually really great at, and what are they not?
What's the thing people are trying to use them for that they're just not cut out for?
[00:54:13] Speaker A: This is a moving target, and I'll give you an example. I kept a chart on this, and obviously I can't share it.
From when ChatGPT came out to about two and a half years later, I saw this much improvement. For those on the call, my hands are arbitrarily indicating a distance.
From that time to about a year and a half later, I saw the same improvement in a shorter time. From that time to the beginning of this year, the same improvement. From the beginning of this year to April, the same again. So there's this kind of multiplicative factor in how good they get, and for me that's specifically in reasoning, coding, mathematics, that type of thing.
So when you ask the question, what are they good at and what are they not good at?
They do not think. I know that Sam Altman and Dario at Anthropic are going to yell at me: they do not think. What they're becoming really good at is predicting the next word in response to the question you've asked, given context and having been trained on a lot of the information that's out there.
[00:55:30] Speaker B: Sure.
[00:55:31] Speaker A: The thing that they are not good at is being pushed out of their comfort zone.
Have a play sometime and ask them legal-but-edgy questions. I'm not going to repeat any on this call, but you can think about ways to stress them morally, ethically, in ways they're not comfortable with. And forget the moral or ethical nature of it; what this shows you is the guardrails that have been put in by the humans training the machine.
Those same guardrails have been put in in the same way when they're sycophantic to you. I will ask the dumbest question ever, and they'll say, oh my God, Naz, that's such a good question. I'm like, no, that was a really dumb question that I sent to you on purpose.
What I really want folks to understand is how the boundaries inside there actually work.
Now, we had some strategies earlier on: create a plan, get another one to review the plan (it can be cross-machine) before you get it to execute, get another one to look at the plan and the result and review it, give it back to the first one. You can bounce around all the time. They have no memory.
They're just whatever you sent in. They go and figure it out and bring it back.
Where are they not good? And this is not their fault, I do not blame the LLMs for this: it's doing things.
So have you ever asked ChatGPT? Oh, this is great. Send it to me in an email.
Yeah, because it can't send email. But in corporate systems we build tools, and we give the AIs access to the tools in order to do things.
The problem is the tool ecosystem. For those listening, you can Google MCP, Model Context Protocol.
What seems to have happened is that companies have built great software and then found some junior intern and said, hey, build the MCP for this really cool thing we built, because we want to be AI-first. It's treated like an intern task, and those MCP servers are not great, which makes the LLMs not seem great, which makes the whole thing not great, if that makes sense. There are all these weaknesses in the overall system today. Security has not really been fully dealt with. I would absolutely not send any PII or any kind of private information from within your firm to one of these companies until we can get some kind of proof, or else use open-source LLMs.
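For illustration only, here is the general shape of a tool an LLM can be given access to, in the spirit of MCP. The dictionary layout and the `dispatch` helper are hypothetical sketches, not the actual MCP SDK API:

```python
# The general shape of a tool an LLM can be allowed to call.
# Illustrative only: layout and names are hypothetical, not the MCP SDK.
send_email_tool = {
    "name": "send_email",
    "description": "Send an email on behalf of the user.",  # the model reads this
    "input_schema": {
        "type": "object",
        "properties": {
            "to": {"type": "string"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "subject", "body"],
    },
}

def dispatch(tool_call: dict) -> str:
    # The host application, not the model, performs the real action;
    # the model only emits a structured request to call the tool.
    if tool_call["name"] == "send_email":
        args = tool_call["arguments"]
        return f"sent to {args['to']}: {args['subject']}"
    raise ValueError("unknown tool")
```

The quality point in the conversation lives in the `description` and schema: if those are written carelessly, the model picks the wrong tool or calls it badly, and the whole system looks worse than the model itself.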
If anyone's keen, hit me on LinkedIn. I'll send you some links.
There are gaps here. And the thing I want everyone to take away is: this is going to be a civilization-changing technology, but it's not there yet. There are many holes. There are many people trying to make money, and good for them, and as many people trying to utilize this, as we all are, to make our businesses better.
But be cynical.
The best thing is, if you're cynical and you're wrong, it's fine. You'll still get the thing; you just waited a little bit longer. So there is great opportunity and advantage.
Just make it work for you.
[00:59:02] Speaker B: Well, it's like we said earlier in the conversation, right? Things go in cycles. When I see all of the hype about what we can do and where it's going, I think back to 1985, when Back to the Future came out. What did it show in 2015?
Flying cars, hoverboards, all this. And we get to 2015 and people are like: in 30 years, the fashion is worse, the music's not as good, the cars use way more gas. Human optimism is always sky high, but the reality on the ground ends up being very minimal. But again, in 2000, 2001, the Internet's going to change everything, you've got to adopt, you've got to do the stuff. And it did eventually, but it was really slow. In some ways you can make the argument that until we hit 2020, mass adoption of virtual work was still this weird fantasy thing, like, digital nomads, that's weird. Then 2020 happened and everybody's like, wait, we can use the Internet for this? To me that shifted Internet usage more than anything. So right now this reminds me of 2002 or 2003, with the Internet this and Internet that; now it's AI this and AI that. We're right here again, just with a different tool. And to your point, eventually it's going to change everything, but we're not quite there yet. So if somebody's listening to this episode, seeing it on YouTube or reading the chapter in the book, and they love what you're doing, how do they get involved in your world? LinkedIn? Are there things you're actively looking at with Princeton Equity Group where you're like, hey, if you guys are this type of company, let's chat?
[01:01:09] Speaker A: So, generally, no. I am not the best online marketer of myself.
I stay pretty hidden in my cave.
[01:01:20] Speaker B: Go check him out, guys.
[01:01:22] Speaker A: If anyone does want to hit me up on LinkedIn, that's fine.
But what we do is generally behind the curtain. Once we are involved in a company, if it has to be public, it is, but we generally stay, you know, somewhat quiet.
[01:01:39] Speaker B: Well, this has been fascinating. Boy, this has probably been one of the most eye-opening conversations we've had in quite some time. So again, thank you for taking time out of your day. I really appreciate it. And I feel like I want to reach out to you in a couple of years and get an update.
[01:01:54] Speaker A: We should.
[01:01:56] Speaker B: All right, take care, Naz.