Webinar Replay: AI and the Markets

June 22, 2023

The buzz around Artificial Intelligence (AI) has been impossible to avoid lately, and at WisdomTree, we believe AI could reshape every aspect of how we live, work and invest. But is the recent AI-powered boom another dotcom bubble, or the next logical step in the march of this megatrend?


In this webinar replay, Professor Jeremy Siegel, Senior Economist to WisdomTree, and Chris Gannatti, WisdomTree Global Head of Research, discuss:

  • The evolution, ongoing breakthroughs, and potential long-term economic impact of Artificial Intelligence
  • AI's increasing role in productivity, profits and other economic factors
  • The latest Fed moves, where policy may be headed and the implications for AI and the markets overall

Chris Gannatti:

Welcome everyone to our artificial intelligence webinar. We definitely appreciate your taking the time with us. My name is Chris Gannatti, Global Head of Research at WisdomTree, and I'm going to be joined by Professor Jeremy Siegel, Senior Economist at WisdomTree. We're going to be taking you through probably one of the biggest topics of 2023 so far. For the first part of the webinar, we'll ground ourselves in the language, what we're thinking about and what we mean when we say artificial intelligence. Then we're going to shift over to some of the macroeconomic impacts and Professor Siegel's perspective on the topic. So thanks again for joining us and I'll just dive right in.

So to start, broad question: what is artificial intelligence? It might be one of those times where you almost say you know it when you see it, or you know it when you experience it. There are a lot of different definitions, a lot of different meanings. A lot of different words get thrown together. Sometimes people will say machine learning and mean artificial intelligence, or vice versa. So on the right-hand side of the page, we came up with what we think is a helpful infographic to put some of the keywords out there and to start visualizing some of the relationships and some of the different structures. I would note that certain terms, such as neural networks or deep learning, refer more to the structure of how these models are run: how you do certain calculations and set up the different variables, passing information backwards and forwards to ultimately build a model that can make predictions as accurately as possible. There are certainly other types of models.

The genetic algorithm that you see on the page is interesting if you think in terms of how evolution may have worked where you have all these different iterations and some outcomes are viewed as more positive and they persist. Other outcomes are less positive and they fall away. But the key thing to be thinking about with artificial intelligence is there's a lot of data, there's a lot of computational resource and you have to run many, many times through to train the model. So you'll hear frequently the term training. And then you'll hear the term inference. This is when the training is done and the model is being used usually to predict a certain outcome or a certain situation.
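
For readers who want to see that distinction in practice, the two phases can be sketched in a few lines of Python. This is a minimal, purely illustrative example; the scikit-learn library, its bundled iris dataset and the logistic regression model are assumptions chosen for demonstration and are not tied to anything discussed in the webinar.

    # Illustrative sketch of "training" vs. "inference" (hypothetical example).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)  # measurements plus known labels
    X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

    # Training: run through the labeled data to fit the model's parameters.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # Inference: the trained model is used to predict outcomes on new inputs.
    predictions = model.predict(X_new)
    print(predictions[:5], "accuracy:", round(model.score(X_new, y_new), 3))

In real AI systems, training is where most of the data and computational resources are spent; inference is comparatively cheap and is what end users actually interact with.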

Now, there are certain topical areas that you see a lot. Autonomous driving along the left. Virtual assistants. Content recommendation. Content recommendation we've probably all experienced, because it's one of the biggest areas, possibly the biggest single area, where artificial intelligence is actually used today. It's the reason why if you log into YouTube or TikTok or Netflix, any of these platforms, you don't all see the same shows. So if there are 200 million people or a billion people using these platforms, they're each getting their own customized recommendations, their own customized set of experiences. Additionally, this is part of how you get down the rabbit hole on a platform like Meta's, where they're advertising specific things to specific people and doing that whole targeted advertising set of actions and activities. So you see some of the biggest companies in the world have made an awful lot of money on content recommendation and specialized advertising.

As we move forward, virtual assistants and autonomous driving may come more and more to the fore. And the important thing to remember is that it's not all or nothing: it's not the case that the virtual assistant comes out and can do things perfectly, the same as a human. It's not the case, at least not yet, that cars in all cities around the world can just drive themselves with no human input. But at the same time, you have these steps along the way where maybe the car can apply the brakes in certain situations to help you out. Maybe it can adjust and remind you to get back into a lane, or it can recognize certain obstacles in the path. So even if you're not getting all the way to where the computer is doing everything, the incremental gains you get along the way are very interesting and possibly quite productive.

Now, we've made a leap, and the reason we're having webinars such as this, and we're certainly not the only ones, is that last year in November something new came to the fore. What's interesting is the something new actually existed as far back as 2017 within Google. The notable thing is OpenAI was able to bring it forward in the form of ChatGPT in November of 2022, and in two months it was able to get to a hundred million users. It has basically broken through into the mainstream such that you don't need to be a technologist now to be talking about artificial intelligence. Everybody's talking about it. Everybody's experimenting with it. You see on the page some images that maybe ... and each person's going to have their own frame of reference ... maybe to some people those images look real. Maybe they can immediately tell they're computer generated. They're both computer generated. I just took them off of the site that you see below.

But the idea here is that now with generative artificial intelligence, and ChatGPT is one example of this, the systems can write full articles, they can take the laws of chemistry and physics and possibly suggest new materials, compounds, possible drugs, therapies, other things. They can create videos. They can create images. This idea of creating, going from nothing there to suggesting something new that didn't previously exist, is very interesting, and that's what's inspiring a lot of people to think, whoa, this is something different than anything we've experienced in prior years. Even though artificial intelligence has been around since the 1950s, the computational power that the world has today, and what the semiconductor firms that we all know are bringing out, is truly remarkable in terms of the gains that we're able to put forward.

So the scalability of AI solutions is important. I often think of the movie Hidden Figures, if anyone has seen it, because it illustrates the fact that at a certain point, looking back through history, there was actually a job where people would have to do math. They would have to do division. They would have to do addition, multiplication, take derivatives, calculus, all these things. If you think of that movie, it's the application of those things to the space program, getting from the surface of the earth ultimately to the moon. Now if you think today, is there any job where the whole job is to do mathematical, arithmetic-type calculations? Not really. Because even in the movie you started to see the IBM mainframes coming to the fore. So it's this idea that you take something like arithmetic, which people used to have to do, and now computers can do it and we're totally comfortable with that.

What is AI? AI is a next possible step where you take the idea of predicting a future outcome and you are getting closer and closer to a reality where more and more the computers are doing that and the people don't need to do that and maybe you're more accurate. If you're a distribution center, an eCommerce firm, maybe you can manage your inventory better as one example because you can predict what customers are going to want. You can predict what the weather is going to be in certain scenarios. Taking in data and coming up with more and more accurate predictions is really the core of a lot of what AI is seeking to do. And that's really where the world is going. Even if you think of autonomous driving, the autonomous driving systems that have the best chance of success are those that are predicting what a good human driver would do in a given situation. You can't program every scenario. The kid running into the street. The ball flying into the street. You can't know everything that could possibly happen ahead of time, especially in certain ... If anyone's been to Rome or Madrid or some of these other places.

So ultimately you have to be able to predict what a good human driver might do in a situation. So reframing a lot of these objectives as prediction problems is very valuable. And it could affect literally every industry. You might predict where an electric utility needs to do a repair. You might predict certain disease clusters or certain types of molecules and drugs that could be particularly useful. Education is being completely changed. This is an area where I remember when I was in school, not that long ago, at least it doesn't feel that long ago, sending a text message was a big thing. Recording onto a CD or a DVD was a big thing. And now you can go to the computer and ask it any question and you can get that information.

So being able to instantly access almost any information that you can imagine is becoming more and more the norm. It was certainly not the norm in the early 2000s when I was in school. And so in essence, maybe part of what education is going to have to do is think about asking better questions of these systems, because simply memorizing information may become less valuable at a certain point in the future. And if you think of the flywheel effect, there are many flywheels out there. It's a commonly used analogy. But what I like to think about is why it's so hard to compete against the Google search engine, so hard to compete against Netflix. It's so hard once you have these incumbent players because essentially they're able to move first, they capture the users, they're capturing more data, they're learning, they're improving. You have the circle that keeps going around and around. And ultimately there's a reason why, even though earlier this year we had discussions of Bing versus Google search, if you actually dig into the data and look at the numbers, Google search still has 90% plus market share. That hasn't changed much even with all the publicity. There's a reason why people use that search engine and feel like the results are better. You start sooner, you get to a better result because of that learning.

On the investment case for AI, you see a lot of activity. This is from a Stanford report that comes out every year. And it's fascinating to consider that in 2013, a mere 10 years ago, you were at 14 billion in terms of the annual investment for that year. In 2021, the peak, the frothiness of the markets that I'm sure we all remember, you had 276 billion. 2022 was a rougher year for technology-oriented investments, but still a little bit less than 200 billion in total investment. And if you look at which countries are doing the most in the space, it probably surprises no one that the United States is number one by a significant margin and then China is number two. And that's one of the more interesting discussions, maybe we'll get into it in the Q&A, of China versus the US. That is a key geopolitical rivalry and AI is one of the key areas where you see that rivalry playing out.

Now, if you think of the functional areas, the topical areas that get the most of that funding, medicine, data management, financial technology, cybersecurity, it probably makes sense that these are some of the biggest areas. Generally these areas deal a lot with data and ultimately data is the fuel for what we are talking about here. And there are also up and coming areas. I know Apple in recent weeks was talking about their new headset and other companies as well have these headsets. So if you're looking at things that maybe aren't the biggest today but could have some future potential, these augmented reality, virtual reality systems, legal technology, drones, these could be on the cusp of getting bigger if you're looking forward five or 10 years.

Now, the reason that these things are getting more and more attention, one of the reasons, is the accuracy is improving. They're getting better and better at making ... again, it comes down to predictions. If you're thinking of what you see on the screen, ImageNet, what is the prediction? The prediction is the system is exposed to an image, a picture. Remember it doesn't see the picture like we see it. It sees it as numerical values. Pixels. It's telling us the red, the green, the blue of all the various points on the screen. And depending on the screen quality and the image quality, that's how many points there are. But it's basically taking in those numbers and predicting is this a cat? Is this a dog? Is this a person? Now it's at the point where you can say it's a person riding a motorcycle down such and such street. The weather is this. It's fairly detailed what these systems can do. But you always have to come back in your mind to the idea it's taking in numerical mathematical information and forming a prediction based on it.
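
To make "the model sees numbers, not a picture" concrete, here is a small illustrative sketch using scikit-learn's bundled 8x8 handwritten-digit images; the dataset and the support vector classifier are assumptions chosen for brevity. Real ImageNet-style systems work on far larger color images with red, green and blue channels, but the principle is the same.

    # An image is just an array of pixel intensities; a classifier maps those
    # numbers to a predicted label.
    from sklearn.datasets import load_digits
    from sklearn.svm import SVC

    digits = load_digits()
    print(digits.images[0].shape)  # (8, 8): a small grid of pixel values
    print(digits.images[0])        # the raw numbers the model actually "sees"

    # Fit on all but the last image, then predict the label of the held-out one.
    n = len(digits.data)
    clf = SVC()
    clf.fit(digits.data[: n - 1], digits.target[: n - 1])
    print("predicted digit:", clf.predict(digits.data[n - 1:]))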

Now, AI gets a lot of attention based on the cool things, in my opinion, that it can do. One thing ... If there are any people in the audience that have done anything with software code, the game has changed in the sense that now what you have almost is a partner, where if you're trying to write software code and you get stuck, you can use these systems. I've listed a few on the page: AlphaCode, Copilot. But "copilot" is a very apt way to characterize it, where you wouldn't say it's replacing the human software designer or software engineer, but you could say that the copilot is sitting there ready to help anytime the human being maybe gets a little bit stuck. So it's a new resource that didn't previously exist that is making people more productive.

Fusion power is something I tend to track because I hope one day it works, even if we're not yet close. Depending on how the reactor is designed, you need to make very, very fast adjustments. And it was interesting that DeepMind simulated one possible reactor configuration called a tokamak, where you have this plasma going round and round in a circle and you need to ensure that the plasma doesn't actually touch anything, through controlling these very, very powerful magnets. So using AI to be able to adjust quickly may help us along the path to cheaper, carbon-free energy in our future.

Now, AI can actually design certain things, and one of those things is the H100 class of GPU chips offered by Nvidia. These are the newest, most cutting-edge chips that Nvidia puts out, the ones that cannot be sold into China. And when you think of what a chip is, it's tiny, tiny transistors etched into a silicon wafer. These chips fit in the palm of your hand, but they have billions and tens of billions of these transistors etched almost at an atom-by-atom scale within the silicon wafer. So it's incredibly impressive that you can even do that. But then you think: given the use case, given the power consumption, given all the variables, what's the optimal way to lay out those transistors? Because you're not just randomly etching the silicon. There's a plan here. There's a design here. And Nvidia is using AI to help improve those designs and those functionalities even today.

Proteins are one of the biggest areas of potential. If you listen to companies that are seeking to showcase what they can do in terms of artificial intelligence a very commonly cited example is the idea of protein folding or the idea of even novel proteins. And you see the example here of AlphaFold from DeepMind. You see as well what a protein looks like. So predicting the structure of a protein ... This is one of the simpler ones that I saw on the website so you can imagine this is not something for the faint of heart. It's actually three-dimensional as well. So if you go to the AlphaFold website, you can actually rotate that thing around. But if you're thinking, wow, DeepMind came up with a way to predict the structure with reasonable accuracy of 214 million of these and they released it freely available last year, it's absolutely an incredible achievement. People were working on this for more than five decades. It was thought to be potentially an unsolvable problem in computational biology.

And then there is the idea of AI interacting with human beings. Sometimes you can put that forward through games. Meta released Cicero. What's interesting is as the AI is playing games, it's influencing what humans are doing. If you think of a poker game, if you think of a game here, Diplomacy, there's information that as a player you have and without giving away the truth of what you have, you want to convince other players to do things that are going to work in your favor so you can ultimately win the game. And it's very different than say the game of Go or the game of chess where the information is all arrayed right there on the board and everybody can see everything so it's an interesting step forward within artificial intelligence.

The final section is thinking about the landscape. We tend to think of AI as an ecosystem. You see the broad categories that you have on the screen here. The software, the semiconductors, other types of hardware and the more innovative use cases and functions. And software is probably the core. So a lot of what we talk about as what is AI, it's usually some sort of a software package that somebody is subscribing to and you see some of those functions. It might be computer vision. It might be natural language processing and translation. Robotic process automation. That's more complex automation of different job functions. The software that you subscribe to is then actually going and solving a problem through predicting what might be necessary, what might be needed to be identified in the future. And these are just some other examples. Chatbots and virtual assistants in particular coming to the fore. Other types of software. So when you say what is AI? At the core, it's software that people use to do different things.

Now, to run AI ... it's our opinion at WisdomTree that it's also important to think about what it runs on, and the expression picks and shovels has been out in the space recently. You cannot just have the software existing freely on its own. There's a reason why Nvidia and other companies are coming out with these ever advancing chips with more and more processing power: that allows the software to do more and more things. And what you see across the industry is more and more specifically designed chips to perform very specific functions. So it's not the case that you sit there and there's one GPU, graphics processing unit, and that's it. Frequently in the server farms that Microsoft Azure and Amazon Web Services and Google Cloud represent, you have these servers, and within the server you have memory chips, you have CPUs, you have GPUs, you have all sorts of equipment. And when you think of the ecosystem the way we think of it ... Obviously some companies tend to hog all the attention, you could say. But there are a lot of companies that need to come together for these servers ultimately to work.

And if you think of other hardware, frequently, this is where you're starting to go into the physical world. Autonomous vehicles we discussed. But robotics and industrial automation. You see surgical there. The idea of robotic surgery is another important area. Think of it as augmenting humans. It's not the case that the robots are just doing everything. It's the case that robots could be useful in helping humans augment their productivity across the board. And then finally, innovation. You have different things that we're all trying to do. We're trying to be greener, we're trying to emit less carbon. And one of the ways that you do that is you rethink how you're generating electrical power.

And if, for example, you're thinking more about wind power and solar power, you may need a smarter grid. Frequently the term smart grid is thrown around. What does that really mean? It basically means that certain parts of a given country, for example, might have a lot of sun or a lot of wind, and certain other parts might need to get that power because they don't have a lot of sun, they don't have a lot of wind, they don't have a lot of generating capacity. So if you're bringing the resource, in this case electricity, to the most valued area through using some of these systems, that's a very innovative use case and it helps us all if we're thinking about climate change, carbon emissions and the like. So I'll conclude there and pass it over to Professor Siegel to talk about some of the macro implications.

 

Jeremy Siegel:

Thank you, Chris. That was a wonderful summary. Let me say at the outset, I'm not an expert on AI. I'm not an expert on technology. I do have historical perspective, though, and I'm going to give you some of my thoughts and then some of the macroeconomic consequences of what I see, and then we can go into Q&A.

I'm a little puzzled why we use the term artificial intelligence. Chris mentioned it's really machine intelligence versus human intelligence. And in many ways machine intelligence is superior. It can do any mathematical calculation many times faster than any human being. It can win chess games, we now know, better than the greatest grandmaster, and that was 10 years ago. There are definite areas where machine intelligence trumps human intelligence. Human intelligence has a weighting system on judgment. Sometimes that judgment's good, sometimes it's not.

We learn the same way. The interesting thing about machine intelligence is how it simulates. What I learned about how it became a champion Go player is that it simulated millions of games and learned all the positions. Of course, a human being could play for many, many years and could not simulate that many games. So in many ways we learn the same way. We learn through experience, but that experience can be acquired so much faster in machine learning. The term artificial has a negative connotation. It's like there's real intelligence and there's artificial intelligence, almost like something is fake and something is not fake. I think that's the wrong designation.

Let's talk a little bit about first movers. Chris mentioned it. It's very interesting. I think back ... and almost everyone on this call probably only has to think back 20-some years, to the dotcom mania that overtook the markets in '98 and '99 and into the first few months of 2000. Again, it was going to change the world. The internet did change the world. But the first movers are no longer with us. The first real browsers were Netscape and Mosaic, not Chrome, not Microsoft's. The first word processors were WordStar and WordPerfect. It was a number of years later when Microsoft Word came in and wiped them out.

The first dominant spreadsheet was Lotus 1-2-3. Now non-existent; Excel won out, as Microsoft's Office suite overcame Harvard Graphics and many others. The first movers are sometimes no longer with us as far as that's concerned. On the chip side, remember Intel used to provide almost all our chips, and Intel is a poor second to Nvidia at the present time. Cisco was going to be the first trillion-dollar company, on the way to never getting there, and now I don't even hear what kind of role it plays in AI today. IBM at one time controlled 85% of the computer market. In fact, the Justice Department went after it because it basically monopolized all computers, and now it struggles to stay relevant.

What I'm saying is that there's always a rotation that goes on. First movers don't always survive. One could say among the cryptos, Bitcoin, I believe, was the first, someone might correct me on that, and it has really led the charge now over a period of more than a decade. Whether it might be the only surviving one is another question. But I do want to emphasize this rotation.

The picks and shovels aspect that Chris mentioned. That of course comes from the old saying that only a few of the gold rush miners struck it rich, but everyone that provided the picks and shovels and pans and gear to the gold miners made a fortune. Is Nvidia one of those players? Don't forget, when you pay more for chips, you have to make it up somewhere. You have to provide a service that somebody is going to buy in that enhanced space, and that becomes certainly very, very important.

I also just want to remind you, in terms of individual stocks, what a rollercoaster the world can be. And I'm going to go back to the dotcom world for just a moment. One of the first dotcom companies was amazon.com. It sold books online. That's all it did at the beginning. It was laughed at by many people. It soared in value. People saw the value of that and thought that it could be extended. It hit its high in the first few months of 2000, then started its way down as the bubble broke. Again, it was not making money. It was selling at a multiple of hundreds of times sales, not times earnings. When it had fallen 50%, Barron's magazine came out with a well-noted cover story called Amazon.bomb: how something that everyone thought would take over the world was a joke. The stock went down another 90% from that level and the writer of the Barron's article looked like a genius. I think Amazon went down between 90 and 95% between early 2000 and 2002, to then become what it is today. Not quite Nvidia or Apple, but one of the big seven driving the market.

So when one thinks of investment themes, one also has to think of the cycle that goes on. You were a hero by being negative and then you were a goat by being negative. You were a hero by being positive for quite a while and then a goat by being positive. So whenever we think of any company in the space, one should go back to the ups and downs of ... I mean Apple could be the same thing. It withered on the vine for quite a long time before Steve Jobs came back and revived that company. So again, one has to be careful.

Let me go to some of the macroeconomic aspects here. Is it quantitatively different than the continuous technological revolution that we have experienced over the last 50 years? Or some would say, is it really different than the industrial revolution which we've experienced the last 250 years in the sense of the march of technology? Is this a break in other words that is going to accelerate the replacement of people, disrupt entire industries? And I would say the jury is out on that. I may be one of the more skeptical that it is going to be a revolutionary change such as we've never seen before. However, I'm not going to deny the fascination with what it's created so far.

Now, to the extent that it can replace a lot of people ... let's go to the area of autonomous driving, which Chris mentioned. We always have Elon Musk promising that by the end of the year he is going to have a fully autonomous driving machine. It always seems to then get delayed till the next year. Chris could probably tell you more about that. How close is he in fact? Now if we do really have that, wow. That is amazing. There's a million truck drivers, there are millions of people that drive automobiles, and if I'm not using my car for the eight hours I'm in the office, then someone else can use it and I can reduce my cost by 80%. Everyone would buy a Tesla then, because the cost really would be reduced by 50, 60, 70%. No one else could compete, but he has the software to do that. In general it would mean fewer automobiles, less energy consumption, less congested highways, better efficiency for everyone.

All the artificial intelligence, or what I would call machine intelligence, replacing people who are writing memos today, yeah, they'll have to find jobs elsewhere. And the great healthcare industry really needs hands-on people. Although in medicine, diagnosis and the development of drugs still require rigorous testing, they can be aided by it. I think we all expect that probably in 20 or 30 years we're going to get personalized medical services and medicines that are really designed for us and the ailments that we may suffer. All that is certainly something that I think is going to be down the line.

I remember Elon Musk ... I'm trying to remember whether it was the interview with David Faber that was often played on CNBC or at another time, and he said that autonomous driving, with all the problems of, again, just like Chris mentioned, some kid running into the road unexpectedly, is 75% cheaper ... Excuse me, 75% safer than human driving. And I thought about that and I said, wow, that seems to be a no-brainer. Let's go do it. But then I thought of something else. If you're the one person, or the one quarter of the people, that's going to be hurt badly or killed by an autonomous Tesla, let's say, who has the deep pockets to sue? Right now it's the responsibility of the driver, the insurance company, et cetera and so on, and negotiations are made. But whoa, you have the deepest pocket around, which is Elon Musk. Wow. It's his fault. Even though it's 75% safer. Whoa. There's an interesting question about ... It isn't driver error, it's Tesla error. And driver error is limited by tort and insurance. Tesla error, you can imagine where the tort lawyers are going there.

I'm just saying, in terms of when we are going to get to some of these autonomous levels, who can we blame if they go awry? Are there deep pockets out there, and who's going to insure them, et cetera and so on. So there are a lot of questions that one has to think about. So there are many themes. It's clearly a deflationary force. Any technology is a deflationary force. Because if someone can produce a good at a cheaper rate, doesn't need to pay somebody, or can pay machines at a lower rate than human beings, it's going to be a cheaper rate. It's deflationary. The question is, is this going to be a break from history or not?

I think the jury is out. Again, we see the wonders of it. We saw the wonders of dotcom 25 years ago, being able to instantly get to a website and do things and order. It did change retailing. In-store retailing has declined. We know that. But it's been a long process. We're 20, 25 years into the future there. And it hasn't been like we're putting them all out of work and there will be no retailing in 10 years. That didn't happen. Amazon took over Whole Foods. How much has changed at Whole Foods in the six years that Amazon has held it? Yes, there are always going to be breakthroughs. I'm questioning whether it will happen as fast as some people might see it happening; history suggests that it might not. Chris, please comment. Again, you are the expert. I'm just, I guess, an old professor that has gone through a lot and tries to see a big picture here.

 

Chris Gannatti:

Professor, I'm looking through the Q&A, and we thank everyone for lots of questions. Something that leaps to mind, and it's timely because we're actually going to be together, the professor and I, tomorrow. Every Friday we do the Behind the Markets podcast, and the topic for tomorrow is actually going to be speaking with another professor at Wharton who has written about this exact question that's being asked: the expectation of job losses related to AI and certain things about those jobs. For instance, is it going to be more on the lower end of the wage spectrum? Is it going to be more on the higher end of the wage spectrum, at least on average? But professor, I'd love maybe an initial preview, even as we recommend people come and join us tomorrow on Behind the Markets.

 

Jeremy Siegel:

Well, I think here, ChatGPT and machine intelligence really affect middle-educated people. You're not going to replace healthcare workers that are, ... unless we have robots that are going to be helping people that have significant disabilities. That sector keeps on expanding as we age, and as we improve the ability to treat conditions that we weren't able to treat before. You need physical people in that. You need physical people at this particular point to run subways and to run buses and all that. Now, when you get to autonomous driving, trucks, buses and everything else, then you're getting to more of the lower end of the scale. Just to go back to a real example: when elevators were first invented in the 19th century, there was always an elevator man. There was no such thing as an automatic elevator. And then, 75, 80 years ago, when the first automatic elevators came in ... a lot of people were uncomfortable about going into them. An enclosed space where all you do is push a button and you get somewhere. Very uncomfortable for a lot of people at the beginning. Now it's everywhere. I don't think there are any elevator men or women left.

So this is the type of change that hasn't happened yet. Could it happen with a thud? All of a sudden Elon Musk says mine is ready. Will people accept it, or will it be just like the people who weren't willing to go into that elevator that didn't have a person in it? They were very frightened: what happens if it got stuck? How would I get out? We all do that today, but back then that was a big thing. Would you go into a car that you can read a newspaper in and trust that it stops if something happens? These are human reactions that do not change on a dime, and I think it's something that we have to think about.

 

Chris Gannatti:

Another question that I think benefits from, professor, your historical perspective because you would've I guess seen this from the early days, the idea that we all know of as Moore's law. So just as a review for everyone, roughly every two years, the number of transistors on a given microchip is able to double and there are certain benefits to that. The processing power you get goes up, the cost of say storing memory, the cost of a television, the cost of all these things, we've seen it go down in exponential ways. But people are debating now with the fact that you're at four nanometers, three nanometers, how many more doublings can we even get? So professor, I'd love your perspective on Moore's law and what you've seen over time.
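
As a rough back-of-the-envelope illustration of what that two-year doubling compounds to over time (simple arithmetic only, not a forecast of any particular chip roadmap):

    # Moore's law arithmetic: double the transistor count every two years.
    YEARS_PER_DOUBLING = 2

    for years in (10, 20, 40):
        doublings = years // YEARS_PER_DOUBLING
        multiple = 2 ** doublings
        print(f"{years} years -> {doublings} doublings -> ~{multiple:,}x the transistors")
    # Prints roughly: 10 years -> 5 doublings -> ~32x,
    # 20 years -> 10 doublings -> ~1,024x, 40 years -> 20 doublings -> ~1,048,576x.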

 

Jeremy Siegel:

Yeah. That was Andy Grove, was it? Who was it?

 

Chris Gannatti:

It started at Intel. So you've got Gordon Moore and Robert Noyce, and then Andy Grove was a great-

 

Jeremy Siegel:

Yeah. The first person that actually mentioned it. But the interesting thing is that that doubling has gone on for 25 years, and we've all seen how things are faster. Our phones are what? A thousand times more powerful than the first UNIVAC of 1954 that took up three rooms. We've seen that. We also see, ironically, that productivity as measured classically, output per unit of input, output per hour worked, has not accelerated over the last 75 years. In fact, if you do a least squares regression of productivity from 1947 to the present, you'll get a downward slope. Why is that?

I remember my good professor Robert Solow when I went to MIT ... I went to MIT in 1970, so computers were coming into their own well before the internet, of course. He said, "We see computers everywhere now, except in the economic statistics." We haven't seen any real spurt of productivity. Now, it makes things easier and faster, et cetera, and so maybe it gives us some more leisure time. But in terms of output per unit of input, we have not seen an acceleration. That's why this idea ... We all think something is groundbreaking. Perhaps one of the most groundbreaking things is work from home. Why were we all going to the office as much as we were when so many of us can do it from home and save an hour and a half of commute? Well, it was the pandemic that pushed that along. We all capitalized on Zoom. We're on a Zoom meeting now. Zoom was in existence beforehand. The technology was there for years, unused, and then it was pushed into use, and people said, you know what, it's effective for a lot of things, not everything.

So sometimes you get pushed off your dime by an external shock such as a pandemic that leads you to another way of doing things that you find preferable. But the long-term statistics of output per unit of input ... Listen, when I was young and growing up, three and a half percent GDP growth was the norm. Now what? Jay Powell thinks it's going too fast because it's growing ... He's even projecting ... It grew less than 1% last year and it may grow less than 1% this year. Where are we seeing that GDP increase that this technology is supposed to have given us? We're not. And again, more leisure time, which we all value, to say the least. Three-day work weeks, or three days plus two at home, whatever your company is telling you to do or you want to do, gives you a little bit more leisure time. We all value leisure. That's certainly the name of the game here.

But again, in terms of actual seismic shifts, I don't see it. When I say I don't see it, things that have been invented over the last 40 years have changed my life. I could never have written Stocks for the Long Run without a word processor. Some people can put pen to paper and do it. Me, I need to be able to do it and redo it and move sentences and cut out words, and if it was pen and paper, I couldn't read it and it would be a mess and I couldn't do it. The technology enabled me to do a book that I really couldn't do otherwise. But that technology was developed in the late 20th century, early 21st century. People were writing books before it and they write scientific books after it; it's the same in principle, it just keeps on improving. But a seismic shift at this point doesn't really seem to me to be in the cards.

 

Chris Gannatti:

A question here. Actually, speaking of Stocks for the Long Run, it's framed as: valuations may not matter in the short run, but you even mentioned Cisco in 2000. You've got people pricing Tesla based on, you might say, hopes and dreams. Shopify in 2021. There are all these examples. When you look at what's happening in the market today, is it a bubble, based on the history you looked at for Stocks for the Long Run?

 

Jeremy Siegel:

Well, first of all, it's nothing like the internet bubble of '98, '99; it's miles away from that. These are real companies. They're making profits. Their valuations, though high ... NASDAQ is selling for 30 times earnings. I think NASDAQ was selling for 100 times earnings in 2000. The technology sector of the S&P 500 was selling at 80 times earnings. The market itself was 30 times earnings. And by the way, that was at a much higher interest rate level than we have today. 10-year TIPS were going for four, four and a half percent and now they're one and a half. The competition was fierce. And that is when I wrote the article, Big-Cap Tech Stocks Are a Sucker Bet. I said, they just cannot sustain this with this interest rate structure. We are not anywhere near there.
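
To put rough numbers on that comparison, here is a quick illustrative calculation using the round figures cited above; the earnings yield (the inverse of the price-to-earnings ratio) is a common, simplified way to compare equity valuations with real bond yields, and nothing here is a valuation model or a recommendation.

    # Earnings yield (1 / P/E) versus the real yield on 10-year TIPS,
    # using the approximate multiples and yields cited in the discussion.
    cases = {
        "2000: NASDAQ ~100x earnings, TIPS ~4.5%": (100, 0.045),
        "2023: NASDAQ ~30x earnings, TIPS ~1.5%": (30, 0.015),
    }

    for label, (pe, tips_real_yield) in cases.items():
        earnings_yield = 1 / pe
        print(f"{label} -> earnings yield {earnings_yield:.1%} vs real yield {tips_real_yield:.1%}")
    # A 100x multiple implies only a ~1% earnings yield against ~4.5% real yields;
    # a 30x multiple implies ~3.3% against ~1.5%, a far less stretched comparison.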

I think we use the term bubble a little bit too quickly. I think of a bubble as something priced two, three, four hundred percent above its fundamental value. Is Nvidia? Maybe it's twice what it'll eventually be. Maybe it's half. But I see nothing like 2000, which tells me this can go a lot longer. The momentum players in the market are very, very strong and they can persist for a long time. Once they've latched onto the group that is going, they ride the train. And they're very quick to the trigger. When they see it falter, they jump off.

But while they're riding that train, nothing can deter them. Valuation doesn't make any difference at all. It's all the technical momentum that plays. We've seen it go much further than you'd think it could go. We've also seen it go shorter. Crypto had a boom too, two or three years ago. If you had crypto in your name, you went up 20, 30, 40%. That didn't last. If you had dotcom in your name 25 years ago, you also doubled in price. That didn't last, although there were eventual winners. If you have AI in your name now, you're up 20, 30, 40%. Could it be more? Absolutely. You usually have the story plus some concrete evidence, and here you had both: the emergence in November last year of ChatGPT and then the Nvidia earnings. Oh, I can monetize this. That just told the story. And that story can persist and drive it much, much further than it is now.

I'm not going to predict it. I don't know if anyone can. You have to be really nimble as a trader here. And by the way, usually we get fakes: you think it has fallen off the train, and then it surges to another high and you get back on, and at the end it falters again. You say, yeah, it faltered before, but then it didn't. It's a poker game of psychology between these players, back and forth, back and forth, many times before it finally does. Listen, take a look at the meme stocks. There's a good example. Try to play the meme stocks. They were never worth their valuation. It was all a game. Now, I'm not saying that about Nvidia or the current AI names. They've got solid companies behind them. But once you get into a certain mode, you're in a mode where it is momentum and not valuation that makes the biggest difference.

 

Chris Gannatti:

Another big topic, because anytime AI is mentioned, sometimes it's the technology, which we've been covering, and sometimes it ends up being the geopolitics. So given the benefit of the years and cycles that you've seen, professor, when you look at China versus the US, or China versus the West, today, with AI being the central thing that countries sanction each other and argue about, whether it's China placing restrictions on Micron or the US saying Nvidia can't sell certain chips, where do you think we're going on the China versus US part of-

 

Jeremy Siegel:

There are a lot of interesting questions in there. You have the pioneers, you have the Jeff Bezoses, you have the Bill Gateses, or the Mark Zuckerbergs. You have those that have broken ground with new technologies, that have changed the world and changed the way we do things. Chips didn't do that. It was human intelligence. We have to ask the question: are we going to get AI making those huge types of breakthroughs about what is possible to do? Not just improving the current, but thinking out of the box. Can it replicate human ingenuity is a good question. Now, I bring this up in terms of China because I think the United States is the most conducive environment for thinking out of the box in the world, and it has been for 100 years.

We celebrate it. Thinking outside the box, destroying any current company: if you can do that, you're fine. In other societies, including China, you don't do that as much. There's a social contract. People are employed by companies. Don't invent something that'll destroy that company. Well, that puts a restriction on you in terms of being able to innovate. Having been an academic for a half century, I've seen it: you come to the United States for innovation. Then you go back to your own country if you want to implement it and sell it, et cetera and so on. But you come to the United States for innovation, you come to the United States for financing, our ideas on financing and how you do it, and then you go to the rest of the world. So we're going to see that. Although, I think back, I guess the Sony Walkman was Japanese. That was an innovation at the time.

They also made fantastic cars, while the legacy motor companies really fell down on the job. I'm not saying others can't do better. There are some things with digital transfers and currency that are quite advanced in China. But producing the minds of Elon Musk, Bill Gates, Jeff Bezos ... Yes, they became the richest people on earth because of it, but they also broke through. Are we going to see that happen elsewhere if there's an environment for it? To do that you usually have to slay someone else. You have to step on some toes of the establishment. And some societies do not take to that as well as others do. As far as the tensions and what's going on with China, Taiwan, chips and everything, I don't have a lot to say. We really depend on China. They depend on us for their exports, obviously, for their economy. There's a lot of mutual interest. It is in both of our interests to make sure that that moves ahead peaceably.

 

Chris Gannatti:

Final question, just based on ... We've gone almost a full 60 minutes. Another big topic professor is regulation. And we've heard it before. Companies saying regulate us. You remember, I'm sure, Microsoft in the 2000s and the various things today. The government seems not to want to let certain M&A activities go all the way through. And sometimes you hear from the EU, sometimes you hear from the US. So regulation, it's a very active topic, but you wonder, do you think it's possible to have effective regulation or is it just a lot of noise?

 

Jeremy Siegel:

You need to encourage competition. That's the important thing. You have to allow someone to be able to compete fairly. That's really what America is about. You come, you show what you can do, you compete on fair ground, grounded in law. And that's really important to benefit the consumer. You don't want companies just to be bought out, squelched, and not able to compete. Regulation should be directed toward that goal. I think there's almost a knee-jerk reaction now: we're not going to let the big get bigger. Some people say if we break up some of these big firms, they actually might do better. There are some arguments for that. I remember the conglomerate phase of the 1960s. Everyone thought, oh, that's the way to go. And it fell apart because the bigger the units ... No. What happened to General Electric, Jack Welch? It worked for a while and then it didn't. If you keep on absorbing without revitalizing and making sure that the parts can compete independently, that might mean stagnation.

 

Chris Gannatti:

Well, I want to thank the professor and also thank everyone for dialing in. That basically gets us to the full hour, which is what we were scheduled for. So we definitely appreciate everyone taking the time. And if we weren't able to get to your question, or if you want to go deeper into some of the specific investments that WisdomTree offers, do not hesitate to reach out. We can always go longer, we can always go deeper on this particular topic. But other than that, take care, and until next time.

 

Jeremy Siegel:

Thank you.

This material contains the opinions of the speakers, which are subject to change, and should not be considered or interpreted as a recommendation to participate in any particular trading strategy, or deemed to be an offer or sale of any investment product, and it should not be relied on as such. There is no guarantee that any strategies discussed will work under all market conditions. This material represents an assessment of the market environment at a specific time and is not intended to be a forecast of future events or a guarantee of future results. This material should not be relied upon as research or investment advice regarding any security in particular. The user of this information assumes the entire risk of any use made of the information provided herein. Unless expressly stated otherwise, the opinions, interpretations or findings expressed herein do not necessarily represent the views of WisdomTree or any of its affiliates.

Professor Jeremy Siegel is a Senior Economist to WisdomTree, Inc. and WisdomTree Asset Management, Inc.

Christopher Gannatti is a registered representative of Foreside Fund Services, LLC.

WisdomTree Funds are distributed by Foreside Fund Services, LLC.

 

Glossary:

Deflationary: The opposite of inflation, characterized by falling price levels.

GDP: The sum total of all goods and services produced across an economy.

Treasury Inflation Protected Securities (TIPS): Bonds issued by the U.S. government. TIPS provide protection against inflation. The principal of a TIPS increases with inflation and decreases with deflation, as measured by the Consumer Price Index. When a TIPS matures, you are paid the adjusted principal or original principal, whichever is greater.

Valuations: Refers to metrics that relate financial statistics for equities to their price levels to determine if certain attributes, such as earnings or dividends, are cheap or expensive.