AI’s DeepSeek Wake-Up Call: Q&A With VettaFi’s AI Expert Zeno Mercer

The market reacted exuberantly to the Project Stargate announcement, which heralded $500 billion of AI infrastructure investment. Then investors got a wake-up call when Chinese artificial intelligence firm DeepSeek released its new AI model.

DeepSeek’s model seemingly outperforms, or at the very least closely matches, many U.S. competitors such as OpenAI, at a fraction of the cost to build and operate. According to the company’s claims, DeepSeek’s large language model (LLM) cost just $5.6 million to train. That calls into question the lofty sums U.S. tech giants are spending on computing infrastructure: the chips, data centers, and energy required to run them.

Markets were particularly roiled by this announcement because the U.S., given its large investment in AI, was thought to be far in the lead. As seen in the chart below, private investment in AI in the U.S. dwarfs China’s, outpacing it by a factor of 8.7.

Private Investment in AI

Source: Stanford.edu

Clearly, this announcement has called into question the U.S.’s lead over China in the space. Despite U.S. efforts to curb chip exports to China, DeepSeek has found a workaround, prompting a reassessment of the computing-intensive approach taken by rival models from OpenAI, Anthropic, Google, and others.

Fortunately, here at VettaFi, we have Senior Research Analyst Zeno Mercer to help us figure out the implications of these new AI developments and put them in perspective. I spent some time speaking with Zeno and excerpted some of the key points from our conversation below in a Q&A format, drawn from our AI-generated transcript.

Jane Edmondson: Zeno, one of the notable aspects of the DeepSeek model is that it uses something called a “mixture of experts” (MoE) approach. Instead of having the entire library in your model, this method segments data into smaller sections so that information can be accessed more quickly and efficiently. Is that a correct way of looking at it?

Zeno Mercer: Yes. Effectively, DeepSeek’s approach identifies, in real time, which mixture of experts should be consulted rather than pulling from the entire library. To be fair, DeepSeek is not the first to use an MoE approach; there are actually a lot of MoE models. But DeepSeek has built on that work, added a couple of new tricks, and found a way to train these new models more cheaply.
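To make the library analogy a bit more concrete, below is a minimal sketch of top-k expert routing in PyTorch. Everything in it, including the layer name, expert count, hidden size, and the choice of two experts per token, is an illustrative assumption rather than DeepSeek’s actual architecture; production MoE models add load-balancing objectives and far larger, sparser expert pools.

```python
# Minimal mixture-of-experts (MoE) sketch: a router picks the top-k experts
# for each token, so only a fraction of the parameters run per token.
# All sizes here are illustrative placeholders, not DeepSeek's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, dim=512, num_experts=8, top_k=2):
        super().__init__()
        # Each "expert" is a small feed-forward block (a shelf in the library).
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        # The router (the librarian) scores every token against every expert.
        self.router = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x):                                   # x: (num_tokens, dim)
        scores = F.softmax(self.router(x), dim=-1)          # (num_tokens, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)   # keep only the top-k experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for idx, expert in enumerate(self.experts):
                mask = chosen[:, slot] == idx                # tokens routed to this expert
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 512)        # 16 token embeddings
print(MoELayer()(tokens).shape)      # torch.Size([16, 512])
```

The efficiency gain comes from the routing: only two of the eight expert blocks run for any given token, so the total parameter count can grow without a proportional increase in compute per token.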

There’s debate around whether they actually had access to a lot of Nvidia H100 chips, along with other points of controversy. There are now potential investigations into whether Singapore was used as a sales channel for Nvidia chips to avoid trade restrictions with China. Additionally, OpenAI and Microsoft are complaining that some of their technology may have been compromised. But at the end of the day, DeepSeek’s approach is sound and more efficient from an energy perspective. We will continue to see innovations at the model level, which should keep driving down costs through reduced energy and compute requirements.

Jane Edmondson: Since the DeepSeek announcement, there have been other developments coming out as well, from other Chinese companies, correct?

Zeno Mercer: Alibaba released a very good, low-cost, open-source vision-language model. Last week, DeepSeek R1 came out. It’s a reasoning model, not a multimodal one; it’s mostly text, reasoning, and thought. Models come in many different shapes, sizes, and purposes. Vision-language models will increasingly be used in augmented reality and robotics.

Alibaba also released an R1 competitor, so even within China, the AI competition is heating up. We even saw new models come out of Berkeley that effectively replicated many of the methods DeepSeek used, also on the cheap.

Jane Edmondson: The interesting thing about the DeepSeek-driven sell-off in AI stocks is that it had a trickle-down effect on the energy and utilities sectors.

Zeno Mercer: Yes, especially utilities and nuclear, which are currently projected to get a lot of their future revenue from data centers. You saw companies like Raspberry Pi, a relatively small company with a market cap under $2 billion, climbing, because it doesn’t sell into data centers.

We were also surprised to see a company like Ambarella (edge vision computing) fall, because it’s actually a company we believe could see significant gains from more distributed, cheaper AI options and broader adoption. There is a whole segment of the AI ecosystem that will benefit from powerful, lower-cost AI, and it doesn’t care whether that AI is closed source or open source. The more AI that exists, the better off the entire AI ecosystem is.

Jane Edmondson: The market sell-off on Monday was interesting because the DeepSeek news had been out a good week before. 

Zeno Mercer: Yes, the DeepSeek news came out in January, alongside the Stargate announcement, so both stories hit at the same time and…

Jane Edmondson: And, of course, the market focused on the positive news: the announcement of $500 billion in AI infrastructure investment. Was the data center and semiconductor sell-off in names like Nvidia warranted?

Zeno Mercer: The data center and infrastructure investment story still makes sense. Maybe there will be slightly less capex, but there are many different types of AI. Some involve language and text processing. We are also going to explore the world of physics engines and drug development simulations, which will require a lot of real-time computation and inference.

If you’re trying to simulate drug design and how a drug will interact at a molecular level, that’s a bit more complicated. Deep learning models, quantum computing, and physics and chemistry simulations will all be demanding from a processing-power and energy standpoint.

We are entering a paradigm of hybrid AI; until now, AI has been mostly cloud-based. The next phase of hybrid AI, where we have embodied AI agents (think local AI operating on devices, robots, and vehicles), will still require a lot of computational power and energy.

Jane Edmondson: So, the DeepSeek R1 model most directly competes with OpenAI’s ChatGPT?

Zeno Mercer: It absolutely competes with OpenAI’s ChatGPT and its underlying models and APIs, which developers pay to access. The lower energy consumption and cost that go into DeepSeek’s model effectively undercut ChatGPT’s pricing power. That is something we’ve been saying for a while: we’ve long been watching and talking about open source and the risks and opportunities associated with the open-source business model.

Artificial intelligence is going to be a cumulative knowledge progression. Right now, it has essentially sucked up all the data from the internet; one could perhaps say much of humanity’s collective knowledge. The next step will be to make better use of what we already know, and then get into more tangible physical intelligence, which requires lots of sensors or hybrid approaches, such as simulations, to compress learning time.

Jane Edmondson: The United States had invested so much more than China in AI technology. And we had been restricting them from getting the high-powered chips so that we would stay in the lead. With the DeepSeek announcement, would you say the playing field has now leveled? Or are we still ahead in other areas?

Zeno Mercer: There are reports that Google’s Gemini 2.0 Flash Thinking model is ahead of even this latest R1 release. But again, the battle between open and closed source will be a constant struggle for the people investing a lot of money in AI technology, unless they get leagues ahead in a way that is not replicable.

It is a little bit harder to guarantee a return on investment in developing large language models (LLMs), especially now that DeepSeek has revealed the risks associated with open-source AI, and shown that U.S.-based artificial intelligence models are not as superior as we once thought. However, we certainly see the ecosystem benefiting: the picks and shovels of today and of the future tech stack ahead.

There are reasons why people and organizations would not use these Chinese models. That said, I’ve already downloaded a Llama-based version of DeepSeek onto my laptop and can run it locally.

Jane Edmondson: I hear the DeepSeek app is now the number one downloaded app on the App Store.

Zeno Mercer: I do want to quickly separate the two. There’s DeepSeek R1, the model, which is released under an open-source license and can run locally. That is not actually the same thing being downloaded from the App Store; the app is separate.
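For readers who want to try the open-weight route Zeno describes, that typically means one of the smaller distilled checkpoints rather than the full R1 model. The sketch below uses Hugging Face’s transformers library; treat the model ID and generation settings as illustrative assumptions, and note that even an 8B-parameter distillation needs a reasonably capable machine.

```python
# Hedged sketch: running a distilled, Llama-based DeepSeek R1 variant locally
# with Hugging Face transformers. Model ID and settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"   # assumed distilled checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In two sentences, why can mixture-of-experts models be cheaper to run?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```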

Jane Edmondson: But we should still get some pretty good intelligence on this new AI technology, because people are out there using it in real time and testing it out, right? Will it cause companies to reevaluate their own models?

Zeno Mercer: There is a lot of research, and there are groups like Anthropic, Meta, and Google. All of them have great AI research teams. Certainly, they will incorporate these learnings. But they also have a lot of stuff up their sleeves that has not been released to the public yet.

I would not underestimate megacap companies’ gigantic AI research arms, like Meta’s. Alphabet also has Waymo for autonomous vehicles. There are a lot of different parameters and angles to the artificial intelligence story that are not necessarily captured by LLMs alone, which are increasingly becoming tools.

Jane Edmondson: Taking a longer-term perspective, is this news ultimately a good thing or a bad thing for AI?

Zeno Mercer: It’s both. From our perspective, it’s not unexpected, though I think the delayed market reaction was a little bit of a surprise. We did an ETF Prime podcast recording several days before the market sell-off, where I covered DeepSeek and the Chinese models, and the market was not reacting yet. [DeepSeek] was definitely overshadowed by the Stargate news.

It’s also important for people to realize that there is a lag between when new technology becomes available and when it reaches widespread adoption and deployment. It’s all interconnected, and small or large advances in one subset (like a more efficient processor) can open up new platforms.

We will continue to see disruption and new innovations make AI more powerful and energy-efficient. I believe AI will help solve its own energy problems, not only through human-driven solutions but through paradigm shifts that solve the problems at hand in the most energy-efficient way possible.

Then, if you start to add smart connectivity, AI routing, and other enhancements, we’ll get to a point where our asset utilization will become much greater. That’s a big plus for society, because that’ll reduce energy costs.  

Jane Edmondson: The ROBO Global Artificial Intelligence ETF (THNQ) portfolio seemed pretty well-positioned. It held up much better than many of the competitor ETFs. 

Zeno Mercer: THNQ is a more equal-weight strategy that diversifies across several subsectors that are current contributors to, and will be future beneficiaries of, artificial intelligence. There is exposure to the infrastructure layer, the application layer, and the companies that connect the two. We don’t have a big bet on Nvidia, as we were expecting underperformance from video gaming going into 2025. Given elevated expectations and valuations, we believe other constituents have more upside, because they have underappreciated opportunities as we enter this next phase.

We only have 12 companies down year-to-date, out of more than 50, and the rest are positive, even after what happened. There were some companies that went down that we didn’t think should have gone down. Ambarella, for example, whose focus is on edge computing and not data centers, was down 7%. It’s a company that benefits from cheap AI models, which create use cases for autonomous vehicles, drones, and other scenarios that rely on camera input. Low-cost AI will enable computer vision to have a much greater impact in the world through real-time inference, real-time heads-up displays, consumer devices, and things like that.

The THNQ portfolio focuses more on edge computing and connectivity than many other strategies out there, which concentrate on the first phase of AI and data centers. We are definitely primed for multiple stages of artificial intelligence. There are a lot of exciting areas that are still in the very early innings and will start to appear over the next decade.

Jane Edmondson: As of this writing on January 30, 2025, the THNQ Index is up 8.7% YTD, and globally, the ETF and UCITS have amassed over $1 billion in assets under management.

For more news, information, and strategy, visit the Disruptive Technology Channel.

VettaFi LLC (“VettaFi”) is the index provider for THNQ, for which it receives an index licensing fee. However, THNQ is not issued, sponsored, endorsed or sold by VettaFi, and VettaFi has no obligation or liability in connection with the issuance, administration, marketing, or trading of THNQ.