America’s AI Challenge: Strategic Imperatives
In-person attendance is by invitation only.
The event will stream live on this page.
Kirsten Asdal, Founder and Partner, Asdal Advisory Group
Dean Ball, Senior Fellow, Foundation for American Innovation
Matt Cronin, Senior National Security Advisor, Andreessen Horowitz
Jimmy Goodrich, Senior Fellow, University of California Institute on Global Conflict and Cooperation (IGCC)
Sam Hammond, Chief Economist, Foundation for American Innovation
Divyansh Kaushik, VP, Beacon Global Strategies
Paul Lekas, Executive VP for Global Public Policy and Government Affairs, SIIA
Jack Mallery, Senior Research Analyst, NVIDIA
Lorenz Meier, CEO, Auterion
Daniel Remler, Senior Fellow, Center for a New American Security
Shyam Sankar, Trustee
Shyam Sankar is a trustee at Hudson Institute and chief technology officer and executive vice president of Palantir Technologies.
Founder and CEO, Eclipse Ventures
Patrick M. Cronin, Asia-Pacific Security Chair
Patrick M. Cronin is the Asia-Pacific security chair at Hudson Institute. His research analyzes salient strategic issues related to US national security goals in the Indo-Pacific region and globally.
Timothy A. Walton, Senior Fellow, Center for Defense Concepts and Technology
Timothy A. Walton is a senior fellow at Hudson Institute, supporting the work of the Center for Defense Concepts and Technology.
To sustain its role as the leading military and economic power and uphold global security, freedom, and prosperity, the United States needs to shape and win the race to develop and field advanced technologies like artificial intelligence.
This conference will bring together experts, policymakers, and representatives from leading firms to discuss the intersection of AI, strategic policy, and national security.
9:30 a.m. | Panel 1 | The Great AI Competition: Can the US Out-Diffuse China?
The opening session will examine Chinese and American strengths and vulnerabilities, identify critical inflection points, and establish the strategic context.
- Jimmy Goodrich, Senior Fellow, University of California Institute on Global Conflict and Cooperation (IGCC)
- Paul Lekas, Executive VP for Global Public Policy and Government Affairs, SIIA
Moderator
- Patrick Cronin, Asia-Pacific Security Chair, Hudson Institute
10:45 a.m. | Panel 2 | Other Frontiers: Strategic AI Arenas Beyond Frontier Models
This panel will identify specialized (or narrow) AI areas that are the highest strategic priorities and examine how Washington can rebalance the AI agenda to win these crucial sectors.
- Lior Susan, Founder and CEO, Eclipse Ventures
- Divyansh Kaushik, VP, Beacon Global Strategies
- Jack Mallery, Senior Research Analyst, NVIDIA
Moderator
- Jason Hsu, Senior Fellow, Hudson Institute
12:00 p.m. | Luncheon Keynote | America's Evolving Strategy
- Dean Ball, Senior Fellow, Foundation for American Innovation
- Daniel Remler, Senior Fellow, Center for a New American Security
Moderator
- Bill Drexel, Fellow, Hudson Institute
1:30 p.m. | Panel 3 | AI’s Ideological Competition: Addressing the Techno-authoritarian Advantage
This panel will assess China’s asymmetrical advantages in pursuing AI-enhanced control and explore paths for the United States to pioneer the use of AI technologies for democracy.
- Kirsten Asdal, Founder and Partner, Asdal Advisory Group
- Bill Drexel, Fellow, Hudson Institute
- Sam Hammond, Chief Economist, Foundation for American Innovation
Moderator
- Michael Sobolik, Senior Fellow, Hudson Institute
2:30 p.m. | Panel 4 | AI on the Battlefield: Evolving Military Implications
Experts will examine how AI is reshaping military capabilities, the evolving challenges it poses for defense planning, and the strategic implications of AI-enabled warfare for US-China competition.
- Shyam Sankar, Chief Technology Officer, Palantir and Trustee, Hudson Institute
- Lorenz Meier, CEO, Auterion
- Matt Cronin, Senior National Security Advisor, Andreessen Horowitz
Moderator
- Tim Walton, Senior Fellow, Center for Defense Concepts and Technology, Hudson Institute
3:30 p.m. | Concluding Remarks
Joel Scanlon:
Good morning, everyone, and welcome. Thank you for being here on this beautiful fall morning. Thanks especially to those of you who will be sharing your expertise and perspectives throughout the day in what will be a series of fascinating conversations. Let me say at the outset . . . I should introduce myself: I’m Joel Scanlon, executive vice president here at Hudson. For Hudson, at least, this is a learning and planning exercise. Hudson was founded in the emerging first nuclear age, when the US sought to understand both the promise and peril of new technology: how to harness it, how to control it, how to stay ahead of our adversaries, how to use it for international leadership, how to mitigate the significant and inherent risks, how to think through its broad and possibly existential implications. Depending on your outlook, the potential impacts of the emerging artificial intelligence revolution run the gamut from Uber Eats better anticipating your Taco Bell order to a world . . . not a small thing . . . to a world run wholly by superintelligent and self-interested machines.
At a minimum, it is already starting to disrupt economically and to raise profound moral questions. We’re not going to try to tackle all of that today, but we do find artificial intelligence and its implications increasingly intersecting with the policy areas most relevant to our work here at Hudson. That may be its role in enhancing American industrial capacity; it may be thinking through the challenges of diverging technological approaches with allies; it may be AI’s ability to provide decision-making advantage on a battlefield. Today’s conversations will help inform, guide, and expand our own research in the years ahead. We do begin with one guiding principle, though, which I think you’ll see in our themes today: it matters who leads in this and other emerging technologies. At a Hudson event in New York last week, Palantir co-founder and CEO Alex Karp laid out the two stark near-term paths for AI: either the US leads or China will. The hopes and aspirations around this technology, and the debate over and resolution of the questions, concerns, and legitimate fears, all take on a very different aspect depending on that outcome.
Of course, how we get there is the question. Two brief program notes. I’m sorry to report that we’ve lost Tarun Chhabra to illness this morning, and Sriram Krishnan has been pulled into the Saudi Crown Prince’s visit and the US-Saudi Investment Conference going on today and has had to drop out. We look forward to having him back at Hudson soon, but we’re very grateful to Dean Ball and Daniel Remler for agreeing to pinch-hit today in what should be a great conversation over lunch about the past two administrations’ approaches to AI. So thanks again for being here. And with that, let me turn it over to my colleague, Hudson’s Asia-Pacific security chair, Patrick Cronin.
Panel 1 | The Great AI Competition: Can the US Out-Diffuse China?
Patrick M. Cronin:
Joel, thank you so much. And what a great day this is going to be, talking about the AI challenge. This first session is really just the opener; we’re only going to get so far in the next hour or so, but we have great panelists in Jimmy Goodrich and Paul Lekas, and I’ll introduce them in just a second. The title of this session is The Great AI Competition: Can the US Out-Diffuse China? So the question for me is, will America’s advantage in frontier innovation be erased by China’s ability to diffuse and scale technologies across society and beyond? And not just diffusion, because, as Jimmy will discuss, China is building the whole ecosystem. I’m an optimist by nature, and I think the United States can win, as Alex Karp was imploring us to do, or at least not lose, even though he says we must choose between those two. But the challenge is even larger than I tend to assume, and larger than most people think.
Just think about high-speed rail in China. Go back to 2008 and the very first high-speed rail line, and then 45,000 kilometers of high-speed rail 15 years later. Where will we be in 10 or 15 years at the current rate of work, investment, and policy? The US has many strengths, as I know Paul and Jimmy will both discuss, and if we can harness them, we’ll do well. But China’s ambitions and efforts are all-encompassing, as we’re going to hear. So which country can most rapidly innovate and diffuse advanced AI technologies to gain the upper hand in economic productivity, military power, and global influence? The consequences, as Alex Karp has said, and as the late Henry Kissinger wrote in his final book, are almost existential. I’m thinking of Geoffrey Hinton, a godfather of AI, meeting Graham Allison: the paradigm shift meets the power shift. This panel is going to help us think through these things, and Jimmy Goodrich and Paul Lekas are really well poised to help us explore them.
Jimmy Goodrich has a great bio, but he’s essentially a deep China hand who has worked in the tech industry field from the beginning, and he’s doing important work consulting at RAND and at the University of California Institute on Global Conflict and Cooperation (IGCC), based in La Jolla. He’s based here in Washington most of the year and in China and La Jolla the rest, I think. It’s really a great honor to have Jimmy Goodrich here. He’s a phenomenal treasure.
Paul Lekas is a former top DOD lawyer who has worked on technology accounts for years and is now basically in charge of government affairs for the Software and Information Industry Association, the umbrella group for the hundreds of companies that work in this field. So it’s great to have Jimmy and Paul anchor this discussion this morning, and I think we’ll turn to some audience questions later. But I’ve asked each of them to give a couple of minutes of very brief framing remarks on how they think about the AI competition between the US and China. I know Jimmy is more likely to talk about China and Paul more likely to emphasize the US, but there’s no barrier here. Over to you, Jimmy.
Jimmy Goodrich:
Well, great. Thank you, and it’s a pleasure to be here with everyone this morning. Sorry we don’t have Tarun. I’ll go jab him for not being here, but I did see him yesterday. He had a sore throat, so I can vouch for the authenticity.
Patrick M. Cronin:
His voice has been hacked, though, I think, is his excuse.
Jimmy Goodrich:
I think if we just step back and look at the competition between the US and China, it’s really not limited to AI; it’s in all technological domains. And if you want the clearest sense of the competition, the good thing is that Beijing writes all its intentions down, verbatim, word for word, in a five-year plan, and they just did that last month in Beijing. In fact, in the 15th Five-Year Plan recommendations, Xi Jinping proposed that China needs to take extraordinary measures to become fully self-sufficient across the entire supply chain in semiconductors, robotics, and a few other key areas, and they want to be a global scientific leader over the next five years.
And if you thought they weren’t investing enough already, that means they’re going to quadruple and quintuple down on those efforts. The language China used, extraordinary measures, is actually a political term within the CCP reserved for the most important political mandates from the top, which must be met without any excuse. Any measure can be taken, any amount of funding can be spent, to achieve that objective. In fact, Xi Jinping used that same phrase for his poverty alleviation campaign in 2015, and guess what? They officially eliminated poverty by 2020.
Patrick M. Cronin:
Amazing.
Jimmy Goodrich:
And so of course they didn’t fully eliminate poverty, but that’s just the way the party works, and there will be a massive mobilization over the next five years to try to achieve this. For AI and the cutting edge, so far China has shown that it is very close behind, if not equal to, the US in many areas of the AI stack. There’s the compute infrastructure, where the US and its allies, Taiwan, Korea, the Netherlands, still retain an advantage. There are the models and the algorithms, where China is zero to six months behind, depending on how you measure things. There’s the talent, where between China and the US I actually think it’s a net wash: the US has the global talent ecosystem available to it, and China has its own domestic ecosystem.
That’s, I think, a disadvantage for China. And then there’s data and competitive business models, and then energy, of course, where China has a profound advantage in its ability to mobilize and connect things to the grid. But overall, from what they’ve been saying, China is quite comfortable being a fast follower. I think they’re not yet fully convinced of the mythic god-in-a-box superintelligence idea. They’re willing to let the American labs run in that direction while they focus, as you mentioned, on diffusion: getting the healthcare, education, transportation, government, and public sectors using models, developing models, and integrating them into the economy.
One example is public policing. China first adopted AI for facial recognition in 2017 and really led the world in adopting facial recognition technology, and we saw that ubiquitous surveillance in Xinjiang and Tibet cause significant human rights concerns. No surprise, China is already investing heavily in adopting large language models for domestic policing, well ahead of the world in that space. For China, they believe they can sit right behind, gather a lot of the innovation created out in the West, and then just deploy faster.
One of the big cruxes, though, is going to be how much computing power they will have access to. Even if you just want to deploy models, you still need state-of-the-art semiconductors and data centers to deploy and run inference on your chips. And we all know that export controls and the complexity of the technology have held China back, maybe not as far as some would like, but they have created a lot of hiccups and speed bumps for China. You can’t lead in diffusion if you don’t have enough semiconductors and enough data centers. We’ve seen that play out in the export control issue and in US-China trade negotiations, where the most advanced semiconductors have become the flashpoint for a lot of those discussions. So there’s a lot to unpack there. But with China, you don’t have to agree with it, but you have to admire that they have a very determined strategy that they will act on and execute.
I don’t think state planning is the answer to everything. We lead today in AI, 80 percent of the world’s installed compute is in the US, the best models are here, and that had nothing to do with government policy. Likewise, China’s most innovative company, DeepSeek, was about as far from the five-year plan as you could find. So we should remain optimistic that the private sector is going to lead the way for us, and figure out how to partner with the private sector in the US: to get out of the way where necessary, or to partner where required, on things like energy and permitting, so that they can run as fast as they need to.
Patrick M. Cronin:
That’s a great foundation. I’m going to come back and talk about some of the other foundations China is trying to lay, and whether AI was even mentioned in, say, the Made in China 2025 plan when it was first hatched back in 2015, because I don’t think it was really a big goal at the time. But first I want to turn to Paul for your framing remarks. I’m thinking, Paul, about the big Hudson gala where Sarah Stern was handing the Herman Kahn Award to Alex Karp, and Alex Karp was saying, “We’ve got to build things, we have to build things.” You represent all these US industries. Are we building? What are our strengths here? I know we’ve got this frontier innovation going on, but are we diffusing at the same time?
Paul Lekas:
Yeah, it’s a great question, and I thought Jimmy’s comments were phenomenal. You really laid out the core advantages on either side and the challenges that are awaiting us. The United States leads in innovation and model development, and compute is really where we have a demonstrable advantage, probably also talent and private sector investment. And there is a lot of energy. Everybody in this room knows there’s a lot of energy in the private sector to build, to advance adoption, to diffuse AI technology from the United States, not only across the United States but internationally. And there are a lot of challenges there. I want to go back to something, building on what Alex Karp said, to crystallize why all of this matters. For people in this room, building on those comments and also Joel’s: national security and economic security are absolutely critical for the world going forward.
But something that doesn’t get as much attention, and it matters a lot to the private sector, is why US technology. If our goal is to diffuse US technology, how do we convince the rest of the world that they need to build on US technology and US models versus those coming out of China, especially in cost-sensitive regions like the global south? That’s a narrative that probably hasn’t gotten enough attention, but there are some clear differences, and they go to things like content filtering, security, privacy, what we consider core democratic values. That has become something that matters a lot to US industry. So US industry generally has a very favorable view of the actions the Trump administration has taken. In what it has laid out in the AI Action Plan, there’s a clear imperative to diffuse US technology around the world. But I think we’re facing growing headwinds in various countries, and we also have some challenges domestically. Jimmy spoke about energy.
China probably laughs at the energy situation in the United States and how much tension we have over building data centers. Increasing both generation and transmission is a huge challenge right now, and that is a core focus of US industry. So we really need to build that infrastructure in the United States, and that requires cooperation and partnership with governments, with the federal government and with state governments. A lot of these issues are local, and that all goes to another advantage that China has, for better or worse, which is centralized planning, which we don’t have here.
There is a fair amount of regulatory friction in the United States, and that is certainly impeding the diffusion of AI technologies and applications even within the United States, across all the domains where we think AI can actually contribute. I’ve been focusing more on the tensions than on the positives, but US private sector investment is extraordinary. US firms are really well positioned to go even further than they have so far. But ultimately it’s a question of how well we diffuse technology across the United States, where there are a lot of headwinds, and how we can advance adoption internationally.
Patrick M. Cronin:
I want to come back to you, Jimmy, on China’s systematic approach and what they’re trying to build. Not to make the problem too large, but we need to appreciate how China views building this whole ecosystem, which they do well when they put their minds to it, whether in military affairs or in this very important critical technology area. When North Korea says it’s going to build 10 new factories a year for the next 20 years, we can be very skeptical. But when China says it’s going to build 20 new major science and technology facilities over the next decade, it seems like, yeah, they’re going to do it. So how do they think about the relationship between the foundations of science and the ends of technology, especially AI technology?
Jimmy Goodrich:
Yeah, it’s a great question. Right now, China’s industrial policy is focused on providing what they consider public goods infrastructure. Education and talent development are a huge part of China’s investments, whether in leading-edge universities or elsewhere. DeepSeek, for example, surprised not just Silicon Valley but the world with its capability. In fact, about 90 percent of its workforce was indigenous: never trained overseas, educated in local universities, and working on cutting-edge reinforcement learning and other techniques that were state of the art when that model was released earlier this year. That shows China’s education system has seen dramatic improvements over the last 20 years. Another area where China is investing heavily is basic research. This year, China is going to spend roughly $500 billion on research, coming neck and neck with the United States on a net basis. China’s firms are also increasingly innovative.
Yes, there’s accounting fraud, and there’s all the waste and corruption within the Chinese system. But when you dramatically increase funding like that, there will be a change in output as well. And they’re investing in public research and technological infrastructure. For example, China treats semiconductor facilities as basically public infrastructure, and the state will fund the development of a self-sufficient semiconductor capability. Likewise for data centers: China has a national data center compute program known as Eastern Data, Western Compute, migrating most of its data centers into the interior of China, where there is access to clean energy, with subsidized support, so that when DeepSeek or Baidu chooses to use a data center, it’s using a subsidized, low-cost, state-owned operator like China Mobile or China Telecom. And on the adoption side, the government can encourage, or make a phone call and make it happen, so that state-owned enterprises that control 30 to 40 percent of the commanding heights of the economy adopt AI much more quickly.
Now, all that sounds impressive and scary, but on the other hand, I was having a conversation with a Chinese venture capitalist who told me that among his dozen or so portfolio companies invested in AI, 80 percent want to be in the US market because no one in China is paying for AI. We all know that in software, Chinese companies don’t like to pay; they really don’t want to pay for a chatbot or generative AI, and if they can use DeepSeek for free, why would they pay for anything? So it’s quite interesting that the US, I think, still fundamentally leads in business models and in commercial innovation and monetization of the technology, much more than China does. Ultimately it’s going to be a contest of systems: which system is going to be better positioned?
We have to remember there are significant national security implications at stake. We talked a little about this earlier. Already, advanced AI systems, even without AGI or superintelligence, offer very capable nation-state advantages. China, for example, was using DeepSeek for a disinformation campaign targeted at Taiwan and the United States; The New York Times reported on this company called Sugon, owned by the Chinese Academy of Sciences. Already China is deploying AI for weapons development and simulation. And if you think about where AI can be applied, think about having 500 Einsteins in a data center creating new technologies, materials, chemistries. We may see, for example, AI itself win a Nobel Prize, whatever the system’s name. China is investing heavily in applying AI to science in particular. All this combined, I do think we have to be very cognizant that China is running very fast and investing heavily. We don’t have to replicate it, as I mentioned before, but we have to be mindful and adjust accordingly.
Patrick M. Cronin:
Five hundred Einsteins, and I’m thinking of Herb Simon, who won a Nobel Prize much later, after helping co-found the idea of artificial intelligence with that first computer program, the Logic Theorist, in 1956 at Carnegie Mellon University, along with RAND scientist Allen Newell and RAND programmers. And that was just to show that a computer could think intelligently enough to prove theorems. It’s come all this way, and now imagine multiplying it; it’s phenomenal. It’s why I think Japan, Korea, Taiwan, Malaysia, Thailand, and Singapore are all putting AI at the top of their industrial policies, and why there’s so much at stake in this US-China competition. Paul, are we able to move with the agility we have in our great democracy and market system, working across the private sector and government, to see not just the sustainable bottom line of selling the software, but also to put in the money for R&D and the inputs that are central to getting the outputs we’re trying to achieve?
Paul Lekas:
Well, in terms of the government side, can the government move with that kind of agility and invest in what we need in foundational research? The answer is: unlikely. We haven’t seen that. We don’t have centralized planning. We really should be investing much more as a society. Our government should be investing a lot more in research across all these programs, in creating national capabilities like a national compute resource, potentially even working on data centers, and in the much more coordinated energy and permitting work that we need. Some of that is investment; some of it is just coordination. Now, the private sector moves with a good deal of agility and can work very well with foreign governments too.
So there is soft diplomacy that happens through the private sector, and I think that is moving quite quickly. You mentioned all these countries out in Asia; those are really important allies, and a big part of this diffusion question is how well we are working with allies, how much we are on the same page and rowing together. The US government has launched something new called the AI Exports Program, which many of you may be familiar with. The idea is for the US government to promote the export of full AI stacks to other countries. It’s undergoing review and accepting comments right now. In theory, it’s a nice idea. There are some challenges, but ultimately its success will come down to how well the private sector can work not only with the US government but with foreign governments that are looking to increase their AI capacity and build on US technology in different ways.
Patrick M. Cronin:
Jimmy, you mentioned that the US is very interested in artificial general intelligence, maybe too interested in that quest for a superintelligence outcome, whereas China has been much more pragmatic: we’re not sure about the future, so let’s just diffuse for now. They talk about integration across society, but they also think globally, and you’ve mentioned the surveillance state and how they integrate this into society and policing. What does China have in mind when it talks about the integration of AI? I know they talk about AI-enabled, intelligentized warfare, so they clearly see AI at the center. Whether that means turning over the keys to autonomous systems, maybe not, but how far will they go with this?
Jimmy Goodrich:
The buzzword being used in China is embodied AI, and multimodal. That means AI that can not just converse in a chatbot but also handle images, video, scientific data, atmospheric data, hyperspectral data from space, so the model can integrate all different types of data. So it’s multimodal and embodied, meaning it’s not just sitting in your data center, accessed from a terminal, but also sitting in your drone or your surveillance camera, low power, high processing capability, so that you’re pushing AI out to as many endpoints as you can across your economy. In fact, Xi Jinping held a study session on AI last year where the lecturer spoke specifically about embodied AI, and that’s the quest China is on. There are a lot of different approaches. It’s not just traditional computing; it’s neuromorphic computing, or brain-computer interfaces, literally embodied physically in humans. Like Elon’s Neuralink, there’s a whole slew of companies and researchers in China trying to do that.
Some of them are affiliated with the PLA. So they’re not placing one bet; they’re placing many, many bets on different types of technologies. At the end of the day, though, virtually all of them rely on your compute infrastructure. And that’s going to remain a perennial arena of US-China competition: your model, your embodied AI, still depends on how much silicon you’re producing and how good it is. That’s where the US, in coordination with allies like the Netherlands and Japan, needs to create a tighter regulatory regime, so that fewer of those advanced technology items get to China and China can produce fewer of those chips. Then the US and its allies can maintain a competitive gap in how many silicon chips and data centers they can pull together to create that national computing advantage. That’s still going to be vital, regardless of what China’s strategy turns out to be.
Patrick M. Cronin:
Yeah. Paul, I wonder if I could push you a little on the data center question. Compute power is the buzzword for the president at Carnegie Mellon University. President Trump had the big AI summit there a couple of months ago, and you had the Democratic governor and Senator McCormick, the Republican leader on the energy committee, come together with industry and say: this is going to be compute power, right? We’re bringing energy and AI together, and we’re going to do it with these data centers. We’re going to showcase it in a place like Pennsylvania, but it’ll be replicated elsewhere. Do you feel like this is poised to succeed? I mean, you have the Saudi crown prince here in town striking a deal on what is essentially a data-center-centric kind of alliance. Data centers seem to be having geopolitical impact in real time; they’re so crucial. Where’s industry on this?
Paul Lekas:
Yeah, there’s a domestic and an international dimension to the data center issue. It’s probably industry’s number one policy priority right now; there are a couple of other really important things, but I would say this is number one. And there are challenges at the local, state, and national levels. The goal is to be able to develop more data centers more quickly, and overall, industry is probably 80 percent on the same page here. But there are differences, because there’s still a lot of competition within industry, and everybody wants to be able to build the data centers to power their own AI. You’re seeing a big push by the federal government in terms of streamlining and energy reform, but a lot of this process is actually done at the state, local, and regional levels.
It’s incredibly complicated. There’s significant pushback in the United States, so local politics matter a lot here. There are narratives that AI is really driving up the cost of energy, and not to get too far into local politics, but that’s a big issue, and it’s not necessarily true; it’s much more complicated. The utility companies . . . we have aging grid infrastructure. China has been able to get past that through state investment and centralized planning, but our grid infrastructure across the country is poor. In the past 20 or 30 years, we’ve built, I think, two or three new nuclear power generators. In China, it’s something like 30.
We are not supporting alternative energy sources; industry’s view is that we need an all-of-the-above approach. Internationally, I think we’re seeing some progress with Saudi Arabia, which is positive. I’ll say that until very recently, US hyperscalers were not able to construct their own data centers in countries like Saudi Arabia and the UAE, even though that has been an administration priority for several months. But now we’re starting to see some movement in that direction, which is positive, because we need to look at building data centers globally. And I’ll pick up on a couple of other threads as we go on.
Jimmy Goodrich:
One other point as well, on the compute issue I mentioned. We talk about data centers, but we have one Achilles’ heel, which is that most of those chips are being produced in Taiwan. And it’s not just the chips: if you look at who’s assembling the servers, it’s a Taiwanese company.
Who’s building the capacitors and the resistors? Another Taiwanese company. That’s all great; Taiwan is a friend of the United States, and likewise. But it reinforces the importance, in all of this, of both credible deterrence for Taiwan, so that, God forbid, we prevent a conflict from happening, and of reinforcing our own supply chain resilience, so that instead of being 90 percent dependent on Taiwan for some things, we might be 60 or 70 percent, with some credible alternatives. Otherwise, this is just a house of cards waiting to fall down.
Patrick M. Cronin:
And for both of you, how do we get from that 90 percent to 60 percent? What are the top things we need to do?
Jimmy Goodrich:
Well, I think the administration’s strategy so far has been good, in that we’re seeing a lot more investment announcements than we did over the last seven or eight years, not just in leading-edge semiconductors but in assembly and the high-end packaging involved with semiconductors as well. We need to think not just about the shiny object but about the broader supply chain. And it’s going to require an offensive strategy on the investment side: tax credits, which I think actually work a lot better than pure industrial subsidies. We know how to use them, they’re fair, the government’s not picking winners, and everyone fills out a form.
On the other hand, it’s using trade policy, such as tariffs or export controls, in coordination with our allies, to create a quarantine zone, so that we decouple ourselves in areas where we know Beijing is going to use them against us. Just look at Nexperia, the Dutch semiconductor company that the Chinese acquired, where the Dutch government is now trying to wrestle back control. They’ve essentially had to give up trying to wrestle back control of a company within their own borders, because China said: well, sorry, because the chips pass through our borders to be assembled, we’re not going to let them go back to you unless you give us back control of the company. They will hold our supply chains hostage, and we do not want to be dependent on that.
Paul Lekas:
And if I can just build on that, this goes to the traditional way of thinking about these issues: we need a promote strategy and a protect strategy. Our lead in chips and compute remains sizable, and we need to be much more focused on how we maintain that lead and recognize that that is the choke point. It remains the choke point. It’s the semiconductors, and it’s also the semiconductor manufacturing equipment. We need to be working much more hand in glove with Japan and the Netherlands to make sure we’re not allowing that equipment to leak to China. China will be able to catch up on data. They’ll be able to catch up on models. They certainly have the energy lead. But this is really where we can try to mitigate some of the potential national security risk.
Patrick M. Cronin:
And Eddie Fishman’s book Chokepoints is a great primer on some of these issues. What about the choke point on talent? China keeps advertising this siphoning off of talent, and every day, if you read the South China Morning Post or other-
Jimmy Goodrich:
That’s owned by Jack Ma, but yeah.
Patrick M. Cronin:
Right. Yes. How do you think about the talent challenge here in terms of people?
Jimmy Goodrich:
Yeah, I’m generally more optimistic on talent. I think the US is this shining beacon where people from around the world want to come and work in the most cutting-edge companies. And we still have the world’s leading universities. So as long as we keep that system running, with the appropriate adjustments, I think we have the lead, because we have a five-billion-plus talent pool to pull from. Whereas in China, if you go into virtually any of their companies, it’s going to be a very homogeneous, domestically sourced talent pool. There are some exceptions; Alibaba and Huawei have some small R&D institutes outside of China, but they’re not really read into or part of the core efforts. So yes, as a pure state-planned, top-down initiative, it’s impressive, but we should not lose sight of our own talent advantages, which I think are formidable. The fact that if you go into a fusion energy or AI startup you’ll find the brightest minds from the US and around the world working together is amazing. You do not see that in China; it’s really the brightest minds from China in those companies. I think that’s ultimately a disadvantage for China.
Patrick M. Cronin:
Yeah.
Paul Lekas:
Yeah.
Patrick M. Cronin:
Paul, do you see that?
Paul Lekas:
I agree. I think talent is an advantage. We just need to make sure we maintain it.
Patrick M. Cronin:
Keeping those incentives, yes. Jimmy talked about the defensive side of this as well. Industry has a little . . . has some qualms, though, right, about export controls and some of the defensive mechanisms we might put in place to slow down the competition and preserve our lead right now, so we can get our act together and build out. What are industry’s concerns about export controls?
Jimmy Goodrich:
Well, I can’t speak for industry, but certainly if you’re a private company whose role is to maximize profit, you’re absolutely going to want to sell your product in as many markets, to as many people, as possible. It’s then the role of the government to decide which of those products relate to national security, to set the thresholds that define what those are, and to determine where they cannot be sold. And we’ve seen a really intense debate over the most advanced AI GPUs and whether they should be sold to China. It’s my view that we absolutely should not be selling the most advanced. And even then, in popular terms, is it the fourth-best chip, the third-best chip?
Patrick M. Cronin:
Yes.
Jimmy Goodrich:
Our fourth-best chip is actually a lot better than China’s fourth-best chip, by a wide margin, and we can produce a hundred times more of them than China can. So one of the key elements is not just the one-for-one technology comparison but also the quantitative advantage we may or may not have, because having one, or even 10,000 or 100,000, of those chips is not enough. You need millions, and the ability to sustain them. Those two elements, I think, need to inform our export control strategy.
Look at, for example, the H20, which was restricted and then recently had the restriction lifted, NVIDIA’s legal workaround chip for the China market. Its processing performance was lowered, but at the time it had the highest memory bandwidth, meaning it was among the best in the world for inferencing, which is really important for deployment and scale. That’s why China really wanted, and still wants, those chips, despite the government’s wish that companies use domestic ones: they integrate with a better software framework and have really high memory bandwidth, which is exactly what companies like DeepSeek need to deploy their models across the economy.
Paul Lekas:
Yeah, I completely agree with everything Jimmy said; that was an excellent analysis and comparison. There’s a clear division in industry today among the big AI companies between those who think we best advance American interests by allowing American companies to sell chips everywhere, and those who believe we best advance American interests by maintaining our lead in compute and trying to advance the rest of the AI stack everywhere. My association and the companies I work with are more on the latter side of this.
The view being: by allowing companies to sell the most advanced chips to China, what are we actually going to achieve? We will enable China to develop its AI much more quickly than it otherwise would, because its own domestic chip manufacturing capacity is so far behind where American technology is, granted we don’t manufacture it all domestically. And the view is that we best advance American interests, values, national security, and economic security by trying to diffuse the rest of the stack. We want countries and companies around the world to be building on US software and US AI models. We think they’re safer, we think they’re better, and we want them to embody the standards being developed. We think that’s the best way to further American interests and to keep China from advancing in AI as much as it potentially could, raising a whole range of national security risks.
Jimmy Goodrich:
For example, I’m a fan of F1. It’d be like Ferrari saying, “Just because we would make an extra 50 cents, we’re going to sell our engineers, our design specialists, our fuel, our drivers to all of our competitors, because this will increase our top line.” You lose at the end of the day, because you’ve enabled all of your competitors to have the best technology, just like you do. But it’s a threshold question. I do think it’s important for American semiconductor companies to be competitive worldwide, even competitive in China with some offering, but we have to decide what that level is, and that’s where the debate is.
Patrick M. Cronin:
I’ve heard it described by some experts that if we protect three percent of our industry for national security reasons today, we maybe need six percent protected through various measures. The question is who makes these decisions and how. Since we don’t want to emulate the hammer and sickle, or hold a fourth plenum, or have long-term plans drawn up by a central government, how do we best make strategic decisions and these trade-offs rationally? For both of you.
Jimmy Goodrich:
Well, one thing that’s really important is organizations like Hudson Institute and others bringing third-party analysis to the government so it can decide what’s in the national interest. The private sector should have an important voice in that process. This is cutting-edge technology. I remember a cabinet official in a previous administration saying they were staring at slides of EUV lithography mirrors in a principals meeting with no idea what they were making decisions about, because the technology is so complex.
So it’s also about developing a workforce here in Washington that is technologically literate, bringing people from the private sector into government for short stints, and so on, and making sure our process is appropriately partnered with industry. But at the end of the day, the government should be making those decisions for national security. And we have to ensure that national security is well defined, that you’ve got a good mission to go after, and that you don’t let too much scope creep in.
Paul Lekas:
Yeah, those are all good points. The discourse around national security has changed over the past 20 years, and now it almost seems like national security swallows everything else. That’s clearly not the case; not everything is national security. It should be the domain of the government to determine what actually is national security. Other groups can make arguments and convey why they think different issues qualify as national security risks. But we do need to define what national security is, and I think we need to do that in a more crystallized way.
But ultimately it is the federal government, as advised and counseled by a range of different interests: academics, civil society groups, industry, all with their own perspectives. Plus there is engagement with foreign countries, bilaterally and through multilateral, multi-stakeholder organizations. All of that is relevant to the US determining what national security is.
Patrick M. Cronin:
I was struck by something on LinkedIn this week, and you have to be careful about LinkedIn, but the Defense Department posted its new six technology priorities. They whittled the list down from 14, the logic being that if we’re trying to protect too many things, we’re maybe protecting nothing. The six were quite well integrated and thought through, but I’m wondering: what are we missing if we narrow it down too much? How do we strike that balance? And is it enough to go on LinkedIn and tell the world that this is what we’re doing?
Paul Lekas:
Oh, it’s certainly not enough to go on LinkedIn. What we see coming from the government in the public eye is a sliver of what’s going on behind the scenes. And I confess I did not see the list of six priorities, or if I read it, I don’t recall exactly what was in there. But innovation is happening, and the way we think about AI today is going to be very different from the way we think about AI in five years. Jimmy talked about multimodal, and I think that is going to be much more prevalent in five years. We have people doing research and starting to develop applications around spatial AI.
The line from an LLM to AGI, I think, is a broken line, and if we really are moving toward AGI, it’s going to include a lot more than just words, images, and videos. The DOD, or Department of War, and other federal agencies have redefined what they think the critical technologies and priorities are, but that is going to keep changing. Later today we’re going to hear about some of the technologies beyond AI, or that use AI, particularly those where China has a very clear advantage, like robotics and advanced manufacturing.
Jimmy Goodrich:
Yeah, I’ve seen dozens of lists of critical technologies, and they all kind of mean nothing.
Patrick M. Cronin:
Yeah.
Jimmy Goodrich:
What matters is: is the government going to invest in research and development? Are they going to buy the technology? One thing we haven’t spoken about enough is that the government is a major purchaser, and they’re not using AI enough.
Paul Lekas:
That’s right.
Jimmy Goodrich:
If you compare, say, the Chinese military to the US warfighter, one can argue that China is already racing quite far ahead. They’re developing all sorts of AI, whether deep learning, reinforcement learning, or large language models, across their different armed services. Meanwhile, many parts of the US government still can’t figure out how to use Microsoft Office, and on top of that they’re thoroughly penetrated by Chinese cyber hackers. So we have to think of the government’s role as an incubator of initial research, and then also as a buyer of the technology, which can help build national capabilities.
Patrick M. Cronin:
That is one of the six priorities the Department of War announced, by the way: pushing this out into the military faster and being more focused on it, for the reasons you’re spelling out. What are the military implications, in general, of losing the AI competition? Are they as dire as capitulation? How do you think about the larger military-strategic implications of losing the lead and falling behind on AI?
Jimmy Goodrich:
Just think about the wealth of opportunities nation-states have to deploy AI. There’s the gray zone, particularly disinformation: pushing out all sorts of information, whether in an election cycle or in the run-up to an armed conflict, where a nation-state wants to shape the perceptions of the local population. And much more accurate deepfake technology. I was sitting in Taiwan with a friend who said, “Generative AI is not good enough to replicate what an actual person would say.” I said, “Well, let’s actually test that.” And it was spitting out very accurate local dialects of Taiwanese Mandarin that were very convincing.
Likewise, on the military side, think about nuclear deterrence. A critical aspect of that is situational and domain awareness. If you have an AI system that can, for example, monitor all of a nation’s tracked nuclear assets in real time and essentially render deception impossible, that could create a significant advantage for whoever develops it. And likewise, a big debate is who is advantaged by AI, the defender or the attacker? Oftentimes it’s the attacker, not the defender, that gets the advantage.
Patrick M. Cronin:
Indeed.
Paul Lekas:
Yeah. Certainly AI is going to speed up war; it’s going to speed up decision-making and also execution. Whether that provides a comparative advantage to one side or the other is not clear, and the technology is going to keep developing. This notion of embodied AI is critical to intelligence, surveillance, and reconnaissance. In the hybrid warfare areas, offensive cyber operations and information operations, AI can be absolutely critical, and we’ve already seen China take productive steps in that direction. I think the stakes are really big on both sides. This is where the analogy to nuclear technology makes a lot of sense.
Patrick M. Cronin:
You see all of Russia’s hybrid war right now in Europe, and I go back to Frank Hoffman thinking through the concept of hybrid war 20 years ago, when he was looking at Hezbollah sending in rockets but also using terrorism, mixing irregular force with more traditional force. With a superpower and AI-enabled systems, this is going to be on steroids: warfare, kinetic or not, could come from every single azimuth. It’s mind-boggling.
I want to go back to our origins with Herman Kahn in the ’50s and ’60s at RAND, and then the creation of Hudson, and RAND working with Carnegie Mellon and others on AI development. You’ve talked, Jimmy, about the American foundations of science and technology coming out of World War II, through the ’40s, ’50s, and ’60s. That’s when these fertile minds were thinking about these things. China feels like it’s entering that sort of era, right?
They’re very optimistic and ambitious. They have headwinds, and a lot of challenges, and corruption, and all the other things we could talk about, but they believe they’re in that era now. I’m thinking of Dan Wang’s book Breakneck as well: we’re a bunch of lawyers and they’re a bunch of engineers. And if you look at the Politburo, yeah, there are a lot of engineers running China.
So with those big scene-setting challenges, how do we get back to harnessing our capabilities and creating a new generation to build the future of science and technology in general, and of AI-enabled technologies in particular? That’s a big question, but what’s the through line between American history and this new cycle we need to go through to compete with China?
Jimmy Goodrich:
Yeah, so, as you mentioned, China today believes it’s in its era of big science. It’s investing in national labs, building particle accelerators, light sources, inertial confinement fusion and laser ignition facilities at an unprecedented scale, dozens of them across the country, huge investments. They are not vanity projects; they’re the real deal, from what I’ve heard from scientists who’ve been out there and visited some of them.
And they’re just playing from our playbook, the one that works: invest in public infrastructure, basic research, and national labs with a national security mission, and allow them to partner with the private sector and spin ideas off into the market. That worked very well for the US, with DARPA, the internet, GPS, the integrated circuit, the defense programs of the ’60s. China is just saying, “I want that. That works really, really well.”
The one area where I think we still have an advantage is our innovative venture capital. There’s been a dearth of privately funded venture capital in China; most of it is government-based. And I just think we’re inherently going to make better decisions about what to invest in coming out of our science infrastructure. But there is now a tremendous political debate over science here, and Beijing has to look at that and say: this is great, let them continue to destroy themselves.
So there are things we need to clean up in our research community, our own issues, but we have to keep our eye on the ball: there is a greater national mission around global scientific and technological leadership. If AI is going to be multimodal, you’re going to want to generate all sorts of scientific data, whether astrophysics or chemistry, to feed into your models. And if Beijing is investing in the biggest and best facilities in the world, that’s going to attract lots of leading global scientists, and it’s going to put us at a disadvantage if Lawrence Livermore is working off of 60-year-old facilities while China has a brand-new system that’s better and more modern. Where do you think that scientist from Singapore or India is going to want to go? They’re going to want to go to China, not the US.
Patrick M. Cronin:
Paul.
Paul Lekas:
Yeah, I think we need to be doing more in terms of public investment. We need to invest more in our national labs and in all of the science and research agencies within the federal government. We have extraordinary potential there, even just in unlocking all the data the US government holds that is not incorporated into any of the AI models developed so far, because it’s often siloed. There were some initial moves in that direction under the Biden administration, and there are some current moves under the Trump administration, but I think we need to do a lot more, and it’s going to require money and resourcing. The other thing the federal government can do really well is convene stakeholders, and that’s really important. I would agree that our venture capital sector is productive and has helped generate a lot of innovation, but there’s also probably more cutting-edge foundational innovation happening at the bigger companies, where the path to market may be further off but they have the resources and capability to do the kinds of things that in some countries, maybe in China, would be done through state-run operations.
And so I also think our big hyperscalers, the big tech companies investing heavily in AI, are doing a lot of foundational research behind the scenes to map out the next generations of AI and AI applications. I think that’s pretty extraordinary and something we need to foster.
Patrick M. Cronin:
This has been a phenomenal discussion. In our final minute, I wonder if I can provoke you to think about what kind of surprise we might experience in AI over the next five or 10 years, whether in the US-China competition, in industry, in the science itself, or maybe outside of AI entirely. When you think about things that people are not thinking about, what comes to mind?
Jimmy Goodrich:
I think one thing we should be thinking about is, as I mentioned earlier, AI itself, whether it’s Gemini or ChatGPT, becoming a Nobel Prize winner over the next five years. What does that mean, and what are the implications for accelerating humanity’s scientific discovery? And then what does that mean for national security, when all of our present and future military systems depend on our technological advantage? So I’ll just leave it with that.
Patrick M. Cronin:
All right, so our next Nobel Prize winner may be Claude or something.
Jimmy Goodrich:
Within the next 10 years, I’ll put it there. Yeah.
Patrick M. Cronin:
All right.
Paul Lekas:
I’ll hedge a little bit. I don’t have any predictions about the future of innovation that haven’t already been laid out there; I focus much more on the risks, the challenges, and how we get from here to there. But I think it’s what we’ve been talking about. I think we’re going to see a lot more multimodal AI, and as we’re able to unlock new sources of data, we’re going to unlock applications that start to realize some of the potential people have envisioned.
Patrick M. Cronin:
Well, thank you very much. Jimmy Goodrich, Paul Lekas, please join me in thanking our speakers.
Jimmy Goodrich:
Thank you.
Paul Lekas:
Thank you. Good to be here.
Panel 2 | Other Frontiers: Strategic AI Arenas Beyond Frontier Models
Jason Hsu:
My name is Jason Hsu. I’m a senior fellow at Hudson Institute. Formerly, I served as a legislator in Taiwan’s Legislative Yuan. At Hudson, I run a program called Semiconductor Trade and Geostrategy. I’m excited to be here with you all, joined by a group of very distinguished panelists who are experts in their own domains but bring real breadth and depth to this AI debate, and to discussing all things beyond LLM models.
We understand that today’s AI discussion is obviously centered on compute, on power, and on which type of frontier model is most effective and efficient. But there’s more to it, and obviously, through the lens of geopolitical competition, we need to understand how best to foster innovation while also addressing national security concerns. So I’m honored to be joined by today’s panelists. To my right . . . to my left is Lior Susan, the founder and CEO of Eclipse Ventures, and Divyansh Kaushik, vice president at Beacon Global Strategies. And to the far left is Jack Mallery, senior research analyst at NVIDIA.
Now, America’s AI debate has been dominated by the frontier models, but that is only one dimension; there’s more to it, as I’ve mentioned. The decisive arena will be domain-specific applied AI systems that deploy compute at the edge, accelerate industrial capabilities, modernize critical infrastructure, and define next-generation military and economic power. In this panel, experts from industry, venture capital, and policy will identify the specialized AI domains that matter most, explore US vulnerabilities and strengths, and discuss how Washington can rebalance its AI strategy to win in these critical arenas. So I’ll begin by having our panelists give a quick three-minute overview of today’s high-level topic. Over to you, Lior.
Lior Susan:
I would need 30; I don’t know if I can do it with three. But I mean, we’ll see later today with NVIDIA’s earnings how things are going to shift.
Jack Mallery:
I have no comment.
Lior Susan:
No comment. But I always think about how artificial intelligence got into our lives mainly through the LLM, as you noticed. What it mainly gives us is a taste of what the technology can actually do.
Jason Hsu:
Great.
Lior Susan:
And if you take one step back, I personally believe artificial intelligence will be the most powerful technology, or infrastructure, that humankind has ever built. Now, we can talk about capex and whether it’s a bubble or not, and I have my own two cents there: when humanity rolled out electricity, we invested around five percent of world GDP every year. The US is currently investing about 1.1 percent of its GDP on AI. So I actually think there is a long way to go. I personally believe the advancements of AI will be greater than electricity.
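As a rough check on those percentages, here is a minimal back-of-envelope sketch. Only the two investment shares come from the remarks above; the GDP totals are illustrative assumptions, not sourced figures.

```python
# Back-of-envelope check on the electricity-vs-AI investment comparison.
# Only the two percentages come from the panel discussion; the GDP totals
# below are illustrative assumptions.

WORLD_GDP_USD = 110e12   # assumed world GDP, roughly $110T
US_GDP_USD = 29e12       # assumed US GDP, roughly $29T

electricity_share = 0.05  # ~5% of world GDP per year during electrification
ai_share_us = 0.011       # ~1.1% of US GDP per year on AI today

electricity_annual = electricity_share * WORLD_GDP_USD
ai_annual_us = ai_share_us * US_GDP_USD

print(f"Electrification-era spend: ~${electricity_annual / 1e12:.1f}T per year")
print(f"US AI spend today:         ~${ai_annual_us / 1e9:.0f}B per year")
print(f"Implied headroom:          ~{electricity_annual / ai_annual_us:.0f}x")
```

Under these placeholder totals, the electrification-era benchmark sits more than an order of magnitude above today’s US AI spend, which is the “long way to go” point being made.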
The other thing I believe is that AI is a weapon, and we should treat it like a weapon. We should care about it and invest in it, because it will hold the most critical weapon in the world, which is people’s minds. If I can control people’s minds, I naturally have the most powerful weapon out there. And for us in the US, we need to be obsessed with this. That means owning the entire stack and making sure our allies can enjoy our stack rather than our competitors’.
Jason Hsu:
Div?
Divyansh Kaushik:
Yeah, I think that’s all great. I’m looking at it more from the perspective of: okay, you’ve got all the models coming out, and there’s certainly a lot of well-deserved attention there. There’s certainly a lot of well-deserved attention on the hardware that goes into them, and kudos to some good companies in the United States doing that work. But what happens next? Where do we integrate these? How do we integrate them into factory settings in manufacturing to give us the advantages we’re seeking?
We have lost a lot of tacit knowledge about how to build at that scale, which our near-peer adversary currently enjoys. We also have a lot of regulatory barriers; for instance, you can’t test a drone right now unless you get explicit permission from the FAA. That’s why you test drones in the Osage Nation in Oklahoma: they control the airspace there.
You’ve got all these issues currently standing in the way of this technology actually creating transformative change. So the conversation about whether it’s a bubble or not: it’s also a choice whether it becomes a bubble or not. Do we see the transformation tomorrow, or do we see it 20 years from now? Those are choices we have to make. I was just talking to somebody this morning: in 1979, the Office of Naval Research and Westinghouse put five million dollars into Carnegie Mellon to create the Robotics Institute. In 1986 came Navlab 1, the first self-driving car, and later Navlab vehicles drove from Pittsburgh to California. It took that long to get to where you see Waymos driving around, yet there is still no national standard, and there are states trying to ban autonomous vehicles. When I think about that from an AI policy standpoint, that is one of the key areas where we have this technology being deployed.
And so there is this inertia among us about change; we do not want to see that change. There’s a whole conversation about job loss, which is real, there will be some, but the answer some give is, “Let’s just not do it.” That is not what we’re seeing in many other parts of the world.
In fact, and I’d love to talk more about robotics in this conversation, China has created specialized factories in Southeast Asia where people’s whole job is to collect data for robotics. There’s no manufacturing happening, just data collection. We have not even started thinking about it from that perspective. Some companies have, but a lot of companies are just saying, “Okay, I’ll have the robot watch videos for hours,” and that may get you somewhere, but a lot more sophistication is needed to integrate AI across the economy. There are a lot more supply chain vulnerabilities we’ll have to address, a lot of sensors we’ll have to deploy. These are all choices we can make. So that’s where I’ll leave it. Over to Jack.
Jack Mallery:
Yeah, I think what’s so exciting about this technology is that AI is not only a fundamentally new technology; it’s one that upgrades and transitions existing industries. That’s new, and that’s really exciting: you have these large, general-purpose foundational models that are a new and exciting technology in and of themselves, but you also have these models proliferating and diffusing into other industries and transforming them.
And I think this goes back to what the panelists have talked about, which is the full stack. America needs to lead not just in chips and infrastructure but in things like energy and open-source models; these are incredibly important to the United States and to artificial intelligence. So to touch briefly on energy and open source: energy is the foundation of everything for the American stack, and at NVIDIA we design our chips and systems with energy constraints in mind.
If you look at the performance increase from Hopper to Blackwell, our previous generation to our current generation, that’s a 25x increase in energy efficiency per token generated. So if you’re a US hyperscaler or a neocloud, you’re asking: how do I maximize the performance I get out of my data center shell with a certain amount of energy? That is going to be incredibly important: how do we maximize compute performance for the energy going into that data center?
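To see why per-token energy efficiency dominates for a power-constrained site, here is a minimal sketch. The 25x ratio is the figure cited above; the site power and the absolute joules-per-token numbers are hypothetical placeholders.

```python
# Why energy efficiency per token matters when the data center, not the
# chip supply, is the constraint. The 25x ratio is from the discussion;
# the absolute numbers are hypothetical placeholders.

SITE_POWER_W = 100e6       # assumed 100 MW power envelope for the site
OLD_J_PER_TOKEN = 5.0      # hypothetical energy per token, prior generation
NEW_J_PER_TOKEN = OLD_J_PER_TOKEN / 25  # the cited 25x efficiency gain

for label, jpt in [("prior gen", OLD_J_PER_TOKEN),
                   ("current gen", NEW_J_PER_TOKEN)]:
    # watts are joules per second, so W / (J/token) = tokens/second
    tokens_per_sec = SITE_POWER_W / jpt
    print(f"{label}: ~{tokens_per_sec:,.0f} tokens/sec at 100 MW")
```

The fixed power envelope is the point: at the same site, a 25x efficiency gain is a 25x gain in token throughput.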
Secondly, I’ll touch on open source, which is incredibly important. Large closed-source models are unbelievably important and foundational to AI, but open source is also a crucial part of this. For a lot of the transition to US onshore manufacturing, we’re going to need small, distilled open-source models, fine-tuned on proprietary data, with a wide range of capabilities. You’re going to need a multimodal model embedded in a production line that can do visual yield inspection to improve the efficiency of your factory. So open-source models are incredibly important, and we should think about AI in terms of the full stack rather than lumping it into one thing. I’ll leave it with that: energy is something we think about a lot at NVIDIA, and open source as well.
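A minimal sketch of the kind of small fine-tuned vision model described above, assuming PyTorch and torchvision. The good-versus-defect framing, the frozen-backbone recipe, and the random placeholder batch are all illustrative choices, not a description of any production pipeline.

```python
# Sketch: fine-tune a small open vision model as a stand-in for a distilled
# production-line yield-inspection model. Placeholder data; illustrative only.
import torch
import torch.nn as nn
from torchvision import models

# Small pretrained backbone as a stand-in for a distilled open model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: good vs. defect

# Freeze the backbone; train only the new head on proprietary images.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Random batch standing in for proprietary production-line images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
for step in range(3):  # a few steps just to show the loop shape
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss={loss.item():.3f}")
```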
Jason Hsu:
Obviously there’s a lot to delve into from all of your remarks, but let me play devil’s advocate a little. What are some of the impeding factors that could prevent AI technology from developing further, and where do you see the potential hurdles? In the green room we mentioned the bubble, and I know there’s a lot of speculation out there, but from your firsthand observation and experience, what factors could puncture the whole AI story, and how do we prevent that? To you first, Lior.
Lior Susan:
I mean, you need to make money. At the end of the day, the application needs to make sense, and you need a return on investment, or TCO, however you measure it. I built companies in the manufacturing world 15 years ago, and for the last 10 years I’ve been investing in and building companies in US physical industries. What I love about physical industries, semiconductor manufacturing, logistics, defense, robotics, is that you need to show return on investment. At the end of the day, the technology needs to serve a purpose you can somehow correlate to profits and free cash flow.
You cannot just say, “Oh, look at this great application they built and how many users they have.” At the end of the day, if you’re building a nuclear reactor, energy storage, or a chip, nobody cares about the demo. You built a chip? Okay, talk to me about the wafer yield: did you improve the process so I can get more dies out of my wafer? Energy storage: can I get better uptime or more cycles out of my batteries? The AI needs to pay for itself, and we need to show strong correlations between the technology spend and the unit economics.
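To make that decision rule concrete, here is a minimal payback sketch. Every number in it is a hypothetical placeholder; the only thing it carries over from the remarks above is the requirement that the spend map onto unit economics.

```python
# Minimal payback sketch for the "AI needs to pay for itself" rule.
# All figures are hypothetical placeholders for illustration.

capex = 450_000           # assumed cost of an AI inspection cell, USD
opex_per_year = 30_000    # assumed annual compute/maintenance cost, USD
labor_savings = 180_000   # assumed annual labor savings, USD
yield_savings = 90_000    # assumed annual scrap-reduction savings, USD

net_annual_benefit = labor_savings + yield_savings - opex_per_year
payback_years = capex / net_annual_benefit

print(f"Net annual benefit: ${net_annual_benefit:,}")
print(f"Simple payback:     {payback_years:.1f} years")
```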
Jason Hsu:
Yeah. Div?
Divyansh Kaushik:
I mean, the nearest threat I see, the closest threat, is time to power. It’s not even a lack of energy, it’s a lack of power, of electricity. We have plenty of energy; we’re just not transmitting it. One of the reasons you see co-location of data centers with energy sources is that interconnection queues are so long. And that is also why you see state and local politicians in the Northeast right now, Democrats and Republicans, running against PJM: the utilities are not serving the need the way they should be.
And you’re starting to see that as interconnection times grow so much and the lead times on transformers, circuit breakers, and switchgear grow so much. Your electricity prices are really made of two things: one is the energy cost, and one is the utilities and the poles and wires. When you start seeing issues show up in your supply chain and prices go up, your electricity prices go up, and your voters, and I’m not going to use the term ratepayer, your voters are going to start to say, “This is not okay in my neighborhood,” because connecting this data center to the grid is increasing electricity prices. I think that is one of the biggest near-term challenges we have over the next three years, with the amount of power these data centers are looking for and the amount of power the entire country is looking for. There’s so much electrification happening, and there are new industrial facilities coming online. That, I believe, is the single biggest challenge for the next three years in terms of making sure this technology can proceed.
Jack Mallery:
Yeah, absolutely, echoing everything that was just said: we really think about energy as the constraint. It’s not access to chips; it’s access to energy. The problem is that for the better part of the last 20 years, load growth has been stagnant in the US, and now, with onshore manufacturing and the AI data center build-out, we’re witnessing a transformation in the energy sector that really hasn’t happened in the past 20 to 25 years. This is something I believe will get done, but it’s going to take a lot of time and effort, because, like Div said, the lead times on core power equipment like transformers are often years at this point, and the land that’s needed is really difficult to get.
There’s a business whose entire model is helping farmers assess whether their land is good for AI data centers. That is an entirely new business that exists just because it is so difficult to get the land, the energy, and the permitting to get this done. And I think that’s really what we think about.
And in terms of NVIDIA, we obviously design our systems with energy as the input we’re controlling for. Like I said, there was an increase from Hopper to Blackwell, and there’s going to be a similar increase from Blackwell to Vera Rubin, which comes next year. At the center of all of it, we’re trying to maximize the computational performance of AI data centers relative to the chips and the energy that go into them. If we can’t stand up those data centers, that’s going to be a huge problem. But I think everyone here has confidence that we’ll be able to do this; it’s just going to take getting that muscle going again.
Jason Hsu:
So let me press on that, Jack. You mentioned energy. What ratio or equation makes sense for compute to be further distributed and democratized, so the general public has ample access to it without being stifled? The question is: NVIDIA is developing chips that are more and more effective and performant, but will the energy demand keep growing alongside them? And what technologies will be critical to bringing that energy demand down while increasing efficiency?
Jack Mallery:
I think we’re seeing in our design cycle, which Jensen talks about a lot in the GTC keynotes, a transition where NVIDIA is increasingly thinking about what AI will look like in two years, five years, 10 years. As many people from the semiconductor industry in this room know, these design cycles take a long time; going from design to tape-out to manufacturing of these chips takes a really long time. So we’re having to think long term.
A specific example of that is the Rubin CPX system. We’re thinking about the fact that as AI evolves, you’re going to be doing a lot of training, and you’re also going to be doing a lot of inference as you deploy these systems in the world. So we have CPX, an accelerator designed specifically for inference. Think about how AI inference happens: you take a tranche of documents and give it to your AI assistant. That first part, ingesting the documents, is really computationally intensive, but it’s not very memory intensive.
That first part is called prefill; the second part, generating the response token by token, is called decode, and decode is memory intensive. So what NVIDIA did is design a chip for that first half, the prefill part, to accelerate the time to first token in inference. That helps maximize the energy efficiency of the data center, because you’re being smarter about what you’re doing.
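To make the prefill/decode split concrete, here is a rough arithmetic-intensity sketch under a standard simplification: a dense transformer does roughly 2P FLOPs per token and must read roughly 2P bytes of 16-bit weights per forward pass, ignoring activations and KV-cache traffic. The parameter count and prompt length are hypothetical.

```python
# Rough roofline-style sketch of why prefill is compute-bound and decode
# is memory-bandwidth-bound. Simplification: ~2*P FLOPs per token and
# ~2*P bytes of bf16 weights read per forward pass; activations and
# KV-cache traffic are ignored. P and N_PROMPT are hypothetical.

P = 70e9               # assumed model parameter count
WEIGHT_BYTES = 2 * P   # bf16 weight bytes read per forward pass
N_PROMPT = 4096        # assumed prompt length in tokens

# Prefill: all N prompt tokens share one weight read (big batched matmuls).
prefill_intensity = (2 * P * N_PROMPT) / WEIGHT_BYTES  # FLOPs per byte

# Decode: one new token per forward pass, but the full weights are re-read.
decode_intensity = (2 * P) / WEIGHT_BYTES              # FLOPs per byte

print(f"prefill: ~{prefill_intensity:,.0f} FLOPs/byte (compute-bound)")
print(f"decode:  ~{decode_intensity:,.0f} FLOP/byte (memory-bandwidth-bound)")
```

Under this simplification the intensity gap is simply the prompt length, which is why a prefill-specialized part can trade memory bandwidth for compute.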
And I think the way we think about our systems is that NVIDIA’s chips are designed to be useful for both training and inference, but we’re also making novel technical tweaks in the rack-level designs to accelerate the applications people are actually running in the data center.
Jason Hsu:
A few months ago, China released DeepSeek, which took a different approach: a reasoning-focused model rather than the kind of frontier model OpenAI and Anthropic are focused on, or Llama from Meta and Gemini. What I see is China finding a niche to win this race: despite lacking high-end chips, they found ways around that constraint and achieved comparable training results and performance.
So my question to all of you, and Lior, you speak from the investment and venture perspective, and I’d love for Div and Jack to weigh in as well: how realistic is the US assessment of China’s AI capability? Over to you first, Lior.
Lior Susan:
I actually was thrilled to see them announce it. I built a factory there, in Zhuhai, in 2013, and I remember calling my wife in the first four weeks and saying, holy shit, if we don’t wake up, we’re all going to work for them.
Jason Hsu:
Yeah.
Lior Susan:
And the main point is, I think China, rightfully for themselves, got obsessed with what I call the five forces: capital, talent, technology, customer demand, and policy. By aligning these five forces, they’re able not only to scale and lower the cost of manufacturing, they’re actually able to innovate.
I think this is the first time I’ve seen in this country in a long, long, long time, maybe since World War Two, arguably I’d say since Henry Ford’s time, that the five forces actually exist in the US. And as a builder, this is the most exciting time ever to build here in this country and go compete with China, because they’re not waiting; they’re working on a very long-term horizon. And I don’t want their lifestyle, I want my lifestyle here. But we need to remember that we have to fight for this lifestyle; we forgot how to build and innovate and work hard, and how to really take the best minds in this country to compete on a global level. So when I saw DeepSeek, for me it was just more fuel: yeah, of course they’re innovating, of course they have smart people, game on.
Jason Hsu:
But how do you compare that with China injecting state capital into a sector that, in a country like the US, is fairly open competition, while China is able to organize its resources?
And I’ll share an anecdote with all of you: after realizing they could not get high-end chips, China transformed and repurposed fabs across the country to mass-produce legacy chips. So by 2030 they could dominate 72 percent of the world’s legacy chips, and those are the chips that power your everyday life, in microwave ovens and so on.
So the question is: if we have set ourselves up to fall into the trap of such a choke point, what can we do now to prevent that from happening? At least that course is clear in front of us right now.
Lior Susan:
I mean, I think they are doing that, but they’re also innovating on modern nodes.
Jason Hsu:
Yeah.
Lior Susan:
They’re taking Huawei and they’re going to kingmake them, as you know, in order to learn how to build chips that will compete with our chips. What do we need to do? We need to compete. And we need to work really freaking hard in order to compete.
Divyansh Kaushik:
I’ll say a couple of things. And I see Dan Remler is here, who in a previous life did a lot of good work on this issue, on how much of a lead we have. But I see DeepSeek, and I see Kimi, and I’m like, “All right, this is a fast-follower competitor.” Yes, people can follow fast, and that’s okay; it just means we have to keep running too.
On a couple of other pieces of the stack, there are different estimates of how far ahead we are, whether on the chips themselves, on tools, or on the application layer in robotics, and they are going to try to catch up. That is going to be true of every adversary. When Sputnik happened in October 1957, four months later, if I’m correct, we put our own satellite in space. People are going to try to catch up; that’s not new. So we have to assume that, and the action we have to take is to stay ahead of it: limit what they get access to, while continuing to develop, deploy, and further mature our own sector.
The other piece of it is our own choke points, and it’s not just legacy chips; we have way too many choke points right now. If you want to build a fiber network in the United States, about 90 percent of your fiber is imported from the PRC. Displays. Batteries for any of the embodied AI systems. All of those are critical choke points we need to be figuring out. I’m sure everybody in this room has become bored of the word magnets; I certainly have, because we knew about this in 2010, when China took that action against Japan, and what did we do? So the question is: what do we do now to get ahead of the choke points that will emerge by 2030 or so, while reducing our reliance on them for the components we depend on today, and while preserving our lead in the areas that the United States and our partners and allies control, whether that’s, to your point, high-end chips or semiconductor tools?
Another piece I’d mention: there used to be a view of China as, okay, they’ve got the quantity, not the quality. I don’t think that’s true anymore. They’ve got both quantity and quality in their talent pool. They certainly have the researchers to develop better models if they had access to the hardware. Tencent just revised its capex down a day or two ago because they don’t have enough hardware to put in their data centers. That is our leverage point, where we have control over them, because they’ve got the people and they’ve got the energy. They could be building far better models if they had access to the hardware that goes in.
The final point I’d make here: if you look at the researchers on the DeepSeek-R1 paper, I think everybody on that paper was educated in the PRC. It’s not international students going back. They’re hiring domestically and cultivating that talent. So that’s one thing we need to watch.
And the other thing: a lot of the quote-unquote technical advances in that paper, like mixture of experts, well, I’m sorry, but Mistral did that two years ago. It’s just sad that the Europeans can’t advertise what they do.
There was no new R&D in that paper; it was just good engineering, really good engineering. So we have to be clear-eyed about what the challenges are in order to design the solutions accordingly.
Jack Mallery:
I’ll say: if we look across the full stack, from energy to chips to infrastructure to models and applications, we really are neck and neck. We each have our comparative advantages at each layer, but looking across all of it, we are really, really close to China.
And one of the things China really focuses on is the diffusion of its technology worldwide. To build on what the panelists have said today: we need to compete. We need US technology to go abroad. We need to be the global standard that everyone builds on, because a lot of what NVIDIA thinks about here is: what do developers like? What are developers working on? And where are the AI developers? Half of the world’s AI developers are in China. So we need to recognize that China has the capability to stand up a completely indigenous AI ecosystem without access to American technology, and they will continue to advance.
China is in many ways a fast follower, but we should be clear-eyed that if we stop running, they will keep going, and that these are two separate ecosystems. China is not reliant on America for advances in its AI ecosystem. You talked about state capital, but if you look at the capital going into the Chinese AI labs creating these novel applications, DeepSeek, Moonshot AI, which makes Kimi K2 Thinking, Alibaba with its Qwen models, most of the labs coming out with these novel architectural improvements are funded by private capital. It is the private sector in China that is driving a lot of this development, and it’s the extremely talented AI workforce in China driving it as well.
And I think we should be very clear-eyed about the fact that if we stop running and hold back our capabilities, that does not mean China stops with us, because they are incredibly innovative. It’s something we think about a lot at NVIDIA, where we very much view the technology industry as America’s national treasure. It’s one of the things that makes this country so strong. The ability of American technology to go overseas and set the global standards that other countries build on top of is what makes this country’s economy strong and our technology industry strong. We should be very clear-eyed that in the absence of US technology, China will diffuse its completely separate indigenous stack. And other countries would love to choose American technology if given the chance.
But AI is so exciting; it’s such a new technology. What we’ve talked about today shows that AI is a diverse, multi-use technology that excites people all over the world, and they want access to it. They want access to large foundational models, to applications, to all the novel things AI can do for their economies, and we need to give them the ability to choose the US tech stack.
Lior Susan:
Maybe just one last point on that one. I grew up in the military, and you don’t give your best tools to your enemy, right?
Divyansh Kaushik:
Sure.
Lior Susan:
You agree that we should not give the CAD files of the F-35 to China, correct? That would be a bad idea.
Divyansh Kaushik:
Don’t give anything to China.
Lior Susan:
We are. I mean, the majority of GPUs today are being built in China.
Divyansh Kaushik:
Yeah.
Lior Susan:
The iPhone in China, Tesla in China. What happened? Xiaomi is based on Apple. BYD is based on Tesla. Huawei and others are based on NVIDIA. So we also need to be obsessed with building infrastructure in this country. And yeah, I’m game. As an investor, and I’m very excited about the current administration, if the government wants to take a stake in one of my companies, I’d love to see it. I was actually very happy to see the Intel investment. Great, awesome, I don’t see any issue. We need to build infrastructure in this country that allows our best minds to run the economic flywheel of not only designing here and coming up with inventions, but also building here.
Jason Hsu:
There’s so much to unpack right there. I think one critical piece is, when I learned about DeepSeek, I spoke to some people who had contact with their engineers. They told me DeepSeek had over 1,000 engineers, and their CEO told them, “You don’t ever need to leave China, and you can compete with the US.” They actually barred their engineers from leaving the country: you don’t need to study abroad; hone your local technology and you can compete with the United States. To me that’s scary, because it used to be the case that we knew what they were doing, knew exactly what they were working on, and could develop ways to counter whatever actions they took.
Now, when it comes to competition with China, I want to ask your thoughts on this. There are two schools of thought. One is to create dependency: get China hooked on US technologies, sell as many chips as possible, export as many GPUs below the restriction threshold as possible. But that dependency cuts both ways and creates vulnerability; the seller becomes overly reliant on that market’s economic resources. That’s why some of you might have read the book Apple in China, and several others carry similar implications.
So the question to you all, since you each work on different aspects of this field, is: exactly how much should we be selling to China before we lose our competitiveness? Obviously NVIDIA is on the spot here, so I’ll leave Jack the last answer to this question. But to you first.
Lior Susan:
I think I made my point about how little I think we should sell them.
Jason Hsu:
Okay, yeah.
Lior Susan:
By the way, maybe just to finish, though: NVIDIA and Apple and Tesla have earnings to support. If they cannot competitively build their products in this country, they will not build their products here. We as a society all owe it to create the infrastructure across the five forces I talked about, from government to talent, from capital to technology and customer demand, to make it very economical for them to do it. China was not competitive on day one. They worked 50 years to build their infrastructure, and they did a good job. Hats off, but they’re my competitor.
Divyansh Kaushik:
My personal view is that every company in every strategic industry will realize it has to decouple. And if they think they can capture the Chinese market or get it addicted to them, that’s extremely wishful thinking. It has not worked out for any company whatsoever.
And second, I would just echo: I mean, North Korea has nukes. Should we give them our nukes so that they’re addicted to our technology? I wouldn’t do that. It sounds stupid to me, but like . . .
Jack Mallery:
Yeah. First off, I think everyone in this room knows NVIDIA’s position on this issue. But to elaborate: this is a very complex issue, and I personally don’t like to boil it down to something very simple. What we think about is that NVIDIA is in many ways the accelerator for the US startup ecosystem and the US technology ecosystem for AI applications, not just with CUDA, but with our broad software stack for accelerated, parallelized computing, which this company pioneered. Many of the architectural improvements in the efficiency of our systems come from the number of developers working on our software stack, debugging our systems and optimizing them, so those benefits diffuse across the ecosystem. Half of the world’s AI developers are in China.
We know very well how much our company benefits from having a wide variety of extremely talented AI developers working on our systems. And we know exactly how much benefit it will give our competitors in China if those developers are not working on our platform. We understand that developers choose the flywheel that creates the most benefit for them and their company, and if they do not have access to the American flywheel, they are only going to supercharge an entirely indigenous flywheel in China.
We have seen this with the growth of the developer ecosystems of our competitors in China, which have skyrocketed over the past couple of years. It’s not a coincidence that many novel applications in China are being built on either external silicon, from companies like Huawei, Cambricon, MetaX, and Moore Threads that sell their chips, or internal silicon, like the PPU from Alibaba or the Kunlun family from Baidu. We need to step back from the idea that China can only do this with the US’s help. China is going to do this with or without America’s technology.
And the question is: do we want to gain improvements to our systems and platforms from the talent in China? Or do we want that talent to go onto a separate stack that does not benefit American AI developers, does not benefit the world’s AI developers, and only benefits Chinese companies working on Chinese silicon in a Chinese stack?
Divyansh Kaushik:
I just want to echo one point Jack made, on talent. You made this point about DeepSeek: it was reported early on that they had to hold the passports of everybody on that paper so they could not leave. People want to leave. Look at the long-term stay rates of Chinese nationals who come to the United States for grad school: more than 90 percent are still living in the United States 10 years after graduation. They do not want to go back. Nine out of 10 do not want to go back; they want to live here and contribute to the United States.
So I want to emphasize that point on talent, though I may slightly disagree on how to approach it: I think we get that talent out of there and tap into it here, versus tapping it while it stays in China. And I think that’s a reasonable disagreement.
Jason Hsu:
Yeah. Let’s talk about building America. Obviously talent is a big piece; infrastructure is another, energy, and so on. To you, Jack: Jensen paints a picture of the AI factory as the future of full-stack integration from software to hardware, which speaks to the re-industrialization this country is moving toward. Can you speak a little about what that would look like from an AI infrastructure perspective? As someone from Taiwan who has studied Taiwan’s industrialization effort and been part of the policy-making community, it’s what I’ve believed in and worked on. But to do it in America, what pieces need to be in place for this to happen, and to happen quickly, since, as you’ve said, China is catching up at a very fast speed?
Jack Mallery:
Yeah. I think we’re doing everything right right now. NVIDIA has made a $500 billion commitment to bringing manufacturing back to the United States. That encapsulates everything from physical factory setup to purchase commitments to demand guarantees across our entire supply chain: from Foxconn, one of the rack suppliers for our systems, building factories in America, to demand and purchase guarantees in the semiconductor industry that create a financial incentive to onshore manufacturing. We are doing more than almost any company out there to bring this back to the United States.
And to talk a little about why: the Semiconductor Industry Association has done some really interesting work on how many downstream jobs data centers create. A data center involves a lot of people in its construction, operation, and maintenance, but the effect then diffuses through the entire supply chain.
So you have growth in, as we’ve talked about today, the energy industry. We have the amazing work going on at TSMC Arizona and at Intel. And we have to think not just about how many jobs go into the data centers themselves, but about what the downstream supply chain creates, and the answer is an incredible amount of jobs and talent. We need electricians, we need engineers, we need welders. Pay for people with these really important trade skills has doubled since this started. We’re seeing a boom across all of these sectors in America, not just in data center investment but through the whole supply chain. And I know NVIDIA and our ecosystem partners are a huge part of that.
Divyansh Kaushik:
I think the point Jack made on electricians is really, really important. Fifty percent of the electricians we have today are going to retire over the next 20 to 30 years, and if you look at Bureau of Labor Statistics data, that is the most experienced part of the electrician workforce. So we need to invest in apprenticeships today to create that talent pipeline.
On the energy piece Jack mentioned, I’d refer all of you to the Dallas Fed’s last quarterly survey of oil and gas producers in the Permian Basin. The comments you’ll see from those producers are that they’re slowly but surely going out of business and have not drilled a single well this year. Why? OPEC has dramatically increased production, and oil prices are hovering around $60 a barrel on the Brent index. That’s not profitable for most regions in the Permian; you need about $65 to $75. The President is also calling for $40-a-barrel oil, which perhaps plays into this as well.
And then we have tariffs on the inputs. You can draw the lines differently on which products should be tariffed and which should not, but those tariffs are creating a lot of pressure points on the industry in the Permian, where a lot of companies are going out of business and a lot of consolidation is happening. That is another area we really need to pay attention to, because the American shale industry is one of our superpowers and a competitive advantage, and we need to tap into it much more for this data center boom to actually succeed.
Jason Hsu:
Lior.
Lior Susan:
Yeah. I mean, we’ve built 85 companies in the last 10 years here in the US, innovating in physical industries. They employ about 20,000 people and will do, in aggregate, about $15 to $16 billion in revenue this year. And what I’d say is, when I started building those companies 10 years ago, it was actually much harder. Now the five forces are aligned here in this country, and as a builder, it’s the best environment to build in.
One thing that’s always a good reminder: when you want to innovate and compete with China in the industrial arena, you always need two things. The first is scale, because if you don’t have scale in industrial, it’s whatever; you can show me a video or some YouTube demo, and I’m not interested; you’re not going to make an impact on an industrial market. And the second is skill.
Now, China of course had the advantage: they had a lot of people, and by doing this over 50 years, they built incredible skill. Today, with robotics and automation, we can get the scale, and with AI, we can get the skill. China knows it, and they’re playing much more defense now, thinking really hard about how this will impact the massive infrastructure they built on an analog basis, while we have the ability here to build straight digital: build those factories fully automated, leveraging software, and build those data centers fully automated or close to it. We can show you companies in our portfolio connecting flexible cables with robots, something that historically was very hard to do. But thanks to our friends at NVIDIA and other companies, you can use diffusion models, reinforcement learning, transfer learning, different kinds of approaches. So I think scale and skill are now reaching the point where we can bring technology to bear and go innovate and build.
Jason Hsu:
Lior, you have an interesting point on government-enhanced venture building, and I use that term in a neutral sense when it comes to companies such as Intel. In other words, the government taking a hand in shaping the course of industry.
I want to get all of your thoughts on this: since we’re talking about competing with China, where the heavy hand of the state and of state capital is very obvious, how would you advise Washington to reallocate focus and resources to the specialized AI areas you’ve talked about? And Jack, where does NVIDIA see the horizon for refocusing and identifying a few key areas to win? What would your priority be in terms of advice to our government?
Lior Susan:
Yeah. I actually have a tie; it’s in the bag, because we’re going to the White House after this, and as a soldier I hate ties, so I’m saving myself a few more minutes without it. I would say I’m very bullish, putting politics aside, very, very bullish on the administration.
The focus on critical industries that are foundational to our superiority in AI is right. If you’re building some enterprise software, the government shouldn’t care about it, shouldn’t own a stake. If you’re building another fintech app or another piece of financial software, the government shouldn’t care about it. But if you’re building something in rare metals, in manufacturing, in defense, in nuclear, the government should care about it. And I’m fine with them holding a minority stake alongside me, to align incentives and make sure that, yes, I get special treatment, because I build in this country. We should.
Jason Hsu:
So as a venture capitalist, you welcome governments-
Lior Susan:
All day long.
Jason Hsu:
. . . taking shares of your portfolio companies.
Lior Susan:
Yes. And just to answer the question: in my mind there are five layers of the AI stack the government should focus on. First, rare metals, the ingredients of what we’re using. Then the fabs, the semiconductors, basically how well we build the wafers. Then packaging: after we build the wafer, we need to package those chips. Then assembly, the manufacturing that puts all of these products together. And of course, the AI models and the power.
Jason Hsu:
Should those have government enhancement? Div?
Divyansh Kaushik:
Look, and this is again my personal view, so don’t quote me, but I think there are areas where government equity and partnership make sense, and areas where they don’t. Intel’s problem is not capital; Intel’s problem is a lack of customers, if you think of it that way. MP Materials’ problem is capital. So there are different tools in the industrial policy toolkit that need to be deployed differently. Now, I’m not saying either of these is right or wrong; smarter people than me can judge that, and history will be the ultimate judge. But if you’re deploying industrial policy, there’s a lot of politics and a lot of baggage that goes with it. The United States has tried and failed at industrial policy many times, and if we’re going to do it, we have to do it right, and we have to do it patiently. It requires thinking through the end objective we want to achieve.
On the other side of it, just look at Chinese industrial policy. A lot of state and provincial governments have propped up companies they cannot afford to lose. They cannot afford for them to go down, so more and more capital goes in, prices spiral downward, and you get the dumping issues we’re dealing with today, where they export products at massive discounts because they don’t have the consumer base to absorb them. There’s no discipline. We have to avoid getting there while still reaping the benefits of industrial policy. People who ran the CHIPS program have started a project called Factory Settings to understand what worked well and what did not work well with the CHIPS Act. That’s the kind of analysis we need.
Second, on your question of where government should be looking: I personally see a lot of potential in robotics, synthetic biology, and space. We haven’t really tapped into the space economy; we’re starting to, with a number of startups beginning to do manufacturing in space. Synthetic biology is picking up a lot; it’s already there, and we just have to see where the market lands. On robotics, there is so much that’s going to happen, and Jensen talks a lot about physical AI. That is going to be key to our industrial process. Building at scale over multiple years is what gives us the tacit knowledge of integrating these technologies into different parts of the economy. It’s one thing to build 5,000 parts; it’s another to put those 5,000 parts together so they work seamlessly. That knowledge comes over time, with more reps. So that is something we need to develop, protect, and preserve over time.
Jack Mallery:
Yeah, I couldn’t agree more. I think we’re doing everything right for bringing back manufacturing and for understanding how these systems and new technologies are going to transform people’s lives. At the heart of this is an understanding that the US technology industry is really a miracle, an unbelievable industry cultivated in this country that has helped make it the economic powerhouse it is today, and we should be embracing that industry and enabling it to move forward. That means things on the energy side for data center construction, and not just for data centers but for manufacturing broadly. We need to enable our technology industry to do what it does best: take private capital, move it toward innovative players, take risks, and allow some companies to fail. That is the lifeblood of the US technology industry and Silicon Valley, and I think we’re on the right trajectory in welcoming the tech sector and understanding that it is one of the jewels of our country.
Jason Hsu:
In terms of the competition trajectory, I think there’s going to be a disparity: China will focus on middle-to-lower-end acceleration and scale, pushing a $10,000 EV to the Global South or parts of Europe, while the US moves toward the upper echelon of the value chain. But you’ve described all the great things this country is doing, and my fear is that we aren’t running fast enough. If China focuses on dumping, pushing its technologies white-label to countries where it has economic influence, flooding them with cheap products and controlling the data inflow and outflow choke points, then fast-forward five years and China may control the whole stack, software and hardware, in the countries it exports technology to. It’s already happening: Huawei has installed millions of network devices across the African continent and other parts of the Global South.
So the question is: if we were to choose areas to compete in, are they the ones you mentioned, space, biotech, synthetic biology, and so on, the essential building blocks? And for an industry giant like NVIDIA, how do we all organize ourselves together? China seems very organized in the way it operates. You’ve lived in China and built a business there; how do we organize ourselves in all of this?
Lior Susan:
I’d say they’re good-
Jason Hsu:
Yeah.
Lior Susan:
And they have the value of time. They don’t operate-
Jason Hsu:
They’ve been doing this for the last 30, 40 years.
Lior Susan:
And they don’t operate on four-year cycles with midterm bullshit, so they can run marathons and build a strategy for that, and that’s what they’ve been doing. It’s interesting. It reminds me of a presentation I gave to our limited partners two weeks ago on this point; I’ll send it to you. We call it Made in Eclipse rather than Made in China. And I showed what they call, in Chinese, the overlap economy. They subsidize the EV car, but they actually have seven other companies selling into that same EV: one company selling the chip, one selling the software, one selling the sensors, CATL selling the batteries. So they subsidize the car and kill the German OEMs, who make money only on the car, because China actually makes money on the seven other industries. So it’s not flooding the market out of a lack of financial discipline; it’s a strategy.
They will take an industry they believe is strategic for them and subsidize it from the central government’s budget while making money elsewhere, in the other sectors they think are strategic, like semiconductors, robotics, and AI driving; Momenta will sell its self-driving system into that car. Without really understanding culturally how they operate, we cannot really compete. And I personally struggle with the notion of industrial policy in this country. We don’t need industrial policy; we need industrial culture. We need to go build.
Jason Hsu:
Industrialists.
Lior Susan:
Yes, we need to have pride in building.
Jason Hsu:
Div?
Divyansh Kaushik:
It would be nice to have a special envoy whose job is just diplomacy on critical and emerging tech, if you want to do that. But there is so much more we could be doing. I’ll give you an example: compare the Manufacturing USA institutes, chronically underfunded not-for-profit institutions with 400-plus or 500-plus members, with the manufacturing innovation centers in China: for-profit entities with 20 to 30 equity holders, creating clusters around universities to prop up a whole ecosystem. They bought GE’s home appliance business, for instance, and created a whole manufacturing innovation center around it. They announced four MICs last year: two on robotics, one for industrial robotics and one for humanoids, plus one on advanced manufacturing and one on synthetic biology.
We’ve got one Manufacturing USA institute on robotics, in Pittsburgh: the Advanced Robotics for Manufacturing Institute, the ARM Institute, with 450-plus members. It’s doing a great job, a phenomenal job, but it reflects a very different level of ambition. You can’t compete for, or even think about dominating, an industry with just one chronically underfunded Manufacturing USA institute. These things need a lot of market signal and a lot of capital. That is a pure difference in ambition. The other thing I’d say is that I like our system, even with midterms and all the rest, because we are not putting people into forced labor. Even with our midterms and the other nuances we have, I feel we’ll come out on top because of that nature: we want to bring the village with us, not force the village to come with us.
Jack Mallery:
Yeah, I think at the heart of all of this is an understanding that this is going to take time. Consider China’s industrial expertise and ability to manufacture at scale; take BYD as an example. BYD is an incredibly competitive company, not just on technology, but because they vertically integrate almost everything in their vehicles except the wheels, the steering wheels, and the tires. They fab their own semiconductors, and in fact run some of the world’s largest fabrication facilities for application-specific semiconductors for the electric vehicle industry. That takes time. China has had the time, expertise, and process knowledge to build up these industrial clusters. America will get there, but it’s going to take time, so we need to be patient.
But in the meantime, while we are being patient, we need to understand that there are sectors where America is leading, and we need to embrace those sectors and go global in them. AI is a really good example, because AI is not just in the data center; it is moving, now and in the future, into existing industries. Look at the partnership we did with Nokia. If you look at the patents for 6G, pretty much everyone agrees that AI is going to be incredibly important in 6G. We know very viscerally that we lost 5G. So what are we going to do? We can use our expertise in existing sectors, where that expertise bleeds into new technologies and transforms them, to get a leg up and re-enter sectors like that.
So I think at the core of this, we should be serious about understanding the risks of this technology, but we should not be afraid of it. And we should not let a murky read of the ground truth stop us from going out, trying to take these markets, diffusing our technology, and trying to take back technologies like 5G that we lost. Like I said, this is going to take time; it’s not going to happen overnight. But we should take advantage of the places where we are leading, and we should not hide. The strength of America’s technology industry does not come from hiding or from creating walled gardens. It comes from going out internationally and leading, and we should enable that.
Jason Hsu:
We have time for maybe two or three questions. If any of you is interested in asking a question, please raise your hand. Herberto, in the back. Briefly introduce your—
Audience Member:
Yes. Hello. One thing . . . seeing among several—
Jason Hsu:
The microphone, please. Wave.
Audience Member:
Awesome, thank you. One thing we’ve been seeing among Wall Street investment analysts and bankers is serious concern that there is an AI bubble about to pop. Notwithstanding NVIDIA’s earnings, which I know you can’t speak to just yet: if there is a bubble that pops in 2026, how do you feel that will change our AI strategy? Will there need to be more government support? How will AI companies raise capital, and how will this change the overall landscape in the United States?
Jason Hsu:
A question to Jack?
Jack Mallery:
Oh. So obviously I’m not going to speak to earnings, but from the NVIDIA perspective, what is one of the things that we look at? We look at whether our compute is going into a data center and whether that compute is being utilized. And there are indices that track the rental rates of our previous generations, the Hopper generation and the Ampere generation, and there is still consistent demand for our products. Amazon has been very clear on earnings calls, same with Microsoft: when they have the energy to put chips in data centers, because for them right now it’s an energy problem, not a chip problem, that compute gets utilized almost instantly. So no matter what the speculation says, and I’m sure my Twitter looks shockingly similar to your Twitter, what we are seeing is that our compute does get utilized right now, because there is overwhelming demand for these services.
Divyansh Kaushik:
I’d just add that two things can be true. There isn’t a bubble at the frontier level. You will continue to see dramatic improvements in performance. We have not hit the wall on pre-training yet. This is what I used to do in my day job, just train these models. We have not hit the wall yet, and there is a lot more to do. There is one company right now that has multimodal-first thinking machines, and they haven’t even come out with a product yet. There is so much advancement that will happen at the frontier level. Yet it is also true that there are several companies that are overvalued at the tail end. Certain companies that just have access to massive land, one IOU, and no employees are valued at tens of billions of dollars. Perhaps they shouldn’t be. So yes, maybe there is a bubble there.
I think, however, your question is very important on what happens in the policy space. The political economy: how does that change when said bubble pops? Do people just start thinking, okay, the whole thing was a bubble, and not distinguish between the frontier and the tail end, or do they actually make that distinction? Because if they do, they will realize that the frontier will continue to advance, the risks around safety and alignment will continue to pop up, and you should continue to work on those. If instead they’re like, “Okay, this whole thing is a bubble,” then we might start letting our guard down on these important issues of safety and alignment at the frontier, which would be a big mistake.
Jason Hsu:
You have anything to add to that?
Lior Susan:
Yeah, I think there are two things there. There is the valuation piece, whether the multiples people are concerned about are right or wrong, and that’s above my pay grade. I only know how to build companies; I don’t know how to do the Wall Street analysis. From a fundamental point of view, it goes back to penetration: how much of GDP we’re investing today in AI, and we are close to zero. I personally believe that 85 percent of world GDP, call it $110 trillion, a lot of it physical industries from defense to manufacturing, energy, and healthcare, is going to be significantly changed by AI and other technologies. So I believe we’re still at the beginning.
Jason Hsu:
This lady here. Oh.
Yeah, Shreya.
Audience Member Tsiporah Fried:
I would just like to ask, coming back to the AI competition between the US and China: if we shift from an LLM model to a symbolic AI model, who has the advantage? Is it China or is it the US? And what would be the impact on AI in defense?
Jack Mallery:
Are you referring to foundational world models? Okay, I can go first. What I’ll say is, at NVIDIA we develop a lot of the software that goes into robotics applications through the Thor platform. And I think what we’re witnessing is an exponential increase in the capability of robotics, but you’re going to need a lot more training data and a lot more improvements in the general-purpose brain for something like that. LLMs are by far the most robust application of AI right now, and everything else is advancing quickly. But from a technical standpoint, we’re going to need a lot more synthetic data and a lot more real-world data before we get to a place where humanoid robots are running around everywhere.
Divyansh Kaushik:
It’s very costly too. You can’t just watch videos; you have to be able to grasp something. I need to know how much pressure to apply to a glass before it breaks versus how much pressure I can apply to a stick of wood. So that requires actual data collection in the wild, and that’s costly. We haven’t invested as much as our peer adversaries have, so I think we are ahead on certain things and they’re ahead on certain things. On the physical AI world model piece, outside of the hardware-software piece, I think they’re pretty good. I personally really enjoyed Gemini when it came out for robotics. Gemini 1.5 for robotics, oh my god, game changing. The advances that you can see coming out of that, incredible.
Lior Susan:
I think you mentioned AI for defense. It’s happening, and it’s happening at scale already. We’ve been using it to have missiles make their own decisions while flying at Mach 2 or 3: which target they should hit, how they should hit it, at which angle. And it all runs by originally training a model for that and then, of course, running it at inference. So we are seeing that happening already. And we’ve been seeing swarms in Ukraine and other autonomous systems that are purely physical AI being used at scale. So I think the battlefield is definitely being changed by this technology, and fast.
Divyansh Kaushik:
And just on that, we saw during that week-long clash between India and Pakistan that there was AI-enabled ISR integration on one side and not on the other. So that also started to show up. It’s penetrating everywhere, as Jack said, and we have to make sure that we are running faster on all of these.
Jason Hsu:
As a final cautionary note to conclude the panel, I wonder: what’s your biggest fear in AI? I also want to allude to what Geoffrey Hinton has said, that AI could go rogue, and his caution to all AI developers to have clear guardrails, et cetera. So just a quick 30 seconds on your biggest fear in AI.
Lior Susan:
As I mentioned at the beginning, I think AI is an incredible technology, but it’s also a weapon. It’s a weapon that controls minds, and I believe we must own that weapon across the entire stack.
Jason Hsu:
Yeah, yeah.
Divyansh Kaushik:
I don’t have one fear, but one of the many is this: we have the technical pathway to get to automated AI R&D, where AI models are training better, newer AI models. That is not science fiction; we have the technical pathway to get there, and it’ll probably happen within the next five to six years. What worries me is if, by then, we haven’t solved some of the technical challenges around making sure these models do what we intend them to do. Anybody who’s done RL in their free time will remember this Anthropic video where they had asked a boat to cross checkpoints. The boat was on fire, you had to cross checkpoints and reach a final stage, and you gained points every time you crossed a checkpoint.
We thought that was a great reward function for getting the boat to the final stage, but what it did was just keep reward hacking: it remained on fire, kept crossing the checkpoints, kept gaining points. That’s the problem we define in technical terms as misalignment, reward hacking. So if we haven’t solved those problems by the time we are at an automated R&D stage, I do fear that loss-of-control risks could materialize. Those are not science fiction; those are real technical risks that could materialize, and that is one of my fears.
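To make the failure mode concrete, here is a minimal sketch in Python of the boat-race dynamic described above (widely circulated as OpenAI’s CoastRunners reward-hacking demo). Everything in it is illustrative rather than the actual environment: the proxy reward pays per checkpoint crossed, the intended goal is to finish the race, and a pure reward-maximizer therefore loops checkpoints forever.

def proxy_reward(action: str) -> float:
    # Designer's proxy objective: one point per checkpoint crossed.
    return 1.0 if action == "cross_checkpoint" else 0.0

def intended_goal(state: dict) -> bool:
    # What the designer actually wanted: reach the finish line.
    return state.get("finished", False)

def greedy_agent(state: dict) -> str:
    # A pure reward-maximizer: looping checkpoints yields unbounded reward,
    # while finishing ends the episode and caps it, so it never finishes.
    return "cross_checkpoint"

state = {"finished": False, "on_fire": True}
total = 0.0
for _ in range(10):
    total += proxy_reward(greedy_agent(state))

print(f"proxy reward collected: {total}")                 # keeps climbing
print(f"intended goal achieved: {intended_goal(state)}")  # False

The gap between proxy_reward and intended_goal is exactly the misalignment Kaushik names: the agent optimizes the letter of the reward, not its spirit.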
Jack Mallery:
I think I worry less about a spiraling-out-of-control AI, mostly from the perspective that, as any good engineer will tell you, it’s a miracle a computer turns on. From an actual integration perspective, it is incredibly difficult and complex to integrate artificial intelligence into the existing physical world, down to the firmware and hardware layers. But what I worry about is that we all need to understand that AI is more than chatbots. Chatbots are incredibly important and a huge industry, but AI is robotics, it’s autonomous vehicles, it’s telecommunications, it’s going to be edge processing on your phones, it’s going to be in the medical industry. And we need to understand that the competition for AI dominance is really going to be about diffusion and about transforming existing industries, and that’s something that I worry about a lot.
Jason Hsu:
We are out of time here. All I can say is that we are really at an extraordinary time, and join us in building this new era of AI, and building the new America, here at Hudson Institute. So thank you all for joining us.
Luncheon Keynote | America’s Evolving Strategy
Bill Drexel:
It’s good to be with you all here, and glad to be moderating this over lunch. We’ve got really great authorities here to talk about how the United States’ strategy is evolving. I should preface: I’m a fellow here at Hudson. I’ve been here since March. I work on AI policy and also US-India relations. But more importantly, with us we have Daniel Remler to my left and Dean Ball to my further left. In their current capacities they are basically both senior fellows, Daniel at the Center for a New American Security and Dean at the Foundation for American Innovation. What’s interesting about having them together here is that they’ve both played major roles in setting strategic policy for different administrations. So Dean, as my understanding is, and you can correct me if I have this off, went into the Office of Science and Technology Policy at the beginning of the current administration, headed the writing of the nation’s AI strategy, the AI Action Plan, and then pieced out to be an AI celebrity around the world.
Daniel meanwhile spent several years at the State Department, was a policy advisor in the Office of the Special Envoy for Critical and Emerging Technology, and co-wrote the department’s first technology diplomacy strategy. So the great thing there is he has an insider view into how this looked on the ground in the prior administration, in terms of how the United States was approaching AI and all the questions around it. So we’re really lucky to have both of them. We want to look at what the evolutions are, what the currents of thought are on how we’re approaching this internally. And I’ll start with an overview question, which is: how has the United States’ AI strategy evolved, particularly between administrations in recent years? And I’ll pass it . . . I’ll go straight to my left first.
Daniel Remler:
Sure. So thanks, Bill. And thank you to Hudson Institute for having Dean and me here today. The story really begins, in my opinion, with the first Trump administration, and the recognition that AI was becoming an increasingly important foundational technology for US national power, for our economy, for our security, and also the recognition that China was doing quite a bit to catch up to us and in many ways using American technology to fuel its military modernization, intelligence power, and mass surveillance capabilities. And so that first Trump administration took very good action against several of China’s leading companies to slow them down. When the Biden administration came in, and I should say I won’t speak for everyone in that administration, I played a small role, it sought to really build on that, systematize that approach on China in particular, and then expand it through things like the October 7 country-wide export controls.
And then more broadly on AI, the goal was to really accelerate, and this was even before ChatGPT, the US government’s adoption of AI. You can see this through OMB memos that tried to systematically create a single risk-management framework for the whole federal government to be able to use AI more effectively; to make sure that we attracted and retained the top AI talent from around the world; and also to work with our partners and allies to set the rules for AI around the world consistent with America’s economic and security interests, through groupings like the G7, but also with the UN General Assembly resolution, to make sure that America, not China, was leading in setting the rules.
And so as you can see, the Biden administration had a kind of full approach across many different dimensions, really building on that first Trump administration’s early moves. And this really culminated with the national security memorandum on AI that the Biden administration released toward the end of 2024, where again you see these three pillars: America should lead in AI, harness AI for national security advantage, and work with our international partners around the world. And I think the second Trump administration, Dean should obviously speak to this more, is continuing in many of those respects and charting new ground now.
Dean Ball:
Yeah, so I think there are some areas of consistency between the administrations. One of the key messages that I hope to convey wherever I go is that “AI policy” in general is probably an anachronism, one of those terms we should think of as an anachronism even as we use it today, because it’s going to end up encompassing a huge range of things. It will not just be one policy area; it’s going to fracture into many different things. And some subset of those are going to be really contentious, polarized along a lot of different lines.
And some of them should be bipartisan areas of cooperation and consensus, I think. Good candidates for those include some of the national security issues with respect to AI. Others would include, as you alluded to, government adoption of AI. There shouldn’t be one party that’s trying to rip AI out of the government and another that’s trying to put it in. I think we should basically all agree that this is a technology that government needs to adopt. So those are things where I think we’ve advanced.
One area where I think there is a difference of emphasis between Biden and Trump 47 relates to this issue of development versus adoption. The Biden administration, I think, took a very, what you might call AGI-pilled, perspective on AI policy: the basic idea that we will build transformatively capable AI systems within the next few years, that those systems will in and of themselves have utterly world-changing capabilities, and that it will be something like the nuclear bomb, where there was one day when we didn’t have it and another day when we did. And so we need to conceptualize it strategically in that way, we need to be very concerned, we need to make sure that the data centers where these things are built are built on federal lands in ways that the federal government can have some control over. And really we need national-security-apparatus types of control. Not just use, but control over this technology.
The Trump administration, I think, sees it somewhat differently. It sees this question of adoption of AI systems, not just in America but all over the world, and not just in the government but throughout the economy, as being itself a hugely challenging and long-term process; technology diffusion always is. If you just develop these systems, the world doesn’t change overnight. That is, I think, the view I take, and the view many others in the Trump administration take, without speaking for them.
Instead, if no one uses these models, and if our regulations and our existing laws actually make it hard to adopt them, then you don’t get a lot of the benefit from them. It is also in the use of these models that we improve them: we figure out where their deficiencies are, the areas where they’re unreliable, where they’re difficult to control. It’s through adoption that we resolve those issues and co-evolve in some sense. And I think that process is more subtle, it’s longer term, and it will be utterly transformative in the fullness of time, but it’s a major difference of emphasis. Those concepts, diffusion, adoption, deployment, things like this, are spread throughout the action plan, all three pillars of it.
Bill Drexel:
Great. That’s actually a great segue into my next question, which I want to start with you on. As we mentioned before, Sriram Krishnan was going to be with us today, but he was poached by the visit of Saudi dignitaries, which, speculation has it, may be arriving at some sort of deal that involves AI. Which begs the question, and you obviously teed this up: could you expand a little on what our AI strategy is for allies and, let’s say, not just allies but the full spectrum of countries, the UAE, but also Saudi Arabia and others? Is there a grand theory? How would you characterize it?
Dean Ball:
This gets exactly to the different conception of this technology between the Biden and Trump administrations. I think the Biden administration, again, took a very national-security-heavy approach to this. So the idea was, this is a technology with, and this is all true by the way, serious military and intelligence implications. And for that reason, we need to restrict it really mostly to the United States and maybe some allies. Maybe there’s this small group of allies that we’ll also let into the club, then we’ll make a second tier, and then everyone else in the world is going to be distinctly third tier.
This kind of goal, there’s a strategic logic to it to be sure, but the Trump administration, especially having a far greater density of technologists in it, I think is more inclined to see AI as a technology with things like network effects, platform effects, developer and user ecosystem benefits, the sorts of dynamics that are traditional in technology: operating systems, developer platforms, even programming languages.
AI isn’t the same as those technologies, but it has similar types of dynamics that will affect which ecosystem is the most successful, and so for that reason you need to spread the technology as far as you can. As opposed to trying to control it, you need to get it out into the world. So I think that logic went into the deals in the United Arab Emirates, and I don’t know what will happen with Saudi Arabia, but it’s plausible that something will be announced there. I think it also factors into the export promotion executive order, which is different from those deals; export promotion is more about the global South and developing countries, but it’s all downstream of the same mindset.
I would say, with that said, this does need to be done within security parameters that are set by the US government. There is cybersecurity, there is the physical security of the data centers themselves, and there are assurances and safeguards against things like diversion of export-controlled technology, including chips but also semiconductor manufacturing equipment. You have to have those things, but I think you need to do it within a context that is more broadly permissive of technology export.
Daniel Remler:
So on the adoption question and diffusion globally, I really want to commend Dean and the administration for the AI exports program. It sends the right kind of message, in the spirit of openness and collaboration that Vice President Vance struck in his speech in Paris at the beginning of this year. Dean, you sort of let slip that it was intended for developing countries, which confirmed my suspicion that the point is really to make sure America is competing with China in largely middle-income and maybe lower-income markets to win the AI future there. I think that’s great. Under the Biden administration there were moves in this direction at the end, but that’s really been turbocharged under this administration, and I think that’s awesome. And I need to turn off my phone.
In terms of our close allies, another thing I want to commend the administration, and really OSTP, for are these technology prosperity deals that have been struck with, I believe, the UK, Japan, and Korea, with one potentially on the way with Australia, and I’m sure others lined up. This is really a way to deepen not just our AI but our technology partnerships with our closest allies across the spectrum: AI, quantum, fusion, other technology areas. The previous administration tried to do this through high-level dialogues; I think this model is great too. My only concern there is a capacity question, because it does take a lot of human hours until we can automate that. If AI could run a high-level dialogue, that’d really be something for my colleagues at the State Department. But seriously, that’s good.
On the question of the UAE and Saudi, maybe a few other countries in that mold, in the Gulf, maybe a few other geographies, I think these are somewhat unique in the sense that these countries want to hedge their geopolitical relationship between the United States and China. They for various reasons believe that the United States is not 100 percent committed to staying in the Middle East, and so they have to hedge their security, to some extent their economic relationships. They obviously see that China’s making progress on technology across the board, but they also I think recognize that the United States is really in the lead on AI technology and particularly when it comes to designing the highest-end chips. Talk about being AGI-pilled, but my suspicion is that some of the leadership in both of those countries is quite AGI-pilled.
So all the things, Dean, that you were saying about making sure there are national security safeguards on our technology in those countries, in terms of diversion, cyber, physical security, that’s all good. But to me that is necessary, not sufficient, because the United States’ incredible lead in this technology, combined with how these countries perceive AI as fundamental to their economic future and really to the long-term viability of their political systems, means we have enormous leverage, and we should use that leverage to pull Saudi, the UAE, and other countries in similar positions more fully into the United States’ technology sphere of influence, right?
What does that mean? It means commitments for their sovereign wealth funds not to invest in critical and emerging technologies in China. Perhaps it also means inbound investment restrictions on China investing in those countries. It means really, seriously, for real this time, ripping Huawei out of their telecom networks, things like that. So we’ll see. I think this administration is not shy about using leverage in many respects, and I hope they use their leverage in this area too.
Bill Drexel:
Mm-hmm. Okay, great. So, we’re AI maxing and hopefully going to see some more use of leverage in some swing countries. That makes sense to me. Also brings me, maybe I just wrote this sequence of questions really well, to the next question very cleanly. What else? We’ve talked about friends, what else in terms of competitive actions with China are we likely to see next? I especially just want to add a plug here. How are we feeling about the GAIN AI Act? I don’t know who wants to start.
Daniel Remler:
You take that.
Dean Ball:
Sure. Yeah, so in terms of competitive actions, right now this is all taking place in the context of a broader trade negotiation that’s of course ongoing, and no one, except for a very small number of people, really has full visibility into all the details of that. So it can feel, especially to people in AI, and I think rightly, like AI is the most important issue in the world, but when you’re sitting at the president’s desk, AI is one of many important issues. So I don’t want to comment on where things might go in terms of specific export control actions or things like that. Of course we saw the tit-for-tat with respect to rare earths; I think, by the way, that was a quite major miscalculation on China’s part.
I think broadly speaking, where the US should be, and the Trump administration is doing a lot in this regard: it’s all well and good to have policies that try to slow China down, but if I were to offer one more respectful criticism of the Biden administration, there was maybe a little too much emphasis on the stopping-them side and a little less on the strengthening-us side of things. We have a real imperative in this country to re-industrialize in many ways. Some of that is about energy: building energy generation and transmission to power AI data centers, but also many other factories. There are also lots of verticals where we need to improve, right? Rare earths and critical minerals would be a great example of that.
I would also identify ships; shipbuilding has been a very prominent area. In the defense industrial base more broadly, we have seen a lot of weakness. We have this structural problem at the base of the base: a lot of these basic industrial processes, machine tooling and investment casting and things like this, where nothing’s changed in the last 60 years. I’ve been inside some of these factories; I’ve seen it firsthand. In addition, a lot of the workforce is on the verge of retirement, so we have a huge imperative to improve in that regard.
The other area I would flag, where I hope we see more action from policy and also from the private sector, is robotics, because that is going to be the next major thing. There’s some stuff we can do to weaken China there, sure, but really we need to do much better ourselves. This is an area where, make no mistake, China is ahead of the United States in major ways when it comes to the production of robots. On the software side it’s a little more debatable, maybe more even, but not trending in a direction where we’re going to maintain a lead in terms of robotic foundation models, things like this. So this is an area that we need to view as an urgent problem to be fixed, and I think it demands a multidimensional solution.
You asked about GAIN. My view on GAIN, to be totally candid, and I should say I’m not an expert on it, I’m not super in the weeds on it: it seems like it imposes a potentially quite large regulatory and information-sharing burden on hyperscalers and compels them to share information that strikes me as perhaps proprietary, so I’m disinclined. The other thing is the whole benefit of the export control regime. Executive power giveth and taketh away, but one of its benefits is that you have flexibility.
I think all the architects of the export control regime, both on the think tank side and within the Biden administration, all the people who were foundational there, would agree that this is a policy governing an area of fast-moving technology, and you need to be able to pull back just as much as you sometimes want to ratchet up export controls. If we see that China is really catching up quickly in chips, we probably want to pull back. By the way, the administration’s assessment is that they are, which is part of the thinking that’s informing its strategy.
So statutory encodings of export-control-related things can have benefits, but they can also limit flexibility in an area where I think we really need flexibility. That’s my view on GAIN, though again, you have to pick and choose your battles, or not your battles even, but just what you’re going to focus on during the day, and it has not been an area of extreme focus for me.
Daniel Remler:
So on the competitive actions against China: it’s unfortunate that our export controls were dragged into the trade fight earlier this year. The previous administration tried hard to silo these things. Sometimes it seemed a little incoherent, but it at least had strategic value in that sense. Maybe that was inevitably going to come to an end at some point, I’m not sure, but the reality is the export controls did get put into the pot of the broader trade discussion. And as Dean very rightfully said, from the president’s desk you have to view the whole landscape of issues affecting Americans; you can’t just be fixated on AI. So I take that well.
Now, there are loopholes that we really do need to close, and it’s possible that the administration could do that progressively, bit by bit, in ways that don’t upset the apple cart: things like closing the remote-access loophole through which Chinese firms can essentially rent GPUs, disallowing Chinese firms operating within the United States from buying chips, or imposing licensing conditions on Southeast Asia and other regions where we know there is a high degree of chip smuggling to evade our sanctions. It’s plausible to me that the administration could effectively message those as not really directed at China per se. I’m not sure, but it’s possible.
Dean’s very good, exhaustive list of all the things we need to do in terms of reindustrialization underscores the point that, in some sense, the competitive actions we need to take in the AI competition with China aren’t about China right now; they are about self-strengthening the United States, and also about working with our allies and partners to build allied scale in areas like shipbuilding and rare earths. How can we create trusted markets so that we all benefit, and so that we do not continue to perpetuate these supply chain dependencies on China? I think that’s exactly right.
On the GAIN AI Act, I should say it can get a little confusing, because there are effectively two versions of this bill. There is the amendment to the NDAA that passed the Senate, and there is the standalone GAIN AI Act. The standalone bill includes the major portion of what was in the Senate NDAA amendment, and that is basically a right of first refusal for US companies to purchase chips before they are exported to Country Group D:5, which is essentially China in this case.
To me, that is pretty common sense. It doesn’t really touch on the major questions in the export control debate, and it basically operates from the principle: shouldn’t American companies have the chance to purchase some of our most exquisite technology before we sell it to our primary strategic adversary? That seems pretty common sense to me.
It’s important to underscore that, given the serious constraints on chip production in general, we are in a bit of a zero-sum world, where a chip manufactured for export to China means one less chip that could go to an American buyer or even one of our allies. So to me, that America-first, shall we say, principle of making sure American companies have a right of first refusal on chips before they go to China makes a lot of sense. I don’t think, by the way, it would really impinge on these broader export control questions, and our allies would still have access as before.
Bill Drexel:
Great. You looked almost like you were going to add something. No?
Dean Ball:
Well, I would just say that, for the same exact reason, the administration I think thinks about technology the way Silicon Valley thinks about technology, not so much the way Washington, DC, thinks about it, which is to say less zero-sum. I take Tyler Cowen’s mantra that supply is elastic very seriously: the notion that in supply-constrained settings, markets will tend to produce more supply if there’s sufficient demand for it. That includes being more efficient about things, expanding capacity, reapportioning existing manufacturing capacity, many different things. So I don’t see us as necessarily being in a zero-sum race for fab capacity right now.
The other point I would make, just in the interest of clarity, is from the network-effect perspective: if you are interested in US software ecosystems remaining preeminent, which is strategically advantageous, there is a reason for that. Something I always say is that we don’t notice, because it’s white paint to us, that the terms of service of every piece of software in the world are typically set by American contract standards, often adjudicated in American courts. Every programming language in the world is written in English, uses words like for, if, continue, else, right?
Every single person who programs, including every programmer in China, learns to speak English, and this is a form of soft power that we do not notice. For that same reason, there are times when it is strategically advantageous. I totally agree that we don’t want to give China frontier computing capability, that would not be wise, but there are absolutely times when you’re looking at prior generations of the technology where I think you do want to make supply available in China, such that those ecosystem effects persist. Because, like it or not, a huge number of the engineers in the world who work on AI, and who do some of the most advanced work on things like CUDA kernels for NVIDIA or other GPU kernels and software enhancements, a lot of that comes out of China. All American companies in fact benefit from these sorts of optimizations. So this is a subtle and complicated issue, and again, I’m an inherent skeptic of new statutes.
Bill Drexel:
Okay, great. So, I have one more question and then I’m going to open it up until we run out of time at 1:15, I think?
Staff:
1:30.
Bill Drexel:
1:30, great. Okay, more time for questions. So, I have two questions and then I’ll open it up. The first one’s a little more of a fun one. We’ve talked about this implicitly, but broadly speaking, a lot of strategy depends on how you do or do not feel the AGI. There are a lot of different ways to feel or not feel the AGI, and we’ve definitely seen some evolutions in both administrations on that question. I want to ask you both what your sense is of how people in the administration are feeling about that. One hears, for example, of some contentious debates on that question within the State Department in the last administration, which I’m sure you can’t comment on. But I just want to ask both of you, and why don’t we start close to far here to get the more historical perspective, but feel free to comment on your view of what’s going on, backwards or forwards.
Dean Ball:
Sure.
Daniel Remler:
Sure. So I suspect this is true now also, but opinions vary on timelines, let’s say, and these debates can be very spirited, and I don’t want to speak for my colleagues on this. I will say, even before ChatGPT launched, one great thing about being in government is people come to you and want to share all their stuff with you. They want to show you their cool technology and tell you what you should do about it, and so in government the AI labs come to us and say, “Look at the progress we’re making. These are the risks that we see coming over the horizon.” You obviously have to take some of this stuff with a grain of salt, but it was hard to avoid the conclusion that AI progress was very fast and possibly accelerating, even in ’22, ’23. Obviously after the release of ChatGPT that became known to everybody, and we went through a series of hype cycles, basically, where a new model releases suddenly, you see it smash the old benchmarks, and we have to ask ourselves again: what does this mean for our strategy?
I will say that the way I thought about a lot of this stuff, and Dean, I’d love to know your view, is in the context of national security: you do have to take low-probability events very seriously, because we’re talking about the safety and security of American citizens, and you ideally want to take no-regret actions, right? What can we do now that will be useful for national security regardless of whether AGI is three years out, five years out, seven years out, or just not really a thing, as perhaps many people view it?
So, I think you could say that chip controls were like that. Obviously they seem more urgent if you think AGI is coming very soon, but in our view they were no-regret actions in many cases. Adoption in the government is obviously a no-regret thing that you should focus on. So that was really the framework we tried to use: pay attention to these low-probability but very high-consequence events, and then take action that’s no-regret, maintains US leadership, accelerates adoption, that kind of thing.
Dean Ball:
Yeah, so I think it’s definitely true that, just as in the Biden administration, opinions within the Trump administration vary on these things, to be sure. And it’s not just opinions on timelines; it’s also opinions on what AGI is. What does it mean? I think if you took a panel of AI experts from 10 years ago, shared nothing with them about our present context, and just showed them the artifact of ChatGPT, many of them would conclude that they were looking at AGI. What we have today is AGI, right? So our conceptions of these things change. I kind of do like the framing of Dario Amodei of Anthropic, who has said AGI is like a city: when you’re doing a cross-country road trip, it’s fine to say we’re going to Chicago, but as you take the exit to Chicago, it’s like, okay, well, what neighborhood are we going to? What street? What house number? So over time we are, I hope, going to develop somewhat more sophisticated abstractions for talking about this.
In the action plan, absolutely, for my part, I can say at least my mentality was no regrets. Let’s take a bunch of steps that are not predicated on a lot of exquisite assumptions about where things are heading, and not just assumptions, but assumptions that are downstream of the first assumptions you made; that can lead to creaky policy, I think. So instead, let’s try to develop policies that are good in a wide variety of different worlds, ’cause we already know AI has national security implications. It’s obvious, right? From a sensor-processing perspective, the intelligence community has had the problem for years of collecting way more data than it is capable of processing.
I remember seeing a report some time ago that suggested the National Geospatial-Intelligence Agency would need to hire eight million analysts to process all of the intelligence just that it collects, and so it’s very obvious that there are many applications of artificial intelligence, including large language models, to just these data-processing issues that the intelligence community currently faces. Of course we’ve also seen that these are tools of both cyber defense and cyber offense, which implicate national security and intelligence quite a bit. So we already know this, right? We don’t need to have some debate about whether there are national security implications, and so we can do a lot of positive-sum things that move the needle forward and probably help us in the world where AGI comes soon, but don’t rest on that assumption.
I myself, and now I speak purely for myself, truly, everyone’s always like, is Dean secretly speaking for the Trump administration? The answer is no, I’m really not. So, I think that we are looking at a rapid capabilities trajectory here. We’re early in the stages of the reinforcement learning and inference-time compute paradigm. It is jagged, it has its flaws. It is not itself going to get us to AGI, but it is a general-purpose way to furnish extremely rapid capability gains in any domain that is verifiable. We will bring more domains of human knowledge into the realm of verifiability over time, and as that happens you will see a series of S-curves: rapid escalation in capabilities, petering off close to 100 percent.
For that reason, I think you should basically expect that mathematics gets functionally solved within five to 10 years, right? The Riemann Hypothesis, Navier-Stokes, the outstanding challenges in mathematics, those probably get solved. Saying 100 years of progress in biology is probably a little too nebulous; you probably want to say there are 1,000 sub-fields of biology and we will see 100 years of progress individually in those, and in fractal-like fashion it will end up feeling like that. But there will also be other areas of biology where we don’t advance, because we can’t generate the data loops and the mass experimental throughput that you need to really bring AI systems to bear. Whenever we do set these loops up correctly, in materials science, in biology, the action plan talks about this, in chemistry and physics, you will see rapid progress. Decades of progress in spans of months is the kind of thing, and I think that’s basically locked in.
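As a gloss on the “verifiable domain” dynamic Ball describes, here is a minimal, hypothetical Python sketch, not any lab’s actual training loop: reward is granted only when an output can be checked mechanically, here by verifying answers to a toy arithmetic task, and that checkable signal is what drives the rapid improvement he predicts.

import random

def verifier(problem: tuple, answer: int) -> float:
    # The domain is "verifiable": correctness is checkable in code.
    a, b = problem
    return 1.0 if answer == a + b else 0.0

def policy(problem: tuple, error: int) -> int:
    # Stand-in for a model: answers near the true sum, improving as error shrinks.
    a, b = problem
    return a + b + random.choice([0, error])

error = 5
for step in range(1, 6):
    problem = (random.randint(0, 9), random.randint(0, 9))
    reward = verifier(problem, policy(problem, error))
    if reward == 0.0:
        error = max(0, error - 1)  # crude stand-in for a gradient update
    print(f"step {step}: reward={reward}, error={error}")

Domains without such a verifier, like the unverifiable corners of biology Ball mentions, provide no clean reward signal, which is why he expects progress there to lag.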
So, that is something that is not priced in. Most people, probably including most of the people in this room, have not emotionally internalized this, and there is a reason the phrase in San Francisco is “feel the AGI”; it is somewhat of an emotional process to internalize. I think we’re absolutely in for a wild ride. And I should also say there’s this meme that, oh, well, they don’t have continual learning figured out. Continual learning is the ability of the model to learn in real time from its experiences, as opposed to only at training time, so it can update its weights while it is doing things for you, in the same way that, if I were your employee, my synapses would be changing based on what I’m learning as I do the new tasks you’re paying me to do, right?
This is sort of a frontier. I bet this frontier gets cracked in 12 to 24 months, if not sooner, and then we’ve got several more years of the reinforcement learning and inference-time compute paradigm. We’re also, by the way, still not done with the pre-training paradigm, and then there’ll be this new thing, and probably other new things that get layered onto that. So the amount of room to run here is just unbelievably large, the amount of low-hanging fruit for improvement is just unbelievably large, and we have barely scratched the surface of adopting even the current technologies. So I’m bullish, is the short form.
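For readers unfamiliar with the term, here is a minimal, hypothetical sketch of the distinction Ball draws, assuming PyTorch; the tiny model and update rule are illustrative stand-ins, not how any frontier lab would implement continual learning. Frozen inference leaves weights untouched, while a continual-learning step updates them from a single deployed interaction.

import torch

model = torch.nn.Linear(4, 1)  # stand-in for a deployed model
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def frozen_inference(x: torch.Tensor) -> torch.Tensor:
    # Today's dominant mode: weights are fixed once training ends.
    with torch.no_grad():
        return model(x)

def continual_step(x: torch.Tensor, feedback: torch.Tensor) -> torch.Tensor:
    # Continual learning: each real-world interaction also nudges the weights,
    # the way an employee's synapses change on the job.
    pred = model(x)
    loss = torch.nn.functional.mse_loss(pred, feedback)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return pred.detach()

x = torch.randn(1, 4)
target = torch.tensor([[1.0]])
print(frozen_inference(x))  # weights untouched by this call
continual_step(x, target)   # weights updated by this single experience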
Bill Drexel:
Okay, locked in and bullish and emotionally internalized the reality—
Dean Ball:
Hope so.
Bill Drexel:
. . . so that’s a whole other level of consciousness. Not to speak of AI consciousness. I kid, but one really quick follow-up question and then I will open it up, I promise. This endpoint- and timeline-agnostic approach that you both describe seems like it would have costs, and I’m curious what you think those costs are. And then I will open it up.
Dean Ball:
I can—
Daniel Remler:
Go ahead.
Dean Ball:
Okay. I mean, yeah, I think there are potential costs, costs associated with, well, we should have planned for X, Y, Z, and we didn’t. However, a lot of that is only obvious in retrospect, right? It’s very hard to know: there are going to be emergent properties of these models, especially when they get out more into the world and interact with one another, and by definition emergence can’t be predicted, not specifically at least. But I do think that the failure mode, from a political posturing perspective, is that people look back on maybe both of our administrations’ policies and say, these guys really were asleep at the switch, right?
I mean, there’s an interview that Ezra Klein did with Ben Buchanan, who was the AI czar in the Biden administration, his exit interview from the White House. Ezra Klein is kind of yelling at him, kind of bullying him, being like, “Wait, you’re telling me that all the stuff I just said about, whoa, we’re on a rocket ship, you’re telling me that you believe that and you did this? This seems somehow small in comparison to what’s coming.”
To that, I would say yes. Yeah, absolutely, but my perspective is that it’s the right thing to do. The prudent thing to do is to take these modest steps and hope that they take you in the right direction. But also, I think one very important thing to note here is that there will be things we don’t predict, where we’re going to have to display flexibility and adaptability. Part of the reason I’m skeptical of regulation is that you don’t tend to associate heavily regulated sectors, or even moderately regulated sectors, with adaptivity; you tend to associate regulation with sterility and ossification, and so I worry about regulation for that reason.
Daniel Remler:
Yeah, I’ll co-sign a lot of that, Dean. I guess there are two hard things that need to be done. One is strategic planning, scenario planning in government, because we can all have our idea of what the world will look like, but it’s very important, even when you’re in the seat, to have an expansive view of what is possible to the left and the right of what you imagine the world to be, and to have a sense of, okay, if this happens, then these are the policy options I have at my disposal. At the State Department we have the Policy Planning Staff; at the NSC and in other parts of the White House, you have these kinds of functions built in for precisely these reasons, so that we have an expansive view of what’s possible. Even as I agree with Dean, it’s probably better to take modest steps along this very jagged and uncertain path.
The other thing is just that we live in a democracy, and making progress on any issue is very, very difficult. It’s probably better to have a broad political coalition and to feel like our country is moving in the same direction, because that’s how we build a durable consensus for action, but that can feel slow and modest to people who have their hair on fire about the trajectory of things.
Bill Drexel:
Great. All right, so now I will actually open it up. Anyone . . . okay, great.
Audience Member Shawn Venditti:
Dean, I like how you talk about network and emergent effects. It’d be an interesting poll to ask how many people in the audience use AI agents, and to what extent and how often in their lives, but I bet the majority of people aren’t actually using things like Claude Code or the latest agents in their workflows. Similar to how somebody on the previous panel said most people in the government are even struggling with Microsoft Office. So there’s been a lot of investment in chips and in the infrastructure to be used, but essentially it’s still largely a Rube Goldberg machine for generating memes, right? Seventy-one percent of social media content is AI-generated now. So how do we incentivize more application-layer use cases, adoption, and workflow insertion for these technologies? Because we’re spending so much money, resources, natural resources on something that’s not really going towards economic output largely right now.
Dean Ball:
Okay. So that was for both of us, but okay, I have a lot of thoughts about this. I think we are seeing enterprise deployments of AI that are increasing productivity. A lot of it, though, is exactly what I referred to: macroeconomic statistics are fractal-like, made up of much smaller things; they’re measurements of these broad phenomena. It’s like temperature. The temperature of this room is an emergent property of many particles moving around, but any single one of those particles has no temperature; it’s not cognizable to talk about the temperature of an atom of oxygen. Temperature happens from the movement of many. So you see things inside of enterprises where individual processes get sped up hugely, like 1,000 percent, and it’s like, cool. Then we move on to the next bottleneck and the next one and the next one after that.
And it takes time. It’s not that you’re not seeing productivity gains; it’s that these things take time to manifest. I think you’ll see them rather rapidly. I can tell you, for my part, that AI accelerates my own work hugely, and I think there are a lot of productive applications, including, by the way, in government. I mean, government has not done nearly everything that it needs to do, right? There’s a lot more work to do. At the same time, I saw processes in government where I saw the before and after: this is how this worked before AI, and this is how it works now that we have AI. And it’s like, wow. That process inside of that agency is going to be 100 times more efficient than it was before.
And so that’s great. On the other hand, when it comes to making aesthetic judgments about people’s use of technology, I would only remind you that if we cure cancer in the next decade, it is going to be because we figured out how to use chips originally designed to play video games to do it. We don’t know where these things go. I am always disinclined to render judgment on other people’s use of technology, and memes ultimately are how we convey information to one another; that’s an important part of cybernetics, which is what AI is all about. So I’m not opposed to meme generation, but I think you’ll see it in the fullness of time. And the incentive structure, by the way, comes mostly from markets, right? Firms want to be more productive; they want to be more competitive with one another.
I think AI is what the recent Nobel Prize winner Joel Mokyr would call a macro-invention, which is his term for a general-purpose technology. But I like macro-invention more, because a macro-invention is an invention that inspires other inventions, and it inspires institutional inventions too. There are all sorts of examples of general-purpose technologies being layered onto existing types of organizations, and then of what happens when a wholly new type of organization emerges that is predicated on the assumptions of that general-purpose technology. The canonical example in technology history is the assembly line.
The assembly line is basically downstream of the diffusion of electricity into the factory, and it was a wholly new type of organization that redefined labor relations, massively increased productivity, and won World War II; it’s a hugely consequential thing that’s downstream of technology diffusion. Very hard to predict. And I would contend to you that we have not seen the AI-first organization yet. I don’t think anyone’s done it. When that happens, those will be fiercely competitive, fiercely disruptive organizations. They’ll be wholly new, alien to us in some ways, and the way that we internalize those new types of organizations and the inventions they create, that will be the whole ball game.
Tim Walton:
Yeah, given the accelerating AI progress that Dean described so well earlier when he was “feeling” the AGI, and the fact that the leaders of many AI labs say, for instance, that we’re going to be getting Einstein-level AI at everything humans can do within a year or so: if there’s even a small percent chance that’s the case, it seems like cause for great concern. And it’s interesting to consider that we are fairly lucky to exist in a universe where AI alignment R&D, which is sort of making AI more likely to do what we want it to do, has driven the greatest capabilities gains, with things like RLHF and reasoning models being the biggest. Those are all downstream of AI alignment R&D.
I’m curious, as things continue to accelerate on this trajectory, what your take is on how we maximally accelerate the R&D that needs to happen here in the West, such that the West does win on this stuff and we solve these open problems sufficiently to come out on the right side in the end, rather than losing out, for instance, to China, which is increasingly investing in algorithmic improvements, and whose model Kimi K2 was the best AI I’d ever used, by far, until Gemini 3 came out. They’re really neck and neck with us and incentivized to make all these algorithmic improvements. So I’m curious: how do you think we can actually win on this stuff in the long run?
Daniel Remler:
Dean will have more interesting thoughts than me. I’ll just give one, which is that to do that, we really do need to retain and expand our talent base of AI researchers. It’s the most obvious thing in the world, but it is worth repeating: it’s not just attracting talent from around the world, it’s retaining it. Lee Kuan Yew, and I don’t know if this really was a quote, but it’s said that he said that while there are one billion people in China, there are seven billion potential Americans around the world, and that remains true. People want to come here, as Divyansh was saying earlier, and we need to continue to get them here, especially the best and the brightest, and make sure that they stay. That, I think, is really essential to accelerating AI R&D.
Dean Ball:
Definitely agreed on those points. I’d add that the action plan includes some work on this for DARPA to do, I think on differential approaches to alignment and interpretability, investing in things that maybe the frontier labs won’t invest in and that are maybe necessary for military-grade levels of alignment. And if there were a consumer spillover from that research, it would be, shall we say, not the first time in DARPA’s history that such a thing has happened. So that would be great indeed. I’d say a couple of things. First of all, if we have Einstein-level AI next year, on one level, it’s like: is it possible to be an Einstein at everything? Is that a thing? Can you be smarter than Einstein at persuading people? Can you be smarter than Einstein at politics, as a robot? Can you do that?
Maybe, but you might just be making a category error there. Tyler Cowen always says that Silicon Valley thinks in infinities and DC thinks on the margin, and I think there’s a lot of AI thinking that projects these very radical changes because you assume, oh, it’s 150 percent better than Einstein at everything, but that’s not necessarily the way it actually plays out. And it might be the case, I don’t think it will happen by the way, but if we did have Einstein-level AI next year, I’m not sure how different the world would be. The question you would ask is: okay, what do you do next? What do I do right now? Say I’m back at the White House, say I am an EEOB staffer. What do I do?
What about my life changes? What about the White House has to change because of this thing? Because, by the way, the Einstein-level AI probably still gets stuff wrong sometimes, just like Einstein did. Einstein was wrong about quantum mechanics. Quantum mechanics wouldn’t have tested well on reinforcement learning from human feedback, shall we say, right? It wasn’t very intuitive for us, and Einstein was wrong; it didn’t work on reinforcement learning from Einstein feedback either. So what exactly do you do? When you start to think about those things, you realize how challenging the concept of diffusion is here, because at the end of the day, if we have a machine that’s that smart, we still have to verify its outputs. I’ve always said that if I write a book, it will be called I Am the Bottleneck, for this reason: how are we going to know that it’s right? How is it going to know that it’s right? And now we’re doing philosophy.
Daniel Remler:
Yes.
Audience Member Patrick Wilson:
A corollary question. Hi there, gentlemen. Patrick Wilson from MediaTek. My question builds on the previous question about research. I actually don’t see anybody in the government championing research. It has fallen out of our top legislative priorities, and it’s confusing. But another confusing thing is that the United States in 2024 produced the exact same number of engineers that we produced in 2001. There’s been absolutely no increase in real terms in the number of graduates each year who receive advanced degrees in engineering. How are we going to dominate AI in the future if we’re not producing more engineers? Next year China will exceed our national investment in R&D for the first time ever, and they produce roughly 17 times more engineers than we do every year. So how, as a strategic imperative, are we going to fix that?
Dean Ball:
Well, first of all, there are lots of research priorities that the administration has with respect to AI. I just identified one for DARPA, and that’s one small one. But there are also directives being given to NSF on fundamental AI research, and to NSF and NIH on applications of AI in various fields. It is totally the case that, if this is a raw population numbers game, we’ve got some problems, because I don’t anticipate that the US population will double in the next five years, except if you do some fuzzy math and model AI as a kind of doubling of knowledge workers, through either productivity gains or the idea that we now have 50 million new software engineers joining the workforce, and they’re digital. I think that’s probably what we’re trending toward in some broad sense, even if they’re . . . That’s what AGI is, right?
In some sense. But even before that, there are many productivity benefits that you’ll get in between. I know many engineers now who think they do the work of 10 engineers, so it will be through productivity enhancements. That’s really the only way that we will . . . And through clever adoption, through leading the world in adoption; that’s why it’s so fundamental. I would also just say one thing, which is that I don’t know this, maybe I’m wrong, but I would bet the number you cited about Chinese versus US investment in R&D is a government-to-government comparison. USG funding of research does matter, and I think there’s a lot the Trump administration is doing right now to get way more bang for our buck out of that research investment.
Because there were a lot of inefficiencies in the way that system worked, and everyone in the scientific apparatus agreed about that problem until the Trump administration started saying it, and then they all started disagreeing, which is a common pattern. But that number, very importantly, is not including corporate R&D. It’s not including all the venture capital going out, the billions and billions of dollars that venture capital firms are investing in defense tech, some of which ends up being basic-ish research. So there is, I think, a whole . . . And philanthropy too, for that matter. I think there’s a huge role for philanthropy to play here. So I think the US ecosystem is stronger than we sometimes lead ourselves to believe.
Daniel Remler:
Totally agree that the strength of our research ecosystem is bigger than the US government. One thing the US government does do, which I want to highlight for you and which features somewhat prominently in the AI action plan, is NSF’s National AI Research Resource, which is intended basically to democratize, or expand, access to compute, data, and I believe specialized models, in particular for researchers and maybe even startups. It’s not huge. It’s not incredibly well funded. It doesn’t have a ton, but it’s something, and I think it’s an interesting model to build on: basically ensuring that research and innovation happen outside of the biggest technology companies, the most well-financed startups, and the few elite research institutions that have access to all the resources. So I just want to shout that out, because it’s in the AI action plan and I think it’s a very good initiative. Yes.
Audience Member Tsiporah Fried:
Thank you. Tsiporah Fried, Hudson Institute. I have one question about allies, but it’s subdivided into two parts. First, you mentioned a kind of club of AI partnership with the US. So who are the beneficiaries of this first-tier club? Is it more like AUKUS or Five Eyes? And what do they gain by being in this privileged club, knowing that when you partner with China, you get the full package: investment in infrastructure, in data centers, in robotics and physical products, and in AI applications and data? So what does it mean to be in this club and have this relationship with the US on AI?
My second question is about the European Union, and how you see it as a partner. You may know that the European Union recently awakened to the idea that regulation could kill AI, and it is going to be lighter on its regulation and maybe review the AI Act. We’ll see what happens. But at the same time, the European Union claims that its ambition is to be sovereign in AI, and there are still judicial issues with US big tech. So how do you see the European Union: as a partner or as a competitor in AI?
Dean Ball:
I’ll take the second and you take the first.
Daniel Remler:
Sure. So in terms of the partnerships between the United States and our allies, I would not characterize it as a club and then the rest; it’s more of a spectrum. At the closest end of the spectrum, in terms of our closest allies and partners, we know which countries they are. They share our values, they share our strategic interests, and we have a common threat perception of the China challenge when it comes to technology. An underrated and important part of US AI and technology diplomacy in general is husbanding that: making sure we have a common understanding of the trajectory of the technology, and also of the various threats posed by China’s use and misuse of AI. But I think we should try to think about a model where we’re doing a few different things with our closest allies.
One is aligning on rules to protect critical and emerging technology from diffusing to China, or at least technology that would pose a serious national security threat when used by China. So that’s rules around outbound investment, inbound investment, export controls, and research security; I know that’s something OSTP is working on now. You can see an outline of a lot of this in some of the trade agreements and trade frameworks that have come out of the Office of the US Trade Representative and Ambassador Greer’s team, where you have language on economic security that’s meant to align US export control policy with some of our close allies like Japan and South Korea. I think that’s a great model, and I hope it’s replicated across our agreements with many of our close allies, and that we continue to go further and drive alignment on these kinds of rules.
The second thing is co-production, co-investment, or sharing resources to scale up our research and innovation ecosystems in general. Obviously this gets to issues like critical minerals and the shipbuilding that Dean was talking about earlier. And the third thing is building a partnership that we can then take to third countries or to international institutions. The AI exports program and the technology prosperity deals interact on this in an interesting way. In the UK instance, there’s language on the United States and the United Kingdom working together to promote the full AI stack in third markets, and I think there’s similar language in the Japan and Korea deals. So basically, could we use the AI exports program to say, look, if there’s a Japanese or a Korean or a British company that’s part of one of these consortia or . . .
Dean, I know you’re a little wishy-washy on the consortia these days, but let’s say there’s a British or a Korean or a Japanese AI company that wants to work with US AI companies to build some kind of AI solution in a third market. Can they also access some of the benefits of the American AI exports program, and can their governments bring something to the table as well, whether it’s diplomatic and commercial advocacy support on the ground or financial resources? I know Japan’s JETRO and JBIC are very good on this stuff as well. So those are the three elements of what I see as the benefits and the goals of our AI alliances with some of our closest partners.
Dean Ball:
So in terms of the European Union, I’ll happily beat up on EU tech regulation any day, but that’s not that interesting; you can all next-token complete what I’m going to say there. I will say the European Union very clearly needs to delay the AI Act. And the code of practice is maybe fine. It’s a little hostile on copyright; I don’t know what the plan is there. What are you going to do, sue the AI companies for copyright when they’re inventing new science? Come on, let’s get real. But whatever. I think the deployer-side provisions of the AI Act are the thing that is actually most potentially damaging. I would delay those by multiple years. If I were king of the EU, that’s what I would do.
I think there are a lot of other things, though, that are probably bigger hindrances to the EU’s dynamism than the tech regulation. Capital markets are one place to look; labor markets are another. There are many things that the European Union and member states have done which, in my assessment (and I’m not an expert on EU policy, this is just my sense), have kind of preserved the society in amber in a way that is not going to comport well with AI. We’ve done a lot of that too in the United States. We all have old-society disease, but you are older. So there’s that issue. And then, when it comes to sovereign AI, I think there are a lot of things that can be productively done, and I would try to conceptualize it as a technical rather than a political problem.
I think it’s unlikely that the European Union is going to develop a competing stack. I would really be surprised if it even does so at the model layer, the AI system layer. What we’re looking at in advanced AI, in the big language models, is probably an operating-system-style dynamic, and that’s going to be really hard to replicate. You’re going to need huge economies of scale, and it’s going to be hard to catch up. So I would think of sovereign AI differently: there are other types of modeling you can do, right? There are scientific models, there are other sorts of . . . I would think of that as a combination of data and compute; there are interesting things you can do there.
And then I would also think about this broader imperative. If AI ends up commodifying lots and lots of knowledge work, making it available to everyone, that matters because a lot of Western societies are staked on high-margin knowledge work. Probably that work continues, maybe it even gets higher margin, but there may be fewer people who get to do it. And so this re-industrialization imperative becomes not just a matter of national security but really a matter of economic security. And my read is that the EU’s industrial base is not actually . . . I don’t know. I drive around northern Italy and there’s a lot of cool stuff there. You’ve got cool people making industrial solvents and the like. We don’t have that in northern Virginia.
There is an industrial base in Europe, a lot of advanced manufacturing in places like northern Italy, Germany, France, and elsewhere. I would be trying to double down on that and to own a lot of choke points in the physical world, not just those relating to compute and semiconductor manufacturing equipment, though that’s an area where Europe has more than just ASML; there are companies like Zeiss, there’s a lot you can do there. But outside of semiconductors, there are all sorts of areas where I think you can make the best stuff in the world if capital markets and labor markets and partnership with the US all work out. So I’d say Europe feels to me like it’s in a precarious position, but not an unsalvageable one. I would stress that it is precarious: you have two years to stop yourselves from becoming the Argentina of Europe. I would be cognizant of that and act with maximal alacrity, which the US should do too, because the US has a lot of problems.
Daniel Remler:
I just want to draw something out about the tenor of your comments, Dean. I was part of the group in the previous administration, and even in the first six months of this administration, providing constructive feedback on the EU AI Act. And the motivation in both cases was consistently that we actually have an interest in Europe being innovative and being in the lead with us on AI, for your prosperity obviously, but also from our self-interested perspective, because we see you as strategic allies that share our values.
Dean Ball:
100 percent.
Daniel Remler:
And we want a prosperous Europe. And so this is tough love that Dean is delivering here.
Dean Ball:
But truly, re-industrialization is not something the US is going to be able to accomplish completely unilaterally, on its own. We will have to do it in partnership with other advanced capitalist democracies, so stay advanced, capitalist, and democratic.
Bill Drexel:
All right, well, tough love is a good note to end on, I suppose. We are running short on time, so please join me in thanking our speakers.
Panel 3 | AI’s Ideological Competition: Addressing the Techno-authoritarian Advantage

Michael Sobolik:
The conversation so far has focused primarily on the stack, the full stack of AI, which is to say on the material: the hardware, the technology, the coding behind the software, and the actual physical and, in some cases, digital things that make up artificial intelligence, and on who is ahead and who is behind between us and the PRC in this material competition. But the panel we are about to start steps back and redirects our focus from the material to the immaterial. This is a discussion I’ve been excited to convene, and I’m grateful it’s part of this conference, because it takes me back to the beginning of the nuclear era and the Cold War, when nuclear weapons were more than just a categorical shift in warfighting and in the scope and possibility of destruction. They raised fundamental first-order questions of morality, and questions for the United States: if we believe in freedom, in the individual, and in perpetuating our system, the Constitution, the Declaration, what does that mean in the nuclear age?
And you saw policymakers wrestle with these questions as early as 1950, in one of the seminal strategic planning documents of the Cold War, NSC-68, published that year. It was written out of the Policy Planning Staff at the State Department by Paul Nitze and a whole team of others. The opening three sections of NSC-68 are not about nuclear weapons, even though the Soviets catching up on nuclear weapons was one of the big things that precipitated the review that became NSC-68. The beginning of it wasn’t about nukes at all. It was about first-order principles: what is the United States, what is the Soviet Union, and how do our two systems compete differently in the nuclear era? For this portion of our conference, we want to ask that question about the United States and the People’s Republic of China in the context of artificial intelligence. And I’ll add one more data point before we launch into a conversation with our panel.
When it came to the philosophy of nuclear deterrence in the Cold War, one of the biggest asymmetries, from a morality perspective, was how closely you co-locate your strategic missiles with your population centers. American policymakers strove during the Cold War to keep our land-based ICBMs far away from dense population centers. If you recall the Chinese spy balloon that crossed our country a couple of years ago, it hovered over Montana, and that was not accidental, because that’s where a lot of our ICBMs are, basing that was chosen very purposefully to keep them away from high-density areas. The Soviet Union, by contrast, would co-locate its missiles as closely as it could to population centers, essentially daring America to strike its nuclear capacity at great collateral expense. So different calculations on the value of human life were crucial to decision-making about nuclear deterrence. And we want to have that conversation today about artificial intelligence with America and China. So for this conversation, with the panel you see before you, Bill goes from the moderator seat to the hot seat.
Bill Drexel:
I’m back.
Michael Sobolik:
He is a fellow here at Hudson Institute and has done quite a bit of work in this area, in previous jobs and in this one as well. So glad to have Bill here. To my immediate left is Kirsten Asdal, founder and partner of the Asdal Advisory Group. You previously worked for Matt Pottinger, the former deputy national security advisor, and from previous conversations we’ve had, I’m really looking forward to hearing what you have to say on this topic. Then we have Sam Hammond at the very end, who has one of my favorite X handles, which I think is Ham and Cheese, a very nice play on your name. Sam is a senior fellow at the Foundation for American Innovation, which I think is one of the most interesting, and, as the name suggests, actually innovative think tanks out there doing great work on AI.
And I’ve enjoyed partnering with you and some of your colleagues on a few skirmishes earlier this year. So with that table setting, let’s dive in. I want to start with Bill, because, Bill, earlier this year you published a report that got into some of the immaterial questions in the AI race with China. There’s one little snippet that, to me, reads as almost making the case for the importance of the ideological nature of AI, and I want to briefly share it with all of you. It says, “While economic and military competition undoubtedly make up the backbone of the AI rivalry between the United States and China, success in the struggle for AI advantage requires far more than simply accelerating technical capabilities beyond those of the adversary.” Can you unpack that? Why does the immaterial side of AI specifically matter for US strategy with China?
Bill Drexel:
Yeah, it’s a question that I think we don’t think enough about, and that’s because we get so focused on just winning, whatever that means, winning the race, that we lose sight of the destination. And the destination really matters, because ultimately this is, to a large degree, a battle between systems. We talk about the battle between systems to produce the technology, but it’s equally a battle between systems over what we want the technology to do. What’s the end state? China has a very clear, specific vision of how it normatively wants to apply this technology, and in terms of individual rights, liberty, and the future of governance, it’s a very dark vision. We take a much more organic approach, and in general I think that’s a good thing: technology grows best when it’s organic. But there are trends.
We ultimately want our end state to be attractive. And you could imagine a world in which our technology continues down a path that exacerbates loneliness, depression, polarization, and dysfunction in society without any corrective, because AI incentive structures all head in that direction. That’s not going to look good in the soft power competition between our ecosystem and theirs. Another layer worth mentioning is that tech competitions between superpowers often have a symbolic import that is presumed to translate to hard power, but doesn’t always, and it’s still very important. Take the Sputnik moment and the space race. That was decisive. Well, maybe not decisive, but it was extremely important for the arc of the Cold War. Yet in hard power terms, space ended up not being that important in practice. It was the perceptions, the sense of progress, the question of who was harnessing technology for the betterment of humanity, that made the rest of the world envious of one system and more distrustful of another. And so if we lose sight of that, I think we miss the forest for the trees. At the end of the day, there’s also a moral element. We don’t just want to be the dominant superpower because we like power, although there is self-interest, of course. The direction that China could pull this technology matters because AI uniquely has moral embeddings: they’ve already mandated that their frontier AI must imbibe core socialist values. It’s a technology that uniquely has this ethical valence, and we need to think much more deliberately about what we want our end state to be.
Michael Sobolik:
You said something in addition to imbibing core socialist values, which I hope we’ll unpack as the panel goes on. But you said that there’s a clarity of vision in China for how this fits in. Kirsten, I want to turn to you to unpack what that looks like. From your understanding of the Chinese Communist Party, and of what ideology has meant for the party historically, through its own iterations since 1949, up to Xi Jinping and the party today: when they see AI, as they have seen past transformative technologies, how does it fit into their framework as a single-party dictatorship, as a Leninist party? How do they view AI in terms of where they want the party, and a party-led China, to be going?
Kirsten Asdal:
Well, thanks for having me, Michael, by the way. As Bill said, it’s actually been articulated quite clearly. AI is just the latest in a long line of science and technology resources that the party can harness to help move society toward the inevitable end state of socialism with Chinese characteristics, or ultimately communism. The Marxist worldview is pseudo-religious: they’re technically an atheistic regime, but in place of religion you’ve got this worldview that says the way to progress, to advancement and flourishing, is through S&T advancement.
And it has been all the way back to the 1920s, when they were formulating this in the western mountains of China. So AI definitely has the purpose of an engine of new productive forces; that’s what they call it. It’s a means of production, or an engine of the means of production. But it’s more than that, right? AI has this unique capability to shape truth, shape data, shape narratives, because it harnesses data and words and meanings and manipulates them.
Not all factors of production have that ability. So this gives the party a unique hold on a new form of political control, and that, I would say, is the second way they view AI in terms of where they’re going as a party and how they need to engage with it: power over the pen, discourse power, so to speak. The third, I would say, is political control in the Leninist sense. We talked about Marxism; in the Leninist structure, they want to push party control down as far as they possibly can, into the palm of your hand, into every decision you’re making, into every piece of information you’re consuming, every interaction you’re having with your neighbor, with society, with the government.
And in their AI Plus plan, which they proposed this past August, I believe, they intend to diffuse AI, the language is a little vague here, across essentially the entirety of their economy and society: 90 percent of it by 2030. That’s a very short time from now. That’s five years away, and almost the entirety of life, commerce, and economics in China will be AI-enabled. So every interaction you have in society or in commerce will be digital, which is almost already the case in China.
And it will be powered by some kind of AI interface, some intelligent terminal or something. That allows the party, which controls and has a final veto over these algorithms, to sit right inside the decision-making cycle of every citizen, to have its voice heard and to shape those decisions.
Michael Sobolik:
Let’s talk for a moment about America, and then we’ll start weaving these two sets of observations together. Sam, a lot of the debates we’ve heard on the prior panels have been America-centric: debates about diffusion of technology versus making sure that we keep enough of a compute edge on China and enough compute here at home.
We have raging debates in the United States about a state patchwork of AI regulations versus a national standard. And that debate in and of itself is interesting, because it’s usually framed as letting big corporations do whatever they want versus constituent concerns. But it is also such an American debate to have, states’ rights versus federalism, and I don’t often see that AI debate coded in those terms, but that is what it is.
So when you look at the United States, again not at the physical tech stack of AI but at America as Americans, the way we view the world and, importantly, the way we view ourselves: how do you think we are thinking about where we are moving as a democracy with artificial intelligence?
Sam Hammond:
That’s a fantastic question. Maybe I can tie this in with China via contrast. In the West, and in America in particular, we valorize the sovereign individual, and I think that’s reflective of a certain Christian cultural heritage. The idea that we render unto Caesar what is Caesar’s, and the separation of church and state, was in part motivated by early Protestant colonists who wanted their own private realm for worship.
And you can think of Chinese military-civil fusion as the flip side of that: there is no separation between the public and the private realm. That’s reflected more broadly in how each side is approaching AI technologically. In China, many cities and local authorities will brag that they can identify every person on the street within a microsecond or a second, based on large databases, AI facial recognition, gait recognition, and things of that sort.
People who park outside the lines of a parking spot will be instantly fined. There’s an organic whole to their approach, where everyone is part of this Confucian harmony and order. It’s similar to my body: I’m made of trillions of cells, which all began as individual organisms, and now I have this executive function that becomes the dictator controlling the way things go.
The US, in contrast, is I think much more like a man-of-war: a superorganism made up of many organisms that all have their own motives and motivations. And one of the big challenges, going to Bill’s point about the end state, is how we reconcile the intrinsic powers of AI for surveillance, censorship, and social control with our open society and the respect for individual privacy and individual rights that comes with it.
Michael Sobolik:
So when we look at Washington and Beijing not just as capitals that enact foreign policy, but as two different civilizations and two different political cultures, let’s start with Sam and then go down the panel toward me. Where does artificial intelligence stand to affect the political-social ecosystem that the Chinese Communist Party attempts to oversee and control? Where does AI make things easier for them, and where does it make things difficult? And where does AI strengthen, and where does it challenge, American democracy?
Sam Hammond:
I mean, for years it was part of US national grand strategy, if you will, to promote the open internet. The internet, we believed, was a democratizing force because it allowed people to exercise individual expression and to coordinate, and we actively pushed it out into developing countries. And then we had the Arab Spring. The Arab Spring was partly a byproduct of everyone suddenly having Facebook without the customs and traditions that build the antibodies for debate and free expression. But it was also something that China watched really closely, and in response it began investing in its surveillance state, its internet firewalls, and its information controls, because it saw the power of information technology to subvert the regime.
But this technology is incredibly dual use in that sense, because it can both serve to subvert, right? If we can drop in open-weight LLMs that can talk about any issue that would otherwise be censored, that becomes very difficult to censor. But the technology also has a flip side that makes it much easier to control what people are seeing and doing and saying. So I see us on a knife-edge path: some countries like China, given their initial conditions, will become more fortified and more locked into their authoritarian model, making it much more difficult for resistance to emerge, whereas in the US, given the level of fragmentation, the degree of disagreement, and the lack of state capacity and political coordination, I see AI being much more fragmenting, and that potentially leads to its own form of surveillance.
It could be a kind of bottom-up surveillance. I recently watched the Ring keynote, where the Ring camera company was unveiling a new feature: if you lose your dog, you can upload your dog to the app, and any Ring camera that has opted in will immediately alert you if the dog runs by someone else’s door. We’re potentially having our cake and eating it too if we can harness the surveillance capacities of these technologies in a more voluntaristic way.
But it also could lead to its own sort of lock-in because when you have the state providing those kinds of services, they can also turn them off or choose to reform them, but if this is something that happens organically bottom up, it becomes much harder to reverse.
Bill Drexel:
Yeah, one thing I always want to highlight: the conventional wisdom we hear right now is that AI is primed to help the bad guys in bad ways and to hurt us, hurt our system. The bad guys can surveil with much greater granularity on many more levels and crunch the data in more sophisticated ways, while we are creating tools for deepfakes, which make it very difficult to have robust conversation. And this is the opposite of the internet discussion.
The internet was going to be good for the good guys and bad for the bad guys. The internet turned out, if you go by popular discourse today, to be good for the bad guys and bad for the good guys. My own perspective is that I don’t actually think any of this is intrinsic to the technology. I think it’s a question of will.
So the internet was developed with a kind of libertarian bent structurally, but the CCP funneled billions and billions of dollars every year into creating technologies that wrested it away from those founding structural principles and toward a particular goal. And every year now, tens of billions of dollars go into creating and accelerating a pro-authoritarian AI ecosystem. That’s what I mean when I say it’s important that they have this clear moral, strategic, governmental vision of what they want from the technology.
It actually goes back further than what we call AI. Back when what we now call AI was called big data, there was a really interesting op-ed in the Washington Post from a PRC official, at a time when China was still opening up and people were still naive about the party’s trajectory, that basically said, “Maybe this big data thing can make true communism viable, because we can crunch the numbers and predictively analyze what resources are going to be needed where,” et cetera.
So the impulse goes very deep, it’s a very natural ambition for them, and they’ve been building this ecosystem for years. And not only have they been building this ecosystem for years; we have been building it for them for years. We laid the foundation, they’ve been building on top of it, and to an extent we are still laying the foundation. So it’s going to look daunting, like the deck is stacked against us, if our adversary has the momentum of tens of billions of dollars over several years building an ecosystem in the opposite direction, and we don’t have an equal and opposite vision, let alone an ecosystem or funding channels for it.
We had a President’s Council on Bioethics. We should have a President’s Council on AI that looks at these kinds of first-order questions and asks where we can funnel attention, energy, and funds toward creating mechanisms that support democratic governance instead of ones that are purely profitable. The only government that has really started to move in this direction, I think, is India, with its digital public infrastructure stack, which is not AI-specific, but that would be the next generation. I think there’s a long runway for us to go.
Kirsten Asdal:
I agree with Bill. On his point that tech is basically neutral and it depends on who’s implementing it, how they’re using it, and toward what end state, I am in that camp as well. And I don’t even think you can say that the internet was good for the bad guys and bad for the good guys; we obviously enjoyed tremendous opportunities and tremendous economic growth from the advent of the internet. But I would say, for example, we thought the internet would be too difficult for the Communist Party to overcome and that it would democratize China.
And they overcame it, basically with the technical help of US companies from the earliest days. That’s true. And to your question, Michael, I see the same kind of difficulty they had to overcome with the internet as their most difficult challenge in controlling AI and exploiting it for political advantage vis-a-vis the US: it’s going to be extremely costly and extremely labor intensive to keep trying to manage and shape every model that’s out there, every implementation they’re diffusing.
As I said earlier, they’re diffusing it across basically the entire economy and society in the next five years. That is a wicked problem to solve: you have to be there as it’s getting implemented and monitor it. Even if you have a hundred million cadres, that’s still hard to do. A lot of people don’t even understand AI, much less can they monitor the decisions it’s making, the inputs it’s drawing on, exactly which data sets it’s pulling from, and whether that’s aligned with party interests.
The easiest thing, I would say, goes to Sam’s point about surveillance. This is a natural advantage for autocratic states, where the amount of data you can scoop up on individual citizens is just ridiculous. And we’ve already seen it: China is already a techno-surveillance state, and AI just exacerbates that, because it can scrape enormous data sets and find patterns that humans would take months to put together.
To your last question, about American advantage: I think the debate about AI is itself our core and most long-term advantage. The very fact that we have different bodies and entities and civil discourse about this, private sector versus government, NGOs, different religious groups and rights groups and all the rest, the fact that there will be friction in implementing this, is our advantage, because China is going full speed ahead. They’re betting all their chips on AI implementation being a net positive for their society. But we don’t know that’s the case. I’m sure everyone in here can envision one or two major downsides just off the top of your head.
We have the advantage of thinking critically about every one of those implementations: deciding whether there’s a market need, whether it accords with our values of individual rights and liberties, whether it values the human and the unique contributions of the human over the machine. I think that diversity, variety, and critical thinking will, over the long term, be our advantage over the monolithic approach of the CCP.
Michael Sobolik:
So many different threads I want to pull on. One thing I will tug on before getting to the public interest versus private interest dichotomy in America: all three of you talked about the internet, and this is a reference I actually wasn’t planning to talk about, but let’s do it briefly. Listening to all three of you, the clip running through my head was President Clinton in the ‘90s, when he was asked about the risk of the internet going to authoritarian regimes, and this might even have been in the China context. He said trying to control the internet would be like nailing Jell-O to the wall.
It was a visceral image: all of us can see it happening, and of course trying to do it would fail. But as it turns out, the internet isn’t Jell-O. And I think Bill’s point about the will of either system to marshal technology toward its interests and its own political identity is compelling. But let me ask all three of you, and let’s keep this to maybe 30 seconds to a minute each: if we were collectively wrong, which maybe is an oversimplification, but if America broadly was wrong about the internet in the ‘90s and early 2000s, wrong that the authoritarians wouldn’t be able to handle it, what assumptions do each of you think we’re getting wrong about AI right now?
Kirsten Asdal:
Do you want to go first? Go ahead.
Bill Drexel:
Well, I just think it’s the same assumption in reverse. It’s a techno-determinist assumption: that the technology wants to go in a particular value-laden direction, and not that we have the agency to direct it in a different one. That’s what China understood then, and it’s what they understand now. It’s what we didn’t understand then, and it’s what we don’t understand now.
Kirsten Asdal:
I would say, and I’m not sure this is a fully baked thought, I’m open to brainstorming on this, that it’s the assumption that we can control the technology. And this isn’t a robots-are-going-to-take-over-the-world kind of point. But if you ask the top software engineers and coders at these top labs, you will get to a point where they do not know. They cannot articulate exactly how the model knows what it knows and where an answer came from.
Yes, there’s sourcing, and we’re increasingly governing sourcing policies and so on, but the logic behind it, how it thinks, what it’s pulling from and at what weighting, they don’t know. There is a mysterious thinking process that is not spelled out. And I think China is applying the opposite base assumption to its control mechanisms here: that it can just apply the right inputs and make sure it’s all done in accordance with party interests. That’s a vulnerability for them.
Sam Hammond:
Yeah, sure. Playing off Bill’s point, I think we give technology the telos that we think it has. In China it will be centralizing, but I think one of the mistakes we’ll make is to lean into the Peter Thiel line that crypto is libertarian and AI is communist. AI is communist for the communists. For us, I’m much more worried about insufficient coordination: rapid AI diffusion putting capabilities in the pockets of ordinary people that used to be reserved for CIA agents, leading to a lot of fragmentation and balkanization that gets away from us.
Michael Sobolik:
Okay. Another thread you all mentioned earlier: it’s not just two dichotomous systems. Not only is there overlap and bleed-over between the two, but in many cases American companies have been at the forefront of building China’s own AI stack, instrumental all the way from upstream to downstream applications. The example many China watchers were following around 2018 and 2019 was Thermo Fisher, a Massachusetts company that was involved in sending, I think it was genetic sequencing technology, to authorities in Xinjiang for use in the execution of the Uyghur genocide.
It was only in response to a huge PR push that Thermo Fisher changed its stance. But Thermo Fisher is only one of several examples of American companies that have chased private interest, shareholder value, and revenue streams to China, and in many cases they do so legally. So when we talk about American values and this interplay between the public interest and the need for our own companies to be competitive, how should we think about it in the AI race right now? Sam, we’ll start with you again, then come back down.
Sam Hammond:
Yeah, this goes to the level of nation-states, where there’s a kind of Darwinian evolutionary selection that could take place. I worry that China’s degree of fusion between the public and private sectors may actually be more fit for that competition. They can pull off epic forms of coordination that we struggle with. Five years ago, the tech policy people were talking about delivery drones, and we still don’t have those, because that takes an awful lot of coordination. They have them in China.
It’s been reported that during exam season they paused access to LLMs so students couldn’t cheat. That’s an incredible capacity for picking a good package deal with this technology. And it also applies to the way they treat their companies and instill national loyalty: when Jack Ma stepped out of line at Alibaba, he disappeared to an island somewhere for six months. We, by contrast, are leaning on our companies to adopt a . . . not corporate social responsibility, but a corporate patriotic responsibility. And the extent to which we can do that is only through a bully pulpit and through aligning the executives at these companies with the national interest.
Otherwise, they are global citizens who are free to export their technology abroad and play both sides. And we see this in particular with AI, not just in the examples you gave: NVIDIA is lobbying aggressively to expand its chip sales and exports to China. That is essentially selling them the ammunition they’ll use to surveil us and drive their own autonomous cyber capabilities, and so on.
That’s good for my 401(k), but at some point we need to decide whether a $4.5 trillion company becoming a $5 trillion company is worth the national security trade-off, and how we, in an American way, without becoming like China, orient our companies to national interests.
Bill Drexel:
Yeah. I mean, I think that’s the challenge. There are a gazillion examples of how we’ve built China’s techno-state. Two of my favorites: IBM basically built Huawei, and if you look at the AI companies we’ve blacklisted, their CEOs are a who’s who of 1990s Microsoft Research Beijing. But in both of those cases it was hard to predict at the time where this was going, at least much harder than it is now.

I don’t think it’s hard to predict now. In NVIDIA’s case, I don’t think it’s hard to predict now. And I think this whole problem is downstream of the bigger problem we’ve had in our struggle with China, which is that the American public consciousness still has not woken up to the extent of the threat that the CCP regime poses to the United States and to human flourishing.

I agree with you that it’s hard for us to marshal that level of coordination, but it isn’t always. If the American public is alerted in a way that is salient and really sounds alarm bells, things start to move. It’s just that the CCP has been very effective in its PR, even to this day, in making people think it’s not as big a threat as it is. And I think it differs sector to sector.

In AI, it’s pretty clear where this technology is going in China, and it’s pretty clear that it’s very bad. A more ambiguous case, where I think we haven’t woken up, is biotech, where China has really unhinged ambitions that are direct threats to human flourishing generally, and also to national security. And our companies simply don’t under . . . Under the auspices of medical advancement, which is a good thing, they don’t understand that if you don’t take action now, you, like IBM, like Microsoft, like Seagate, like Apple . . . well, Apple doesn’t seem to regret it yet, will regret what you’ve done. There are sectors we can focus on, but the real task is to move American consciousness to recognize the threat that China is.
Michael Sobolik:
We could not agree more on biotech, by the way. Kirsten?
Kirsten Asdal:
I would add that the moral framework these companies operated under in the latter half of the 20th century was fundamentally different from the one they’ve operated under in the last, say, 40 years. And that has shaped what they believe their moral imperatives are around the world. So, Michael, I love that you started with the Cold War analogy, because during the Cold War there was a very clear moral framework for the world.
There were good guys and bad guys. We had an enemy out there trying to defeat us; they wanted to kill us, they threatened our way of life, they had a very different worldview, they wanted the world to look different than we did, and they threatened democracy. And so companies naturally aligned with that. You would not have sought tech partnerships with Soviet companies the way we do now with Chinese ones. Your tech was supposed to enable American military and technological advancement, and dominance. It was a national campaign.
Contrast that with the post-truth era. For these global companies, the moral framework has been globalization; for the scientific companies, it’s science for the sake of scientific progress for all mankind. And it just blurs everyone together. We can debate whether that’s good or bad, and I’m more of a humanist than someone who divides the world into blocs, but it means there is no us versus them anymore. They actually feel it is morally good to be serving all the people of the world through the most robust global supply chains they can possibly build.
The companies I have worked with over the last few years that have actually engaged with decoupling are the ones that have returned to, or always had, that older moral framework, as opposed to the ones still in the globalization era. The ones for whom globalization remains the utmost imperative are very hesitant to talk about the threat from China, or to really internalize the idea that the party has a different vision for the world, that it threatens our way of life, and that it wants to subsume us, or maybe not subsume, replace us. The companies that have taken a different approach to the Chinese market are the ones with a deeply held core assumption and conviction that the world will be different, that we’re going back to that bloc-style globe, and they have the courage and the vision to actually do something about it now.
Michael Sobolik:
Interesting that you bring that up. I think one such company, and for those of you who were here from the start of this morning, Tarun was not able to join the first panel, is Anthropic, which recently made an interesting decision to withdraw all of its products and services from PRC clients. They don’t offer their AI suites or services to PRC actors. A lot of companies get a lot of press in the AI-China debate, and some of you have talked about a number of them, but I think, Kirsten, you’re right that there are companies with a more updated assumption of where things stand geopolitically.
With the short time we have left, I want to ask a very practical question, and then we’ll see if we have time for one audience question afterwards, but I can’t promise. This has all been very conceptual, and I think valuable, but for policymakers, whether in Congress or senior and mid-level officials in the administration, this is what they’re living and breathing, and it’s easy to think about the stack and what you can see.
It’s easy to think about chips, energy, talent, and data. But when we talk about not just the idea of AI but the principles behind it, AI for American democracy versus how the party views it in China, what does this mean for them on a day-to-day basis, as all these decisions about export controls and legislation come across their desks? How should this conversation inform policymakers as they wrestle with the physical, material side of artificial intelligence?
Kirsten Asdal:
Well, I’ll throw something out, I guess. As I’ve been grappling with this recently, what I found super helpful, and I know giving policymakers reading lists is not useful, so maybe this is for staffers, or somebody could write a short one-pager on it, is to read Solzhenitsyn, to read the people who survived Soviet oppression and the subjugation of their minds, where you’re not allowed to think anything other than what the regime wants you to think: thought crimes. Read 1984; go refresh yourself on Brave New World. Read C.S. Lewis. And that’s just the 20th century. These are people who grappled with the more theological, meta questions of the use of these technologies by autocratic, totalitarian states to oppress freedoms: freedom of the mind, freedom to go where you wish, freedom to say what you want and associate with whom you want, freedom to practice religion, to have gods before the Communist Party, before the Soviet communists, and by that I mean gods above them. This is just the latest in a long line of this rich, rich debate over how man relates to technology, to the machine, to God, and what our meaning is. So I like to look back, actually.
Bill Drexel:
Yeah, I think the temptation for any policymaker hearing this is to think, “Yeah, this is bad. It’s very interesting academically, but we just need to get our export controls right,” or what have you.
I would encourage them instead to really be disturbed by this asymmetry: China has a very clear vision and is pursuing it, developing an ecosystem toward techno-authoritarianism, while we lack a vision. And instead of keeping this an academic question, look to examples where we’ve actually bridged policy and these sorts of governmental questions before, like the President’s Council on Bioethics. We need a President’s Council on AI that is able to direct policy affirmatively rather than reactively on a lot of these issues.
Relatedly, the other example I would look to has a gnarly long name, the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, I believe it’s called. Basically, it’s the State Department’s program to try to avoid the worst outcomes of killer robots. We’ve had several dozen signatories, it’s being implemented, and it’s actually moving the needle a little bit, hopefully, on AI-empowered warfare and avoiding the worst outcomes.
We need the same thing for AI-empowered biotechnology. And again, this is an area where there’s a huge gulf between what we’re thinking and what China’s doing, and if we don’t act now, we will regret it.
And the last thing I’d say is that if you want to compete with China on rolling out AI to governments around the world, especially those where states are weaker, that is to say the global south, you just have to realize that we have not been winning, and we will not win unless we partner with a country, let’s call it India, that has the scale, the price incentives, and the will to develop an alternative. That is another area where we haven’t taken action, but we could.
Sam Hammond:
I guess the first point is that I believe we’re no more than five years away from autonomous AI systems that can do essentially any desk job, and that obviously includes the job of a bureaucrat. One of the ways we protect against authoritarianism, at home and in other countries, is the fact that if a leader tries to exert undue authority, people can resign; there are humans in the loop who can actually resist. Once that part of the government stack is fully automated, it becomes much easier to envision a small group of people, or even an individual, taking control of a country. So given that timeline, the first point is that we’re under a lot of time pressure.
A second point is that these things are very sensitive to initial conditions. If we set values in these systems today, they may reverberate, becoming embedded and carried forward into the systems of the future. And we have to recognize that a lot of this is a package deal. You can wish for a world where we get only the upside of AI and none of the downside, but the fact is we’re going to get both. So how do we use those initial conditions to build better values into the technology, and then export that abroad?
The example I always go back to is Palantir during the 2000s. Their original thesis was, “We’re going to be doing this counterterrorism work. We’re going to have this surveillance technology. We should at least engineer privacy and civil liberties values into the technology, so that if the person at the NSA tries to spy on their ex-girlfriend, there’s an audit trail, or they don’t have the permission rights.”
Then you have a full way of auditing what actually took place. I think we’re going to need similar investments not just in the defensive side of AI but in the pro-privacy, pro-liberty side of AI, so that as the technology rolls across the world, and I think it will be very destabilizing, the many weaker countries that will be demanding technology to control their populations and restore civil and social order can get it in a way that respects rights rather than ignores them.
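A minimal sketch of the pattern Sam describes, permission rights checked up front plus a tamper-evident audit trail, might look like the following. All names here (AuditLog, POLICY, query) are hypothetical illustrations, not Palantir’s or any agency’s actual system; the point is only that every access attempt is checked against a policy and recorded, allowed or denied, before any data is returned.

```python
# Hypothetical sketch of permission-gated, audited data access.
import hashlib
import json
import time


class AuditLog:
    """Append-only log; each entry chains the hash of the previous one,
    so after-the-fact tampering is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "genesis"

    def record(self, event: dict) -> None:
        entry = {"time": time.time(), "prev": self._last_hash, **event}
        raw = json.dumps(entry, sort_keys=True).encode()
        self._last_hash = hashlib.sha256(raw).hexdigest()
        self.entries.append(entry)


# Toy access policy: which analysts may query which data categories.
POLICY = {"analyst_a": {"counterterrorism"}}

audit_log = AuditLog()


def query(analyst: str, category: str, subject: str) -> str:
    """Check permission first, and log every attempt, allowed or not,
    before any data is returned."""
    allowed = category in POLICY.get(analyst, set())
    audit_log.record({"analyst": analyst, "category": category,
                      "subject": subject, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{analyst} lacks rights for {category}")
    return f"results for {subject} in {category}"  # stand-in for a real data store


query("analyst_a", "counterterrorism", "case-123")      # permitted, and logged
try:
    query("analyst_a", "personal-records", "case-456")  # denied, and still logged
except PermissionError:
    pass
```

The design point is that the denial itself leaves a record: an auditor can later reconstruct who tried to access what, which is the “full way of auditing what actually took place” that Sam refers to.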
Michael Sobolik:
Amen. That’s right. That’s right.
We could spend much more time on this, but we are five minutes over and I want to be respectful of everyone’s time. We’re going into a 15-minute break, so if the panelists are able to stick around and you have questions you want to ask them, feel free to come up during the break. We will reconvene at 2:40 for the final panel. Please join me in thanking this panel. Thank you.
Panel 4 | AI on the Battlefield: Evolving Military Implications
Tim Walton:
Well, good afternoon and welcome to the Hudson Institute. It’s a delight that so many of you have taken the time to join us for the final panel of the day, which is on the military implications of AI.
We’ve heard throughout the day that AI is suffusing all aspects of our society, and military operations are really at the heart of that as well. In our Center for Defense Concepts and Technology, we’ve been exploring how AI-enabled decision support tools across echelons are unlocking radically new ways to employ the force, and how the growth in the number of uncrewed assets in particular is generating opportunities for much higher levels of scalability. That combination of AI tools plus more assets allows us to employ the force differently, and it could usher in a new revolution in military affairs that allows military forces to fight in decisively different ways than they have in the past.
Many countries around the world are pursuing advantage in these areas. We saw Israel conduct operations in Gaza and over Iran leveraging new AI-enabled targeting tools. We’ve seen Ukraine and Russia field new classes of automated target recognition tools and decision support tools at the tactical and operational levels to guide how they structure their communications, how they plan and execute their operations.
And the People’s Republic of China is also marching steadily ahead in this area. As we heard earlier this morning, the Chinese Communist Party has established that it will pursue “extraordinary measures,” as Jimmy Goodrich stated, in pursuit of AI dominance, and the People’s Liberation Army is doing just that. It has fielded the Integrated Command Platform, an operational-level AI tool with both automated and now agentic elements for planning and executing operations dynamically. And it is incorporating AI capabilities at all echelons of its military forces.
Beyond the algorithms and the decision support tools themselves, China has a world-leading industrial capacity. It is annually producing millions of internet-of-things devices and tens of thousands of other military and dual-use systems. That combination, again, of AI plus scale is something that we as allies will need to confront directly.
To discuss all of that, the future of AI, and the near-term opportunities to gain advantage, we have an illustrious panel of experts with experience in industry and government. I’m going to introduce the three of them. Our first is Shyam Sankar, who is a lieutenant colonel in the Army Reserve. He’s also the chief technology officer and executive vice president of Palantir. And last, but most importantly, he’s a Hudson trustee.
Second, we have Dr. Lorenz Meier, an inventor who has fielded a number of highly important communications protocols and hardware tools now used in drones across the world. He’s also the founder and CEO of Auterion.
And lastly, we’re joined by Matt Cronin, who’s the senior national security advisor at Andreessen Horowitz. Prior to joining his current firm, he was the chief investigative counsel for the US House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party. It’s a long title. He’s also served as the director for national cybersecurity at the White House and in a number of other government and legal roles.
In our discussion today, I’m going to ask each of them to share some of their initial thoughts on this emerging military competition involving AI and then ask maybe a few more questions and open it up to all of your questions and comments. Shyam, can you kick us off please?
Shyam Sankar:
Yeah, so I’m going to take it as a given that the audience believes AI is going to change how we fight, that it’s a pretty profound opportunity for a change in military affairs. I think some of the aspects that get less focus, but are actually going to be determinative of whether we adopt these inventions first, really come down to recognizing that AI doesn’t change the problems that are worth solving; it just changes how you’re going to solve them. The problems you needed to solve to be the most lethal military in the world before the AI revolution are the same problems you have to solve now. And I think that speaks to the need for experiential learning. The sort of hand-wringing that comes from “let’s study the problem, let’s think about what problems we should be solving” is a complete waste of time. You have to steal a march and seize the initiative here.
One of the reasons I think this is going to be really hard is that it’s very disruptive culturally, internally. There’s a power law dynamic to AI. There’s all sorts of friction that limits the capability of the individual within these organizations: maybe a limitation in competence, or “Well, I am the subject matter expert, but I’m not technical,” et cetera. AI flattens these frictions massively.
So the ceiling on what a single individual is able to accomplish goes way up. Now you have one person who might be as productive as the next 1,000 people, and our military structure struggles to deal with that. But if you want to be the most lethal military in the world, you’re going to have to learn how to reconfigure yourself to capture that. And what you see broadly playing out in the commercial sector is that people with specific knowledge are the most valuable humans to give AI to, right?
It is actually the blue-collar worker who operates the machinery who figures out how to build the AI applications that add value to the company in a way the white-collar worker never can; they don’t understand the business. So the people who understand the doctrine, the people who understand fires, the people who really know their military craft are going to be extraordinarily valuable. And we’re going to have to reorganize ourselves around them. We’re going to have to get back to a World War II culture of field promotions, of merit-based rank: who actually should be in charge of this thing based on their aptitude to do it.
These are not skills; they’re not things you go to a schoolhouse to learn. It’s aptitude, it’s unevenly distributed, and you’re going to have to confront that reality.
Stepping one level out from the person: recognize that nothing has changed about the OODA loop. It’s just that in this case it’s not about out-executing. That’s historically been our focus: we’re going to out-execute the adversary. AI is going to enable us to out-learn the adversary, and that first derivative is the thing we’re really going to have to focus on here.
And I don’t want this to go unsaid. You mentioned it a little: we spend a lot of time thinking about the revolution in military affairs in the foxhole. I think AI has just as profound an implication, if not more profound, for the factory floor and our ability to produce mass, to get that mass to where it needs to be, and to think about where it even ought to be. That drives a significant amount of deterrence and lethality.
Tim Walton:
Thank you, Shyam.
We next have a man who knows the business of drones deeply. So Lorenz, can you share some of your thoughts?
Lorenz Meier:
I think it’s a perfect setup, thank you, because I’m coming straight from the cyber-physical domain, where we have autonomy and AI. You can combine AI with the fact that drones, for example, can reach places we can’t reach, because they can fly, obviously, and you get to something that I’m actually struggling with what to call exactly. We might call it super-autonomy, where autonomous systems combined with AI are able to outperform anything we can do today in a way that is transformational. We’re seeing that on the battlefield in Ukraine, but only the beginnings, because make no mistake, those are still manually piloted drones.
And so my focus is, people are telling me, “Yeah, in the future we’ll have swarms,” and I’m like, “What? We flew a 20-drone swarm with the Marines two weeks ago. It’s already happening.” Thank God the Marines were the first force in the world, sorry, Army, to fly a swarm. No near-peer adversary has done that with a strike capability. Formations and everything, yes, but we’re doing it with the Army with live fires today.
But I think the really important thing is once you start to combine AI and learning. Let me give you a specific example: striking an air defense battery. Now you have automatic target recognition. You can fly without GPS, because you can do AI-based navigation against maps. And you have a physical system that is now distributed: many drones, hard to shoot them all down, swarms. Now you get to something that is so far superior to existing weapon systems that it really changes how we operate.
The other piece, which goes to the skill distribution you mentioned: you can now have one operator bringing a thousand drones onto the perfect target, and it no longer matters that that one operator previously could only fly one drone. You could have the best tactical mind on the battlefield; before, that mind could only bring one munition onto the target. Now you have to decide which of my operators is actually going to get command of that whole set. So the same leverage you have in the information space, you now get in the physical space.
Tim Walton:
Thank you, Lorenz. Matt, over to you please.
Matt Cronin:
Yeah, sure. So hey, everybody. Three initial thoughts. First, just to level-set on where we are in terms of China and warfare: China has spent the past three decades tirelessly working to custom-build its military to asymmetrically defeat ours. And by their own accounts, they’re doing a pretty darn good job of it.
What happened is that in the past few years, AI just kind of randomly got thrown into the mix as an X factor, and that has made them very nervous. There’s a whole funny side story about how they actually had these massive laboratories filled with primates to try to figure out AI, and then America did what it does and just accidentally figured out AI through chatbots. They were focused on this issue, they knew it was important, but they didn’t get to it first. And they now realize that what AI is doing, both in the digital and the physical realm, is allowing us to attack this multi-decade plan laterally.
So all of a sudden, they’ve built their entire rocket force and navy to defeat our navy, which was based on a small number of exquisite, highly capable, very expensive, very slow-to-reconstitute ships, and all of a sudden we can have a very large, attritable, autonomous fleet. Same thing with electronic warfare, which they worked tirelessly on: all of a sudden we have capabilities, from one of our companies, CX2, for example, that can take those EW emissions and turn them into beacons we can immediately attack.
They’ve taken all of our manufacturing away, and we lost much of our skilled workforce. But we now have capabilities, using in large part AI, to create new manufacturing abilities that can bring us back up toward 1940s maximum industrial production levels relatively quickly. So that makes them very nervous, and it’s an opportunity we should exploit.
Second, I think we should have a more expansive idea of warfare, because the Chinese do. Whether it’s the smokeless battlefield or the three warfares, they do not see conflict as purely kinetic the way we conventionally do as Americans. And I worry that because of that, we are asleep at the wheel in very important areas where they are building AI capabilities and we are not figuring out a coherent response.
That could be in psyops. If any of you has kept tabs on Chinese bots on the internet: three years ago, they were rather pathetic and very easy to notice. Now I would say they’re actually fairly sophisticated. Unless you’re really initiated and aware of it, you could be fooled relatively easily. The same with their exporting an authoritarian AI surveillance model around the world to try to make authoritarianism easier and the default for governments. And cyber is another area as well.
Finally, just to really zoom out, it’s important for us to think through and answer the question of what type of military AI enhances better. There’s the Western model, which is decentralized: devolve authority down, initiative, mobility. At the platoon level, you understand your strategic, operational, and tactical objectives, and you just figure it out. You can operate laterally, with initiative.
And at the other end, there’s the Soviet-Sino centralized model, which maximizes authority in a central node. The people are effectively treated as drones, cannon fodder, who go and do the will of that central node without even knowing why they’re doing it.
And I have some intuition that AI is actually better for our system if implemented properly, but the Chinese certainly think it’s better for theirs, that it will allow them to finally do what every authoritarian has longed to do since time immemorial: have absolute knowledge and control on the battlefield. We need to make sure they don’t, and make sure that our system, thanks to AI, is vastly superior.
Tim Walton:
I wanted to build on that: how do we get our systems to be superior? What are some of the near-term evolutions, and where do you see AI going in terms of enabling military capabilities over, say, the next five years, 10 years, and beyond? Shyam?
Shyam Sankar:
I think it lets us move from a mental model of deconfliction, as we plan these things, how do we deconflict, to a model of real-time synchronization. And if you go with this, I would offer to you: is the right model centralized or decentralized? It’s like quantum mechanics. Is it a wave or a particle? It’s both. It’s the ability to materialize in the right organizational structure for this moment and rematerialize in a different one the next moment that is going to lead to decisive victory. And I think AI enables you to do exactly that.
And maybe a concrete way of thinking about that is: how quickly can the commander get down to actually directing the most tactical echelons? How accurate is the flow of information from the most tactical echelons up to the commander? There’s a lot of messy stuff that happens in the middle; a lot of it is necessary, but it also adds latency. So if you can have the best of both worlds, I think we win.
Tim Walton:
Yeah, better cellphone game, but then also better decision-making on top.
Shyam Sankar:
Yeah.
Tim Walton:
Lorenz, do you have any thoughts as well?
Lorenz Meier:
Not specifically to this, no.
Tim Walton:
Sure. But I’d ask, I guess, about some of the work you all have been doing in fielding new operating systems that are unlocking new classes of uncrewed assets, drones and the like. In the past there were some pretty high barriers to developing new classes of drone capabilities. Your company and others are now lowering some of those barriers and probably leading to a proliferation of even more classes of drones. Is that right?
Lorenz Meier:
I think so. And just to give you a perspective on scale, because that’s actually what I’m finally realizing people are surprised by: we’re shipping 10,000 units per month, which is a scale that is unheard of in the West. It’s not unheard of in Ukraine; I’ve been there seven times. So we’re doing that scale already. That’s one thing: autonomous mass for real, not conceptual, not hypothetical, but going into combat every month.
The other thing is we’re treating drones as flying computers. Right now the conceptual model, the operational model, is still that it’s a remotely piloted thing with a pilot on sticks. Once you move past that, you go further and you’re like, “Okay, this is a computer. How can I actually upgrade it all the time?” And one of the things we did is we literally built an app store and a fleet management system for defense drones. You can go, even as a field commander, and install an app. We actually built one, I didn’t make this up for this panel, that integrates with Athena. And it’s a very simple—
Tim Walton:
It’s an integration system, just for those who might not be tracking—
Lorenz Meier:
A targeting workbench, yes: Maven, the Maven Smart System. You might’ve heard about it.
So basically, now, without any sort of integration process, you just install the app. The app talks straight to Athena/Maven. You don’t need to create a standard for that; you just write it and you just deploy it. And you could do that as a combatant commander in PACOM or EUCOM or wherever it’s needed, relatively on a whim, with your own developers. You don’t even need to talk to us, the same way you don’t need to talk to Google to write an Android app and install it on the Android phones you have in your inventory.
And so that’s a fundamentally new model that we’re enabling. We’ve become the fabric that you can roll AI out on at massive scale and at a really decentralized, fast pace.
Tim Walton:
Thank you.
Matt, we have two other companies here on stage. One is quite large; the other is promising but still growing.
Matt Cronin:
Seemingly large. Yeah.
Tim Walton:
How are you viewing this market in terms of differentiation of capability? What stands out and how do you think the market’s going to evolve?
Matt Cronin:
So, like, defense and defense-adjacent AI?
Tim Walton:
Correct.
Matt Cronin:
Yeah. So I guess a couple of things. One, they have to have the baseline: they’ve gone through what’s called the idea maze. If you keep asking them questions, they actually deeply understand the issues. Great team, great product-market fit, the standard VC stuff.
Second, I think an important advantage for startups versus legacies is that they’re AI native. So when they’re approaching an important problem, they say, “Okay, I understand how to use AI from the get-go.” You’re not just bolting it onto a system that wasn’t designed for it.
And I think the final part is what I mentioned: they are going after a very important problem where, if you properly apply AI, you can drive a profound change in how that industry or that area operates. Another example would be one of our startups, Mariana Minerals. The mining industry is, no pun intended, dirty, and in many cases extraordinarily disorganized, and they can apply a software-first, AI-focused solution and gain all sorts of capabilities their competitors cannot.
Hadrian, in manufacturing, is another one. They are able to take someone who was flipping burgers or unemployed and, within six weeks, thanks to AI, training programs, and AI-enabled equipment, have them making parts that are literally used in spacecraft.
And then there’s of course the more formal military usage: hey, this is an AI-first system that allows us to engage autonomously in an EW-rich environment. All of these things profoundly change that industry or that military solution, and that’s what we’re really interested in.
Tim Walton:
Is there an AI bubble? I’ve heard talk of whether there might be an AI bubble in some other areas of the commercial sector, but in the defense world, do you think there is an AI adoption bubble?
Matt Cronin:
No, I don’t think so. One way to look at it is that with some technologies it’s a question of whether the time has come. The internal combustion engine was invented in the 1860s and didn’t get mass adoption until decades later. I in no way, and we in no way, think that’s true of AI. We are literally seeing, every day, profound applications and use cases being employed successfully in the field using AI. So there is no indication, particularly in this area, and I would say for AI in general, of some shallow level of adoption without deep application usage across society. In my view, there’s no real risk of a bubble.
Tim Walton:
Lorenz, I saw you grinning. Do you want to jump in as well?
Lorenz Meier:
Yeah. I get the same question on defense technology: “What happens if the war in Ukraine is over?” And I’m like, “Well, would you trust Russia the next day?” So no, I think for AI, autonomy, and the whole defense piece, we’re entering a completely new era.
Tim Walton:
And we’re not going back.
Shyam Sankar:
One macro and one very micro observation. The macro bit is that we lost deterrence: the annexation of Crimea in ’14, the militarization of the Spratlys in ’15, Iran approaching breakout capability for a nuclear bomb around ’17, the pogrom in Israel a couple of years ago. This whole thing is not about whether this one conflict is going on in this one theater. It’s about the restoration of deterrence and the rebalancing of the world order.
The very micro thing, on whether this is an AI bubble: I would point to the astonishing lack of GPUs on the high side. Not a single day goes by where our users are not rate-limited on their consumption of models, because there is no more capacity. Demand far outstrips supply, and I think that tells you where we are in the cycle, which is to say very early.
Tim Walton:
Shyam, you’ve published quite a bit on how we can go faster, including on Secretary Hegseth’s recent speech on acquisition, joint requirements elimination, and foreign military sales. Could you share your thoughts on how we go faster on AI adoption in particular: on uncrewed assets, on decision support tools, and even on giving crewed assets and crewed units more AI tools?
Shyam Sankar:
Yeah, well, I think we need more parallel efforts. The first point is that I think we’re sometimes asking people who historically came from a crewed background to start developing tactics for uncrewed approaches. I think we need a lot more focus on experimentation: actually going out into the field and using these things in ways that are competitive. Getting out of the lab and out of perfect conditions. Driving an autonomous boat on a lake is not the same thing as doing it in complicated sea states.
And how you integrate it into the force is something you only really get reps at in a battlefield environment. The logistics and supply chain tail of this somehow gets almost totally written off. But how exactly are you emplacing these things? Where are you emplacing them, and what is your concept of employment? Where are you dumping them afterwards?
So there’s just a need for a lot more practice. I mean, I think it’s starting to happen, but it starts with kind of a seriousness of putting the weight of effort there and creating lines of effort that compete with the main lines of effort.
Tim Walton:
Lorenz, if I can ask you to build on, your systems are deployed across the world, including places that are involved in conflicts. How are you—
Lorenz Meier:
Very politically correct.
Tim Walton:
Thank you. How are you trying to create this virtuous intelligence-development-operations cycle, where you get feedback, you update your system, and you quickly get it back to the users who need it?
Lorenz Meier:
You have to use it. It’s as simple as that. So I actually like where the Department of War is going and just saying we need to buy things at scale. We need to shove them down into the units and they need to churn through them.
I have a great internal story, because when we developed our first larger munition, an FPV drone, testing was going great, and then we did the final acceptance test and they failed on the last 30 feet. And the question is, why the last 30 feet? Well, because during earlier development our teams had waved off to save the aircraft, so they hadn’t found all the failure states on the final approach. Which led to me saying, “No, you have to go through 40 per week. You have to go through it, otherwise you don’t learn.”
And for the next-size fixed-wing munition, an airplane that you launch off a catapult, there’s no way to land it. They have to put it into the target every time, and we’re building enough of them to test so that this doesn’t happen again. I think that’s a great example. It’s the same in force design: this has to happen, because otherwise you’re not learning.
Shyam Sankar:
That’s a really important point, which is that we have to treat these things as completely expendable. They’re like a bullet, and therefore you have to link that up with how you think about training. When you go out to train, you’re going to expend all of these. That then gives you the ability to send a demand signal back to industry: I know I’m going to be consuming at minimum this much every year. And it’s just like the iPhone. You don’t have one iPhone every 10 years. Apple’s ability to continuously refresh the iPhone is what allows them to keep making incremental improvements to the platform as they go along.
Right now we’re buying in such small quantities. This is where Secretary Hegseth and the whole Department of War have it right: you have to go hard and fast with an ambitious approach. Even if you think about World War II, we built 154 different airframes. I’m pretty sure six of them mattered. But you can’t know which six ahead of time, unless you’re a Stalinist regime.
Tim Walton:
I’d guess wrong. What are some of the middle-tier acquisition pathways, though, to establish consortia where you can have the right companies in the club and get access to the operational feedback from users, the intel updates, and the like? I imagine companies operating in Ukraine are doing this in a pretty de facto, haphazard way: they’re getting direct feedback from the front and iterating on the fly, or actual units on the front lines are tinkering with their own drones. But moving forward, what are the promising acquisition pathways by which companies large and small can actually get that feedback?
Shyam Sankar:
Well, I think the commercial solutions opening gives you a wide berth here, where you basically say what you want, people show up, and you actually run a competition, exactly how we would buy in wartime, how the Ukrainians buy in their wartime. And then you can nab multiple . . . Anything that works, we’re going to buy it, and we’re going to try to keep as broad a base alive to create not only competition but surface area for innovation, to deliver capability to the warfighter.
Tim Walton:
I want to turn back to you, Matt, if I could. We’ve been talking mostly about the US and the US Department of Defense, Department of War. How can the US better collaborate with allies and partners in fielding AI-enabled capabilities? From your lens, maybe focused on investment: what are the co-development and co-investment opportunities? And then I’d like to hear from my other panelists as well.
Matt Cronin:
Yeah. So to start off, we need to reform foreign military sales. It’s come up on the Hill, and the Sec War mentioned it as a future initiative in his speech on November 7. I mean, it is profoundly ridiculous that in some cases our allies have to turn to our enemies to arm themselves, because we do not, by our own rules, allow our companies to sell to our allies. That’s crazy.
And so there needs to be a fundamental rethink. This is a broader pet peeve of mine. All of us, professionally, for the entirety of our careers, have grown up in a world where you just kind of thought we’ll always have this unipolar moment. And so we created all of these rules, whether it’s “Let’s not mine in America anymore. Let’s not manufacture in America. Let’s have somebody else do it. Let’s maximize bureaucracy, maximize process.” It seemed like a good idea, but it was based on what I would argue was fundamentally a fiction to begin with.
But we at the very least no longer live in that reality. And so we have to take all of these things that we take for granted ‘cause we just happen to be in this moment, recognize that the moment is ahistorical and our assumptions wrong, and rejigger our economy and our regulations for competition to ensure the victory of the West and the free world.
Separately, on a more micro note: I have a chance to talk to leaders in various countries about our companies and ways to work together. One of the things that always comes up is, “How can we have startups in our country? How do we build a startup culture?” And one of the things we say is, “Well, there are all sorts of cultural factors we don’t need to get into, but if you want startups in your country, you need to just get startups there, get offices there.”
So a lot of our companies, for instance, come from the School of Palantir. They are engineers who worked there for years. They saw amazing problems, they learned incredible skills, and they also saw, “Wow, I can do really, really hard things. I can beat the system.” And that teaches people, if they don’t want to stay at Palantir and make huge amounts of money, to go out and . . . engage in startups, and from there solve big problems and go after them in a way that someone who just came out of an engineering school, or out of the government, in many cases just could not.
And so the different allies and partners need to understand: you just have to be a little more welcoming. Our companies coming in, at least at first, create that flywheel of innovation and success for your own startup culture.
Tim Walton:
Lorenz, can you share your Transatlantic perspective, please?
Lorenz Meier:
Yeah. We’re a very international company. We have a big office in Ukraine. We’re building a presence in Taiwan. So we’re actually very successfully doing that. And it has made me a little cynical, for example, about Europe, because one of the things is, if you want big scale, you have to have a big market. And Europe is right now fragmenting when it comes to defense. The funny thing about the Europeans is that they didn’t turn away from America, like, “Oh my God, what is happening?” They actually turned away from each other, which is even worse. So we’re seeing European acquisition splintering into . . . It’s like every state building its own fighter jet. I mean, good luck. But that’s what they’re doing, and I think there’s an opportunity, at least on the software side, to reunify our allies and partners by bringing them together and talking a little bit of reason into them: “Look, maybe you need your individual industrial base, because that’s how you’ve been culturally organized for the last 5,000 years, but we need a level of commonality so that we can fight together.”
And when it comes to autonomous systems, it’s a little bit like fighting in NATO without English. That’s the current state of affairs, which is a disaster. It also means that only the American systems right now would be able to wage a war at scale, because the individual European systems are not able to talk to each other at this moment. In terms of startup culture, there are a few cultural things where I think America certainly has an advantage. The other piece is risk-taking. I think the European governments just need to get better at that.
And we’re even seeing it in Taiwan, for example. We’ve been there for a year now, and they’re very, very carefully buying autonomous systems. A few here, a few there. It doesn’t feel at all like that country is at risk, and sometimes I feel we are more committed to defending Taiwan than Taiwan is itself. It’s a slightly political statement. But what I want to say with that is that this is about speed of relevance, about urgency. I sense that in Ukraine. We’ve had it here for some time now. The Europeans don’t seem to me to really feel threatened by Russia, and in Taiwan, too, the sense of urgency is not quite there. And for a startup, that is the most important thing: does your customer have a sense of urgency?
Shyam Sankar:
Speed really matters. I think in many ways Europe has the worst of both worlds: they neither spend a lot of money nor move quickly. In general, you could spend half as much money, go four times faster, and you’d probably win. You’re optimizing your own OODA loop there, your industrial-base OODA loop, and that enables young, innovative companies to outmaneuver the bigger, slower ones. It favors those who can invest capital in building new things rather than monetizing their entrenched positions.
My advice to Europe would be: maybe you don’t have to spend as much, although it would be nice for your own sake, but you should go four times faster, and you’re going to have to find the urgency to do it. I think if we had gone to Ukraine before the invasion, we would have found they were pretty slow too, pretty bureaucratic, and not particularly urgent. There’s a sad truth of human nature here: clarity only seems to come when the barrel of the gun is in your face.
Tim Walton:
Just as a follow-up before I open the floor to members of the audience: Palantir’s Maven Smart System has been fielded by US military combatant commands, and NATO has also adopted it. It’s providing new opportunities for indigenous, local apps to be integrated onto the substrate it provides. Yet at the same time, I hear competing visions of how countries can retain certain levels of sovereignty, whether over the data or over the brains, the architecture. What do you think are some promising models, so that we can have approaches that allow us to fight together and actually give advanced AI capabilities to our allies and partners?
Shyam Sankar:
Yeah, I think a lot of this is a feelings problem. I hope Lorenz is right that, with software for example, Europe could come together. But one of the challenges with software is that it just seems so easy to do. No one sits there and thinks, “I could build an F-35. How hard is that?” With software, it’s like, “Well, I learned Python, I could build these F-35s.”
Tim Walton:
We’ve got great interns.
Shyam Sankar:
So I think a lot of countries, when they look at the lay of the land, think, “Yeah, I wish I could build a sovereign F-35, but I know I can’t; maybe I could do it in software instead.” And I think that’s actually even less true in software. There’s a reason that something like 90 percent of all tech market cap is American. There are these accreted advantages . . . That’s more than the market cap of hardware distribution, if you think about it. I think we have to focus very precisely, getting beyond feelings and into the substance.
Well, the NATO Maven stack is running on NATO territory. The models that are brought into it come from across the European Union. So what parts of the stack do you feel you need sovereignty over so that you have control? Those are legitimate concerns. But if you look at it in a very pixelated, low-fidelity view, “Why can’t I build this?” is a different question.
And I think there is an impolite truth, not only in Europe but in the US as well, which is that there is an aspect of the defense industrial base that is a jobs program. One of the greatest disservices we do is that it’s incredibly well-marbled wagyu: it’s very hard to tell where the muscle is and where the fat is. It would be much better if we could just be more honest: okay, this part is here for other reasons; this part needs to be really effing lethal.
Tim Walton:
Good. Questions from our participants, please. I’d ask that you state your name and affiliation and then ask a concise question or make a comment. We have a question here in the front. If you can wait for a microphone, please. Thank you.
Audience Member Chip Walter:
Thank you. Chip Walter, Marlinspike. Shyam, I’ve got to tell you, you’re sounding a lot Army, and it’s a little alarming, being a Navy guy. There’s a game coming up, if you don’t know—
Shyam Sankar:
Go, Army.
Audience Member Chip Walter:
Oh, geez. Okay, so the question I have is about the Sec War’s 43-page memo and the cover letter that came with it, which talked about putting acquisition on a war footing. Now that you’re in there, in the Army, looking around, do you feel that it’s actually being internalized and institutionalized to speed things up? Because from the outside, it doesn’t look like much is changing other than the verbiage.
Shyam Sankar:
Well, I would say my service is in the lead here. They already had plans to go from 13 PEOs down to six PAEs. They set the prep fires, and they were ready to maneuver the second the memo hit. So yeah, the Army’s ahead. Look, I think changes are happening; everyone else is working through their plans right now. If you’re going to judge it, and the Sec War even said this: look, this is going to be a war of attrition. There are going to be entrenched interests that start sending counterfires, and we’re going to send overwhelming fires back.
When we look at the PAEs, we need to understand that some of them are not going to enter this modern world. We should judge success by which PAEs actually get there and set an example of what can be done with these new authorities, with this new approach, with putting the pebble in the right shoe, so to speak, and then use that as a model to push outward.
As I said in my 18 theses, it’s really the primacy of people. The person is the program. Is it the nuclear navy, or is it Rickover’s nuclear navy? Is it the F-16, or is it John Boyd’s plane? If you go back in time, over and over again you’d find that specific people in fact mattered. We’ve forgotten the name of Gene Kranz, but there is no Apollo program without Gene Kranz. It’s going to be the same with the PAEs here: we’re going to find singularly amazing people. Of course everyone likes to say there’s a team behind them, but it does matter. The founder does matter, and that’s what’s going to drive this forward.
Audience Member Chip Walter:
Great, thanks.
Tim Walton:
Other questions? One in the front here.
Jimmy Goodrich:
I’m going to ask you a question about speed and urgency, because in our work on this, the one thing that recurs again and again, and you’ve referred to it, is that we’re not fast enough, especially not fast enough here, but not fast enough with our allies either. And I wonder how far along we are with using AI to demonstrate where the world is going, and why, if you sit back, you’re going to feel some real pain, to create some urgency here.
I’ll say I’m struck by this: we just did a conference with some European friends, and a European interlocutor was very proud that . . . “We bought. We’ve got 24 coming.” And I went back to my office and said, “Okay, when are these arriving?” Well, the first one arrives in 2030. I’m not criticizing the F-35, although I do have some concerns about what you spend your budget on and what’s sustainable, but it seems to me there needs to be a clearer demonstration of how these parts interact and what is a sensible and not sensible program.
I just read something today about French fighters and Swiss fighters being acquired by Ukraine. Okay, great. I’m not an expert like you all, but I know you’ve got to train pilots, you’ve got to have infrastructure, maintenance, spare parts. Those aren’t going to be on the battlefield the way . . . The president wants to have the war over by then. I don’t quite understand why we are so often making decisions, or not making decisions, that seem oblivious to the urgency and the threat reality we’re living in. Can you tell me how you see that, and what we could do to change it if you see it the same way?
Shyam Sankar:
Well, the great Hudson scholar Dan Patt said, “You’ve got to compete in time,” and I think part of it is a big mindset shift, which the Sec War’s speech really lays out: forget about cost, schedule, performance. I want to enable you to make tradeoffs that actually get the most capability in the least time at the lowest cost. It’s not a date that you will meet; if you can come in before that, great, and let’s figure out the incentives around that.
I think a big part of this is that when you structurally divide those who fight the wars from those who acquire the things other people will use to fight the wars, this is the inevitable entropy of the system, and you’ve got to bring those things much closer together. You have to give the warfighter more of a vote on what they need and when they need it. I think that will inject competition, and it shames the acquisition side into going faster: some sort of pacing threat short of war that they can respond to today. Today, there is no such signal or stimulus, which is profoundly crazy if you really think about it.
And then part of this is really a mindset question. If you set up some institution that’s supposed to think about the future of 2040, you’ve already kind of lost; the idea that 2040 would be relevant without winning today. Look no further, this is an AI conference: where did the “Attention Is All You Need” paper come from? Google, in 2017. What motivated those researchers to do that research? They wanted a 3 percent incremental improvement in the productivity of Google Translate. You cannot think of a more banal, today problem they were trying to solve, and it begot this revolution, right? Innovation is a consequence of productivity, something I stole from Arthur Herman. I could not agree more with that.
Tim Walton:
Lorenz and Matt, you just want to jump in?
Lorenz Meier:
I think in particular on drones, there are two frontier technologies right now: AI, which is definitely completely new, and then drones, particularly on the battlefield. There is a risk that if we’re not climbing that growth curve first, we’re going to find ourselves strategically on the back foot, to your point, not in 2030, which is too far out, but two or three years from now: a point where we haven’t openly lost the competition, but where it feels so risky to make certain bold moves because we’re worried about being overpowered.
And so to me, in many ways, our company motto is speed is life. Speed is everything. Particularly in these two technologies, we’ve just got to grind quickly, and there is, unfortunately, an active war in Ukraine where you can cut your teeth and really prove that your stuff works. So our model is very, very simple: get out there as fast as we can, at breakneck pace. And it’s actually joyful, because to your point, you can forget about requirements, you can forget about performance. You just go out there, and the thing is, if you optimize for speed, you’re also going to overperform constantly.
Tim Walton:
And I would think there’s a conceptual flywheel here: if you actually try to go fast, you’re going to deliver something, maybe at 50 percent performance, but as you go faster and faster on that flywheel, the eventual performance will probably be much better than if you had tried to slowly gestate—
Lorenz Meier:
And every action leads to information, and with information you win.
Shyam Sankar:
The rate of learning. Exactly.
Tim Walton:
Question there.
Audience Member Tsiporah Fried:
Thank you. Tsiporah Fried, Hudson Institute. Thank you very much for this presentation; very, very useful. A few years ago, there was an attempt to build cooperation between the three vice chiefs of defense of the US, France, and the UK. The idea was to build an AI system for predictive maintenance for the C-130J aircraft. And we thought it would be very simple, because it was not sensitive; it was predictive maintenance, it was building AI. At the time it was General Selva who launched the idea, and the two other vice chiefs were really enthusiastic about it and said, “In six months we want to have a prototype.”
Finally, after two years, we were still struggling because of one question that arose: data sharing. In fact, with an aircraft that we thought was not really sensitive in terms of security, we found that it was used by the special forces, so anything about its maintenance was in fact highly confidential. And so the project itself failed.
I would say that failure is always a success in terms of innovation, so we learned from it, but today I’m not sure we would still be able to do it. I mean, data sharing is still a problem. How do you solve this? How can we share data that can be sensitive, given that if we don’t share this data, we cannot fight together? And how can we protect our very highly sensitive data, for example when it is related to nuclear systems, et cetera, and be sure there won’t be a breach?
Shyam Sankar:
The sad part about this problem, a perennial problem for the last 20 years for me, is that it’s not a technology problem. The technology to make this completely seamless exists, has existed, and in wartime capacities people have used it to great effect. It’s a human problem, a policy problem. And the only way I’ve ever seen us solve policy problems is to have a bullheaded general officer or admiral just keep fighting at it. You have to bend the bureaucracy to your will.
People often cite the example of Secretary Gates having to personally be involved in the MRAP as proof that the system failed: look how crazy it is that it required the secretary of defense to personally sit on a program to get it to work. I would offer that we are taking away the wrong lesson. It is always going to involve the Sec War, the Dep Sec Def, whoever, being personally involved to get it done.
When Secretary Bill Perry was thinking, “I can either reform the Pentagon or I can go around the system and get stealth and GPS to work,” he realized the first was a losing fight: “I’m going to be bogged down in trench warfare, a war of attrition that they’re going to win, or I can get two highly asymmetric capabilities across the finish line.” Now, this means a single individual has only so many bullets to spare, only so many efforts they can bird-dog. But I think we would do better to make sure that all of our really high-powered GOs have one or two initiatives that they are just going to relentlessly move across the finish line. That is the only theory of change I’ve ever seen work.
Lorenz Meier:
Maybe I can quickly add to that. I have nothing to add on the MRAP argument itself, because it will take a body count in war for that to change. The question is, how can we be ready for that? And one way to be ready is to use and mandate commercial technology, because that is interoperable and used across nations. Then you are in a position to make the political choice to open the floodgates and actually connect systems.
And I think we’re in that business, and I’m sure Palantir is as well, of using commercial technology extensively. The more the commercial-military industrial base is a real thing, not just in China but also here, the more readiness we have for wartime, because we can make those integration decisions on a whim.
Matt Cronin:
Yeah. And just to add to that real quick on the data point: there’s a meme in the cyber world that shows a picture of a road with one of those little toll gates in a big open field, and on both sides you just see tire tracks where everyone goes around it. What we have done, in the name of cybersecurity or data sovereignty, is refuse to share with each other. We keep our data close hold, not even knowing what it is or where it is. And then our adversary immediately steals all of it.
So there has to be more recognition of what we’re actually facing, and that we’re facing it together, and we need to develop those sorts of systems. That will 100 percent require people to speak some bold, fundamental truths, and I think most people would rather not do that. But, and people in this room get it, we’re fast approaching a point where that’s no longer a luxury. We’ve been living in this kind of luxury state our entire professional lives. That is over, and we have to actually confront these hard truths and then act accordingly.
Tim Walton:
Another question there in the center, please.
Shawn Venditti:
So on that, I think it’s important to recognize that the US obviously spends more—
Tim Walton:
You can identify yourself. We know each other, but . . .
Shawn Venditti:
What’s that?
Tim Walton:
If you can identify yourself, your affiliation—
Shawn Venditti:
Oh yeah, yeah. Shawn Venditti, Lockheed Martin. So we spend so much money on duplication of capability development in the US, more so than any other nation. In the early 2000s, those of us who worked in DOD saw some waveform development coming out of Russia, and it was obvious there was a merging of companies. We saw the same thing in China, where they were buying stuff from Russia and then started to duplicate and build it all themselves, but capability development within China has been very centralized.
Whereas in the US, we have a lot of duplication. We have limited STEM; we have resources that are really kind of swimming in parallel with each other. So the question is, and Shyam, you’re now in a unique position: how do you incentivize co-opetition within the DoW? Does the PAE get us there? How do we get to a point where we’re using our limited resources and the money we’re spending to swim faster rather than in parallel?
Shyam Sankar:
I think one of the most important jobs of a leader inside of an institution is to grant and revoke monopolies. It’s all a judgment call. There are times where it’s like, “Look, I know this is the team. I know not everyone agrees. Other people want to make competing bids at doing this, but we have to all line up behind this effort.” And there are other times where it’s like, “Actually there’s a lot of fundamental uncertainty. I want to have four competing efforts to go after that. I’m optimizing on what I’m going to learn from that,” and I’ll kind of figure it out as we go along.
Eventually the terminal state is that you pick a winner, or maybe a handful of winners. When Admiral Raborn was building the submarine-launched ballistic missile, it wasn’t just Polaris; there were four competing efforts, and Polaris won. So I wouldn’t go to either end of the extreme barbell. In some places we don’t have enough duplication; in other places we have too much, right? And we’re not managing that artistically. It comes back to that person, that judgment call, that founder figure who’s going to decide when and how to apply this to get the effect and the outcome they really need.
I think there’s a broader conversation that our country is not ready to have yet. I asked Claude, “Hey, get me a chart, in 2025 dollars, of historic Department of War and Department of Defense spending over time.” And I was actually shocked to see that for most of the time since the Korean War, we have spent roughly $600 billion a year in 2025 dollars. The budget has been relatively flat over many, many decades.
Of course, the appearance of inflation changes how we think about it, and as a percentage of GDP it’s never been lower than it is right now. Now we’re up to $850 billion, something like that. But I think there’s a big question of whether we could afford more duplication, because that duplication bought down risk and gave us capability sooner. The historical analysis of Raborn’s effort is that we actually got to Polaris faster because he had competing efforts than we would have if we had lined up behind a single effort. So I think we have to make that time-space trade-off better.
Matt Cronin:
Just to add to that, a couple of things. One, I don’t think China is a good exemplar, not because we’re a democracy and they’re an authoritarian state, but because its model is different. Its model, to the extent it uses large prime equivalents, works because it steals IP from your organization or similar organizations; it skips the first multibillion-dollar step and then just builds an F-35 equivalent. So it’s not, “Oh wow, they’ve consolidated and therefore they’re more successful.” It’s that they are stealing from us, and therefore they’re being successful at stealing from us.
Second, China’s gains are not a matter of consolidation. In fact, where they’re getting most of their gains is absolute, animalistic hyper-competition, which they call involution, in the marketplace, both commercially, whether it’s EVs, and on the military side. It’s like they throw a knife into a room with a hundred companies and say, “Two of you walk out,” and the companies just kill each other over the course of several years. To the two that come out, they say, “You are national champions. We’re going to do everything we can to make sure you succeed.”
So they’re gaining the benefits of competition, whereas America for a long time has not. The way to do that, as we did at NASA to unlock launch, as we did in World War II, as Shyam has written about extensively, to unlock the industrial base, and as Ukraine has done to unlock its capabilities, is to unlock the commercial-first side of things: allow the broader economy, the broader set of innovators, to come in if they want, work hard, compete like crazy, and if they have the best capabilities, they should win. It should not be based on a regulatory moat. It should be based on what actually is best.
Tim Walton:
Philosophically, I’m very aligned. I am, though, thinking about how we get to a future in which we have more money to support more competitions. Does that require slimming down the number of priorities? The Department of War recently said, “Hey, we’re going to go from 13 or so R&D priorities down to the top six or seven,” if I remember correctly.
Matt Cronin:
Yeah, here’s an amazing statistic from the once Department of Defense, now Department of War. They have themselves admitted that if they just switched from cost-plus contracts to fixed-price, milestone-based bake-offs, where whoever does the best job goes on to the next step, they would reduce the cost of their budget by 10 percent. That’s crazy. Just that. Just do that right there, and you get 10 percent more dollars. How many more capabilities would you have? How much faster would you have them?
It is not a lack of innovators. It’s not a lack of people who work hard. As the Sec War said, it is a bureaucratic process that just grinds people down and drains dollars into essentially nothing, non-productive efforts. If you want to have more, you fix that. You don’t say, “Let’s do less.” You say, “Let’s do more by fixing the process.”
Shyam Sankar:
And in this conversation, we can’t be agnostic about how much of the investment capital is coming from the private sector. Only 27 percent of Chinese primes’ revenue comes from the PLA. The rest comes from the commercial sector, which creates capital they can reinvest in lethality, which is exactly what our industrial base used to look like, when Chrysler built minivans and missiles, or Ford built satellites. The unipolar moment has allowed us to build a defense industrial base that sits, by and large, on a kind of Galapagos Island, separate and apart from the hyper-competitive forces the Chinese are leveraging here.
So if you can run these competitions substantially faster and bet on the winners faster, you would attract far more private capital, and the government gets out of the business of financing the R&D. That’s not going to be true for everything: you’re always going to have a handful of exquisite systems where it makes sense. But for 95 percent of things, I think it doesn’t, and you could go much faster and much more cheaply by leveraging private capital.
Matt Cronin:
Just to tag in one more time on that.
Shyam Sankar:
Please.
Matt Cronin:
It’s not just dollar for dollar what makes more sense. As Shyam noted, you’re getting 5X because the DoD is putting in fewer taxpayer dollars on a commercial-first contract system, and then 5X of that is coming from the private sector, so you’re getting a huge amount more from a much smaller amount of capital from the taxpayer. On top of that, because investors have to get a huge return, you have to find outsized innovations that make a huge difference in the market, so you’re getting capabilities that asymmetrically defeat the adversary right now.
So in that case, it’s not just a 5X return; it’s a 100X return. You’re not building legacy systems that they purposely built their military to defeat. You’re getting the things that will actually defeat them, things people have spent their entire lives building to do exactly that. And it’s tested in the field, because again, we’re going commercial first. We’re trusting it, we’re trying it, and we know it works. If you want capabilities, if you want more, that is how you do it.
Tim Walton:
I imagine those effects are probably outsized in the earlier stages of fielding a capability. In the near term, we want this very heterogeneous force, lots of capabilities that we’re rapidly introducing to overwhelm our adversaries with the different challenges they face. Over time, as you get into higher capital outlays, I imagine the government will probably need to fund a larger fraction of that buying, if you’re buying large vessels or—
Matt Cronin:
Yeah, yeah, yeah.
Tim Walton:
. . . more sophisticated aircraft. But in terms of the near-term capabilities, we need to overwhelm our adversaries with—
Matt Cronin:
Yes, we’re not making aircraft carriers in garages. No one is proposing that. But to Shyam’s point, those things come faster because they iterate faster, because there are more data points, because that’s how commercial markets work. And to address the question of whether we want more: that’s what you’ve got to do. If you want more now, you have to go in that direction.
Tim Walton:
Well, please join me in thanking our very quiet panel. They don’t have very many views. Joel Scanlon is going to close us out. Thank you all.
Joel Scanlon:
Just a note of thanks as the credits roll. Thanks to this panel, and thanks for the very thoughtful discussions throughout the day: to my Hudson colleagues, who’ve done a great job moderating; to our event staff, who put this all together; and to all of you for being here and engaging in the discussion. We hope to do more like this, so we hope to see you again.