Artificial Intelligence Is Arriving in the World We Already Live In
Why Artificial Intelligence Must Be Addressed at the International Level
Discussions of advanced artificial intelligence often assume a comforting distance: that systems with genuinely transformative capabilities belong to a future society—one that has already resolved today’s political, legal, and institutional challenges. That assumption is increasingly untenable.
The pace of progress in artificial intelligence has already exceeded what many experts expected only a few years ago. While it is possible that this period of rapid improvement will slow, the more responsible assumption is that capability growth will continue, and may even accelerate. We should therefore seriously consider the possibility that highly capable AI systems will emerge not in a hypothetical future, but in the world as it exists now: fragmented, unequal, and governed by institutions that were not designed with such systems in mind.
This matters because the benefits and risks of artificial intelligence scale together. Systems capable of solving harder problems can materially improve medicine, make infrastructure safer, and expand access to education for people who are currently under-served. At the same time, those same systems can cause harm—either through intentional misuse or through unintended consequences. Preventing such harm is not a problem we can defer until after these systems exist. It is a problem that must be addressed in advance.
Crucially, many of the most serious risks posed by advanced AI cannot be managed by individual actors acting alone. They are international in structure, cross-border in impact, and coordination-dependent by nature.
Preventing Misuse Requires International Coordination
Highly advanced AI systems require enormous upfront investment to develop. As a result, only a small number of states and well-resourced non-state actors are capable of creating them. This concentration is often overlooked, but it is important: while the use of AI systems can be widely distributed, the creation of frontier systems is not.
That distinction creates a narrow surface on which safety measures can be applied. Regulating individual users of AI is a highly dispersed and often intractable task. But the fact that only a limited number of entities can build the most capable systems means that meaningful safeguards are, in principle, feasible—if approached at the right level.
The difficulty is that misuse can be deliberately obscured through decentralization. A malicious actor need not commit an overtly harmful act in any single jurisdiction. Instead, harmful activity can be partitioned across systems, locations, or intermediaries, such that each individual use appears benign in isolation. The malicious intent becomes visible only when the full pattern is examined.
For example, consider a hypothetical actor attempting to develop a novel biological weapon. Rather than using a single AI system to pursue this goal directly, they might distribute the process across multiple systems located in different jurisdictions—using each system for tasks that appear innocuous on their own. No single state, acting alone, would necessarily detect the misuse. The risk emerges only at the global level.
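The partitioning dynamic described above can be sketched as a toy aggregation exercise. Everything here is illustrative: the actor names, the task categories, the "pipeline" itself, and the flagging thresholds are assumptions invented for the sketch, not a real detection method. The point is only structural: each jurisdiction's log looks benign in isolation, while the pooled logs reveal that one actor has covered every step.

```python
# Toy illustration (hypothetical actors, task categories, and thresholds):
# each state sees only a fragment of a multi-step workflow, so no local
# log looks alarming, but pooling the fragments exposes the full pattern.

SENSITIVE_PIPELINE = {"design", "synthesis", "assembly", "delivery"}

# Per-jurisdiction logs: (actor, task_category) pairs as each state sees them.
logs_by_state = {
    "A": [("actor_x", "design"), ("actor_y", "design")],
    "B": [("actor_x", "synthesis")],
    "C": [("actor_x", "assembly"), ("actor_y", "delivery")],
    "D": [("actor_x", "delivery")],
}

def _tasks_per_actor(pairs):
    """Group the task categories observed for each actor."""
    seen = {}
    for actor, task in pairs:
        seen.setdefault(actor, set()).add(task)
    return seen

def locally_suspicious(log):
    """A single state flags an actor only if it sees most of the pipeline."""
    return {actor for actor, tasks in _tasks_per_actor(log).items()
            if len(tasks & SENSITIVE_PIPELINE) >= 3}

def globally_suspicious(logs):
    """Pooled view: flag actors who cover the entire pipeline across states."""
    pooled = [pair for log in logs.values() for pair in log]
    return {actor for actor, tasks in _tasks_per_actor(pooled).items()
            if SENSITIVE_PIPELINE <= tasks}

# No individual state flags anyone...
assert all(not locally_suspicious(log) for log in logs_by_state.values())
# ...but the pooled logs show actor_x completed every step of the pipeline.
assert globally_suspicious(logs_by_state) == {"actor_x"}
```

In this sketch, each state's threshold is never met because no state observes more than one step per actor; only the union of the logs satisfies the global test. Real detection would of course be far harder, but the structural asymmetry, which is that the signal exists only in the aggregate, is the essay's point.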
This is not an argument that international coordination is merely preferable. It is an argument that, for the most difficult and dangerous forms of misuse, international coordination is necessary. Individual states and organizations can prevent some misuse on their own. But the forms of misuse that are hardest to detect and most consequential to prevent are precisely those that evade unilateral oversight.
Only the international community, therefore, is positioned to prevent the most challenging cases of AI misuse: not because it holds unique authority, but because it is the only level at which the relevant patterns are visible.
Importantly, this does not require prescribing specific legal mechanisms or enforcement strategies. The point is not to dictate how prevention must occur, but to recognize where responsibility ultimately lies. Once that responsibility is acknowledged, different states can pursue different implementation paths consistent with their legal and political systems.
At the same time, the scientific community has made substantial progress in understanding how to constrain AI systems so that they behave in ways humans consider acceptable. The question of how to influence system behaviour is increasingly a technical one. What remains unresolved—and cannot be answered by engineers alone—is which behaviours should be permitted and which should not. That is a normative question, and it belongs to society and its institutions, not to laboratories.
Accidents Are a Different Kind of Risk
Not all harm from artificial intelligence arises from malicious intent. As AI systems become more capable, they can engage in increasingly complex behaviour. In some cases, this complexity leads to decisions or actions that were not intended by their human users, and that nonetheless cause harm.
Unlike intentional misuse, such accidents cannot always be predicted in advance. They arise from interactions between system capabilities, deployment contexts, and real-world environments that are difficult to model exhaustively. As AI systems are deployed at global scale, the consequences of such failures are unlikely to remain confined within national borders.
A single state may be able to regulate how AI systems are used within its own jurisdiction. But it is far more difficult for any state to ensure that systems deployed elsewhere do not cause harm within its borders. When systems operate across digital and physical infrastructure that spans countries, impact—not jurisdiction—becomes the relevant unit of analysis.
Containing the risk of unintended harm therefore requires governance frameworks that assume cross-border spillovers as the default, not as an exception. At that scale, bespoke bilateral treaties or narrowly scoped agreements are unlikely to be sufficient. What is required instead is coordinated international effort that treats accidental harm as a shared problem rather than a series of isolated failures.
This, again, is not a call for centralized control or uniform regulation. It is a recognition that when impact crosses borders by design, governance must do so as well.
The Role of the International Community
Artificial intelligence presents challenges that cannot be fully addressed by individual states or private actors acting independently. As systems grow in capability and reach, so too does their potential to help and to harm. While many concerns can and should be handled locally, those involving AI systems deployed and used across jurisdictions quickly become difficult—often intractable—for any single actor to manage alone.
At the same time, the scientific community continues to advance rapidly in its understanding of intelligence and in its ability to realize it in machine form. Institutional progress must occur in parallel. The international community has a responsibility to examine how these systems can be constrained, how risks can be anticipated, and how harms that were never intended can nonetheless be prevented.
The question is not whether advanced AI will arrive. It is whether our institutions will be prepared when it does.