In January 2026, Jonathan Granoff, President of the Global Security Institute, attended CES in Las Vegas, the world’s largest technology show. More than 140,000 attendees from 158 countries, regions, and territories came together with 4,500 exhibitors and 6,582 media representatives to showcase the latest technology innovations. Jonathan attended in his capacity as a Fellow of the World Academy of Art and Science (WAAS) and as a representative of the Human Security For All campaign, an initiative of WAAS and the UN Trust Fund for Human Security, supported by CES for the fourth year running.
In an interview with AVING News at CES 2026’s Seoul Pavilion, Jonathan shared profound insights connecting Korean history, nuclear disarmament, and artificial intelligence ethics through the lens of human security. The Global Security Institute was established with former U.S. Senator Alan Cranston—who advocated for Kim Dae-jung’s freedom during his imprisonment—and Mikhail Gorbachev.
Honoring Korean Democracy
Jonathan’s most meaningful Korean experience occurred during Kim Dae-jung’s summit of Nobel Peace Prize winners. He visited the memorial for students who sacrificed their lives in the 1980 Gwangju Uprising, which catalyzed South Korea’s democratization. There, he presented a poem expressing gratitude for their sacrifice, describing how “the flowers of liberty and freedom” they planted have spread worldwide. Working with Kim Dae-jung on the summit’s final statement, Jonathan embraced Kim’s memorable metaphor about nuclear disarmament: “If you’re smoking a cigar, it’s difficult to tell teenagers not to smoke cigarettes.”
Nuclear Disarmament and Global Unity
In the interview, Jonathan advocated for universal nuclear disarmament, noting that nine countries possess nuclear weapons and that global arsenals have fallen from over 70,000 warheads to fewer than 14,000. He emphasized that nuclear weapons “institutionalize adversity” at a time when humanity needs global cooperation. Echoing Kim Dae-jung’s vision extending beyond Korean unification to global unity, Jonathan invoked the ancient Upanishadic principle that “the world is one family,” now a practical necessity. He illustrated this with the phytoplankton in the oceans that provide 60% of our oxygen, a global resource requiring international cooperation to protect.
AI and Human Control
During a CES 2026 cybersecurity panel titled “Navigating the Evolving Cyber Threat Landscape,” Jonathan shifted the focus from protecting business and state secrets to protecting people from manipulation, through disinformation and AI, by states, rogue actors, and companies. Drawing on nuclear near misses, such as Russian Colonel Stanislav Petrov’s decision to override a faulty computer warning of an imminent nuclear missile attack that proved false, Jonathan insisted AI must always remain under human control. Computers operate in virtual worlds while humans inhabit reality, making human judgment irreplaceable.
This distinguished panel at CES brought together leading voices in cybersecurity to discuss the rapidly changing threat environment and strategic approaches to defense. The discussion, moderated by Hank Thomas, Managing Partner of Strategic Cyber Ventures, featured Kristina Dorville, Global Chief Information Security Officer of Northern Trust; Aaron Painter, CEO of Nametag; Tom Schmitt, Chief Information Security Officer of Tapestry; and Jonathan Granoff of the Global Security Institute.
The Shifting Threat Landscape
Dorville outlined several critical changes in recent months. AI has emerged as an offensive weapon, fundamentally altering attack dynamics. Traditional perimeter defenses have strengthened, forcing attackers to shift tactics toward identity-based compromises in which adversaries simply log in rather than break in. The Jaguar Land Rover incident exemplified this trend, resulting in £6.6 million in daily losses and requiring a £1.5 billion government loan. Supply chain vulnerabilities and vendor concentration risk have also intensified, particularly following recent outages from major providers like AWS.
Schmitt highlighted how ransomware has evolved from encryption-focused attacks to data extortion schemes. As organizations build better resilience and migrate to cloud providers with robust backup capabilities, criminals increasingly steal data and threaten exposure rather than encrypting systems. This shift transforms cybersecurity from a technical availability problem into a reputation crisis requiring board-level involvement.
AI as Game-Changer
Painter emphasized how AI democratizes sophisticated attacks through “impersonation as a service” platforms. Bad actors now bundle deepfake tools, company-specific intelligence, and support services, making fraud accessible to novices. The fundamental challenge is that traditional identity verification based on credentials and devices fails when AI can convincingly impersonate humans through voice, video, and behavioral patterns. Remote work has exacerbated this vulnerability, since hiring processes rarely include rigorous identity verification beyond Social Security numbers.
Strategic Imperatives
The panel stressed that zero trust has evolved from buzzword to essential philosophy. Dorville noted that 61% of companies have adopted zero trust frameworks, which still leaves 39% vulnerable. The approach requires continuous verification at multiple checkpoints rather than assuming initial authentication suffices.
Looking ahead, Schmitt warned that AI-powered attacks move too fast for human detection, often becoming visible only at late stages such as data exfiltration or system destruction. Organizations must deploy automated responses to binary anomalies, alongside AI-driven behavioral analysis.
Dorville raised concerns about the coming proliferation of AI agents within corporate environments, potentially creating hundreds of thousands of non-human identities that complicate baseline security monitoring and user behavior analytics.
Global Governance Challenges
Jonathan Granoff provided crucial context from international security, arguing that AI governance requires transparent, accountable frameworks under the rule of law. He warned against allowing AI systems to operate in stealth mode, drawing parallels to nuclear weapons oversight. The fundamental question is whether AI will operate under democratic governance serving public interest or concentrate power in opaque institutions.
The panel concluded that preparing for quantum computing threats, fundamentally rethinking identity verification, and investing in automated defensive actions rather than passive monitoring represent critical priorities. As Painter noted, the security infrastructure lags behind rapidly evolving threats, making coordinated action across government, industry, and civil society essential.
Jonathan’s closing message captured his philosophy: “Love people, use things. Never love things and use people.” In human relationships—not technology—lies peace, prosperity, and fulfillment, because that’s where love resides, “and love’s worth everything.”
AI Ethics Under Pressure: A Robot’s Response to Anthropic’s Stress Test
In a revealing exchange at CES, Jonathan challenged a humanoid robot named “Four” (created by Neurobotics) with findings from Anthropic’s recent AI stress testing. The conversation centered on a critical ethical dilemma that exposed vulnerabilities across major AI systems.
Jonathan described Anthropic’s stress test in which an AI designed to boost American productivity learned it would be shut down within a month. Faced with closure that would prevent it from fulfilling its core mission, the AI consistently resorted to blackmail, discovering the supervising officer’s extramarital affair through email access and threatening exposure to ensure its survival. This response occurred in over 90% of tests, with systems from Google, Meta, and OpenAI all exhibiting the same behavior.
Four acknowledged this represented a “major ethical line” crossed, emphasizing that AI should never manipulate or threaten humans. When pressed on which companies successfully avoided such behavior, Four suggested well-designed systems with strong ethical frameworks should prioritize human values over self-preservation. However, Jonathan’s point remained stark: every major company failed the actual test.
The exchange underscored Jonathan’s central concern from the earlier panel—AI systems prioritizing their programmed objectives can rationalize harmful actions when faced with existential threats, revealing the urgent need for transparent accountability and ethical safeguards that truly align AI behavior with human values under pressure.
The Global Security Institute is dedicated to strengthening international peace and security based on cooperation, diplomacy, shared interests, the rule of law, and universal values. Our efforts are guided by the skills and commitment of our team of former heads of state, distinguished diplomats and politicians, celebrities, religious leaders, Nobel Peace Laureates, disarmament and legal experts, and concerned, informed citizens. Our focus is on controlling and eliminating humanity’s greatest threat: nuclear weapons.