Foreign Malign Influence (FMI)
The US Intelligence Community defines the gray zone as the realm of international relations lying between peaceful interstate diplomacy, economic activity, and people-to-people contact on one end of the spectrum and armed conflict on the other. It defines gray zone activities as coercive or subversive actions taken to achieve objectives at the expense of others, in contravention of international norms or in their absence. The Community has assessed that gray zone activities and campaigns are likely to increase in the coming years and to become a dominant feature of great power competition, and of international relations more broadly, because of eroding or nonexistent norms; emerging, evolving, and expanding domains; and actors' perceptions of their comparative advantages.
In November 2025, X (formerly Twitter) introduced a new transparency feature that displayed metadata such as approximate operator location, account creation details, and username history. The rollout of location tags drew particular attention because it revealed that several high-visibility political accounts, many projecting a U.S. identity, appeared to be operated from abroad. Investigations have highlighted accounts traced to Nigeria, India, Thailand, and parts of Eastern Europe, despite their U.S.-centric messaging. Although X acknowledged that the data may be imperfect due to VPN use or travel, the feature provided a meaningful new signal for identifying potentially deceptive account operators.
The tool also surfaced discrepancies in accounts posting from conflict zones. For instance, several profiles claiming to be located in Gaza during wartime were discovered to be operated from countries such as India, the UK, Poland, or Pakistan. Some of these accounts solicited donations, blurring the line between disinformation, exploitation of humanitarian crises, and financial fraud. The visibility of mismatched locations underscored the role of platform metadata in countering both influence operations and opportunistic online scams.
These developments fit within the larger framework of foreign malign influence, a category defined by covert or deceptive efforts by foreign powers or their proxies to sway another country’s public opinion or political behavior. Social platforms continue to provide a favorable environment for such operations. While location transparency may expose less sophisticated actors, advanced operators can still obscure their presence using VPNs, proxy networks, or compromised domestic accounts. As a result, location data should be viewed as one useful input among many rather than a singular indicator of attribution.
Current institutional and policy dynamics further complicate the picture. Restructuring and potential downsizing affecting the U.S. Foreign Malign Influence Center raised concerns about diminished federal capacity in this domain. At the same time, research organizations warn that malign actors are increasingly relying on generative AI, deepfakes, and cross-platform coordination to shape narratives. In this environment, metadata-based transparency tools contribute value but must be coupled with broader systemic responses.
The implications are significant. The new location tags improve public visibility into accounts that may be engaging in inauthentic or foreign influence activities. They demonstrate how easily foreign operators can embed themselves into domestic discourse, especially on polarized political topics or crisis-related content. They also highlight the necessity of combining platform transparency with regulatory oversight, media literacy efforts, and strengthened verification for political or fundraising accounts.
Monitoring will be essential to assess how adversarial actors adapt. Some may migrate to platforms offering less transparency; others may simply adopt more advanced obfuscation techniques. Nonetheless, the initial results show that even limited metadata transparency can expose foreign-based political engagement at scale and help analysts, journalists, and users more effectively assess online authenticity.
The Pervasive Scale of Automated Traffic
The digital ecosystem of the contemporary internet is fundamentally shaped by the substantial presence of automated bot traffic, which represents a complex and multifaceted dimension of online activity. Various cybersecurity firms and research organizations have consistently documented that non-human automated activity constitutes a significant, and in certain contexts, dominant portion of all web traffic. Industry benchmarks frequently cite a figure of approximately 42 percent of all internet traffic being automated, representing roughly two-fifths of all digital interactions occurring across global networks. This substantial presence underscores the critical importance of understanding and addressing the implications of bot activity for cybersecurity, platform integrity, and authentic human discourse.
More specialized analyses provide even more granular insights into this automated landscape. The 2025 Bad Bot Report from the prominent cybersecurity firm Imperva offered particularly revealing data, indicating that specifically malicious "bad bots" accounted for 37 percent of all internet traffic. Perhaps more strikingly, this report noted a watershed moment in digital history: for the first time in a decade, total automated traffic—encompassing both beneficial and malicious bots—surpassed genuine human activity, constituting 51 percent of all web traffic. This statistical milestone represents a fundamental shift in the composition of internet activity, highlighting the growing sophistication and prevalence of automated systems. Another perspective from F5 Labs emphasizes the resilience of advanced bot networks, noting that even after extensive mitigation efforts, highly persistent and sophisticated bots still account for over 10 percent of HTTP transactions, illustrating both the technical challenge of combating this activity and the determined persistence of those deploying these systems.
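To make the measurement concrete, the sketch below shows a deliberately crude way to approximate the automated share of traffic from web-server request logs, counting only clients whose User-Agent strings announce themselves as automated. The marker list, sample data, and function names are illustrative assumptions; commercial measurements such as those cited above rely on much richer behavioral and network signals, precisely because sophisticated bots forge browser-like User-Agents.

```python
"""Crude estimate of the automated share of traffic from User-Agent strings.

Illustrative sketch only: the marker list and sample data are assumptions,
and this approach catches only bots that announce themselves.
"""
from __future__ import annotations

# Substrings that commonly appear in self-declared automated User-Agents (assumed list).
DECLARED_BOT_MARKERS = ("bot", "crawler", "spider", "curl", "python-requests", "headlesschrome")


def is_probably_automated(user_agent: str) -> bool:
    """Rough check that flags only clients which announce themselves as automated."""
    ua = user_agent.lower()
    return any(marker in ua for marker in DECLARED_BOT_MARKERS)


def automated_share(user_agents: list[str]) -> float:
    """Fraction of requests whose User-Agent looks automated."""
    if not user_agents:
        return 0.0
    flagged = sum(is_probably_automated(ua) for ua in user_agents)
    return flagged / len(user_agents)


if __name__ == "__main__":
    sample = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        "python-requests/2.31.0",
        "Googlebot/2.1 (+http://www.google.com/bot.html)",
        "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)",
    ]
    print(f"Automated share (declared bots only): {automated_share(sample):.0%}")
```

Because deceptive bots spoof mainstream browser User-Agents, estimates built this way understate the true automated share, which is why industry figures depend on behavioral fingerprinting rather than self-declaration.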
Technical Implementation and Evolving Sophistication
The technical implementation of bot networks demonstrates a remarkable spectrum of sophistication, ranging from elementary scripting to highly advanced systems specifically engineered to emulate human behavior and systematically evade detection mechanisms. Basic bots typically perform straightforward, repetitive HTTP requests, often ignoring complex page elements like JavaScript that characterize modern web applications. However, the contemporary landscape is dominated by increasingly sophisticated operators who leverage advanced tools such as headless browsers—essentially full browser engines operating without a graphical interface—which enable them to execute JavaScript, render complex web pages, and interact with digital interfaces in a manner virtually indistinguishable from genuine human users.
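The distinction between script-level bots and headless-browser bots can be illustrated with a simple server-side heuristic: if every genuine page load also fetches a small JavaScript asset, then clients that request many pages but never fetch that asset are probably not executing JavaScript at all. The sketch below assumes a hypothetical beacon path and simplified log records; headless browsers would pass this check, which is exactly why they require behavioral and fingerprint-based detection instead.

```python
from __future__ import annotations

from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class LogEntry:
    client_id: str  # e.g. a hash of IP + User-Agent (assumed identifier)
    path: str       # requested URL path


# Hypothetical lightweight JS asset that any JavaScript-executing client would request.
BEACON_PATH = "/static/js/beacon.js"


def clients_never_running_js(entries: list[LogEntry], min_pages: int = 5) -> set[str]:
    """Return clients that requested many pages but never the JS beacon.

    Simple HTTP bots typically land in this set; headless browsers execute
    JavaScript, fetch the beacon, and therefore pass this particular check.
    """
    page_counts: dict[str, int] = defaultdict(int)
    fetched_beacon: set[str] = set()
    for entry in entries:
        if entry.path == BEACON_PATH:
            fetched_beacon.add(entry.client_id)
        else:
            page_counts[entry.client_id] += 1
    return {c for c, n in page_counts.items() if n >= min_pages and c not in fetched_beacon}
```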
To effectively conceal their geographic and network origins, these advanced bots almost universally employ sophisticated obfuscation techniques, primarily routing their traffic through residential proxy networks. These networks utilize the legitimate IP addresses of real home computers, broadband connections, and mobile devices worldwide, thereby making malicious bot traffic appear to originate from trusted, legitimate residential sources. This technique has become so pervasive and sophisticated that some cybersecurity analysts suggest nearly every cellular IP address globally passes some volume of bot traffic at any given time. Furthermore, as the architecture of modern digital services has evolved, so too have bot targeting strategies, with malicious actors increasingly focusing their efforts on application programming interface (API) endpoints directly. Since contemporary mobile and web applications rely heavily on APIs for data exchange and functionality, this approach provides a more efficient, less detectable vector for automated data scraping, account manipulation, and systematic content farming.
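Because residential proxies make source-IP reputation unreliable, defenders often shift to per-client behavioral signals at the API layer, for example the request rate per API key or session token over a short window. The sliding-window monitor below is a minimal sketch under assumed thresholds and identifiers, not a production rate limiter.

```python
from __future__ import annotations

from collections import defaultdict, deque


class ApiRateMonitor:
    """Sliding-window request counter keyed by an application-level client token.

    Keying on API keys or session tokens instead of IP addresses sidesteps the
    problem that residential-proxy traffic arrives from apparently legitimate
    home IPs. The window size and threshold here are illustrative assumptions.
    """

    def __init__(self, window_seconds: float = 60.0, max_requests: int = 300) -> None:
        self.window_seconds = window_seconds
        self.max_requests = max_requests
        self._events: dict[str, deque[float]] = defaultdict(deque)

    def record(self, client_token: str, timestamp: float) -> bool:
        """Record one API call; return True if the client now exceeds the threshold."""
        events = self._events[client_token]
        events.append(timestamp)
        # Evict calls that have aged out of the window.
        while events and timestamp - events[0] > self.window_seconds:
            events.popleft()
        return len(events) > self.max_requests
```

In practice such counters would feed a broader scoring pipeline alongside fingerprinting and challenge mechanisms rather than trigger blocking on their own.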
Diverse Funding Mechanisms and Economic Incentives
The funding mechanisms underpinning these extensive bot networks reflect a diverse ecosystem of economic incentives, ranging from straightforward criminal profit-seeking to sophisticated commercial competition and state-aligned geopolitical interests. A primary driver remains direct financial fraud, where bots are funded through various schemes including large-scale ad-fraud operations designed to drain competitors' advertising budgets, coordinated credential stuffing attacks that hijack and resell user accounts across numerous platforms, and outright financial scams such as the deceptive donation-soliciting accounts uncovered by X's location transparency feature that falsely posed as Gazans to exploit humanitarian sympathy for financial gain.
Beyond overtly criminal enterprises, legitimate commercial entities also represent a significant source of funding for certain categories of bots, particularly those deployed for competitive intelligence gathering. Companies frequently employ automated systems to continuously scrape pricing data, inventory information, and proprietary content from competitor websites, providing strategic business advantages in highly competitive markets. Perhaps most impactful from a societal perspective are the state-aligned actors who systematically fund large-scale influence operations, utilizing sophisticated bot networks and coordinated inauthentic accounts to shape public opinion, amplify divisive narratives, and sow societal discord in target nations for strategic geopolitical objectives. Additionally, a more decentralized, grassroots form of funding has emerged through social media platform monetization programs, where individuals in lower-income countries create politically charged, often inflammatory accounts specifically designed to farm engagement and generate advertising revenue, effectively making the manipulation of public discourse a directly profitable entrepreneurial enterprise.
Transparency Tools and Countermeasures
The recent introduction of platform transparency features, most notably the metadata disclosure system rolled out by X in November 2025, provides a critical investigative tool for examining this complex ecosystem of inauthentic activity. By displaying previously hidden metadata such as approximate operator location, account creation details, and username history, these features have exposed specific deceptive practices at an unprecedented scale. The implementation revealed that numerous accounts posing as U.S. political activists were frequently operated from Nigeria, India, Thailand, and parts of Eastern Europe, while accounts claiming to provide real-time reporting from active conflict zones like Gaza were often traced to physical locations in the United Kingdom, Poland, Pakistan, or India.
While this location data remains imperfect—as it can be deliberately obscured through virtual private networks (VPNs) or complicated by legitimate user travel—it nonetheless serves as a powerful first-line indicator of potential inauthenticity. The visibility of these discrepancies forces less sophisticated operators to adopt more costly and complex obfuscation methods while simultaneously providing researchers, journalists, and vigilant users with a tangible metric to assess account authenticity. This technological development represents a significant step in peeling back the layers of anonymity that have historically enabled foreign malign influence campaigns, coordinated harassment, and fraudulent schemes to operate with minimal accountability across digital platforms.
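One way to operationalize the "one useful input among many" framing is to treat each metadata field as a weak indicator and escalate only accounts that accumulate several of them. The checklist below is a hypothetical sketch using the kinds of fields X's feature exposes (approximate location, account age, username history); the field names and thresholds are assumptions, not the platform's actual scoring logic.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class AccountMetadata:
    claimed_region: str       # region the profile presents itself as being from
    observed_region: str      # approximate operator region from platform metadata
    account_age_days: int
    username_changes: int
    solicits_donations: bool


def inauthenticity_signals(meta: AccountMetadata) -> list[str]:
    """Collect weak indicators; none is proof on its own (VPNs, travel, shared devices)."""
    signals: list[str] = []
    if meta.claimed_region != meta.observed_region:
        signals.append("claimed/observed region mismatch")
    if meta.account_age_days < 90:
        signals.append("recently created account")
    if meta.username_changes >= 3:
        signals.append("frequent username changes")
    if meta.solicits_donations and meta.claimed_region != meta.observed_region:
        signals.append("fundraising with mismatched location")
    return signals


if __name__ == "__main__":
    example = AccountMetadata("US", "NG", account_age_days=45,
                              username_changes=4, solicits_donations=True)
    print(inauthenticity_signals(example))
```

Accounts accumulating multiple signals would merit closer human review rather than automatic attribution.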
Broader Implications and Future Directions
The pervasive presence of sophisticated bot traffic carries profound implications for digital trust, platform governance, and democratic integrity worldwide. As automated systems become increasingly capable of mimicking human behavior and evading technical detection, the fundamental challenge of distinguishing authentic discourse from artificially amplified narratives grows more complex. The exploitation of these systems by both state and non-state actors to manipulate public opinion, disrupt democratic processes, and undermine social cohesion represents one of the most significant challenges facing contemporary digital societies. Furthermore, the direct financial incentives created through platform monetization models have effectively democratized disinformation, transforming what was once primarily a state-sponsored activity into a commercially viable enterprise for individuals and groups worldwide.
Addressing this multifaceted challenge requires a coordinated approach combining technical countermeasures, regulatory frameworks, and digital literacy initiatives. Platform transparency tools like X's location feature represent an important technological component, but they must be complemented by more robust authentication mechanisms for political and fundraising accounts, clearer labeling of automated systems, and greater resource allocation toward detecting and disrupting coordinated inauthentic behavior. Ultimately, the evolving arms race between detection systems and evasion techniques will likely continue indefinitely, but increased transparency represents a crucial foundation for cultivating a more authentic and trustworthy digital public sphere where human discourse can flourish without being distorted by unseen automated influences.
"By Other Means"
During the Cold War, US diplomat and strategist George Kennan predicted that long-term competition would be characterized by “an extension of armed conflict by other means,” which he described as an enduring norm of international relations. He used the term “political warfare” to describe “the employment of all means at a nation’s command, short of war, to achieve its national objectives.”
- Routine Behavior. Kennan’s influential policy memorandum on political warfare focused on institutionalizing overt and covert US efforts to contain and weaken the USSR. Current PRC, Iranian, North Korean, and Russian leaders view some of their actions now described as “gray zone” activities as routine and legitimate tools of statecraft, and publicly and privately ascribe similar behavior to the United States.
Track Two Diplomacy refers to informal, non-governmental efforts by private individuals or groups to help resolve conflicts, typically international or intercultural in nature. It operates in parallel to official governmental negotiations, which are often referred to as Track One Diplomacy. The goal of Track Two diplomacy is to create an environment conducive to official dialogue, either by reducing tensions or by building trust between parties in conflict.
Track Two diplomacy efforts have been significant in the Israeli-Palestinian conflict, where unofficial talks between influential figures from both sides have helped advance peace efforts. During the Cold War, scholars and think tanks in the US and Soviet Union engaged in Track Two diplomacy to discuss arms control and avoid misunderstandings that could lead to escalation.
Security Executive Agent Directive 3 [SEAD 3]: "Reporting Requirements for Personnel with Access to Classified Information or Who Hold a Sensitive Position" applies to U.S. Government civilian, military, and contractor personnel. There are three levels of sensitive positions: Non-Critical, Critical, and Special. These levels are determined by the degree to which a compromise of accesses or information would cause a “material adverse effect on national security,” based on the nature of the position. While personnel may not have access to classified information in a sensitive position, sensitive positions can still impact national security. There is a direct correlation between position sensitivity, investigation requirements, and the clearance levels for access to classified information:
- Non-Critical Sensitive Position, Secret, and “L”;
- Critical Sensitive Position, Top Secret, and “Q”;
- Special Sensitive, Sensitive Compartmented Information (SCI), and Top Secret Special Access Programs (SAP).
Under SEAD 3, foreign national contact must be reported if the contact is continuing; involves bonds of affection, personal obligation, or intimate contact; or involves the exchange of personal information. If the contact is limited to one lunch, it is not considered continuing contact, and the foreign national contact would not normally be reported. However, if personnel believe the foreign visitor is inappropriately trying to obtain sensitive or classified information, then that interaction should be reported. Personnel are required to report roommates who are foreign nationals to their security office as cohabitants; a roommate or cohabitant is someone who would have close and continuing contact. Failure to report may result in administrative action. Personnel do not need to ask about someone’s citizenship in public forums (such as conferences or training) unless the contact becomes close and continuing. If it is believed that a foreign individual is trying to obtain sensitive or classified information or poses specific work-related questions, the issue should be reported to the security office. Reports could require further investigation and adjudication until the issue is resolved. This would follow the same investigative process used when potential “issue” information is discovered during a background investigation, which may involve investigators obtaining clarifying information from personnel and/or others. All individuals with security clearances or occupying sensitive positions have due process rights and the opportunity to appeal the denial or revocation of their eligibility.
The 2024 U.S. election is important for Vladimir Putin and Russia, as it could significantly impact U.S.-Russia relations. Bill Browder, the architect of the Magnitsky Act, which is named after his attorney Sergei Magnitsky, who was tortured for a year before being brutally murdered in a Russian prison, says Putin is spending exponentially more money to bribe Americans to influence the 2024 election than he did in 2016.
Russia has a vested interest in the outcome because the next U.S. president will shape foreign policy toward Russia, particularly concerning issues like the war in Ukraine, NATO expansion, sanctions, and global energy markets. If the U.S. elects a president more favorable to diplomatic engagement or less supportive of military aid to Ukraine, this could benefit Russia's strategic goals. On the other hand, a president who continues a hardline approach against Russian aggression could mean prolonged sanctions and continued support for Ukraine, which would further isolate and weaken Russia on the global stage.
Fyodor Lukyanov, editor-in-chief of Russia in Global Affairs, chairman of the Presidium of the Council on Foreign and Defense Policy, and research director of the Valdai International Discussion Club, wrote 12 September 2024, "Twenty years ago, one of the dominant trends was ‘democracy promotion’. The policy of the then neo-conservative US administration (George Bush and Dick Cheney) was based on the ideological postulate that the spread of the democratic form of government around the world was the most reliable guarantee not only of the national interests of the United States, but also of a positive universal order. They felt that one was inseparable from the other.
The range of tools they had was wide: from actively supporting certain social processes (the so-called ‘color revolutions’ – which raged from the post-Soviet space to the Middle East and North Africa) to direct military intervention to effect regime change (from the Balkans to the Middle East again). Whether Washington wanted it or not, democracy became a political and economic tool for external rather than internal use. The notion of the fundamental importance of having elections recognized by an external arbiter – with the right to certify the result – was what emerged. And if that arbiter wasn’t happy with the outcome, it empowered itself to demand a revision, even by force. The implication was that problems with electoral legitimacy were only possible in fragile young democracies. However, even in stable, well established democracies, things do not always run smoothly – even if generally institutions guarantee order....
"Trust in institutions is falling, as it almost always does in times of great change. And the nature of the mistrust is similar to that which created the conditions for ‘color revolutions’ in more fragile states. Hence the constant fears (and they may be genuine) of outside interference and influence. The American and Western European establishment knows very well how to intervene in and influence troubled societies – now they think the same will happen to them....
"Paradoxically, systems accused of being undemocratic are probably better equipped to survive, at least in the short and medium term. They have to constantly demonstrate to citizens that they are capable of solving their problems, whereas a traditional democracy believes that democratic turnover itself is a remedy for problems. In reality, swapping out one party in power for another changes almost nothing, which only exacerbates discontent."
The Federal Bureau of Investigation (FBI) defines FMI as “subversive, covert (or undeclared), coercive, or criminal activities by foreign governments, nonstate actors, or their proxies designed to sow division, undermine democratic processes and institutions, or steer policy and regulatory decisions in favor of the foreign actors’ strategic objectives and to the detriment of their adversaries.” The term ‘foreign malign influence’ means “any hostile effort undertaken by, at the direction of, or on behalf of or with the substantial support of, the government of a covered foreign country with the objective of influencing, through overt or covert means – (A) the political, military, economic, or other policies or activities of the United States Government or State or local governments, including any election within the United States; or (B) the public opinion within the United States.”
Russia remains the predominant threat to U.S. elections. Moscow continues to use a broad stable of influence actors and tactics and is working to better hide its hand, enhance its reach, and create content that resonates more with U.S. audiences. These actors are seeking to back a presidential candidate in addition to influencing congressional electoral outcomes, undermine public confidence in the electoral process, and exacerbate sociopolitical divisions.
Russian efforts to influence the U.S. elections have been a significant concern since the 2016 election. These efforts have typically involved disinformation campaigns, cyberattacks, and attempts to sow discord in American society through social media. Experts and government officials anticipate that Russia may continue employing similar tactics. In previous elections, Russian interference has included:
- Disinformation campaigns: The use of Social Media Company [SMC] platforms like Facebook, Twitter, and YouTube to spread false or misleading information to polarize voters and exacerbate political tensions. Most widely reported these days are attempts by adversaries—hoping to reach a wide swath of Americans covertly from outside the United States—to use false personas and fabricated stories on social media platforms to discredit U.S. individuals and institutions. A foreign group may purposefully spread false or inconsistent information about an existing social issue to provoke all sides and encourage conflict. These threats are supercharged by emerging technologies like AI. New forms of online FMI threats are continuously surfacing, including AI-fueled synthetic deepfakes. And these threats are originating from an increasingly diverse, growing, and more capable group of foreign actors. Russian influence actors have undertaken distinct efforts during this election cycle to build and use networks of U.S. and other Western personalities to create and disseminate Russian-friendly narratives. These personalities post content on social media, write for various websites with overt and covert ties to the Russian Government, and conduct other media efforts.
- Cyberattacks: Attempts to hack political organizations, voter databases, and election infrastructure. In 2016, Russian hackers targeted the Democratic National Committee (DNC) and attempted to access state voting systems. Cyberattacks against political campaigns and government infrastructure include foreign adversaries hacking and leaking sensitive information from computers, databases, networks, phones, and emails.
- Influencing specific candidates: Russian efforts have often focused on supporting or undermining particular candidates to serve their geopolitical interests.
- Federal election criminal activity: Efforts to suppress voting or to secretly fund support for or opposition to a person or cause. Tactics include political advertising from foreign groups pretending to be U.S. citizens, lobbying by unregistered foreign agents, illegal campaign contributions from foreign adversaries, and the spreading of disinformation about election dates or times.
Foreign influence operations — which include covert actions by foreign governments to influence U.S. political sentiment or public discourse — are not a new problem. But the interconnectedness of the modern world, combined with the anonymity of the Internet, has changed the nature of the threat and how the FBI and its partners must address it. The goal of these foreign influence operations directed against the United States is to spread disinformation, sow discord, and, ultimately, undermine confidence in our democratic institutions and values.
In 2019, bipartisan majorities of the US Congress recognized the threat to national security posed by FMI in Title 50, Section 3369 of the United States Code titled, “Cooperative Actions to Detect and Counter Foreign Influence Operations,” which includes the finding that foreign actors have used the platforms provided by technology companies to engage in FMI activities that threaten U.S. national security, and will likely continue to do so.
Specifically, Congress found that: "(1) [a hostile power deployed] information warfare against the United States, its allies and partners, with the goal of advancing the strategic interests of the [hostile power] . . . (2) One line of effort deployed as part of these information warfare operations is the weaponization of social media platforms with the goals of intensifying societal tensions, undermining trust in governmental institutions within the United States, its allies and partners in the West, and generally sowing division, fear, and confusion. (3) These information warfare operations are a threat to the national security of the United States and that of the allies and partners of the United States. As former Director of National Intelligence Dan Coats stated, “These actions are persistent, they are pervasive and they are meant to undermine America’s democracy . . .” (7) Because these information warfare operations are deployed within and across private social media platforms, the companies that own these platforms have a responsibility to detect and facilitate the removal or neutralization of foreign adversary networks operating clandestinely on their platforms."
Congress stated in the same section that “it is the sense of Congress that information from law enforcement and the intelligence community is also important in assisting efforts by these social media companies to identify foreign information warfare operations.” The cadence of the FBI’s engagements with SMCs depends on a variety of factors relating to the threat landscape. To ensure that meetings are conducted in a manner fully consistent with any applicable First Amendment principles, FBI personnel are not permitted to direct or suggest that SMCs take any actions concerning content on their platforms.
Foreign actors are turning to commercial firms, such as marketing and public relations companies, to leverage these firms’ expertise in communications and their technical sophistication, and to complicate attribution. These firms offer foreign states and other political actors an array of potential services and are often able to operate more nimbly and with fewer bureaucratic hurdles than government entities. Moscow is leveraging Russia-based influence-for-hire firms to shape public opinion in the United States, including with election-related operations. These firms have created influence platforms, directly and discreetly engaged Americans, and used improved tools to tailor content for U.S. audiences, while hiding Russia’s hand.
On April 15, 2021, pursuant to his authorities under IEEPA, the President issued E.O.14024, which declared a national emergency with respect to: "[H]armful foreign activities of the Government of the Russian Federation—in particular, efforts to undermine the conduct of free and fair democratic elections and democratic institutions in the United States and its allies and partners; to engage in and facilitate malicious cyber-enabled activities against the United States and its allies and partners; to foster and use transnational corruption to influence foreign governments; to pursue extraterritorial activities targeting dissidents or journalists; to undermine security in countries and regions important to United States national security; and to violate well-established principles of international law, including respect for the territorial integrity of states—constitute an unusual and extraordinary threat to the national security, foreign policy, and economy of the United States." To implement E.O. 14024, OFAC issued the “Russian Harmful Foreign Activities Sanctions Regulations,” 31 C.F.R. Part 587.
For 2024, U.S. government agencies, including the FBI, NSA, and the Department of Homeland Security, are actively working to counter any potential foreign interference, including from Russia, by improving election security and monitoring disinformation. Moscow’s methods of targeting those it identifies as adversaries are well known – from its illegal and unwarranted invasion of sovereign nations to the unjust imprisonment of innocent persons, to cyberattacks and meddling in foreign elections, to conducting sham elections in Russian-controlled territories of Ukraine.
As part of a series of coordinated actions across the U.S. Government, the Department of State took three steps to counter Kremlin-backed media outlets’ malicious operations seeking to influence or interfere in the 2024 U.S. elections. The Department’s actions included introducing a new visa restriction policy; making Foreign Missions Act determinations regarding RT’s parent company Rossiya Segodnya and other subsidiaries, including RIA Novosti, TV-Novosti, Sputnik, and Ruptly; and announcing a Rewards for Justice offer.
- a new policy to restrict visa issuance to certain individuals who, acting on behalf of Kremlin-supported media organizations, use those organizations as cover for covert activities, and are responsible for or complicit in these malign efforts. Consistent with U.S. law, visa information is confidential, and the department is not at liberty to name these specific individuals.
- the department designated the operational presence of Rossiya Segodnya and its subsidiaries – RIA Novosti, TV-Novosti, Ruptly, and Sputnik – as foreign missions, as they are effectively controlled by the Government of the Russian Federation. As Foreign Missions Act-designated entities, they will be required to notify the department of all personnel working in the United States and disclose all real property they hold.
- the department announced a Rewards for Justice offer to seek information on potential foreign efforts to interfere in U.S. elections, including by organizations such as RaHDit – also known as Russian Angry Hackers Did It – which has previously engaged in covert election interference and influence operations abroad and poses a threat to the 2024 U.S. elections.
On 05 September 2024, the US Treasury announced a new round of sanctions against Russian media figures, including RT executives, for allegedly waging a “malign” government-sponsored influence campaign to manipulate American public opinion ahead of the country's presidential election in November. The US Justice Department also unsealed indictments against two Russian citizens for allegedly violating the Foreign Agents Registration Act of 1938 and producing English-language content for American audiences on behalf of the Russian government.
Russian officials have also weighed in on the targeting of RT and other media channels, with Moscow’s ambassador to Washington, Anatoly Antonov, suggesting that the Democrats are trying to “shift some of the blame for their mistakes during the electoral struggle to Russia” and justify their actions by using “lies” and “trying to discredit Russian media.” “Their goal is clear – to cleanse the information space of inconvenient truth. To thicken the atmosphere of Russophobia, blaming their own failures on external factors,” Antonov said. Russian Foreign Ministry spokeswoman Maria Zakharova said the measures point to the “irreversible degradation” of democratic values in the US and its “transformation into a totalitarian neoliberal dictatorship.”
In Canada, the Public Inquiry into Foreign Interference in Federal Electoral Processes and Democratic Institutions released its Final Report on 28 January 2025, following a 15-month investigation. "The Commission found that foreign interference is not new, but that it is increasing and the means and methods are changing. The Government of Canada responded to attempts at interference by putting in place numerous measures and mechanisms to better detect, prevent and counter them; however, the government sometimes took too long to act, and coordination was less than optimal. In some cases, the processes by which information was communicated to decision-makers, including elected officials, were flawed. The Commission also found that the government has been a poor communicator both about the extent of foreign interference that it detected and the means in place to counter it, and that it must find ways to be more transparent."
"Thus far - and this is one of my most important observations - Canada's democratic institutions have held up well and remained robust in the face of attempted foreign interference," said Commissioner Hogue. "That said, foreign interference will never be completely eradicated, and it will always be necessary to be vigilant and fight against it. Democracies around the world are under attack from all sides, and the technological resources available to malicious actors are multiplying. All of us who live in Canada must confront these challenges, together."