Information Warfare

Bogeyman 


Risk Aversion Is at the Heart of the Cyber Response Dilemma


Monica Kaminska is a postdoctoral researcher at the Hague Program for Cyber Norms, Institute for Security and Global Affairs at Leiden University and a PhD candidate in Cyber Security at the University of Oxford.

The New York Times recently received swift pushback from the White House against a claim that it was preparing a “cyberstrike” against Russia in retaliation for the SolarWinds/Sunburst campaign. The Biden administration’s desire to tone down such rhetoric is not a surprise. Although the SolarWinds campaign appears to have been limited to espionage, of the kind the Five Eyes doubtless engage in themselves, meaningful and commensurate responses have proven difficult for the United States, even in the context of more disruptive or destructive cyber operations.


The question is why the United States restrains its responses, despite formally adopting a strategy of deterrence [PDF] and despite the wide set of economic, political, and cyber tools at its disposal. In a recent publication, I argue that the cyber domain is viewed as a highly uncertain and potentially escalatory environment, which takes more forceful response options off the table. The recent introduction of the strategies of persistent engagement and defend forward stems not from a shift to a more offensive approach but from the same culturally ingrained concern to limit uncertainty and risk.

The Problem With Current Responses

The United States’ usual recourse has included economic sanctions, legal indictments, and public attribution statements—or some combination of these instruments. However, the precise policy objective of imposing “risks and consequences” through them is often unclear. Sanctions and indictments tend to target a number of individual hackers for a variety of incidents, which confuses the signal they intend to deliver to the seats of power in Moscow and Pyongyang. Take, for example, the Justice Department’s October 2020 indictment of individuals for cyber interference in Ukraine, Georgia, the French presidential elections, investigations into the Russian Novichok attack, and the 2018 PyeongChang Winter Olympic Games. The indictment responded to hacks that ranged from intrusions into electrical grids to “hack and leak” election interference. The adversary could be forgiven for thinking they were being punished for the means used rather than for the operation’s intent or effects.

Economic sanctions can be an inconvenience—although the perpetrators of the most brazen cyber campaigns tend to be some of the most heavily sanctioned regimes on earth, which lessens the impact of new measures. In fact, the blowback effect of sanctions can sometimes mean that they have a greater effect on the sanctioning government than on the sanctioned one. For example, in 2018, the Trump administration was forced to back down from sanctions it imposed on Rusal, a Russian company, after they caused a spike in aluminum prices. Public attribution, for its part, has the objective of clarifying “the rules of the game” in cyberspace but in reality could even be seen as a badge of honor by some of the agencies to which the operations are traced back.

The “Risk Society” and the Roots of Restraint

Writing in the early 1990s, German sociologist Ulrich Beck coined the term “risk society” to describe a society defined by the anticipation of catastrophe, stemming from an awareness of the unintended side effects of modernization. The Chernobyl nuclear incident, global financial crises, transnational terrorism, and mounting evidence of environmental degradation generated a realization that geographic distance no longer offered protection from far-flung disasters and hazards. Moreover, these disasters and hazards were increasingly understood to be self-inflicted—the price that modern societies paid for progress.

This led to the development of the risk society—a pervasive sense of insecurity and fear, born of uncertain future events, that underpins policymaking and strategic thinking. The major trauma of the 9/11 terrorist attacks only served to cement this attitude. In a risk society, the costs of action are carefully weighed against the costs of inaction in every decision, often resulting in either overzealous pre-emption or paralysis. This new stage of modernity has not, however, affected every society equally. Beck reminds us that risk is culturally framed. Thus, different societies have different risk thresholds.

The Risk Society Goes Cyber

The complexity of the cyber operational environment and the United States’ dependency on internet-connected systems and networks mean that it feels asymmetrically vulnerable to cyberattacks. Modernization, as Beck explained, has produced unanticipated and unintended side effects. The awareness of this asymmetric vulnerability features front and center when U.S. decision-makers face the dilemma of how to respond to foreign cyber aggression. Fearful of blowback from their own response or of triggering a cyber tit-for-tat exchange and unintentionally escalating a conflict, they opt instead for weaker measures. According to recent survey research, domestic publics support this caution. Adversaries are aware of these dispositions, so risk aversion poses a problem for the credibility of deterrent signals.

To address the uncertainties inherent in cyberspace, the United States has opted for risk management practices. These practices do not substitute for the imposition of costs in response to damaging adversary actions, but instead seek to mitigate as much as possible the scale and effects of hostile cyber operations on digital infrastructures.

Particularly notable have been the preventive practices of persistent engagement and defend forward, which aim to proactively “counter attacks close to their origins” in order to “render most malicious cyber and cyber-enabled activity inconsequential [PDF].” In adapting to the cyber domain, U.S. Cyber Command has shifted from being a “response force” to a “persistent force [PDF].” While on the face of it persistent engagement could seem like a more assertive and offensive strategy, it is in fact an outgrowth of the same aversion to risk that produced limited responses—which could seem paradoxical to those in the academic community who have expressed concerns about the strategy’s escalatory potential. By aiming to eliminate potential threats before these reach the homeland, persistent engagement implies the non-acceptance of adverse consequences of potential hostile campaigns and involves constantly surveilling foreign networks and anticipating the actions of adversaries to mitigate risks.


Impose Costs on Russia in the Information Environment

By Major Travis Florio, U.S. Army

The United States needs to make Russia pay a larger price for its information warfare attacks.


John Arquilla presciently argued in 1993 that warfare is no longer about who has superior capital, labor, and technology; rather, victory is determined by who has the best information about the battlefield. Over the past decade, Russian information warfare has become more openly aggressive, and the United States must go on the offensive in the information environment (IE) to deter and disrupt Russia’s strategy. Brazen meddling in the cyber domain cannot continue uncontested, and despite the image of a powerful post–Soviet Union “Russian bear” under Vladimir Putin, Russia has many vulnerabilities ripe for exploitation.

The digital connectivity and economic growth that technology has brought to the United States have also created a strategic dilemma—the more networked the nation is, the more opportunities there are for adversaries to disrupt critical infrastructure and wreak havoc on U.S. institutions. This is reflected in Russian doctrine, which recognizes an information-psychological aspect of cyber confrontation. Furthermore, Russia exploits freedom of speech in open democracies by interjecting loudly into social media debates. This problem does not require the government to take control of private media companies or regulate social media platforms. It does require a well-structured and resourced plan to impose costs on Russia.

Currently, the United States lacks a coherent, comprehensive, and coordinated approach to countering Russian malign influence operations. Russia exploits this confusion by launching multiple disconnected and seemingly contradictory information campaigns, using Soviet tactics of deception and information distortion. Countering each attempt to create havoc amounts to playing whack-a-mole; a better strategy is to impose costs.

Russia’s Vulnerabilities

Russia has multiple vulnerabilities: an overreliance on high oil and gas prices, economic decline from sanctions, an aging population, underpaid military conscripts, disaffected civilians, anxiety about Western-backed regime change, and loss of great power status. In addition, Russia fears popular unrest within its borders. Controlling such a large nation, which encompasses about an eighth of the globe’s landmass across 11 time zones, has always been a central dilemma for Russian security. Despite Putin’s desired image of a Russian global powerhouse, its current national policy and strategy reveal weaknesses.

Russia’s obsession with color revolutions and regime change reveals a deep insecurity concerning the legitimacy of Putin’s regime—secure nations comfortable with their governance and succession policies do not obsess over regime change. Although the Russian government controls the media and restricts internet applications, Russians are still connected to the outside world via creative cyber workarounds. Russia is not yet in a position to completely control the flow of information in and out of its borders, and Putin has more reason to fear social media influence than the United States does. Even the smallest crack in the firewall can have existential ramifications.

Dr. Scott Fisher’s research on pressuring Russia in the IE found that Russia is more reactive to the informational instrument of power than diplomatic, military, or economic instruments. When Moscow’s narrative is undermined or attacked in the marketplace of ideas via news or social media, Russia reacts quickly to stifle the opposition and propose counternarratives.

Ideas and news accessible on the internet are a major vector for instability in authoritarian governments, because of their potential for motivating and mobilizing the population in ways that threaten the ruling party. The Bolotnaya Square protests of 2011–2012, in which tens of thousands of middle-class Russians demonstrated against Putin’s manipulated accession to a third term, reveal the vulnerability of authoritarianism.

Other exploitable areas for fostering unrest in Russia include healthcare and quality-of-life comparisons with first-world countries. Life expectancy for males in Russia is 13 years lower than the global average; pharmaceutical drug accessibility and healthcare infrastructure are grossly underfunded. Raising the retirement age in 2018 incited fierce protests, so much so that the regime had to back down and soften the planned increase for women. Decision-making in Moscow is not above scrutiny, and the Russian population is capable of criticizing government policies.

Another Russian vulnerability is its open deceit, expending veracity and integrity capital as if it were in endless supply. While this can yield short-term gains, there are long-term ramifications. This is evident in Ukraine, where years of Russian propaganda oversaturation have produced a desensitized population. Only 8.9 percent of Ukrainians trust Russian TV; among young people, only 2 percent even watch it. Allocating resources to confront Russian propaganda of this sort is unnecessary and ineffective, as Russia appears to be damaging itself by its own actions. An often-forgotten lesson of psychological warfare is that propaganda is essentially an offensive tool—denying a lie in most cases merely gives it more circulation. Only the most blatant and pernicious disinformation and misinformation should be countered.

Russia Is Not Invincible

When it comes to influence, Russia’s constant interference may backfire in the long term. A 2019 Pew Research Center survey of 33 countries found that fewer than half of adults across the globe view Russia favorably. Americans’ views of Russia are the lowest they have been in more than a decade. Even Russia’s largest victories, such as those in Crimea and the Donbass, relied on traditional military power and managed to galvanize Europe in fiercely anti-Russian ways. When Russia interfered in the 2017 French presidential election, it failed miserably: once the French public was alerted to the fact that Russia was backing Marine Le Pen, Emmanuel Macron achieved a decisive victory. Despite these failures, Russian information warfare cannot be ignored.

The United States and its allies must go on the offensive to deter or disrupt Russian activities in the IE. This offense must leverage psychological operations, deception, cyber, and public affairs across the Department of Defense (DoD) in a comprehensive information operations campaign. The United States needs to rebuild linguistic capabilities and invest in expert psychological operations and information operations personnel with analytical expertise in Russian culture. DoD also needs to expand its cyber capabilities, both offensive and defensive. As outlined in the 2020 Cyberspace Solarium Commission Report, Congress should ensure the Cyber National Mission Force is adequately funded and appropriately sized to confront the Russian cyber threat. While building these organic capabilities, the United States should simultaneously encourage emigration and recruit highly educated Eastern European youth with cyber backgrounds.

Unleashing the power of capitalism and a competitive job market on Eastern Europe will draw away its best and brightest minds. Providing financial incentives to potential cyber criminals will drain Russia’s pool of highly trained cyber personnel and increase its cost of employing hackers. The FBI’s success in luring hackers such as Alexey Ivanov to the United States is evidence that economic incentives work. Russia loses approximately 350,000 skilled workers per year to various countries; the United States should siphon this talent from potential Russian military and criminal career pipelines.

In addition to investing in human capital, the United States should more aggressively promote human rights to encourage protests against the Russian government. Diminishing faith in the electoral system and highlighting human rights violations, although difficult under a controlled media, could increase discontent among the population. Encouraging protests focused on destabilizing the Russian regime may reduce the likelihood that Russia pursues aggressive action abroad or in the IE against the United States.

Another strategy for confronting Russian information warfare is publicly disclosing the activity and educating U.S. civilians—particularly as it relates to cyber and influence operations. DoD has used this approach in the past to expose Russian malign activity, bringing more scrutiny to Russian fake news and reducing the influence of the message. Cyber Command’s hunt-forward operations have also exposed Russian cyber tactics, forcing Russia to react and investigate how its malware was discovered. These countermeasures should continue, with hunt-forward operations conducted robustly overseas in partnership with U.S. allies.

National deterrence policy and strategy are just as important now as they were in the Cold War; only the weapons have changed. The United States can create multiple dilemmas and impose costs on Moscow by investing in human capital, siphoning Russian cyber talent, using protest potential, and continuing hunt-forward operations in coordination with Eastern European allies—while avoiding wasteful counterpropaganda efforts. Russia wants to operate in a gray area, and it will chip away at U.S. democracy and hegemony until met with an equal or greater force.

 

Bogeyman 


CTIL Files #1: US And UK Military Contractors Created Sweeping Plan For Global Censorship In 2018, New Documents Show


Whistleblower makes trove of new documents available to Public and Racket, showing the birth of the Censorship Industrial Complex in reaction to Brexit and the election of Trump in 2016



A whistleblower has come forward with an explosive new trove of documents, rivaling or exceeding the Twitter Files and Facebook Files in scale and importance. They describe the activities of an “anti-disinformation” group called the Cyber Threat Intelligence League, or CTIL, that officially began as the volunteer project of data scientists and defense and intelligence veterans but whose tactics over time appear to have been absorbed into multiple official projects, including those of the Department of Homeland Security (DHS).

The CTI League documents offer the missing link: answers to key questions not addressed in the Twitter Files and Facebook Files. Combined, they offer a comprehensive picture of the birth of the “anti-disinformation” sector, or what we have called the Censorship Industrial Complex.

The whistleblower's documents describe everything from the genesis of modern digital censorship programs to the role of the military and intelligence agencies, partnerships with civil society organizations and commercial media, and the use of sock puppet accounts and other offensive techniques.

"Lock your shit down," explains one document about creating "your spy disguise.”

Another explains that while such activities overseas are "typically" done by "the CIA and NSA and the Department of Defense," censorship efforts "against Americans" have to be done using private partners because the government doesn't have the "legal authority."

The whistleblower alleges that a leader of CTI League, a “former” British intelligence analyst, was “in the room” at the Obama White House in 2017 when she received the instructions to create a counter-disinformation project to stop a "repeat of 2016."


Over the last year, Public, Racket, congressional investigators, and others have documented the rise of the Censorship Industrial Complex, a network of over 100 government agencies and nongovernmental organizations that work together to urge censorship by social media platforms and spread propaganda about disfavored individuals, topics, and whole narratives.

The US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has been the center of gravity for much of the censorship, with the National Science Foundation financing the development of censorship and disinformation tools and other federal government agencies playing a supportive role.

Emails from CISA’s NGO and social media partners show that CISA created the Election Integrity Partnership (EIP) in 2020, which involved the Stanford Internet Observatory (SIO) and other US government contractors. EIP and its successor, the Virality Project (VP), urged Twitter, Facebook and other platforms to censor social media posts by ordinary citizens and elected officials alike.

Despite the overwhelming evidence of government-sponsored censorship, it had yet to be determined where the idea for such mass censorship came from. In 2018, an SIO official and former CIA fellow, Renee DiResta, generated national headlines before and after testifying to the US Senate about Russian government interference in the 2016 election.

But what happened between 2018 and Spring 2020? The year 2019 has been a black hole in the research of the Censorship Industrial Complex to date. When one of us, Michael, testified to the U.S. House of Representatives about the Censorship Industrial Complex in March of this year, the entire year was missing from his timeline.

An Earlier Start Date for the Censorship Industrial Complex




Now, a large trove of new documents, including strategy documents, training videos, presentations, and internal messages, reveal that, in 2019, US and UK military and intelligence contractors led by a former UK defense researcher, Sara-Jayne “SJ” Terp, developed the sweeping censorship framework. These contractors co-led CTIL, which partnered with CISA in the spring of 2020.

In truth, the building of the Censorship Industrial Complex began even earlier — in 2018.

Internal CTIL Slack messages show Terp, her colleagues, and officials from DHS and Facebook all working closely together in the censorship process.

The CTIL framework and the public-private model are the seeds of what both the US and UK would put into place in 2020 and 2021, including masking censorship within cybersecurity institutions and counter-disinformation agendas; a heavy focus on stopping disfavored narratives, not just wrong facts; and pressuring social media platforms to take down information or take other actions to prevent content from going viral.

In the spring of 2020, CTIL began tracking and reporting disfavored content on social media, such as anti-lockdown narratives like “all jobs are essential,” “we won’t stay home,” and “open America now.” CTIL created a law enforcement channel for reporting content as part of these efforts. The organization also did research on individuals posting anti-lockdown hashtags like #freeCA and kept a spreadsheet with details from their Twitter bios. The group also discussed requesting “takedowns” and reporting website domains to registrars.
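To make concrete what “tracking hashtags and keeping a spreadsheet of Twitter bios” involves technically, here is a minimal sketch of that kind of collection using Twitter’s public v2 search API. It assumes a bearer token and the standard v2 recent-search endpoint; it illustrates the genre of tooling the documents describe, not CTIL’s actual code.

```python
# Illustrative sketch only: search recent tweets for a hashtag and record each
# author's bio to a CSV "spreadsheet". Assumes a Twitter API v2 bearer token;
# endpoint and field names follow the public v2 API, not any CTIL tool.
import csv
import os

import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # assumed credential
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def collect_bios(hashtag: str, out_path: str) -> None:
    params = {
        "query": f"#{hashtag} -is:retweet",
        "expansions": "author_id",          # pull the author objects too
        "user.fields": "username,description",
        "max_results": 100,
    }
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    resp = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    users = resp.json().get("includes", {}).get("users", [])
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["user_id", "username", "bio"])
        for user in users:
            writer.writerow([user["id"], user["username"], user.get("description", "")])

if __name__ == "__main__":
    collect_bios("freeCA", "hashtag_bios.csv")
```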

CTIL’s approach to “disinformation” went far beyond censorship. The documents show that the group engaged in offensive operations to influence public opinion, discussing ways to promote “counter-messaging,” co-opt hashtags, dilute disfavored messaging, create sock puppet accounts, and infiltrate private invite-only groups.

In one suggested list of survey questions, CTIL proposed asking members or potential members, “Have you worked with influence operations (e.g. disinformation, hate speech, other digital harms etc) previously?” The survey then asked whether these influence operations included “active measures” and “psyops.”

These documents came to us via a highly credible whistleblower. We were able to independently verify their legitimacy through extensive cross-checking against publicly available sources. The whistleblower said they were recruited to participate in CTIL through monthly cybersecurity meetings hosted by DHS.

The FBI declined to comment. CISA did not respond to our request for comment. And Terp and the other key CTIL leaders also did not respond to our requests for comment.

But one person involved, Bonnie Smalley, replied over LinkedIn, saying, “all i can comment on is that i joined cti league which is unaffiliated with any govt orgs because i wanted to combat the inject bleach nonsense online during covid…. i can assure you that we had nothing to do with the govt though.”

Yet the documents suggest that government employees were engaged members of CTIL. One individual who worked for DHS, Justin Frappier, was extremely active in CTIL, participating in regular meetings and leading trainings.




CTIL’s ultimate goal, said the whistleblower, “was to become part of the federal government. In our weekly meetings, they made it clear that they were building these organizations within the federal government, and if you built the first iteration, we could secure a job for you.”

Terp’s plan, which she shared in presentations to information security and cybersecurity groups in 2019, was to create “Misinfosec communities” that would include government.

Both public records and the whistleblower’s documents suggest that she achieved this. In April 2020, Chris Krebs, then-Director of CISA, announced on Twitter and in multiple articles that CISA was partnering with CTIL. “It’s really an information exchange,” said Krebs.

The documents also show that Terp and her colleagues, through a group called the MisinfoSec Working Group, which included DiResta, created a censorship, influence, and anti-disinformation strategy called Adversarial Misinformation and Influence Tactics and Techniques (AMITT). They wrote AMITT by adapting a cybersecurity framework developed by MITRE, a major defense and intelligence contractor that has an annual budget of $1 to $2 billion in government funding.
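For readers unfamiliar with MITRE’s approach: frameworks like ATT&CK catalogue adversary behavior as tactics (stages) containing techniques with stable IDs, so that defenses can be indexed against specific techniques. The sketch below shows that general structure; the IDs and names are illustrative placeholders, not real AMITT or ATT&CK entries.

```python
# Minimal sketch of an ATT&CK-style tactics/techniques catalog, the structure
# AMITT borrowed from MITRE. All IDs and names below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Technique:
    technique_id: str   # stable ID, e.g. "T0002" (placeholder numbering)
    name: str
    description: str

@dataclass
class Tactic:
    tactic_id: str      # a stage of the influence "kill chain" (placeholder)
    name: str
    techniques: list[Technique] = field(default_factory=list)

framework = [
    Tactic("TA01", "Plan Narratives", [
        Technique("T0001", "Develop master narrative",
                  "Define the story a campaign wants audiences to adopt."),
    ]),
    Tactic("TA02", "Amplify", [
        Technique("T0002", "Co-opt hashtags",
                  "Attach campaign content to trending hashtags."),
    ]),
]

# Countermeasures are then indexed against technique IDs, which is what makes
# the kill-chain framing actionable: break one link, disrupt the chain.
counters = {"T0002": ["monitor hashtag hijacking", "dilute with counter-messaging"]}
```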

Terp later used AMITT to develop the DISARM framework, which the World Health Organization then employed in “countering anti-vaccination campaigns across Europe.”

A key component of Terp’s work through CTIL, MisinfoSec, and AMITT was to insert the concept of “cognitive security” into the fields of cybersecurity and information security.



The sum total of the documents is a clear picture of a highly coordinated and sophisticated effort by the US and UK governments to build domestic censorship and influence operations similar to the ones they have used in foreign countries. At one point, Terp openly referenced her work “in the background” on social media issues related to the Arab Spring. Another time, the whistleblower said, she expressed her own apparent surprise that she would ever use such tactics, developed for foreign nationals, against American citizens.

According to the whistleblower, roughly 12-20 active people involved in CTIL worked at the FBI or CISA. “For a while, they had their agency seals — FBI, CISA, whatever — next to your name,” on the Slack messaging service, said the whistleblower. Terp “had a CISA badge that went away at some point,” the whistleblower said.

The ambitions of the 2020 pioneers of the Censorship Industrial Complex went far beyond simply urging Twitter to slap a warning label on Tweets, or to put individuals on blacklists. The AMITT framework calls for discrediting individuals as a necessary prerequisite of demanding censorship against them. It calls for training influencers to spread messages. And it calls for trying to get banks to cut off financial services to individuals who organize rallies or events.



The timeline of CISA’s work with CTIL leading up to its work with EIP and VP strongly suggests that the model for public-private censorship operations may have originated from a framework originally created by military contractors. What’s more, the techniques and materials outlined by CTIL closely resemble materials later created by CISA’s Countering Foreign Influence Task Force and its Mis-, Dis-, and Malinformation team.

Over the next several days and weeks, we intend to present these documents to Congressional investigators, and will make public all of the documents we can while also protecting the identity of the whistleblower and other individuals who are not senior leaders or public figures.

But for now, we need to take a closer look at what happened in 2018 and 2019, leading up to the creation of CTIL, as well as this group’s key role in the formation and growth of the Censorship Industrial Complex.

“Volunteer” and “Former” Government Agents



Bloomberg, the Washington Post, and others published credulous stories in the spring of 2020 claiming that the CTI League was simply a group of volunteer cybersecurity experts. Its founders were a “former” Israeli intelligence official, Ohad Zaidenberg; a Microsoft “security manager,” Nate Warfield; and the head of security operations for DEF CON, a hacker convention, Marc Rogers. The articles claimed that these highly skilled cybercrime professionals had decided to help billion-dollar hospitals, on their own time and without pay, for strictly altruistic motives.

In just one month, from mid-March to mid-April, the supposedly all-volunteer CTIL had grown to “1,400 vetted members in 76 countries spanning 45 different sectors,” had “helped to lawfully take down 2,833 cybercriminal assets on the internet, including 17 designed to impersonate government organizations, the United Nations, and the World Health Organization,” and had “identified more than 2,000 vulnerabilities in healthcare institutions in more than 80 countries.”

At every opportunity the men stressed that they were simply volunteers motivated by altruism. “I knew I had to do something to help,” said Zaidenberg. “There is a really strong appetite for doing good in the community,” Rogers said during an Aspen Institute webinar.

And yet a clear goal of CTIL’s leaders was to build support for censorship among national security and cybersecurity institutions. Toward that end, they sought to promote the idea of “cognitive security” as a rationale for government involvement in censorship activities. “Cognitive security is the thing you want to have,” said Terp on a 2019 podcast. “You want to protect that cognitive layer. It basically, it’s about pollution. Misinformation, disinformation, is a form of pollution across the Internet.”

Terp and Pablo Breuer, another CTIL leader, like Zaidenberg, had military backgrounds and were former military contractors. Both have worked for SOFWERX, “a collaborative project of the U.S. Special Forces Command and Doolittle Institute.” The latter transfers Air Force technology, through the Air Force Research Laboratory, to the private sector.



According to Terp’s bio on the website of a consulting firm she created with Breuer, “She’s taught data science at Columbia University, was CTO of the UN’s big data team, designed machine learning algorithms and unmanned vehicle systems at the UK Ministry of Defence.”

Breuer is a former US Navy commander. According to his bio, he was “military director of US Special Operations Command Donovan Group and senior military advisor and innovation officer to SOFWERX, the National Security Agency, and U.S. Cyber Command as well as being the Director of C4 at U.S. Naval Forces Central Command.” His LinkedIn page lists him as having been in the Navy during the creation of CTIL.

In June 2018, Terp attended a ten-day military exercise organized by the US Special Operations Command, where she says she first met Breuer and discussed modern disinformation campaigns on social media. Wired summed up the conclusions they drew from their meeting: “Misinformation, they realized, could be treated the same way: as a cybersecurity problem.” And so they created CogSec, with David Perlman and another colleague, Thaddeus Grugq, in the lead. In 2019, Terp co-chaired the Misinfosec Working Group within CogSec.

Breuer admitted in a podcast that his aim was to bring military tactics to bear on social media platforms in the U.S. “I wear two hats,” he explained. “The military director of the Donovan Group, and one of two innovation officers at Sofwerx, which is a completely unclassified 501c3 nonprofit that’s funded by U.S. Special Operations Command.”

Breuer went on to describe how they thought they were getting around the First Amendment. His work with Terp, he explained, was a way to get “nontraditional partners into one room,” including “maybe somebody from one of the social media companies, maybe a few special forces operators, and some folks from Department of Homeland Security… to talk in a non-attribution, open environment in an unclassified way so that we can collaborate better, more freely and really start to change the way that we address some of these issues.”

The Misinfosec report advocated for sweeping government censorship and counter-misinformation. During the first six months of 2019, the authors say, they analyzed “incidents,” developed a reporting system, and shared their censorship vision with “numerous state, treaty and NGOs.”

In every incident mentioned, the victims of misinformation were on the political Left, and they included Barack Obama, John Podesta, Hillary Clinton, and Emmanuel Macron. The report was open about the fact that the motivation for its counter-misinformation work was the twin political earthquakes of 2016: Brexit and the election of Trump.

“A study of the antecedents to these events lead us to the realization that there’s something off kilter with our information landscape,” wrote Terp and her co-authors. “The usual useful idiots and fifth columnists—now augmented by automated bots, cyborgs and human trolls—are busily engineering public opinion, stoking up outrage, sowing doubt and chipping away at trust in our institutions. And now it’s our brains that are being hacked.”

The Misinfosec report focused on information that “changes beliefs” through “narratives,” and recommended countering misinformation by attacking specific links in a “kill chain,” or influence chain, before a misinformation “incident” grows into a full-blown narrative.

The report laments that governments and corporate media no longer have full control of information. “For a long time, the ability to reach mass audiences belonged to the nation-state (e.g. in the USA via broadcast licensing through ABC, CBS and NBC). Now, however, control of informational instruments has been allowed to devolve to large technology companies who have been blissfully complacent and complicit in facilitating access to the public for information operators at a fraction of what it would have cost them by other means.”

The authors advocated for police, military, and intelligence involvement in censorship, across Five Eyes nations, and even suggested that Interpol should be involved.



The report proposed a plan for AMITT and for security, intelligence, and law enforcement collaboration and argued for immediate implementation. “We do not need, nor can we afford, to wait 27 years for the AMITT (Adversarial Misinformation and Influence Tactics and Techniques) framework to go into use.”

The authors called for placing censorship efforts inside of “cybersecurity” even while acknowledging that “misinformation security” is utterly different from cybersecurity. They wrote that the third pillar of “The information environment” after physical and cybersecurity should be “The Cognitive Dimension.”

The report flagged the need for a kind of pre-bunking to “preemptively inoculate a vulnerable population against messaging.” The report also pointed to the opportunity to use the DHS-funded Information Sharing and Analysis Centers (ISACs) as the homes for orchestrating public-private censorship, and argued that these ISACs should be used to promote confidence in government.

It is here that we see the idea for the EIP and VP: “While social media is not identified as a critical sector, and therefore doesn’t qualify for an ISAC, a misinformation ISAC could and should feed indications and warnings into ISACs.”

Terp’s view of “disinformation” was overtly political. “Most misinformation is actually true,” noted Terp in the 2019 podcast, “but set in the wrong context.” Terp is an eloquent explainer of the strategy of using “anti-disinformation” efforts to conduct influence operations. “You're not trying to get people to believe lies most of the time. Most of the time, you're trying to change their belief sets. And in fact, really, uh, deeper than that, you're trying to change, to shift their internal narratives… the set of stories that are your baseline for your culture. So that might be the baseline for your culture as an American.”

In the fall, Terp and others sought to promote their report. The podcast Terp did with Breuer in 2019 was one example of this effort. Together Terp and Breuer described the “public-private” model of censorship laundering that DHS, EIP, and VP would go on to embrace.

Breuer spoke freely, openly stating that the information and narrative control he had in mind was comparable to that implemented by the Chinese government, only made more palatable for Americans. “If you talk to the average Chinese citizen, they absolutely believe that the Great Firewall of China is not there for censorship. They believe that it's there because the Chinese Communist Party wants to protect the citizenry and they absolutely believe that's a good thing. If the US government tried to sell that narrative, we would absolutely lose our minds and say, ‘No, no, this is a violation of our First Amendment rights.’ So the in-group and out-group messaging have to be often different.”



“SJ called us the ‘Hogwarts school for misinformation and disinformation,’” said the whistleblower. “They were superheroes in their own story. And to that effect you could still find comic books on the CISA site.”

CTIL, the whistleblower said, “needed programmers to pull apart information from Twitter, Facebook, and YouTube. For Twitter they created Python code to scrape.”
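The quote gives little technical detail, so the following is a purely hypothetical sketch: cross-platform monitoring tools of this kind typically normalize posts from different services into one record format and flag matches against tracked phrases. The record shape and function names below are guesses for illustration, not reconstructions of CTIL’s code.

```python
# Hypothetical sketch: normalizing scraped posts from different platforms into
# one record format for incident tracking, then flagging tracked narratives.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    platform: str     # "twitter", "facebook", or "youtube"
    author: str
    text: str
    fetched_at: str   # UTC timestamp of collection

def normalize_tweet(raw: dict) -> Post:
    """Map a scraped tweet (shape assumed for illustration) onto the common record."""
    return Post(
        platform="twitter",
        author=raw["author"]["username"],
        text=raw["text"],
        fetched_at=datetime.now(timezone.utc).isoformat(),
    )

def tag_incident(post: Post, watch_phrases: list[str]) -> list[str]:
    """Flag posts matching tracked narratives, e.g. 'we won't stay home'."""
    lowered = post.text.lower()
    return [p for p in watch_phrases if p in lowered]

if __name__ == "__main__":
    post = normalize_tweet({"author": {"username": "example"},
                            "text": "We won't stay home! #freeCA"})
    print(tag_incident(post, ["we won't stay home", "open america now"]))
```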

The CTIL records provided by the whistleblower illustrate exactly how CTIL operated and tracked “incidents,” as well as what it considered to be “disinformation.” About the “we won’t stay home” narrative, CTIL members wrote, “Do we have enough to ask for the groups and/or accounts to be taken down or at a minimum reported and checked?” and “Can we get all troll on their bums if not?”

They tracked posters calling for anti-lockdown protests as disinformation artifacts.

“We should have seen this one coming,” they wrote about the protests. “Bottom line: can we stop the spread, do we have enough evidence to stop superspreaders, and are there other things we can do (are there countermessagers we can ping etc).”

CTIL also worked to brainstorm counter-messaging for things like encouraging people to wear masks and discussed building an amplification network. “Repetition is truth,” said a CTIL member in one training.

CTIL worked with other figures and groups in the Censorship Industrial Complex. Meeting notes indicate that Graphika’s team looked into adopting AMITT and that CTIL wanted to consult DiResta about getting platforms to remove content more quickly.

When asked whether Terp or other CTIL leaders discussed their potential violation of the First Amendment, the whistleblower said, “They did not… The ethos was that if we get away with it, it’s legal, and there were no First Amendment concerns because we have a ‘public-private partnership’ — that’s the word they used to disguise those concerns. ‘Private people can do things public servants can’t do, and public servants can provide the leadership and coordination.’”

Despite their confidence in the legality of their activities, some CTIL members may have taken extreme measures to keep their identities a secret. The group’s handbook recommends using burner phones, creating pseudonymous identities, and generating fake AI faces using the “This person does not exist” website.

In June 2020, the whistleblower says, the secretive group took further steps to conceal its activities.



One month later, in July 2020, SIO’s director, Alex Stamos, emailed Kate Starbird from the University of Washington’s Center for an Informed Public, writing, “We are working on some election monitoring ideas with CISA and I would love your informal feedback before we go too far down this road . . . . [T]hings that should have been assembled a year ago are coming together quickly this week.”

That summer, CISA also created the Countering Foreign Influence Task Force, whose measures reflect CTIL/AMITT methods and include a “Real Fake” graphic novel, which the whistleblower said was first pitched within CTIL.

The “DISARM” framework, which AMITT inspired, has been formally adopted by the European Union and the United States as part of a “common standard for exchanging structured threat information on Foreign Information Manipulation and Interference.”

Until now, the details of CTIL’s activities have received little attention even though the group received publicity in 2020. In September 2020, Wired published an article about CTIL that reads like a company press release. The article, like the Bloomberg and Washington Post stories that spring, accepts unquestioningly that the CTIL was truly a “volunteer” network of “former” intelligence officials from around the world.

But unlike the Bloomberg and Washington Post stories, Wired also describes CTIL’s “anti-misinformation” work. The Wired reporter does not quote any critic of CTIL’s activities but suggests that some might see something wrong with them: “I ask him [CTIL co-founder Marc Rogers] about the notion of viewing misinformation as a cyber threat. ‘All of these bad actors are trying to do the same thing,’ Rogers says.”

In other words, preventing cyber crimes and “fighting misinformation” are basically the same because both involve fighting what the DHS and the CTI League alike call “malicious actors”—which is synonymous with “bad guys.”

“Like Terp, Rogers takes a holistic approach to cybersecurity,” the Wired article explains. “First there’s physical security, like stealing data from a computer onto a USB drive. Then there’s what we typically think of as cybersecurity—securing networks and devices from unwanted intrusions. And finally, you have what Rogers and Terp call cognitive security, which essentially is hacking people, using information, or more often, misinformation.”

CTIL appears to have generated publicity about itself in the Spring and Fall of 2020 for the same reason EIP did: to claim later that its work was all out in the open and that anybody who suggested it was secretive was engaging in a conspiracy theory.

“The Election Integrity Partnership has always operated openly and transparently,” EIP claimed in October 2022. “We published multiple public blog posts in the run-up to the 2020 election, hosted daily webinars immediately before and after the election, and published our results in a 290-page final report and multiple peer-reviewed academic journals. Any insinuation that information about our operations or findings were secret up to this point is disproven by the two years of free, public content we have created.”

But as internal messages have revealed, much of what EIP did was secret and partisan, and it demanded censorship by social media platforms, contrary to its claims.

EIP and VP have ostensibly ended, but CTIL is apparently still active, judging by the LinkedIn pages of its members.


 
