Checking Vitals for Journalists: Getting Back to Basics

We have had a few years to survey the journalist training-and-advising landscape, and to participate on occasion, and the clearest conclusion is that journalists should get back to fundamental security principles.  On balance, roughly 80% of online courses, seminars, and workshops focus on email, phone, and computer threats, plus electronic storage of data.  The remaining 20%, covering operational, physical, and personal security, is well placed but is unfortunately run by well-meaning yet unequipped people who have little idea of where to begin.  The tenuous connection forged between digital and non-digital security is unprincipled, wayward, and on the whole a very dangerous course.

Many up-and-coming voices in the Privacy Movement, separate from the heavy hitters who have legitimate concerns and operational experience, talk frequently about digital forensics and only occasionally about basic security behaviors.  Online privacy, as taught, does not require changing many day-to-day behaviors beyond installing, opening, and using computer and phone applications.  Attending an evening workshop, the new recruit simply downloads Signal to her iPhone, sets up encrypted email, and installs the Tor browser, and minutes later she walks the streets as a “special operator.”  No indoctrination.  No change in daily activities.  Just a comfortable stroll into a war zone.  And that is the tragedy: the 80% of time and energy devoted to Digital Ash yields little long-term safety unless the foundation has been laid.

Journalists further face the following paradox: targeted electronic surveillance is very rare, affecting perhaps 10% of fieldwork worldwide, generally when journalists investigate government actors or commit felonies.  The remaining 90% of fieldwork attracts, at most, physical surveillance by ground teams.  And supposing one implements all the recommended digital security measures, the aggressors will simply turn to physical surveillance.  In either scenario the main concern is physical surveillance, so training only against electronic surveillance leaves a huge gap in readiness.

It seems to us the brute reality of dark trades below the felony line is that very few people, agencies, or governments care about journalists’ electronic communications.  Unless someone in the United States is involved in serious felonies (most often non-violent ones involving the handling of classified information), no local, state, or federal authority will devote resources to pulling data from mass surveillance programs.  There has to be a trigger to warrant retrieval, such as felony leaking.  An FBI agent's time is better spent on bank fraud, wire fraud, securities fraud, money laundering conspiracy, identity theft, federal computer crimes, and RICO cases.  The staff of the Baltimore Sun are unlikely to attract targeted electronic surveillance on the scale directed at Russian or Chinese criminal hackers.  Non-state actors may use some signals tools, but more often than not they will deploy a ground team to target friends and family, then scour open-source data online and get exactly what they need to exploit, coerce, compromise, and disrupt.

The picture is slightly different abroad.  State actors such as Azerbaijan, Brazil, Russia, and Pakistan may target foreign correspondents and their local reporters.  Even then, the number of U.S.-based journalists traveling and working in such places is so low that it warrants perhaps 10% of public training time.  Yet the non-profit digital workshops, in person and online, appear to treat every area of interest in the world like Azerbaijan.  That heightens the sense of adventure for New York City staff who never leave New York City, and polishes the small corners of ego reserved for sharing tools with friends and coworkers.  But the preparedness industry loses its mission of preparing journalists when the outcome of training is journalists well suited to meet threats in remote areas they will never enter.  An analogy would be a medical school curriculum composed of one year of healthcare fundamentals and three years studying the examination and treatment of Fraser-Jequier-Chen syndrome (FJC), a rare disorder characterized by a cleft epiglottis and larynx, extra fingers and toes, an extra kidney, and pancreatic and bone abnormalities.  We might as well relabel PGP as FJC and Tor as HCS (Holmes Collins syndrome).

Security training for non-felons should start with the basics: physical security, personal security, and non-digital operational security, circa the U.S. Army in 1950.  These are the essentials needed to operate in any area before adding specialty and mission-specific rehearsals: land navigation and map reading, surveillance of targets, counter-surveillance, compartmenting information, compartmenting assignments, compartmenting equipment, area studies, caches, route maps, spot maps, cover and concealment, communications systems, and human source operations.  With this baseline in place, we can then build the remaining 20% that may include some digital tools.

Critically, without the baseline, one is automatically unprepared no matter the quality of the tools or the honor of the intent.  Imagine the Privacy Movement recruit who gets lost in Mexico City, her iPhone and Apple Watch stolen, with no money and no caches to retrieve.  The reserve skill tank is empty, and she gets attacked by street dogs, picked up by police, and finally handed off to a State Department consulate.  That scenario is far more likely than the NSA devoting five minutes to signals emitted by non-felon journalists working in the U.S.  Now imagine the felon leaking classified materials: the spotlight starts wide, narrows, and zeroes in on someone very quickly.

Levity aside, the point is that 80% digital and 20% physical, personal, and operational is backwards.  It should be 80% non-digital fundamentals, and only then, with a solid baseline, add the 20% digital.  One reason for the imbalance is the ease of use and speed of installation of digital applications, which lends itself to a tools-based curriculum: here are the tools to use.  But operational security cannot be coupled with digital security unless one first understands behavior-based operating, thinking of every step as a behavior that may be exploited.  This demands careful attention to countermeasures, as explained below.  Tools are helpful and often needed, but behavior is the foundation of living the security lifestyle, essentially becoming indoctrinated into a secure environment, not just loading a computer program on the job.

For example, practicing the need-to-know principle with work-related assignments sharply limits one's exposure.  Compartmenting files and separating materials adds further layers of security.  And adopting even the most rudimentary operational security scheme, which can be tailored but serves as a solid guide, is absolutely essential.  Planning, preparation, sketching alternatives, and diagramming and flow-charting possible scenarios in a team environment are required for this type of work, for felon and non-felon alike, for safety and beyond.

With military acronyms now in wide use, and that use often obscuring the original meaning, it is worth sketching the outline here:

This guideline is structured on the Department of Defense (DOD), and specifically the Defense Security Service (DSS).  The widely used acronym OPSEC was designed for military operations worldwide, but it applies to the dark trades as well, as illustrated by criminal organizations, shadow groups, resistance groups, and, in our new age of surveillance, journalists and human rights workers.

The picture looks like this.  It is not strictly sequential, and the categories need not be kept rigidly separate; a lot of intelligence data overlaps, but the categories are helpful.

First, we identify critical information. 

Second, we analyze adversaries’ intent and capabilities. 

Third, we analyze our vulnerabilities to exploitation by adversaries. 

Fourth, we assess our risks based on what we want to do. 

And fifth, we apply countermeasures to reduce adversaries’ abilities to exploit our vulnerabilities.
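The five steps can be made concrete with a short sketch.  The Python below is purely illustrative: the class names, capability labels, and scenario data are invented for this example and do not come from any official OPSEC standard or tool.

```python
from dataclasses import dataclass, field

@dataclass
class Adversary:
    # Step 2: an adversary's intent and capabilities.
    name: str
    intent: str
    capabilities: set

@dataclass
class Vulnerability:
    # Step 3: one of our behaviors, and the capabilities that can exploit it.
    behavior: str
    exposed_to: set

@dataclass
class OpsecAssessment:
    critical_info: list                                   # step 1
    adversaries: list                                     # step 2
    vulnerabilities: list                                 # step 3
    countermeasures: dict = field(default_factory=dict)   # step 5

    def risks(self):
        # Step 4: a behavior is a live risk if some adversary holds a
        # matching capability and no countermeasure covers that behavior.
        return [(v.behavior, a.name)
                for v in self.vulnerabilities
                for a in self.adversaries
                if v.exposed_to & a.capabilities
                and v.behavior not in self.countermeasures]

    def counter(self, behavior, measure):
        # Step 5: apply a countermeasure to a vulnerable behavior.
        self.countermeasures[behavior] = measure

plan = OpsecAssessment(
    critical_info=["identity of a source"],
    adversaries=[Adversary("ground team", "map my contacts",
                           {"tailing", "cameras"})],
    vulnerabilities=[
        Vulnerability("ride transit past fixed cameras", {"cameras"}),
        Vulnerability("encrypted email", {"signals intercept"}),
    ],
)

print(plan.risks())   # only the camera exposure is exploitable here
plan.counter("ride transit past fixed cameras", "vary routes, walk final leg")
print(plan.risks())   # empty once the countermeasure is in place
```

The point of the exercise is the shape of the reasoning, not the code: capabilities and vulnerabilities are matched against each other, and countermeasures are judged by whether they break that match.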

The second step is crucial.  We must comprehend and analyze capabilities and intent.  We could be dealing with binoculars or thermal imaging, printer signatures that match a page to a specific machine, fingerprint collection, facial and gait recognition, and so on.  Intent is usually vague, but it can be teased out.  The third step is equally crucial: what are we doing that makes us vulnerable to those capabilities?  I may board public transport covered in cameras, walk into a building with an ATM that has a camera, send and receive mail, associate with certain people and organizations, emit electronic signatures, set foot in mud and leave shoe prints, or perhaps leave touch DNA or saliva on a used straw.  Now the big part: sure, we do a lot of things that make us vulnerable, but we must balance the risk.  Only then do we come up with reasonable countermeasures.  When dealing with state surveillance by intelligence agencies, for instance, assume the best capabilities and a lot of vulnerability, and design serious countermeasures that create chaos for investigators.