US misfires fighting ISIS online, Part III: ‘Shouldn’t grade your own homework’


In this image provided by the U.S. Army, then-Lt. Col. Victor Garcia walks during a change of command ceremony at Fort Bragg, N.C. Garcia, a 1990 West Point graduate and decorated officer who served in Afghanistan and Iraq, led U.S. Central Command’s information operations division from 2013 through July 2016. The division is the command’s epicenter for firing back at the Islamic State’s online propaganda machine. An AP investigation found it is plagued by incompetence, skewed data and cronyism. (Staff Sgt. Christopher Franklin/U.S. Army via AP)

This is Part III of a four-part series on issues plaguing WebOps and the U.S. counter-propaganda campaign against the Islamic State. Start here at Part I.

To determine whether WebOps – U.S. Central Command’s online counter-propaganda operation aimed at winning hearts and minds – actually dissuades people from becoming radicalized, Colsa Corp.’s scoring team analyzes the online interactions of WebOps employees and tries to measure whether the subjects’ comments reflect militant views or a more tolerant outlook.

Three former members of its scoring team told the AP they were encouraged by a manager to indicate progress against radicalism in their scoring reports even if they were not making any.


One employee, who said she left to find meaningful work, recalled approaching a Colsa manager to clarify how the scoring was done shortly after starting her job. She said he told her that the bottom line was “the bread we put on the table for our children.”

The boss told her that the scoring reports should show progress, but not too much, so that the metrics would still indicate a dangerous level of militancy online to justify continued funding for WebOps, she said.

She was shocked. “Until my dying day, I will never forget that moment,” she said.

She, like other former employees, spoke only on condition of anonymity for fear of retribution from Colsa that could affect future employment.

The manager she spoke to declined to comment. AP withheld his name because of security concerns.

Employees and managers routinely inflate counts of interactions with potential terrorist recruits, known as “engagements,” according to multiple workers. Engagements are tweets or comments posted on social media to lists of people, and they can also be automated. That automation is at times used to inflate the reported number of engagements, said two former workers, including the one who talked about colleagues faking their language abilities.

The worker who left in disgust explained that a single tweet could be programmed to be sent individually to all the followers of a target, multiple times. So a target and his or her followers get the same tweet tagged to them over and over again.

“You send it like a blind copy. You program it to send a tweet every five minutes to the whole list individually from now until tomorrow,” a former employee said. “Then you see the reports and it says yesterday we sent 5,000 engagements. Often that means one tweet on Twitter.” He said he saw managers printing out the skewed reports for weekly briefings with CENTCOM officers, and the sheer volume made the WebOps team’s work look like “wow, amazing.”

Army Col. Victor Garcia, former head of the information operations division, said Colsa did a good job under his watch, that the data was sufficiently scrutinized and that the program was succeeding.

In 2014, a group of more than 40 Defense Department data specialists came to Tampa to evaluate the program. Their unclassified report, obtained by AP, identified what one of the authors called “serious design flaws.” For instance, the report found that any two analysts were only 69 percent likely to agree on how to score a particular engagement. The author said a rate of 90 percent or higher is required to draw useful conclusions.

The report found that computers would be as accurate as or better than analysts and could evaluate effectiveness more quickly and more cheaply.

What Central Command really needed, the report said, was outside oversight.

“You shouldn’t grade your own homework,” said the author, a former U.S. military officer and data specialist once stationed at Central Command. The author, one of many people who signed off on the report, spoke on condition of anonymity for fear of professional retribution.

He said the report was given to officers, including Garcia, and to Colsa, but the suggestions were not implemented and WebOps managers resisted multiple attempts at oversight. When he appealed directly to Garcia for an outside assessment, he said, an officer under Garcia responded that the effort would cloud the mission.

“The argument was that WebOps was the only program at Central Command that was directly engaging the enemy and that it couldn’t function if its staff was constantly distracted by assessment,” he said. The argument worked, he said, and Colsa was not forced or instructed to accept outside oversight.

Garcia disputed that account but would not elaborate on what steps were taken to address the Defense Department data specialists’ concerns. The Government Accountability Office issued a report in 2015 on WebOps oversight, but it is classified.

This is the third in a four-part series on issues plaguing WebOps and the U.S. counter-propaganda campaign against the Islamic State.