The June 2015 breach at the Office of Personnel Management was a wake-up call for the federal government, and in its wake a number of initiatives were launched to improve the government’s cybersecurity posture. Despite several concrete improvements, progress has stalled in some areas, as a series of assessments conducted since the breach demonstrates.
In the fall of 2015, the Government Accountability Office (GAO) conducted its first assessment under the Federal IT Acquisition Reform Act, which covers cybersecurity as well as other areas of IT. Out of 24 agencies, none received an A, two received Bs, five got Cs, 14 got Ds and three agencies — the Department of Education, the Department of Energy and NASA — received failing grades.
Over the following six months, seven agencies raised their scores, and one saw its score go down. By the time of the next assessment, in late 2016, 12 agencies had improved their scores and, again, one fell. The most recent version of the scorecard, released by the House Oversight and Government Reform Committee on June 14, shows that progress has stalled. Only four agencies improved their scores, and five saw their scores fall.
Today, only one agency, USAID, scored an A. Seven agencies scored Bs, 10 got Cs, five got Ds, and one agency, the Department of Defense, fell to an F, according to a copy of the scorecard obtained by Federal News Radio. Zeroing in on the transparency and risk management scores, five agencies received failing grades.
The government had three main problem areas to address after the OPM breach: management, bureaucracy and the technology itself. While there has been some progress on the tech front, many of the bigger organizational issues remain.
The buck stops… where?
A key lesson of the OPM breach was that the problems started at the top. “The long-standing failure of OPM’s leadership to implement basic cyber hygiene, such as maintaining current authorities to operate and employing strong multi-factor authentication, despite years of warnings from the inspector general, represents a failure of culture and leadership, not technology,” the House Oversight Committee wrote in its 241-page report about the causes and consequences of the breach.
Real cybersecurity improvements start when an organization, including its top leaders, is aware of and engaged with the problem. OPM actually improved in this area, from a B score last summer to an A in this month’s FITARA scorecard.
Other agencies fared poorly. In fact, leadership is one of the areas that saw the least progress after the OPM breach. Last August, nine agencies received Fs for the degree of authority granted to their CIOs. That number improved only slightly this month, to seven.
In his testimony before the House Committee on Oversight and Government Reform earlier this month, Gartner research director Rick Holgate criticized the government’s slow improvements in this area. “If CIOs and agency leadership are not regularly interacting with each other, CIOs and IT professionals will forever be playing catch-up, leading to excess costs, performance gaps, and security flaws,” he told the committee.
The lack of accountability at the top of the organization was the number one lesson of the OPM breach, says Anthony Dagostino, global head of cyber risk at London-based Willis Towers Watson. Dagostino is an active member of the FBI’s Infragard program and is also involved with various working groups at the Department of Treasury, Department of Homeland Security and the Senate Commerce Committee.
Both President Barack Obama and President Donald Trump have directly addressed this issue in their cybersecurity executive orders, he says. “The executive orders really hold the executive department heads and agency heads responsible for cyber risk management and cybersecurity, instead of the IT department,” he says. Trump’s executive order was signed in March, and includes a number of measures designed to strengthen cybersecurity, including a mandate to use the NIST framework to manage risk.
“The NIST cybersecurity framework has really taken hold, not just in the government but across the US, and is becoming a de facto standard for looking at and assessing an organization’s cybersecurity posture,” says Richard Spires, chairman of the board at Resilient Network Systems and former CIO of the Department of Homeland Security.
“I think it sends the right message to agencies about how important this is,” he says. “Obviously, the proof is in actually doing it and carrying it out, but I think these are very positive steps that are being taken.”
While the OPM received an A for its leadership in the latest FITARA scorecard, its overall score barely budged, from a D in October 2015 to a D+ this month. There were a number of areas in which OPM desperately needed to improve its technology. Users were able to access the systems with just passwords, for example, and the critical databases were not encrypted. Much of the infrastructure was old and out of date, and there was a lack of network security controls.
The lack of strong authentication was a particularly thorny problem because the federal government already had a two-factor system in place, the Personal Identity Verification card. OPM ignored that mandate, and none of the agency’s 47 major applications required PIV authentication, according to the audit report.
Since then, two-factor authentication has been deployed for all users accessing the OPM’s new National Background Investigations Bureau, launched last fall to replace the old Federal Investigative Services. In addition, government agencies — and the industry in general — are moving away from having large databases full of passwords or biometric data.
“The White House made a big push to make sure that OPM and every other agency is using strong authentication everywhere,” says Jeremy Grant, managing director for technology, business and strategy at Washington, D.C.-based Venable LLP. Grant headed the national strategy for trusted identity in cyberspace for the Obama administration.
One approach comes from the FIDO Alliance, whose specifications allow websites and applications to authenticate users with scanners that store biometric information on user devices, instead of in a central database. “The upcoming guidance from NIST recognizes FIDO as the highest level of assurance for authentication,” Grant says.
The approach addresses two risks, says Grant. First, it protects fingerprints, retina scans and other biometric information from being stolen at all, by storing them in a hardened, secure area on a smartphone or other device. Second, if a fingerprint image is stolen anyway, say, during the OPM breach, and someone creates a dummy finger with it, the attacker would also have to steal the user’s authentication device in order to make use of it.
“They’d have to steal my phone, and incapacitate me so that I couldn’t use my Find my iPhone function and brick it,” he says. “And if someone has stolen my phone and has me incapacitated, I have much bigger problems.”
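The device-bound model Grant describes boils down to a challenge-response flow: the secret never leaves the device, and the server never holds raw biometrics. Below is a minimal, dependency-free sketch of that flow. Note the simplification: real FIDO2 authenticators sign challenges with a private key kept in secure hardware, and the server stores only the matching public key; to keep this runnable with the standard library alone, an HMAC over a device-resident secret stands in for the signature. All class and variable names are illustrative, not from any FIDO specification.

```python
import hashlib
import hmac
import secrets

class Authenticator:
    """Stands in for a phone or security key: the secret never leaves it."""
    def __init__(self):
        # In real FIDO2 this would be a private key in secure hardware;
        # the fingerprint scan merely unlocks it locally.
        self._device_secret = secrets.token_bytes(32)

    def register(self) -> bytes:
        # Simplification: we hand the server a copy of the secret so HMAC
        # works. A real authenticator would return a public key instead.
        return self._device_secret

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._device_secret, challenge, hashlib.sha256).digest()

class Server:
    """Stores per-user verification material, never raw biometrics."""
    def __init__(self):
        self._registered = {}

    def enroll(self, user: str, key: bytes) -> None:
        self._registered[user] = key

    def new_challenge(self) -> bytes:
        # A fresh random challenge per login defeats replay attacks.
        return secrets.token_bytes(32)

    def verify(self, user: str, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._registered[user], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

# Usage: stealing the central database yields no biometrics, and a stolen
# fingerprint image is useless without the enrolled device.
device = Authenticator()
server = Server()
server.enroll("alice", device.register())

challenge = server.new_challenge()
assert server.verify("alice", challenge, device.sign(challenge))
```

The design choice worth noticing is that compromise requires both factors at once: the per-login challenge plus the device-held secret, which is exactly the "steal my phone and incapacitate me" scenario Grant describes.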
Some agencies do have to keep biometric data on file, he agrees. Police departments, for example, have to collect fingerprints at crime scenes. And, of course, the federal government has to collect fingerprints when it does its security checks.
“When you have to store highly sensitive information, it should be absolutely table stakes to use multi-factor authentication,” says Brett McDowell, executive director at the FIDO Alliance.
In addition, access should be limited to just those people who need it, says Gus Coldebella, attorney at Boston-based Fish & Richardson P.C. and former acting general counsel of the U.S. Department of Homeland Security under George Bush. “You have to determine if an employee is entitled to see some segment of information, and should be restricted to just that,” he says. “That employee might not actually be that employee. It might be a nation-state sponsored actor who successfully spear-phished the credentials.”
Finally, the information itself should be stored in encrypted form. OPM finally began a major encryption push in 2016 and expects to complete it by the end of this year, OPM CISO Cord Chase told the House Oversight Committee in February.
One of the problems uncovered after the initial breach was that many of OPM’s systems were being used without a Security Assessment and Authorization. By the end of 2016, according to an audit report, 18 major systems still had no valid authorization in place.
In addition, the audit showed a significant staffing problem, which caused the OPM to backslide in its compliance with the Federal Information Security Management Act. “There has been an extremely high employee turnover rate for the ISSO positions, and OPM has struggled to backfill these vacancies,” said Michael Esser, OPM’s assistant inspector general for audits, in his report. “In addition, there have been five different individuals in the role of the chief information officer in the past three years.”
Finally, there’s the problem of old equipment. BeyondTrust surveyed federal IT managers earlier this year and found that 47 percent of federal agencies still use Windows XP. “Windows XP is highly insecure and many of the newer anti-virus, multi-factor authentication, and even security tools just do not work on unsupported platforms anymore,” says Morey Haber, VP of technology at Phoenix-based BeyondTrust. “Commercial businesses will not make money or develop for platforms that are end of life. There is no sustainability model for it.”
According to Gartner’s Holgate, legacy systems in the federal government have an average age of 14 years, compared to 10 in the private sector.
In addition to ripping and replacing, one option is to move to cloud-based infrastructure. Here, too, the federal government lags behind. “Federal agencies reported in 2016 that they spend 3 percent of their total IT expenditures on cloud services,” says Holgate. “That is significantly less than private sector peers, for which benchmarking shows 12 percent.”
Moving to the cloud isn’t necessarily more secure, says Ken Kartsen, VP of federal sales at Santa Clara, Calif.-based McAfee LLC. Kartsen has been working with federal government clients in various areas of cybersecurity for nearly 20 years. “But if you look at the underlying infrastructure, especially infrastructure as a service, you at least start with a safe and secure system,” he says.
He has seen some progress in this area, he adds. “Two years ago, I didn’t know of any large component of infrastructure that was outsourced to the cloud,” he says. “Two years later, it’s very different. The momentum is definitely there.”
The FedRamp program, for example, pre-approves cloud vendors to make it easier and faster for government agencies to move to the cloud. “That shows to me that the government is moving very aggressively,” he says. “I think we’re going to see a lot more infrastructure outsourced over the next couple of years.”
Long-term impact is yet to be felt
The OPM breach was unlike most other breaches, creating problems that can’t be fixed by reissuing credit cards and offering credit monitoring services. Nearly 22 million Social Security numbers were exposed, and those cannot be reissued. And that’s just the start.
The stolen data also includes background investigations on people applying for security clearances, as well as their spouses, covering criminal and financial histories and information about friends, family members and business acquaintances. More than a million fingerprints were also lost. “The scariest thing is the fallout we haven’t yet seen, the potential corruption of data, the long-term effects of espionage on national security,” says Willis Towers Watson’s Dagostino. The data could be used to unmask covert agents, for example.
In addition, the sensitive background information can help a foreign power to find and recruit potential intelligence sources. “It was a great hit to the United States, and I don’t believe we’ve seen the full impact of it,” says Brian White, COO at Baltimore-based RedOwl. White has previously worked at the Department of Homeland Security. “And we are still not doing everything possible to protect our most sensitive information,” he adds.