Two video game researchers have discovered a slew of zero-day vulnerabilities in the engines that run popular first-person shooter games like “Quake 4,” “Monday Night Combat,” “Crysis 2” and “Homefront,” among others. The flaws could put game servers and the gamers who use them in danger.
Archive for May, 2013
If you provide personal information to a company and they lose it through some kind of data breach, how quickly do you want to know about it? I think for most of us the gut-instinct answer is “immediately,” but that’s not the way it works in Canada. Here, it could be weeks, two months, three months or longer before people find out, and there’s not much happening to speed things up.
The Conservative party’s recent opposition to NDP MP Charmaine Borg’s private member’s bill for a new data breach notification law has raised eyebrows within the IT community, because in some ways the whole thing just underscores the Harper government’s failure to address the issue itself. For the better part of two years now Bill C-12, which is different from Borg’s Bill C-475 but seeks many of the same objectives, has been languishing in Parliament. Both pieces of legislation would put stricter rules around how and when organizations inform the public if their IT security has been compromised. Everyone seems to agree this is important, but passing a law that will actually be enforced remains elusive.
There’s little question that Canadians lack adequate data breach notification standards. Look at the Honda Canada incident from 2011, where customer names, addresses, vehicle ID numbers and Honda Financial Services account numbers were lost in February but not reported publicly until May. Not surprisingly, a class-action lawsuit was launched in response. The HRSDC loss of student loan information and Social Insurance Numbers, which just came to light at the beginning of this year, was allegedly kept quiet for two months. When a laptop with information about 52,000 investors was taken from the Investment Industry Association of Canada (IIAC), brokers didn’t find out for four weeks. What more needs to happen here before the government decides some kind of data breach notification law is a top priority?
“It really perplexes me,” said David Fraser, a lawyer based in Halifax who specializes in the Internet, technology and privacy. “While it’s not perfect, Bill C-12 comes as close as possible to a consensus view of what would be wise to put into legislation.”
The main sticking points, Fraser suggested, are questions about the thresholds for notification. Are the rules applicable to any breach, or just ones that could lead to misuse of personal information or some kind of harm? How quick is quick enough? What kind of access should police have during a breach investigation? “It could be the government is worried about losing political capital,” Fraser added, because passing C-12 might put a spotlight on the public sector’s own IT security practices and “would focus attention on just how inadequate it is.”
Obviously, no one wants to be associated with losing customer or citizen data, and experts sometimes imply that if you’re involved in a breach, your brand will suffer in some horrible way. But it’s not like Honda Canada has gone out of business, and student loan data will continue to be stored on HRSDC systems. In fact, there’s evidence to suggest disclosure helps prevent further problems. Earlier this month Telus and the Rotman School of Management released their annual IT security industry study, in which they talked with those responsible for protecting information in all kinds of organizations. This was one of the comments in the report: “There is nothing more effective at demonstrating the urgency of enhancing the security posture of an organization than a breach at a competitor, especially if the breach takes place in Canada. These events bring home the risks – it makes them real.”
Canadian organizations shouldn’t wait for lawmakers to tell them to report data breaches more quickly. Being proactive, thorough and demonstrably concerned for those affected by lost data is just good business practice – and a much better way to say “We’re sorry.”
A laptop containing the protected health information (PHI) of a now-closed private practice has led an oral surgeon to report a health data breach to the State of New York, according to documents acquired by PHIprivacy.net.
On Jan. 11, 2013, attorneys from the law firm Harter Secrest &amp; Emery LLP filed a New York State Security Breach Reporting Form and other documents with the NYS Department of State Division of Consumer Protection in accordance with the Information Security Breach and Notification Act on behalf of the firm’s client, Lee D. Pollan, DMD, PC. Pollan had been in private practice for 37 years before closing his practice at the end of 2011.
The breach was caused by the theft of a laptop owned by Pollan sometime after Nov. 6, 2012; the theft was discovered on Nov. 15, 2012. The laptop contained PHI for 13,806 patients, all of whom are New York residents, including:
• Patient names
• Dates of birth
• Social Security numbers
• Diagnosis and surgery billing codes
• Dates of service
• Person responsible for the billing
In a sample notification letter to his patients, Pollan indicated that the information on the laptop was not encrypted. “Although the computer itself and the software on the computer were password protected, the files on this laptop were not encrypted. I do have a backup drive of the contents of this computer so your information is still available. The backup information is being encrypted.” Patients were not offered identity theft protection.
In the same letter, Pollan revealed he became aware that the laptop was missing when he was attempting “to look up some patient information on this computer” at his current office. A preliminary search would seem to indicate that Pollan is affiliated with the University of Rochester Medical Center, although this information could not be confirmed by either the firm representing Pollan or the oral surgeon himself at the time of this report.
Still unclear are Pollan’s motivations for keeping the PHI of patients from his closed practice. The Centers for Medicare &amp; Medicaid Services (CMS) requires that patient records for Medicare beneficiaries be retained for at least five years (cf. 42 CFR 482.24). State-run Medicaid programs vary in their retention requirements.
Access to the documents is available via PHIprivacy.net.
Hardly a day goes by without a report of a data breach at a financial institution, newspaper, hospital, utility company, or government agency. The breaches may be caused by an employee sharing sensitive security information, or by a domestic or foreign hacking group that managed to penetrate the company’s security perimeter, to name just two scenarios.
After a company is hit by a data breach, sensitive consumer information can be at risk. This can include personal health information, bank accounts, Social Security numbers, or other financial or personally identifiable information. Although a company is itself the victim of a data breach, it may in turn be the target of class action litigation arising out of the breach, due to the loss of its customers’ personal data.
In data breach litigation, standing is a threshold requirement that must be met in order for the litigation to move forward. In Clapper, Director of National Intelligence, et al. v. Amnesty International USA et al., the United States Supreme Court examined the standing requirement in the context of a constitutional challenge to the Foreign Intelligence Surveillance Act. (See 133 S. Ct. 1138, Feb. 26, 2013.) While it involved the issue of standing under FISA, Clapper will likely have an impact on future data breach cases. In Clapper, the court found that an injury must be impending and that potential plaintiffs cannot manufacture standing by inflicting injury on themselves. The court’s reasoning in Clapper will have a direct impact on data breach cases by requiring plaintiffs to move beyond the mere speculative possibility of injury and by foreclosing plaintiffs from asserting certain types of damages as evidence of standing. Companies defending against data breach litigation should consider Clapper when trying to dismiss the case for lack of standing.
In Clapper, a group of attorneys and human rights, labor, legal and media organizations (the respondents) challenged section 1881a of FISA, claiming that it was unconstitutional. Section 1881a, enacted as part of the FISA Amendments Act of 2008, supplemented pre-existing FISA authority by creating a new framework under which the U.S. government could seek the authorization of the Foreign Intelligence Surveillance Court to target and intercept certain foreign communications of non-U.S. persons located abroad. Unlike traditional FISA surveillance, section 1881a does not require the government to demonstrate probable cause that the target of electronic surveillance is a foreign power or agent of a foreign power.
The respondents claimed that their work required them to engage in sensitive and sometimes even privileged telephone and email communications with colleagues, clients, and sources from outside the U.S. The respondents also believed that these persons were likely targets of surveillance under section 1881a. The respondents further claimed that section 1881a negatively impacted their ability to locate witnesses, obtain information, cultivate sources, and communicate confidential information to their clients. As a result, the respondents were forced to cease engaging in certain types of electronic communications and engaged in costly measures to protect communications.
The threshold question before the court was whether the respondents had standing to bring suit challenging the constitutionality of section 1881a. The standing doctrine is built on the separation-of-powers principle and is designed to limit federal court jurisdiction to certain cases and controversies. To establish Article III standing, an injury must be: 1) concrete, particularized, and actual or imminent; 2) fairly traceable to the challenged action; and 3) redressable by a favorable ruling.
The respondents claimed that they had established an injury in fact traceable to section 1881a because there was an objectively reasonable likelihood that their communications with foreign contacts would be intercepted under section 1881a at some point in the future. The court rejected the “objectively reasonable likelihood” standard, finding that the respondents’ argument rested on a highly speculative fear of injury that may occur at some point in the future. The court emphasized that the respondents’ theory of standing relied on a highly attenuated chain of possibilities and that a mere speculative chance of future harm was insufficient. The threatened injury must be impending.
The respondents further claimed that they had suffered injury because the risk of surveillance forced them to assume costly and burdensome measures to protect the confidentiality of their communications. The court found that the respondents’ contention that they incurred certain costs as a reasonable reaction to the risk of harm was unpersuasive, because the harm the respondents sought to avoid was not impending. In other words, the court found that the respondents could not manufacture standing by inflicting harm on themselves based on their fear of a hypothetical future harm that was not imminent. Ultimately, the court concluded that the respondents’ injuries were too speculative and tenuous and that the requirement of standing was not met.
While Clapper involves the issue of standing in the context of FISA, the court’s reasoning in the case will likely have an impact on how data breach cases are litigated. First, the court emphasized the importance of the injury being imminent and impending in order to satisfy the standing requirement. In data breach cases, customers of the breached company may be concerned that there is the possibility of identity theft sometime in the future. Applying Clapper’s reasoning leads to the conclusion that the possibility of future harm is insufficient to meet the standing requirement. Second, the court emphasized how a plaintiff’s act of inflicting harm on itself was insufficient to meet the standing requirement. In data breach cases, customers of the breached company sometimes claim that they suffered harm because they purchased credit monitoring services to mitigate against the risk of possible future identity theft. Citing Clapper, companies defending against data breaches will be able to argue that purchasing credit monitoring services is based on the hypothetical fear of future harm and that plaintiffs cannot manufacture standing by inflicting damages on themselves. Ultimately, the Clapper decision provides an additional avenue for breached companies to use to defend against data breach litigation.
This article originally appeared in Law Technology News.
The Carna botnet, more formally known as the Internet Census 2012, stirred up a hornet’s nest of controversy when it was unveiled in March to a number of popular security mailing lists. An unidentified researcher had found more than 420,000 embedded devices that were accessible online with default credentials, uploaded a small binary to those devices and used them to conduct an Internet scan of the IPv4 address space.
Questions about the ethics and legality of the project quickly surfaced, as did the realization that there was a massive amount of data waiting to be analyzed, and potentially millions of vulnerable enterprise network devices, industrial control systems and home networking gear that needed patching.
Parth Shukla, a relatively new member of Australia’s AusCERT, was one of the first to pore through the data collected by Carna. He received an uncompressed 910 MB file from the researcher that held approximately 1.2 million rows of information, and after restructuring the data in order to properly analyze it, he quickly realized that there was an immediate need to share his findings publicly.
“Public awareness; it was my duty to get this information out there and make people aware that it’s pretty bad,” Shukla said.
Shukla presented his research at an AusCERT event in Australia last week and told Threatpost today that he has begun sharing country- and region-specific data with local CERTs that have made requests.
“I heard about the project about a week after it was published and my first thought was that this was a historic moment because we had captured the state of the Internet at the end of IPv4,” Shukla said. “That was my first reaction, that it was awesome. The information is out there; the bad guys know it and are using it, let’s do something with it.”
Several things immediately stood out as Shukla’s research progressed; most eye-popping were the speed and ease with which one could find vulnerable devices, not to mention the number of ownable machines sitting online. Scanning worldwide at a rate of 10 IPs per second, for example, it would take just under five minutes to find a vulnerable device. Narrowing the scope to China—which had the largest number of IPs in the Carna data—a vulnerable device would pop up every 46 seconds at the same rate of 10 IPs per second. Shukla found on average one vulnerable device for every 456 IP addresses and 1.79 subnets in China. And China may not be the worst offender, he said, basing that theory on the fact that other countries have a worse infected-to-allocated-IP ratio.
“Of the 1.2 million devices, more than half are in China (57 percent),” Shukla said, adding that he understands his analysis could make China a target. But with tools such as the Shodan search engine accomplishing much the same thing, coupled with the realization that most of the devices in the Carna report are likely already owned, he felt it was imperative to share the analysis. “That’s an important figure to pitch at people. Fifty seconds and we have a device with a default credential over Telnet; admin/admin and you’re in.”
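The arithmetic behind those find rates can be sanity-checked in a few lines. This is only a back-of-the-envelope sketch; the assumption that vulnerable devices are spread evenly through the scanned address space is mine, not Shukla’s:

```python
def seconds_to_find_one(ips_per_vulnerable_device: float,
                        scan_rate_ips_per_sec: float) -> float:
    """Expected time to hit one vulnerable device, assuming vulnerable
    devices are evenly distributed across the scanned address space."""
    return ips_per_vulnerable_device / scan_rate_ips_per_sec

# China figures from the Carna analysis: ~1 vulnerable device per 456 IPs,
# scanned at 10 IPs per second.
print(seconds_to_find_one(456, 10))  # → 45.6
```

That 45.6-second result lines up with the "every 46 seconds" figure quoted above, which suggests the per-IP ratio and the scan-rate claim were derived from one another.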
The creator of Internet Census 2012 developed a binary that was uploaded to the insecure devices found during the scan to look for other devices. The binary included a Telnet scanner that would fire different default login combinations at the devices, such as root/root or admin/admin, or would attempt to access devices without a password. The binary also included a manager that would provide the scanner with IP address ranges and then upload the scan results to a collection IP address.
“We deployed our binary on IP addresses we had gathered from our sample data and started scanning on port 23 (Telnet) on every IPv4 address. Our telnet scanner was also started on every newly found device, so the complete scan took only roughly one night. We stopped the automatic deployment after our binary was started on approximately thirty thousand devices,” the researcher said in his paper. “The completed scan proved our assumption was true. There were in fact several hundred thousand unprotected devices on the Internet making it possible to build a super-fast distributed port scanner.”
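In outline, the default-credential check the paper describes amounts to walking a short list of factory logins against each responding device. The sketch below captures only that logic; the credential list and function names are illustrative stand-ins, not the Carna binary’s actual code, and the login callback is left to the caller so that any real probing is confined to devices you own:

```python
# Default credential pairs of the kind the Carna binary reportedly tried
# over Telnet (root/root, admin/admin, or no password at all).
DEFAULT_CREDS = [("root", "root"), ("admin", "admin"), ("root", ""), ("admin", "")]

def try_default_logins(attempt_login, creds=DEFAULT_CREDS):
    """Return the first (user, password) pair the device accepts, or None.

    `attempt_login` is a callable(user, password) -> bool supplied by the
    caller, e.g. a wrapper around a Telnet session to your own device.
    """
    for user, password in creds:
        if attempt_login(user, password):
            return (user, password)
    return None

# Simulated device that still ships with admin/admin:
fake_device = lambda u, p: (u, p) == ("admin", "admin")
print(try_default_logins(fake_device))  # → ('admin', 'admin')
```

Because each compromised device ran the same scanner against fresh address ranges, the search spread roughly exponentially, which is how the full IPv4 sweep finished in "roughly one night."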
Shukla hopes that manufacturers of networking gear who send products to market with default credentials will be among the first to heed his call to change the current state of affairs. While he is sharing his data with other CERTs and ISPs/telcos, he’s finding some pushback because the data does not come with timestamp information, limiting how providers that use DHCP, for example, can address the issue.
“I’ve spent days giving people data and pointers on how to use it,” Shukla said, adding that he hopes manufacturers will be among the first to act. “Hopefully CERTs in those countries can go to manufacturers with this data and tell them that, for example, ‘50 percent of the devices in our country are yours, what are you going to do about it?’ ”
Shukla said his next step would be to analyze the traceroutes of the vulnerable devices, as well as some anomalous data such as some of the same IP records appearing in more than one country. He also hopes to deliver continent-specific data.
“I want to put together as much information for most of the CERTs to be able to do something about it,” he said. “There’s still quite a lot of analysis left to be done.”
Updated to correct the size of the uncompressed file received by Parth Shukla to 910 MB. The previous report of 9 TB is the size of the uncompressed public torrent.
Two security engineers for Google say the company will now support researchers publicizing details of critical vulnerabilities under active exploitation just seven days after they’ve alerted a company.
That new grace period leaves vendors dramatically less time to create and test a patch than the previously recommended 60-day disclosure deadline for the most serious security flaws.
The goal, write Chris Evans and Drew Hintz, is to prompt vendors to more quickly seal, or at least publicly react to, critical vulnerabilities and reduce the number of attacks that proliferate because of unprotected software.
Vendors have long been criticized for using responsible disclosure to their advantage to delay issuing a fix as long as possible, sometimes even years. Only once a patch is issued does a researcher reveal details of the software flaw. Under the concept of full disclosure, both the company and the public are given details at the same time.
The 60-day notice was announced almost three years ago by a Google security team, which included Evans, as a compromise between full and responsible disclosure for critical vulnerabilities, particularly those that require complex coding to fix. But the regular appearance of zero-day exploits targeting unpatched software has prompted Google to reconsider that timeline.
“Our standing recommendation is that companies should fix critical vulnerabilities within 60 days — or, if a fix is not possible, they should notify the public about the risk and offer workarounds,” the two said in a blog post today. “We encourage researchers to publish their findings if reported issues will take longer to patch. Based on our experience, however, we believe that more urgent action — within seven days — is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.”
Anticipating pushback, the pair acknowledge a week’s notice is unrealistic in some instances. But, they believe, it provides enough time for a company to provide mitigations — such as temporarily disabling a service or restricting access — to reduce the risks of further exploits in the wild.
“As a result, after seven days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves,” they wrote.
The same deadline will apply to bug hunters who discover vulnerabilities in Google products too, they said.
“By holding ourselves to the same standard, we hope to improve both the state of web security and the coordination of vulnerability management.”
Cloud-based note-taking service Evernote this week pushed out three new security features, including two-factor authentication for some users, in hopes of adding an extra layer of protection to accounts.
Article source: http://threatpost.com/amazon-joins-authentication-game/
A new strain of banking malware, Beta Bot, has been refined over the last few months to target ecommerce and comes complete with an array of features designed to keep it from being caught by the usual security measures.
According to research conducted by RSA Security’s Limor Kessem, the bot started out in January as an HTTP bot and then made a gradual transition to a banking Trojan. Kessem, who’s part of RSA’s Cybercrime and Online Fraud Communications division, said Beta Bot has many attack vectors.
The malware has been seen targeting everything from large financial institutions to social networking sites, along with “payment platforms, online retailers, gaming platforms, webmail providers, FTP and file-sharing user credentials … domain registrars for the common malware use of registering new resources,” Kessem said.
Interestingly enough, the bot is deployed on machines only after a user clicks through a prompt and allows it. Once it’s in, though, the malware has an array of self-defense mechanisms.
Users whose machines are infected by the malware will find themselves unable to reach whatever antivirus and security provider websites the attacker selects. When trying to reach one of those sites, they’ll get redirected to an IP address of the attacker’s choosing instead.
The malware knows better than to execute in virtual machines and can avoid sandboxes as well, Kessem said. It can even block other types of malware from spreading on the system by “terminating their processes” and blocking code injections.
The Trojan goes on to log stolen data in a MySQL database, download malicious files, give attackers remote control of the infected PC and trick users into making fake banking transactions.
Kessem spoke with Beta Bot’s developer, who claims he is selling binaries for the malware and providing technical support but doesn’t plan to sell the builder, opting instead to keep it private. Builds can be purchased in underground online forums for between $320 and $500, complete with a customized server-side control panel interface.
Banking Trojans are continuing to grow more sophisticated in order to stay ahead of the curve of advanced detection methods. Last month it was reported that Shylock, the credential-swiping Trojan that relies mainly on man-in-the-browser attacks, had begun to weed out less profitable banks and updated its infrastructure to avoid downtime. Developers behind the ubiquitous Zeus Trojan were also found in April peddling tweaked versions of the malware, complete with customized botnet panels, via social networks like Facebook.
The endless loop that is the disclosure debate got a jolt of energy yesterday when Google said it would support researchers’ disclosure of details on actively exploited critical vulnerabilities just seven days after the researcher has notified the vendor in question.
Google hopes the policy change (almost three years ago Google was all for a 60-day window between notification and disclosure) prompts vendors to react more quickly to big bugs and stifles zero-day attacks.
In the meantime, it will be interesting to watch from the sidelines whether this decision strains relationships between vendors and researchers. Google engineers, for example, are often cited in Microsoft security advisories when patches and cumulative security updates are released for Windows, Internet Explorer and other products.
Windows’ dominant desktop share and IE’s large presence in the browser market make Microsoft a perennial target. And through Trustworthy Computing and Microsoft’s coordinated vulnerability disclosure policies, the company has worked hard to improve its public and internal position on security and the priority it places on secure development in its products.
A seven-day turnaround is admittedly tricky, Google engineers Chris Evans and Drew Hintz said in announcing the change yesterday. They did couch it a bit, saying that vendors would have seven days to respond with either a patch or a security advisory.
“This is certainly an interesting move by Google. And I can see their reasons behind doing this, especially if people are currently targeted by a certain vulnerability,” said Nils, head of research with MWR InfoSecurity of the UK. Nils has made more than $100,000 writing successful exploits for vulnerabilities in Google’s Chrome browser, Firefox and Internet Explorer during Pwn2Own contests, and is a noted bug-hunter.
Google reconsidered its 60-day timeline in light of an unabating rash of zero-day exploits, Evans and Hintz said.
“Our standing recommendation is that companies should fix critical vulnerabilities within 60 days — or, if a fix is not possible, they should notify the public about the risk and offer workarounds,” Evans and Hintz said. “We encourage researchers to publish their findings if reported issues will take longer to patch. Based on our experience, however, we believe that more urgent action — within seven days — is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.”
Nils, for one, said few researchers would be in an appropriate position to understand whether vulnerabilities could be immediately exploited in certain products.
“This might lead to cases where a vulnerability is disclosed early using the potential for public exploitation as an excuse, putting end-users at risk,” Nils said.
Microsoft declined an opportunity to comment for this article, as did Adobe, which has joined Microsoft atop the hackers’ hit parade. Vulnerabilities in Adobe Flash and Reader have been exploited in numerous high-profile attacks, and even Adobe’s own infrastructure was attacked last year, when a valid certificate was stolen and used to sign malicious utilities often used in targeted attacks. Adobe’s Connect User site was also compromised late last year.
An Adobe spokesperson did offer a statement that the company’s policy is to patch zero-day vulnerabilities as quickly as possible, often within seven days.
“We don’t foresee this changing our relationship with outside researchers. Again, our policy has always been to respond as quickly as we possibly can to 0-day issues,” the Adobe statement said.
Microsoft’s coordinated vulnerability disclosure principle deems that researchers privately disclose new vulnerabilities to the vendor or to a central coordinator such as a CERT, giving the vendor in question time to analyze the vulnerability and prepare a patch. Microsoft’s stance is that once a patch is released, the researcher may then share his findings.
“If attacks are underway in the wild, and the vendor is still working on the update, then both the finder and vendor work together as closely as possible to provide early public vulnerability disclosure to protect customers,” Microsoft said in a post on its Security Response Center. “The aim is to provide timely and consistent guidance to customers to protect themselves.”
Google’s own security researchers have been in the middle of some interesting exchanges on this subject. Tavis Ormandy has disclosed Microsoft vulnerabilities publicly before the company has released a patch, the first time in 2010, giving the company only five days’ notice regarding a flaw in the company’s Help Center product. That angered Microsoft’s security team, but Ormandy said he published the details because he thought attacks against the flaw were likely: attackers had studied weaknesses in protocol handlers before, and making details public would help organizations defend themselves more effectively.
Just last week, Ormandy posted to the Full Disclosure mailing list details on a Windows memory vulnerability, looking for help from the community with an exploit. “I don’t have time to work on silly Microsoft code, so I’m looking for ideas on how to fix the final obstacle for exploitation,” Ormandy wrote on Full Disclosure. He also wrote on his personal blog that Microsoft treats researchers with hostility and is difficult to work with.
“I would advise only speaking to them under a pseudonym, using Tor and anonymous email to protect yourself,” he wrote.
Google, meanwhile, said it will hold itself to the same seven-day standard, something Nils noted as well.
“It will be very interesting to see how Google will deal with the issues if Android is the affected platform, without a good patching infrastructure in place,” Nils said.
Article source: http://threatpost.com/and-on-the-seventh-day-they-disclose/
Once upon an Internet, a security researcher who discovered a vulnerability had very limited options for what to do with that information. He could send it to the vendor and hope someone cared enough to patch it; he could post it to a mailing list for all to see; or, if he had the right contacts, he could attempt to sell it. The rise of vendor-sponsored bug bounty programs in recent years has changed that dynamic forever, providing a nice source of both recognition and income for security researchers. But the threat landscape may have already outstripped the existing reward systems, creating the need for an alternative.
Bug bounty programs have been a boon for both researchers and the vendors who sponsor them. From the researcher’s perspective, having a lucrative outlet for the work they put in finding vulnerabilities is an obvious win. Many researchers do this work on their own time, outside of their day jobs and with no promise of financial reward. (One could argue that no one is asking them to look for these vulnerabilities, so they shouldn’t expect any reward, but that’s a separate discussion. They are doing it, and it’s a net benefit in most cases.) The willingness of vendors such as Google, Facebook, PayPal, Barracuda, Mozilla and others to pay significant amounts of money to researchers who report vulnerabilities to them privately has given researchers both an incentive to find more vulnerabilities and a motivation to not go the full disclosure route.
For the vendors, bug bounty programs serve several purposes. They help establish good working relationships with researchers, increasing the likelihood that someone who finds a vulnerability in their products will come to them first. Rewards also serve as a relatively inexpensive way to identify and repair those vulnerabilities. Even at the high end of the scale, which is occupied by Google’s infrequent special rewards that can reach into the tens of thousands of dollars, the money is not a major expense for the companies. Most bounties are in the $1,000-$5,000 range.
However, those dollar figures are dwarfed by the ones available from the biggest bug bounty program of them all: the private vulnerability market. Researchers willing to go that route can make their year with just one sale. The prices vary greatly depending upon the product in which the vulnerability is found, as well as who the buyer is, but critical flaws in high-profile applications such as Internet Explorer or Windows can bring six figures. And there is no shortage of buyers in this game, with defense contractors, governments, private brokers and others all willing to pony up for good bugs. That level of money can be very attractive for a researcher, especially when the alternative is to report it to the vendor and perhaps get nothing in return other than an acknowledgement in a security bulletin from Microsoft.
Despite the money available from various sources, some researchers still wind up posting the details of their findings publicly for a variety of reasons. Some may not have the contacts to sell a bug on the open market, others may be too young or otherwise ineligible for vendor reward programs and still others may have tried to go to the vendor with their bug and been rebuffed for some reason. This set of circumstances could be an opportunity for the federal government to step in and create its own separate bug reward program to take up the slack.
Certain government agencies already are buying vulnerabilities and exploits for offensive operations. But the opportunity here is for an organization such as US-CERT, a unit of the Department of Homeland Security, to offer reasonably significant rewards for vulnerability information to be used for defensive purposes. There are a large number of software vendors who don’t pay for vulnerabilities, and many of them produce applications that are critical to the operation of utilities, financial systems and government networks. DHS has a massive budget (a $39 billion request for fiscal 2014), and a tiny portion of that allocated to buy bugs from researchers could have a significant effect on the security of the nation’s networks. Once the government buys the vulnerability information, it could then work with the affected vendors on fixes, mitigations and notifications for customers before details are released.
If a researcher finds a vulnerability in a product covered by this program, however that would be defined, he would have the option of selling the information to the government rather than simply publicly disclosing it. This would also help keep some of these vulnerabilities off the private market where their eventual use is unknown at best.
US-CERT and the ICS-CERT already perform part of this function, working with researchers and vendors to coordinate patches and disclosure timelines. The difference in what they’d be doing would be negligible, but the effect could be huge. Manufacturers of SCADA and ICS (industrial control system) software have been notoriously slow to fix vulnerabilities and indifferent, if not outright hostile, to security researchers who try to report serious bugs to them. This would be a problem even if the applications in question were just desktop software, but these are the systems that control some of the country’s more vital networks. This is not a theoretical problem turning on possible vulnerabilities and speculative attacks. There are serious attacks against these systems occurring right now, and no one can afford for the vendors to sit on their hands any longer.
This plan certainly wouldn’t solve the entire problem. Nothing short of unicorns writing magical bug-free software will do that. And government involvement in security usually isn’t a desired outcome, but in this case it may be the best alternative.