Adult Friend Finder members exposed in data breach; Ford’s new backup assist

JENNIFER IS GOING TO JOIN US SHORTLY WITH MUCH MORE ON THIS. MEANWHILE, FORD IS ABOUT TO MAKE IT EASIER THAN EVER TO TOW YOUR TRAILER OR WHATEVER YOU HAUL AROUND. FIRST, IT MIGHT BE THE MOST EMBARRASSING DATA BREACH YET. OUR CONSUMER EXPERT AMY DAVIS HAS MORE ON THE LEAK AT ADULT FRIEND FINDER. A LOT OF THESE DATA BREACHES CAN BE FRUSTRATING, UNNERVING. THIS ONE, SCANDALOUS. WE LIKE IT. SO TALK ABOUT PRIVATE INFORMATION GETTING INTO THE WRONG HANDS. THE SITE HAS CONFIRMED THAT INFORMATION FOR AS MANY AS FOUR MILLION OF ITS MEMBERS MAY HAVE BEEN STOLEN. ACCORDING TO USA TODAY, THAT INCLUDES NAMES, ADDRESSES, EMAILS, DATES OF BIRTH, THE USUAL STUFF, BUT ALSO USERS’ SEXUAL PREFERENCES AND THEIR MARITAL STATUS, WHATEVER THEY CLAIMED IT WAS. IF YOU’RE NOT FAMILIAR WITH THE SITE, IT PROMISES THE ABILITY TO HOOK UP, FIND SEX OR MEET SOMEONE HOT NOW. THE SITE BOASTS 64 MILLION MEMBERS, BUT FOR NOW, THEY SAY THEY DON’T KNOW EXACTLY WHICH MEMBERS’ DATA WAS BREACHED. DO YOU LOVE YOUR TRUCK BUT DON’T KNOW HOW TO PARK IT? ADMIT IT. ADD ON A TRAILER FOR HAULING STUFF AND YOU’RE REALLY IN TROUBLE, UNLESS YOU HAVE THIS NEW FEATURE YOU CAN GET ON THE 2016 FORD FULL-SIZED PICKUP. DRIVERS JUST TURN A KNOB TO LET THE TRUCK KNOW HOW FAR AND IN WHICH DIRECTION THE TRAILER SHOULD GO, AND THE PRO TRAILER BACKUP ASSIST DOES IT FOR YOU. WHAT?! EXACTLY. THAT’S AWESOME. NOW MAYBE I CAN GET A TRUCK! AS A GUY, YOU ALWAYS LIKE THE CHALLENGE OF BACKING THE THING UP. I ONCE DID A THREE-POINT TURN WITH A TRAILER. THAT WAS INTERESTING. THIS IS COOL. THEY SAY THEY WANT TO GIVE DRIVERS MORE TIME ON THE LAKE INSTEAD OF IN THE LAKE. I WAS GOING TO SAY, I NEED

Article source: http://www.click2houston.com/news/adult-friend-finder-members-exposed-in-data-breach-fords-new-backup-assist/33165610


No Comments

Stopping Data Breaches: Whose Job Is It Anyway?

The 2015 Data Breach Investigations Report, released last month by Verizon, estimated that there were 2,122 confirmed data breaches in 2014, generating $400 million in losses. This week we learned that one attack that was not included in this count happened in June 2014, targeting CareFirst BlueCross Blue Shield, which serves 3.4 million customers in Maryland, Virginia and the District of Columbia. CareFirst only recently discovered the breach—names, birthdates, and email addresses of 1.1 million members—after putting in place new security measures.

We also found out this week that last month hackers redirected traffic from the Federal Reserve Bank of St. Louis’ research website to rogue pages. In its notice to users, the St. Louis Fed warned them that they may have been exposed to “phishing, malware and access to user names and passwords.” And Australian telecoms group Telstra said hackers gained access to the network of its Asian subsidiary Pacnet, and that it “was made aware of the breach” when its purchase of Pacnet was finalized on April 16.

To prevent the continuing loss of money, reputation, and customers, companies must make stopping cybercrime a team effort, internally and externally.  Collaboration is the essence of preventing data breaches and responding to them effectively.

I came to this conclusion after listening to a presentation by Jason Malo, a Research Director in CEB TowerGroup’s Retail Banking practice, at the 2015 CEB Financial Services Technology Summit. Malo pointed out that security should not be considered only the job responsibility of the Chief Information Security Officer (CISO). On-going collaboration across multiple internal teams and their leaders is crucial.

While the CISO plays a leadership role in discovery, mitigation and analysis of a data breach and is in charge of management and monitoring across all business lines, other teams and their respective leaders should be involved in a variety of roles in different stages of a response to a data breach. These include the CIO and CTO providing technical support and the Chief Compliance Officer, the communications team, and line of business executives taking a lead role in the disclosure stage and in enabling customers.


The last stage of the response to a data breach—empowering customers—is also the first step towards preventing more data breaches in the future. Collaborating with your customers, like collaborating internally, is crucial for minimizing the impact of a data breach and lessening the probability of being hacked again.

Malo suggests that contrary to the trend towards a “frictionless” customer experience—the idea that fraud should be detected and corrected without customer involvement—it is better to empower customers. This includes customers who are looking to take a more active role in protecting their data and those that need to be nudged to do so.

Article source: http://www.forbes.com/sites/gilpress/2015/05/22/stopping-data-breaches-whose-job-is-it-anyway/


No Comments

Security Researchers Wary of Proposed Wassenaar Rules

Professional security researchers, already concerned about proposed changes to the Computer Fraud and Abuse Act (CFAA) that include stiff penalties for what today is considered legitimate offensive research, are worried about another impending punch to the gut.

The Commerce Department’s Bureau of Industry and Security today made public its proposal to implement the controversial Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies in the U.S. In a computer security context, the agreement imposes export controls on certain dual-use technologies; for some these rules are a harkening back to the Crypto Wars of the ’90s. A 60-day comment period opens today and ends July 20.

Specifically, the BIS proposal seeks to regulate and control the export of what it calls intrusion software, providing a broad definition of such in the process, something that some researchers and experts fear could not only further chill legitimate vulnerability analysis, but also impact sales of some security software.

The newly defined term “intrusion software,” intended to implement and enforce controls on the delivery of surveillance software such as FinFisher and the tools developed by Hacking Team, also seems to encompass commercial penetration-testing tools that include encryption.

“Vulnerability research is not controlled nor would the technology related to choosing, finding, targeting, studying and testing a vulnerability be controlled,” said Randy Wheeler, director of the BIS, today during a conference call. “The development, testing, evaluating and productizing of an exploit or intrusion software, or of course the development of zero-day exploits for sale, is controlled.”

Experts, as well as the BIS, hope that researchers will submit comments to the proposed rule inside the 60-day window.

The European Parliament implemented new language in Wassenaar, presented in December 2013, to stem the use by governments of targeted surveillance malware to spy on activists, journalists and others, which they said was a violation of their human rights.

“This was perhaps a way to stop companies like FinFisher and Hacking Team from being able to export targeted surveillance software to governments like Bahrain, which does not seem unreasonable to me,” said Electronic Frontier Foundation global policy analyst Eva Galperin. “But one of the things they did was write the language messily and broadly, and open to troublesome interpretation. It’s important to tread carefully.

“One of the biggest problems is that people who are writing this language are not security researchers and likely have a limited understanding of how security research is conducted and how threats and exploits are shared,” Galperin said. “This is why they have a comment period. What I would like the security community to understand is that this is the junction to step in and set them straight.”

Intrusion software, defined in Wassenaar, is:

“Software ‘specially designed’ or modified to avoid detection by ‘monitoring tools,’ or to defeat ‘protective countermeasures,’ of a computer or network-capable device, and performing any of the following:

(a) The extraction of data or information, from a computer or network-capable device, or the modification of system or user data; or

(b) The modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.”

The proposed rules also identify network penetration testing products as intrusion software, especially those currently classified as encryption products.

“The definition is too broad. It includes the fundamental components of all vulnerability research in the definition, and will hinder the sharing and publication of important security research,” said Katie Moussouris, chief policy officer at HackerOne and former senior security strategist at Microsoft, where she was instrumental in developing the company’s numerous vulnerability bounties and awards for defensive technologies.

“The intent here is to regulate surveillance software, like FinFisher. Instead of focusing on data exfiltration, which is what FinFisher and other software like it does to the victim, these proposed definitions erroneously focus on the ‘intrusion’ piece,” Moussouris said. “That’s where it veers sharply off target, and onto controlling the wrong technology.”

Wheeler said BIS hopes to see particular comments on the impact on vendors due to the licensing burden that would accompany such controls. Also, within scope for comments is the impact on legitimate vulnerability research and software audits, Wheeler said.

“Vendors who make software that fall under these broad definitions will have additional overhead in applying for export licenses, potentially creating a trade disadvantage for US-based companies dealing with the burden of compliance,” said Moussouris. “This will favor larger companies who can absorb the overhead, also possibly affecting market competition and ultimately, innovation in US security technology could suffer.”

This is the second time this year that researchers are facing legislative and regulatory threats to legitimate offensive research.

In January, the Obama administration, in response to the damaging Sony hack and massive Target and Home Depot data breaches of late 2013 and 2014, turned its attention to the CFAA. Proposed amendments redefined what it means to exceed authorized access to a system, adding vagaries to the language that would put legitimate research in the crosshairs, while expanding its scope.

“Exceeds authorized access means to access a computer with authorization and to use such access to obtain or alter information in the computer (a) that the accesser is not entitled to obtain or alter; or (b) for a purpose that the accesser knows is not authorized by the computer owner.”

In addition, the CFAA amended its punishments, with stiffer penalties for those convicted of hacking, doubling some sentences while elevating other offenses to felonies.

“Researchers are already discouraged from discussing their tools and vulnerability research by existing laws like CFAA and DMCA in the U.S.,” Moussouris said. “The additional requirement of applying for an export license and having to share source code during the application process will discourage them further.”

The EFF’s Galperin, however, hopes that researchers pump the brakes on some of the early consternation.

“Some of the misconceptions come from a lack of understanding of what Wassenaar is, what it does and how it’s implemented,” she said. “A lot of people see these proposals and assume that now it’s law. At the end of 2013, Wassenaar made changes to the language that include limitations on and licensing requirements on the export of certain types of surveillance and intrusion equipment. It’s possible that language was not entirely clear, so obviously the security industry went wild and said it’s illegal to export exploits, that we are doomed. That’s absolutely not the case.

“Every country that signed on to Wassenaar (the U.S. included) had to implement this language in a way it felt the language was meant to be implemented. Now with the U.S., this is what the Department of Commerce thinks Wassenaar means, and this is how it proposes to change export rules to be line with what Wassenaar says.”

Article source: https://threatpost.com/security-researchers-wary-of-proposed-wassenaar-rules/112937

No Comments

Security Questions Not So Secure

The Internet knows a lot about you, including your mother’s maiden name, your favorite food, and what street your first pet grew up on. And, according to some new research from Google, attackers have a good chance of figuring those things out pretty easily, too.

The security questions that Google and other companies ask users as part of account-recovery operations are seen by both security experts and users as more of an annoyance than a safeguard. Some of the information in the answers to these questions is relatively easy to find, through social media profiles and other places. And some of it is fairly easy to guess.

Google researchers put together a new paper that illustrates just how easy this process is for attackers, and by extension, the limited value of security questions. For example, Google found that with just one attempt an attacker could guess an English-speaking user’s favorite food 19.7 percent of the time. Within 10 attempts an attacker would have a 43 percent chance of guessing a Korean-speaking user’s favorite food.

Google’s research is based on hundreds of millions of security questions answered by users during the course of millions of account-recovery attempts, and what the researchers found is that questions with easy-to-remember answers aren’t secure and questions with difficult-to-remember answers aren’t useful. The company also discovered that some tactics users employ to make their answers more difficult for attackers to guess aren’t effective.

“Many different users also had identical answers to secret questions that we’d normally expect to be highly secure, such as ‘What’s your phone number?’ or ‘What’s your frequent flyer number?’. We dug into this further and found that 37% of people intentionally provide false answers to their questions thinking this will make them harder to guess. However, this ends up backfiring because people choose the same (false) answers, and actually increase the likelihood that an attacker can break in,” Elie Bursztein, Google’s Anti-Abuse Research Lead, and Ilan Caron, software engineer, wrote in an analysis of the data the research produced.

The company’s research also revealed that 40 percent of English-speaking users couldn’t remember their secret question’s answer when they needed to. People aren’t great at this kind of thing, and adding more complexity to the process only makes things worse.

“According to our data, the ‘easiest’ question and answer is ‘What city were you born in?’—users recall this answer more than 79% of the time. The second easiest example is ‘What is your father’s middle name?’, remembered by users 74% of the time. If an attacker had ten guesses, they’d have a 6.9% and 14.6% chance of guessing correct answers for these questions, respectively,” the Google analysis says.

“But, when users had to answer both together, the spread between the security and usability of secret questions becomes increasingly stark. The probability that an attacker could get both answers in ten guesses is 1%, but users will recall both answers only 59% of the time. Piling on more secret questions makes it more difficult for users to recover their accounts and is not a good solution, as a result.”
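The arithmetic behind those combined figures can be checked directly. A minimal sketch, assuming the two questions are answered (and guessed) independently; correlated answers would only make the attacker's job easier:

```python
# Google's reported per-question rates for ten attacker guesses,
# and per-question user recall rates.
p_city = 0.069      # attacker success: "What city were you born in?"
p_middle = 0.146    # attacker success: "What is your father's middle name?"
r_city = 0.79       # user recall: city question
r_middle = 0.74     # user recall: middle-name question

# Under independence, combined probabilities are simple products.
p_attacker_both = p_city * p_middle   # chance the attacker gets both in 10 guesses
p_user_both = r_city * r_middle       # chance the user recalls both answers

print(f"attacker guesses both: {p_attacker_both:.1%}")
print(f"user recalls both:     {p_user_both:.1%}")
```

The products come out to about 1.0% and 58.5%, matching the article's 1% and (allowing for rounding in the published figures) 59%: stacking questions hurts legitimate users far more than it hurts attackers.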

Some Web services are moving to the use of one-time codes sent via text as a part of the account-recovery process, which is a smoother and easier method.

Article source: https://threatpost.com/security-questions-not-so-secure/112949

No Comments

1.1 Million Affected by CareFirst BlueCross BlueShield Breach

CareFirst BlueCross BlueShield announced yesterday that attackers gained access to a single company database containing the sensitive and personal information of more than a million of its current and former health insurance customers.

BlueCross BlueShield (BCBS) is a federation of health insurance providers serving nearly one-third of the U.S. population. CareFirst is the mid-Atlantic subsidiary of BCBS, delivering health insurance to customers in the District of Columbia, Maryland and Virginia.

In an effort to downplay the attack, CareFirst CEO Chet Burrell and other spokespersons are claiming that Social Security numbers, medical claims, employment, payment card and financial information were not exposed in the breach. However, the database did contain member-created user names, names, birth dates, email addresses and subscriber identification numbers. The breach did not expose passwords, which were both encrypted and stored on a separate server.

Trent Telford, CEO of data security firm Covata, told Threatpost in an email that it’s not always clear why an attacker might want to steal certain information, like names and addresses and usernames, but that doesn’t mean these sorts of data don’t hold value.

“If a company holds personal information on behalf of its customers, partners and employees it is its responsibility to encrypt it and remove the inherent value of this data for thieves and malicious actors,” Telford said. “It is encouraging in the case of CareFirst BlueCross BlueShield that some of its valuable customer data is safe because it is encrypted. The more companies encrypt their customer data, the less they are going to be targets for attacks.”

CareFirst claims it initially detected the attack but incorrectly believed it had contained the attack and prevented the attackers from accessing any information. It only became aware of the full scope of the attack after hiring an incident response firm to perform a network analysis, partly because of a recent spate of cyberattacks targeting similar healthcare companies. The company determined on April 21, 2015, that there was an intrusion of CareFirst’s systems and that it occurred on June 19, 2014. As is the industry standard, CareFirst is offering affected customers two years of free credit monitoring services.

CareFirst is not responding to requests for specific details about the breach, as the incident is part of an ongoing FBI investigation.

CareFirst is in the process of contacting affected customers. Only those customers who registered an online account with CareFirst before June 20, 2014, would have been impacted by the breach. Affected customers will receive an email or an unsolicited phone call with a code redeemable for two years of free credit monitoring. They will also be forced to reset the passwords to their online accounts.

Article source: https://threatpost.com/1-1-million-affected-by-carefirst-bluecross-blueshield-breach/112951

No Comments

Head-Scratching Begins on Proposed Wassenaar Export Control Rules

Two things worth noting from yesterday’s unveiling of the Bureau of Industry and Security’s proposed Wassenaar rules for the U.S. that weren’t so overt: a) The U.S. generally leads the way in implementing Wassenaar changes, and this time it’s been beaten by the EU by almost 18 months; and b) requests for comments, such as the 60-day period that opened yesterday, are uncommon.

“I think this means [BIS] had trouble understanding fully the scope and understanding potentially negative repercussions for overregulating,” said Collin Anderson, an independent security researcher who has spent many hours studying the Wassenaar controls. “I think it means they’re still trying to figure out what to do with this rule.”

BIS, a bureau under the U.S. Commerce Department, published rules yesterday that left some scratching their heads in confusion, and others scurrying for cover because of its potential implications on legitimate vulnerability research and exploit development, and the use of commercial penetration testing and other dual-use technologies.

What has experts such as Anderson concerned is the rules’ broad definition of “intrusion software,” which is at the center of the document. As defined in the rules, intrusion software is:

“Software ‘specially designed’ or modified to avoid detection by ‘monitoring tools,’ or to defeat ‘protective countermeasures,’ of a computer or network-capable device, and performing any of the following:

(a) The extraction of data or information, from a computer or network-capable device, or the modification of system or user data; or

(b) The modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.”

Anderson said this definition paves the way for an expansion of the rules as implemented by the EU, and beyond their original intent of imposing export controls and licensing requirements on spying software such as FinFisher and Hacking Team, which reportedly has been sold to and used by oppressive regimes around the world. Previous language in the rules protected some off-the-shelf commercial malware, and dual-use tools available to researchers, from export controls. Anderson’s interpretation is that’s no longer the case.

“What it looks like is that the rules are not going to provide that exception; now everything except open source is controlled,” Anderson said. “That was not anticipated. For them to consider zero days and exploits as commodities and as controlled, was not expected, and something I’ve argued against in two papers. That’s an expansion.”

Randy Wheeler, director of BIS, confirmed during a teleconference yesterday afternoon that the development, testing, evaluating and productizing of exploits, zero days and intrusion software would be controlled, but the same did not apply to vulnerability research.

“Vulnerability research is not controlled nor would the technology related to choosing, finding, targeting, studying and testing a vulnerability be controlled,” she said.

With the devil buried in many of the details, Anderson said it’s important to note that no discussions have been had on the issue of export controls on exploits and zero days, nor how they’re interpreted in the rules. He hopes that the security community will take full advantage of the two-month comment period, and contribute to the process. Anderson said he anticipates that he, other researchers, and civil organizations will make themselves available to collect comments from researchers, and submit a larger comment on behalf of the many.

“There’s a large topic covered here, and it’s problematic,” Anderson said. “Is the sale of a vulnerability in a bug bounty considered a control? It’s a problem since that’s a major source of security research and funding for security research globally. These sorts of things, we don’t have a real solid answer for and that’s problematic.”

Anderson also wonders how the transfer of knowledge applies under Wassenaar. For example, would Iranian engineers and security researchers be excluded from projects under the rules? “That’s less likely,” Anderson said, “but there’s no answer to that, especially when it’s considered a subset of a completely separate problem.”

Runa A. Sandvik, a privacy and security researcher, said that this implementation of Wassenaar would put export control authorities in a position where they would be the ones directing and driving security research.

“The broad definition of intrusion software could mean that we end up with control of commonplace research, as opposed to the technologies the [Wassenaar Arrangement] set out to control originally,” Sandvik said. “This attempt to define which technologies to regulate reminds me of the whitelist/blacklist approach in computer security.”

Dual-use technologies such as Metasploit from Rapid7 and Core Security’s pen-testing tools would also have to be sorted out (a request for comment from Rapid7 was not returned in time for publication).

“A hammer, for example, is a tool that can be used for both good and bad. If the authorities do not fully understand every single use case of said hammer, they will fail to properly regulate its use. Not to mention the fact that use cases change and new ones develop; who knows how people will use a hammer ten, twenty or thirty years from now,” Sandvik said. “BIS has said that defensive pen-testing tools could be re-appropriated for offensive purposes and therefore be in line with control, but this would likely require researchers to go through the whole process again to get these tools approved as ‘good tools.’

“If this passes, we may find ourselves in a future where revisions attempt to expand the already broad definitions. This seems like a slippery slope.”

Article source: https://threatpost.com/head-scratching-begins-on-proposed-wassenaar-export-control-rules/112959

No Comments

Charter Communications Fixes Website Data Leak Vulnerability

Internet-cable-television provider Charter Communications recently fixed an issue with its website that was inadvertently leaking the information of tens of thousands of customers.

Customers’ payment details, modem serial numbers, device names, account numbers, and home addresses were being spilled from the company’s site, charter.com, according to researchers.

Eric Taylor, CTO at the startup firm Cinder, and fellow researcher Blake Welsh stumbled upon the vulnerability. The two previously discovered a similar bug in Verizon’s online system, where customers’ user IDs, phone numbers, and device names were being exposed.

The issue with Verizon’s site was reported by Buzzfeed and ultimately closed by the telecom giant last week.

According to Fast Company, which interviewed Taylor about the issue, the problem stems from the fact that the cable company identified customers by their IP addresses. Using a Firefox add-on, X-Forwarded-For Header, an attacker could apparently present a Charter customer’s IP address as their own. Since Charter keyed customers’ account details to their IP addresses, an attacker could easily glean that information.
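The class of bug described above can be sketched in a few lines. This is an illustrative reconstruction, not Charter's actual code; the account data and function names are invented. The mistake is keying a lookup off a client-supplied X-Forwarded-For header instead of the transport-level peer address:

```python
# Hypothetical account store keyed by IP address (example data only).
ACCOUNTS = {"203.0.113.7": {"name": "J. Doe", "account": "12345"}}

def lookup_vulnerable(headers, remote_addr):
    # BUG: X-Forwarded-For is set by the client and trivially spoofed,
    # so anyone can claim to be any IP address.
    ip = headers.get("X-Forwarded-For", remote_addr)
    return ACCOUNTS.get(ip)

def lookup_safer(headers, remote_addr):
    # Identify the customer by the address the connection actually came from.
    return ACCOUNTS.get(remote_addr)

# An attacker connecting from 198.51.100.9 claims the victim's IP:
spoofed = {"X-Forwarded-For": "203.0.113.7"}
print(lookup_vulnerable(spoofed, "198.51.100.9"))  # leaks the victim's record
print(lookup_safer(spoofed, "198.51.100.9"))       # None
```

X-Forwarded-For is only trustworthy when appended by a proxy the server itself controls; treating it as a client identity, as the sketch's vulnerable path does, hands authentication to the attacker.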

According to Taylor, the technique is worse than the Verizon bug he helped dig up and could even be automated on a larger scale.

“In theory, anyone with minor programming skills could code an automated program that scans every Charter IP and returns the customers billing info,” Taylor told Fast Company.

Charter, based in Stamford, Conn., offers services to nearly 6 million customers across 29 states, including Texas, California, and New York.

Taylor points out that by using a Charter customers’ IP address, an attacker could make a header modification, visit a Charter URL and then claim they forgot their username. The attacker will then be prompted to create a new one and after going through the motions, the attacker could eventually learn the name, address, and username of the customer associated with the IP address.

In fact, by trying to create a username on the site, a form pops up, complete with the user’s last name and home address. Taylor claims an attacker could apparently learn even more about the user through API links or by accessing the site’s source code.

While Charter has fixed the issue – Fast Company notified the company and it was fixed within several hours – a spokesperson with Charter claims the number of customers that were actually affected by the issue was fewer than one million and that most of its users use a different version of the site.

“The vast majority of Charter customers use a version of the site on which this security vulnerability was not an issue,” the spokesperson told Fast Company, adding that it was continuing to look into the issue but that it has “seen no evidence of any password or data hacks.”

Article source: https://threatpost.com/charter-communications-fixes-website-data-leak-vulnerability/112962

No Comments

Ersatz Scheme Deceives Hackers, Protects Stored Passwords

Researchers at Purdue University have developed a scheme that protects stolen passwords from offline cracking.

The project is explained in a paper called “ErsatzPasswords – Ending Password Cracking” (pdf) written by Purdue University researchers Mohammed H. Almeshekah, Christopher N. Gutierrez, Mikhail J. Atallah and security pioneer Eugene H. Spafford.

Similar in theory to the Honeywords Project, developed by Ari Juels and Ron Rivest at MIT, Ersatz Passwords instead present the attacker with a long list of phony passwords, and simultaneously trigger an alert within the system notifying admins of an attempted cracking.

The paper explains that the process of computing the real password hash would require an attacker to have access to a hardware security module resident in the authentication server. That dependency makes offline cracking almost impossible. The presentation of the phony passwords is unlike Honeywords, which mixes a list of phony passwords alongside the real ones in a database; in the Ersatz scheme, the real passwords are never available to the hacker.

The researchers said that a system-side initialization of the scheme involves the application of a hardware-dependent function that is applied to each stored hash and fed to the same hash function with the original salt.

“After that, the output is stored in the password file replacing the old stored value,” the researchers wrote. “If an adversary obtains this file and tries to crack any user passwords, the probability that he will get any apparent match is negligible, even if a user password is from a standard dictionary.”

The researchers assert that this puts a serious dent in the effectiveness of offline cracking tools such as John the Ripper. The attacker would, as a result, need access to the hardware in order to properly access the correct hashes.

“An adversary with knowledge of the scheme cannot distinguish between a password file that was computed using our scheme or using the traditional scheme. Even under a stronger assumption, where the adversary knows that the file has been computed using the new scheme, the attacker gains no advantage as he cannot crack the user passwords without access to hardware used to compute the function HDF,” the researchers wrote. “In the case where the attacker is an insider, any extensive use of the HDF can be easily noticed with a clear spike in API usage.”
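The construction described above can be sketched in miniature. This is a toy illustration of the idea, not the paper's implementation: HMAC under a machine-local key stands in for the hardware-dependent function (HDF), which in the real scheme lives inside a hardware security module, and the generation of ersatz (fake) passwords is omitted.

```python
import hashlib
import hmac

# Stand-in for the key sealed inside the HSM; it never leaves the
# hardware in the real scheme. All names here are illustrative.
MACHINE_KEY = b"key-sealed-inside-the-hsm"

def H(data, salt):
    # Ordinary salted hash, as in a traditional password file.
    return hashlib.sha256(salt + data).digest()

def hdf(digest):
    # Hardware-dependent function: only computable with the machine key.
    return hmac.new(MACHINE_KEY, digest, hashlib.sha256).digest()

def store(password, salt):
    # Initialization: apply the HDF to the stored hash, then feed the
    # result back through the same hash function with the original salt.
    return H(hdf(H(password, salt)), salt)

def verify(password, salt, stored):
    # Online verification on the authentication server, which has the HDF.
    return hmac.compare_digest(store(password, salt), stored)

salt = b"s1"
stored = store(b"hunter2", salt)
print(verify(b"hunter2", salt, stored))   # server-side check succeeds
print(H(b"hunter2", salt) == stored)      # offline guess never matches
```

An offline cracker computes `H(guess, salt)` for each candidate, but without the machine key it can never reproduce the stored value, even for the correct password; that is what removes tools like John the Ripper from the picture.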

Almeshekah said the project was motivated by the continuous string of breaches involving leaked hashed password files, and ongoing frustration with users’ reliance on weak passwords and ineffective policies.

“Our work enhances the security of password files, and by extension the security of users’ accounts. We eliminate the possibility of offline password cracking and, at the same time, deceive attackers when they try to crack stolen files by presenting them with fake passwords,” Almeshekah said.

The paper explains as well several methods by which the phony passwords are generated within the scheme, as well as the plausibility that the stolen passwords could still be cracked. Almeshekah said all the source code for the project is available on GitHub.

“Our implementation can easily be integrated into production systems without continuous need of monitoring. A one-time change would alter the password files in the OS to be machine-dependent and render their cracking impossible, while presenting attackers with fake passwords. Another side benefit of our scheme is that it can disrupt the underground market for stolen passwords,” Almeshekah said. “Adversaries will perceive an additional risk of using cracked passwords as they know of the existence of such a scheme. This will add risk on their side and, hopefully, reduce the value of stolen passwords on the black market.”

Article source: https://threatpost.com/ersatz-scheme-deceives-hackers-protects-stored-passwords/112973

No Comments

Shoddy Android Factory Reset Exposes Private Data, Encryption Keys

The churn of Android devices, whether older smartphones being traded in or sold online, makes device sanitization imperative. The native feature in the OS, however, may not be doing as thorough a job as advertised.

A paper, “Security Analysis of Android Factory Resets” (pdf), published by Ross Anderson and Laurent Simon of the University of Cambridge in the U.K., pulls back the curtain on the incompleteness of Android Factory Reset, leaving as many as half a billion devices exposed to data recovery, including credential theft and exposure of personal emails and chats. Another 630 million devices, Anderson and Simon said, are likely not properly erasing the internal SD card where multimedia files are stored.

The researchers studied the behavior of 21 Android smartphones from five vendors running versions of the OS from Froyo (2.2) through Jelly Bean (4.3). Most flagrant was the recovery of Google credentials from all devices with the flawed reset option, Anderson and Simon said, putting backed-up data at risk as well as access to other services. They added that even the use of full-disk encryption on a device doesn’t completely mitigate the issue, because the shoddy reset leaves behind enough of the encryption key that it is recoverable.

Anderson and Simon said the economic trickle-down effects, and the accountability foisted onto vendors, are real.

“So data sanitization problems have the potential to disrupt market growth. If users fear for their data, they may stop trading their old devices, and buy fewer new ones; or they may continue to upgrade, but be reluctant to adopt sensitive services like banking or healthcare apps, thereby slowing down innovation,” the researchers wrote. “Last but not least, phone vendors may be held accountable under consumer protection or data protection laws.”

The failures, they said, range from a lack of OS support for proper deletion of the data partition in flash memory on older 2.3 devices, to incomplete upgrades pushed to flawed devices by vendors, a lack of driver support for proper deletion on newer devices, a lack of Android support for sanitizing internal and external SD cards on newer devices, and the inability of full-disk encryption in newer versions of the OS to compensate.

“When removing a file, an OS typically only deletes its name from a table, rather than deleting its content,” the researchers wrote. “The situation is aggravated on flash memory because data update does not occur in place, i.e. data are copied to a new block to preserve performance, reduce the erasure block count and slow down the wear.”
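That behavior is easy to demonstrate with a toy model of a file table, where “deleting” only drops the name while the underlying blocks keep their contents. `TinyFS` is a hypothetical illustration, not Android's actual filesystem code:

```python
class TinyFS:
    """Toy illustration: 'deleting' a file removes its name from the
    table, but the raw block contents stay behind for a forensic scan."""

    def __init__(self):
        self.table = {}   # filename -> block index
        self.blocks = []  # raw contents, never overwritten on delete

    def write(self, name: str, data: bytes) -> None:
        self.table[name] = len(self.blocks)
        self.blocks.append(data)

    def delete(self, name: str) -> None:
        del self.table[name]  # content in self.blocks is untouched

fs = TinyFS()
fs.write("secret.txt", b"credentials")
fs.delete("secret.txt")
# A scan of the raw blocks still finds the "deleted" data:
assert b"credentials" in b"".join(fs.blocks)
```

On flash, the copy-on-write behavior the researchers describe makes this worse: stale copies of the data linger in old blocks even after an update.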

The best level of sanitization, Anderson and Simon said, would be Analog or Digital sanitization, which makes data reconstruction impossible even with a firmware bypass. In most cases, however, Android settles for Logical sanitization, which erases flash blocks via standard hardware interfaces such as eMMC, or via the ioctl system call in the Linux kernel. Android’s Logical implementations, however, are incomplete, the researchers found, putting at risk data stored in the data partition (the application private directories where Google and third-party credentials are stored), the internal SD card (which stores multimedia files), and the external SD card (which can be physically removed and behaves similarly to the internal card).

The results, the researchers said, were not pretty. Data partition sanitization degraded over time: in older versions such as Froyo (2.2), sanitization was logical, using the yaffs2 file system, which ensured that a partition could not be reformatted without proper sanitization; Froyo also used ioctl’s MEMERASE command for digital sanitization. When Gingerbread rolled around, eMMC had replaced yaffs2 and partitions could be reformatted without being sanitized, Anderson and Simon said. In those cases, 90 percent of the data partition was sanitized insecurely, with at most a few hundred megabytes deleted. Only the HTC Wildfire S passed muster, since it stuck with yaffs2, they said.

“We verified that the phone binaries indeed contained the newest code from AOSP, i.e. with logical sanitization support. We then turned our attention to lower level code, and found that vendor upgrades likely omitted device drivers necessary to expose the logical sanitization functionality from the underlying eMMC,” the researchers wrote. “In practice, this means that the secure command BLKSECDISCARD is not supported by ioctl.”
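From user space, that secure command corresponds to the BLKSECDISCARD ioctl on a block device. A minimal sketch of issuing it, assuming root access and an open device descriptor, with the ioctl number taken from <linux/fs.h>, might look like:

```python
import fcntl
import struct

# From <linux/fs.h>: #define BLKSECDISCARD _IO(0x12, 125)
BLKSECDISCARD = (0x12 << 8) | 125

def secure_discard(fd: int, offset: int, length: int) -> None:
    """Ask the block driver to securely discard a byte range. Raises
    OSError (e.g. EOPNOTSUPP) when the driver lacks support -- the exact
    gap the researchers observed on vendor-upgraded devices."""
    # The ioctl argument is a pair of 64-bit values: {offset, length}.
    fcntl.ioctl(fd, BLKSECDISCARD, struct.pack("QQ", offset, length))
```

When the vendor kernel omits the eMMC driver hooks, this call fails, and the OS silently falls back to the incomplete reset the paper documents.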

As for the primary, or internal, SD card, it has never been logically sanitized, putting up to 340 million devices at risk, while the researchers said Android doesn’t even attempt to sanitize the external SD card at all.

Anderson and Simon said they used SQLite file-carving to recover multimedia files from the partitions, and pattern-matching techniques to recover the remaining data, since it adheres to rigid and distinct file formats.

“For example, we recovered some ‘Conversations’ (SMSes, emails, and/or chats from messaging apps) in all devices using pattern matching. Compromising conversations could be used to blackmail victims,” the researchers wrote. “Gmail app emails were stored compressed. By searching for relevant headers, we were able to locate candidates and then decompress them. We found emails in 80 percent of our sample devices, but generally only a few per device.”
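The compressed-email recovery the researchers describe can be approximated by scanning a raw dump for the gzip magic bytes and attempting to inflate each candidate. The function below is a simplified sketch of that carving step, not the researchers' tooling:

```python
import gzip
import zlib

GZIP_MAGIC = b"\x1f\x8b\x08"  # gzip header with DEFLATE compression method

def carve_gzip(image: bytes) -> list[bytes]:
    """Scan a raw flash dump for gzip streams and try to inflate each
    candidate; bogus matches inside other data simply fail to decompress."""
    recovered = []
    pos = image.find(GZIP_MAGIC)
    while pos != -1:
        inflater = zlib.decompressobj(wbits=31)  # expect a gzip wrapper
        try:
            data = inflater.decompress(image[pos:])
            if data:
                recovered.append(data)
        except zlib.error:
            pass  # false positive: magic bytes inside unrelated data
        pos = image.find(GZIP_MAGIC, pos + 1)
    return recovered
```

Against a dump where compressed records survive a shoddy reset, this kind of scan is enough to pull readable email bodies back out of “erased” flash.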

Article source: https://threatpost.com/shoddy-android-factory-reset-exposes-private-data-encryption-keys/112979

No Comments

eBay Fixes Reflected File Download Flaw

Article source: https://threatpost.com/ebay-fixes-reflected-file-download-flaw/112983

No Comments