Archive for September, 2013

How to mitigate risk associated with a customer’s potential data breach

When it comes to managed and cloud services, regardless of who you talk to, security is a touchy
subject. No one denies the importance of a good defense, but no one wants to accept the risk of a
breach, either. Service providers can adopt best practices to mitigate risk, but industry experts
say there’s no way to completely eliminate the risk associated with a data breach.

“The customer of managed services and cloud wants to put as much risk away from themselves and
onto the service provider as possible. The service provider wants to transfer or avoid as much risk
as possible, but they also want the sale,” said Charles Weaver, CEO of Chico, Calif.-based MSPAlliance,
an industry association and certification body for managed services and cloud computing
professionals. In essence, service providers and their customers are at direct odds with each
other.

To further complicate matters, experts agree a breach
is inevitable. “In security, if you’re the good guy, you’re playing defense. If you play defense
long enough, you lose. Something is going to get you,” said Benson Yeung, senior partner and
founder of Triware Networld Systems LLC, a
full-service consulting and integration company based in Santa Clara, Calif.


“We can’t accept the liability, and the reason why is it’s impossible. If you think about
technology today, let’s say the Windows OS, you can’t go another month without getting another
patch,” said Steven Reese, CTO of
Ontario, Calif.-based Sigmanet Inc., a
value-added reseller and IT consultancy offering a variety of solutions and managed services.

Reese continued: “You have to think about risk in two veins: the likelihood of a breach and the
impact of a breach. No matter what I do, I can’t change the impact. If a credit card number is
stolen, it’s stolen. The impact to the customer is the same no matter what I do. From our
perspective, the best I can do in this industry is reduce the likelihood of a breach —
implementing multiple layers of security, taking a defensive posture, applying good policy.
Doing that helps curb a lot of that problem we run into with regards to security
breaches, but in order to fully insure somebody, it would be an impossibility.”

Yeung addresses the problem head-on with new clients. “I always like to put myself as well as
clients in the mindset that at some point in time they’re going to get hacked. I start with the
worst case scenario: What is the worst that can happen if we get hacked today?” Yeung said. This
discussion allows Yeung to identify the client’s most critical assets. However, he doesn’t focus
solely on shoring up defenses. He also addresses incident response. “To me, recovery is more
important than defense. We all do defense, but how will you recover when the defense fails?”

The value of contracts

Unlike most service providers, Yeung doesn’t take any action to limit his company’s liability
should a customer be breached. He doesn’t like contracts, and he doesn’t have insurance,
he said. Instead, he builds relationships with his clients based on trust. “It doesn’t matter
what’s written down. If they don’t trust you, it doesn’t matter. I wouldn’t take on any project
where I wasn’t trusted 100 percent. The entire industry is built on trust. Whenever there is no
trust, it’s just not going to work,” he said.

Robby Hill, founder and CEO of
Florence, S.C.-based HillSouth, an IT
consulting firm offering managed services and integration services to small and medium-sized
businesses, said his company has contracts updated every few years with an attorney, and they
include a standard indemnity clause. However, the contracts provide little if any protection. “What
we found out in the healthcare sectors is that [the standard indemnity clauses] don’t apply to
healthcare data with the new regulations that [have] come down. Everyone’s responsible whether they
indemnify themselves or not,” he said.

Customer responsibility

Regardless of the type of data service providers handle, it presents some risk — but the risk
extends beyond the solution provider. “The information people give at a doctor’s office or
healthcare office is no different than what the bank might have. It’s very sensitive information
that can be used for identity theft. We are right to want to protect it, but we need to understand
that the risk spreads through the whole ecosystem, not just the provider,” Hill said.

This, of course, includes the customer’s environment. “That’s hard for the customer to
understand: that at some point there still falls a level of liability on the customer. There’s only
so much we can do before it comes down to [the fact that they] need to care for [their]
information,” Reese said.

As an example, Reese said there is little he can do if data becomes corrupt. “The reality is if
that happens, there’s no level of recovery I can provide. If it’s backed up, it’s not really lost
or corrupt. But if someone’s data goes away and there’s no backup, there’s nothing I can do.
…  Even the biggest data centers still provide the same kind of [contractual] language. They
can’t be held liable for lost or corrupt data,” he said.

Best practices to mitigate risk

Regardless of the service provider’s view of risk, there are standard best practices that can
limit the company’s liability in the case of a breach. According to Weaver, it all starts with “a
really well-prepared set of service agreements. I don’t mean some circa-1995 Webhosting contract
copied from the Internet with the other company’s name scratched off and the service provider’s own
put on there. I mean a real contract drafted or at least reviewed by an attorney who knows cloud
and managed services,” Weaver said. “A well-crafted agreement should reflect accurately what the
service provider is capable and willing to do in terms of risk.”

Tips to mitigate risk related to a customer data breach

1. Hire a lawyer to draft or review a well-developed service contract, keeping in mind that some
U.S. regulations, such as in the healthcare field, override indemnification that is usually
possible through contractual agreements.

2. Buy insurance to cover events that might not be foreseen in the contract.

3. Employ individuals with technical certifications in the areas they are working in.

4. Get your business certified by a certification body like MSPAlliance or Smithers Quality
Assessments.

5. Get your business audited by an independent third party.

“That will tell the customer what the MSP is willing to do and not do. It will tell the MSP that
if [they’re] saying something that doesn’t jibe with what’s in the agreement, then [they] have a
problem. It shows them where they’re skirting the boundaries of their own comfort zone,” Weaver
said.

Weaver also advises service providers to purchase insurance. “It’s a stopgap. It will handle the
overflow or whatever the contract did not cover,” Weaver said. “It will help catch the stuff that
happens in the relationship but that may not have been anticipated. … Insurance plus a good service
agreement will give a lot of protection to the customer and the service provider,” he said.

Finally, Weaver said that service providers need to demonstrate that they are qualified to
deliver the services they are selling. “You’ve provided the agreement that will provide a legally
binding relationship. Insurance will provide financial assurance to the customer and the MSP, but …
how do you prove what you’re doing? That’s where audit and certification come in,” Weaver said.
This means having individuals certified in the areas important to the customer as well as having
the service provider’s company certified and audited.

“A benefit to the MSP is being able to communicate and prove very quickly what you can do for a
customer. A lot [of service providers] just rely on sales and marketing to build that trust. A
well-certified bench of technicians in a certain area as well as having the company certified is
good for proving fairly quickly that you are who you say you are and that you’re capable of doing
the work,” Weaver said.

Being certified can also have an immediate financial benefit. Weaver said Lloyd’s of London, the
MSPAlliance’s insurance policy carrier, gives MSPAlliance member customers who are certified and
audited a percentage back. “If they have insurance and are certified, they’ll see a lower premium,”
Weaver said.

“When working with insurance companies that understand technology solutions providers, you’ll
start to see all these questions related to ‘Do you do this? Do you do that? Are you implementing
these procedures?’ I take that seriously. That’s what the insurance company wants us to have,” Hill
said. “Any time carriers have suggestions we can take to reduce risk, we take them.”

“Our industry is an industry where we take on a lot of risks. We just have to know up front what
we’re doing and take steps every day to mitigate the risks that we take. Most solutions providers
have access to confidential data on any given day from a client. That’s the nature of our business.
Everybody in our industry should take that seriously and put the right tools and processes in place
to protect the information your customers trust you with,” Hill said.




This was first published in September 2013

Article source: http://searchitchannel.techtarget.com/feature/How-to-mitigate-risk-associated-with-a-customers-potential-data-breach


Mother of All Data Breaches Shows Need for Layered Security


An identity theft service has hacked several data broker behemoths, according to a seven-month investigation by KrebsOnSecurity, and yes, it may be the mother of all hacks.

Here’s the backstory: For the past two years, SSNDOB.ms marketed itself on underground cybercrime forums as a reliable and affordable service that customers can use to look up Social Security numbers, birthdays and other personal data on any U.S. resident, Krebs reports. The price: from 50 cents to $2.50 a record and from $5 to $15 for credit and background checks. The subscription-based service accepted anonymous virtual currencies like Bitcoin and WebMoney.

Late last month, Krebs reports, network analyses uncovered that credentials SSNDOB admins used were also responsible for operating a botnet that apparently tapped into the internal systems of large data brokers. LexisNexis confirmed that it was compromised as far back as April 10. Krebs reports that a program installed on the server was designed to open an encrypted channel of communications from within LexisNexis’s internal systems to the botnet controller on the public Internet.


Five Data Brokers Breached

“Two other compromised systems were located inside the networks of Dun & Bradstreet, a Short Hills, New Jersey data aggregator that licenses information on businesses and corporations for use in credit decisions, business-to-business marketing and supply chain management,” Krebs explains. “According to the date on the files listed in the botnet administration panel, those machines were compromised at least as far back as March 27, 2013.”

According to Krebs, the fifth server compromised as part of this botnet was located at Internet addresses assigned to Kroll Background America. Kroll, which is now part of HireRight, provides employment background, drug and health screening. Altegrity owns both Kroll and HireRight. Krebs says files left behind by intruders into the company’s internal network suggest the HireRight breach extends back to at least June 2013.

“An initial analysis of the malicious bot program installed on the hacked servers reveals that it was carefully engineered to avoid detection by antivirus tools,” Krebs says. “A review of the bot malware in early September using Virustotal.com — which scrutinizes submitted files for signs of malicious behavior by scanning them with antivirus software from nearly four dozen security firms simultaneously — gave it a clean bill of health: none of the 46 top anti-malware tools on the market today detected it as malicious (as of publication, the malware is currently detected by six out of 46 anti-malware tools at Virustotal).” (continued…)
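The VirusTotal check Krebs describes can be approximated in a short script. The sketch below is hypothetical: it assumes the public VirusTotal v2 file-report endpoint and an API key you would supply yourself, and the report shape shown is abridged; only the ratio helper actually runs without network access.

```python
import json
import urllib.parse
import urllib.request

VT_REPORT_URL = "https://www.virustotal.com/vtapi/v2/file/report"

def detection_ratio(report):
    """Return (positives, total) from a VirusTotal file-report dict."""
    return report.get("positives", 0), report.get("total", 0)

def fetch_report(api_key, file_hash):
    """Query VirusTotal for an existing scan report of a file hash."""
    params = urllib.parse.urlencode({"apikey": api_key, "resource": file_hash})
    with urllib.request.urlopen(f"{VT_REPORT_URL}?{params}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Abridged example of the report shape the v2 API returns:
    sample = {"positives": 6, "total": 46, "scan_date": "2013-09-25"}
    hits, engines = detection_ratio(sample)
    print(f"{hits}/{engines} engines flagged the sample")
```

A "clean bill of health" in Krebs's terms would be a ratio of 0 out of 46, which is exactly why a zero detection count on a suspicious binary is itself a warning sign.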

 

Article source: http://www.cio-today.com/story.xhtml?story_id=132004JX9AEC


Experian Data Breach Resolution Reveals Five Common Mistakes Made When …

COSTA MESA, Calif., Sept. 30, 2013 /PRNewswire/ — A data breach is an issue that can affect any organization and National Cyber Security Awareness Month is an opportune time for organizations to start to prepare for an incident or enhance their current response plan. With experience handling thousands of breaches, Experian Data Breach Resolution is observing the commemorative month by providing key insight into how to overcome common mistakes companies experience when handling a data breach.

“While there has been great progress among businesses and institutions in data breach prevention, breaches can still occur and it’s important to execute the right steps after an incident,” said Michael Bruemmer, vice president at Experian Data Breach Resolution. “Being properly prepared doesn’t stop with having a response plan. Organizations need to practice the plan and ensure it will result in smooth execution that mitigates the negative consequences of a data breach.”

Those possible outcomes can include a loss of customers, regulatory fines and class-action lawsuits. Studies show that a majority of organizations have had or expect to have a data breach that results in the loss of customers and business partners, and more than 65% of companies have suffered or believe they will suffer serious financial consequences as a result of an incident[1]. Among companies that had breaches, the average reported cost of incidents was $9.4 million over the last 24 months. These costs are only a fraction of the average maximum financial exposure of $163 million that the companies surveyed (breached or not) believe they could suffer due to cyber incidents[2].

Experian Data Breach Resolution will present on this topic at the International Association of Privacy Professionals (IAPP) Privacy Academy, held in Bellevue, Wash., on Oct. 1, in the conference session titled “Managing the Top Five Complications in Resolving a Data Breach.” Those not in attendance can view the presentation through a live stream at http://www.ustream.tv/experiandbr and pose questions to the panelists in real time via Twitter using the hashtags #databreach and #iapp.

According to Bruemmer, three of the most common mistakes include:

— No engagement with outside counsel — Enlisting an outside attorney is highly recommended. No single federal law or regulation governs the security of all types of sensitive personal information. As a result, determining which federal law, regulation or guidance is applicable depends, in part, on the entity or sector that collected the information and the type of information collected and regulated. Unless internal resources are knowledgeable about all current laws and legislation, it is best to engage legal counsel with expertise in data breaches to help navigate this challenging landscape.

— No external agencies secured — All external partners should be in place prior to a data breach so they can be called upon immediately when a breach occurs. The process of selecting the right partner can take time, as there are different levels of service and various solutions to consider. Plus, it is important to think about the integrity and security standards of a vendor before aligning the company brand with it. Not having a forensic expert or resolution agency already identified will delay the data breach response process.

— No single decision maker — While there are several parties within an organization that should be on a data breach response team, every team needs a leader. Determine who will be the driver of the response plan and the primary contact for all external partners. Also, outline a structure of internal reporting to ensure executives and everyone on the response team are up to date and on track during a data breach.

Depending on the industry, additional oversights may involve securing proper cyber insurance and following the Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health Act (HITECH). The complete list and tips to overcome these issues will be addressed by Bruemmer at the IAPP Privacy Academy presentation.

For the Experian Data Breach Resolution schedule of presentations, visit http://www.experian.com/data-breach/events.html.

Additional data breach resources, including Webinars, white papers and videos, can be found at http://www.experian.com/databreach.

Read Experian’s blog at http://www.experian.com/dbblog.

About Experian Data Breach Resolution

Experian is a leader in the data breach resolution industry and one of the first companies to develop products and services that address this critical issue. As an innovator in the field, Experian has a long-standing history of providing swift and effective data breach resolution for thousands of organizations, having serviced millions of affected consumers. For more information on the Experian Data Breach Resolution division at ConsumerInfo.com, Inc. and how it enables organizations to plan for and successfully mitigate data breach incidents, visit http://www.experian.com/databreach.

About Experian

Experian is the leading global information services company, providing data and analytical tools to clients around the world. The Group helps businesses to manage credit risk, prevent fraud, target marketing offers and automate decision making. Experian also helps individuals to check their credit report and credit score, and protect against identity theft.

Experian plc is listed on the London Stock Exchange (EXPN) and is a constituent of the FTSE 100 index. Total revenue for the year ended March 31, 2013 was US$4.7 billion. Experian employs approximately 17,000 people in 40 countries and has its corporate headquarters in Dublin, Ireland, with operational headquarters in Nottingham, UK; California, US; and Sao Paulo, Brazil.

For more information, visit http://www.experianplc.com.

Article source: http://www.darkreading.com/vulnerability/experian-data-breach-resolution-reveals/240161991


Silent Circle Moving Away From NIST Ciphers in Wake of NSA Revelations

The first major domino to fall in the crypto world after the NSA leaks by Edward Snowden began was the decision by Lavabit, a secure email provider, to shut down in August rather than comply with a government order. Shortly thereafter, Silent Circle, another provider of secure email and other services, said it was discontinuing its Silent Mail offering, as well. Now, Silent Circle is going a step further, saying that it plans to replace the NIST-related cipher suites in its products with independently designed ones, not because the company distrusts NIST, but because its executives are worried about the NSA’s influence on NIST’s development of ciphers in the last couple of decades.

Jon Callas, one of the founders of Silent Circle and a respected cryptographer, said Monday that the company has been watching all of the developments and revelations coming out of the NSA leaks and has come to the decision that it’s in the best interest of the company and its customers to replace the AES cipher and the SHA-2 hash function and give customers other options. Those options, Callas said, will include non-NIST ciphers such as Twofish and Skein.

“At Silent Circle, we’ve been deciding what to do about the whole grand issue of whether the NSA has been subverting security. Despite all the fun that blogging about this has been, actions speak louder than words. Phil [Zimmermann], Mike [Janke], and I have discussed this and we feel we must do something. That something is that in the relatively near future, we will implement a non-NIST cipher suite,” Callas wrote in a blog post explaining the decision.

Twofish is a block cipher designed by a team led by Bruce Schneier, and it was one of the finalists in the AES competition, but lost out to the Rijndael algorithm. It has resisted cryptanalysis thus far, and Callas said it also has the advantage of being an easy replacement for AES in Silent Circle’s products. The company also will be replacing SHA-2, the NIST hash function, with Skein, which was a finalist in the recently completed SHA-3 competition.

“We are going to replace our use of the AES cipher with the Twofish cipher, as it is a drop-in replacement. We are going to replace our use of the SHA–2 hash functions with the Skein hash function. We are also examining using the Threefish cipher where that makes sense. (Full disclosure: I’m a co-author of Skein and Threefish.) Threefish is the heart of Skein, and is a tweakable, wide-block cipher. There are a lot of cool things you can do with it, but that requires some rethinking of protocols,” Callas said.
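Callas’s “drop-in replacement” point follows from the fact that AES and Twofish share a 128-bit block size and the same key sizes, so any mode of operation built over a generic 16-byte block function works with either. A minimal sketch of that cipher-agility idea, using a deliberately insecure stand-in permutation rather than a real cipher:

```python
import os

BLOCK = 16  # both AES and Twofish use 128-bit blocks

def ctr_keystream(block_encrypt, nonce, nblocks):
    """Generate a CTR-mode keystream from any 16-byte block function."""
    for counter in range(nblocks):
        yield block_encrypt(nonce + counter.to_bytes(8, "big"))

def ctr_xor(block_encrypt, nonce, data):
    """Encrypt/decrypt in CTR mode; the same call inverts itself."""
    nblocks = -(-len(data) // BLOCK)  # ceiling division
    ks = b"".join(ctr_keystream(block_encrypt, nonce, nblocks))
    return bytes(b ^ k for b, k in zip(data, ks))

# Stand-in "block cipher": NOT secure, present only so the sketch runs.
# Swapping in AES or Twofish means replacing this one callable.
def toy_block(block):
    assert len(block) == BLOCK
    return bytes((b * 167 + 13) % 256 for b in block)

nonce = os.urandom(8)
msg = b"attack at dawn"
ct = ctr_xor(toy_block, nonce, msg)
assert ctr_xor(toy_block, nonce, ct) == msg  # CTR is its own inverse
```

Because the mode never calls anything cipher-specific, swapping `toy_block` for an AES or Twofish block function changes nothing else in the protocol, which is the property Callas is relying on.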

The decision by Silent Circle comes at a time when there are many unanswered questions about the NSA‘s influence on cryptographic algorithm development, specifically those standards developed by NIST. The National Institute of Standards and Technology is responsible for developing technical standards for the U.S. federal government, and many of those standards, crypto standards in particular, are adopted by other organizations. Recent revelations from the NSA leaks have shown that the NSA has some unspecified capabilities against certain crypto algorithms and also has been working to influence NIST standards development. In response to one of these revelations, NIST itself has advised people to stop using the Dual_EC_DRBG random number generator developed under its supervision.
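For developers who want to sidestep questions about any one DRBG’s provenance, one common hedge is to draw key material straight from the operating system’s entropy pool rather than from a library-selected algorithm. A minimal Python sketch (an illustration, not Silent Circle’s code):

```python
import secrets

# Drawing key material from the OS CSPRNG avoids committing to any single
# standardized DRBG design (e.g. Dual_EC_DRBG) at the application layer.
session_key = secrets.token_bytes(32)  # 256-bit symmetric key
nonce = secrets.token_hex(12)          # hex-encoded 96-bit nonce

assert len(session_key) == 32 and len(nonce) == 24
```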

“The DUAL_EC_DRBG discussion has been comic. The major discussion has been whether this was evil or merely stupid, and arguing the side of evil has even meant admitting it is technologically a stupid algorithm, which sends the discussion into an amusing spiral of meta-commentary,” Callas said.

Silent Circle’s move away from AES and SHA-2 shouldn’t be seen as an indictment of those two ciphers, Callas said, but more of an indication that there are better options out there without the shadow of potential NSA influence hanging over them.

“This doesn’t mean we think that AES is insecure, or SHA–2 is insecure, or even that P–384 is insecure. It doesn’t mean we think less of our friends at NIST, whom we have the utmost respect for; they are victims of the NSA’s perfidy, along with the rest of the free world. For us, the spell is broken. We’re just moving on. No kiss, no tears, no farewell souvenirs,” he said.

Image from Flickr photos of Marcin Wichary



Categories: Cryptography, Government

Comments (2)

  1. John September 30, 2013 @ 11:20 am


    You guys should publish an article for developers on alternative cryptographic algorithms. If we’re NSA-paranoid, what should we use for symmetric encryption, asymmetric encryption, secure random number generation, and password hashing? I think a lot of developers went for NIST-approved algorithms because of the implied trust the crypto community placed in these. But now what do we pick?


  2. Alternatives September 30, 2013 @ 1:03 pm


    I don’t know which have had new attacks published since the competitions, but here are some options beyond AES and SHA-2:
    AES Finalists (encryption)
    *Rijndael (turned into AES)
    Serpent
    Twofish
    RC6
    MARS

    SHA-3 finalists (hashing)
    BLAKE (based on ChaCha, which is based on Salsa20)
    Grostl
    JH Function
    *Keccak (being turned into SHA-3)
    Skein (based on Threefish)

    eSTREAM (stream cipher) Round 3 software survivors:
    HC-128 (in Software 128-bit portfolio)
    Rabbit (in Software 128-bit portfolio)
    Salsa20/12 (in Software 128-bit portfolio)
    SOSEMANUK (in Software 128-bit portfolio)
    HC-256 (in Software 256-bit portfolio)
    Salsa20/12 (in Software 256-bit portfolio)
    CryptMT (Version 3)
    Dragon
    LEX
    NLS (NLSv2, encryption-only)

    NESSIE (block ciphers):
    MISTY1 (64-bit blocks)
    Camellia
    SHACAL-2

    NESSIE (hash)
    Whirlpool

    CRYPTREC (March 2013) block cipher
    Camellia

    CRYPTREC (March 2013) stream cipher
    KCipher-2

    Other ciphers:
    Threefish (related to Skein)




Article source: http://threatpost.com/silent-circle-moving-away-from-nist-ciphers-in-wake-of-nsa-revelations/102452


4th Cybersecurity Framework Workshop: Good News and Bad News

I had a chance to visit a number of industrial events this year and can see the evolution of cybersecurity in the industrial field. One of these was the 4th National Institute of Standards and Technology (NIST) Cybersecurity Framework Workshop (CFW). Kaspersky was in attendance at the previous events, but the main difference with this one was that now we had sponsors.

The 4th Workshop was another round to gather feedback on the latest version of the cybersecurity framework, published on August 28, 2013. My takeaways from this workshop (well, not too far from those of the 3rd workshop) include:

  • The Cybersecurity Framework is not about “how”; it’s about “what.”
  • The CFW is more of a marketing push for newbies and a refresher for pros.
  • There is a huge demand among industrial people to decide on the “how.”
  • Whitelisting and Default Deny are a must.

Overall, the resulting framework is not specific enough for any of the Government-specified 17 Critical Infrastructure Sectors to understand the practical steps of implementing a cybersecurity strategy, or at least the practical set of instruments (aka security controls).

For those who are not familiar, the Framework consists of five functions, categories for each of those functions, subcategories for each category; and separately, security profiles and maturity tiers.

Functions describe, in general, what your cybersecurity program should consist of: Identify, Protect, Detect, Respond and Recover. Most people agree on these functions, while some argue that Improve/Update should be explicitly added. As opposed to many other domains, security becomes obsolete quickly: while you may be secure today, in 12 months that may no longer be the case because of new attack methods.

The Categories included in the Framework are comprehensive as well. But, unfortunately, the subcategories (the full list is in the document itself; see page 14) are a mix of abstract categories, which help the reader see the domain and potential goals but leave the selection of methods open, and technical security controls that many sectors find inapplicable or incomplete. So it’s unsurprising that for the second Workshop in a row we saw the same story: whenever any workgroup started speaking about subcategories, the work stalled. Most of the participants failed to examine the entire list, and representatives of different sectors are unsatisfied with the way the subcategories are set at all – each for their own reasons.

Overall, the subcategories decided upon can be considered quite a failure. For example, the only control related to Industrial Control Systems simply says, “PR.PT-5: Manage risk to specialized systems, including operational technology (e.g., ICS, SCADA, DCS, and PLC) consistent with risk analysis”. It’s very specific, and helpful for OT people, if you know what I mean.
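The function → category → subcategory hierarchy the Framework describes can be pictured as a small data model. The sketch below is illustrative only: the two subcategory entries are paraphrased from the draft, not the full August 2013 list.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    ident: str  # e.g. "PR.PT-5"
    text: str

@dataclass
class Category:
    name: str
    subcategories: list = field(default_factory=list)

# Function -> Categories -> Subcategories, as the draft organizes them.
framework = {
    "Protect": [
        Category("Protective Technology", [
            Subcategory("PR.PT-3", "Deny-all, permit-by-exception execution"),
            Subcategory("PR.PT-5", "Manage risk to specialized systems "
                                   "(ICS, SCADA, DCS, PLC)"),
        ]),
    ],
}

def find_subcategory(ident):
    """Look up a subcategory by its identifier, e.g. 'PR.PT-5'."""
    for categories in framework.values():
        for category in categories:
            for sub in category.subcategories:
                if sub.ident == ident:
                    return sub
    return None

assert find_subcategory("PR.PT-5").text.startswith("Manage risk")
```

Modeling it this way makes the workshop’s complaint concrete: the hierarchy itself is clean, but the text inside each subcategory mixes goals with controls, and no structure fixes that.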

Apparently, it’s rather hard to define “security controls” in a universal way for different sectors – including IT and industrial systems – and this doesn’t even take smaller sectors into account. For example, financial sectors are normally believed to take care of data quite well, but the example of the Treasury was quite illustrative: all data is public, so confidentiality isn’t a major concern, but the integrity of transactions and of data on shares in peoples’ possession is a must. This is similar to the situation for industrial control systems.

My impression is that NIST has decided to leave the work on defining the exact set of subcategories and controls to individual critical infrastructure sectors.

However, this method is not good for certain sectors that depend on industrial networks: there are 9 sectors where industrial systems prevail, but their regulators and industry associations differ (DoE, DoT and so on). So it is unclear whether each sector has to do the “instantiation” of the framework on its own, and whether this should be repeated nine times with different results, given how many commonalities the sectors share due to their reliance on Industrial Control Systems.

Also, NIST will leave the Framework implementation details to each sector. One of the questions that wasn’t answered at the workshop was, “How do you implement security along this framework, or at least, what will you start with?”

One option is to remove subcategories from the framework, to make it consistent, and to try not to present universal security controls but rather make the Categories a goal-setting framework.

The Framework also includes another dimension – Profiles (what your organization needs among the variety of categories and controls – its security priorities, based on business specifics) and Tiers (how mature it is in cybersecurity). While this seems to be common sense – frameworks in different domains share basically the same approach to “flexibility” and “maturity” – in practice the CFW is rather a mess here, because it is unclear how to measure which Tier you are at and, in turn, what that Tier stands for.

So what’s the good news?

  • NIST adopted Kaspersky Lab’s whitelisting (Default Deny) approach for security for Critical Infrastructures – namely, “PR.PT-3: Implement and maintain technology that enforces policies to employ a deny-all, permit-by-exception policy to allow the execution of authorized software programs on organizational systems (aka whitelisting of applications and network traffic)”. We believe that this totally makes sense, and we are happy to know that our voice has been heard and our vision shared.
  • A major goal and major impact of the Cyber Security Framework is marketing – pushing all Critical Infrastructures, including many of those who do not yet have any cyber security programs, to start doing something, and providing more of a budget to CISOs of those who have a clear vision already. Many people suggested putting a framework in a marketing brochure to make it clear.
  • The third positive result of this workshop is that, once pushed widely, the Framework can help people from different companies better understand each other in the cybersecurity domain. This is important, as most critical infrastructures are interconnected and outsource to each other, which could produce a serious domino effect in a potential cybersecurity incident. Cybersecurity marketing efforts could be helpful in many countries for the sake of the cybersecurity of Critical Infrastructures.
  • Fourth, the sector-specific jobs of specifying the security controls and mapping the Framework to existing sector and industry standards will be done, though who will do them has not yet been identified. The Framework could become a cross-reference between different sectors’ standards and frameworks, which will also help build better understanding between entities on the technical level.
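The deny-all, permit-by-exception control quoted above (PR.PT-3) boils down to a simple rule: execute only what is on a known-good list. A toy sketch, with a hypothetical hash allowlist standing in for a real whitelisting product:

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of the binaries approved to run.
# A real deployment would populate this from a signed, managed catalog.
ALLOWED = {
    hashlib.sha256(b"approved-binary-contents").hexdigest(),
}

def may_execute(program_bytes):
    """Deny-all, permit-by-exception: run only if the hash is allowlisted."""
    return hashlib.sha256(program_bytes).hexdigest() in ALLOWED

assert may_execute(b"approved-binary-contents")
assert not may_execute(b"unknown-binary")
```

The inversion is the whole point: unlike antivirus blacklisting, anything not explicitly approved, including brand-new malware, is denied by default.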

While the Cybersecurity Framework may serve as the first step in pushing Critical Infrastructure security, the only way to actually increase protection is to make sure it is followed by step two (where do I start?) and step three (what are the best practices to follow?) for each of the Critical Infrastructure sectors. Among these sectors, 10 are industrial-centric: less experienced in IT security overall, and different in the nature of their processes (prioritizing high availability over high confidentiality).

So, the question still remains: how can we make industrial security more practical for the current threat landscape?

Kaspersky Lab is actively exploring possible options with our industrial partners.

For more details, read http://business.kaspersky.com/4th-cybersecurity-framework-workshop-marketing-is-good-but-not-for-engineers/

Article source: http://threatpost.com/4th-cybersecurity-framework-workshop-good-news-and-bad-news/102453


New Project Sonar Crowdsources Embedded Device Vulnerability Analysis

The state of embedded device security is poor, and there hasn’t been much discussion to the contrary. It’s well established that vendors skimp on security, selling, for example, routers and other networking gear protected only by default passwords, or other critical devices engineered to be accessible with a simple telnet command. These shortcuts pose an enormous risk to the infrastructure supporting those devices, leaving them open to attack. The resulting vulnerabilities can lead to data loss, network performance degradation, or, worse, put lives in danger if critical services such as water or power are impacted.

For Metasploit creator HD Moore, this was a call to action. Moore has invested serious time into examining data from previous scans of the IPv4 address space, looking for equipment exposed by shoddy default configurations and other vulnerabilities. His own Critical.io project, along with the Internet Census 2012, the Carna botnet and a host of academic and research tools that scan the Internet and return bulk data on device exposures, has done plenty to shine a harsh light on the risks these Web-facing devices pose.

But Moore believes there is plenty of room for additional analysis. He’s advanced his work by collaborating with a team of researchers at the University of Michigan on Project Sonar, a repository of scan data that has been responsibly collected by the researcher community. Moore said he hopes to engage the security community not only in analyzing the data produced by scans of public-facing networks, but also in contributing data sets. Project Sonar is being hosted by the University of Michigan at scans.io.

“We need more eyes on it because we need the shame to fall on these vendors for the terrible products they’re producing,” Moore said, adding as an example, that he’s found upwards of 10,000 command shells sitting online accessible via telnet that would give an outsider root access to the device in question. “The fact that we’ve got issues like that where there’s not even a pretense of security, yet these devices are not getting any better and in some cases we’re seeing an expansion of the vulnerable devices year over year, that was a call to action to me to make it harder for vendors to avoid the scrutiny they deserve.

“The thing is, a lot of people like to see results and like to see the tiny pictures, but not many people want to dig in and pull stuff out,” Moore said. “We’re going to try to do that: make it palatable for amateur researchers and everyday IT admins to use as a resource.”

Currently, there are five data sets hosted by Project Sonar, formally known as the Internet-Wide Scan Data Repository; the two teams used a host of tools to collect the data, including ZMap (an Internet scanner developed at UM), UDPBlast, Nmap and MASSCAN, among others. Two datasets were contributed by the University of Michigan: scans of HTTPS traffic looking for raw X.509 certificates (43 million have been included from 108 million hosts), and data from an IPv4 scan on port 443 conducted last October to measure the impact of Hurricane Sandy. Rapid7 has also contributed three data sets: service fingerprints from Moore’s Critical.IO project; a scan of IPv4 SSL services on port 443; and a regular DNS lookup for all IPv4 PTR records.
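The PTR dataset mentioned above comes from reverse DNS lookups across IPv4. The query name for an IPv4 PTR record is simply the address’s octets reversed under the in-addr.arpa zone, which is easy to construct; a small sketch (shown without issuing the actual network lookup, which the stdlib can do via socket.gethostbyaddr):

```python
def ptr_name(ipv4: str) -> str:
    """Build the in-addr.arpa name used for an IPv4 PTR (reverse DNS) query."""
    octets = ipv4.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

# A PTR query for 192.0.2.1 asks for the name 1.2.0.192.in-addr.arpa.
```

Issuing that query for every routable IPv4 address, as the Rapid7 dataset does, yields the hostnames that operators have (often revealingly) assigned to their devices.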

“After going through the data enough times, it became obvious there are so many different vulnerabilities and issues that really just take some human eyes on things,” Moore said. “It really doesn’t make sense to sit on this amount of data and not share it.”

Researchers and IT managers can use the data in a variety of ways. In bulk, researchers could generate vulnerability data per vendor or per product; on a narrower scope, the data can be used for asset inventory, for example over a particular IP range, to identify existing vulnerabilities. A Rapid7 team used the data, for example, to accelerate a penetration test on an 80,000-node network; Moore said an entire asset inventory was done in about 20 minutes, as opposed to three days with customary tools and scans.
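Enumerating the addresses in a target range, as in the asset-inventory scenario above, takes only Python’s standard ipaddress module; the CIDR blocks below are hypothetical examples:

```python
import ipaddress

def hosts_in(cidr: str) -> list:
    """List usable host addresses in a CIDR block (network/broadcast excluded)."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]

# Match a scan dataset against the hosts an organization believes it owns.
inventory = hosts_in("10.0.0.0/29")
```

A /29, for instance, yields six usable host addresses to probe or reconcile against a list of known assets.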

Early feedback has been positive, and Moore said some researchers have already begun to build Web services and queries around the data. Moore added that UM and Rapid7 hope additional datasets will eventually be contributed, so long as the collection efforts are done legally and within ethical bounds. It’s for that reason, Moore said, that neither UM nor Rapid7 will host data collected by the Internet Census or the Carna botnet, whose legality is still in question.

“Right now we’re steering away from offering any kind of Web service; I don’t want to have a service where folks are depending on me to get them results, nor do I want to be responsible for seeing what queries they run,” Moore said. “It’s not what we’re trying to solve. We’re taking the bulk data that’s multiple gigabytes, 5-6 terabytes, and make that available on the website in bulk form for anyone who’s doing research to download it. At the same time, we’re taking different slices of the data as well and saying ‘Let’s just take the name fields for this packet,’ or parse out a particular field and make those available for folks who are doing more casual testing.”

 


Article source: http://threatpost.com/new-project-sonar-crowdsources-embedded-device-vulnerability-analysis/102457


Experian Data Breach Resolution Reveals Five Common Mistakes Made When …


COSTA MESA, Calif., Sept. 30, 2013 /PRNewswire via COMTEX/ —
A data breach is an issue that can affect any organization and National Cyber Security Awareness Month is an opportune time for organizations to start to prepare for an incident or enhance their current response plan. With experience handling thousands of breaches, Experian Data Breach Resolution is observing the commemorative month by providing key insight into how to overcome common mistakes companies experience when handling a data breach.

“While there has been great progress among businesses and institutions in data breach prevention, breaches can still occur and it’s important to execute the right steps after an incident,” said Michael Bruemmer, vice president at Experian Data Breach Resolution. “Being properly prepared doesn’t stop with having a response plan. Organizations need to practice the plan and ensure it will result in smooth execution that mitigates the negative consequences of a data breach.”

Those possible outcomes can include a loss of customers, regulatory fines and class-action lawsuits. Studies show that a majority of organizations have had, or expect to have, a data breach that results in the loss of customers and business partners, and more than 65 percent of companies have suffered, or believe they will suffer, serious financial consequences as a result of an incident[1]. Among companies that had breaches, the average reported cost of incidents over the last 24 months was $9.4 million. These costs are only a fraction of the average maximum financial exposure of $163 million that the companies surveyed (breached or not) believe they could face due to cyber incidents[2].

Experian Data Breach Resolution will present on this topic at the International Association of Privacy Professionals (IAPP) Privacy Academy, held in Bellevue, Wash., on Oct. 1, at the conference session titled “Managing the Top Five Complications in Resolving a Data Breach.” Those not in attendance can view the presentation through a live stream at http://www.ustream.tv/experiandbr and pose questions to the panelists in real time via Twitter using the hashtags #databreach and #iapp.

According to Bruemmer, three of the most common mistakes include:

— No engagement with outside counsel — Enlisting an outside attorney is highly recommended. No single federal law or regulation governs the security of all types of sensitive personal information. As a result, determining which federal law, regulation or guidance applies depends, in part, on the entity or sector that collected the information and the type of information collected and regulated. Unless internal resources are knowledgeable about all current laws and regulations, it is best to engage legal counsel with expertise in data breaches to help navigate this challenging landscape.

— No external agencies secured — All external partners should be in place prior to a data breach so they can be called upon immediately when a breach occurs. The process of selecting the right partner can take time as there are different levels of service and various solutions to consider. Plus, it is important to think about the integrity and security standards of a vendor before aligning the company brand with it. Not having a forensic expert or resolution agency already identified will delay the data breach response process.

— No single decision maker — While several parties within an organization should be on a data breach response team, every team needs a leader. Determine who will drive the response plan and serve as the primary contact for all external partners. Also, outline a structure of internal reporting to ensure executives and everyone on the response team are up to date and on track during a data breach.

Depending on the industry, additional oversights may involve securing proper cyber insurance and following the Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health Act (HITECH). The complete list and tips to overcome these issues will be addressed by Bruemmer at the IAPP Privacy Academy presentation. For the Experian Data Breach Resolution schedule of presentations, visit http://www.experian.com/data-breach/events.html.

Additional data breach resources, including Webinars, white papers and videos, can be found at http://www.experian.com/databreach.

Read Experian’s blog at http://www.experian.com/dbblog.

About Experian Data Breach Resolution

Experian® is a leader in the data breach resolution industry and one of the first companies to develop products and services that address this critical issue. As an innovator in the field, Experian has a long-standing history of providing swift and effective data breach resolution for thousands of organizations, having serviced millions of affected consumers. For more information on the Experian Data Breach Resolution division at ConsumerInfo.com, Inc. and how it enables organizations to plan for and successfully mitigate data breach incidents, visit http://www.experian.com/databreach.

About Experian

Experian is the leading global information services company, providing data and analytical tools to clients around the world. The Group helps businesses to manage credit risk, prevent fraud, target marketing offers and automate decision making. Experian also helps individuals to check their credit report and credit score, and protect against identity theft.

Experian plc is listed on the London Stock Exchange (EXPN) and is a constituent of the FTSE 100 index. Total revenue for the year ended March 31, 2013 was US$4.7 billion. Experian employs approximately 17,000 people in 40 countries and has its corporate headquarters in Dublin, Ireland, with operational headquarters in Nottingham, UK; California, US; and Sao Paulo, Brazil.

For more information, visit http://www.experianplc.com.

Experian and the Experian marks used herein are service marks or registered trademarks of Experian Information Solutions, Inc. Other product and company names mentioned herein are the property of their respective owners.

[1] Experian Data Breach Resolution and Ponemon Institute study, Is Your Company Ready for a Big Data Breach? 2013

[2] Experian Data Breach Resolution and Ponemon Institute study, Managing Cyber Security as a Business Risk: Cyber Insurance in the Digital Age 2013

SOURCE Experian Data Breach Resolution

Copyright (C) 2013 PR Newswire. All rights reserved

Article source: http://www.marketwatch.com/story/experian-data-breach-resolution-reveals-five-common-mistakes-made-when-handling-a-breach-2013-09-30



Data breach lessons: How to rewrite rules

As embarrassing and costly as a big data breach might be, many security professionals will tell you such an incident can be good news in the long run for a business’s risk posture. Sometimes, even after numerous warnings from security and risk advisers, the only way to get senior managers to sit up and pay attention to a set of risks is for an incident arising from those risks to be detailed blow by blow in the business press.

“Once an organization has gone through all that pain, they’re forever changed,” said Lucas Zaichkowsky, an enterprise defense architect at AccessData. “Your whole outlook changes.”

 For all of the problems that breaches bring, they also present a learning opportunity and potential for developing better processes that improve the day-to-day effectiveness of IT security. But that growth can occur only if organizations spend the time to thoroughly analyze the event to find the fundamental risk factors that contributed to a compromise.

 “If you haven’t taken the time to figure out what’s wrong in your program or your technology, then it’s pretty natural that it’s going to happen again,” says Vinnie Liu, managing partner for security consulting firm Bishop Fox.

Unfortunately, some organizations today engage in a whack-a-mole brand of incident response, reacting to breaches and malware outbreaks only by cleaning up the affected systems and never delving into root causes, says James Phillippe, leader of threat and vulnerability services for the U.S. at Ernst & Young. Meanwhile, he says, “the root cause — weak network controls, poor user education, weak policies, or perhaps improper architecture configurations — will persist.”

On the other end of the spectrum, many organizations recognize that they can’t simply clean up systems after a breach and carry on as before. But because they react quickly without analyzing why things went wrong, they end up wasting a lot of money. And then they still end up breached again.

“I think a lot of recidivism stems from the knee-jerk reactions,” Liu says. “You see something wrong, you buy a bunch of tools, you drop them in place, and you think you’re safe.”

This is why leveraging a breach for more executive buy-in, budget, and meaningful change requires you to use that event “in a balanced manner, not in a panic attack,” says Robert Stroud, international vice president of ISACA.

Once a thorough post-mortem is done, he recommends taking an existing risk model (or developing a new one) and running the operational and financial impacts of the breach through it to understand how the incident changes the organization’s risk calculations. From there, an organization can see more clearly whether it needs to change only a few controls or make a major overhaul of its security processes.

“More often than not, we see organizations go, ‘Hey, we’ve got to do something about that, let’s just do it,’ and they start executing immediately,” Stroud says. “Organizations will go without any assessment, and spend significant money on potential vulnerability without any understanding of the business impact or risk exposure, potentially costing their business significant money. It might be more money than the risk itself.”
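One standard way to run a breach’s impact through a risk model, in the spirit of what Stroud describes, is annualized loss expectancy: ALE = single-loss expectancy × annual rate of occurrence. A minimal sketch with hypothetical figures:

```python
def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = SLE * ARO: the expected yearly loss from a given risk."""
    return single_loss * annual_rate

# Hypothetical: a $2M-impact breach expected once every four years.
ale = annualized_loss_expectancy(2_000_000, 0.25)  # 500000.0
```

Comparing a proposed control’s annual cost against the ALE it offsets is one rough guard against spending, as Stroud puts it, more money than the risk itself.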

As the experts have explained, establishing the new normal following a breach takes post-mortem analysis, and it also requires changing risk models. But, more significantly, it will involve sustained investment. The cost of upping the security game is easy to overlook amid the more picayune line items of breach response, but process improvement should be part of the overall response budget once a breach has come to light.

 “People talk about overlooking the cost of credit monitoring, reporting, fees, and things like that,” Liu says. “But from what we’ve seen, I think some of the biggest investments that have to be made over the long term following a breach is for changing process.”

 

Article source: http://www.informationweek.in/informationweek/news-analysis/282896/breach-lessons-rewrite-rules



Mother of All Data Breaches Shows Need for Layered Security


An identity theft service has hacked several data broker behemoths, according to a seven-month investigation by KrebsOnSecurity, and yes, it may be the mother of all hacks.

Here’s the backstory: For the past two years, SSNDOB.ms marketed itself on underground cybercrime forums as a reliable and affordable service that customers can use to look up Social Security numbers, birthdays and other personal data on any U.S. resident, Krebs reports. The price: from 50 cents to $2.50 a record and from $5 to $15 for credit and background checks. The subscription-based service accepted anonymous virtual currencies like Bitcoin and WebMoney.

Late last month, Krebs reports, network analyses uncovered that the credentials SSNDOB admins used were also responsible for operating a botnet that apparently tapped into the internal systems of large data brokers. LexisNexis confirmed that it was compromised as far back as April 10. Krebs reports that a program installed on the server was designed to open an encrypted channel of communications from within LexisNexis’s internal systems to the botnet controller on the public Internet.


Five Data Brokers Breached

“Two other compromised systems were located inside the networks of Dun & Bradstreet, a Short Hills, New Jersey, data aggregator that licenses information on businesses and corporations for use in credit decisions, business-to-business marketing and supply chain management,” Krebs explains. “According to the date on the files listed in the botnet administration panel, those machines were compromised at least as far back as March 27, 2013.”

According to Krebs, the fifth server compromised as part of this botnet was located at Internet addresses assigned to Kroll Background America. Kroll, which is now part of HireRight, provides employment background, drug and health screening. Altegrity owns both Kroll and HireRight. Krebs says files left behind by intruders into the company’s internal network suggest the HireRight breach extends back to at least June 2013.

“An initial analysis of the malicious bot program installed on the hacked servers reveals that it was carefully engineered to avoid detection by antivirus tools,” Krebs says. “A review of the bot malware in early September using Virustotal.com — which scrutinizes submitted files for signs of malicious behavior by scanning them with antivirus software from nearly four dozen security firms simultaneously — gave it a clean bill of health: none of the 46 top anti-malware tools on the market today detected it as malicious (as of publication, the malware is currently detected by six out of 46 anti-malware tools at Virustotal).” (continued…)

 

Article source: http://www.newsfactor.com/story.xhtml?story_id=1230048QWDS3



Holy Cross Hospital Informs Former Patients Of Data Breach

FT. LAUDERDALE (CBSMiami) – Nearly 10,000 former patients of Holy Cross Hospital have received letters in the mail notifying them that their personal information may have been accessed by a former employee.

According to hospital officials, they recently learned that an employee had inappropriately accessed thousands of patient records. The information included patient names, dates of birth, addresses and Social Security Numbers.

As a precaution, the hospital sent letters to all 9,900 patients whose demographic information was accessed (appropriately or potentially inappropriately) by this person between November 2011 and August 2013.

An investigation by the hospital found the employee may have wanted the information to file fraudulent tax returns. The employee has since been terminated, and the hospital wants them to face criminal prosecution.

All patients affected by this incident have been offered free credit monitoring services.

Article source: http://miami.cbslocal.com/2013/09/24/holy-cross-hospitals-inform-former-patients-of-data-breach/

