How to avoid being the next Yahoo

It’s no longer a question of whether you’ll get attacked; it’s a question of knowing what the repercussions will be and whether you have the right controls to minimize or completely eliminate the fallout. To do this effectively, you need to be attuned to your network controls and architecture. Asking the right questions can get you there, and can also ensure that network architects are aligned with business and security goals.

vArmour CEO Tim Eades offers a few questions decision makers should ask to keep their organizations from becoming the next Yahoo.

If we were subject to a data breach, how would our controls and processes appear when described on tomorrow’s front page news? 

Why is this important?

This line of thinking is focused on stewardship of and accountability for the infrastructure, products and services the firm offers. Are we iterating, are we staying up to date, are we seeking advice and learning from the experiences of others?

What should the answer be?

Controls and processes should be standardized and documented, and staff should be trained and qualified to perform their duties and understand their roles and responsibilities. Independent internal and external auditors should verify that this is the case. Regularly testing responses to major incidents is also important, as is working closely with vendors and industry experts to create, maintain and certify standards.

Red flags?

Half-documented security standards that are unlikely to ever be completed. A lack of communication and a false sense of confidence can also lead to huge problems.

What are our most critical and/or regulated applications and data systems?

Why is this important?

Identifying the most valuable environments enables appropriate controls to be put in place. These controls can then help identify malicious or anomalous behavior and spur the appropriate action. These systems can also be prioritized for remediation where controls don’t exist—particularly important where there is significant infrastructure sprawl.

What would we expect the answer to be?

Ideally there would be a system of record that includes dependencies, backed with current and accurate data. The network team should be active in the process, ensuring that risks have been assessed and dependencies mapped against the network infrastructure.
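To make this concrete, a dependency-backed system of record can be as simple as an inventory keyed by application, with criticality, network dependencies and review dates attached. The sketch below is illustrative only—the field names, system names and dates are hypothetical, not any particular product’s schema:

```python
# Minimal sketch of a system-of-record entry for a critical application,
# with network dependencies captured alongside risk metadata.
# All names and values here are illustrative examples.
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    name: str
    criticality: str                                 # e.g. "regulated", "critical", "standard"
    data_center: str
    depends_on: list = field(default_factory=list)   # upstream network services/devices
    last_reviewed: str = "unknown"                   # date of last risk review

inventory = {
    "payments-api": SystemRecord(
        name="payments-api",
        criticality="regulated",
        data_center="dc-east",
        depends_on=["core-switch-7", "ldap", "payments-db"],
        last_reviewed="2016-11-01",
    ),
}

def unreviewed(inv):
    """Flag records with no current risk review -- one of the red flags below."""
    return [r.name for r in inv.values() if r.last_reviewed == "unknown"]
```

Even a structure this small lets the team answer “what are our critical assets, and when were they last assessed?” from data rather than memory.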

What are the red flags?

Answers such as “We don’t know,” “We don’t need to know because the network is fully resilient,” “We are aware of some of the critical assets,” or “The inventory hasn’t been maintained for some time.”

Where and how are these critical systems connected?

Why is this important?

Knowing the answer to this question means that you have done something with the inventory data and stand half a chance of responding during an incident, or as part of forward planning for remediation, investment and improvement.

What would we expect the answer to be?

The team should be able to provide the network dependencies as they map to the systems—is it in a single data center or many? What network services does it leverage? Are there any single points of failure? Is it attached to a legacy, unsupported switch? The description should include some form of segmentation and control. In reality, most organizations are poor at upkeep, and a good portion of critical assets are connected to legacy networks, simply because those have been around the longest.

What are the red flags?

As with question two, you don’t want to hear “Partially,” “Incomplete,” or “Inconsistent.” If the description mentions legacy equipment or a lack of resilience, or makes no mention of security zones or controls, you’re also in trouble.

Can you report on which systems are accessing our critical/regulated applications and data? Can we tell if anything changes?

Why is this important?

Being able to answer this means: One, your team knows which critical assets you need to report against. Two, at least some monitoring tools are in place. Three, the monitoring tools are connected to the right controls. Four, it helps validate whether access is appropriate or not. And finally, action can be taken. Knowing what happens should include details of access recertification and of change management and approval processes. This is closely coupled with the ability to control access and respond to anomalous or malicious activity.

What would we expect the answer to be?

Yes—and it should be available as a self-service model based on entitlement. Unless the environment is new and built on a zero trust policy, most organizations will struggle to have this knowledge in any complete or consistent way—but that is really the point of the question. It is important to understand why these systems are talking, as it ties into policy administration. Further, access should be reviewed regularly, with clear processes for recertification and change management. In essence, the more active the controls or tools, the more positive the answer.

What are the red flags?

A cobbled-together report from disparate systems that requires the data to be validated before it can be trusted. Or the tooling isn’t there and it’s not clear which services we would need to report against (see question one).
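The “can we tell if anything changes?” part of this question can be reduced to diffing periodic snapshots of observed access against an approved baseline. A minimal sketch—the system names and flow data here are hypothetical:

```python
# Compare observed access to critical systems against an approved baseline.
# Any (source, destination) flow not in the baseline is anomalous and
# should trigger review, recertification or blocking.
approved = {
    ("hr-portal", "payroll-db"),
    ("payments-api", "payments-db"),
}

def anomalous_flows(observed):
    """Return flows touching critical systems that were never approved."""
    return sorted(set(observed) - approved)

observed_today = [
    ("hr-portal", "payroll-db"),
    ("build-server", "payments-db"),   # new, unapproved access
]
```

The hard part in practice is not the diff but maintaining a trustworthy baseline—which is exactly what the recertification processes above exist to do.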

How have we segmented the network?

Why is this important?

This question is intended to establish how network security has been approached at the organization. Understanding the prevailing architecture is important for determining the level of maturity and awareness of the limitations associated with a perimeter-based approach.

What would we expect the answer to be?

Ideally, there would be no presumption of trust for networks. There should be a series of domains and tenants for business units, environments and application classes. Controls within and between these domains and tenants should be state- and application-aware.

What are the red flags?

The network is not segmented—the internal network is trusted and we rely on a hardened perimeter.

How does this segmentation protect our critical/regulated applications and data?

Why is this important?

Do the teams believe that the current approach to network security provides sufficient protection for the critical/regulated applications and data, and are they able to back that up? Controls should stand up to internal/external audit, and compliance levels should be readily available.

What would we expect the answer to be?

System administrators should know how these systems are protected from other network zones and geographical regions. How granular the measures are—such as microsegmentation—and how effective and application-aware they are should be well understood.

What are the red flags?

Little or no segmentation between internal network zones, geographical regions, and critical/regulated applications and data.

How does this segmentation and associated controls reduce the opportunity for an attacker to laterally move within our environments?

Why is this important?

The blueprint for a data breach is to get in, move around, and find valuable assets and data, much of which is unstructured and highly distributed. Once inside a “trusted” network, adversaries can go anywhere, potentially compromising or accessing more and more systems in a domino effect.

What would we expect the answer to be?

We would want controls that restrict communication to authorized systems only and reduce the attack surface available to an attacker exploiting common protocol vulnerabilities. They should also prevent hijacking of existing connections (by being stateful) and resist being disabled by an attacker (the agent problem).

What are the red flags?

There is no clear understanding of how hackers can compromise networks via unassuming devices.
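Lateral movement can be reasoned about as graph reachability: from one compromised host, which systems can an attacker eventually reach over allowed connections? Segmentation works by removing edges from that graph. A small sketch with made-up host names:

```python
# Model lateral movement as reachability over allowed connections.
# Segmentation removes edges, shrinking what one compromised host exposes.
from collections import deque

def reachable(edges, start):
    """Breadth-first search: all hosts reachable from a compromised start."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, set()).add(dst)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Flat "trusted" network: a compromised laptop can pivot tier to tier.
flat = [("laptop", "web"), ("web", "app"), ("app", "db"), ("laptop", "db")]
# Segmented: the laptop may only reach the web tier; the app-db path
# is isolated in its own zone.
segmented = [("laptop", "web"), ("app", "db")]
```

In the flat network a compromised laptop ultimately reaches the database; in the segmented one it is contained at the web tier—the domino effect described above, cut short.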

How can these controls be used to better mitigate vulnerabilities in our systems?

Why is this important?

It isn’t always possible to patch every system or retire every vulnerable platform. In these cases it is necessary to bring in independent controls to mitigate the risk.

What would we expect the answer to be?

We would want to see a distributed security platform that can rapidly deploy application-aware controls to help protect vulnerable workloads until they can be remediated. Traditional deployments can’t apply controls with the required level of granularity or proximity to the workload to be effective.

What are the red flags?

There are no control points where we could apply controls. We don’t have an inventory to work from.
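A compensating control of this kind often amounts to quarantining: when a workload is flagged as vulnerable, its broad policy is replaced with a minimal allowlist until it is patched. The sketch below is a simplified assumption of how such a workflow might look, with hypothetical host names:

```python
# Compensating-control sketch: when a workload is flagged vulnerable,
# tighten its policy to a minimal allowlist until it can be patched.
# Policies here are plain dicts; a real platform would push these rules
# to enforcement points near the workload.
policies = {"legacy-smb-server": {"allow": "any"}}

def quarantine(host, allowed_peers):
    """Replace a broad policy with a minimal allowlist for a vulnerable host."""
    policies[host] = {"allow": sorted(allowed_peers)}
    return policies[host]
```

The prerequisite is exactly what the red flags warn about: without control points near the workload and an inventory to work from, there is nothing to quarantine and nowhere to apply the rule.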

IDG Insider
