Category Archives: F5


Security Rule Zero: A Warning about X-Forwarded-For

Category : F5

Proxies operate on the premise that they exist to forward requests from one system to another. They generally add some value – otherwise they wouldn’t be in the middle – like load balancing (scale), data leak prevention (security), or compression (performance).

The thing is, beyond adding that value, the request sent by the client is otherwise passed, unmodified, to its target destination.

Here’s where things can get dicey. Today, we see more than half of all apps delivered via a proxy make use of X-Forwarded-For. 56% of real, live apps are using it, which makes it a pretty significant piece of data. X-Forwarded-For is the custom HTTP header that carries along the original IP address of a client so the app at the other end knows what it is. Otherwise it would only see the proxy IP address, and that makes some apps angry.

That’s because a good number of applications rely on knowing the actual IP address of a client to help prevent fraud and enable access. If you’ve logged into your bank, or Gmail, or your Xbox account lately (hey, it’s where Minecraft lives, okay?) from a device other than the one you typically use, you might have gotten a security warning. That’s because information about where you log in from is also tracked, in part to detect attempted fraud and misuse.

Your actual IP address is also used to allow or deny access in some systems, and as a means of deducing your physical location. That’s why those e-mail warnings often include “was that you logging in from Bulgaria?”

Some systems also use X-Forwarded-For to enforce access control. WordPress, for example, uses the .htaccess file to whitelist access based on IP addresses. No, it’s not the best solution, but it’s a common one, and you have to at least give them props for trying to provide some app protection against misuse.
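As a concrete illustration (the address below is a placeholder), an .htaccess rule of the kind WordPress guides commonly suggest, locking the login page to a single trusted IP, looks roughly like this:

```apache
# Allow wp-login.php only from one trusted address (Apache 2.2-style syntax)
<Files wp-login.php>
    Order Deny,Allow
    Deny from all
    Allow from 203.0.113.10
</Files>
```

Note that Apache evaluates these directives against the TCP source address; a server or plugin configured to trust X-Forwarded-For instead inherits the spoofing problem this article warns about.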

Irrespective of whether it’s a good idea or not, if you’re going to use X-Forwarded-For as part of your authentication or authorization scheme, you should probably make a best-effort attempt to ensure it’s actually the real client IP address. It is one of the more commonly used factors in the overall security equation; one that protects the consumer as much as it does corporate interests.

But if you are blindly accepting whatever the client sends you in that header, you might be enabling someone to spoof the value and thereby bypass security mechanisms meant to prevent illegitimate access. I can spoof just about anything I want, after all, by writing a few lines of code or grabbing one of the many Chrome plug-ins that enables me to manipulate HTTP headers with ease.
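To see just how low the bar is, here’s a sketch (Python standard library, against a hypothetical URL; the request is built but never sent) of a client attaching a forged X-Forwarded-For:

```python
import urllib.request

# Build a request to a hypothetical protected endpoint and attach a forged
# X-Forwarded-For that claims an internal, "trusted" source address.
req = urllib.request.Request("https://app.example.com/admin")
req.add_header("X-Forwarded-For", "10.0.0.1")  # the spoof: one line of code

# urllib normalizes header names via str.capitalize, hence the lookup key.
# Any server that reads this header blindly will believe the request came
# from 10.0.0.1, not from the attacker's real address.
print(req.get_header("X-forwarded-for"))  # 10.0.0.1
```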

One of the ways to ensure that you’re getting the actual IP address is to not trust user input. Yes, there’s that Security Rule Zero again. Never trust user input. And we know that HTTP headers are user input, whether they appear to be or not.

If you’ve got a proxy already, great. If not, you should get one. Because that’s how you extract and put the right value in X-Forwarded-For and stop spoofers in their tracks.

Basically, you want your proxy to be able to reach into a request and find the actual IP address hidden in its IP packet. Some proxies can do that with configuration or policies; others require some programmatic magic. However you get it, that’s the value you put into the X-Forwarded-For HTTP header before proceeding as normal. Doing so ensures that apps and downstream services have accurate information on which to base their decisions, including those regarding access and authorization.
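One way to sketch that behavior (plain Python with illustrative names, not any particular proxy’s API): drop whatever X-Forwarded-For the client sent and set the header from the peer address taken off the TCP connection itself.

```python
def forwarded_headers(client_headers, peer_ip):
    """Rebuild the headers a proxy should send upstream.

    Any client-supplied X-Forwarded-For is dropped (it is user input and
    cannot be trusted); the value is set from the connection's own peer
    address instead.
    """
    headers = {k: v for k, v in client_headers.items()
               if k.lower() != "x-forwarded-for"}
    headers["X-Forwarded-For"] = peer_ip
    return headers

# A spoofing client claims to be 10.0.0.1, but the TCP peer is 203.0.113.7:
upstream = forwarded_headers(
    {"Host": "app.example.com", "X-Forwarded-For": "10.0.0.1"},
    "203.0.113.7",
)
print(upstream["X-Forwarded-For"])  # 203.0.113.7
```

In chained-proxy deployments the convention is to append the peer address to the existing list rather than replace it, trusting only the hops you control.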

For most architectures and situations, this will mitigate the possibility of a spoofed X-Forwarded-For being used to gain unauthorized access. As always, the more pieces of information you have to form an accurate understanding of the client – and its legitimacy – the better your security. Combining IP address (in the X-Forwarded-For) with device type, user-agents, and other tidbits automatically carried along in HTTP and network protocols provides a more robust context in which to make an informed decision.

Stay safe!


Source: https://f5.com/about-us/blog/articles/security-rule-zero-a-warning-about-x-forwarded-for-28596?sf175780091=1

Author: LORI MACVITTIE



How to prepare your company for the new European Union privacy laws

Category : F5

By May 2018, the European Union will replace the patchwork of individual EU countries’ privacy laws with a single EU-wide legal framework, the General Data Protection Regulation (GDPR), which carries stricter rules and stiffer penalties. But regardless of the new law, incorporating data protection into your culture is a practice you should adopt, if you haven’t already.

By May 2018, any company holding data on citizens of the European Union (EU) will face new data privacy rules known as the General Data Protection Regulation (GDPR). The new regulations may produce more compliance headaches for your company as well as strict penalties for non-compliance.

The new legal framework replaces the EU Data Protection Directive and the patchwork of individual EU countries’ privacy laws. Violating certain requirements carries fines of up to €10 million or 2 percent of global annual revenue, whichever is higher; violating the regulation’s core principles could bring fines of up to €20 million or 4 percent of global annual revenue, whichever is higher.

The GDPR splits companies into two types: those that process data and those that control it. If you are established in the EU or you offer goods or services to companies or individuals in the EU, you are responsible for complying with the law or potentially facing stiff penalties.

The law applies to any personal information (PI) and in some instances expands the legal definition of PI from previous rules. For example, while Internet addresses were not explicitly considered PI in the previous EU directive, they are now.

WHAT SHOULD YOU DO?

You need to become familiar with the GDPR, find out whether you need to comply with the regulations, and, if so, plan to meet its requirements. For example, you may need to hire or appoint a data protection officer, make changes to meet stricter security requirements, or create a process for notifying regulators of any breach within 72 hours of discovery.

Besides studying the GDPR and determining how it applies to you, security and risk professionals should consider these steps:

1. Incorporate data protection into your culture

Data protection should be part of every company’s culture, not just to meet GDPR requirements, but because it’s a good basic practice. Forgoing data protection will only make the GDPR requirements more onerous. Besides, ignoring your customers’ privacy is not a good long-term strategy. It increases the chance that your business will run afoul of the law or suffer the wrath of angry customers. And there’s always the chance you’ll make headlines in a way you didn’t intend.

2. Put someone in charge

The GDPR requires some businesses to have a data protection officer who is responsible for ensuring that a company is in compliance. The data protection officer may lead impact assessments and maintain mandated documentation, among other requirements.

3. Take only what you need

There’s another way to clear the privacy bar on data: don’t collect it in the first place. If you’re processing or storing PII you don’t need, stop. Minimizing the data you’re collecting means less to protect.

4. Audit, adapt, repeat

Industry news often seems like a roll call of companies that at one time complied with data-protection regulations—HIPAA, SOX, PCI DSS, or GLBA—but still found themselves breached. Testing your applications and infrastructure early and often to make sure they comply with GDPR and any other data protection regulations is good practice. Every change to infrastructure and business processes should be examined to gauge its impact on your company’s compliance.

Regulations like the GDPR highlight the need for you to be familiar with compliance. While 2018 may seem like a long way off, you should start looking at the regulations today.

Source: https://wemakeappsgo.f5.com/business-strategy/how-to-prepare-your-company-for-the-new-european-union-privacy-laws/?utm_source=social&utm_medium=linkedin&utm_campaign=a15&sf174499085=1

Author: David Holmes



Phishing: The Secret of Its Success and What You Can Do To Stop It

Category : F5

Introduction

Phishing has proved so successful that it is now the number one attack vector.1 The Anti-Phishing Working Group reports that in the first half of 2017 alone, more than 291,000 unique phishing websites were detected, over 592,000 unique phishing email campaigns were reported, and more than 108,000 domain names were used in attacks.2 In 2016, the FBI’s Internet Crime Complaint Center (IC3) received phishing reports from more than 19,000 victims.3 However, IC3 also notes that only an estimated 15% of victims ever report crimes to law enforcement, so the actual total could exceed 125,000. Of the 19,000 reported cases, the total cost exceeded $31 million.

In this report, we explore why phishing campaigns work so well, how unsuspecting users play into the hands of attackers, and what organizations can do about it.

How Phishers Bait Their Hooks with Information You Volunteer

Seven minutes until his next meeting, Charles Clutterbuck, the CFO of Boring Aeroplanes, had just enough time to answer a few emails. He flopped onto his padded leather chair and tapped out his password. A dozen emails glowed unread at the top of his inbox stack. He was skimming down the list of names and subjects when one caught his eye. It was from an old friend. With a nod, he clicked it up. “How’s it going, Clutt?” the email began. He smiled at the old nickname from the dorm days when he first met Bill. Funny that Bill was emailing him at his work address, but that question was quickly forgotten as he skimmed the message.


From: Bill Fescue b.fescue@blafmail.com
Sent: Thursday, July 6, 2017 12:16
To: Charles Clutterbuck c.clutterbuck@boringaeroplanes.com
Subject: My new hoss


How’s it going, Clutt?

Hit the track with my new Falkens and, guess what? Tremendous grip! No more wheel spins. Check out my track time and cornering: http://vizodsite.com/istruper_video_10

See you at next week’s Autocross?

Bill


As you might have guessed, this is a spear phishing email.

In spear phishing, the attacker leverages gathered information to create a specific request to trick someone into running something or giving up personal information. It’s an extremely successful technique and attackers know this. In fact, the Anti-Phishing Working Group reports that phishing has gone up 5,753% over the past 12 years.4

Phishers work by impersonating someone trusted by the target, which requires crafting a message that is credible and easily acceptable. To do this, the phisher needs information about the target to construct their disguise and bait the hook. They get this information by research and reconnaissance.

In the example above, an executive at a military plane parts supplier received an email apparently from a friend. His interest in car racing—as well as his friend’s name and style of speaking—was plucked off social media. The attacker spent a few minutes of web research on car racing to get the vernacular right and then created an email account in the friend’s name. The link is to a site with a video server that sends an exploit geared to the target’s laptop operating system (gleaned from research on the company infrastructure). It loads specialized malware built to exfiltrate aerospace intellectual property. Easy, peasy.

So, we know that attackers are gathering information from social networks and various Internet sources, but just how much information is available? Defenders spend quite a bit of energy preventing the obvious information leaks like passwords, crypto keys, and personally identifiable information (PII). Those are high impact information leaks, but what about the low impact ones?

It’s worth exploring what’s typically discovered in an attacker’s passive electronic reconnaissance. And, that’s not counting active recon like calling the company’s main phone number and trying to extract information via pretexting5 or going onsite for dumpster diving.6 This is all low-risk stuff that can happen in secret from afar. But, as the Great Detective said, “You know my method. It is founded upon the observation of trifles.”7

How Attackers Collect Data About Your Employees

We’ve seen an everyday example of how easily a competent corporate executive (or any other employee, for that matter) can be drawn into a phishing scam through social engineering. Now let’s look at how some of the seemingly innocent actions we take (information we post) on the Internet make the job of a phisher simple—like taking candy from a baby.

Since spear phishers go after a specific organization, they need to know who works there before they can begin their targeting. A lot of people tag themselves on various social media sites as an employee of a particular company. LinkedIn is a site that provides lots of details on where people work. Quora is another site where tech people congregate:

[Image: sample Quora profile listing an employer]

Through these sites, it’s not hard for phishers to gather up a list of names of employees at a specific organization.

Social Media and Personal Information

Despite the security team’s best efforts to prevent it, employees will share and spread information about themselves all over the Internet. Social media companies expend tremendous effort to encourage people to join and post information about themselves. Some valuable bits of information that attackers can use are:

  • Work history
  • Education information (college and high school attended)
  • Family and relationship information
  • Comments on links
  • Dates of important life events
  • Places visited
  • Favorite sites, movies, TV shows, books, quotes, etc.
  • Photographs

Profiling

All these pieces of information provide powerful leverage points for attackers, but they also provide a lot of valuable indirect information. As our phishing example points out, attackers can observe the writing style of the people they want to impersonate. Beyond that, they can also create detailed psychological profiles of victims. There are a number of tools and techniques available to do things like:

  • Analyze sentiment to determine people’s opinions and political leanings8
  • Analyze posting times to determine when people are awake (and asleep) and what their home time zones are9
  • Determine an individual’s personality type, which can inform manipulation techniques10
  • Analyze relationships and friendship ties11

With sites like Facebook that host nearly 2 billion users,12 it’s very easy to craft a Google search for someone with “[name] [location] site:facebook.com” to find their page.

Many social media users are part of interest groups, which can provide useful leverage points for a phisher.

[Image: sample Facebook interest groups]

Even when someone sets their social media profile to “private,” it’s still not too difficult for an attacker to break in and get what they want. Here is a hacking service being advertised on a Darknet for just that purpose:

[Image: darknet advertisement for a social media hacking service]

People Search Engines

In addition to social media sites, there are numerous “people search” sites like Pipl, Spokeo, and ZabaSearch. Many of these sites pull together profiles based on dozens of resources. Sometimes they’re not very helpful, like this example for me, because I’m a paranoid security guy:

[Image: people search result sample]

However, different sites can dig up some interesting data, like this example:

[Image: people search result showing detailed personal data]

Note how this site provides Facebook information, email address, annual income, education, phone number, age range, and even ethnicity. Here’s some typical information you can get from these kinds of sites:

  • Home address
  • Mobile phone number
  • Home (landline) phone number
  • Age
  • Salary range
  • Spouse and family
  • Email address, which leads to possible usernames
  • Middle name
  • Maiden name

Most employees don’t think about things like this—because most employees don’t think like bad guys. It doesn’t occur to them how much personal and work-related information they are freely volunteering on various websites—or how easy they make it for phishers to pull information together into some pretty comprehensive professional dossiers. The lesson here is think before you volunteer information about yourself and your work, and limit the number of websites where you do this.

How Attackers Gather Data about Your Organization

When attackers want to go after a specific organization but need to know which individuals within that organization to target, then they need to dig through corporate and business records. They can start simply with the ownership records, which are freely available over the web, as in this example:

Publicly traded companies have even more information available online from their SEC filings. Here is an excerpt from a recent 8-K filing from F5 about our new corporate headquarters:

Many corporations that have been around for more than a few years have probably been involved in a lawsuit or three. Attackers can pull those records, as well, like this example from now defunct Eastern Airlines:

Like the people search databases we saw earlier, there are also aggregator search tools for corporations, such as OpenCorporates, that pull together a lot of this information into a single place.

These sources can help attackers build profiles of individuals and department names, which are powerful tools for flavoring their phishing bait. Scanning a company’s website can also give you clues about business partners and affiliates, for which you can repeat all of these searches.

Your Organization’s Internet Presence

Everyone active on the Internet has an IP address, and IP addresses can provide some basic information about where they terminate and who owns them.

Granted, some of this information can be misleading because IP addresses can trace back to the ISP rather than the actual organization. But, sometimes attackers get lucky. Most of the time, they can uncover where sites are being hosted and gain some basic information about the company’s network configuration.

In addition to the IP address information, every organization with a domain has domain registration information. Like IP information, for most sizable organizations, it’s going to be generic and not reveal much that’s useful. But again, sometimes attackers get lucky.

Corporate Email Addresses

Where else can attackers find usable email addresses for an organization? There are many companies out there that can help with that. Here’s Hunter,13 which not only provides email addresses but also provides hints on the email format used.

 

Beware of Data Leaking Out of Your Equipment

To pull off successful phishing scams, at a minimum, attackers need information about your organization and your employees. We’ve already seen several ways they go about getting this information. But one area organizations often overlook is the information that’s leaking out of their systems.

Improperly configured network systems and applications can leak internal configuration and infrastructure information. This can include information like server names, private network addresses, email addresses, and even usernames. Devices and software that have been known in the past to leak internal data onto the Internet include DNS servers, self-signed certificates, email headers, web servers,14 web cookies, and web applications.15

Here is a simple example of how a sloppily configured web server can reveal the internal IP addressing scheme:

HTTP/1.0 200 OK
Date: Mon May 22 15:31:46 PDT 2017
Server: Macrohard-YYZ/6.0
Connection: Keep-Alive
Content-Type: text/html
X-Powered-By: BTQ.NET
Accept-Range: bytes
Last-Modified: Sat, May 20 04:14:01 PDT 2017
Content-Length: 1433
Connection-Location: http://192.168.0.10/index.htm

Attackers can also comb through web application source code to look for developer names, internal code words, and even references to supposedly hidden services.16 Almost all of these kinds of technical information leakages are rated very low impact and are usually deprioritized in remediation.
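Checking your own responses for this kind of leakage is easy to automate. A minimal sketch using Python’s standard library: run against a raw header block like the one above, it flags any private (RFC 1918 or link-local) address that slipped out.

```python
import ipaddress
import re

def find_private_ips(header_block):
    """Return private IPv4 addresses found in a raw HTTP header block."""
    candidates = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", header_block)
    leaked = []
    for cand in candidates:
        try:
            ip = ipaddress.ip_address(cand)
        except ValueError:
            continue  # matched the regex but is not a valid address (e.g. 999.1.1.1)
        if ip.is_private and cand not in leaked:
            leaked.append(cand)
    return leaked

headers = "Connection-Location: http://192.168.0.10/index.htm\r\nServer: Macrohard-YYZ/6.0"
print(find_private_ips(headers))  # ['192.168.0.10']
```

The same check can be folded into a CI step or a periodic scan of your Internet-facing endpoints.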

Application Platform Discovery

Applications are rarely built from scratch but are instead assembled from libraries and existing frameworks. All of these application components can contain vulnerabilities as well as clues to the development team and processes in an organization. There are numerous easy-to-use tools that can uncover what is being deployed. Here is the BuiltWith tool’s analysis of a site:

 

Email Headers

An excellent source of internal configuration information can be gleaned from email headers. Attackers can simply fire off a few email inquiries to folks at an organization and see what they can find. Here’s a typical email header using our example company, Boring Aeroplanes, from our phishing example. Note both internal and external IP addresses are shown, along with server names:

Received: from edgeri.boringaeroplanes.com (host-12-154-167-196.boringaeroplanes.com. [312.154.167.296])
Received-SPF: pass (google.com: domain of charles.clutterbuck@boringaeroplanes.com
designates 312.154.167.296 as permitted sender) client-ip=312.154.167.296;
Received: from edgeri.boringaeroplanes.com (172.31.1.48) by
WEXCRIB00001059.corp.internal.boringaeroplanes.com(172.31.1.42) with Microsoft
 SMTP Server id 14.3.301.0; Fri, 28 Apr 2017 10:40:36 -0400
Received: from WEXCRIB00001065.corp.internal.boringaeroplanes.com(70.338.297.31)
by WEXCRIB00001059.corp.internal.boringaeroplanes.com (172.31.1.42) with
Microsoft SMTP Server (TLS) id 14.3.301.0; Fri, 28 Apr 2017 10:39:23 -0400
Received: from WEXCRIB00001054.corp.internal.boringaeroplanes.com
([169.254.9.522]) by WEXCRIB00001065.corp.internal.boringaeroplanes.com
([70.338.297.31]) with mapi id 14.03.0301.000; Fri, 28 Apr 2017 10:39:31 -0400
From: “Clutterbuck, Chuck” <charles.clutterbuck@boringaeroplanes.com>
Subject: Inquiry
Thread-Topic: Inquiry
Thread-Index: AdLAKumC2+2KaqenReOr0muBBLJpfQ==
Date: Fri, 28 Apr 2017 14:39:30 +0000
Accept-Language: en-US
x-originating-ip: [10.16.15.170]
x-keywords4: SentInternet
x-cfgdisclaimer: Processed
MIME-Version: 1.0
Return-Path: 

From this, attackers have a number of IP addresses, and they know what software the mail server is running and how email flows out of the organization.
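Pulling those details out programmatically is just as easy. A sketch using Python’s standard email module (the hostnames and addresses below are fictional stand-ins, echoing the example header): each Received line records one relay hop, and the IPs can be harvested from it.

```python
import ipaddress
import re
from email import message_from_string

# Fictional reply headers, in the style of the example above.
raw = (
    "Received: from edge.example.com (172.31.1.48) by\n"
    " mail01.corp.internal.example.com (172.31.1.42) with Microsoft\n"
    " SMTP Server id 14.3.301.0; Fri, 28 Apr 2017 10:40:36 -0400\n"
    "From: someone@example.com\n"
    "Subject: Inquiry\n"
    "\n"
    "body\n"
)

msg = message_from_string(raw)
ips = []
for hop in msg.get_all("Received"):
    for cand in re.findall(r"(?:\d{1,3}\.){3}\d{1,3}", hop):
        try:
            ipaddress.ip_address(cand)
        except ValueError:
            continue  # version strings like 14.3.301.0 match the regex too
        ips.append(cand)
print(ips)  # ['172.31.1.48', '172.31.1.42']
```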

How Attackers Pull it all Together, and How You Can Fight Back

By now, it should be pretty evident why phishing scams are becoming so rampant. Information about individuals and corporations is readily available and easy to find on the Internet, making it easy for attackers to pull phishing schemes together—and with great success.

None of the bits of information we discussed in previous sections is particularly dangerous by itself, so most people are not concerned. However, one of the principal tenets of information theory is that each piece of information becomes more valuable as you find more related pieces of information. One bit of low impact information is slightly useful. Two bits of related information makes both more useful. Add three, five, or ten pieces and the value can become inestimable.

What Does a Phisher Need?

Let’s walk through how an attacker can use specific information about individuals and corporations to build a phishing scam. Their first, key objective is to zero in on the correct person within the organization to accept the phishing “hook.” This means finding the names of persons through organizational data research. The attacker’s goal is to identify the people in key positions who have access to the data to be hacked. Barring that, attackers try to find the people who know the people in key positions so they can work their way through the inside network toward the goal. If that doesn’t work, an attacker can also go after individuals at trusted partner or supplier companies, leveraging their relationships and access to find a way in.

Once an attacker identifies the specific individuals, they can psychologically profile them based on their social media postings and affiliations. (In some cases, instead of phishing, an attacker might look for websites that the victim frequents and compromise those sites to plant drive-by downloads.17 This is called a Watering Hole Attack.18)

For crafting a phishing email, an attacker can use all the social media postings and organizational information to create the lure. They can go directly at an individual’s interests and friends, like in the example given above. They can also go indirectly and use organizational information and spoof the company’s HR department to ask employees to verify basic information.19 Knowing which individuals to impersonate in HR can help solidify the phishing email.

The attack doesn’t end there. The cyber crook wants to break into the network and probably plant malware to steal data. To make sure the malware works properly, they customize it for the appropriate versions of software running internally and the IP networks in use. In the example used in the beginning of this report, the attacker sent an exploit specifically tailored for the version of software running on the victim’s machine. Sneaking stolen data back out, called exfiltration, is always a challenge, but knowing what internal servers there are and where they’re located can provide an easy roadmap.

What to Do

There’s a limit to what we as security professionals can do to keep people from sharing information on social media. In government agencies, there are more restrictions and education around this kind of behavior (called operational security20). In the private and commercial world, corralling such behavior is much harder. So, security awareness training, citing these examples, is a good place to start. At least users will be aware of the consequences of their sharing and be forewarned of the deviousness of the attacks. Users should also be urged to report any suspicious emails and to verify with IT or Security before running outside software or providing their login credentials.

A good resource you can offer your users is this advice from Public Intelligence on how to reduce their online exposure by “opting out.”21 The fewer bits of data attackers can latch onto, the better.

It is a good idea for your security team (or better yet, your threat intelligence team) to periodically scan your own organization or hire a penetration tester. This could give you clues as to who and where attackers will strike first.

Closing the information leakage on your Internet-facing gear is often not hard to do and is recommended. Every door you close denies an attacker another puzzle piece of information. All domain and IP registries should be set up with generic role names and identifiers instead of the names of individuals. Most IT folks do this anyway to reduce potential spam, but it doesn’t hurt to check.

Lastly, contracting with a good penetration testing firm to do reconnaissance and a social engineering test is a great way to see what you might have missed. It’s better to pay and control the results of a mock attack than have to live through a real one.


Source:

:

People_Search_Sample

However, different sites can dig up some interesting data, like this example:

People_Search_Info_Sample

Note how this site provides Facebook information, email address, annual income, education, phone number, age range, and even racial profiling. Here’s some typical information you can get from these kinds of sites:

  • Home address
  • Mobile phone number
  • Home (landline) phone number
  • Age
  • Salary range
  • Spouse and family
  • Email address, which leads to possible usernames
  • Middle name
  • Maiden name

Most employees don’t think about things like this—because most employees don’t think like bad guys. It doesn’t occur to them how much personal and work-related information they are freely volunteering on various websites—or how easy they make it for phishers to pull information together into some pretty comprehensive professional dossiers. The lesson here is thinkbefore you volunteer information about yourself and your work, and limit the number of websites where you do this.

How Attackers Gather Data about Your Organization

When attackers want to go after a specific organization but need to know which individuals within that organization to target, then they need to dig through corporate and business records. They can start simply with the ownership records, which are freely available over the web, as in this example:

Publicly traded companies have even more information available online from their SEC filings. Here is an excerpt from a recent 8-K filing from F5 about our new corporate headquarters:

Many corporations that have been around for more than a few years have probably been involved in a lawsuit or three. Attackers can pull those records, as well, like this example from now defunct Eastern Airlines:

Like the people search databases we saw earlier, there are also aggregator search tools for corporations, such as OpenCorporates, that pull together a lot of this information into a single place.

These sources can help attackers build profiles of individuals and department names, which are powerful tools for flavoring their phishing bait. Scanning a company’s website can also give you clues about business partners and affiliates, for which you can repeat all of these searches.

Your Organization’s Internet Presence

Everyone active on the Internet has an IP address, and IP addresses can provide some basic information about where they terminate and who owns them.

Granted, some of this information can be misleading because IP addresses can trace back to the ISP rather than the actual organization. But, sometimes attackers get lucky. Most of the time, they can uncover where sites are being hosted and gain some basic information about the company’s network configuration.

In addition to the IP address information, every organization with a domain has domain registration information. Like IP information, for most sizable organizations, it’s going be generic and not reveal much that’s useful. But again, sometimes attackers get lucky.

Corporate Email Addresses

Where else can attackers find usable email addresses for an organization? There are many companies out there that can help with that. Here’s Hunter,13 which not only provides email addresses but also provides hints on the email format used.

Beware of Data Leaking Out of Your Equipment

To pull off successful phishing scams, at a minimum, attackers need information about your organization and your employees. We’ve already seen several ways they go about getting this information. But one area organizations often overlook is the information that’s leaking out of their systems.

Improperly configured network systems and applications can leak internal configuration and infrastructure information. This can include information like server names, private network addresses, email addresses, and even usernames. Devices and software that have been known in the past to leak internal data onto the Internet include DNS servers, self-signed certificates, email headers, web servers,14 web cookies, and web applications.15

Here is a simple example of how a sloppily configured web server can reveal the internal IP addressing scheme:

HTTP/1.0 200 OK
Date: Mon May 22 15:31:46 PDT 2017
Server: Macrohard-YYZ/6.0
Connection: Keep-Alive
Content-Type: text/html
X-Powered-By: BTQ.NET
Accept-Range: bytes
Last-Modified: Sat, May 20 04:14:01 PDT 2017
Content-Length: 1433
Connection-Location: http://192.168.0.10/index.htm

Attackers can also comb through web application source code to look for developer names, internal code words, and even references to supposedly hidden services.16 Almost all of these kinds of technical information leakages are rated very low impact and are usually deprioritized in remediation.

Application Platform Discovery

Applications are rarely built from scratch but are instead assembled from libraries and existing frameworks. All of these application components can contain vulnerabilities as well as clues to the development team and processes in an organization. There are numerous easy-to-use tools that can uncover what is being deployed. Here is the BuiltWith tool’s analysis of a site:

Email Headers

Email headers are an excellent source of internal configuration information. Attackers can simply fire off a few email inquiries to folks at an organization and see what comes back. Here's a typical email header using our example company, Boring Aeroplanes, from our phishing example. Note that both internal and external IP addresses are shown, along with server names:

Received: from edgeri.boringaeroplanes.com (host-12-154-167-196.boringaeroplanes.com. [312.154.167.296])
Received-SPF: pass (google.com: domain of charles.clutterbuck@boringaeroplanes.com
designates 312.154.167.296 as permitted sender) client-ip=312.154.167.296;
Received: from edgeri.boringaeroplanes.com (172.31.1.48) by
WEXCRIB00001059.corp.internal.boringaeroplanes.com (172.31.1.42) with Microsoft
 SMTP Server id 14.3.301.0; Fri, 28 Apr 2017 10:40:36 -0400
Received: from WEXCRIB00001065.corp.internal.boringaeroplanes.com (70.338.297.31)
by WEXCRIB00001059.corp.internal.boringaeroplanes.com (172.31.1.42) with
Microsoft SMTP Server (TLS) id 14.3.301.0; Fri, 28 Apr 2017 10:39:23 -0400
Received: from WEXCRIB00001054.corp.internal.boringaeroplanes.com
([169.254.9.522]) by WEXCRIB00001065.corp.internal.boringaeroplanes.com
([70.338.297.31]) with mapi id 14.03.0301.000; Fri, 28 Apr 2017 10:39:31 -0400
From: “Clutterbuck, Chuck” <charles.clutterbuck@boringaeroplanes.com>
Subject: Inquiry
Thread-Topic: Inquiry
Thread-Index: AdLAKumC2+2KaqenReOr0muBBLJpfQ==
Date: Fri, 28 Apr 2017 14:39:30 +0000
Accept-Language: en-US
x-originating-ip: [10.16.15.170]
x-keywords4: SentInternet
x-cfgdisclaimer: Processed
MIME-Version: 1.0
Return-Path: 

From this, attackers have a number of IP addresses, and they know what software the mail server is running and how email flows out of the organization.
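Extracting those addresses is trivial with standard tooling. This Python sketch (the sample message is invented, not from the example above) pulls every IP address out of a message's Received headers; it's worth running against your own outbound mail to see what your gateway exposes:

```python
# Sketch: extract IP addresses from an email's Received headers using only
# the standard library. The sample message below is invented for illustration.
import re
from email import message_from_string

RAW = """\
Received: from edge.example.com (203.0.113.10) by
 mail01.corp.internal.example.com (172.31.1.42) with SMTP id 1
From: user@example.com
Subject: Inquiry

body
"""

def received_ips(raw_message: str) -> list:
    """Return IPs in the order they appear in the Received chain."""
    msg = message_from_string(raw_message)
    ips = []
    for header in msg.get_all("Received", []):
        ips.extend(re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", header))
    return ips

print(received_ips(RAW))  # ['203.0.113.10', '172.31.1.42']
```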

How Attackers Pull it all Together, and How You Can Fight Back

By now, it should be pretty evident why phishing scams are becoming so rampant. Information about individuals and corporations is readily available and easy to find on the Internet, making it easy for attackers to pull phishing schemes together—and with great success.

None of the bits of information we discussed in previous sections is particularly dangerous by itself, so most people are not concerned. However, one of the principal tenets of information theory is that each piece of information becomes more valuable as you find more related pieces of information. One bit of low impact information is slightly useful. Two bits of related information makes both more useful. Add three, five, or ten pieces and the value can become inestimable.

What Does a Phisher Need?

Let’s walk through how an attacker can use specific information about individuals and corporations to build a phishing scam. Their first, key objective is to zero in on the correct person within the organization to accept the phishing “hook.” This means finding the names of persons through organizational data research. The attacker’s goal is to identify the people in key positions who have access to the data to be hacked. Barring that, attackers try to find the people who know the people in key positions so they can work their way through the inside network toward the goal. If that doesn’t work, an attacker can also go after individuals at trusted partner or supplier companies, leveraging their relationships and access to find a way in.

Once an attacker identifies the specific individuals, they can psychologically profile them based on their social media postings and affiliations. (In some cases, instead of phishing, an attacker might look for websites that the victim frequents and compromise those sites to plant drive-by downloads.17 This is called a Watering Hole Attack.18)

For crafting a phishing email, an attacker can use all the social media postings and organizational information to create the lure. They can go directly at an individual’s interests and friends, like in the example given above. They can also go indirectly and use organizational information and spoof the company’s HR department to ask employees to verify basic information.19 Knowing which individuals to impersonate in HR can help solidify the phishing email.

The attack doesn’t end there. The cyber crook wants to break into the network and probably plant malware to steal data. To make sure the malware works properly, they customize it for the appropriate versions of software running internally and the IP networks in use. In the example used in the beginning of this report, the attacker sent an exploit specifically tailored for the version of software running on the victim’s machine. Sneaking stolen data back out, called exfiltration, is always a challenge, but knowing what internal servers there are and where they’re located can provide an easy roadmap.

What to Do

There’s a limit to what we as security professionals can do to keep people from sharing information on social media. In government agencies, there are more restrictions and education around this kind of behavior (called operational security20). In the private and commercial world, corralling such behavior is much harder. So, security awareness training, citing these examples, is a good place to start. At least users will be aware of the consequences of their sharing and be forewarned to the deviousness of the attacks. Users should also be urged to report any suspicious emails and verify with IT or Security before running outside software or providing their login credentials.

A good resource you can offer your users is this advice from Public Intelligence on how to reduce their online exposure by “opting out.”21 The fewer bits of data attackers can latch onto, the better.

It is a good idea for your security team (or better yet, your threat intelligence team) to periodically scan your own organization or hire a penetration tester. This could give you clues as to who and where attackers will strike first.

Closing the information leakage on your Internet-facing gear is often not hard to do and is recommended. Every door you close denies an attacker another puzzle piece of information. All domain and IP registries should be set up with generic role names and identifiers instead of the names of individuals. Most IT folks do this anyway to reduce potential spam, but it doesn’t hurt to check.

Lastly, contracting with a good penetration testing firm to do reconnaissance and a social engineering test is a great way to see what you might have missed. It’s better to pay and control the results of a mock attack than have to live through a real one.


21 https://publicintelligence.net/njroic-opting-out/

Source: https://f5.com/labs/articles/threat-intelligence/identity-threats/phishing-the-secret-of-its-success-and-what-you-can-do-to-stop-it?sf173773424=1

Author: Ray Pompon



Presenting security and risk to board members

Category : F5

Your board’s time—and attention—is limited. But the security of your company, its reputation, and its financial health can all depend on how well your board members understand the business risks you face, and how you plan to mitigate them. Keep it short, and make it matter. This article looks at IT and security budgets and explains how to balance them against a security risk profile.

It’s that time. You have to report on the state of enterprise security to your board. The presentation is critical: the security of your company, its reputation, and its financial health all depend on you. Your board members need to understand the business risks you face, and how you plan to mitigate them. But their time—and attention—is limited. Keep it short, and make it matter.

Follow these six steps to achieve your goals.

1. Cyber threats are real—stick to the facts

They’ve heard the numbers. As much as $575 billion is lost to cyber crime annually. Data breaches can cost more than $400 million. Information like this falls on deaf ears. Board members are numb. But they need to understand the general risks of doing business online—which are endemic—versus the threats that face your industry, and your business specifically. If your organization’s largest risk is related to a lack of controls or inadequate processes, they need to know that. Most importantly, they need to know what you are doing about it. Don’t go to the board with problems for which you haven’t figured out solutions.

Tell a compelling story about a security breach, preferably in your industry. Give examples from your own company. Identify critical information assets—intellectual property, sensitive customer data—and paint a picture of what would happen and what it would cost if they were compromised.

2. Provide metrics that convince

If you have gaps in security control that you are struggling to get resources to fix, give them evidence proving that you are continuously under attack and your networks are constantly probed. Make it clear that sooner or later, the bad guys will succeed. Educate them. Surprise them.

  • 73 percent of companies suffered at least one security breach in the past year
  • About a third of employees targeted for phishing will open fraudulent emails
  • More than one in 10 take the bait—and it only takes one
  • Less than two minutes elapse from the hacker hitting send to your systems being compromised
  • Hackers are inside your organization, on average, for at least four months before they’re discovered
  • Web apps are the number one entry point for breaches

3. Get their support in adopting a culture of security

Human error accounts for 58 percent of cyber breaches. A secure business is a business in which everyone is educated about threats and does their part to reduce risk. This starts with rigorous—and repeated—training, and perhaps even commitment to a standard like ISO 27001.

4. Convince them they need incident response help

Encourage the board to face facts: all organizations today face the very real possibility they will be breached. How much damage you suffer depends on how quickly and effectively you respond, so why not get prepared? Most companies don’t have the skills for effective incident response (IR). You need technical, forensic, legal, and public relations support to get through the trauma. Your best bet: a third party with specialized expertise. A good IR firm will have your back.

5. Discuss cyber insurance

Cyber insurance is integral to your security strategy. Yet only 19 percent of companies have cyber insurance. And most are grossly underinsured, with only 12 percent of the total costs of a typical breach covered. Cyber insurance is the fastest-growing insurance in the world, with annual premiums projected to increase 300 percent from today’s $2.5 billion by 2020. Do the math for your board. Calculate how much your business can absorb from a breach without financial catastrophe. Pick a level of risk that you are comfortable with, and insure the rest.
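That math can be a back-of-the-envelope calculation. Here's a sketch, with all dollar figures purely illustrative assumptions rather than actuarial guidance:

```python
# Back-of-the-envelope cyber insurance sizing. All figures are illustrative
# assumptions, not actuarial guidance.
def coverage_gap(estimated_breach_cost, absorbable_loss, current_coverage=0):
    """Additional coverage needed so a breach stays financially survivable."""
    uninsured = estimated_breach_cost - current_coverage
    return max(0, uninsured - absorbable_loss)

# Example: a $10M estimated breach, $2M the business can absorb,
# $1M of existing coverage -> a $7M gap to close.
print(coverage_gap(10_000_000, 2_000_000, 1_000_000))  # 7000000
```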

6. Get them to champion those efforts for which you didn’t get budget approval

You have done your homework and already secured funds for some of your efforts. If you have risk areas that need addressing that you don’t have budget to address, board members need to know this and either accept the risk or champion a solution. There’s no better way to get something accomplished than by saying that “the board” requested it get done.

IN CONCLUSION

As you go through this exercise, be a little selfish. If you’re not getting the support you need to defend against existential threats, think of your own reputation and career. If your board doesn’t get it, it might be time for you to consider your options.

Source: https://wemakeappsgo.f5.com/people-and-technology/presenting-security-and-risk-to-board-members/?utm_source=social&utm_medium=linkedin&utm_campaign=a5&sf160619233=1

 

Author: Ryan Kearny



Cloud First also means Security First—here are steps for getting there

Category : F5

Cloud security is fast becoming an inescapable part of business—employees of enterprises using the cloud without security policies are ensuring it. When you move to the cloud, you need to prioritize security: ID federation, user access policies, WAF, data access and protection policies. This article is about the state of the art of preparation, as well as a solid long-term strategy for portability in case you have to move again.

The cloud has become an inescapable part of doing business—and so has cloud security.

Moving an application to the cloud or adopting a new cloud service can be a mixed blessing. For the most part, cloud service providers tend to make their applications more secure than an individual company’s security team would. However, statistics suggest most cloud applications used by employees—an estimated 94.8 percent—are not entirely enterprise-ready. Many companies lack the policies they need to be as secure as possible.

The ranks of enterprises with no cloud policies are rife with employees bringing in their own mobile devices and using their preferred services. An increasingly mobile workforce and the emergence of connected business devices, from printers to your company’s heating system to the break room refrigerator—the Internet of Things—are powered by on-demand services, making the cloud even more important at work. According to the cloud access security broker Netskope, in the third quarter of 2016, the average company had 1,031 cloud applications being used by employees.

With attackers becoming more sophisticated, you need to secure your cloud applications and make smart decisions about how to spend resources on security. While much of the focus on cloud security is on better development practices by app creators, for many companies that are consumers of apps and cloud services—and not creators—the applications often must be secured with zero visibility into their inner workings: the proverbial “black box.” For that reason, securing apps in the cloud should be treated much like securing on-premises devices.

 One thousand thirty-one cloud applications are being used by employees at the average enterprise.

Here are three basic steps to extending your security in the cloud:

Get visibility

Just as a business needs to be aware of what is going on with its own infrastructure, your security teams should also have visibility into the use and security of any cloud services. You need to know not only how employees are accessing cloud services, but which employees are accessing them.

You should take advantage of all the logging functionality offered by your cloud provider. Your provider should also be transparent in how it secures its infrastructure and provides information about security controls.
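Even a simple roll-up of access-log records answers both questions—which services are in use, and by whom. A minimal sketch (the log format here is invented for illustration; real records would come from your cloud provider or a CASB):

```python
# Sketch: summarize which cloud services employees use from access-log
# records. The record format is a made-up illustration; real logs come
# from your cloud provider's logging facility or a CASB.
from collections import defaultdict

log_records = [
    {"user": "alice", "service": "storage-app"},
    {"user": "bob", "service": "storage-app"},
    {"user": "alice", "service": "notes-app"},
]

def users_by_service(records):
    """Map each service to the sorted list of users who accessed it."""
    usage = defaultdict(set)
    for rec in records:
        usage[rec["service"]].add(rec["user"])
    return {svc: sorted(users) for svc, users in usage.items()}

print(users_by_service(log_records))
# {'storage-app': ['alice', 'bob'], 'notes-app': ['alice']}
```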

Source: https://wemakeappsgo.f5.com/people-and-technology/cloud-first-also-means-security-first-here-are-steps-for-getting-there/?utm_source=social&utm_medium=linkedin&utm_campaign=a16&sf141392920=1

Author: Robert Haynes



Reaper: The Professional Bot Herder’s Thingbot

Category : F5

This isn’t your mama’s botnet. This is a proper botnet. If you were the world’s best IoT botnet builder and you wanted to show the world how well-crafted an IoT botnet could be, Reaper is what you’d build. It hasn’t been seen attacking anyone yet, and that is part of its charm. But, what is it doing? We’ve got some ideas.

Oct 31, 2017 Update

The intentions of Reaper are as unclear today as they were a week ago. We hold to our position that the interesting aspect of Reaper is not its current size, but its engineering, and therefore its potential.

From a pure research perspective, we’re interested in how Reaper is spreading. Instead of targeting weak auth like a common thingbot, Reaper weaponizes nine (and counting) different IoT vulnerabilities.

We think the current media focus on “the numbers” instead of the method is a tad myopic. See the next “update” section below for our clarification.

What’s in a Name?

The good people at 360’s Network Security Research Lab (“Netlab 360”) have been monitoring this thingbot the longest, and they named it IoT_reaper.1 They sort of sat on the story for a while, watching Reaper evolve. Not long afterward, Check Point Software Technologies discovered it and named it IOTroop, but Brian Krebs’ article2 has given the original moniker some momentum. So, let’s go with Reaper for now.

Size and Position

Krebs puts the current size of Reaper at over one million IoT devices. We have data that suggests it could include over 3.5 million devices and could be capable of growing by nearly 85,000 devices per day. The reason Reaper has gotten so big and, honestly, the reason we’re so impressed with its construction is that, unlike its predecessors, Mirai and Persirai, Reaper uses multiple attack vectors. Mirai used default passwords. Persirai used the blank username + password combo, which frankly is such a doofus security error on the part of the manufacturer that we feel it barely deserves to have a CVE.

Reaper is almost showing off by not even trying the password cracking, and instead just exploiting different vulnerabilities (RCEs, web shells, etc.) in nine different IoT vendor devices.

Oct 31, 2017 Update (continued)

Reports on the “size” of Reaper vary. We’ve scanned 750,000 unique devices that match the nine vulnerabilities currently exploited by Reaper. We regularly scan 85,000 new, “Reaper-compatible” devices per day. We don’t know which of them are actually infected, but there’s no reason that Reaper itself couldn’t infect them, unless its authors didn’t want it to.

The nine vulnerabilities currently used by Reaper are fairly rudimentary, as vulnerabilities go. If the thingbot authors were to include a few dozen existing vulnerabilities that fit Reaper’s device-targeting profile, we think they could grow the thingbot by an additional 2.75 million nodes. If they wanted to. Adding that 2.75 million to the 750,000 that are currently “Reaper-compatible” gives the number 3.5 million.

Note: We will not be disclosing the additional CVEs as that would simply expedite the authors’ exploits.

The actual size of Reaper is probably limited to whatever size its authors want it to be.

Right now it feels like its authors are experimenting. Building and testing. Maybe Reaper is pure research. We don’t know, and that’s kind of why we respect it.

Reaper Has Better IoT Security

Unlike many of the devices that it infects, Reaper has an update mechanism. How impressive is that? If it weren’t malicious, it might even qualify to meet the standards of the proposed federal “Internet of Things (IoT) Cybersecurity Improvement Act of 2017.” Heck, the authors could even make a distribution out of it and it could become the default remote management platform for IoT.

Is It Malicious?

So far, Reaper hasn’t been seen attacking anyone with massive volumetric DDoS attacks. Yes, that’s a good thing. At least one of us thinks it might never be seen attacking anyone. If Reaper were to start being used as the ultimate Death Star weapon, that would cheapen its value. It would also result in active takedown campaigns.

Remember how at least two strike-back bots were created to combat Mirai after it attacked Krebs, OVH, and Dyn? Brickerbot actively wiped the filesystems of infected IoT devices (in many cases, turning them into little more than bricks). Hajime was more polite and merely blocked ports and left a cute little note informing the device owner that their device was participating in attacks and please stahp!

If Reaper starts attacking people with DDoS, it will turn from a marvel of thingbot infrastructure engineering into—yawn—another volumetric attack tool. The bot herders would be hunted down by law enforcement (à la the Mirai case3) and the bot would be disassembled.

What Is It Doing?

Right now, Reaper is an object lesson for IoT manufacturers and security researchers. It’s like a giant blinking red light in our faces every day warning us that we’d better figure out how to fix IoT security soon.4

Figure 1: F5’s depiction of Persirai—the mother of Reaper?

We’ve been monitoring the Persirai botnet for the last six months. We regularly measured Persirai at 750,000 IP cameras. Persirai was never seen attacking anyone, either, and we speculated about what it could be doing.

So, besides DDoSing victims, there are about a dozen different ways that a bot herder could monetize a botnet of this size. Off the top of our heads, in no particular order:

  • Spam relays (each bot could send 250 emails a day)
  • Digital currency mining (increasingly unlikely, though)
  • Tor-like anonymous proxies, which can be rented
  • Crypto ransom
  • Clickjacking
  • Ad fraud
  • Fake ad, SEO Injection
  • Fake AV fraud
  • Malware hosting

Reaper’s mission could be any one, or even several of those.

Since Reaper is also composed of many digital video devices, we could speculate this: What if both Persirai and Reaper are actually surveillance networks?

Think of the intel you could gather with access to millions of video cameras. Nation-states with active intelligence programs would be drooling all over themselves to get access to that data. The US, China, Russia, and North Korea are all obvious suspects because who else but a nation-state could process or store all the input?

Is There a Lesson Yet?

As predicted, we will continue to see more thingbots arise as we expect “things” to be the attacker infrastructure of the future. Just because Reaper is the latest, doesn’t mean it will be the last. We’ve added Reaper to the list of botnets that we’re monitoring. We suspect that entire existing botnets will get folded into it (whether they wanted to or not).

If Reaper doesn’t attack anyone or give away its intentions, it may enter the same mythical space occupied by the Conficker worm of the late 2000s. At its peak, Conficker infected over 10 million5 Windows computers and caused great concern because it could have done an insane amount of damage. But it was never activated, and it remains a study in bot construction.

The obvious lesson is that the state of IoT security is still incredibly poor, and we need to do a better job of threat modeling6 the Internet7 of8 Things9.


Source: https://f5.com/labs/articles/threat-intelligence/cyber-security/reaper-the-professional-bot-herders-thingbot?sf134799814=1

Author: David Holmes, Justin Shattuck



The Future of IoT Security Through the Eyes of F5 Threat Researchers

Category : F5

I recently had the opportunity to sit down with two of F5’s top threat researchers, Sara Boddy and Justin Shattuck, to pick their brains about IoT, its current state of “security,” and what we can expect to see in terms of threats, attacks, and mitigations in the future. Justin and Sara are co-authors of three IoT threat research reports published by F5 Labs.

Q: What brought you to the point of doing this research on the Internet of Things (IoT)?

Justin: That’s an interesting question because, as a researcher, I don’t look at IoT the same way most people do. The media typically talks about IoT in terms of WiFi devices connected to the Internet— whether it’s DVRs, IP cameras, or smart baby monitors—and, in some ways, creates a lot of hysteria and confusion around IoT. I see IoT as an evolution. Today, my research has moved beyond WiFi. I’m looking at devices on other radio frequencies, such as cellular—research with devices that serve as entry points into what IoT truly is.

So, the WiFi aspect of “IoT” seems like a long time ago for me, but at the same time, it’s an ongoing problem that we’re going to face and still have to try to find solutions for. Forbes did a great article recently on IoT initiatives and how 95% of companies plan to deploy IoT devices in the next three years. If the vast majority of companies are planning to put stuff on the Internet and use IoT, it’s going to make our lives as security professionals hell. So, the reality is that the problem is going to get bigger in the future no matter how much research we do.

Sara: Agreed. We’re just beginning to see the tip of the iceberg of IoT threats. And Justin’s right; from a researcher’s perspective, the “media version” of IoT is boring to security professionals. They just roll their eyes when you mention it! But the reason it’s still a relevant story is because cleaning up stuff on the Internet takes decades—if it ever happens at all. We’re just now starting to understand the IoT threat at a higher level. It will take years for the rest of the industry to start doing something about it and addressing the threat. Meanwhile, the threat continues to grow. So, as researchers, we have to keep talking about it, staying on top of it, and telling people what’s going on. It was the potential threat that initially got me interested, but it’s the continually growing threat that keeps me interested in it.

Q: From your perspective, what are the biggest insights in the current (volume 3) report, IoT: The Rise of Thingbots?

Justin: Expect the unexpected. The data we collect and the data we work from is very consistent, but we’ve learned to expect the unexpected every quarter when it comes to the results. When we look at the volume of activity by date and time and correlate that to temporal events in our physical world, there’s some consistency in the amount of time between recon, exploitation, and attacks in large campaigns. But otherwise, the “fallout” period—that is, the months following any attack—is complete, random chaos, as we saw post-Mirai.

Sara: I would agree with that. We’ve been good at understanding the dataset, so we’re confident when we say, “this is what we think is happening,” but it always turns out to be far bigger than we expected. For instance, pre-Mirai, we knew attackers were building a really big thingbot that could launch a large attack, but did we know it would be as big as Mirai? I don’t think we thought about it in that context. I was surprised by how big Mirai was. Now, a year after Mirai, it’s obvious from our most current data that something massive is being built right now, because the level of activity we’re seeing is orders of magnitude higher than what it took to build Mirai. I definitely believe something big is going to attack sooner rather than later. We should all be bracing ourselves for impact. Yet, no one has screamed from the rooftops that thingbots are something we should be concerned about. It’s time to start screaming now!

Q: How would you describe the level of concern among enterprises?

Sara: There seems to be very little concern among enterprises. The prevailing attitude seems to be, “I’ll deal with it when it hits me.”

Justin: More and more IoT devices are coming online, and we’re seeing more and more activity from these devices. It’s not slowing down; it’s continually growing, yet no one is voicing concern, either at the enterprise level or as consumers of these devices. So, our assumption that people just aren’t concerned about the threat seems pretty correct.

I can give you two examples of that. An industrial company I know of uses IoT devices to monitor and control their equipment. Someone unauthorized was clearly connecting to these devices, but instead of exploiting the underlying equipment or industrial control systems, they were just using the devices’ bandwidth to send spam email and text messages. The company had outsourced the management of IoT devices to a third party, which left all the default values (admin passwords, weak authentication) in place, so they obviously weren’t concerned about threats. The industrial company only became aware there was a problem when they noticed a jump in the bills on their gateways.

I know of another company that measures risk associated with various pieces of their network, hardware, and subsystems based on which ones they believe will be exploited sooner rather than later. The IoT devices they use—which they know are vulnerable—are so deprioritized that they obviously aren’t worried about any threat. Their position is that they’ll deal with the problem when it breaks! So, it’s a bit surprising the high percentage of companies (as I mentioned earlier—some 95%) that are already leveraging or plan to leverage IoT, yet their level of concern about the risk is still very low.

Sara: I think that goes back to the typical problem between security and business. As security professionals, our job is to secure things and mitigate risk in a way that still lets the business operate. IoT is the new, shiny ball in business opportunity; it presents such huge opportunities for the business (and mankind in general) that companies are not willing to step away from that opportunity. It’s our job to come behind them and patch the problems within the solutions they’re deploying. But, technology aside, just as human beings, we tend to adapt to problems and find ways to treat them versus fixing them. In the same way that we don’t cure cancer, instead we find ways to treat it, I don’t think we will “fix” the current IoT problem. I think we’ll adapt and find ways to deal with the attacks, and hopefully fix on a go-forward basis.

Justin: We’ve reached a point where packets are being flung across the Internet at 100 gigabits per second all day long, so it’s not reasonable to expect that we can fix the problem. Our role is to make the cost of performing attacks significantly more costly and difficult for attackers.

Q: What do you think the future holds for IoT security?

Justin: My initial thought is that the notion of “security by obscurity” will no longer be acceptable. This is the idea that it’s okay for manufacturers to bring a product to market that has little to no effective means of security because only a few people know about it, it’s highly specialized or proprietary, isn’t well known, serves a relatively small market, or won’t be broadly implemented or deployed. The thought being, “I can put this device out there, and as long as no one finds it, it’s okay—no big deal.” It’s a little like thinking I can leave my windows unlocked, and as long as I don’t tell anyone or draw attention to them, no one will break in. But when they do break in, it is a big deal. For a long time, IoT manufacturers have been able to get away with security by obscurity. I’m more confident now (in 2017) that device manufacturers will not be able to get away with that approach, and that will help reduce risk.

Sara: I also think the amount of residual risk humans are willing to accept—especially when it comes to cybersecurity—is much higher than it should be. People are all too comfortable with their personal data being compromised and public. You can win the risk argument that says, “We need to do something about these devices because they’re going to get compromised and turned into Death Star-sized botnets,” but IoT attacks go way beyond DDoS into personal privacy and data theft, and you still have the problem of dealing with the level of residual risk the global population is willing to accept. This goes back to the two examples of companies that Justin mentioned earlier. So, it’s still an uphill battle to get people to take the threat seriously. I don’t see that changing anytime soon, but I would love to be proven wrong!

Q: What’s your opinion about the proposed legislation, “IoT Cybersecurity Improvements Act of 2017”?

Sara: We have to start somewhere. There are two parts to this legislation that I like; the first is to set a purchasing standard for government agencies. This is really important—and it’s what we advocate to customers already: to purchase smart and do their due diligence before they buy and deploy any new technology. This legislation would set a purchasing standard for IoT devices for the federal government. The hope is that if the legislation passes, manufacturers will start to become more responsible and build security into their devices, because they want to sell products, not just to government entities but to commercial businesses as well.

Even with good legislation, however, you still have the “retroactive” problem, which is how to deal with the billions of devices already in use that don’t meet those standards. Those devices won’t magically disappear. It’s great to see legislation that tries to address the problem and deal with it in better ways moving forward, but new legislation can never really fix the existing problem.

The second part of this legislation is the provision to protect researchers who are identifying vulnerabilities in these devices. This is really, really, really needed! Researchers are testing these devices because the manufacturers are not, and somebody needs to be doing it.

Justin: I agree; as a researcher, I consider that a critical part of this bill. Researchers who work in good faith and are trying to do the right thing by disclosing vulnerabilities to manufacturers need to know they won’t be prosecuted and thrown in jail for the good work they’re doing. There’s still a healthy fear among researchers about that. Doing disclosures is difficult—and you’re never sure how they will be received or what the repercussions will be. I’d say about 90% of the serious stuff is discovered by independent researchers who get probably 1% of the credit or compensation. I base that on the number of CVEs, phishing sites, and malware discovered, etc. that’s presented to the world by individuals not associated with big-name security outfits.

Q: What does the future look like for IoT?

Justin: I’m hopeful for the future because another industry is building up at the same time and getting lots of attention. Machine learning, AI, and data sciences are going to be a big part of our future. As researchers, cybersecurity experts, and InfoSec professionals, we continue to research and monitor and go after bad guys and provide information to law enforcement agencies, etc., but ultimately, machines will be doing more of the initial work. We’re already teaching machines to do that, because it’s physically impossible for humans to sift through all the information that can be collected. (It’s difficult already for us to sift through all the data we collect for the F5 Labs IoT reports.) That’s the fluffy, warm and fuzzy, rainbows-and-unicorns upside we can look forward to. That’s why I appreciate the F5 Labs “fact is greater than fear” and “data is power” slogans—the future doesn’t have to be dark and ominous, all skulls and crossbones.

Sara: AI and machine learning are where we’ll get really creative and start to solve this IoT problem. I recently read a fascinating article about the fungal ecosystem that operates like an underground Internet, connecting plants to one another. It can be used for good—to share nutrients, for example—and as a defense to wipe out unwanted plants by spreading toxins throughout the ecosystem. I think that IoT will become this for the Internet someday. When it comes to machine learning and AI, I can see where IoT devices will become the “neural net nodes” that collect data, do the processing, and provide status on things. And then you have AI in there detecting when a node is down (maybe it’s compromised) and transferring resources, etc. I think IoT will be that way in the future. But it will take a while for us to get over the hump, because we’re barely at the point of understanding the problem. It takes researchers years to really understand it, and then it takes business a couple more years to react. We’re still in the infant phase, but in the future, I think it’s going to be really cool.

Justin: Even now, people have a hard time trying to find appropriate ways to visually represent data, and it’s even harder to find ways to transform and analyze that data. It’s a small community that does this. The technology does exist, though. It reminds me of the 1990s “SETI” (search for extraterrestrial intelligence) application. You installed it on your home computer to process information, individually or as part of a team, and it would submit the results. Now we do similar things for cryptocurrency—bitcoin mining essentially distributes a very large amount of work across many devices that are all working to solve the same type of problem. It’s an example of how we’ve found ways to process large swaths of information very, very quickly. We’re already doing this with things like cryptocurrency and advertising. There is a lot of data science, AI, and machine learning that goes into projects that analyze data. Think of the difference that could be made in the world if we pointed the same resources that give you relevant ads while shopping online toward information security problems.

Sara: Problems will always keep getting bigger, but we’re finding better ways to address them. There will be a new and bigger Mirai attack, everyone will panic, and then we’ll get level-headed about it and start trying to fix it again. That’s just how human beings behave. That hasn’t changed in centuries.

Source: https://f5.com/labs/articles/threat-intelligence/cyber-security/interview-with-the-experts-the-future-of-iot-security-through-the-eyes-of-f5-threat-researchers?sf125369335=1

Author: Sara Boddy


  • 0

Where does a WAF fit in the data path?

Category : F5

Web application firewalls (WAFs) are an integral component of application protection. In addition to being a requirement for complying with PCI-DSS, WAFs are excellent at protecting against the OWASP Top 10. They’re also a go-to solution for addressing zero-day vulnerabilities, either through rapid release of signature updates or, in some cases, the use of programmatic functions to virtually patch applications while a long-term solution is being deployed.

The question is, where do you put such protection?

There are options, of course. The data path contains multiple insertion points at which a WAF can be deployed. But that doesn’t mean every insertion point is a good idea. Some are less efficient than others, some introduce unacceptable points of failure, and others introduce architectural debt that incurs heavy interest penalties over time.

Ideally, you’ll deploy a WAF behind your load balancing tier. This optimizes for utilization, performance, and reliability while providing the protection necessary for all apps – but particularly for those exposed on the Internet.

Recommended Placement: WAF behind Load Balancing Tier
Utilization

The resource requirements (CPU and the like) involved in making a load balancing decision are minimal. That is generally why a load balancer can support millions of users simultaneously. WAFs require more resources because they inspect the entire payload and evaluate it against signatures and policies to determine whether a request is valid and safe.

Modern data center models borrow heavily from cloud and its usage-based cost structure. Utilization becomes a key factor in operational costs: higher utilization leads to additional resource requirements, which consume budgets. Optimizing for utilization is therefore a sound strategy for constraining costs both in the data center and in public cloud environments.
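To make the utilization argument concrete, here is a rough back-of-the-envelope model in Python. Every number in it (request volume, per-request CPU cost, hourly rate) is an illustrative assumption, not vendor data:

```python
# Illustrative model: under usage-based pricing, cost tracks CPU time,
# so a placement that lowers per-request work lowers the monthly bill.
# All figures below are hypothetical assumptions.

REQUESTS_PER_MONTH = 500_000_000
CPU_MS_LB_DECISION = 0.05      # assumed CPU cost of one load balancing decision
CPU_MS_WAF_INSPECTION = 2.0    # assumed CPU cost of full-payload inspection
COST_PER_CPU_HOUR = 0.04       # assumed usage-based rate, dollars

def monthly_cost(cpu_ms_per_request: float) -> float:
    """Convert per-request CPU milliseconds into a monthly dollar cost."""
    cpu_hours = REQUESTS_PER_MONTH * cpu_ms_per_request / 1000 / 3600
    return cpu_hours * COST_PER_CPU_HOUR

lb_cost = monthly_cost(CPU_MS_LB_DECISION)
waf_cost = monthly_cost(CPU_MS_WAF_INSPECTION)
print(f"LB tier:  ${lb_cost:,.2f}/month")
print(f"WAF tier: ${waf_cost:,.2f}/month ({waf_cost / lb_cost:.0f}x the LB cost)")
```

With these assumed numbers the WAF tier costs 40 times what the LB tier does per request, which is why utilization is the lever worth optimizing.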

Reliability

It is common practice to scale WAFs horizontally. That is, you use the LB to scale WAFs. This architectural decision is directly related to utilization. While many WAFs scale well, they can still be overwhelmed by flash traffic or attacks. If the WAF is positioned in front of the LB, you either need another LB tier to separately scale it or you risk impacting performance and availability.
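The “use the LB to scale WAFs” pattern can be sketched in a few lines of Python. The instance names and the plain round-robin policy are purely illustrative; real load balancers also weigh health, load, and persistence:

```python
from itertools import cycle

class WafPool:
    """Minimal sketch of an LB tier spreading traffic across WAF instances.

    Horizontal scaling means adding instances to the pool; the LB rotates
    through them so no single WAF is overwhelmed by flash traffic.
    """
    def __init__(self, instances):
        self.instances = list(instances)
        self._rr = cycle(self.instances)

    def add_instance(self, instance):
        # Scale out: a new WAF joins the rotation.
        self.instances.append(instance)
        self._rr = cycle(self.instances)

    def route(self):
        # Round-robin selection of the next WAF to receive a request.
        return next(self._rr)

pool = WafPool(["waf-1", "waf-2"])
print([pool.route() for _ in range(4)])  # alternates between the two WAFs
```

If the WAF sat in front of the LB instead, something else upstream would have to play the role of `WafPool`, which is exactly the extra tier the recommended placement avoids.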

Alternative Placement: WAF in front of One Load Balancing Tier…and behind Another
Performance

Performance is a key concern in an application economy. With so many variables and systems interacting with data as it traverses the data path, it can be frustrating to nail down exactly where performance is bogged down, let alone to tune each component without impacting the others. As has been noted many times before, as load on a system increases, performance decreases. This is one of the unintended consequences of failing to optimize for utilization, and a key reason why seasoned network architects use a 60% utilization threshold on network devices.

Deploying a WAF behind the LB tier eliminates the need for a dedicated upstream WAF load balancing tier, which removes an entire layer of network from the equation. While the processing time eliminated may not seem like much, the microseconds spent managing connections and scaling WAF services, and then doing it all again to choose a target app instance or server, add up. Eliminating this tier by deploying the WAF behind the LB tier gives back precious microseconds that today’s users will not only notice, but appreciate.
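A bit of hedged arithmetic shows why those microseconds matter at page scale. All three figures below are assumptions chosen only to illustrate the compounding effect, not measurements of any product:

```python
# Illustrative arithmetic (all figures are assumptions): each extra proxy
# tier adds per-request connection-management overhead, and a page that
# fetches many objects multiplies that overhead.
EXTRA_TIER_US = 500      # assumed per-request overhead of one extra tier, microseconds
OBJECTS_PER_PAGE = 100   # assumed number of objects a modern page fetches
SERIAL_FRACTION = 0.25   # assume a quarter of those fetches happen serially

added_ms = EXTRA_TIER_US * OBJECTS_PER_PAGE * SERIAL_FRACTION / 1000
print(f"Added page load time: ~{added_ms:.1f} ms")  # ~12.5 ms under these assumptions
```

Even with generous parallelism, a per-tier overhead that looks negligible per request becomes user-visible once it is multiplied across every object on every page.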

Visibility

Visibility is a key requirement for security solutions in the data path. Without the ability to inspect the entire flow – including the payload – many of a WAF’s security functions are rendered moot. After all, most malicious code is found in the payload, not in protocol headers. Positioning a WAF behind the LB tier enables decryption of SSL/TLS before traffic is passed on to the WAF for inspection. This is a more desirable architecture because it is likely the load balancer will need visibility into secured traffic anyway, to determine how to properly route requests.
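The decrypt-then-inspect flow can be sketched as a toy pipeline in Python. Both function names and the two signatures are hypothetical stand-ins; production WAFs ship large, curated rule sets and real TLS termination:

```python
import re

# Hypothetical signature set; real WAFs use far larger, curated rules.
SIGNATURES = [
    re.compile(r"<script\b", re.IGNORECASE),                 # reflected XSS attempt
    re.compile(r"('|\")\s*or\s+1\s*=\s*1", re.IGNORECASE),   # classic SQL injection
]

def lb_terminate_tls(encrypted_request: bytes) -> str:
    # Stand-in for TLS termination at the LB tier; a real LB would decrypt
    # with its private key. Here we simply decode for illustration.
    return encrypted_request.decode("utf-8")

def waf_inspect(plaintext_request: str) -> bool:
    # The WAF only sees useful data because the LB decrypted first:
    # the malicious content hides in the payload, not the headers.
    return not any(sig.search(plaintext_request) for sig in SIGNATURES)

request = lb_terminate_tls(b"POST /login user=admin' OR 1=1 --")
print("forward to app" if waf_inspect(request) else "block")  # block
```

The point of the sketch: if the WAF sat where it could only see ciphertext, `waf_inspect` would have nothing meaningful to match against.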

Recommended Configuration: Decryption and Inspection for added Security

All that said, a WAF fits in the data path pretty much anywhere you want it to. It’s an L7 proxy-based security service deployed as an intermediary in the network path. It could ostensibly sit at the edge of the network, if you wanted it to. But if you want to optimize your architecture for performance, reliability, and utilization at the same time, then your best bet is to position that WAF behind the load balancing tier, closer to the application it is protecting.

With the right tools, comprehensive WAF coverage can significantly reduce your exposures, as well as your operating costs. Learn more about protecting your apps from the OWASP Top 10 and other threats by registering for F5’s upcoming webinar, Thursday, October 26 at 10 a.m. PT.

Source: https://f5.com/about-us/blog/articles/where-does-a-waf-fit-in-the-data-path-27579?sf123388921=1

Author:  LORI MACVITTIE


  • 0

Example-driven Insecurity Illustrates Need for WAF

Category : F5

Learning online is big. Especially for those who self-identify as a developer. If you take a peek at Stack Overflow’s annual developer survey (in which they get tens of thousands of responses) you’ll find a good portion of developers that are not formally trained:

  • Among current professional developers globally, 76.5% of respondents said they had a bachelor’s degree or higher, such as a Master’s degree or equivalent.
  • 20.9% said they had majored in other fields such as business, the social sciences, natural sciences, non-computer engineering, or the arts.
  • Of current professional developers, 32% said their formal education was not very important or not important at all to their career success. This is not entirely surprising given that 90% of developers overall consider themselves at least somewhat self-taught: a formal degree is only one aspect of their education, and so much of their practical day-to-day work depends on their company’s individual tech stack decisions.

Note the highlighted portion from the survey results. I could write a thesis on why this is true, but suffice to say that when I was studying for my bachelor’s, I wrote in Pascal, C++, and LISP. My first real dev job required C/C++, so I was good there. But later I was required to learn Java. And SQL. I didn’t go back to school to do that. I turned to books and help files and whatever other documentation I could get my hands on. Self-taught is the norm whether you’re formally educated or not, because technology changes and professionals don’t have the time to go back to school just to learn a new language or framework.

This is not uncommon at all, for any of us, I suspect. We don’t go back to school to learn a new CLI or API. We don’t sign up for a new degree just to learn Python or Node.js. We turn to books and content on the Internet, to communities, and we rely heavily on “example code.”

[Image: ways devs teach themselves]

We still rely on blogs and documentation, not just from our own engineers and architects, but other folks, too. Because signing up for a Ph.D. now isn’t really going to help learn me* the ins and outs of the Express framework or jQuery.

It’s no surprise then that network engineers and operations (who, being the party of the first part of the second wave of DevOps, shall henceforth be known as NetOps) are also likely to turn to the same types of materials to obtain those skills they need to be proficient with the tools and technologies required. That’s scripting languages and APIs, for those just tuning in. And they, too, will no doubt copy and paste their hearts out as they become familiar with the language and systems beginning to automate the production pipeline.

And so we come to the reason I write today. Example code.

There’s a lot of it. And it’s good code, don’t walk away thinking I am unappreciative or don’t value example code. It’s an invaluable resource for anyone trying to learn new languages and APIs. What I am going to growl about is that there’s a disconnect between the example code and security that needs to be addressed. Because as we’re teaching new folks to code, we should also be instilling in them at least an awareness of security, rather than blatantly ignoring it.

I say this because app security is not – repeat NOT – optional. I could throw stat after stat after stat but I hope at this point I’m preaching to the choir. App security is not optional, and it is important to promulgate that attitude until it’s viewed as part and parcel to development. Not just apps, mind you, but the scripts and systems driving automation at the fingertips of DevOps and NetOps.

I present as the source of my angst this example.

[Image: example code that violates Security Rule Zero]

The code itself is beautiful. Really. Well formatted, nice spacing. Readable. I love this code. Except the part that completely violates Security Rule Zero.

THOU SHALT NOT TRUST USER INPUT. EVER.

I’m disappointed that there’s not even a head nod to the need to sanitize the input. Not in the comments nor in the article’s text. The code just passes on “username” to another function with nary a concern that it might contain malicious content.

But Lori, obviously this code is meant to illustrate implementation of something that isn’t designed to actually go into production. It’s not a risk.

That is not the point. The point is that if we continue to teach folks to code we ought to at least make an attempt to teach them to do it securely. To mention it as routinely as one points out to developers new to C/C++ that if you don’t allocate memory to a pointer before accessing it, it’s going to crash.
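Teaching it securely can be as cheap as a head nod. Here is a minimal Python sketch of allowlist validation for a username before it is passed along to another function; the character rules are an assumption and should be adjusted to the app’s actual policy:

```python
import re

# Allowlist, not blocklist: accept only what a username should look like.
# The exact rules here (letters, digits, ._- and 3-32 chars) are assumed.
USERNAME_RE = re.compile(r"[A-Za-z0-9_.-]{3,32}")

def validated_username(raw: str) -> str:
    """Reject anything that isn't a plausible username before using it."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

print(validated_username("lori_macvittie"))        # passes through unchanged
try:
    validated_username("admin'; DROP TABLE users;--")
except ValueError:
    print("rejected")  # malicious input never reaches the next function
```

Two comments and one regex would have been enough for the example in question to model Security Rule Zero instead of ignoring it.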

I could fill blog after blog with examples of how security and the SDLC are given lip service, but when it comes down to brass tacks and teaching folks to code, it’s suddenly alone in a corner with an SEP (somebody else’s problem) field around it.

This is just another reason why web application firewalls are a critical component of any app security strategy. Organizations need a firebreak between user input and the apps that blindly accept it as legitimate, to avoid becoming the latest victim on a lengthy list of app security holes.

Because as much as we like to talk about securing code, when we actually teach it to others we don’t walk the walk. We need to be more aware of this lack of attention to security – even in example code, because that’s where developers (and increasingly NetOps) learn – but until we start doing it, we need security solutions like WAF to fill in the gaps left by insecure code.
* Or English, apparently. Oh come on, I do that on purpose. Because sometimes it’s fun to say it wrong.

Source: https://f5.com/about-us/blog/articles/example-driven-insecurity-illustrates-need-for-waf-27704?sf119697594=1

Author: LORI MACVITTIE

