Category Archives: F5


Where does a WAF fit in the data path?

Category : F5

Web application firewalls (WAFs) are an integral component of application protection. In addition to being a requirement for complying with PCI-DSS, WAFs are excellent at protecting against the OWASP Top 10. They’re also a go-to solution for addressing zero-day vulnerabilities, either through rapid release of signature updates or, in some cases, the use of programmatic functions to virtually patch applications while a long-term solution is being deployed.

The question is, where do you put such protection?

There are options, of course. The data path contains multiple insertion points at which a WAF can be deployed. But that doesn’t mean every insertion point is a good idea. Some are less efficient than others, some introduce unacceptable points of failure, and others introduce architectural debt that incurs heavy interest penalties over time.

Ideally, you’ll deploy a WAF behind your load balancing tier. This optimizes for utilization, performance, and reliability while providing the protection necessary for all apps – but particularly for those exposed on the Internet.

Recommended Placement: WAF behind Load Balancing Tier
Utilization

The resource requirements (CPU and the like) involved in making a load balancing decision are minimal. This is generally why an LB is able to simultaneously support millions of users, while WAFs require more resources – because they’re inspecting the entire payload and evaluating it against signatures and policies to determine whether the request is valid and safe.
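
To make the asymmetry concrete, here is a deliberately simplified Python sketch – not F5 code; the backends, signatures, and request body are hypothetical stand-ins. The load balancing decision is a cheap hash lookup, while WAF inspection runs every signature over the full payload:

```python
import re
from hashlib import sha256

BACKENDS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]

SIGNATURES = [re.compile(p, re.IGNORECASE) for p in (
    r"union\s+select",   # SQLi
    r"<script\b",        # XSS
    r"\.\./\.\./",       # path traversal
)]

def lb_decision(client_ip: str) -> str:
    """Cheap, constant-time-ish choice: hash the client, pick a backend."""
    digest = sha256(client_ip.encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

def waf_inspect(payload: str) -> bool:
    """Linear in payload size: every signature runs over the full body."""
    return not any(sig.search(payload) for sig in SIGNATURES)

request_body = "username=alice&comment=hello world"
backend = lb_decision("203.0.113.7")
safe = waf_inspect(request_body)
print(f"route to {backend}, safe={safe}")
```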

Modern data center models borrow heavily from cloud and its usage-based cost structure. Utilization becomes a key factor in operational costs. Higher utilization leads to additional resource requirements, which consumes budgets. Optimizing for utilization is therefore a sound strategy for constraining costs in both the data center and in public cloud environments.

Reliability

It is common practice to scale WAFs horizontally. That is, you use the LB to scale WAFs. This architectural decision is directly related to utilization. While many WAFs scale well, they can still be overwhelmed by flash traffic or attacks. If the WAF is positioned in front of the LB, you either need another LB tier to separately scale it or you risk impacting performance and availability.

Alternative Placement: WAF in front of One Load Balancing Tier…and behind Another
Performance

Performance is a key concern in an application economy. With so many variables and systems interacting with data as it traverses the data path, it can be frustrating to nail down exactly where performance is bogging down, let alone to tune each component without impacting the others. As has been noted many times before, as load on a system increases, performance decreases. This is one of the unintended consequences of not optimizing for utilization, and a key reason why seasoned network architects use a 60% utilization threshold on network devices.

Deploying a WAF behind the LB tier eliminates the need for an upstream, designated WAF load balancing tier, which removes an entire network layer from the equation. While the processing time eliminated may not seem like much, those precious microseconds spent managing connections and scaling WAF services – and then doing it again to choose a target app instance/server – matter. Eliminating this tier by deploying the WAF behind the LB tier gives back precious microseconds that today’s users will not only notice, but appreciate.

Visibility

Visibility is a key requirement for security solutions in the data path. Without the ability to inspect the entire flow – including the payload – many of a WAF’s security functions are rendered moot. After all, most malicious code is found in the payload, not in protocol headers. Positioning a WAF behind the LB tier enables decryption of SSL/TLS before traffic is passed on to the WAF for inspection. This is a more desirable architecture because it is likely the load balancer will need visibility into secured traffic anyway, to determine how to properly route requests.

Recommended Configuration: Decryption and Inspection for added Security

All that said, a WAF fits in the data path pretty much anywhere you want it to. It’s an L7 proxy-based security service deployed as an intermediary in the network path. It could ostensibly sit at the edge of the network, if you wanted it to. But if you want to optimize your architecture for performance, reliability, and utilization at the same time, then your best bet is to position that WAF behind the load balancing tier, closer to the application it is protecting.

With the right tools, comprehensive WAF coverage can significantly reduce your exposures, as well as your operating costs. Learn more about protecting your apps from the OWASP Top 10 and other threats by registering for F5’s upcoming webinar, Thursday, October 26 at 10 a.m. PT.

Source: https://f5.com/about-us/blog/articles/where-does-a-waf-fit-in-the-data-path-27579?sf123388921=1

Author:  LORI MACVITTIE



Example-driven Insecurity Illustrates Need for WAF

Category : F5

Learning online is big. Especially for those who self-identify as a developer. If you take a peek at Stack Overflow’s annual developer survey (in which they get tens of thousands of responses) you’ll find a good portion of developers that are not formally trained:

  • Among current professional developers globally, 76.5% of respondents said they had a bachelor’s degree or higher, such as a Master’s degree or equivalent.
  • 20.9% said they had majored in other fields such as business, the social sciences, natural sciences, non-computer engineering, or the arts.
  • Of current professional developers, 32% said their formal education was not very important or not important at all to their career success. This is not entirely surprising given that 90% of developers overall consider themselves at least somewhat self-taught: a formal degree is only one aspect of their education, and so much of their practical day-to-day work depends on their company’s individual tech stack decisions.

Note the last point from the survey results – that 90% of developers consider themselves at least somewhat self-taught. I could write a thesis on why this is true, but suffice to say that when I was studying for my bachelor’s, I wrote in Pascal, C++, and LISP. My first real dev job required C/C++, so I was good there. But later I was required to learn Java. And SQL. I didn’t go back to school to do that. I turned to books and help files and whatever other documentation I could get my hands on. Self-taught is the norm whether you’re formally educated or not, because technology changes and professionals don’t have the time to go back to school just to learn a new language or framework.

This is not uncommon at all, for any of us, I suspect. We don’t go back to school to learn a new CLI or API. We don’t sign up for a new degree just to learn Python or Node.js. We turn to books and content on the Internet, to communities, and we rely heavily on “example code.”

[Chart: ways developers teach themselves]

We still rely on blogs and documentation, not just from our own engineers and architects, but other folks, too. Because signing up for a Ph.D. now isn’t really going to help learn me* the ins and outs of the Express framework or jQuery.

It’s no surprise then that network engineers and operations (who, being the party of the first part of the second wave of DevOps, shall henceforth be known as NetOps) are also likely to turn to the same types of materials to obtain those skills they need to be proficient with the tools and technologies required. That’s scripting languages and APIs, for those just tuning in. And they, too, will no doubt copy and paste their hearts out as they become familiar with the language and systems beginning to automate the production pipeline.

And so we come to the reason I write today. Example code.

There’s a lot of it. And it’s good code, don’t walk away thinking I am unappreciative or don’t value example code. It’s an invaluable resource for anyone trying to learn new languages and APIs. What I am going to growl about is that there’s a disconnect between the example code and security that needs to be addressed. Because as we’re teaching new folks to code, we should also be instilling in them at least an awareness of security, rather than blatantly ignoring it.

I say this because app security is not – repeat NOT – optional. I could throw stat after stat after stat but I hope at this point I’m preaching to the choir. App security is not optional, and it is important to promulgate that attitude until it’s viewed as part and parcel of development. Not just apps, mind you, but the scripts and systems driving automation at the fingertips of DevOps and NetOps.

I present as the source of my angst this example.

[Screenshot: example code that violates Security Rule Zero]

The code itself is beautiful. Really. Well formatted, nice spacing. Readable. I love this code. Except the part that completely violates Security Rule Zero.

THOU SHALT NOT TRUST USER INPUT. EVER.

I’m disappointed that there’s not even a head nod to the need to sanitize the input. Not in the comments nor in the article’s text. The code just passes on “username” to another function with nary a concern that it might contain malicious content.

But Lori, obviously this code is meant to illustrate implementation of some thing that isn’t designed to actually go into production. It’s not a risk.

That is not the point. The point is that if we continue to teach folks to code we ought to at least make an attempt to teach them to do it securely. To mention it as routinely as one points out to developers new to C/C++ that if you don’t allocate memory to a pointer before accessing it, it’s going to crash.
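
To show how small that ask is, here’s a minimal sketch of the kind of head nod I mean – the function names are hypothetical, since the original example isn’t reproduced here – validating “username” against an allowlist before handing it to anything downstream:

```python
import re

# Allowlist: letters, digits, underscore, dot, hyphen; 1-32 chars.
# The exact policy is an assumption; the point is that *some* policy
# runs before user input is handed to anything downstream.
USERNAME_RE = re.compile(r"^[A-Za-z0-9._-]{1,32}$")

def sanitize_username(raw: str) -> str:
    """Security Rule Zero: never trust user input."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def lookup_user(username: str):
    # Hypothetical downstream function standing in for the example's call.
    print(f"looking up {username}")

lookup_user(sanitize_username("alice_01"))  # fine
try:
    lookup_user(sanitize_username("bob'; DROP TABLE users;--"))
except ValueError as err:
    print(f"rejected: {err}")
```

Even a one-line comment saying “validate this before production” would do; the point is that the example acknowledges the rule at all.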

I could fill blog after blog with examples of how security and the SDLC are given lip service, but when it comes down to brass tacks and teaching folks to code, it’s suddenly alone in a corner with an SEP (somebody else’s problem) field around it.

This is just another reason why web application firewalls are a critical component of any app security strategy. Organizations need a firebreak between user input and the apps that blindly accept it as legitimate to avoid becoming the latest victim of a lengthy list of app security holes.

Because as much as we like to talk about securing code, when we actually teach it to others we don’t walk the walk. We need to be more aware of this lack of attention to security – even in example code, because that’s where developers (and increasingly NetOps) learn – but until we start doing it, we need security solutions like WAF to fill in the gaps left by insecure code.
* Or English, apparently. Oh come on, I do that on purpose. Because sometimes it’s fun to say it wrong.

Source: https://f5.com/about-us/blog/articles/example-driven-insecurity-illustrates-need-for-waf-27704?sf119697594=1

Author: LORI MACVITTIE



Bots are the Bane of Turing Security Test

Category : F5

Bots are cool. Bots are scary. Bots are the future. Bots are getting smarter every day.

Depending on what kind of bot we’re talking about, we’re either frustrated or fascinated by them. On the one hand, chat bots are often considered a key component of business’ digital transformation strategies. On the consumer side, they provide an opportunity to present a rapid response to questions and queries. On the internal side, they can execute tasks and answer questions on the status of everything from a recently filed expense report to the current capacity of your brand-spanking-new app.

On the other (and admittedly darker) hand, some bots are bad. Very bad. There are thingbots – those IoT devices that have been compromised and joined a Death Star botnet. And there are bots whose only purpose is to scrape, steal, and stop business through subterfuge.

It is these latter bots we are concerned with today, as they are getting significantly smarter and sadly, they are now the majority of “users” on the Internet.

[Chart: bad bot impact]

Seriously. 52% of all Internet traffic is non-human. Now some of that is business-to-business APIs and legitimate bots, like search indexers and media bots. But a good portion of it is just downright bad bots. According to Distil Networks, which tracks these digital rodents, “bad bots made up 20% of all web traffic and are everywhere, at all times.” For large websites, they accounted for 21.83% of traffic – a 36.43% increase since 2015. Other research tells a similar story. No matter who is offering the numbers, none of them are good news for business.

Distil Networks’ report notes that “in 2016, a third (32.36%) of sites had bad bot traffic spikes of 3x the mean, and averaged 16 such spikes per year.” Sudden spikes are a cause of performance problems (as load increases, performance decreases) as well as downtime.

If the bots are attacking apps on-premises, they can not only cause outages, but also drive up the cost associated with that app. Many apps are still deployed on platforms that require licenses. Each time a new instance is launched, so is an entry in the accounting ledger. It costs real money to scale software. Regardless of licensing costs, there are associated costs with every transaction, because hardware and bandwidth still aren’t as cheap as we’d like.

In the cloud, scale is easier (usually) but you’re still going to pay for it. Neither compute nor bandwidth is free in the cloud, and like their on-premises counterparts, the cost of a real transaction is going to increase thanks to bot traffic.

The answer is elementary, of course. Stop the traffic before it gets to the app.

This sounds far easier than it is. You see, security is forced to operate as “player C” in the standard interpretation of the Turing Test. For those who don’t recall, the Turing Test forces an interrogator (player C) to determine which player (A or B) is a machine and which is human. And it can only use written responses, because otherwise, well, duh. Easy.

In much the same way today, security solutions must distinguish between human and machine using only digitally imparted information.

Web App Firewalls: Player ‘C’ in the Turing Security Test

Web application firewalls (WAFs) are designed to be able to do this. Whether as a service, on-premises, or in the public cloud, a WAF protects apps against bots by detecting them and refusing them access to the resources they desire. The problem is that many WAFs only filter bots that match known bad user-agents and IP addresses. But bots are getting smarter, and they know how to rotate through IP addresses and switch up user-agents to evade detection. Distil notes this increasing intelligence when it points out that 52.05% of bad bots “load and execute JavaScript—meaning they have a JavaScript engine installed.”
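
A minimal sketch of that naive first-generation filtering (hypothetical lists; real WAF policies are far richer) shows why rotation defeats it – the moment a bot presents an unlisted IP and a mainstream user-agent, it sails through:

```python
# Naive bot filtering by static blocklists - exactly what smarter bots evade.
BAD_USER_AGENTS = {"sqlmap", "python-requests/2.0", "curl/7.1"}  # hypothetical
BAD_IPS = {"198.51.100.23", "203.0.113.99"}                      # hypothetical

def naive_filter(ip: str, user_agent: str) -> bool:
    """Return True if the request is allowed through."""
    if ip in BAD_IPS:
        return False
    if any(bad in user_agent.lower() for bad in BAD_USER_AGENTS):
        return False
    return True

# A rotating bot simply presents a fresh IP and a common user-agent:
print(naive_filter("192.0.2.17", "Mozilla/5.0 (Windows NT 10.0)"))  # True
```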

Which means you have to have a whole lot more information about the “user” if you’re going to successfully identify – and reject – bad bots. The good news is that information is out there, and it’s all digital. Just as there is a great deal that can be learned from a human’s body language, speech patterns, and vocabulary choices, so can a great deal be gleaned from the digital bits that are carried along with every transaction.

With the right combination of threat intelligence, device profiling, and behavioral analysis, a WAF can correctly distinguish bad bots from legitimate users – human or bot. Your choice determines whether or not a bot can outsmart your security strategy and effectively “win” the Turing Security Test.

  • Threat Intelligence 

    Threat intelligence combines geo-location with proactive classification of traffic and uses intelligence feeds from as many credible sources as possible to help identify bad bots. This is essentially “big security data” that enables an entire ecosystem of security partners to share intelligence, resulting in timely – and thus more accurate – identification of the latest bot attempts.

  • Device Profiling 
    Profiling a device includes comparing requests against known bot signatures and performing identity checks. Operating system, network, device type – everything that can be gleaned from a connection (and there’s a lot) can be used. Fingerprinting is also valuable because it turns out that the amount of information (perhaps inadvertently) shared by browsers (and bots alike) is pretty close to enough to uniquely identify them. A great read on this theory can be found on the EFF site. I’ll note that it’s been statistically determined that, as of 2007, it required only 32.6 bits of information to uniquely identify an individual. User-agent strings contain about 10.5 bits, and bots freely provide that.
  • Behavioral Analysis 
    In a digital world, however, profiles can change in an instant and location can be masked or spoofed. That’s why behavioral analysis is also part of distinguishing bad bots from legitimate traffic. This often takes the form of some sort of challenge. We see this as users in CAPTCHAs and “I’m not a robot” checkboxes, but those are not the only means of challenging bots. Behavioral analysis also watches for session and transaction anomalies, as well as attempts to brute-force access.

Using all three provides more comprehensive context and allows the WAF to correctly identify bad bots and refuse them access.
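
As a toy illustration of how the three signal classes might combine into a single verdict – the weights and thresholds below are invented for illustration, not drawn from any product:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    ip_on_threat_feed: bool      # threat intelligence
    fingerprint_known_bot: bool  # device profiling
    failed_js_challenge: bool    # behavioral analysis
    requests_per_minute: int     # behavioral analysis

def bot_score(ctx: RequestContext) -> float:
    """Weighted blend of the three signal classes; weights are illustrative."""
    score = 0.0
    if ctx.ip_on_threat_feed:
        score += 0.4
    if ctx.fingerprint_known_bot:
        score += 0.3
    if ctx.failed_js_challenge:
        score += 0.2
    if ctx.requests_per_minute > 120:  # hypothetical anomalous-rate cutoff
        score += 0.1
    return score

ctx = RequestContext(True, False, True, 300)
print("block" if bot_score(ctx) >= 0.5 else "allow")  # block
```

The point is not the particular arithmetic but that no single signal decides; context is the aggregate.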

We (that’s the Corporate We) have always referred to this unique combination of variables as ‘context’. Context is an integral component of many security solutions today – access control, identity management, and app security. Context is critical to an app-centric security strategy, and it is a key capability of any WAF able to deal with bad bots accurately and effectively. Context provides the “big picture” and allows a WAF to correctly separate bad from good and in doing so protect valuable assets and constrain the costs of doing business.

The fix is in. Bots are here to stay, and with the right WAF you can improve your ability to prevent them from doing what bad bots do – steal data and resources that have real impacts on the business’ bottom line.

Source: https://f5.com/about-us/blog/articles/bots-are-the-bane-of-turing-security-test-27890?sf117565046=1

Author: LORI MACVITTIE



Six Steps to Finding Honey in the OWASP

Category : F5

According to Verizon’s 2014 Data Breach Investigations Report [1], “Web applications remain the proverbial punching bag of the Internet.” [2] Things haven’t improved much since then.

What is it about web applications that makes them so precarious? There are three primary answers. First, since most web applications are configured or coded specifically for the organizations they serve, they are more unique than commercial off-the-shelf software, which is often rigorously tested by a wide marketplace. Because of this uniqueness, developers must pay extra attention to each application in order to find and eliminate security problems.

Second, most web applications are available to the entire Internet, which means anyone, at any time, can poke and pry to try to break them. There are nearly 4 billion people on the Internet, and most of them primarily use the web [3]. That doesn’t count the enormous numbers of bots trawling the web every day; some say there are more bots than humans viewing sites [4]. If you have a website, someone—or something—is looking at it.

Third, the World Wide Web itself was never designed with robust security features. HTTP, the protocol underlying all web traffic, is stateless, meaning each request for data between client and server is independent of all previous requests. Add-on protocols and tools such as web cookies and session management trackers are needed to maintain consistency from when users first authenticate until they retrieve data [5]. As with all add-on tools, these are less than ideal solutions and can introduce gaps in coverage and capability.
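
As a concrete illustration of one such add-on – a minimal sketch, not production guidance, with HMAC assumed as the signing primitive – a server can sign a session token so that state survives across otherwise independent HTTP requests:

```python
import hmac, hashlib, secrets

SECRET_KEY = secrets.token_bytes(32)  # server-side secret; never sent to client

def issue_session(user: str) -> str:
    """Mint a signed token the client returns on every stateless request."""
    sig = hmac.new(SECRET_KEY, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def validate_session(token: str) -> str | None:
    """Recompute the signature; reject anything tampered with."""
    user, _, sig = token.partition(":")
    expected = hmac.new(SECRET_KEY, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(sig, expected) else None

cookie = issue_session("alice")
print(validate_session(cookie))              # alice
print(validate_session("mallory:deadbeef"))  # None
```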

Enter the OWASP Top 10, the most famous project of the Open Web Application Security Project (OWASP). A release candidate of the 2017 OWASP Top 10 [6] is out and due to be finalized in November. This new version updates the 2013 list by combining two old items, “Insecure Direct Object References” and “Missing Function Level Access Control,” into a new item called “Broken Access Control.” Another change is the dropping of item 10, “Unvalidated Redirects and Forwards,” which is still a risk but considered less of a problem than it was in 2013. With these two modifications, the OWASP Top 10 has room for two more items: “Insufficient Attack Protection” and “Unprotected APIs.”

These changes have generated a few unenthusiastic industry reactions, among them complaints that some of the new items are too broad or just plain unnecessary. OWASP, for its part, has done well in working through this feedback with a call for additional data and by extending the list’s release until November 2017. None of their work on the Top 10 is secret, so anyone is free to review and comment.

Beyond understanding the OWASP Top 10 security risks in relation to the web applications your organization builds and uses, what else should you be doing? There are a few simple steps you can follow to ensure long-term upkeep of OWASP issues.

1. Understand your OWASP scope.

OWASP is already part of numerous compliance requirements and contractual obligations. Review your legal agreements and regulatory environment to see what you might be legally obligated to do with OWASP. This may entail talking to your legal department and/or reviewing the contracts with them. There are numerous international security standards related to compliance that reference the OWASP Top 10, and you may fall under one or more of them. It’s better to know now than have an unpleasant surprise later [7].

2. Scan all web applications.

Scan and test all the web applications your organization depends on against the OWASP Top 10. This means anything you’ve written and anything you use. If you’re using a third-party web application and depend on it, ask the vendor for a copy of their latest web application vulnerability test. If they don’t have one, speak to Legal about getting that requirement added to the next contract. If they haven’t done a scan, it’s likely they aren’t paying attention to vulnerabilities at all and that their application already has holes you don’t know about.

3. Share results.

Share your findings of the previous two steps with your company’s executive decision makers (at the very least, the CIO) as well as the development engineering team. Make sure your messaging is appropriate for each audience. Executives want to hear the bottom line regarding business risk and dependency, not technical detail. Developers want technical detail, especially the specific steps on how the vulnerability can be exploited and what an attacker can do with it.

4. Educate and inform.

Your web developers should be familiar with the OWASP Top 10, and OWASP Top 10 training may be contractually required by your company. Beyond telling the developers about the obligations and vulnerabilities you found, you should educate them on the entire OWASP Top 10 list. You can take this a step further and draft a security policy regarding web application security to help inform the entire technical and operational staff of its importance. Some of the key elements of such a policy can include:

  • All Internet-facing web-based applications will be tested against the OWASP Top 10 vulnerabilities at least once a quarter.
  • A secure coding standard based on industry best practices will be followed.
  • Developers will have adequate security training in the OWASP Top 10.
  • Developers will use threat modeling to look for common attacks, such as those described in the OWASP Top 10, to ensure their applications can defend against them.
  • IT will ensure that test data, default accounts, and passwords are removed or changed when web applications are deployed live.
  • Security vulnerabilities will be tracked, risk-reviewed, and fixed.
  • Periodic reviews and auditing will be done against this policy.

5. Firewall what you can’t fix.

Web application security can leverage specialized defensive tools, like web application firewalls that are specifically designed to analyze and block application attacks. They go beyond standard firewalls in that you program them to match the unique application requirements of your website. Some can also take data feeds from web application vulnerability scanners and apply “virtual patches” by blocking previously uncovered but as-yet-unpatched web vulnerabilities. The downside is that web application firewalls are complex and require customization to function correctly, but a lot of that work could be outsourced.
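
To make “virtual patching” concrete, here is a minimal WSGI-style sketch – the vulnerable path and exploit pattern are hypothetical – that blocks requests exploiting a known-but-unpatched flaw until the code fix ships:

```python
import re
from wsgiref.simple_server import make_server

# Hypothetical virtual patch: a scanner flagged /search?q= as vulnerable to
# SQL injection, so block exploit patterns at the WAF layer until it's fixed.
VIRTUAL_PATCH = re.compile(r"union\s+select|or\s+1=1", re.IGNORECASE)

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"search results\n"]

def waf_middleware(inner):
    def wrapped(environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if environ.get("PATH_INFO") == "/search" and VIRTUAL_PATCH.search(query):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked by virtual patch\n"]
        return inner(environ, start_response)
    return wrapped

if __name__ == "__main__":
    make_server("", 8000, waf_middleware(app)).serve_forever()
```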

6. Become part of the OWASP community.

Join OWASP, attend a meeting, or, at the very least, review some of their material. The OWASP Top 10 is just a tiny part of the material generated by thousands of individuals and hundreds of companies over nearly two decades. There’s a lot of useful and inspirational material on their site.

Do you have issues with how the OWASP Top 10 list looks? Share your thoughts and contribute at https://github.com/OWASP/Top10/issues.

In general, the OWASP Top 10 falls into the category of “Basic stuff you should be doing so you don’t look negligent if you get hacked.” It represents the minimum level of web application security that you need to meet. Don’t get stung.


[1] http://www.verizonenterprise.com/resources/reports/rp_Verizon-DBIR-2014_en_xg.pdf
[2] http://www.networkworld.com/article/2176124/malware-cybercrime/verizon–web-apps-are-the-security-punching-bag-of-the-internet.html
[3] http://www.internetworldstats.com/stats.htm
[4] https://www.recode.net/2017/5/31/15720396/internet-traffic-bots-surpass-human-2016-mary-meeker-code-conference
[5] https://www.owasp.org/index.php/Session_Management_Cheat_Sheet
[6] https://www.owasp.org/index.php/File:OWASP_Top_10_-_2017_Release_Candidate1_English.pdf
[7] https://www.owasp.org/index.php/Industry:Citations

Source: https://f5.com/labs/articles/cisotociso/strategy/six-steps-to-finding-honey-in-the-owasp?sf116548986=1

Author: Ray Pompon



Vulnerable Stuff Running in Containers Still Vulnerable Stuff

Category : F5

It has been said – and not just by me – that encrypted malicious code is still malicious code. The encryption does nothing to change that, except possibly blind security and app infrastructure to its transport through the network.

The same is true of apps and platforms running in containers. If the app is vulnerable, it is vulnerable whether it’s running atop an OS, in a virtual machine, or nowadays, in a container. If the app is vulnerable in the data center, it’s vulnerable in the cloud. And vice-versa.

Containers aren’t designed to magically protect applications. They provide some basic security at the network layer, but the network is not the application. Applications have their own attack surface, comprising their code and interfaces (APIs) as well as the protocols (HTTP, TCP) and the app stack they require. None of that changes by adding an entry to an IP table or otherwise restricting inbound requests to those coming from the ingress to the containerized environment.

The reason I bring this up is thanks to Sonatype’s 2017 DevSecOps Survey. In it, 88% of the over 2,200 respondents agreed container security was important, but only 53% leverage security products to identify vulnerable applications/OS/configurations in containers.

The first two pieces of that statistic – applications and OS – jumped out at me, because they are two of the components of a fully realized app stack that don’t necessarily change based on location or operational model (cloud, container, virtualization, etc.). An app or API with an SQLi or XSS vulnerability is not magically imbued with protection when it moves between models. That vulnerability is in the code. The same is true for platforms, which are inarguably part of the app security stack. A vulnerability in Apache’s handling of HTTP headers when running on Linux will still exist if that app is moved from a traditional, OS-based model to a containerized one.
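
The SQLi case is easy to make concrete. A minimal sketch (sqlite3 used purely for illustration): the vulnerable version is vulnerable no matter what it runs in, and the parameterized version is safe in an OS, a VM, or a container alike:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"  # attacker-controlled

# Vulnerable: string concatenation - the injection works in any runtime.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()
print("concatenated query leaked:", rows)     # returns alice's secret

# Safe: parameterized query - the input is treated as data, not SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # []
```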

It’s important – imperative, even – that we continue to identify vulnerabilities in the full app stack regardless of where or in what form the app is deployed.

It is also just as important to keep in place those app protections already employed for traditional apps when moving to containers. A web application firewall is just as beneficial for apps deployed in containers as it is for apps deployed in the cloud as it is for apps deployed in traditional environments.

As are the other security tools the survey found in use by respondents, such as static and real-time scanning solutions (SAST, DAST, IAST, and RASP). While web application firewall (WAF) use exceeds that of other tools, SAST and SCA (Source Code Analysis) are also common. SCA is a static means of rooting out problems before delivery. I’ll date myself and note that tools like lint fall into the SCA tool category, and while these don’t expose vulnerabilities resulting from the interaction of code (and with users) in real-time, they can find some of the more common mistakes made by developers that result in memory leaks, crashes, or the infamous buffer overflow.

I know what you’re thinking. You’re thinking, “Lori, I just read Stack Overflow’s 2017 Developer Survey Results, and JavaScript is by far the number one preferred language of developers. And JavaScript is interpreted, so all this buffer overflow and memory leak stuff is just bad memories from the old days when you were coding in C/C++.”

Except that JavaScript – and other modern, interpreted languages – is ultimately implemented in a language closer to the circuit board, like C/C++. And as has been shown in the past, if one is clever enough, one can use that fact to craft an exploit of the system.

And even if that’s not a concern, there are plenty of other vulnerabilities in any code, whether from the libraries it uses or from a misused system call that breaches security on the server side. Current surveys say 80% of apps are composed from open source components. The Sonatype survey further noted that there has been a 50% increase in verified or suspected breaches related to open source components from 2014 to 2017. Many of those components are written in languages that lend themselves to more spectacular mistakes, both because they are less controlled and because there are fewer and fewer developers proficient in those languages.

The point being that any code is prone to contain vulnerabilities. And since code is the building block of apps, which are the face of the business today, it’s important to scan and protect them no matter where or how they’re deployed.

Containers or cloud. Traditional or virtual. All applications should be scanned for vulnerabilities and protected against platform and protocol exploits. Period.

Apps should be thoroughly scanned and tested during development, and then tested again in production. Both are necessary, because the Fallacy of Decomposition tells us that introducing new components changes the baseline. New interactions can force previously undiscovered vulnerabilities to the fore.

To protect apps, consider the following:

  • Employ code and app analysis tools in development. Build them into the CI/CD pipeline if possible (see the sketch after this list).
  • Test again in production, in case interaction with other components/apps surfaces issues.
  • Keep aware of protocol and platform vulnerabilities, as well as those discovered in third-party libraries you may use.
  • Integrate a web app firewall into your architecture. Even if you don’t use it in blocking mode, it is an invaluable resource in the event a protocol/platform zero-day or library vulnerability is discovered.
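
As promised above, a minimal sketch of wiring a code analysis tool into a CI gate. Bandit is used here as an example scanner (assuming its standard `-r`/`-f json` flags); any SAST/SCA tool with machine-readable output slots in the same way:

```python
import json
import subprocess
import sys

def scan_gate(source_dir: str = "src") -> int:
    """Run a static analyzer and fail the build on high-severity findings."""
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout or "{}")
    high = [i for i in report.get("results", [])
            if i.get("issue_severity") == "HIGH"]
    for issue in high:
        print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
    return 1 if high else 0  # non-zero exit breaks the pipeline

if __name__ == "__main__":
    sys.exit(scan_gate())
```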

Stay safe!

Source: https://f5.com/about-us/blog/articles/vulnerable-stuff-running-in-containers-still-vulnerable-stuff-27580?sf113516371=1

Author: Lori MacVittie 



Maslow’s Hierarchy of Automation Needs

Category : F5

If there’s anything hotter than containers right now, it’s probably something based on containers, like serverless. In addition to being the “cool” kid on the block, serverless is also a showcase for just how important automation and orchestration* are to modern applications.

Just about every day brings some new component to container architectures and, with it, the appropriate level of automation. If something can be automated in a container world, it will. A plethora of APIs and a reliance on configuration/template-based artifacts drive the near-continuous evolution of automation within container-based environments. If the API economy is important to business’ digital transformation, then the other API economy is critical to IT’s digital transformation.

Container environments are no exception, and as they continue to make their way into production environments, it becomes increasingly important for upstream (N-S) services to not only be able to deliver the apps being scaled and served from within these highly volatile environments, but to integrate with them at the automation and orchestration layers.

That’s because some of the layers of this “stack” of technology are more volatile than others. Container creation and destruction, for example, happens on a far more regular basis than whole clusters being created and/or destroyed. Indeed, Datadog HQ’s Docker Adoption report notes that “as of March 2017, roughly 40 percent of Datadog customers running Docker were also running Kubernetes, Mesos, Amazon ECS, Google Container Engine, or another orchestrator.” Automation/orchestration is critical to container success, particularly at its core. That means that as we’re moving along, it is imperative that some automation needs are met as quickly (and thoroughly) as possible, because they’re the foundation for everything else.
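
A minimal sketch of what that upstream integration can look like, using the official Kubernetes Python client; `update_pool` is a hypothetical placeholder for whatever reconfigures the upstream (N-S) service:

```python
from kubernetes import client, config, watch

def update_pool(endpoints):
    # Hypothetical hook: push the current pod addresses to the upstream
    # load balancing/WAF tier's API.
    print("pool is now:", sorted(endpoints))

def follow_pods(namespace: str = "default", app_label: str = "app=web"):
    """React to container churn by rebuilding the upstream pool."""
    config.load_kube_config()  # or load_incluster_config() when in-cluster
    v1 = client.CoreV1Api()
    pool = set()
    for event in watch.Watch().stream(
        v1.list_namespaced_pod, namespace=namespace, label_selector=app_label
    ):
        pod = event["object"]
        ip = pod.status.pod_ip
        if not ip:
            continue
        if event["type"] in ("ADDED", "MODIFIED") and pod.status.phase == "Running":
            pool.add(ip)
        elif event["type"] == "DELETED":
            pool.discard(ip)
        update_pool(pool)

if __name__ == "__main__":
    follow_pods()
```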

This is true for cloud environments, as well. It isn’t just volatility that drives the need for automation and orchestration, after all, it’s also agility. The complex “self-service” model associated with cloud computing demands automation to support the notion of cloud computing’s elastic, on-demand architectures within a utility computing model. There is an identifiable set of automation needs common to both cloud and containers that lay the foundation for understanding its value to the business.

Much like Maslow’s Hierarchy of Needs, the Hierarchy of Automation Needs builds upon the premise that there are “deficiency” needs which must be met in order to progress upwards toward higher-order needs that enable growth.

One must satisfy lower level deficit needs before progressing on to meet higher level growth needs. When a deficit need has been satisfied it will go away, and our activities become habitually directed towards meeting the next set of needs that we have yet to satisfy. These then become our salient needs. However, growth needs continue to be felt and may even become stronger once they have been engaged. Once these growth needs have been reasonably satisfied, one may be able to reach the highest level called self-actualization.  — https://www.simplypsychology.org/maslow.html

At its core, this particular Hierarchy of Automation Needs focuses on foundational needs around scaling applications and their composite services. As the very talented F5 engineer who first presented this concept explains: “The automation at the bottom is most valuable because it happens the most and is crucial to an application staying online. The automation at the top is least valuable because it happens less frequently (maybe only “once”) and is happening at the same time that many folks will be doing manual things already.”

These “basic” needs – automating the creation and destruction of containers/virtual machines and apps – are critical not just to scale but to the efficacy of scale. That efficacy is important to constraining costs and alleviating the burden of scale traditionally laid on (costly) manual operations. Given the volume and frequency with which such events occur today, automation is a requirement, not a nice-to-have. It is a deficiency need, which means if it is not met, there’s little reason to worry about higher-order, less frequently occurring automation.

This is especially true for containers, and seems validated by Datadog again when it notes that “containers have an average lifespan of 2.5 days, while across all companies, traditional and cloud-based VMs have an average lifespan of 23 days.” This is, it seems, impacted by automation: “In organizations running Docker with an orchestrator, the typical lifetime of a container is less than one day. At organizations that run Docker without orchestration, the average container exists for 5.5 days.”

The ability to automate the creation/destruction of containers and subsequently apps is imperative to realizing the speed and agility of scale required for organizations to reap the benefits of containers both in traditional and cloud-based environments.

The higher-order automation needs are essentially routing needs, and they fall into the category of growth needs. Growth needs are sought once basic (deficiency) needs have been met. Cluster creation/destruction and routing changes happen infrequently and only become valuable once the app is in place and able to scale. This becomes imperative once the environment migrates from dev/test into a production environment and is relied upon to deliver business value by serving up applications. After all, routing to an app that can’t respond is like giving a hug to a starving man. The warm fuzzy feeling doesn’t address the basic need for food. A 2016 Mesosphere survey of its users found that 62% of them were already using containers in production environments. A 2017 Portworx survey at the container-focused conference DockerCon found that 67.2% of its respondents were using containers in production. Which means the routing needs are quickly becoming important, at least to the subset that is container-adopting organizations.


The ultimate goal of business’ self-actualization – growth – cannot be achieved until the entire hierarchy is fulfilled; that is, until the entire “stack” is automated and orchestrated. So it should be a given that those who primarily live on the N-S data plane will orchestrate from the bottom up, enabling scaling needs of the environment first and working up to the routing needs, where the transition from N-S to E-W traffic takes place. This is evident as well in the continued development of the more tightly coupled ecosystem around container orchestrators themselves, such as Kubernetes. One of the more recent developments has been focused on the “routing” needs with the inclusion of ingress controllers that route at layer 7 (URI / API) to ensure the N-S transition to E-W services takes place seamlessly. That component, too, must eventually be integrated (automated) to satisfy routing needs and continue to propel organizations up the hierarchy to realize growth.

Translating this to container-specific constructs enables us to map the container environment automation needs to those services in the network that support its need to scale to realize growth. Both the container and network/application service constructs require automation. Creating a hundred containers to scale an app without simultaneously automating the services responsible for managing them during delivery yields unsatisfied needs. These networking constructs can exist as either native container constructs or as existing app services adapted to fit natively into the container environment. Implementation details will vary based on architecture and operational requirements, though the existence of both sets of constructs is necessary to satisfy basic (scale) and growth (routing) needs to realize success.

In all cases, the success of containers and cloud are heavily reliant on their ecosystems and that means reliance on the Other API economy to enable the automation and orchestration critical to business growth through digital transformation initiatives.

* I find it frustrating that the concepts of automation and orchestration seem to be conflated within the container world, juxtaposing one for the other. But for now, let’s just ignore that and I’ll try to keep the pedant inside where it belongs. Inside my head. Screaming. Desperately.

Source: https://f5.com/about-us/blog/articles/maslows-hierarchy-of-automation-needs-27389?sf112562773=1

Author:  Lori MacVittie



Forget the kids. We need this for IT Automation.

Category : F5

Our youngest is nine (AND A HALF! DON’T FORGET THE HALF!) and he’s currently enthralled with robots. Not just playing with them, but programming them. With both parents holding advanced degrees in computer science, you can imagine we’re thrilled by and encouraging of this particular trend.

So of course robots that offer easy-to-learn programming models are a thing in our house right now. One of the toys for this summer was from Wonder Workshop, whose robots can be programmed via a “block”-like language.

This is the norm for kids today. This child plays with a number of ‘games’ on his various devices, all of which use the same “block”-like model for constructing programs. Even his Lego Mindstorms EV3 robot uses a similar mechanism, where programming constructs are blocks you wire together. Variables and conditions are set by selecting them, and there’s no actual “code” displayed on the screen.

But you know it’s there.

Back in the day when I was evaluating BPM (Business Process Management) solutions, they used a similar paradigm to enable business stakeholders to define processes. Its interface reflected the nature of processes, which is to say you built a flowchart using a drag-n-drop model, but much of the construction of the orchestration was accomplished via an easier-to-learn interface. Like those my youngest uses today, only more Visio like. A lot more Visio like now that I think about it.

Forget the kids. This is the direction IT automation needs to go. There is no reason for the complexity inherent in IT automation today other than that no one has yet recognized that if we want to encourage more of it – and by folks who aren’t naturally coders – we need to find a better way to construct the workflows that represent the IT processes used to deploy, manage, and configure IT infrastructure.

The first thing we need to agree on is that programmatic doesn’t necessarily mean “you can do anything you want cause, code.” That’s true, in the most liberal sense of the word, but it also means the ability to change or define behavior programmatically.

There are a limited set of actions required to execute a workflow in IT, and most are enabled today by the (other) API economy. By providing an interface that encapsulates that limited set of actions and offers clear, easy-to-understand logic constructs (if/then, while, iterative functions), we could ostensibly eliminate the tendency toward the unstructured scripting mechanisms that introduce far more technical debt into IT operations than most software developed in enterprises in the past. This level of constrained abstraction would also enable non-native coders (network and storage engineers and architects) to produce well-constructed workflows that are highly maintainable (a significant driver of standardization). When coupled with a serverless backbone to execute workflows, this model reduces the investment required to create, maintain, and execute workflows appropriate to IT operations.
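
Here’s a sketch of what such a constrained vocabulary might look like in code – entirely hypothetical; real block interfaces would be graphical. A workflow is composed only from a small, named set of actions plus simple guards, so there’s no free-form scripting to accrue debt:

```python
from dataclasses import dataclass
from typing import Callable

# The entire action vocabulary - nothing outside this dict can run.
ACTIONS: dict[str, Callable[[dict], None]] = {
    "create_vip":   lambda ctx: print("creating VIP", ctx["vip"]),
    "add_member":   lambda ctx: print("adding pool member", ctx["member"]),
    "health_check": lambda ctx: ctx.update(healthy=True),
}

@dataclass
class Step:
    action: str
    when: Callable[[dict], bool] = lambda ctx: True  # optional if/then guard

def run_workflow(steps: list[Step], ctx: dict) -> None:
    """Execute blocks in order; only whitelisted actions, only simple guards."""
    for step in steps:
        if step.when(ctx):
            ACTIONS[step.action](ctx)

run_workflow(
    [
        Step("create_vip"),
        Step("health_check"),
        Step("add_member", when=lambda ctx: ctx.get("healthy", False)),
    ],
    {"vip": "10.1.1.100:443", "member": "10.1.1.10:8080"},
)
```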

That’s important, because we need to approach IT automation with an eye toward sustainability. A script works great today, but can it scale along with people and processes in the future?

Now, maybe we don’t need a solution quite this simple (or colorful), but the premise on which the interface is designed is important, I think, to adapt as we look toward the future of IT automation and how we build (and maintain) the code that will eventually make IT go. Sadly, we tend to transfer the complexity of the underlying systems into the design of the systems (and thus interfaces) that interact with and control them. We want to expose every knob and button possible.

At a minimum, an API broker that provides a way to aggregate the natural complexity of CLIs-turned-APIs into more comprehensible operational tasks would be a boon for those tasked with automating IT networks and app services. Logging in can be a complex process comprising multiple steps that are repeated every time. Composing those steps into a single “service” makes them repeatable, consistent, and infinitely more auditable. Combine that with a (more) intuitive interface and we’ve got ourselves an IT automation winner.
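
As a sketch of that broker idea – the device URL, endpoints, and payloads are all hypothetical, not any vendor’s API – the multi-step login-and-fetch dance gets wrapped into one auditable “service” call:

```python
import requests

class DeviceBroker:
    """Aggregate a repetitive multi-step device interaction into one service.

    The endpoints below are hypothetical placeholders, not a real API.
    """

    def __init__(self, base_url: str, user: str, password: str):
        self.base_url = base_url
        self.session = requests.Session()
        self.user, self.password = user, password

    def _login(self) -> None:
        # Steps 1 and 2 of the manual process: authenticate, stash the token.
        resp = self.session.post(
            f"{self.base_url}/auth/login",
            json={"username": self.user, "password": self.password},
            timeout=10,
        )
        resp.raise_for_status()
        self.session.headers["X-Auth-Token"] = resp.json()["token"]

    def get_pool_status(self, pool: str) -> dict:
        """One call instead of login -> token -> query -> parse, every time."""
        self._login()
        resp = self.session.get(f"{self.base_url}/pools/{pool}", timeout=10)
        resp.raise_for_status()
        return resp.json()

broker = DeviceBroker("https://lb.example.com", "ops", "secret")
# print(broker.get_pool_status("web-pool"))  # a single, auditable operation
```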

The interfaces for kids’ “coding” apps today prove that you can do that without making their users feel like they’re staring at an ENIAC with no manual to guide them. We can do better, and if we’re going to scale internal digital transformation to keep up with the business, we need to.

Source: https://f5.com/about-us/blog/articles/forget-the-kids-we-need-this-for-it-automation-27250?sf110653081=1

Author:  Lori MacVittie



Cloud Chaos, the Identity and Access Control Conundrum

Category : F5

As you deliver more cloud-based applications, managing and securing access can become a development and deployment nightmare. Whether you are building your own customer facing apps or providing app access for employees, it needs to be simple and secure.  How can you implement strong access controls over open networks with simplicity and do it at scale?  Join us in this webinar featuring F5 Sr. Security Solutions Architect Michael Koyfman as we discuss how you can apply the latest access and authentication technologies.

What you’ll learn in the webinar:

  • Trends we are seeing in managing identity and access
  • Approaches to effectively manage access to applications regardless of their location
  • Best practices to manage access policies

Complete the form (we promise not to share your information) and you’ll be registered for this webinar. **This is an end-user only event.** If you are a partner and want to learn more about the content of this meeting, please contact your channel account manager.



F5 on AWS: How MailControl Improved their Application Visibility and Security

Category : F5

Organizations like MailControl often discover they need to gain additional visibility into encrypted incoming and outgoing application traffic to detect potential threats or anomalies. F5 BIG-IP Virtual Edition (VE) on Amazon Web Services (AWS) delivers an advanced application delivery controller (ADC) that goes beyond balancing application loads, enabling inspection of inbound and outbound application traffic. Join our webinar with AWS to discover how F5 was able to help MailControl boost their visibility into the email traffic flowing through their application. By using virtualized F5 services on Amazon Web Services (AWS), the organization increased its application monitoring capabilities and improved security for its customers, while simultaneously automating processes to support its agile DevOps process.

Join us to Learn:

  • Best practices for implementing a full suite of application protection tools including WAF and DDoS to guard your valuable data
  • How to utilize enhanced identity and access management (IAM) policies to meet unique business needs
  • The importance of inspecting inbound and outbound application traffic for threats and anomalies

When: August 23, 2017 | 10 am PDT/1 pm EDT
Who Should Attend:

Technology Decision Makers, Cloud Architects, IT Managers, IT Security Professionals, Security Architects, Solutions Architects, Systems Engineers

AWS Speaker:  Matt Lewhess, Solution Architect

F5 Speaker:   Nathan McKay, Security Solution Engineer

Customer Speaker:  Corey Wagehoft, Director of Infrastructure, MailControl

