Category Archives: F5


IDaaS, Everything but the Directory Sync

Category : F5

Back in 2011, Marc Andreessen famously declared that “Software is eating the world.” We have seen this come to fruition, although today I would update this declaration to “SaaS is eating the world.” SaaS and the subscription-based delivery of business applications have become the preferred consumption model for most organizations. Market analyst firm IDC predicts that virtually all software vendors will have fully shifted to a SaaS delivery model by 2018[1].

We love our SaaS. And what’s not to love? The pay-as-you-go pricing is business-friendly. It enables velocity of scale (up or down), reduces local infrastructure footprint, lowers capital costs, yada yada yada – if you are reading this blog, you probably already know all this stuff.

But here’s the thing with SaaS: we still need to implement IT security controls. While we rely on the service provider to secure the platform, we need to ensure access to our SaaS-delivered business apps is well protected. The threat of compromised accounts is arguably the biggest security risk to adopting public cloud SaaS offerings. We can’t have employees using weak or shared passwords for these apps, and sticky notes on the user’s desk make us cringe. However, strong password policies are hard on employees, especially when they must change passwords regularly.

We need an identity and access management solution for cloud apps that enables strong policy without putting the administrative burden on users or IT staff. And of course, we want this delivered in an identity as a service (IDaaS) model. There are some good IDaaS offerings on the market today, like those from Ping Identity and Okta. These solutions offer SSO and SAML-based federation for cloud-based apps. Your employees simply authenticate to the IDaaS and have seamless access to all their cloud apps. Simple, easy, secure access to the cloud apps they need.

Sounds great, right? Just copy or synchronize your on-premises user directory to the IDaaS vendor’s platform, configure some SAML-enabled SaaS applications and you are ready to federate. Wait, what? Copy my directory to the cloud? Let me think about that…

We all want the simplicity and security benefits of SSO for cloud and SaaS, but having copies of the corporate directory in a 3rd party’s platform is not for everyone. While I truly believe that service providers take security seriously, they also can be a frequent attack target because of the sensitive data they host. Limiting risk in the cloud just makes good security sense.

The reports of the on-premises directory’s death have been greatly exaggerated. At F5, we have customers that just don’t want to expose their directories to the public cloud. However, there is a way to get all the benefits of IDaaS without putting your directory in the IDaaS platform – what is known as SAML identity chaining. This is where the IDaaS federation identity provider (IdP) can redirect to an on-premises IdP, like the F5 BIG-IP APM, that has secure access to the on-premises corporate directory. Employees can be transparently authenticated via the on-premises directory, and the appropriate SAML assertion can be provided back to the IDaaS for federated SSO to SaaS apps.
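To make the flow concrete, here is a minimal, self-contained Python sketch of the chaining logic. The directory dict, URLs, and dict-shaped “assertion” are stand-ins invented for illustration; in a real deployment the on-premises IdP role is played by BIG-IP APM and the assertion is signed SAML XML, not a dict.

```python
# A minimal sketch of IdP chaining, under the assumptions above.

def onprem_idp_authenticate(username: str, password: str) -> dict:
    """On-premises IdP: check the user against the local directory
    (stubbed here as a dict) and mint an assertion."""
    directory = {"alice": "correct-horse-battery"}   # stand-in for AD/LDAP
    if directory.get(username) != password:
        raise PermissionError("authentication failed")
    return {"subject": username, "issuer": "https://idp.corp.example.com"}

def idaas_sso(username: str, password: str, saas_app: str) -> str:
    """IDaaS IdP: instead of checking credentials against a synced copy of
    the directory, chain to the on-premises IdP and relay its assertion."""
    assertion = onprem_idp_authenticate(username, password)  # the 'redirect'
    # The IDaaS validates the upstream assertion, then issues its own
    # assertion to the SaaS app -- no directory sync required.
    return f"SSO to {saas_app} for {assertion['subject']} via {assertion['issuer']}"

print(idaas_sso("alice", "correct-horse-battery", "https://crm.example.com"))
```

The point of the shape: the corporate credentials never leave the on-premises function, and the IDaaS only ever sees the resulting assertion.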

This IdP chaining model also enables on-premises access policies to be extended to cloud applications. Multi-factor authentication (MFA) and contextual-based policy access for apps can also be added. Pretty cool right?

If you are considering implementing IDaaS but have reservations about sharing your corporate directory in the cloud, IdP chaining can help ease your concerns. Most market-leading IDaaS vendors support IdP chaining and F5 BIG-IP APM has experience working with just about all of them. Go forth and IDaaS without fear…

Source: https://f5.com/about-us/blog/articles/idaas-everything-but-the-directory-sync-27137?sf88941836=1
Author: MARK CAMPBELL



Good Enough is only Good Enough Until It Isn’t

Category : F5

Let’s talk turkey. Or crème eggs. Or boxes of candy. What do they have in common? They’re all associated with holidays, of course. And, it turns out, those holidays are the number one generator of both profits and poor performance by websites.

Consider recent research from the UK, “which involved more than 100 ecommerce decision makers.” It “revealed that more than half (58%) admitted to having faced website speed issues during last year’s peak period.”

Now, we all know that performance is important, that even microseconds of delay result in millions of dollars in losses. Nuff said.

The question is, what can you do about it?

The answer lies in remembering Operational Axiom #2: As Load Increases, Performance Decreases.

It doesn’t matter whether the app server is running in the cloud or in the data center, in a virtual machine or a container. This axiom is an axiom because it is always true. No matter what. The more load you put on a system, the slower it runs. Period.
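A back-of-the-envelope illustration of the axiom, using the textbook M/M/1 queueing estimate (mean response time T = 1/(μ − λ) for service rate μ and arrival rate λ). The numbers are invented; the shape of the curve is the point.

```python
# Operational Axiom #2, illustrated with the M/M/1 approximation.
SERVICE_RATE = 1000.0   # the server can complete 1,000 requests/second

for load in (100, 500, 900, 990, 999):
    t_ms = 1000.0 / (SERVICE_RATE - load)   # mean response time, ms
    print(f"load {load:>4} req/s -> ~{t_ms:7.1f} ms mean response time")
# Latency creeps up gently, then explodes as load approaches capacity.
```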

The key to better performance is to balance the need to keep costs down by maximizing load while simultaneously optimizing for performance. In most cases, that means using whatever tools you can to restore that balance, especially in the face of peak periods (which place a lot of stress on systems, no matter where they may be).

1. Balance the load

This is why good enough (rudimentary) app services aren’t. Because while they often effortlessly scale, they don’t necessarily actually balance the load across available resources. They don’t necessarily provide for the intelligence necessary to select resources based on performance or existing load. Their ‘best effort’ isn’t much better than blind chance.

Balancing load requires an understanding of existing load so new requests are directed to the resource most likely to be able to respond as quickly as possible. Basic load balancing can’t achieve this because its focus is purely on algorithmic-based decisions, which rarely factor in anything other than static weighting of available resources. Real-time decisions require real-time information about the load that exists right now. Otherwise you aren’t load balancing, you’re load distributing.
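A toy sketch of the difference, with a dictionary of per-server connection counts standing in for real-time telemetry:

```python
# 'Load distributing' vs 'load balancing', in miniature. Round robin
# walks the list blindly; least-connections consults real-time state.
import itertools

servers = {"a": 0, "b": 0, "c": 0}     # active connections per server
rr = itertools.cycle(servers)          # round robin: ignores current load

def pick_round_robin() -> str:
    return next(rr)

def pick_least_connections() -> str:
    return min(servers, key=servers.get)   # fewest active connections wins

servers["a"] = 50                      # 'a' is bogged down with long-lived work
print(pick_round_robin())              # happily returns 'a' a third of the time
print(pick_least_connections())        # steers new requests to 'b' or 'c'
```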

2. Reduce the load

It’s more than just the selection of resources that aids in boosting performance while balancing the load. Being able to employ a variety of protocol-enhancing functions that reduce load without impairing availability is also key. Multiplexing and reusing TCP connections, offloading encryption and security, and reassigning compression duties to upstream services relieves the burden on app and web servers, freeing up resources and having a real impact on performance.
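As a small illustration of the reuse idea, the snippet below (plain Python standard library, no F5-specific API) issues several HTTP requests over a single TCP/TLS connection, which is the same trick a full proxy applies server-side when it pools and reuses backend connections:

```python
# Connection reuse: several requests, one TCP + TLS handshake.
# http.client keeps the HTTP/1.1 connection alive as long as each
# response body is drained before the next request.
import http.client

conn = http.client.HTTPSConnection("example.com")  # one handshake
for path in ("/", "/", "/"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()                       # drain the body so reuse is possible
    print(resp.status, resp.reason)
conn.close()
```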

Servers should serve, whether they’re in the cloud or in the data center, running in a container or a VM. Cryptography and compression are still compute-heavy functions that can be performed by upstream services designed for the task.

3. Eliminate the load

Eliminating extra hops in the request-response path also improves performance. Yes, you can scale horizontally across load balancing services, but doing so shoves another layer of decision making (routing) into the equation that takes time both in execution (which one of these should service this request?) and in transfer time (send it over the network to that one). That means less time for the web or app server to do its job, which is really all we wanted in the first place. Under typical load, the differences between one system managing a million connections and ten systems each managing a portion of the same may be negligible. Until demand drives load higher and the operational axioms kick in there, too. Because it’s not just the load on the web or app server that contributes to poor performance, it’s the entire app delivery chain.

The more capacity (connections) your load balancing service can concurrently handle, the fewer instances you need. That reduces the overhead of managing yet another layer of resources that require just as careful attention to operational axiom #2 as any other service.

Performance continues to be a significant issue for retailers, and with the rapidly expanding digital economy it will become (if it isn’t already) an issue for everyone with a digital presence. In the rush that always happens before holidays, folks become even less tolerant of poor performance. What was good enough the day before, isn’t. More often than not performance issues are not the fault of the application, but rather the architecture and the services used to deliver and secure it. By using the right services with the right set of capabilities, organizations are more likely to be able to keep from running afoul of performance issues under heavy load.

Good enough is good enough, until it isn’t. Then it’s too late to cajole frustrated customers to come back. They’ve already found another provider of whatever it was you were trying to sell them.

Source: https://f5.com/about-us/blog/articles/good-enough-is-only-good-enough-until-it-isnt-26766?sf86428739=1

Author: Lori MacVittie



Cloud Month on DevCentral

Category : F5

The term ‘Cloud’ as in Cloud Computing has been around for a while. Some insist Western Union invented the phrase in the 1960s; others point to a 1994 AT&T ad for the PersonaLink Services; and still others argue it was Amazon in 2006 or Google a few years later. And Gartner had cloud computing at the top of their Hype Cycle in 2009.

No matter the birth year, cloud computing has become an integral part of an organization’s infrastructure and is not going away anytime soon. A 2017 SolarWinds IT Trends report says 95% of businesses have migrated critical applications to the cloud and F5’s SOAD report notes that 20% of organizations will have over half their applications in the cloud this year. It is so critical that we’ve decided to dedicate the entire month of June to the Cloud.

We’ve planned a cool cloud encounter for you this month. We’re lucky to have many of F5’s cloud experts offering their ‘how-to’ expertise with multiple 4-part series. The idea is to take you through a typical F5 deployment for various cloud vendors throughout the month. On Mondays, we’ve got Suzanne Selhorn & Thomas Stanley covering AWS; on Wednesdays, Greg Coward will show how to deploy in Azure; and on Thursdays, Marty Scholes walks us through Google Cloud deployments, including Kubernetes.

But wait, there’s more!

On Tuesdays, Hitesh Patel is doing a series on F5 Cloud/Automation Architectures and how F5 plays in the Service Model, Deployment Model, and Operational Model – no matter the cloud. And for F5 Friday #Flashback, starting tomorrow, we’re excited to have Lori MacVittie revisit some 2008 #F5Friday cloud articles to see if anything has changed a decade later. Hint: It has…mostly. In addition, I’ll offer my weekly take on the tasks & highlights of each week.

Below is the calendar for DevCentral’s Cloud Month, and we’ll be lighting up the links as they get published, so bookmark this page and visit daily! Incidentally, I wrote my first cloud-tagged article on DevCentral back in 2009. And if you missed it, Cloud Computing won the 2017 Preakness. Cloudy Skies Ahead!

June 2017

Week of June 1
  • Thursday, June 1 – Cloud Month on DevCentral Calendar
  • Friday, June 2 – Flashback Friday: The Many Faces of Cloud (Lori MacVittie)

Week of June 5
  • Monday, June 5 – Successfully Deploy Your Application in the AWS Public Cloud (Suzanne Selhorn)
  • Tuesday, June 6 – Cloud/Automated Systems need an Architecture (Hitesh Patel)
  • Wednesday, June 7 – The Hitchhiker’s Guide to BIG-IP in Azure (Greg Coward)
  • Thursday, June 8 – Deploy an App into Kubernetes in less than 24 Minutes (Marty Scholes)
  • Friday, June 9 – Flashback Friday: The Death of SOA Has (Still) Been Greatly Exaggerated (Lori MacVittie)

Week of June 12
  • Monday, June 12 – Secure Your New AWS Application with an F5 Web Application Firewall (Suzanne Selhorn)
  • Tuesday, June 13 – The Service Model for Cloud/Automated Systems Architecture (Hitesh Patel)
  • Wednesday, June 14 – The Hitchhiker’s Guide to BIG-IP in Azure – ‘Deployment Scenarios’ (Greg Coward)
  • Thursday, June 15 – Deploy an App into Kubernetes Even Faster (Than Last Week) (Marty Scholes)
  • Friday, June 16 – Flashback Friday: Cloud and Technical Data Integration Challenges Waning (Lori MacVittie)

Week of June 19
  • Monday, June 19 – Shed the Responsibility of WAF Management with F5 Cloud Interconnect (Suzanne Selhorn)
  • Tuesday, June 20 – The Deployment Model for Cloud/Automated Systems Architecture (Hitesh Patel)
  • Wednesday, June 21 – The Hitchhiker’s Guide to BIG-IP in Azure – ‘High Availability’ (Greg Coward)
  • Thursday, June 22 – Deploy an App into Kubernetes Using Advanced Application Services (Marty Scholes)
  • Friday, June 23 – Flashback Friday: Is Vertical Scalability Still Your Problem? (Lori MacVittie)

Week of June 26
  • Monday, June 26 – Get Back Speed and Agility of App Development in the Cloud with F5 Application Connector (Suzanne Selhorn)
  • Tuesday, June 27 – The Operational Model for Cloud/Automated Systems Architecture (Hitesh Patel)
  • Wednesday, June 28 – The Hitchhiker’s Guide to BIG-IP in Azure – ‘Life Cycle Management’ (Greg Coward)
  • Thursday, June 29 – Peek under the Covers of your Kubernetes Apps (Marty Scholes)
  • Friday, June 30 – Cloud Month Wrap!

Titles subject to change…but not by much.



Forget uptime. A low MTTR is the new ‘5 9s’ for IT

Category : F5

Outages are expensive. Whether they’re ultimately the result of an attack or a failure in software or hardware isn’t that relevant. The costs per minute of downtime are increasing, thanks to the growing reliance on APIs and web apps of the modern, digital economy.

For some, those costs are staggering. It’s estimated that Amazon’s 40 minutes of downtime back in 2013 cost them $2.64M. That’s $1100 per second for those disinclined to do the math. If you think that’s horrifying, consider Google, whose 5-minute downtime in the same year cost them $109K per minute (or $1816.67 per second) for a whopping total of $545K. For 5 minutes. Technically, if that was all they suffered, that’s the vaunted “5 9s” IT is tasked with achieving.
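For the skeptical, the arithmetic is easy to check:

```python
# Checking the math above.
amazon_loss, amazon_minutes = 2_640_000, 40
print(amazon_loss / (amazon_minutes * 60))   # 1100.0 dollars per second

google_per_minute, google_minutes = 109_000, 5
print(google_per_minute / 60)                # ~1816.67 dollars per second
print(google_per_minute * google_minutes)    # 545000 total for 5 minutes
```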

How often do outages happen? Too often, apparently. If you’ve never seen it, take a gander at Pingdom’s live outage map, built from data culled from its more than 700,000 global users. This morbidly fascinating map displays outages occurring in the past hour across the globe. The bright flashes depicting outages are a nice touch; really drives home the splash they make with users.

Which is to say an unwanted one.

The digital economy exacerbates this problem. Earlier this year an S3 outage at Amazon knocked out a whole spate of customers’ apps and web sites. But lest you pin this problem on public cloud providers, a quick dive into the site builtwith.com will quickly erase that belief. The percentage of sites taking advantage of CDNs and APIs is perhaps alarmingly high if you consider the dependency that incurs on someone else’s uptime. It’s hard to find a site that doesn’t rely on at least one external API or service, which increases the possibility of downtime, because if that external service is down, so are you.

Basically, IT settled on “5 9s” because it is impossible to achieve 100% availability. The key today, when per second costs are skyrocketing thanks to the shift of the economy into the digital realm, is to minimize downtime. In other words, setting goals that require a low mean-time to resolution (MTTR), is just as critical – maybe more – than trying to eliminate downtime.

One of the key measures of “high performing organizations” in Puppet Labs’ 2016 State of DevOps Report is MTTR, defined as the time it takes to restore service when a service incident occurs (e.g., unplanned outage or service impairment). The highest performing organizations (based on the report’s assessment) take less than one hour, while medium and low performing organizations take “less than one day.” “In other words,” the report notes, “high performers had 24 times faster MTTR than low performers.”

You’ll note the question wasn’t “if” there is a service incident. It was “when” there is a service incident. The assumption is that an incident will occur, and thus the key is to minimize the time to resolution. A 2016 survey by IHS reported that “on average, survey respondents experience 5 downtime events per month, and 27 hours of downtime per month,” costing the average mid-sized organization $1M and their larger counterparts up to $60M.
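Computing MTTR itself is trivial once you record incident start and resolution times. This sketch uses hand-made timestamps; real tooling would pull them from an incident-management system.

```python
# MTTR from incident records: mean of (resolved - started).
from datetime import datetime
from statistics import mean

incidents = [
    (datetime(2017, 5, 1, 9, 0),   datetime(2017, 5, 1, 9, 40)),   # 40 min
    (datetime(2017, 5, 9, 14, 5),  datetime(2017, 5, 9, 14, 50)),  # 45 min
    (datetime(2017, 5, 20, 3, 15), datetime(2017, 5, 20, 3, 50)),  # 35 min
]

mttr = mean((end - start).total_seconds() / 60 for start, end in incidents)
print(f"MTTR: {mttr:.0f} minutes")   # under an hour: 'high performer' territory
```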

If we assume Murphy’s Law still presides over Moore and Conway, the answer is to try to minimize MTTR in order to reduce the time (and costs) associated with inevitable downtime.

That means visibility is critical, which means monitoring. Lots and lots of monitoring. But not just the website, or the web app, or the API – we need to monitor the full stack. From the network to the app services to the application itself. That’s something not everyone does, and when they do, they appear to do it inconsistently.


Consider the 2017 xMatters|Atlassian DevOps Maturity survey in which 50% of respondents declared they “wait for operations to declare a major incident” before responding. A frightening 1/3 of companies “learn about service interruptions from their customers.”

In a digital economy, every second matters. Not just because it costs money, but because it negatively impacts future revenues as well. Decreasing brand value and trust with customers results in fewer purchases, fewer users, and eventually stagnating growth. That’s not a direction organizations should be going.

Monitoring is the first step to detecting issues that cause outages. But monitoring alone doesn’t help MTTR. Communication does. Alerting the relevant stakeholders as soon as possible and arming them with the information they need to troubleshoot the issue will assist in a faster time to resolution. That means sharing – one of the four key pillars of DevOps – is key to improving MTTR. Even if you aren’t embracing other aspects of DevOps at a corporate level yet, sharing is one you should consider elevating to a top-level initiative. Whether it’s through ChatOps or e-mail, a mobile app or a dynamically updated wiki page, it’s imperative that the information gleaned through monitoring be shared widely across the organization.
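As a hedged sketch of that idea, here is a minimal full-stack health check that pushes failures to a chat webhook the moment they are detected. Every endpoint and the webhook URL are hypothetical placeholders.

```python
# Full-stack health checks that share failures immediately, instead of
# waiting for operations (or customers) to declare an incident.
import json
import urllib.request

CHECKS = {                                   # one probe per layer
    "network":      "https://gw.example.com/health",
    "app-services": "https://lb.example.com/health",
    "application":  "https://app.example.com/health",
}
WEBHOOK = "https://chat.example.com/hooks/ops"   # hypothetical ChatOps hook

def notify(layer: str, detail: str) -> None:
    """Post the failure to the shared channel."""
    body = json.dumps({"text": f"[{layer}] health check failed: {detail}"})
    req = urllib.request.Request(
        WEBHOOK, data=body.encode(), headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

for layer, url in CHECKS.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            if resp.status != 200:
                notify(layer, f"HTTP {resp.status}")
    except OSError as exc:                   # DNS failure, timeout, refused
        notify(layer, str(exc))
```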

A hiccup in a switch or server may seem innocuous, but left alone it might wind up knocking out half the services a critical app depends on. In the 2017 State of the Network study conducted by Viavi, 65% of network and systems administrators cite “determining whether problem is caused by network, system, or apps” as their number one challenge when troubleshooting application issues. Greater visibility and full-stack monitoring is one way to address this challenge, by ensuring that those responsible for finding the root cause have at hand as much information about the status and health of all components in the data path as possible.

Visibility is key to the future of IT. Without it, we can’t achieve the level of automation necessary to redress outages before they occur. Without visibility we can’t reduce MTTR in a meaningful way. Without it, we really can’t keep the business growing at a sustainable rate.

Visibility, like security, should be a first class citizen in the strategy stack driving IT forward. Because outages happen, and it is visibility that enables organizations to recover quickly and efficiently, with as little damage to their brand and bottom line as possible.

Author: Lori MacVittie, F5 Networks.



Now that HTTPS is almost everywhere, what about IPv6?

Category : F5

Let’s Encrypt launched April 12, 2016 with the intent to support and encourage sites to enable HTTPS everywhere (sometimes referred to as SSL everywhere, even though the web is steadily moving toward TLS as the preferred protocol). As of the end of February 2017, EFF (who launched the effort) estimates that half the web is now encrypted. Now, certainly not all of that is attributable to EFF and Let’s Encrypt. After all, I have data from well before that date indicating that a majority of F5 customers – in the 70% range – enabled HTTPS on client-facing services. So clearly folks were supporting HTTPS before EFF launched its efforts, but given the significant number of certificates* it has issued, the effort is not without measurable success.

On Sept 11, 2006, ICANN “ratified a global policy for the allocation of IPv6 addresses by the Internet Assigned Numbers Authority (IANA)”. While the standard itself was ratified many years (like a decade) before, without a policy governing the allocation of those addresses it really wasn’t all that significant. But as of 2006 we were serious about moving toward IPv6. After all, the web was growing, mobile was exploding, and available IPv4 addresses were dwindling to nothing.

We needed IPv6 if not for its enhanced security then for its expanded address space that would allow us to support billions of connected devices and things.

And yet the adoption rate is abysmal. Consider that “the cloud” was born in an age when IPv6 was available. And yet it took until late 2016 for Amazon AWS and Microsoft Azure to turn on IPv6 in their cloud offerings for compute instances.

This has led some to lament that if we can get HTTPS almost everywhere in such a short time, why are we still seeing such a small percentage of sites supporting IPv6? Google estimates that 16.06% of users are IPv6-enabled (which is interesting when compared to service provider support as tracked by the World IPv6 Launch), but only 10% of web sites (according to W3Techs) support it.
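You can measure this yourself: a site “supports IPv6” when its hostname resolves to AAAA records, which a few lines of Python will tell you (results depend on your resolver, so treat this as a spot check):

```python
# Check IPv6 support: does the hostname publish AAAA records?
import socket

for host in ("www.google.com", "example.com"):
    try:
        infos = socket.getaddrinfo(host, 443, socket.AF_INET6)
        print(host, "->", sorted({info[4][0] for info in infos}))
    except socket.gaierror:
        print(host, "-> no AAAA records (IPv4 only)")
```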

To be fair, HTTPS was not new. EFF was merely encouraging and empowering folks to enable what was already at their fingertips. HTTPS is well-supported, well-understood, and thoroughly baked. So perhaps it would be more fair to compare it to a newer standard, one with similar drawbacks such as incompatibility with previous standards, like HTTP/2.

Back in May 2015, a new version of a stalwart web standard was ratified: HTTP/2. Like IPv6, it is incompatible with previous versions. Unlike “SSL Everywhere”, supporting IPv6 or HTTP/2 is not simply a case of acquiring a certificate and enabling HTTPS on your web servers or infrastructure. While it’s true that moving from HTTP to HTTPS can be disruptive – it can impact your network infrastructure – it’s not the same level of disruption as incurred by IPv6 or HTTP/2.

Moving to new foundational protocols requires a transitional approach; one that requires support for both the old and the new simultaneously until some future point in time. That means “dual-stacks” for every device through which traffic might flow. This is a Herculean effort for some organizations, and an architectural nightmare for others. Just as software incurs technical debt, networks incur architectural debt, and it is likely the case that the “interest payments” on that architectural debt make it difficult to build a valid case for adopting IPv6. After all, it’s not like it’s a requirement or anything. Business will continue if you don’t support IPv6.

Or will it?

Let’s remember that originally, HTTP/2 was going to require TLS/SSL. There was some grumbling and eventually it was made optional. Browser builders blithely ignored that and only provided support for HTTP/2 over TLS/SSL, effectively forcing the requirement on everyone. In late 2015 Google began prioritizing HTTPS-enabled sites in search rankings. And in 2016, Apple made similar moves that required all native apps to use App Transport Security, again effectively forcing the move to HTTPS.

Basically, HTTPS has been forced by those on the client side to support it.

For IPv6 right now there’s no similar requirement. We all watched as IPv4 addresses disappeared, but it had relatively little or no impact. So no one feels a real impetus (yet) to make a move that’s potentially going to be disruptive and expensive. But as more things emerge it’s entirely possible that they’ll eventually come out of the box supporting only IPv6. Things have small form factors and their processing power is limited. Less is more in the Internet of Things. That’s one of the reasons many IoT devices eschew HTTP in favor of MQTT; it’s smaller, faster, and more efficient than its heavier web cousin. Supporting both IPv4 and IPv6 is similar. Because they are incompatible, most devices support one or the other. And eventually they’re going to choose one, and everyone will be scrambling to support it.
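The transitional “dual-stack” approach is visible even at the socket level. This minimal listener accepts both IPv6 and IPv4 clients over one IPv6 socket; platform support for clearing IPV6_V6ONLY varies, so treat it as a sketch rather than portable production code:

```python
# A minimal dual-stack TCP listener: one IPv6 socket that also accepts
# IPv4 clients as v4-mapped addresses, by clearing IPV6_V6ONLY.
import socket

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # dual-stack
srv.bind(("::", 8080))    # "::" = every interface, v6 and (mapped) v4
srv.listen()
print("listening on 8080; IPv4 peers appear as ::ffff:a.b.c.d")
```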


Even if they don’t, the IPv4 addresses available today can accommodate less than 20% of the 20 billion devices projected to be in use by 2020 (Gartner). IPv6 supports way more than even Cisco’s more aggressive prediction of 50 billion devices. And that’s just the IoT.
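The arithmetic behind that claim:

```python
# IPv4 space against the projected device count.
ipv4_space = 2**32                # ~4.29 billion, before reserved ranges
devices_2020 = 20_000_000_000     # Gartner's projection
print(f"{ipv4_space / devices_2020:.0%}")   # ~21% in theory; usable unicast
# space is smaller, putting real coverage under 20%. IPv6: 2**128 addresses.
```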

Cloud, too, is problematic because it can’t buy up enough IPv4 addresses to support its growing customer base. If IaaS is going to grow as predicted, cloud providers must move to IPv6. Which is no doubt in part behind the move by Amazon and Microsoft to do so.

What all this means is that a forcing function will eventually come along that requires IPv6 support. It may be IoT, or it may be the cloud itself. It may be the explosively disruptive combined force of the two on your business. Either way, we’re going to have to transition at some point, and it’s always best if you aren’t rushed into it. We’ve had a long time to work out the kinks with IPv6, and there are more than enough solutions on the market today to support the dual-stack approach to get the transition going. So if you haven’t yet, it’s time to seriously consider enabling IPv6, before you’re forced into it by the things business needs to keep on growing.

* Yes, we could dive into the number of seemingly fraudulent certificates and that blindly handing them out like candy is fraught with risk, but that’s another issue for another day.



F5 Application Connector, Connecting and Controlling Cloud Apps

Category : F5

Applications are moving to public clouds. Maybe not as fast as the market predicted (hoped?) in its early years, but they are moving nonetheless. Our own State of Application Delivery surveys tell us that 1 in 5 respondents planned to have over 50% of their application portfolio in “the cloud.” And while we’re still seeing that a lot of “the cloud” is private and on-premises, there is ample proof that public cloud is growing. Some of the challenges cited by respondents still revolve around security, specifically the ability to provide the same level of security off-premises, in the cloud, as they do now on-premises, in the data center.

The thing is that those services organizations use now to secure, scale, and speed up applications aren’t going away. In fact, 39% of respondents in our survey declared they would not deploy an application without security services like web application and network firewalls, DDoS attack protection, IPS/IDS, and anti-bad-things-that-infect-our-networks.

The challenge is that some of these services are not available in the public cloud, and some that are available turn out to be shallow imitations of the more robust and capable enterprise-deployed services in use today. Purely public cloud models aren’t designed to allow the kind of control over network and application services required, after all, which makes parity difficult for providers to achieve.

Yet customers want to take advantage of public cloud, especially for new and disposable applications.

Cloud interconnects – or colo cloud if you prefer – were designed with just this scenario in mind. At the cloud edge, at the interconnect provider, lies control over common services, while apps can happily live, scale, and succeed inside the public cloud. The cloud interconnect (colo cloud) is a way to equally and equitably service SaaS and applications running in a public cloud with the same services common to the enterprise data center. This is particularly useful for the web applications typically deployed in a public cloud, in terms of providing secure access via HTTPS. Whether SSL or TLS, keys and certificates must be issued, managed, and stored somewhere, and many enterprises prefer it be a single, certified location to reduce the risks of distributing such sensitive data across multiple locations.

This seems like the ideal “hybrid” data center cloudy architecture we’ve been looking for to solve the public-cloud-with-control conundrum. But of course if it were, I wouldn’t need to write a blog post, would I? The problem becomes how to “connect” the applications inside the public cloud with the common services they need back at the cloud edge in the cloud interconnect.

Say Hello to F5 Application Connector

The F5 Application Connector is a lightweight proxy instance you deploy in the public cloud. It discovers your apps and, via a secure connection back to an F5 BIG-IP deployed in your preferred cloud interconnect provider, enables app services insertion and management. That means you can provide the same security, performance, and availability services you offer on-premises, in the data center, at the cloud edge in the interconnect, while taking advantage of public cloud compute to deploy and scale web applications. Migrations between public cloud providers – or even high-availability architectures employing more than one provider – are dramatically simplified because you don’t have to migrate services and apps. The apps and an F5 Application Connector are all you need to freely move between providers without compromising or changing any of the app services you need to make sure apps are secure, fast, and available.

Because the F5 Application Connector is a proxy-based solution, no public IP addresses are needed in the public cloud environment; an organization’s risk is reduced by cutting potential points of entry, without impeding the ease of access often touted as one of the primary benefits of public cloud computing.
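The “no public IP” property falls out of the connection direction: the connector dials out to the edge and serves requests arriving back over that same channel. The sketch below illustrates the general pattern with raw sockets and an invented REGISTER framing; it is not the Application Connector’s actual protocol, and the hostname is hypothetical.

```python
# Sketch: an outbound TLS tunnel from the cloud VPC to the edge. The
# cloud instance never listens for inbound connections from the Internet.
import socket
import ssl

EDGE = ("edge.interconnect.example.com", 443)        # hypothetical edge

ctx = ssl.create_default_context()
raw = socket.create_connection(EDGE)                 # outbound only
tunnel = ctx.wrap_socket(raw, server_hostname=EDGE[0])

tunnel.sendall(b"REGISTER app=crm instance=10.0.1.7:8443\n")  # announce app
while True:
    request = tunnel.recv(65536)   # edge relays client requests inbound
    if not request:
        break                      # edge closed the tunnel
    tunnel.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
```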

It further optimizes budgets by focusing generalized public cloud compute on general-purpose application logic and taking advantage of purpose-built compute within BIG-IP at the cloud edge to perform the more complex and compute-intensive cryptographic processing associated with encrypted traffic. Inspection and termination can occur at the edge safely, allowing apps to focus on processing valuable business transactions.

F5 Application Connector provides organizations with the confidence to lift-and-shift or go native in public cloud environments by enabling a solution capable of addressing challenges with security, scale, and performance typically addressed by app services.

You can get more information here.



The Hunt for IoT

Category : F5

How in the world do Death Star-sized botnets come about? Attackers don’t possess such immense power on their own; they must commandeer it. That means they’re perpetually on the hunt for vulnerable IoT devices that they can compromise.

F5 Labs and our data partner, Loryka, have been monitoring this hunt for over a year now. In our first report, DDoS’s Newest Minions: IoT Devices, we proved what many security experts had long suspected: IoT devices were not only vulnerable, they were already being heavily exploited to pull off large, distributed denial-of-service (DDoS) attacks.

Data collected throughout the remainder of 2016 shows an even steeper growth in “the hunt” than we had imagined. The annual growth rate was 1,373%, with a clear spike in Q4—1.5 times the combined volume in Q1 through Q3. This isn’t surprising, given the timing of the Mirai botnet. And while the number of participating networks in the second half of 2016 stayed relatively flat at 10%, the number of unique IP addresses participating within those networks grew at a rate of 74%. Clearly, threat actors within the same networks have increased their activity.

Explosive Growth in IoT Attacks

So, who exactly is involved in the IoT hunt? Here are some key findings of this report:

  • Networks in China (primarily state-owned telecom companies and ISPs) headlined the threat actor list, accounting for 44% of all attacks in Q3 and 21% in Q4.
  • Trailing behind China, the top threat actors in Q3 were Vietnam and the US, and Russia and the UK in Q4. (The UK surprisingly jumped to third place in Q4 with most activity coming from an online gaming network.)
  • Russia, Spain, the US, and Turkey were the top 4 targeted countries (in that order) in Q3 and Q4.
  • Russia, at 31% in Q3 and 40% in Q4, was the number one target of all top 50 source countries.

What can concerned enterprises do to deal with the IoT threat?

  • Have a DDoS strategy that can support attack sizes beyond your network capacity.
  • Ensure all of your critical services have redundancy, even those you outsource.
  • Put pressure on IoT manufacturers to secure their products, and don’t buy products that are known to be insecure or compromised.
  • Share your knowledge—about vulnerable devices, attacks and threat actors, successful mitigation efforts, and potential solutions—with other security professionals.

To see the full version of this report, click “Download” 



Why Networks Matter to App Architecture

Category : F5

A lot of articles have been written on the topic of the sometimes tumultuous relationship between app architectures and the network. For the most part, these have focused on how changes in the app architecture impact the network and the app services used to provide speed, scale, and security. Today, however, we’re going to turn that relationship around and look at how the network has a pretty significant impact on applications and, in turn, on innovation.

I was reminded of that by a recent post on High Scalability, in which its author illustrates why the network matters and how the evolution occurred – right up to today with serverless and why it’s possible to actually consider a world in which the Internet effectively is the computer. It’s long, but a good read, and I encourage you to take some time to read through it. I’ll sum up here, but there’s a lot I’m not hitting that you’ll find interesting in the source article.

Back in the days of dial-up access to the Internet, web sites were mostly text with perhaps one or two (low-quality) images. If you wanted something interactive you fired up gopher or telnet, and used a text-based terminal. There was simply no way the last-mile over dial-up provided for anything more complex.

As the speed of dial-up increased, eventually being replaced with the first “broadband” offerings, apps started to display more images and began to break up into multiple pages. Because the network was fast enough to transmit that information without the consumer getting bored and running off to play Diablo. This pattern continued until scale became an issue. It was no longer speed holding sites back, but scale. Load balancing was suddenly a gold mine.

Network speeds continued to increase – and not just the last-mile but inside the data center and along the Internet’s backbone. Web 2.0 introduced the notion of web apps to the world, giving us responsive, interactive web sites that took advantage of the network’s ability to ensure the scale and speed of data being exchanged.

Application architectures changed because of network advances. Without speed and scale, the world of Web 2.0 would never have been born, because it simply would not have satisfied the need for speed that is innate in every consumer. But these apps were still of a traditional three-tier model, comprising a presentation layer, a logic layer, and the data layer. They were merely distributed across the Internet.

Soon after, SOA (Service Oriented Architectures for you youngins – get off my lawn, by the way) was all the rage. Using a combination of standards (SOAP, XML) and building on existing service-oriented concepts, “web services” took over. Web services and SOA introduced the concept of decomposing applications into individual services. If that sounds familiar, it should, because today we call that concept “microservices.”

The problem for web services was that XML is a beefy format and parsing it out on the client (or server) took time. Because XML was at the heart of SOA, this meant each service consumed X amount of time to exchange over the network and process. As there is a limited amount of time available in which to process a request from a consumer, this necessarily limited the number of services into which an application could be reasonably decomposed.  Two or three services were the most one could hope to achieve.
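The overhead is easy to see by comparing the same record in SOAP-era XML and in the JSON that later service APIs favored. The payloads here are illustrative; real SOAP envelopes carried namespaces, headers, and type annotations, so this understates the gap.

```python
# The same record, XML vs JSON, by byte count.
import json

xml = ("<soap:Envelope><soap:Body><getUser><id>42</id></getUser>"
       "</soap:Body></soap:Envelope>")
doc = json.dumps({"getUser": {"id": 42}})
print(len(xml), "bytes of XML vs", len(doc), "bytes of JSON")
```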

Today, the networks are faster and fatter from end to end. Data center (and cloud) networks are measured in gigabits per second, not megabits per second, and even broadband connections would put the early corporate network speeds to shame. That means faster transfers over the network. Combined with incredible increases in compute and I/O speed (because Moore’s Law is right), applications have been able to decompose into tens and even hundreds of services that can be called and executed within expected response parameters. We call these microservices.

These changes in the network have enabled modern application architectures and APIs. It’s encouraged real-time exchange of information in a way that would never have been possible in the early aughts of the century. In much the same way technology is now considered to be a key component of business strategy rather than taking on its traditionally supportive role, the network is increasingly a key component of applications. As we watch the next wave of architectures rolling in (that’s serverless), we’ll note that without a highly responsive, integrated network and app service tier providing near instantaneous response to scale and security events, such computing models are unattainable.

It’s less now about the raw speed of the network (we’re reaching the limits of the speed of light) and more about the speed with which the network can respond to events: scaling up and down, stopping an in-progress attack, or routing around problems in the network or app infrastructure. The next generation of networking is software-defined, software-driven, and software-enabling. It’s also migrating toward a scalability model that embraces a just-in-time approach, requiring nearly instantaneous reaction speeds from the services providing access, scale, and security to the services hosted in those environments.

“The network,” as we tend to refer to it, is composed of services residing in a variety of software and hardware. The ability of “the network” to respond and provide services in a just-in-time model will, in part, determine the success of these emerging application architectural models.

“The network” has never been more important than it is right now.



Where We’re Headed Now. Charting the Path for a Faster, Smarter, and Safer IoT

Category : F5

The main theme of this year’s Mobile World Congress (MWC) in Barcelona was the ongoing digital transformation toward a “connected society.” 5G will be instrumental in the adoption of Internet of Things (IoT) technologies such as smart homes, smart cities, industry applications, high-speed media delivery, traffic control, autonomous driving, and big data analytics.

At the event, MWC participants shared their forecasts for IoT adoptions and showcased demos and trials of emerging, innovative use cases.

F5 Offers IoT Insight

F5 experts in security, network functions virtualization (NFV), IoT, and 5G met with customers from around the globe, participated in multi-vendor panel discussions, and held booth demonstrations on enabling the IoT via 5G technology with NFV. F5 experts also explored the features and vulnerabilities of IoT protocol MQTT, as well as F5 security solutions for IoT protocols.

Technology demonstration focus areas included:

  • Service providers can use F5 solutions to secure, simplify, automate, and customize application and services delivery. They can also gain insights on subscriber behavior and effective management of network traffic with a wide range of policy enforcement capabilities.
  • F5 solutions enable service providers to use mobile policies and apply DNS web control filtering on a per-subscriber level. This helps service providers monetize safe-browsing and parental controls as additional packaged security features.
  • F5 BIG-IP programmability helps service providers deploy new services faster to market, and quickly spin up and spin down network services. Service providers can lower capital expenses by adding application-layer services. They can also lower operational expenses through automation and deployment of predefined or custom services.
  • F5 IoT Gateway capabilities provide a central point of control and visibility between an organization’s IoT platform and the IoT endpoint, and protect an organization’s infrastructure with behavioral DDoS attack mitigation and L4–7 application security protection.

IoT Security Concerns

A major topic of interest for attendees was IoT security. Mobile devices are increasingly being used in DDoS attacks, as was the case in the assaults on Netflix and Twitter in 2016.

To prevent DDoS attacks, F5 experts stress that service providers must collaborate with industry regulators and ecosystem partners to build secure and seamless platforms. Secure platforms will provide consistent connectivity and real-time insights to transform customer engagement and support the needs of end users. By analyzing intelligence gained from data collection, service providers can redefine their business models to tailor services for these users.

Platforms must also ensure end-to-end security with tight authentication of devices, authorization, and network-enforced policy for access to communication paths. In addition, the encryption of data for safety and privacy is critical for service providers to deliver an optimal customer experience and increase revenue streams.

Learn More

For more information on topics covered at MWC, please see the links below:

The Internet of Things: Security and Business Impacts on Service Providers – White Paper

Benefits of a Tier-1 US Operator’s NFV Implementation – Blog Post

IoT Message Protocols: The Next Security Challenge for Service Providers? – Blog Post

Hello IoT, Goodbye Security Innocence – Blog Post

Breaking Through the NFV Fog in 2017 – Contributed Article

F5 Networks Reveals New Solutions to Boost Service Provider IoT and 5G-Readiness – Press Release



6 steps to prepare your architecture for the cloud

Category : F5

Face it: most IT architectures are complicated. And if you’re considering moving to cloud, you’re right to be concerned about the vast changes that will be required of your architecture—and your organization—as you make your transition.

The good news is that if you’re like most companies, you’ve done this before. Many times. About every three to five years you overhaul your core architectures. You adjust how you deliver applications. You strive to increase performance, enhance security, and reduce costs.

The bad news is that with cloud, things will be even more complicated. You might not have control over services. You may not be able to hard code connections or do things the old way.

There will be some pain. But, like they say, “No pain, no gain,” right?

Here are six steps to get started.

1. Assess what you have

What is the state of your applications? How many do you have? How important are they to your business? What sorts of data do they hold, and—most importantly—what are the dependencies between them?

Start thinking about the categories your apps will fit into. You will have three options.

  1. Adopt SaaS
  2. Migrate to the cloud
  3. Keep them where they are

2. Decide which apps are ripe for outsourcing to SaaS

Do the easy part first. Identify your apps that are virtual commodities. You’re likely to find a lot of them. Do you really need to support your own Exchange server, your out-of-date HR system, or your homegrown sales automation tools? Are they worth the effort of your people or the OpEx you incur? If not, save yourself a lot of trouble by subscribing to a sales, HR, productivity, or other appropriate solution. Let third parties do your heavy lifting. You’ll get obvious, quick wins with SaaS.

3. Analyze and decide on the rest

Next you’ll need to assess your remaining apps and decide which to migrate to cloud and which to keep where they are.

Ask yourself the following questions: If we move app X, how many things will break? Where are the data stores? What are the dependencies? What network services are they using? Which apps require workarounds to normal procedures and protocols to make them work?

You’ll have answers to those questions for many of your apps. For others, you may not know the answers until you actually try to move them. The greater the risk of breakage and the more complicated and less known the dependencies are, the more likely you are to keep an app where it is.

As you map out these dependencies, document them. This will be useful even if only a few of your apps end up in the cloud.
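If it helps, the assessment in steps 1 through 3 can be as simple as a table of apps and dependencies plus a few bucketing rules. The fields, app names, and dependency threshold below are illustrative assumptions, not a formal methodology.

```python
# Bucketing apps into the three options from step 1.
apps = {
    "exchange":          {"commodity": True,  "dependencies": []},
    "storefront":        {"commodity": False, "dependencies": ["inventory", "payments"]},
    "mainframe-billing": {"commodity": False,
                          "dependencies": ["storefront", "ledger", "edi", "dw"]},
}

def categorize(app: dict) -> str:
    if app["commodity"]:
        return "adopt SaaS"           # step 2: outsource the commodities
    if len(app["dependencies"]) <= 2: # few, well-understood dependencies
        return "migrate to cloud"
    return "keep where it is"         # entangled: too risky to move (for now)

for name, meta in apps.items():
    print(f"{name:18} -> {categorize(meta)}")
```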

4. Standardize

Next, examine your app delivery policies and look for opportunities to standardize. You should have a limited number of standard load balancing policies—say 10—rather than hand-tuned configurations for every app. Determine standardized storage tiers. Define standardized network services. Talk to your developers about the benefits of standardization and gain their commitment. Make templates to help them deploy things quickly and easily.
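A sketch of what such a catalog might look like, with invented policy names and fields; the point is that developers pick from roughly ten named policies instead of hand-tuning every app.

```python
# A catalog of named, standardized delivery policies.
STANDARD_POLICIES = {
    "web-standard":    {"algorithm": "least_connections", "health_check": "http",
                        "tls_offload": True,  "timeout_s": 30},
    "api-low-latency": {"algorithm": "fastest_response",  "health_check": "http",
                        "tls_offload": True,  "timeout_s": 5},
    "batch-internal":  {"algorithm": "round_robin",       "health_check": "tcp",
                        "tls_offload": False, "timeout_s": 300},
}

def policy_for(profile: str) -> dict:
    """Developers pick from the catalog rather than inventing a config."""
    return STANDARD_POLICIES[profile]

print(policy_for("web-standard"))
```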

5. Simplify and secure access

Ask yourself who is going to be accessing each app and from where. You have to plan for user behavior, connectivity, and appropriate bandwidth. Many of the applications that you seek to move to the cloud—whether private or public—may need to be more readily accessible from anywhere. Moving them to the cloud will place less stress on the infrastructure.

There are also authentication and security issues; most businesses have traditionally used network rather than app controls to determine access. In a public cloud, you may need new access technologies—gateways that determine access in ways that simply didn’t exist before.

6. Plan your architecture

When you go to the cloud, the architecture will be different because the constructs aren’t static. For monolithic applications like databases, the mechanisms that were formerly tied to specific IP addresses or other constant constructs won’t work in the cloud. You may need additional load balancers or proxies that will help provide consistency in an environment that is always changing. Make additional points of control so you can ensure that everyone can access your apps consistently and without disruption.

“Lift and shift” isn’t easy

This is hard stuff. As we said at the beginning, IT architectures are complicated.

While it may not be easy, it’s worthwhile—for the cost savings (OpEx and CapEx) and scalability alone. And some enterprises have achieved massive savings just by preparing for cloud. By assessing your existing app inventories, analyzing dependencies, documenting everything, and standardizing and simplifying as much as possible, you’ll be in the perfect position to decide what to move and what not to move.

