Organizations like MailControl often discover they need to gain additional visibility into encrypted incoming and outgoing application traffic to detect potential threats or anomalies. F5 BIG-IP Virtual Edition (VE) on Amazon Web Services (AWS) delivers an advanced application delivery controller (ADC) that goes beyond balancing application loads, enabling inspection of inbound and outbound application traffic. Join our webinar with AWS to discover how F5 was able to help MailControl boost their visibility into the email traffic flowing through their application. By using virtualized F5 services on Amazon Web Services (AWS), the organization increased its application monitoring capabilities and improved security for its customers, while simultaneously automating processes to support its agile DevOps process.
The Internet of Things (IoT) and, specifically, the hunt for exploitable IoT devices by attackers, has been a primary area of research for F5 Labs for over a year now—and with good reason. IoT devices are becoming the “cyberweapon delivery system of choice” by today’s botnet-building attackers. And, why not? There are literally billions of them in the world, most of which are readily accessible (via Telnet) and easily hacked (due to lack of security controls). Why would attackers rent expensive resources in hosting environments to build their botnets when so many devices are “free” for the taking?
Across all of our research, every indication is that today’s botnets, or “thingbots” (built exclusively from IoT devices), will become the infrastructure for a future darknet.*
In our third semi-annual report on this topic, we continue to track Telnet attack activity and, through a series of global maps showing infected systems, we track the progression of Mirai, as well as a new thingbot called Persirai. We also include a list of the administrative credentials attackers most frequently use when launching brute force attacks against IoT devices.
Here are the key findings based on analysis of data collected between January 1 and June 30, 2017:
Telnet attack activity grew 280% from the previous period, which included massive growth due to the Mirai malware and subsequent attacks.
The level of attacking activity at the time of publishing doesn’t equate to the current size of Mirai or Persirai, indicating there are other thingbots being built that we don’t yet know about. Since there haven’t been any massive attacks post Mirai, it’s likely these thingbots are just ready and waiting to unleash their next round of attacks.
93% of this period’s attacks occurred in January and February while activity significantly declined in March through June. This could mean that the attacker “recon” phase has ended and that the “build only” phase has begun. Or, it could just be that attackers were momentarily distracted (enticed) by the Shadow Brokers’ release of EternalBlue.*
The top attacking country in this reporting period was Spain, launching 83% of all attacks, while activity from China, the top attacking country from the prior two periods, dropped off significantly, contributing less than 1% to the total attack volume. (Has China cleaned up compromised IoT systems?)
The top 10 attacking IP addresses all came from one hosting provider network in Spain: SoloGigabit.
SoloGigabit was the source of all attacks coming from Spain in this period. Given that SoloGigabit is a hosting provider with a “bulletproof” reputation, we assume this was direct threat actor traffic rather than compromised IoT devices being forced by their thingbot master to attack.
The top 50 attacking IP addresses resolve to ISP/telecom companies and hosting providers. While ISP and telecom addresses were more numerous on the top 50 list, when attack volume is tallied by industry, the overwhelming majority came from hosting providers.
Although IoT devices are known for launching DDoS attacks, they’re also being used in vigilante thingbots to take out vulnerable IoT infrastructure before it is used in attacks* and to host banking trojan infrastructure.* IoT devices have also been subject to hacktivism attacks,* and are the target of nation-state cyber warfare attacks.*
As we see in this report with Persirai, attackers are now building thingbots based on specific disclosed vulnerabilities* rather than having to launch a large recon scan followed by brute forcing credentials.
From a manufacturing and security perspective, the state of IoT devices hasn’t changed, nor did we expect it to. In the short term, IoT devices will continue to be one of the most highly exploitable tools in attackers’ cyber arsenals. We will continue to see massive thingbots being built until IoT manufacturers are forced to secure these devices, recall products, or bow to pressure from buyers who simply refuse to purchase vulnerable devices.
In the meantime, responsible organizations can do their best to protect themselves by having a DDoS strategy in place, ensuring redundancy for critical services, implementing credential stuffing solutions, and continually educating employees about the potential dangers of IoT devices and how to use them safely.
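One of the mitigations above, a credential stuffing defense, often begins with simple rate analysis of failed logins. The sketch below is illustrative only (thresholds and function names are invented, not from the report): it flags source IPs that rack up many failed attempts across many distinct usernames, the classic stuffing signature.

```python
from collections import defaultdict

# Hypothetical sketch: flag source IPs whose failed-login pattern suggests
# credential stuffing (many attempts spread across many distinct usernames).
FAIL_THRESHOLD = 20          # assumed tuning values, not from the article
DISTINCT_USER_THRESHOLD = 10

def find_stuffing_sources(failed_logins):
    """failed_logins: iterable of (source_ip, username) tuples."""
    attempts = defaultdict(list)
    for ip, user in failed_logins:
        attempts[ip].append(user)
    flagged = []
    for ip, users in attempts.items():
        # Stuffing sprays many usernames; a forgetful user retries one.
        if len(users) >= FAIL_THRESHOLD and len(set(users)) >= DISTINCT_USER_THRESHOLD:
            flagged.append(ip)
    return flagged
```

A real deployment would add time windows and feed a blocking or CAPTCHA layer, but the distinct-username signal is the core idea.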
To see the full version of this report, click “Download.”
A recent survey finds 27% of users claim poor performance is not only frustrating, but it stresses them out.
Years ago, the comedian Louis CK offered up what is now a classic lament (for some of us, anyway), which has been dubbed, “Everything is Amazing and Nobody is Happy.” If you’ve seen it, bear with me for those who haven’t. In it, Louis watches those around him become frustrated with using technology in an airplane, and points out the amusing reality of our impatience with slow WiFi onboard while flying through the air – an amazing feat of engineering in the first place.
The lament is one that perhaps those of us who cut our teeth on dial-up and inordinately slow web pages can understand. It is fascinating to watch ‘digital natives’ contort their faces with anguish when an app or web page takes more than the blink of an eye (about 400ms) to load. Perhaps the patience of us, uh, more experienced users stems from suffering through multi-hour downloads of Slackware Linux that tied up our phone lines and computers only to be corrupted five minutes from completion by digital garbage injected by call-waiting thanks to yet another urgent telemarketing call.
Youngins today have no idea how good they have it. The Internet (and technology in general) is actually amazing. And yet nobody is happy. It turns out they’re not only not happy, they are stressed out.
But we can’t go back (and honestly, who really wants to?) so all we can do is deal with the world as it is, not as it was or we’d like it to be. And that means users that are increasingly sensitive to variations in performance.
Thus, when a report like that from AppDynamics arrives detailing the devastating impact of poor performance, we ought to pay attention. Because poor performance is a serious issue that can dramatically impact your ability to enjoy a piece of the global app economy, now estimated to surpass $6 trillion in 2021.
The potential impact is not trivial. On the contrary, it turns out today’s users are more loyal to an app than they are a brand. One might then conclude that in the app economy, your app is your brand, for better or worse.
Worse, it turns out, is worser than you might expect. Poor performance not only frustrates users, it stresses them out.
Given that many of us rely on apps – both mobile and otherwise – to perform a hundred different tasks during a typical day at work, this isn’t just about your external-facing apps, either. It’s about apps designed for profit and apps designed for productivity alike.
The truth is that in the digital economy, apps are opportunity to grow the corporate domestic product (CDP) with improved productivity and increased profits through convenience. But poor performance can bring that growth to a screeching halt.
When eight of ten users have DELETED an app because of performance issues, you have lost an opportunity. The sad reality is that, based on this report, they’ve probably already gone to a competitor.
The good news is that there are a variety of app services designed to improve performance. And organizations are using them.
THE APP SERVICES ORGS USE TO IMPROVE PERFORMANCE
The problem is, of course, that you don’t always (rarely, in fact) have complete control over the performance of your app. There’s the last mile – that sinuous stretch of cable between you and the user. There’s the app platform, itself, which may or may not already be tweaked and tuned as much as it can be. If it’s in the public cloud, you have no control over the network, itself. And then there’s the app. Language choice, database connectivity, logic. The factors that contribute to poor performance are voluminous, and not always under anyone’s control. App services, which sit upstream in the data path, are able to provide an excellent counterweight to those issues bogging down app performance and, in many cases, give it a leg up to perform better than you might have hoped.
Techniques like compression and acceleration (minification, image optimization, etc.) improve performance by manipulating content to deliver it faster – inside the organization and out. Protocol-focused services like HTTP2 and “fast HTTP” focus on eliminating those pesky aspects of text-based protocols that get in the way of delivering apps faster. While HTTP2 usage remains sluggish, we found in our State of Application Delivery 2017 survey that a significant percentage of organizations (16%) planned on deploying HTTP2. Other performance-related services fared well, vying with security-related services for the title of “most likely to be deployed in the next year.”
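The payoff of compression is easy to quantify yourself. A quick sketch using only Python’s standard library (the repetitive JSON payload is illustrative; real text-heavy responses like HTML and JSON compress similarly well, which is why compression is a staple app service):

```python
import gzip

# Illustrative text payload: repetitive JSON, like many API responses.
payload = b'{"product": "widget", "price": 9.99, "in_stock": true}' * 200

compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes "
      f"({ratio:.1%} of original)")
```

Fewer bytes on the wire means fewer round trips across that last mile, which is where most user-perceived latency lives.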
SSL offload and TCP multiplexing mean servers focus on serving content, not performing cryptographic and connection-related acrobatics on every request and response, an increasingly burdensome task when serving up apps constructed from hundreds of API calls.
App services provide a robust set of options for improving the performance of apps, in the cloud or in the data center. Their focus is purely on how to make apps go faster, regardless of their location or construction.
Performance has always been important, but it’s never been quite as critical as it’s becoming in the app economy. With a lower tolerance for poor performance (one might even suggest users are highly sensitive to jitter these days) it is more important than ever to take advantage of all the tricks in your toolbox to make sure your app goes fast enough to satisfy even the most demanding of users.
If you need a test subject for that, I’ll rent you my 9 year old. If you can satisfy his twitchy app finger, you’ve got a winner.
Years ago, Forrester declared dead the old security mantra, “trust but verify,” and coined the term zero trust. The argument was that we trusted everything based on an initial successful authentication, but never really verified thereafter. As usual, buzzwords like this go through their hype cycle, starting with a lot of excitement and often not resulting in much near-term action. Zero trust and its spin-offs (the application is the new perimeter, etc.) are now gaining traction in real-world architectures and implementations. A big proponent of this security strategy, Google, has made large strides in implementing it, and has even been so kind as to publish its process for doing so. They’ve dubbed it BeyondCorp.
Google recently published a blog post to tout their 4th research paper on the topic, which described how they maintained productivity while going through the long process of migrating. To sum up the BeyondCorp architecture, there is no longer a differentiation between on-prem access and remote access…it’s just access. All authentication and access requests follow the same path through a centralized access gateway regardless of the user’s location or device. However, authentication challenges and access decisions may differ based on a number of risk factors. The gateway serves up authentication prompts, but also allows for fine-grained, attribute-based access control based on a risk profile. This provides consistency and simplicity for users, which is extremely valuable in lessening the likelihood of successful attacks (as I’ve described in past writings).
To illustrate, if a user is attempting to connect to their company’s web app that only has company announcements that may as well be public (low risk), and attempting to connect from their office desk or from their corporate-issued laptop (low risk), then maybe as a security admin I choose to only require username and password for access. But suppose the user is attempting to connect to a financial-related application (high risk), from somewhere in Russia (high risk), and from an unknown device (high risk). I, as the security admin, may have a policy on my access gateway that deems this too risky and denies access, or at the very least, prompts the user for a 2nd or even 3rd factor to verify identity. Furthermore, with fine-grained, attribute-based access control, I may decide to give access requests that match that level of risk a scaled-down form of access…maybe read-only instead of read/write.
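The risk-scoring logic described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual policy engine: the attribute names, weights, and thresholds are all invented, but the shape (attributes accumulate risk, and the total picks among allow, step-up, scaled-down access, or deny) matches the scenario.

```python
# Hypothetical weights for the risk factors from the example above.
RISK_WEIGHTS = {
    "high_risk_app": 2,      # e.g., a financial application
    "unknown_device": 2,     # vs. a corporate-issued laptop
    "unusual_location": 2,   # e.g., an unexpected country
}

def access_decision(attributes):
    """Map a set of request attributes to an access decision."""
    score = sum(RISK_WEIGHTS[a] for a in attributes if a in RISK_WEIGHTS)
    if score == 0:
        return "allow:password"        # low risk: password is enough
    if score <= 2:
        return "allow:mfa"             # moderate risk: require a 2nd factor
    if score <= 4:
        return "allow:mfa+read_only"   # scale access down to read-only
    return "deny"                      # too risky: refuse outright
```

A real gateway would compute these attributes from device posture, geolocation, and app classification rather than accept them as labels, but the decision structure is the same.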
Does any of this sound familiar to you? If you use an IDaaS service such as those from F5 partners Microsoft Azure AD, Ping, or Okta, you’re already doing this to some extent. There are other components that make up the BeyondCorp model; however, the access gateway is certainly the crux of the entire architecture. Google is now offering up its “identity-aware proxy” (IAP) for other companies to use, but it’s not without limitations. Besides only being able to use this with apps on the Google Cloud Platform (GCP), some customers noted challenges with the granularity and flexibility of controlling access, wishing for configurable session length, per-application policies rather than per GCP, and the ability to do flexible step-up 2nd-factor authentication.
Not coincidentally, F5 offers an access solution that offers these capabilities, and more. And it works for ALL of your environments rather than only apps residing within GCP. Whether completely in the cloud or in a hybrid manner, F5 offers a secure, centralized, and scalable access solution to fit any architecture. To learn more, visit f5.com/apm.
Generally speaking, the use of the term “attack” has come to mean an attempt to deny service to an organization. That’s likely because the frequency and volume of DDoS attacks have had serious consequences for high-profile organizations. The resulting spate of coverage has cemented the term ‘attack’ in most minds to mean only one kind of attack: a DDoS attempt against an organization.
But there are other attacks that come before a DDoS, and it is those we need to focus on if we’re going to start addressing the growing threat arising from the legion of “thingbots” that grows as a result of ignoring them.
Analysts and pundits alike predict rapid, nearly exponential growth in the number of things attached to our networks. The sensational splash of attackers exploiting a consumer-grade thing may make them seem more prolific, but the reality is that organizations are consuming IoT devices in copious amounts. And attacks on those devices are following suit. If you think about the most visible of these – road signs – and how often they’ve been ‘hacked’, you’ll quickly recognize just how prolific “things” truly are.
Consider a recent survey noting that the average number of devices in an organization – not a home – will double in the next two years. The same survey further notes that a mere 28% of respondents know where all those things are. That’s right, the majority of folks only know a portion of the devices and things connecting to the Internet in their organization. A 2016 SANS Institute survey focusing on the financial industry found much the same, with fewer than 40% claiming full visibility into devices – including IoT – and around half claiming at least partial visibility.
With F5 Labs research showing a staggering 1373% annual growth rate in attacks seeking those devices, one has to consider how we are currently approaching security for such a vast legion of would-be thingbots. Because, as you might recall, an Arxan/IBM survey noted: “a staggering 44% admitted they aren’t doing anything to prevent an attack. Oh, they’re concerned about a breach occurring through those apps—58% fingered IoT apps and 53% mobile—but they aren’t doing anything about it.”
Now call me crazy, but it would seem that preventing the initial ‘recruiting’ attack from succeeding would be a good place to start. Generally speaking, this means hardening the management plane by locking down SSH and telnet, and then securing any web interfaces that may be present.
That’s because the primary methods of compromising these devices remain using default passwords to gain access to their command lines or exploiting vulnerable web interfaces. That’s the purpose behind the growth in telnet scans, after all. Attackers and compromised devices scan for other devices and attempt to gain access using known defaults and then recruit the device by infecting it, too.
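You can audit your own footprint the same way attackers find it. A minimal sketch below checks hosts for an open Telnet port (23) using only the standard library; run it only against networks you administer, and note the host list and timeout are placeholders:

```python
import socket

def telnet_open(host, port=23, timeout=1.0):
    """Return True if a TCP connection to the Telnet port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(hosts):
    """Return the subset of hosts exposing Telnet -- lock these down."""
    return [h for h in hosts if telnet_open(h)]
```

Anything this trivial check finds, an Internet-wide scanner found long ago; closing or firewalling those ports removes the cheapest recruitment path.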
Paying attention to outbound traffic is important, as it may expose compromised devices as they join the legions of existing thingbots and attempt to exploit other devices outside (or inside) your network. Watching for “new” devices exhibiting unusual behavior – like excessive traffic or connection attempts – may pinpoint bad actors already in your network that need to be addressed.
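One simple way to operationalize that watching is to compare each device’s outbound connection volume against the fleet’s typical volume. The sketch below is illustrative (the median-based threshold and multiplier are invented, not a prescribed method): a compromised camera scanning the Internet will stand out by orders of magnitude.

```python
from collections import Counter

def flag_chatty_devices(flows, multiplier=10):
    """flows: iterable of source-device IDs, one entry per outbound attempt.

    Flags devices whose outbound volume dwarfs the fleet's median --
    a crude but effective tell for a device recruited into a thingbot.
    """
    counts = Counter(flows)
    if not counts:
        return []
    typical = sorted(counts.values())[len(counts) // 2]  # median volume
    return [dev for dev, n in counts.items() if n > multiplier * typical]
```

Real deployments would bucket by time window and destination diversity, but even this crude baseline surfaces the worst offenders.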
According to recent research, 94% of organizations rely on a traditional network firewall to handle IoT threats. And yet many of the threats might just be originating inside your own network from already compromised devices or via web interfaces that aren’t fully secured by just a network firewall. And given the percentage of folks who don’t know where these devices are in the first place, it’s unlikely the firewall is blocking access on a destination IP basis, and we know blocking by source IP isn’t a very successful tactic given the ease with which attackers change and distribute attacks.
So take advantage of a WAF to protect management interfaces as well as user-facing apps, shutting down attempts to exploit common web-based vulnerabilities that may provide attackers an easy route to compromise. Whether it’s a vulnerability that enables the deposit of malware or simply a means of gaining access to the command line, web-based attacks against the management interface may be the fastest route to recruiting devices into the growing thingbot army.
A WAF alone is also unlikely to catch recruiting attempts that take advantage of IoT protocols like MQTT or CoAP, where payload inspection may be required. While the majority of attacks today take advantage of protocols traditionally used for management of devices (like telnet and SSH), the threat of direct attacks on devices via MQTT is already recognized. To wit, OWASP has already begun a project to help secure IoT in much the same way it promotes web security. You may want to consider an IoT gateway to secure devices from native protocol exploitation that may lead to compromise.
In a nutshell, consider the following for securing your IoT devices:
Change default passwords (prevention)
Lock down telnet / SSH access (prevention)
Secure web interfaces (use a WAF) (prevention)
Invest in an IoT gateway (prevention)
Monitor for unusual intra-network traffic (detection)
Watch for new initiators of outbound traffic (detection)
Attacks on IoT devices seem inevitable at this point. The vast legions of these devices already connected to networks (and accessible via the Internet) are simply too inviting for attackers to ignore, given their well-known lack of attention to security. It’s important to prevent those in your network from becoming part of the problem, and that means detecting and preventing the attacks that come before THE ATTACK.
Because it’s going to be quite embarrassing if some day your own devices DDoS you.
The cloud changes nothing. The cloud changes everything. The private cloud does neither and both.
OK, enough with the Zen. The truth is, your applications running in a private cloud, of whatever flavor, still need the critical application delivery services that the network has traditionally provided: security, load balancing, high availability, and optimization. Even newer architectures such as microservices and container platforms use familiar methods to keep applications up and running. The latest languages or development techniques should still be backed up with security tools like access control and web application firewalls. In this way, nothing changes.
So, what does?
For most organizations, the private cloud changes not what we deploy, but how, and, more importantly, how fast we can deploy it. Successful private cloud implementations will deliver self-service IT and allow internal customers to use automation and infrastructure-as-code to create highly dynamic, operationally efficient application environments. The frequency of changes to the code and the infrastructure will accelerate as new services or applications are developed and delivered more efficiently. While many of the factors driving faster ‘time-to-value’ are cultural and organizational, the infrastructure must not be a roadblock. From request to implementation, there is no time for human latency. Automation tools will have to control the bulk of IT service delivery if the infrastructure is not going to be a bottleneck.
If you are moving from a ticket-based, traditional organization where changes are requested, reviewed, and implemented manually, then this transition will change a lot of your day-to-day activities. IT will have to move from being implementation-focused to designing frameworks and end-to-end service automation. Thinking about how to allow application developers or application operations (dare I say ‘DevOps’?) to provision network and application delivery services in the same manner as the rest of the stack needs to be central to deploying a private cloud platform. Stealing a phrase from a colleague: IT operations must move from being button pushers to button creators.
At a philosophical level (and, after all, that’s where this article started) what fundamentally changes is how IT operations controls the infrastructure. Think about it: the reason there was a ticket system and an operations team to make changes to the infrastructure was partially to prevent errors or misconfigurations by only having domain-experts make changes. IT operations had access to all the buttons and knew which ones to press to effect the requested change, and hopefully not break anything else in the process. Now everyone wants to press their own buttons, or have software do it for them. The logical buttons that operations create had better be safe to press. This is now where operations exert control. By creating templated, automated systems that only require the requesters to know what they want, not how to do it, and limiting their choices to secure and supportable configurations, IT grants freedom to their customers but keeps control of the infrastructure.
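The “button creator” idea can be made concrete with a parameterized service template: requesters say only what they want, and operations constrains every choice to secure, supportable configurations. The sketch below is hypothetical (tier names, settings, and the function itself are invented for illustration, not any product’s API):

```python
# Operations defines the menu; requesters can only pick from it.
ALLOWED_TIERS = {"small": 2, "medium": 4, "large": 8}   # pool member counts
ALLOWED_WAF_MODES = {"blocking", "monitoring"}

def build_lb_service(app_name, tier, waf_mode="blocking"):
    """Render a load-balancer service config from constrained inputs."""
    if tier not in ALLOWED_TIERS:
        raise ValueError(f"tier must be one of {sorted(ALLOWED_TIERS)}")
    if waf_mode not in ALLOWED_WAF_MODES:
        raise ValueError(f"waf_mode must be one of {sorted(ALLOWED_WAF_MODES)}")
    # The requester never touches these; operations keeps control.
    return {
        "app": app_name,
        "pool_members": ALLOWED_TIERS[tier],
        "waf": waf_mode,
        "tls": "required",       # non-negotiable default
        "health_check": "http",
    }
```

The requester gets self-service speed; operations gets guardrails. Invalid requests fail at template time, long before anything touches production.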
While these changes might be challenging technically and culturally, the resulting productivity and agility gains are significant enough to warrant the challenges in getting there.
The news is full of stories about Russian hacking and speculation about their motivations. These hackers have been implicated in influencing elections as well as some of the largest cybercrime heists in history. In March of 2017, then FBI Director James Comey stated, “Russian intelligence services hacked into a number of enterprises in the United States, including the Democratic National Committee.”1
Russian hacking, however, is not a new threat. Over the years, there have been quite a few infamous Russian hackers, including:
Russian Federal Security Service (FSB) officers Dmitry Dokuchaev and Igor Sushchin,2 and cyber-criminal Alexsey Belan3 who were convicted of hacking a billion Yahoo accounts beginning in 2014.
Roman Seleznev, convicted in 2016 of 38 counts of financial cybercrime for the hack of RBS Worldpay payment processor.4
Evgeniy Bogachev, wanted by the FBI as the bot master of the GameOver Zeus botnet,5 used for bank fraud and ransomware.
Sasha Panin, who hacked over a million systems, stealing credit cards and bank account credentials.6
Vladimir Drinkman, who pleaded guilty to a hacking theft of 160 million credit card numbers.7
Dmitry Sklyarov, who was charged with selling copy-protection breaking software.9
Russian hackers are indeed formidable and responsible for a fair share of cybercrime and Internet mayhem. But this raises a lot of questions: Why are they so good at hacking? What is their motivation to hack? Aren’t they worried about getting caught? Are they spies or crooks or both?
I will attempt to answer these questions with the words of real Russian hackers that I interviewed face to face, as part of an FBI undercover operation that took place in 2000.
Hacking systems and launching denial-of-service attacks was nothing new, even back in 1999. However, in the latter half of that year there was a noted uptick in high-profile cyber attacks against ISPs, e-commerce companies, and banks. Systems were brought down, and data was being stolen or erased. A hacker known as “Subbsta” and his gang were becoming known for approaching breached organizations and offering “security consulting” services to minimize further damage. In other words, cyber-extortion. Pay up or face more deleted or breached data.
Working with a victimized Seattle ISP, the FBI cooked up Operation Flyhook, a plan to lure this hacker to the United States and apprehend him. Taking a cue from the hacker’s request for an extortion payment, the FBI upped the stakes and offered the hacker Subbsta a job at a fictitious security consultancy.
Two members of the hacking gang took the bait. Alexey Ivanov “Subbsta” and Vasiliy Gorshkov “Kvakin” of Chelyabinsk, Russia flew to Seattle and were interviewed in that sting operation.
Ivanov and Gorshkov’s scheme was both elaborate and impressive: credit card numbers were stolen from PayPal accounts, acquired through well-crafted phishing attacks. Then, these credit card numbers were used by bots to purchase goods in eBay auctions. Goods purchased from eBay were shipped into Russia for resale. Ivanov and Gorshkov were not just creative in these pursuits; their hacking skills were top notch, as well. The lead forensics expert on the case told me, “These are some of the best system integrators I’ve ever seen.” In fact, Ivanov and Gorshkov were responsible for discovering and exploiting several root-level zero-day exploits. The US government would later issue a few major alerts regarding these holes.
The Operation Flyhook sting consisted of three undercover FBI agents and a civilian cyber-security expert: me. At that time, the FBI did not have deep, technical cyber-security expertise on staff as they do now. I was already working with the FBI on a few other matters, so I volunteered to assist in interviewing Ivanov and Gorshkov to establish evidence of their guilt. The interview also gave me an opportunity to understand the motivations and capabilities of the people attacking the systems I defended in my day job. It was an illuminating experience.
First of all, I expected them to be evil men; to my surprise, they were not. They were techies like me. They loved technology and enjoyed devising interesting and powerful new ways to use it. Second of all, Ivanov and Gorshkov were challenged in making use of their skills in their native country. There were no technology companies to join, and startups were impossible to fund. There weren’t very many IT jobs at all, since Russia had not adopted that kind of technology at that time.
With the limited resources of Russia—then only a decade out from the fall of the Soviet Union—talented and ambitious technologists had to teach themselves and stretch every resource they had. Gorshkov told me, “We build computers ourselves because it is much cheaper and much faster.” Those who succeeded were often highly creative and innovative and also possessed a deep understanding of technology, as is evident from my interview with them, Ivanov’s resume, and the array of hacking tools discovered during the forensic investigation.
These hackers lived where the bending and breaking of the rules was just a part of the culture. Both men were astonished at how Americans obeyed traffic rules and smoking restrictions, citing how in their country such rules are ignored. They wanted to go into business for themselves but found it difficult to do so. During the interview, Gorshkov said of his home town of Chelyabinsk, “Here, it is difficult for a person to live on honest wages.” They spent a portion of our interview talking to us about their startup business, Tech.Net.Ru. They even shared a photo of their office and equipment.
At the time, the first dot-com boom was exploding in Silicon Valley and Seattle. They wanted to be part of it, not just because of the money, but also to apply their skills and build something innovative. The underlying purpose for the hacking and extortion scheme was to raise funds until they could get their e-commerce platform off the ground. They talked about themselves as businessmen and entrepreneurs.
When asked about law enforcement response, Gorshkov joked that, “The FBI cannot get us in Russia,” which is why they only committed their crimes while in Russia.
They did express concern about being caught and “recruited” while in Russia. At first, we thought they might be referring to being recruited by traditional organized crime gangs. However, they were referring to “agencies” in Russia that would use their talents for their own ends. These agencies include the aforementioned FSB, which was involved in the recent Yahoo hack, and Russian military intelligence (the GRU). These agencies would not bother with due process or evidence when they found a hacker; rather, “they would take you” and then “you would work for them.” Ivanov and Gorshkov were talking about forced recruitment and the end of their freedom. This is a telling hint about why Russian cyber-criminals are not extradited and what becomes of them if caught. They end up working for agencies on state-sponsored hacking missions.
After the interviews, Ivanov and Gorshkov were taken offsite, arrested, and eventually convicted of their crimes.10 There were more twists and turns in Operation Flyhook. If you’re interested in more details, I recommend checking out The Lure: The True Story of How the Department of Justice Brought Down Two of The World’s Most Dangerous Cyber Criminals11 by Steve Schroeder, one of the prosecutors in the case. You’ll find Ivanov, Gorshkov, and me right there in Chapter Four.
Looking at Russian hackers in a threat profile, you can see there are really two primary branches: criminals and state hackers, with the criminals being press-ganged into doing work for Russian agencies. Things really haven’t changed; Dmitry Dokuchaev, mentioned earlier in connection with the Yahoo hack, was reportedly recruited into the FSB to avoid prosecution for fraud.12 Russian cybercriminals act as you would expect: earning a good living while trying to remain under the radar of Russian agencies and foreign law enforcement. They are extremely technologically adept and often unreachable by law enforcement. Russian hackers are also very skilled at social engineering, usually employing well-designed phishing schemes and social media decoys. They are true hackers in the original sense of the word: they will chip away at a system for as long as it takes to find a way in. And often they will succeed.
Back in 2011, Marc Andreessen famously declared that “software is eating the world.” We have seen this come to fruition, although today I would update this declaration to “SaaS is eating the world.” SaaS and the subscription-based delivery of business applications have become the preferred consumption model for most organizations. Market analyst firm IDC predicts that virtually all software vendors will have fully shifted to a SaaS delivery model by 2018.
We love our SaaS. And what’s not to love? The pay-as-you-go pricing is business-friendly. It enables velocity of scale (up or down), reduces local infrastructure footprint, lowers capital costs, yada yada yada – if you are reading this blog, you probably already know all this stuff.
But here’s the thing with SaaS: we still need to implement IT security controls. While we rely on the service provider to secure the platform, we need to ensure that access to our SaaS-delivered business apps is well protected. The threat of compromised accounts is arguably the biggest security risk in adopting public cloud SaaS offerings. We can’t have employees using weak or shared passwords for these apps, and sticky notes on a user’s desk make us cringe. However, strong password policies make life hard for employees, especially if they must change passwords regularly.
We need an identity and access management solution for cloud apps that enables strong policy without putting the administrative burden on users or IT staff. And of course, we want this delivered in an identity as a service (IDaaS) model. There are some good IDaaS offerings on the market today, like those from Ping Identity and Okta. These solutions offer SSO and SAML-based federation for cloud-based apps. Your employees simply authenticate to the IDaaS and have seamless access to all their cloud apps. Simple, easy, secure access to the cloud apps they need.
Sounds great, right? Just copy or synchronize your on-premises user directory to the IDaaS vendor’s platform, configure some SAML-enabled SaaS applications and you are ready to federate. Wait, what? Copy my directory to the cloud? Let me think about that…
We all want the simplicity and security benefits of SSO for cloud and SaaS, but keeping copies of the corporate directory on a third party’s platform is not for everyone. While I truly believe that service providers take security seriously, they can also be a frequent attack target because of the sensitive data they host. Limiting risk in the cloud just makes good security sense.
The reports of the on-premises directory’s death have been greatly exaggerated. At F5, we have customers that just don’t want to expose their directories to the public cloud. However, there is a way to get all the benefits of IDaaS without putting your directory in the IDaaS platform: SAML identity chaining. This is where the IDaaS federation identity provider (IdP) redirects to an on-premises IdP, like F5 BIG-IP APM, that has secure access to the on-premises corporate directory. Employees can be transparently authenticated against the on-premises directory, and the appropriate SAML assertion can be provided back to the IDaaS for federated SSO to SaaS apps.
This IdP chaining model also enables on-premises access policies to be extended to cloud applications. Multi-factor authentication (MFA) and context-based policy access for apps can also be added. Pretty cool, right?
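To make the chaining flow concrete, here is a minimal conceptual sketch in Python. Everything in it is hypothetical (class names, token shapes); the real exchange uses signed SAML AuthnRequests and assertions carried over HTTP redirects, not Python dicts. The point it illustrates is the trust relationship: the cloud IDaaS never sees the directory or its credentials, only the assertion handed back by the on-premises IdP.

```python
# Illustrative sketch of SAML IdP chaining. All names here are invented
# stand-ins, not a real SAML or BIG-IP APM API.

class OnPremDirectory:
    """Stand-in for the corporate directory (e.g., LDAP/AD).
    It never leaves the premises and is never copied to the cloud."""
    def __init__(self, users):
        self._users = users  # {username: password}

    def verify(self, username, password):
        return self._users.get(username) == password


class OnPremIdP:
    """Stand-in for an on-premises IdP (e.g., BIG-IP APM): authenticates
    against the local directory and issues a SAML-style assertion."""
    def __init__(self, directory):
        self._directory = directory

    def authenticate(self, username, password):
        if not self._directory.verify(username, password):
            return None
        # Real assertions are signed XML; a dict keeps the sketch simple.
        return {"subject": username, "issuer": "onprem-idp"}


class ChainedIDaaS:
    """Stand-in for the cloud IDaaS IdP: instead of holding credentials
    itself, it delegates authentication upstream (IdP chaining)."""
    def __init__(self, upstream_idp):
        self._upstream = upstream_idp

    def sso_login(self, username, password):
        # Conceptually, this step is an HTTP redirect to the on-prem IdP.
        assertion = self._upstream.authenticate(username, password)
        if assertion is None:
            return None
        # On success, the IDaaS federates the user out to SaaS apps.
        return {"subject": assertion["subject"], "issuer": "idaas",
                "chained_from": assertion["issuer"]}


directory = OnPremDirectory({"alice": "s3cret"})
idaas = ChainedIDaaS(OnPremIdP(directory))
token = idaas.sso_login("alice", "s3cret")
```

Note the design point: the `ChainedIDaaS` holds no passwords at all, so compromising the IDaaS platform yields no directory data; only the short-lived assertion crosses the boundary.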
If you are considering implementing IDaaS but have reservations about sharing your corporate directory in the cloud, IdP chaining can help ease your concerns. Most market-leading IDaaS vendors support IdP chaining and F5 BIG-IP APM has experience working with just about all of them. Go forth and IDaaS without fear…