Author Archives: AdminDCS


ACI Worldwide Success Story

Category : HP Security

“The Silk tools are now instrumental to our software release schedule and support is very important to us. Micro Focus support is very responsive and professional and has not let us down.”

ACI Worldwide, the Universal Payments (UP) company, powers electronic payments for over 5,100 organizations around the world. More than 1,000 of the largest financial institutions rely on ACI to execute $14 trillion each day in payments and securities.

CHALLENGE

Through its comprehensive suite of software and SaaS-based solutions, ACI delivers real-time, immediate payments capabilities and enables the industry’s most complete omni-channel payments experience. A continuous integration process ensures application testing is at the center of the software development lifecycle at ACI, and every day, over 10,000 tests are executed.

However, the lack of a centralized test repository meant that ACI didn’t have complete visibility, as James Griffiths, Automation Architect at ACI, explains: “We spent lots of time manually creating reports. The administrative overhead for reporting, assigning tests, and checking execution progress was just too high and we needed an automated solution to keep pace with the ever-changing and growing business requirements.”

SOLUTION

A thorough market review highlighted the Silk suite of products as a solution to improve test integration and management. Micro Focus Silk Test, Silk Central, and Silk Performer were soon implemented to create a streamlined, end-to-end, application testing process. The integration between the tools paid dividends straight away through the ability to integrate requirements and defects into the testing cycle; have a real-time test execution status; plan test execution and maintenance; and provide structured reporting.

Griffiths comments: “We really like the Silk Test scripting capability, which allows us to perform hands-off installs and updates to our payment solutions. Silk Performer helps us to execute multiple tests from command line by running a batch file. We have built a framework to automate the execution of load tests and the generation of custom reports to save time that can be dedicated to actual performance engineering. The Silk Performer features make it easy for us to analyze test results, create reports, and troubleshoot any errors.”

By running load and duration tests with Silk Performer, ACI can identify and fix system and code bottlenecks to ensure the application’s reliability and scalability. These non-functional requirements become critical considerations early in the software development life cycle, avoiding costly fixes later in the cycle.

Using the automation features of Silk Test, ACI can test earlier in the development cycle using a continuous integration strategy. Through early testing, application reliability and quality have increased considerably.

The partnership with Micro Focus throughout the implementation and subsequent use of the Silk solutions has been a strong one, as Griffiths adds: “The Silk tools are now key to our software release schedule and support is very important to us. Micro Focus support is very responsive and professional and has not let us down.”

RESULTS

Silk Central and Silk Test have eliminated the administrative overhead and automated test assignment and execution. A full reporting process is included.

Griffiths concludes: “We can deliver new software releases much faster using our Silk-powered testing process. We save two days of manual intervention during the install and update phase of each release. With nearly 60 releases each year, this adds up to a massive productivity gain for us; time we can now spend on developing new features and added value for our customers.”

Source: https://www.microfocus.com/success/stories/aci-worldwide/w_icid=LinkedIn&sf62924861=1

Author: JAMES GRIFFITHS



Evader by Forcepoint

Category : Uncategorized

What is Evader?

The 2017 NSS Labs NGFW Test reveals many of the leading next generation firewalls are vulnerable to Advanced Evasion Techniques (AETs) that can let exploits and malware (including aggressive ransomware attacks like WannaCry) into your network undetected.

With Evader, the world’s premier software-based testing environment for evasions, you can see how well your firewalls and intrusion prevention systems (IPSs) defend against these threats by:

  • Launching controlled AET-borne attacks at network security devices
  • Interactively combining and adjusting evasions
  • Seeing the results immediately

Note: Evader is not a hacking tool or a penetration-testing tool intended to transmit arbitrary exploits. It is offered solely for testing and should not be used against any systems outside your environment. Using AETs, Evader tests whether a known exploit can be delivered through the security devices you specify to a target host.

Schedule a Live Interactive Demo of Evader

 



The Blockchain Bubble: Identifying viable opportunities for blockchain

Category : Gemalto

Blockchain technology is popping up everywhere, from the currency market to smart contracts. The growth in the technology is evident from the investments being made; for example, PwC estimated that in the last nine months of 2016, $1.4 billion had been invested globally in blockchain startups. This stems from its potential to enable efficiencies and cost-saving opportunities by moving from today’s centralized systems to a decentralized approach. With all the hype around blockchain, companies need to cut through it and ask the question: when does blockchain actually make business sense?

Blockchain is not a silver bullet and cannot solve every problem. There is also the added complexity of managing the security of many distributed nodes, which can only be justified by the business benefits gained from using blockchain. In this webinar, we will look at a business-qualifying approach to blockchain to help you evaluate valid blockchain use cases and identify the security needs surrounding blockchain operations. Join us to learn more about:
  • Securing blockchain from the edge to the core
  • The operational benefits and pitfalls of blockchain technology
  • Our 4-step qualification process for blockchain business opportunities (a simple illustrative sketch follows this list):
    1. Is there an established business process?
    2. Are there more than 3 parties involved – i.e. is it a distributed problem?
    3. Is it important that the data being exchanged is trusted and considered to be factually accurate?
    4. Would automation improve the performance of the process?
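To make the four questions concrete, here is a tiny, illustrative sketch in Python. The function name, parameters and the “more than 3 parties” threshold are assumptions for the example, not Gemalto’s methodology; the intent is simply that all four answers should be “yes” before blockchain is worth pursuing.

```python
def blockchain_makes_sense(established_process, parties, data_trust_matters, automation_helps):
    """Return True only if all four qualification questions are answered positively."""
    return all([
        established_process,      # 1. an established business process exists
        parties > 3,              # 2. more than 3 parties, i.e. a genuinely distributed problem
        data_trust_matters,       # 3. the exchanged data must be trusted and factually accurate
        automation_helps,         # 4. automation would improve the performance of the process
    ])

# A 5-party scenario passes; a 2-party exchange does not.
print(blockchain_makes_sense(True, parties=5, data_trust_matters=True, automation_helps=True))   # True
print(blockchain_makes_sense(True, parties=2, data_trust_matters=True, automation_helps=True))   # False
```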

Live online: Sep 26, 11:00 am (United States, New York time), or on demand afterwards. Duration: 60 minutes.


The Case for Network Visibility

Category : Gigamon

As a security professional and a consumer, my ears perk up when I hear about security breaches in the daily news. My first thought is, “Has my personal data been compromised?” (most of us initially react with emotion and self-interest) and then I ponder how the solutions from my company, Gigamon, could be applied to prevent such breaches in the future. I also look at what security experts, analysts, reporters and other influencers around the industry are saying.

Companies that have suffered serious breaches have invested heavily in security. Reports I’ve seen state that, in many of these instances, significant investments have been made in firewalls, intrusion prevention systems, malware protection and a host of other security solutions. These companies are doing their best – and most organizations do – to secure business-critical data and the personally identifiable information of their customers. So why is it so hard to stop these attacks? What are cybersecurity operations teams missing? How could they rethink cybersecurity to address the modern-day threat landscape?

From my perspective, a totally new and different security approach is required, one that goes beyond the traditional “buy more tools” approach, which is not only becoming cost-prohibitive but also creates inefficiencies and hinders performance. All signs point to the fact that consistent and concerted attention to visibility, rather than prevention, is the key to robust network security.

The exponential growth of data traveling through enterprise networks means that instead of investing in more tools, organizations must invest in and implement technology that detects and analyzes data-in-motion and sends only the necessary data to the nearest available set of security tools, such as the firewall or intrusion prevention system, for processing. This type of approach levels the playing field and changes the equation from “man fighting against machine” (since the attacks are likely coming from well-appointed systems in use by hackers and nation states) to “machine vs. machine.” This approach is eloquently explained in the Defender Lifecycle Model proposed by my friend and colleague Shehzad Merchant, and is also proposed, at least in theory, by a recent Gartner research report entitled “Use a CARTA (continuous adaptive risk and trust assessment) Strategic Approach to Embrace Digital Business Opportunities in an Era of Advanced Threats.”
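As a rough illustration of “send only the necessary data to each tool,” the sketch below classifies flows and forwards to a given security tool only the traffic that tool actually needs to inspect. The flow fields, tool names and matching rules are hypothetical and greatly simplified; this is not Gigamon’s product or API, just the filtering idea.

```python
# Conceptual sketch: route each flow only to the security tools whose rules it matches.
flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9",  "port": 443, "bytes": 120_000},
    {"src": "10.0.0.7", "dst": "10.0.0.20",    "port": 445, "bytes": 90_000_000},
    {"src": "10.0.0.9", "dst": "198.51.100.4", "port": 25,  "bytes": 4_000},
]

tool_rules = {
    "ips":      lambda f: f["port"] in (25, 445),          # inspect risky protocols only
    "firewall": lambda f: not f["dst"].startswith("10."),  # only traffic leaving the LAN
}

def forward(flows, rules):
    """Return, per tool, just the flows that tool needs to see."""
    return {tool: [f for f in flows if match(f)] for tool, match in rules.items()}

for tool, selected in forward(flows, tool_rules).items():
    print(tool, [f["dst"] for f in selected])
```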

The harsh, new reality is that cyberattacks and data breaches are inevitable. And while there is not yet a perfect approach, it’s essential that enterprises shift their approach to add pervasive visibility to their traditional prevention measures – alongside detection, prediction and containment – to improve the security of their applications and the business critical and personal data traversing their network.

With detection and response integrated into security operations, today’s businesses gain a strategic advantage in the fight against the massive volume of network cyber threats that exists in this brave new world. And that is a major step forward in shifting control and advantage away from malicious attackers and back to defenders.

Source: https://blog.gigamon.com/2017/09/15/case-network-visibility/

Author: Graham Melville



Three Things about Networks That Every CIO Should Have on their Agenda

Category : Citrix

We are headed for a future in which everything will be connected to the cloud – not just traditional servers and clients, but any kind of industrial plant, building, vehicle, machinery, and device. Global always-on networking will fundamentally transform all industries. To keep up with the upcoming changes and market requirements, there are three topics every CIO, Head of IT, or Network Manager should have on their agenda:

  1. The network is the future business platform. Within the next five to ten years, business will be transformed by digital technology, on a much larger scale than seemingly possible at first glance. Everything will be part of a globally-interconnected IT infrastructure, the Internet of Things (IoT). The IoT provides a flood of sensory data to big data analytics and allows for real-time (or near real-time) interactivity. Whatever industry, the IT network will become the foundation of every business. For example, car manufacturers are preparing for a future when cars are not simply hardware that takes us from A to B, but interconnected software platforms that provide an individualized user experience to drivers. Forklift manufacturers will provide forklifts as a service with cloud-based management and fault monitoring. The list goes on and on.
  2. The network is software-defined. It is a natural mistake to think of the global networking infrastructure as just a gigantic accumulation of hardware: copper wires, fiber cables, switches, and routers. But this hardware is increasingly becoming software-defined. Software-defined networking means that data paths are no longer pre-defined connections; instead, software dynamically determines these data paths, making the network more agile. For example, branch offices used to be connected to headquarters via leased lines, complemented by some narrowband method of emergency failover. In contrast, the modern branch office communicates via multiple IP connections. A device at the branch office site uses software algorithms to decide which connection(s) to use. This way, data paths can be diversified based on economic parameters or technical necessities, such as balancing traffic loads between multiple lines. This makes the network much more powerful and cost-efficient. Ideally, the network is part of a trusted security architecture enabling user-centric policies to intelligently control and secure the different types of apps, the devices, and the end-to-end network framework. (A simplified sketch of this kind of path selection follows this list.)
  3. Network performance does not equal business performance. With all this talk about the importance of the network and new ways to improve its performance, it seems logical to assume: the faster my network, the more efficient my business processes. Unfortunately, this is not quite right. The network is simply the vehicle for data transportation between applications. It is the applications that contain the business workflows, and sometimes whole business processes. Therefore, it is critical for a successful digital transformation to have full control of how applications are delivered. For this purpose, modern enterprises deploy so-called ‘application delivery controllers’ (ADCs). These allow granular management of application availability and behavior, as well as application security and secure digital perimeter policy enforcement.
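Here is a minimal, hypothetical sketch of the kind of software-defined path selection described in point 2 above. The link names, costs, utilization figures and thresholds are illustrative assumptions, not any vendor’s actual algorithm.

```python
# Illustrative SD-WAN-style path selection for a branch office with several IP links.
links = [
    {"name": "mpls",      "cost_per_gb": 8.0,  "utilization": 0.65, "healthy": True},
    {"name": "broadband", "cost_per_gb": 1.5,  "utilization": 0.40, "healthy": True},
    {"name": "lte",       "cost_per_gb": 12.0, "utilization": 0.10, "healthy": True},
]

def pick_link(priority):
    """Critical traffic prefers the least-loaded healthy link;
    bulk traffic prefers the cheapest link that still has headroom."""
    candidates = [l for l in links if l["healthy"] and l["utilization"] < 0.9]
    if not candidates:
        raise RuntimeError("no usable link")
    if priority == "critical":
        return min(candidates, key=lambda l: l["utilization"])
    return min(candidates, key=lambda l: l["cost_per_gb"])

print(pick_link("critical")["name"])  # least-loaded link, e.g. for voice/video
print(pick_link("bulk")["name"])      # cheapest link, e.g. for backups
```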

Data is the new currency, making the unhindered flow of data an essential prerequisite in the modern enterprise. In a time of ubiquitous cloud services, global interconnectivity due to the rise of the Internet of Things, and digital transformation rapidly progressing across all industries, network infrastructure provides the very foundation of today’s – and, what’s more, future – business operation. In this scenario, solid network connectivity is the bare necessity for business. Software-defined networking is needed to make this infrastructure agile enough to swiftly adapt to changing business needs. And ADCs help to bridge the gap between the network, security, and business applications. This way, intelligent networks provide the groundwork for a successful digital transformation.

Source: https://www.citrix.com/blogs/2017/09/18/three-things-about-networks-that-every-cio-should-have-on-their-agenda/?utm_content=buffer5340f&utm_medium=Social%2Bmedia%2B-%2BOrganic&utm_source=linkedin&utm_campaign=corp%2Bsocial%2Bmarketing%2B(organic%2Bposts%2Band%2Bfeeds)

Author: Sherif Seddik



The Whys and Wherefores of Automating Privileged Tasks

Category : Cyber-Ark

A task can be defined as:

noun 1.  A piece of work that needs to be done regularly.

verb  2.  Assign a piece of work to.

IT operations teams are often inundated with menial, regular and repetitive tasks (e.g. triggering events, running daily monitoring activities and starting services) that can not only be damaging to the business when done incorrectly, but also hinder productivity. By limiting the number of tasks assigned to IT and enabling greater access to automation capabilities, performance and productivity can be significantly improved. In parallel, it’s important to protect your environment from risks such as the abuse or misuse of privileged access (insider threats), service outages caused by human error (typos) and third-party/remote vendor vulnerabilities (external threats).

Automation can be defined as:

noun 1.  The technique, method, or system of operating or controlling a process by highly automatic means, as by electronic devices, reducing human intervention to a minimum.

I recently addressed the importance of locking down the remote vendor attack pathway, as this is often an easy target for cyber attackers. By automating privileged tasks (any task to be performed by a privileged user), you can lessen potential vulnerabilities in process workflows used by internal users and remote vendors alike. Once you fully automate a privileged task, you’re not only simplifying privileged account security processes, but also helping to ensure your remote vendors (who might have access to critical servers, endpoints and applications) will not inadvertently make an error that could lead to a serious security risk.

Additionally, in the DevOps world, orchestration tools are automating tasks across workflows, taking this role from IT operations and vendors – even for some systems that are no longer in existence. In the on-premises world, organizations still rely on vendors and support staff to perform tasks on an ad-hoc, often sporadic basis. Ideally, organizations should allow all of these tasks to be performed while a complete and correlated audit trail is generated automatically.

CyberArk solutions enable audit and operations teams to monitor and record the task management and automation of related activities as well as promote user accountability across the board. Users can automate maintenance and provisioning of tasks, (re)start and stop services, and only launch the applications or clients necessary to perform the task at hand – and nothing else. Users can also automate deployments through remote SSH command execution on target systems in both on-premises and cloud environments – all while maintaining the highest security standards. This functionality enables users to place restrictions on what privileged users are allowed to do with an organization’s most critical assets.
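To make the idea of restricted remote command execution concrete, here is a minimal sketch of an allow-list wrapper around SSH. This is not CyberArk functionality; the host, credentials, task names and commands are hypothetical, and the example assumes the third-party paramiko library (pip install paramiko).

```python
import paramiko

# Only pre-approved tasks ever reach the target system; anything else is rejected.
APPROVED_COMMANDS = {
    "restart-web": "sudo systemctl restart nginx",
    "disk-usage":  "df -h /var",
}

def run_privileged_task(host, username, key_file, task_name):
    if task_name not in APPROVED_COMMANDS:
        raise PermissionError(f"task '{task_name}' is not on the approved list")
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, key_filename=key_file)
    try:
        _, stdout, stderr = client.exec_command(APPROVED_COMMANDS[task_name])
        return stdout.read().decode(), stderr.read().decode()
    finally:
        client.close()
```

In a production platform the allow-list, credentials and session recording would of course live in the vault rather than in the script itself.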

So How Does it Work?

Let’s walk through a simple example. A local Windows Server Administrator account has been on-boarded into the CyberArk Vault, and the usage of this privileged account has been limited to only a handful of allowed operations.

Full access to the server is not permitted; the user can only manage a list of services running on that server.

The user selects “Restart Service” and is then prompted to select the service to be managed, which can be pre-populated or added as a part of a drop-down list to further limit the control the user has over this account and the server.

 

After the user clicks ‘OK,’ the service will restart. Through the CyberArk Privileged Session Manager, a full audit trail is created capturing the actions completed by each privileged user. Any abnormal behavior, abuse of privileges or other privileged activity associated with that privileged task will be on record and immutably stored in a tamper-resistant vault. Sessions can be monitored in real time or later reviewed by a member of the audit team to improve security and support compliance regulations.
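A bare-bones sketch of the allow-list idea behind this walkthrough is shown below. The service names, the audit file and the use of Windows’ built-in sc command are assumptions for illustration only, not CyberArk’s implementation (which records sessions in a tamper-resistant vault rather than a local file).

```python
import datetime
import subprocess

ALLOWED_SERVICES = {"Spooler", "W3SVC"}   # stand-in for the pre-populated drop-down list
AUDIT_LOG = "task_audit.log"              # stand-in for the tamper-resistant vault

def audit(user, service, outcome):
    with open(AUDIT_LOG, "a") as log:
        log.write(f"{datetime.datetime.utcnow().isoformat()} {user} {service} {outcome}\n")

def restart_service(user, service):
    """Restart a Windows service, but only if it is on the approved list."""
    if service not in ALLOWED_SERVICES:
        audit(user, service, "DENIED")
        raise PermissionError(f"{service} is not an approved target")
    subprocess.run(["sc", "stop", service], check=False)
    result = subprocess.run(["sc", "start", service], check=False)
    audit(user, service, "RESTARTED" if result.returncode == 0 else "FAILED")
```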

Whether your tasks ‘need to get done regularly’ or they’re something you ‘assign a piece of work to,’ it’s in your best interest to introduce automation controls. The example above shows how easily this can be done. Organizations today mostly exist in a ‘do more with less’ environment, so it’s a best practice to automate simple privileged tasks to keep a high level of security and enable IT operations teams to focus on workloads that deliver more value to the organization.

Learn more about privileged task automation and management by attending our webinar, “Curse of the Typo! Automate Repeated Tasks to Improve Efficiency and Reduce Risk Around User Mistakes.”  Register and select a session in your closest time zone: Americas on September 19 at 2:00 pm EST or EMEA September 14 or September 21 at 10:00 am BST.

Source: https://www.cyberark.com/blog/whys-wherefores-automating-privileged-tasks/

Author: 



Vulnerable Stuff Running in Containers Still Vulnerable Stuff

Category : F5

It has been said – and not just by me – that encrypted malicious code is still malicious code. The encryption does nothing to change that, except possibly blind security and app infrastructure to its transport through the network.

The same is true of apps and platforms running in containers. If the app is vulnerable, it is vulnerable whether it’s running atop an OS, in a virtual machine, or nowadays, in a container. If the app is vulnerable in the data center, it’s vulnerable in the cloud. And vice-versa.

Containers aren’t designed to magically protect applications. They provide some basic security at the network layer, but the network is not the application. Applications have their own attack surface, comprising their code and interfaces (APIs) as well as the protocols (HTTP, TCP) and the app stack they require. None of which changes by adding an entry to an IP table or otherwise restricting inbound requests to those coming from the ingress to the containerized environment.

The reason I bring this up is thanks to Sonatype’s 2017 DevSecOps Survey. In it, 88% of the over 2,200 respondents agreed container security was important, but only 53% leverage security products to identify vulnerable applications/OS/configurations in containers.

The first two pieces of that statistic – applications and OS – jumped out at me, because they are two of the components of a fully realized app stack that don’t necessarily change based on location or operational model (cloud, container, virtualization, etc.). An app or API with an SQLi or XSS vulnerability is not magically imbued with protection when it moves between models. That vulnerability is in the code. The same is true for platforms, which are inarguably part of the app security stack. A vulnerability in the handling of HTTP headers in Apache when running on Linux will still exist if that app is moved from a traditional, OS-based model to a containerized one.
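A tiny, self-contained illustration of the point: the string-built query below is injectable whether the script runs on bare metal, in a VM or in a container, because the flaw lives in the code itself. The table, column and input values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "' OR '1'='1"   # classic SQL injection payload

# Vulnerable: attacker input is concatenated straight into the SQL text.
rows = conn.execute(f"SELECT secret FROM users WHERE name = '{user_input}'").fetchall()
print("string-built query leaks:", rows)        # [('s3cr3t',)]

# Safer: a parameterized query treats the input strictly as data, never as SQL.
rows = conn.execute("SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query leaks:", rows)       # []
```

Packaging either version into a container image changes nothing about this behavior.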

It’s important – imperative, even – that we continue to identify vulnerabilities in the full app stack regardless of where or in what form the app is deployed.

It is also just as important to keep in place those app protections already employed for traditional apps when moving to containers. A web application firewall is just as beneficial for apps deployed in containers as it is for apps deployed in the cloud or in traditional environments.

As are the other security tools the survey found in use by respondents, such as static and real-time scanning solutions (SAST, DAST, IAST, and RASP). While web application firewall (WAF) use exceeds that of other tools, SAST and SCA (Source Code Analysis) are also common. SCA is a static means of rooting out problems before delivery. I’ll date myself and note that tools like lint fall into the SCA tool category, and while these don’t expose vulnerabilities resulting from the interaction of code (and with users) in real time, they can find some of the more common mistakes made by developers that result in memory leaks, crashes, or the infamous buffer overflow.

I know what you’re thinking. You’re thinking, “Lori, I just read Stack Overflow’s 2017 Developer Survey Results, and JavaScript is by far the number one preferred language of developers. And JavaScript is interpreted, so all this buffer overflow and memory leak stuff is just bad memories from the old days when you were coding in C/C++.”

Except that JavaScript – and other modern, interpreted languages – is ultimately implemented in a language closer to the circuit board, like C/C++. And as has been shown in the past, if one is clever enough, one can use that fact to craft an exploit of the system.

And even if that’s not a concern, there are plenty of other vulnerabilities in any code, whether introduced by the libraries used or by a misused system call that breaches security on the server side. Current surveys say 80% of apps are composed from open source components. The Sonatype survey further noted that there has been a 50% increase in verified or suspected breaches related to open source components from 2014 to 2017. Many of these components are written in languages that lend themselves to more spectacular mistakes, both because they are less controlled and because there are fewer and fewer developers proficient in those languages.

The point being that any code is prone to contain vulnerabilities. And since code is the building block of apps, which are the face of the business today, it’s important to scan and protect them no matter where or how they’re deployed.

Containers or cloud. Traditional or virtual. All applications should be scanned for vulnerabilities and protected against platform and protocol exploits. Period.

Apps should be thoroughly scanned and tested during development, and then tested again in production. Both are necessary, because the Fallacy of Decomposition tells us that introducing new components changes the baseline. New interactions can force previously undiscovered vulnerabilities to the fore.

To protect apps, consider the following:

  • Employ code and app analysis tools in development. Build them into the CI/CD pipeline if possible.
  • Test again in production, in case interaction with other components/apps surfaces issues.
  • Keep aware of protocol and platform vulnerabilities, as well as those discovered in third-party libraries you may use.
  • Integrate a web app firewall into your architecture. Even if you don’t use it in blocking mode, it is an invaluable resource in the event a protocol/platform zero-day or library vulnerability is discovered.

Stay safe!

Source: https://f5.com/about-us/blog/articles/vulnerable-stuff-running-in-containers-still-vulnerable-stuff-27580?sf113516371=1

Author: Lori MacVittie 



FireEye Uncovers CVE-2017-8759: Zero-Day Used in the Wild to Distribute FINSPY

Category : FireEye

FireEye recently detected a malicious Microsoft Office RTF document that leveraged CVE-2017-8759, a SOAP WSDL parser code injection vulnerability. This vulnerability allows a malicious actor to inject arbitrary code during the parsing of SOAP WSDL definition contents. FireEye analyzed a Microsoft Word document where attackers used the arbitrary code injection to download and execute a Visual Basic script that contained PowerShell commands.

FireEye shared the details of the vulnerability with Microsoft and has been coordinating public disclosure timed with the release of a patch to address the vulnerability and security guidance, which can be found here.

FireEye email, endpoint and network products detected the malicious documents.

Vulnerability Used to Target Russian Speakers

The malicious document, “Проект.doc” (MD5: fe5c4d6bb78e170abf5cf3741868ea4c), might have been used to target a Russian speaker. Upon successful exploitation of CVE-2017-8759, the document downloads multiple components (details follow), and eventually launches a FINSPY payload (MD5: a7b990d5f57b244dd17e9a937a41e7f5).

FINSPY malware, also reported as FinFisher or WingBird, is available for purchase as part of a “lawful intercept” capability. Based on this and previous use of FINSPY, we assess with moderate confidence that this malicious document was used by a nation-state to target a Russian-speaking entity for cyber espionage purposes. Additional detections by FireEye’s Dynamic Threat Intelligence system indicate that related activity, though potentially for a different client, might have occurred as early as July 2017.

CVE-2017-8759 WSDL Parser Code Injection

A code injection vulnerability exists in the WSDL parser module within the PrintClientProxy method (http://referencesource.microsoft.com/ – System.Runtime.Remoting/metadata/wsdlparser.cs,6111). The IsValidUrl method does not perform correct validation when provided data that contains a CRLF sequence. This allows an attacker to inject and execute arbitrary code. A portion of the vulnerable code is shown in Figure 1.


Figure 1: Vulnerable WSDL Parser

When multiple address definitions are provided in a SOAP response, the code inserts the “//base.ConfigureProxy(this.GetType(),” string after the first address, commenting out the remaining addresses. However, if a CRLF sequence is present in the additional addresses, the code following the CRLF will not be commented out. Figure 2 shows that, due to the lack of CRLF validation, a System.Diagnostics.Process.Start method call is injected. The generated code will be compiled by csc.exe of the .NET Framework, and loaded by the Office executables as a DLL.


Figure 2: SOAP definition VS Generated code
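The core of the bug is that the parser tries to neutralize everything after the first address by prefixing it with a comment marker on the same line. The sketch below is a conceptual analogue in Python (the real generated code is C#); the function, addresses and injected statement are hypothetical, and nothing attacker-controlled is executed here, only printed.

```python
def generate_stub(addresses):
    """Keep the first address as live code and push the rest into a same-line comment,
    mirroring how the WSDL parser comments out the extra SOAP addresses."""
    first, rest = addresses[0], addresses[1:]
    return 'proxy_url = "{}"  # unused alternatives: {}'.format(first, " ".join(rest))

benign = generate_stub(["http://legit.example/ws", "http://backup.example/ws"])

# An attacker-controlled address smuggles a CRLF followed by arbitrary statements;
# everything after the newline lands outside the comment and would compile as live code.
malicious = generate_stub([
    "http://legit.example/ws",
    'http://x\r\nstart_process("payload.exe")  # injected',
])

print(benign)
print(malicious)   # the injected call now sits on its own, uncommented line
```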

The In-the-Wild Attacks

The attacks that FireEye observed in the wild leveraged a Rich Text Format (RTF) document, similar to the CVE-2017-0199 documents we previously reported on. The malicious sample contained an embedded SOAP moniker to facilitate exploitation (Figure 3).


Figure 3: SOAP Moniker

The payload retrieves the malicious SOAP WSDL definition from an attacker-controlled server. The WSDL parser, implemented in System.Runtime.Remoting.ni.dll of the .NET Framework, parses the content and generates a .cs source file in the working directory. The csc.exe of the .NET Framework then compiles the generated source code into a library, namely http[url path].dll. Microsoft Office then loads the library, completing the exploitation stage. Figure 4 shows an example library loaded as a result of exploitation.


Figure 4: DLL loaded

Upon successful exploitation, the injected code creates a new process and leverages mshta.exe to retrieve an HTA script named “word.db” from the same server. The HTA script removes the source code, compiled DLL and PDB files from disk, and then downloads and executes the FINSPY malware named “left.jpg,” which, in spite of the .jpg extension and “image/jpeg” content type, is actually an executable. Figure 5 shows the details of the PCAP of this malware transfer.


Figure 5: Live requests

The malware will be placed at %appdata%\Microsoft\Windows\OfficeUpdte-KB[ 6 random numbers ].exe. Figure 6 shows the process creation chain under Process Monitor.


Figure 6: Process Created Chain

The Malware

The “left.jpg” (md5: a7b990d5f57b244dd17e9a937a41e7f5) is a variant of FINSPY. It leverages heavily obfuscated code that employs a built-in virtual machine – among other anti-analysis techniques – to make reversing more difficult. As likely another unique anti-analysis technique, it parses its own full path and searches for the string representation of its own MD5 hash. Many resources, such as analysis tools and sandboxes, rename files/samples to their MD5 hash in order to ensure unique filenames. This variant runs with a mutex of “WininetStartupMutex0”.

Conclusion

CVE-2017-8759 is the second zero-day vulnerability used to distribute FINSPY uncovered by FireEye in 2017. These exposures demonstrate the significant resources available to “lawful intercept” companies and their customers. Furthermore, FINSPY has been sold to multiple clients, suggesting the vulnerability was being used against other targets.

It is possible that CVE-2017-8759 was being used by additional actors. While we have not found evidence of this, the zero-day used to distribute FINSPY in April 2017, CVE-2017-0199, was simultaneously being used by a financially motivated actor. If the actors behind FINSPY obtained this vulnerability from the same source used previously, it is possible that source sold it to additional actors.

Acknowledgement

Thank you to Dhanesh Kizhakkinan, Joseph Reyes, FireEye Labs Team, FireEye FLARE Team and FireEye iSIGHT Intelligence for their contributions to this blog. We also thank everyone from the Microsoft Security Response Center (MSRC) who worked with us on this issue.

Source: https://www.fireeye.com/blog/threat-research/2017/09/zero-day-used-to-distribute-finspy.html

Author: Genwei Jiang, Ben Read, James T. Bennett 



Optus Fusion SD-WAN – the Future of your Network, Today

Category : Riverbed

I’ve worked in the industry for over 25 years, with roles in end-user, systems-integrator, service-provider and vendor organisations. Whichever lens I use, there are three common aspirations and four or five typical challenges end customers face when considering how to make their IT infrastructure more of a business enabler than a business overhead.

Aspirations:

  • Flexible & agile
  • Business outcome focussed
  • Lower total cost of ownership (TCO), high ROI

Most end customers need their IT systems to be flexible enough to move quickly to meet their business needs whilst at the same time having sufficient rigour around management and change control to mitigate the risk of downtime and business interruption. As we move through the cloud adoption maturity cycle, organisations have also become accustomed to consuming services on an OPEX/subscription basis, reducing the demand on CAPEX funding.

Ten or more years ago, the answer to this was IT outsourcing for some or all of the IT infrastructure: in many instances customers owned the assets themselves and engaged a third party to manage them 24×7 with proactive monitoring and management.

One of the challenges with this model is that the management services provided were really “element management”: is the system/appliance up or down, showing green or red on the network management platform, and is it backed up regularly? In addition, traditional managed service providers insist on strong change control and security risk assessment methodologies to mitigate risks. In these scenarios someone is also carrying the cost of financing the hardware/software, and the overall cost and charges often seem high for the real business value they offer.

Challenges:

  • Element management does not deliver business outcomes
  • Strong managed service change control impacts agility
  • Reduce the burden on CAPEX, aligning expenditure with incoming Revenue
  • Finance leasing increases total cost

In today’s world, element management is almost irrelevant as customers want to consume services with guaranteed performance outcomes. Remembering that customers have become accustomed to, and want, self-service access to systems, we see the dawn of the software-defined era.

Putting myself back in end user land, where I was responsible for designing and operating highly-available large enterprise networks, what would I want from a software-defined WAN (SD-WAN)?

 

  • True resilience with full carrier diversity, with end-to-end performance monitored in real time from a single common intuitive management interface
  • Integrated visibility that gives me insights into the real-time utilization of my network—understanding what users, devices and applications are consuming resources at a particular point in time
  • The ability to make global changes at the click of a button, without the risk of a CLI script containing errors, potentially taking users, sites, applications, or even worse the whole WAN down!
  • Flexibility to dynamically redirect less important traffic over alternate links to make headroom for short-term business initiatives—e.g. the CEO’s monthly all-hands webcast
  • And finally, the means to consume this on an OPEX model on a monthly basis, as an additional item on my carrier’s bill

Sounds too good to be true, right?

Two years ago that would have been the case, but here we are today with Optus Business in Australia launching their Optus Fusion SD-WAN service which does exactly that.

Am I for real? Yes. To find out more and see what is truly possible (the future of the network, here today), check out Optus Fusion SD-WAN, powered by Riverbed SteelConnect!



PCI DSS 3.1 Compliance in the Modern Data Center & Cloud: Lessons & Advice from Expert QSA Coalfire

Category : Check Point

Join Forrest McMahon from Qualified Security Assessor (QSA) Coalfire for an insightful view of PCI DSS 3.1 requirements in the face of increasingly sophisticated cyber-attacks and more complicated deployment scenarios.
These are the topics we will be looking at:
  • How to approach the task of PCI DSS 3.1 compliance
  • What impact different deployment environments (physical, virtual, cloud) have on compliance
  • What key tools & approaches can be used to streamline and ease compliance impacts

Watch NOW!

