Monthly Archives: June 2017


How Should We Think About Securing Critical Infrastructure?

Category : Sentinel One

In the first part of the afternoon panel discussion, General Michael V. Hayden, Former Director of the CIA and the NSA, Dr. Douglas Maughan, Division Director, Cybersecurity Division, DHS/S&T/HSARPA, Tim Conway, Technical Director – ICS & SCADA programs, SANS, Steve Orrin, Federal Chief Technologist, Intel Corp., and Jeremiah Grossman, Professional Hacker and Chief of Security Strategy, SentinelOne, explored solutions for making critical infrastructure more resilient.

Some of the questions addressed were:
– To what extent have we secured the grid infrastructure today?
– What options are available to secure the grid?
– What are the long-term solutions?
– Who is working on these solutions?
– How does the regulatory structure in the US facilitate or impair grid resilience?

Co-hosted by the Siebel Scholars program and the Siebel Energy Institute, the conference examined the frequency, nature, sources, and potential impact of cyber-attacks on U.S. critical infrastructure, with a concentration on the power grid. Learn more at http://gridcybersecurity.org/.



Decline in Rig Exploit Kit

Category : Palo Alto

Starting in April 2017, we saw a significant decrease in Rig exploit kit (EK) activity after two major campaigns, EITest and pseudo-Darkleech, stopped using EKs. Figure 1 shows the hits for the Rig EK from December 2016 through May 2017, highlighting this trend.

This blog reviews recent developments in the EITest and pseudo-Darkleech campaigns that have contributed to the current drop in Rig EK. We also explore other causes for the overall decline of EK activity as others have noted in recent reports. Finally, due to the anemic nature of today’s EK scene, we review some methods criminals are focusing on for malware distribution.


Figure 1: Hits for Rig EK from December 2016 through May 2017.

Two Major Campaigns Stop Using Rig EK

At the very end of March 2017, researchers stopped seeing indicators of the pseudo-Darkleech campaign. Pseudo-Darkleech was a long-running campaign that switched to Rig EK in September 2016. Since September 2016, pseudo-Darkleech accounted for a significant amount of Rig EK seen on a daily basis. When pseudo-Darkleech disappeared, Rig EK activity dropped approximately 50 percent from previous months.

Three to four weeks later, another long-running campaign cut back on its use of Rig EK. Near the end of April 2017, the EITest campaign began pushing tech support scams. Previously, EITest had also generated a great deal of Rig EK traffic, but as the criminals behind this activity began focusing on other techniques, Rig EK levels dropped another 50 percent in May 2017. As we enter June, EITest is primarily pushing tech support scams, and it does not appear to be utilizing EKs at this time.

Figure 2 shows the hits for Rig EK from March 1, 2017 through May 31, 2017 in more detail. Note on the chart where pseudo-Darkleech disappears and where EITest shifts focus, and the impact of each on Rig EK traffic.

Although researchers still find Rig EK activity from other campaigns like RoughTed or Seamless, recent levels are the lowest since we began tracking this EK.


Figure 2: Rig EK hits from March 1st through May 31st, 2017.

Not the Threat They Once Were

Rig is not the only EK suffering in today’s threat landscape. All EKs have been affected. So why aren’t EKs as active as they once were?

One contributing factor is that the target surface for EKs is getting smaller.

EKs typically use browser-based exploits targeting Microsoft Windows systems. They are primarily focused on Internet Explorer, Microsoft Edge, and Adobe Flash Player. EKs are largely ineffective against more popular browsers like Chrome, a product that has gone through four major version updates this year alone.

Users (potential victims) are moving to other browsers, and this has greatly reduced the number of possible targets for current EKs. As shown below in Figure 3, as of May 2017, only 19 percent of the desktop browser market was taken by Microsoft Edge and Internet Explorer 11 combined.


Figure 3: Desktop browser market share in May 2017 from NetMarketShare.com.

With a declining target base, EKs are not aging gracefully. In previous years, we saw a variety of EKs used by various campaigns. But by the end of 2015, notable EKs like Sweet Orange and Fiesta had disappeared. As 2016 progressed, other prominent EKs like Nuclear and Angler also shut down. The graveyard of expired EKs has several dozen names by now.

This lack of diversity has impacted EK development. According to Proofpoint, more than a year has passed since any major EK has featured a zero-day exploit, making EKs far less effective compared to previous years.

Furthermore, the security community has been much more active against EKs. Recent efforts by Cisco Talos in 2016 and RSA Research in 2017 have seen researchers coordinating with hosting providers to take down servers used in the domain shadowing schemes favored by EKs. The resulting setbacks have not been permanent, but they have significantly impacted operations for criminals using EKs.

Ultimately, a declining browser target base, lack of new exploits, and recent efforts by the community to fight domain shadowing have all contributed to an overall decline in EK activity.

What Are Criminals Turning To?

As EKs become less effective, criminals are focusing on other methods like malicious spam (malspam) or social engineering schemes like the HoeflerText notifications shown in Figure 4. Whether through spam or a browser popup, criminals trick potential victims into double-clicking a file that infects their computers.


Figure 4: A fake HoeflerText notification in Google Chrome that leads to malware.

In some cases, URLs redirect to an EK one day, then on following days redirect to a fake installer for something like Adobe Flash Player, as shown in Figure 5. These social engineering schemes are becoming more common, and researchers often run across them while searching for EKs.


Figure 5: Fake Flash installer distributing the same malware Rig EK did the day prior.

In some cases, criminals have turned away from malware entirely, and are focusing on apparently more lucrative activity. For example, the EITest campaign has switched to pushing tech support scams. At first, this seemed to be location-based, targeting the US and UK. However, as we go into June 2017, this type of activity is all we have found from the EITest campaign in recent days.

Figure 6 below shows a page viewed on June 7th, 2017 from a website compromised by the EITest campaign. The highlighted portion is a URL that redirects to the tech support scam website shown in Figure 7, which claims your computer has been infected.


Figure 6: Injected script in page from a site compromised by the EITest campaign.


Figure 7: The tech support scam site.

This particular campaign also plays audio that continually reinforces the same message. You cannot simply click OK or close the browser; the windows immediately reappear. To close the browser and stop the audio, you must use the task manager to kill the browser process.

These tech support scams have been so successful that they are now a constant feature of our threat landscape. The EITest campaign has been pushing them for more than a month now.

Conclusion

Although EK activity levels are down, we still see indicators of Rig and Magnitude on a near-daily basis. But EKs are a relatively minor factor in today’s threat landscape compared to social engineering schemes and malspam. Users who follow best security practices are much less likely to be affected by the EK threat.

However, this situation could change as new exploits appear and malware distribution techniques are updated. It always pays to be prepared. Threat detection and prevention solutions like the Palo Alto Networks next-generation security platform are a key part of any prevention strategy.

Source: https://researchcenter.paloaltonetworks.com/2017/06/unit42-decline-rig-exploit-kit/


Five Reasons Your Digital Experience Management Strategy Could Fail

Category : Riverbed

You can be sure your CEO has digital experience on his or her radar. According to Gartner’s 2017 CEO Survey, CEOs are more focused this year on how technology and product innovation drive company growth. Never in the history of Gartner’s CEO survey has technology ranked so high on the list of CEO priorities. So the pressure is on IT to deliver excellent digital experiences. But this is easier said than done. Here are five reasons why your digital experience management strategy could fail.

1. Application complexity

Although Gartner’s survey shows that CEOs are relying on technology to drive growth, it also shows that they rank technology impediments as the #2 internal constraint to growth. How can technology be both a driver of growth and an impediment to it? Application complexity is one major reason. Application performance management is more challenging than ever before.

  • Applications must scale based on demand and remain highly responsive 24/7 across geographies. Innovative applications interact with legacy applications, so IT must support the full portfolio—web, mobile, apps running in the cloud, on virtual infrastructure, and legacy client-server environments.
  • End users and customers no longer interact with static applications at discrete times. They interact continuously with applications whose architectures have evolved to become modular, distributed, and dynamic.

2. The expanding population of end users

End User Experience Management is also more complex. Customers aren’t the only ones whose digital experience matters. The Gartner definition of Digital Experience Monitoring also includes employees, partners, and suppliers. If that weren’t enough of a challenge, the advent of IoT requires IT to ensure an excellent digital experience for machines as well!

 

3. Different teams have different goals

According to a recent EMA Digital Experience Management report, 59% of enterprise leaders agree that IT and the business share the responsibility for Digital Experience Management. Although they share responsibility for ensuring excellent digital experience, groups within IT and the business have specific needs which vary greatly, depending on their roles.

  • Business executives must ensure they meet goals for revenue, customer satisfaction, and workforce productivity.
  • IT executives need to staff their teams efficiently to architect and support digital business initiatives, ensure technology investments are made appropriately, and hold IT vendors accountable to SLAs that meet customer objectives.
  • IT and Network Operations teams must ensure the network and infrastructure can support new services, identify and resolve issues quickly, and understand the impact of problems on digital experience.
  • DevOps teams must release new apps and digital services quickly, identify and resolve issues in test and QA, and ensure applications perform well in real-world environments.
  • Cloud architects need to plan, design, and implement the infrastructure to support new services, and scale up and down as demand changes.
  • End User Services teams require visibility into the digital experience of customers, employees, partners, and suppliers to identify and triage issues before users call to complain.

4. A variety of analytics are required to measure success

“You can’t manage what you can’t measure.” Management expert Peter Drucker’s famous saying applies equally well to tracking the success of a Digital Experience Management initiative. With varying responsibilities, each group in IT and the business requires different metrics and analytics to indicate their progress in achieving their Digital Experience Management goals.

Digital Experience Monitoring tools must therefore supply a broad set of business and technical analytics, such as application performance, network performance, infrastructure capacity analysis, and end user productivity across the extended enterprise.

5. The IT monitoring visibility gap

As IT organizations respond to CEO priorities and roll out new services to drive growth, they need a cross-domain understanding of applications, the networks and infrastructure on which they run, and the impact they have on end user experience.

But the typical enterprise has from 4 to 15 different network monitoring tools, which complicates troubleshooting, change management, and other aspects of service level management. While these tools provide insight into the performance and availability of their particular domains, they lack visibility into the actual digital experience of customers, the workforce, partners, and suppliers.

Addressing Digital Experience Management challenges

An effective Digital Experience Management approach closes this visibility gap and enables you to measure the end user experience of the entire population of end users. Each group within IT and the business gets the metrics and analytics they need to ensure a successful digital experience outcome.

When it comes to meeting or exceeding your CEO’s expectations for driving growth, the key is to ensure you have an effective Digital Experience Management strategy. Failing to do so could mean lost revenue, lost productivity, and even irreparable damage to a company brand. In the next few weeks, we’ll extend this Digital Experience Management series to show you how Riverbed SteelCentral can help.

Source: https://www.riverbed.com/blogs/five-reasons-your-digital-experience-management-strategy-could-fail.html?utm_campaign=steelcentral&utm_content=mikem&utm_medium=social&utm_source=linkedin&sf91954046=1

Author: Mike Marks



Free appliance upgrades with the Pulse Access Suite

Category : Pulse Secure

The Pulse Secure Advance Now promotion combines the high performance of the Pulse Secure Appliance with the new software intelligence of the Pulse Access Suite. Use it to securely connect mobile users to the cloud and your corporate network.

  • A free PSA300, PSA3000, PSA5000 or PSA7000 when you replace an SA, IC or MAG appliance under support and purchase a minimum number of Pulse Access Suite licenses.
  • Up to a 40% discount off the suggested list price for the purchase of Pulse Access Suite Advanced or Enterprise editions.

*This promotion is for SA, IC and MAG customers with a valid support subscription. Find the complete terms and conditions here.
Why upgrade?
  • Cloud services flexibility
  • Easy BYOD
  • Automatic compliance
  • Simple integration
  • No passwords
  • Unified policy and visibility

UPGRADE NOW!



How a Hacking Group Used Britney Spears’ Instagram to Operate a Command and Control Server

Category : McAfee

A nasty piece of malware is currently being tested by a Russian hacking group named Turla, and its trial round has been conducted in an unexpected area of the internet — the comments section of Britney Spears’ Instagram. As a matter of fact, they’re using her Instagram as a way to contact the malware’s command and control (C&C) server.

So how does Turla make this happen, exactly? Leveraging a recently discovered backdoor hidden in a fake Firefox extension, the cybercriminals instruct the malware to scroll through the comments on Spears’ photos and search for one with a specific hash value. When the malware finds the comment it was told to look for, it converts that comment into this Bitly link: http://bit.ly/2kdhuHX. The shortened link resolves to a site that’s known to be a Turla watering hole.

This way, if their attack infrastructure becomes compromised, the cybercriminals can change the C&C without having to change the malware. If the attackers want to create a new meetup location, all they have to do is delete the first infected comment and post a new one with the same hash value.
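
To make the mechanics concrete, here is a minimal Python sketch of the comment-scanning idea. Everything in it is illustrative: the hash routine, the magic value, and the marker regex are stand-ins, not Turla’s actual implementation.

import re

MAGIC_HASH = 183  # illustrative target value baked into the malware

def comment_hash(text):
    # Toy stand-in for the malware's custom hash over a comment.
    return sum(ord(ch) for ch in text) % 256

def find_c2_link(comments):
    for comment in comments:
        if comment_hash(comment) != MAGIC_HASH:
            continue
        # Reassemble characters hidden behind marker symbols into a short URL.
        hidden = "".join(re.findall(r"[#@](\w)", comment))
        return "http://bit.ly/" + hidden
    return None

Because the C&C address lives in a comment rather than in the binary, deleting that comment and posting a new one that hashes to the same value silently repoints every infected machine.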

This infected comment on Spears’ post doesn’t look exactly normal, but most people would probably think it’s just spam — unless they clicked it. If someone does in fact click on the link, they’ll be directed to the hacker group’s forum, which is where they actually infect users. For this Trojan in particular, visitors who click will be taken to a site and asked to install an extension with the benign name “HTML5 Encoder.”

The good news is — this is, after all, just a test. Plus, Firefox is said to be already working on a fix so that the extension being used won’t work anymore.

Source: https://securingtomorrow.mcafee.com/business/hacking-group-used-britney-spears-instagram-operate-command-control-server/?utm_source=RR&utm_medium=LinkedIn#sf90241661



Pushing Incapsula SIEM Logs Directly to an Amazon S3 Bucket

Category : Imperva

Incapsula allows you to push your account’s SIEM logs directly to a designated bucket in Amazon S3. Pushing your Incapsula SIEM logs to cloud storage lets you examine your log data in new ways. For example, your Incapsula SIEM logs can be combined with SIEM logs from other platforms to give you a single source of security issues across your entire tech stack.

We’ll demonstrate how to configure Incapsula to push SIEM logs to an Amazon S3 bucket by following these five major steps:

  • Step 1 – Create an Amazon S3 bucket for your Incapsula SIEM logs
  • Step 2 – Create access keys for your AWS account
  • Step 3 – Copy a test file to your Amazon S3 bucket
  • Step 4 – Check your Amazon S3 bucket for the copied test file
  • Step 5 – Configure Incapsula to push SIEM logs to Amazon S3

Step 1 – Create an Amazon S3 Bucket for Your Incapsula SIEM Logs

As a first step, let’s create a new Amazon S3 bucket to hold our Incapsula SIEM log files.

  1. Use your web browser to sign in to your AWS account and go to the AWS Management Console.
  2. Select All services > Storage > S3.
  3. Click Create bucket to start the Create bucket wizard.
  4. In the Name and region step, enter a unique Bucket name, and select the Region where you want to store your bucket. Note: You cannot use the bucket name shown in the following illustration, incapsula-siem-logs, because it has already been used. Your bucket name must be globally unique. A best practice for avoiding bucket naming issues is to use a DNS-compliant name, such as incapsula-siem-logs.company_name.com.
  5. Click Next to go to the Set properties step.
  6. Recommended: Enable logging by clicking the Disabled link and specifying a target bucket and prefix for your logs. You can choose to store your log files in the same bucket as your SIEM logs or in a separate bucket. The optional target prefix you specify can help you identify access requests to your SIEM log bucket. Access log information can be useful in security and access audits. Click Learn more for additional information.


  7. Click Next to go to the Set permissions step, and then expand the Manage users section.
  8. Under Objects and Object permissions, make sure Read and Write permissions are enabled for the account Owner, and then click Next to go to the Review step.
  9. Check your configuration settings. If you need to make changes, click the corresponding Edit. When you are satisfied with your settings, click Create bucket.

You’ve now created a bucket with the configuration you need for holding your Incapsula SIEM log files.

Step 2 – Create Access Keys for Your AWS Account

Although as the account owner you can freely copy files to and from your new S3 bucket, enabling Incapsula to programmatically write to your Amazon S3 SIEM bucket requires that you use access keys for your AWS account. You can use one of the following two options to obtain access keys:

  • Use the IAM access keys of your AWS account – You can get these access keys by signing in to your AWS account and selecting IAM.
  • Create an access key based on the IAM account – You can create an access key separate from the ones already associated with your account.

Use the following steps to create an access key for your AWS root account:

Use your AWS account email address and password to sign in to the AWS Management Console.

Note: If you previously signed in to the console with IAM user credentials, your browser might open your IAM user sign-in page. You can’t use the IAM user sign-in page to sign in with your AWS account credentials. Instead, choose Sign-in using root account credentials to go to the AWS account sign-in page.

  1. At the top left, choose Services > IAM (or, at the top right, choose your account name > My Security Credentials).
  2. Choose Continue to Security Credentials.
  3. Choose your account user name.
  4. Select the Security credentials tab.
  5. Scroll down and either use an existing access key or choose Create access key.


  6. Choose your desired action.

To create an access key:

Choose Create access key. Then save the access key ID and secret access key to a file on your computer. After you close the dialog box, you can’t retrieve this secret access key again.


  7. Make sure to copy the Access key ID and Secret access key, or choose Download .csv file.


You’ve now created an access key to use.

Step 3 – (Optional) Copy a Test File to Your Amazon S3 Bucket

At this point, it’s a good idea to make sure everything is working. You can do this by using the AWS command-line tools to copy a file from your computer to your S3 bucket. Following these steps also confirms that your AWS access key ID and secret access key are working.

  1. Install the AWS Command Line Interface. For step-by-step instructions and links to the AWS CLI for Linux, Microsoft Windows, and macOS, go to http://docs.aws.amazon.com/cli/latest/userguide/installing.html.
  2. From a command prompt, run aws configure.

Fill in the requested information as the AWS CLI prompts you for the following:

  • AWS Access Key ID – The access key ID that you generated. The access key ID is listed on the Your Security Credentials page.
  • AWS Secret Access Key – The secret key that you downloaded or copied and pasted for safekeeping. If you did not save your secret key, you cannot retrieve it from AWS – you must generate a new one.
  • Default region name – The region whose name you specified for your S3 bucket. This parameter must be specified using the region code with no spaces, such as us-west-1. For a current list of S3 region codes, go to http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region.
  • Default output format – Specify json, text, or table. For the purposes of pushing files from Incapsula, this setting does not matter.

You only need to specify these configuration parameters once per CLI installation. They remain in effect until you change them.

  3. Execute a directory listing of your bucket with the following command:
    aws s3 ls s3://bucket_name
    If successful, this command returns a list of zero or more files, depending on various settings, such as whether you have enabled access logs and whether any access has occurred that would result in log files.
  4. Copy a file to your bucket with the following command:
    aws s3 cp path_name/file_name s3://bucket_name
    If successful, this command returns the message:
    upload: path_name/file_name to s3://bucket_name/file_name

You’ve now installed and configured the AWS CLI, confirmed your AWS key ID and secret key, and copied a file from your local computer to your S3 bucket.
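
If you prefer to script this round trip rather than type the CLI commands, the following boto3 sketch performs the same upload-and-list check. The bucket and file names are placeholders; boto3 reads the credentials you saved with aws configure.

import boto3

BUCKET = "incapsula-siem-logs.example.com"  # replace with your bucket name

# The client picks up the access key ID and secret key from 'aws configure'
s3 = boto3.client("s3")

# Upload a local test file, then list the bucket to confirm it arrived
s3.upload_file("test.txt", BUCKET, "test.txt")
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"], obj["Size"])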

Step 4 – (Optional) Check Your Amazon S3 Bucket for the Copied Test File

To confirm that your file is in your S3 bucket, you can perform the following steps:

  1. Execute a directory listing of your bucket with the following command:
    aws s3 ls s3://bucket_name
    Among the list of files in your bucket, make sure that the list contains the file you copied in the previous step.
  2. Sign in to your AWS account and go to the AWS Management Console.
  3. Select All services > Storage > S3.
  4. On the Amazon S3 page, under Bucket name, click the name of the bucket you created for your Incapsula SIEM logs.
  5. Verify that the file you copied is listed.

Step 5 – Configure Incapsula to Push SIEM Logs to Amazon S3

Now that Amazon S3 is properly configured and you have your AWS access key, you’re ready to set up Incapsula to start pushing your SIEM log files to your S3 bucket.

  1. Use your web browser to go to https://my.incapsula.com/login, and then enter your Incapsula login credentials and click Sign in.
  2. Click Logs in the navigation panel.
  3. In the Logs Setup page, select Amazon S3.
  4. Enter the following:
  • AWS Access Key ID in the Access key field.
  • AWS Secret Access Key in the Secret key field.
  • Path name for your S3 bucket location in the Path field.
  5. Click Test connection to verify that all your entries are correct.

That’s all there is to configuring Incapsula to push your SIEM logs to an Amazon S3 bucket.

 

Source: https://www.incapsula.com/blog/incapsula-siem-logs-to-amazon-s3.html?utm_source=linkedin&utm_medium=organic&utm_campaign=2017_q2_siembuckets

Author: Farzam Ebadypour



Deliver cloud-based enterprise mobility management (EMM) at scale

Category : Mobile Iron

Empower employees to work faster and smarter with secure mobile productivity apps and content on any device. Reduce the risk of data loss with advanced mobile security protection extended across the entire mobile fleet. Using MobileIron Cloud-based EMM, which includes MDM, MAM, and MCM solutions, you can easily configure and secure all your mobile devices and apps in minutes.

Keep all your mobile apps and corporate data safe while freeing users to do great work on their preferred mobile devices. With advanced mobile security capabilities such as posture-based access control and selective wipe, you can prevent business data from falling into the wrong hands. MobileIron Cloud MDM is a globally available solution that supports the most stringent compliance, security, and privacy requirements in the world. As part of our commitment to trust and security, MobileIron has successfully completed a SOC 2 Type 2 assessment. In addition, the MobileIron Cloud platform has received FedRAMP Authority to Operate (ATO). FedRAMP ATO recognizes that MobileIron Cloud has passed the federal risk management process defining standard security requirements for all cloud providers.

  • Deliver MobileIron’s layered security platform through a cloud-based mobile device management console.
  • Easily distribute policies for email, Wi-Fi, VPN, user passwords, and security to mobile devices.
  • Provide secure access to key files and presentations.
  • Remotely wipe corporate data whenever a device is lost, stolen, or retired.

Get mobile users up and running in minutes

In just minutes, you can secure devices, apps, and content with the most robust, multi-OS EMM platform today. Deploy everything users need to be productive on any Android, iOS, macOS and Windows 10 device. Whether your company relies on public or custom-built in-house apps, you can push them to any device through MobileIron Cloud-based mobile device management.

  • Quickly configure and deploy mobile devices over the air with no manual intervention required.
  • Distribute and update the productivity apps employees rely on every day.
  • Support the latest OS releases, including iOS 10, Android for Work, and Windows 10 through a cloud EMM console.


Simplify the IT management experience

Enable IT admins to work more efficiently and productively while keeping all your mobile assets secure. MobileIron Cloud provides an easy-to-use dashboard that allows IT admins to easily create complex policies, delegate administrative tasks, and quickly take action based on the state of the device. You can also proactively notify users and provide a self-service portal to help employees manage common device tasks and reduce help desk tickets.

  • Provide a single, easy-to-use console that simplifies complex mobile and PC management tasks and reporting.
  • Deploy, secure, and manage apps and documents with cloud-based mobile device management.
  • Easily create policies and take action based on device compliance.
  • Get granular visibility into usage trends across your mobile deployment.

Source: https://www.mobileiron.com/en/products/emm-platform/mobileiron-cloud



FlexPod SF: A Scale-Out Converged System for the Next-Generation Data Center

Category : NetApp

Welcome to the age of scale-out converged systems—made possible by FlexPod® SF. Together, Cisco and NetApp are delivering this new FlexPod solution built architecturally for the next-generation data center. Architects and engineers are being asked to design converged systems that deliver new capabilities to match the demands of consolidating silos, expanding to web-native apps, and embracing the new modes of operations (for example, DevOps).

New Criteria for Converged Systems

Until now, converged systems have served design criteria of integration, testing, and ease of configuration, within the confines of current IT operations and staples like Fibre Channel. A new approach, however, focuses on a different set of design requirements: converged systems need to deliver on performance, agility, and value.

Enter the First Scale-Out FlexPod Solution Built on Cisco UCS and Nexus

Cisco and NetApp have teamed to deliver FlexPod SF, the world’s first scale-out converged system built on Cisco UCS servers, Cisco Nexus switching, Cisco management, VMware vSphere 6, and the newly announced NetApp® SolidFire® SF9608 nodes running the NetApp SolidFire Element® OS on Cisco’s C220 platform. The solution is designed to bring the next-generation data center to FlexPod.

SF9608 Nodes Powered by Cisco C220 and the NetApp SolidFire Element OS

The critical part of bringing FlexPod SF forward is the new NetApp SF9608 nodes. For the first time, NetApp is producing a new Cisco C220-based node appliance running the Element OS.

 

SF9608 nodes built on Cisco UCS C220 M4 SFF Rack Server have these specifications:

  • CPU: 2 x 2.6GHz CPU (E5-2640v3)
  • Memory: 256GB RAM
  • 8 x 960GB SSD drives (non-SED)
  • 6TB raw capacity (per node)

Each node has these characteristics:

  • Block storage: iSCSI-only solution
  • Per volume IOPS-based quality of service (QoS)
  • 75,000 IOPS
  • Single copy of data kept—that is, a primary and replicated copy

Users can obtain support through 888-4NetApp or Mysupport@netapp.com.

 

The key here is that it’s the same Element OS that’s nine revisions mature, born from service providers, and used by some of the biggest enterprise and telco businesses in the world. The Element OS is preconfigured on the C220 node hardware to deliver a storage node appliance just for FlexPod. Element OS 9 delivers:

  • Scale-out clustering. You can cluster a minimum of four nodes, and then add or subtract nodes as needed. You’ll get maximum flexibility with linear scale for performance and capacity, because every node brings CPU, RAM, 10Gb networking, SSD IOPS, and capacity.
  • QoS. You can control the entire cluster’s IOPS for setting minimum, maximum, and burst settings per workload to deliver mixed workloads without performance issues.
  • Automation programmability. The Element OS has a 100% exposed API, which is preferred for programming no-touch operations (see the sketch after this list).
  • Data assurance. The OS enables you to protect data from loss of drives or nodes. Recovery for a drive is 5 minutes, and less than 60 minutes for a full node failure (all without any data loss).
  • Inline efficiency. The solution is always on and inline to the data, reducing the footprint through deduplication, compression, and thin provisioning.
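
To make the QoS and API points above concrete, here is a hedged sketch of setting per-volume IOPS limits through the Element OS JSON-RPC API. The endpoint, credentials, volume ID, and QoS values are assumptions to verify against the Element API guide for your cluster.

import requests

# Assumed management endpoint and credentials -- adjust for your environment.
ELEMENT_API = "https://192.0.2.10/json-rpc/9.0"
AUTH = ("admin", "password")

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 1,  # volume to adjust
        "qos": {"minIOPS": 1000, "maxIOPS": 5000, "burstIOPS": 8000},
    },
    "id": 1,
}

# verify=False only because many clusters ship with self-signed certificates;
# use a proper CA bundle in production.
response = requests.post(ELEMENT_API, json=payload, auth=AUTH, verify=False)
print(response.json())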

The Element OS is also different from existing storage software. It’s important to understand that FlexPod SF is not a dual-controller architecture with SSD shelves; you will not need 93% of the overhead tasks.

Use Cases Delivering the Next-Generation Data Center

As you design for the next-generation data center, you’ll find requirements that are often buzzword-worthy but take on technical meaning within FlexPod SF’s delivery:

  • Agility. You’re able to respond by means of the infrastructure stack to a variety of on-demand needs for more resources, offline virtual machine (VM) or app building from infrastructure requests, and autonomous self-healing from failures or performance issues (end-to-end QoS—compute, network, storage).
  • Scalability. You gain scalability not just in size but in how you scale—with granularity, across generations of products—moving, adding, or changing resources such as the new storage nodes. FlexPod SF delivers scale in size (multi-PB, multimillions of IOPS, and so on) and gives you maximum flexibility to redeploy and adjust scale.
  • Predictability. FlexPod SF offers performance, reliability, and capabilities to deliver an SLA from compute, network, and storage via VMware so that VMs, apps, and data can be consumed without periodic delivery issues from existing infrastructure.

With the next-generation data center, IT can simplify and automate, build for “anything as a service” (XaaS), and accelerate the adoption of DevOps. FlexPod SF delivers the next-generation data center for VMware Private Clouds and gives IT and service providers the ability to deliver infrastructure as a service.

  • VMware Private Cloud. Different from server virtualization, where the focus is on virtualization of apps, integration to existing management platforms and tools, and optimization of VM density.
    • Instead of managing through a component UI, manage through the vCenter plug-in or Cisco UCS Director.
    • Move from silos to consolidated and mixed workloads through QoS.
    • Instead of configuring elements of infrastructure, automate through VMware Storage Policy-Based Management, VMware vRealize Automation, or Cisco UCS Director.
  • Infrastructure as a service. Currently, service and cloud providers take the components of FlexPod SF and deliver them as a service. With this new FlexPod solution, you’ll be able to configure multitenancy with far more elastic resources and the performance controls to construct an SLA for on-demand consumption.

FlexPod SF Cisco Validated Design

A critical part of the engineering is the Cisco Validated Design (CVD), which encompasses all the details needed from a full validation of a design. With FlexPod SF, the validation was specific to the following configuration:

 

 

As you can see, the base strength of Cisco’s UCS and Nexus platforms now combines with scale-out NetApp SF9608 nodes in a spine-leaf, 10Gb top-of-rack configuration. All of this is “new school,” and the future is now. Add CPU and RAM in small and flexible increments, along with 10Gb network and storage, 1U at a time (from a base four-node configuration).

 

Architecture and Deployment Considerations

FlexPod SF is not your average converged system. To architect and deploy, you’ll need to rethink your work—for example, helping the organization understand workload profiles to set QoS, and creating policy automation for rapid builds and self-service. Here are some considerations:

  • Current mode of operations
    • Analyze the structure of current IT operations. FlexPod SF presents the opportunity for IT or a service provider to move past complex configurations to profiles, policy automation, and self-service so VM builders and developers can operate with agility.
  • Application profiles and consolidation
    • Help organizations align known application and VM profiles to programmable settings in QoS, policies, and tools such as PowerShell.
    • Set QoS for minimum, maximum, and burst separate from capacity settings. This granularity enables architects to apply settings that will consolidate app silos and SLAs without overprovisioning hardware resources.
  • Cisco compute and network: same considerations as previous FlexPod solutions; only B Series supported at this time.
  • Storage
    • Architecting the SF9608 nodes is straightforward. With the Element OS, your design requirements are for volume capacity (GB/TB) and IOPS settings through QoS. The IOPS settings are:
      • Minimum: the key ability to deliver performance SLAs. This is delivered through the Element OS on a cluster of four or more nodes, which governs the maximum capabilities of the cluster and induces latency on workloads that trespass their QoS settings.
      • Maximum: capping a maximum IOPS of a workload.
      • Burst: over a given time, allows a workload to go past maximum if the cluster can supply the IOPS.
    • Capacity does not need to be projected for a three-to-five-year sizing as with existing storage. SF9608 nodes can be added on demand, with 1U-node granularity, as capacity and performance needs grow. Scale is linear: each node brings CPU, RAM, 10Gb networking, capacity, and IOPS.
    • Encryption is not available at this time.
    • Boot from SAN is supported.
    • You cannot field-update a C220 to become a SF9608 node.
    • There is no DC power at this time (roadmap).
  • VMware
    • In architecting for a FlexPod SF environment, focus on the move from server virtualization, where the emphasis is on consolidation ratios, integration with existing stack tools, and modernization to updated resources like all-flash storage, 10Gb networking, and faster Intel CPUs. For VMware Private Cloud environments, align all of these attributes and capabilities to an on-demand, profile-centric, policy-driven (SPBM) environment that lets VM administrators completely build VMs from vCenter or Cisco UCS Director.
    • FlexPod SF presents a new opportunity for operators. The interface for daily operations is VMware vCenter, Cisco UCS Director, or both. As you build, move, add, and change VMs, you’ll notice policies that go beyond templates. You’ll see granular capabilities to completely build all attributes of VMs. You’ll also be able to present self-service portals for developers and consumers of a VMware Private Cloud to operate with agility and achieve their missions.

Source: https://newsroom.netapp.com/blogs/flexpod-sf-a-scale-out-converged-system-for-the-next-generation-data-center/

Author: Lee Howard



GhostHook – Bypassing PatchGuard with Processor Trace Based Hooking

Category : Cyber-Ark

In this article, we’ll present a new hooking technique that we found during our research.

Hooking techniques give you control over the way an operating system or a piece of software behaves. Software that utilizes hooks includes: application security solutions, system utilities, programming tools (e.g., interception, debugging, extending software), malicious software (e.g., rootkits) and many others.

Please note, this is neither an elevation nor an exploitation technique. This technique is intended for a post-exploitation scenario in which the attacker already has control over the asset. Since malicious kernel code (rootkits) often seeks to establish persistence in unfriendly territory, stealth technology plays a fundamental role.

Technical Description

The GhostHook technique we discovered can provide malicious actors or information security products with the ability to hook almost any piece of code running on the machine.

Let’s start by explaining the primary technology involved in this technique, Intel® PT:

Intel® Processor Trace (Intel PT) is an extension of Intel® Architecture that captures information about software execution using dedicated hardware facilities that cause only minimal performance perturbation to the software being traced.

This information is collected in data packets. The initial implementations of Intel PT offer control flow tracing, which generates a variety of packets to be processed by a software decoder.

The packets include timing, program flow information (e.g. branch targets, branch taken/not taken indications) and program-induced mode related information (e.g. Intel TSX state transitions, CR3 changes). These packets may be buffered internally before being sent to the memory subsystem or another output mechanism that is available in the platform.

Debug software can process the trace data and reconstruct the program flow. Here’s a list of the change-of-flow instructions that Intel PT traces:

 

Conditional Branch: JA, JAE, JB, JBE, JC, JCXZ, JECXZ, JRCXZ, JE, JG, JGE, JL, JLE, JNA, JNAE, JNB, JNBE, JNC, JNE, JNG, JNGE, JNL, JNLE, JNO, JNP, JNS, JNZ, JO, JP, JPE, JPO, JS, JZ, LOOP, LOOPE, LOOPNE, LOOPNZ, LOOPZ
Unconditional Direct Branch: JMP (E9 xx, EB xx), CALL (E8 xx)
Indirect Branch: JMP (FF /4), CALL (FF /2)
Near Ret: RET (C3, C2 xx)
Far Transfers: INTn, INTO, IRET, IRETD, IRETQ, JMP (EA xx, FF /5), CALL (9A xx, FF /3), RET (CB, CA xx), SYSCALL, SYSRET, SYSENTER, SYSEXIT, VMLAUNCH, VMRESUME

 

Intel PT was initially released as part of the “Broadwell” (5th-generation) CPUs and was expanded in the “Skylake” (6th-generation) CPUs.

So basically, Intel PT provides low-overhead tracing of each hardware thread, implemented entirely in dedicated hardware in the CPU’s Performance Monitoring Unit (PMU). Intel PT can trace any software the CPU runs, including hypervisors (except for SGX secure containers).

This technology is primarily used for performance monitoring, diagnostic code coverage, debugging, fuzzing, malware analysis and exploit detection.

There are three types of tracing:

  1. Tracing of the entire user-mode/kernel-mode (current privilege level).
  2. Tracing a single process (Page Map Level 4).
  3. Instruction Pointer tracing, which is what we will take advantage of.

To enable tracing, all you have to do is set the proper values inside the IA32_RTIT MSRs according to the tracing type.

Although this technology can be used for legitimate, valuable purposes, one can also take advantage of the buffer-is-going-full notification mechanism to try to take control of a thread’s execution.

The basis of this proposed technique is to make the CPU branch to our piece of code. How can we achieve that with Intel PT?

  1. Allocate an extremely small buffer for the CPU’s PT packets.

This way, the CPU will quickly run out of buffer space and will jump to the PMI handler.

The PMI handler is a piece of code controlled by us and will perform the “hook”.

The PMI handler is invoked when the buffer is full or about to be full and can be registered via:

 

HalSetSystemInformation(HalProfileSourceInterruptHandler, sizeof(PMIHANDLER), (LPVOID)&hookroutine);

 

  2. Start Intel PT to trace a critical range of code in the kernel. For example, the kernel’s system-call entry point, whose address is held in the LSTAR MSR. That address can be obtained as follows:

 

ULONG64 LSTAR = ((ULONG64(*)())"\xB9\x82\x00\x00\xC0\x0F\x32\x48\xC1\xE2\x20\x48\x09\xD0\xC3")();

 

This will produce a naked function with the following instructions:
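
(Reconstructed from the byte string above; the original post shows this listing as a screenshot.)

B9 82 00 00 C0    mov  ecx, 0C0000082h   ; select the IA32_LSTAR MSR
0F 32             rdmsr                  ; EDX:EAX = MSR value
48 C1 E2 20       shl  rdx, 32           ; shift the high half into place
48 09 D0          or   rax, rdx          ; combine into a 64-bit address
C3                ret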

As mentioned, LSTAR is the kernel’s RIP SYSCALL entry (MSR entry 0xc0000082) for 64-bit software.

So basically, we will intercept the nt!KiSystemCall64 function, which is the entry point for service functions on Windows, through the end of nt!KiSystemServiceUser.

Once a user-mode (SYSCALL) or kernel-mode (Zw functions) thread branches inside this code region, the CPU will trace its execution.

  3. As mentioned before, we allocate a tiny buffer so the CPU fills it almost immediately. We can also use the PTWRITE instruction to make this more precise. PTWRITE allows us to write data to a processor trace packet, and once the buffer is full, the CPU will interrupt execution and call the PMI handler (controlled by us) in the context of the running thread. At that point it is possible to completely alter the execution context, which is exactly what one could do via traditional opcode-replacement-based patching of a given location.

Proof of Concept

The technique described above, in its current implementation, is subject to a race condition. This stack trace demonstrates hooking the service function nt!NtClose, called by a user-mode application.

The same thing can be done, for example, to an IDT routine:

Registering for the PMI is transparent to the current PatchGuard implementation. Because this technique uses hardware to gain control of a thread’s execution, and kernel code and critical kernel structures aren’t being patched, it would be extremely difficult for Microsoft to detect and defeat. Thus, the proposed method should be preferable (despite adding significant complexity to the implementation). Moreover, the suggested technique should be future-proof and reliable across kernel versions.

Microsoft’s response:

“The engineering team has finished their analysis of this report and determined that it requires the attacker already be running kernel code on the system. As such, this doesn’t meet the bar for servicing in a security update however it may be addressed in a future version of Windows. As such I’ve closed this case.”

Microsoft does not seem to realize that PatchGuard is a kernel component that should not be bypassed, since PatchGuard blocks rootkits from activities such as SSDT hooking, not from executing code in kernel-mode.

Source: https://www.cyberark.com/threat-research-blog/ghosthook-bypassing-patchguard-processor-trace-based-hooking/

Author: Kasif Dekel



Resolve security incidents quickly, efficiently and at scale

Category : FireEye

Your business is your top priority. At best, attacks are a distraction. At their worst, they can cripple your operations.

Mandiant, a FireEye company, has dedicated incident responders in over 30 countries to help you quickly investigate and thoroughly remediate attacks, so you can get back to what matters most: your business. Mandiant helps protect you with more than a decade of experience responding to thousands of incidents and conducting intrusion investigations.

Our consultants combine their expertise with industry-leading threat intelligence and network and endpoint technology to help you with a wide range of activities — from technical response to crisis management. Whether you have 1,000 or 100,000 endpoints, our consultants can be up and running in a matter of hours, analyzing your networks for malicious activity.

Source: https://www.fireeye.com/services/mandiant-incident-response.html
