Monthly Archives: October 2017


Code Execution Technique Takes Advantage of Dynamic Data Exchange

Category : McAfee

Email phishing campaigns are a popular social engineering technique among hackers. The idea is simple: craft an email that looks enticing to users and convince them to click on a malicious link or open a malicious attachment. Weight-loss and other health-related phishing emails are common. Package deliveries, bank notices and, in the case of spear phishing, even emails from your coworkers can contain malicious links or documents. With new research into an old Microsoft Office technology, hackers have found a technique that makes phishing all the more effective.

Microsoft’s Dynamic Data Exchange (DDE) has been around since 1987. Its purpose is to facilitate data transfers between applications that do not require ongoing user interaction. For example, if an item in a document needs up-to-date data from an external source, such as financial data, you can use DDE to kick off a process that collects that information. In Word, this could update charts and graphs of your financial data every time you open the document. DDE’s functionality has been superseded by other technologies, notably Object Linking and Embedding (OLE), released in 1990, but for compatibility reasons DDE is still supported by current Windows versions and Office products. (Microsoft advises that you disable DDE.)

On August 23, SensePost reported to Microsoft what it saw as a security concern, if not an outright vulnerability, in the behavior of DDE. Microsoft determined that DDE behaved as designed and that the issue would be considered as a candidate bug fix for a future version. On September 10, SensePost publicly disclosed its findings.

The holy grail of phishing is to gain arbitrary code execution on the target box. Once attackers can execute code, the hard part is done. At this stage they can load any malicious code onto the target box to steal data or hold it for ransom. The Necurs botnet, which delivers the notorious Locky ransomware, has implemented a DDE-based phishing attack. Necurs delivers crafted Word documents that use the DDE protocol upon loading to infect systems with Locky. (For more technical details on the Necurs botnet, please refer to the McAfee Labs Threat Report, June 2017.)

Exploiting DDE for code execution is relatively simple. Attackers insert a field and change the field code to the proper DDE syntax, using the DDEAUTO or DDE field identifier to launch an executable. A proof of concept shows PowerShell giving an attacker a great deal of control. The user is presented with two messages. The first appears innocuous: “This document contains links that may refer to other files. Do you want to update this document with the data from the linked files?” The second message, which should raise more suspicion, identifies the executable being called. However, attackers have some control over this message and may be able to reduce suspicion by changing how they launch the executable. At this time, the user must click “Yes” on both pop-ups for the attack to succeed.
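As an illustration, the publicly released proof of concept uses a field code along the following lines (our transcription of the widely published example; the benign payload here merely opens a command prompt and launches Calculator):

```
{DDEAUTO c:\\windows\\system32\\cmd.exe "/k calc.exe"}
```

In real campaigns the executable and arguments differ, typically invoking PowerShell to fetch a second stage, which is why the second warning dialog names the program being launched.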

Word documents are not the only files that support this technique. Researchers have shown that Outlook calendar invites can be crafted to exploit the DDE code execution technique, removing the need to attach a malicious file. An Outlook attack can begin when users merely open their invites.

Attackers are always innovating to trick users and bypass protections. Even old technologies can play a role in new attack techniques. It is important to educate your users about phishing attempts and to pay attention to security messages asking for access to system resources. Given the simplicity of this technique and its quick adoption by large malware players, we can assume that many new phishing campaigns will use DDE for initial code execution, delivering both known malware, such as Locky, and new malware to victims.

For technical details, see “Configuring McAfee ENS and VSE to Prevent Macroless Code Execution in Office Apps.”


Author: Charles McFarland


MobileIron Bridge

Category : Mobile Iron

Modern desktop security and management with the granular controls you need

MobileIron Bridge allows you to apply modern management techniques to the desktop use cases that need them. Now you can enjoy low-touch, agile IT operations while ensuring strong security and over-the-air management across all your modern endpoints.

Closing the EMM gap

MobileIron Bridge is the first solution to unify mobile and desktop operations for Windows 10 using a single console and communications channel so you can provision, secure, and manage Windows 10 PCs more cost-effectively and with greater agility.

  • Enforce actions from existing PowerShell scripts
  • Deploy non-MSI applications through an enterprise app store
  • Define a peripheral device
  • See the software on the device
  • Edit and manage the registry
  • Manage the file system and create desktop shortcuts
  • Determine hardware connected to the device
  • Remove bloatware from the device including system apps




NetApp HCI Officially Arrives

Category : NetApp

Today we are excited to announce that NetApp HCI is finally here and available!

Back in June, when we announced NetApp HCI at our analyst day in Boulder, CO, the hyperconverged market was projected to grow from $371.5 million in 2014 to nearly $5 billion by 2019*. Now, four months later, the hyperconverged market is expected to double within two years, reaching over $10 billion in 2021**. This clearly demonstrates that NetApp joined this rapidly growing and evolving market at the right time.

While NetApp HCI is a new product offering, it combines trusted, best-of-breed components from NetApp and VMware to deliver a true enterprise-scale hyperconverged infrastructure that addresses this evolving market. Architected on SolidFire Element OS and VMware vSphere, and fully managed by VMware vCenter, NetApp HCI provides unique capabilities such as:

Predictable Performance: Any time multiple applications share the same infrastructure, the potential exists for one application to interfere with the performance of another. NetApp HCI delivers unique Quality of Service capabilities that bound storage performance on three dimensions: minimum, maximum, and burst. This means hundreds to thousands of applications can be consolidated with predictable performance.

Flexible and Scalable: A key tenet of most, if not all, hyper converged solutions is simplicity, but that does not always mean flexibility. NetApp HCI has a node-based, shared-nothing architecture that delivers independent scaling of compute and storage resources. This avoids costly and inefficient over-provisioning and simplifies capacity and performance planning. Start small with two 2RU chassis and then scale by node. Need storage capacity or performance? Just add a storage node. Want more processing power or memory for virtualization? Simply add compute. Grow how you want. Nondisruptively.

Simple and Automated: The key to agile, responsive IT operations is automating routine tasks, eliminating the risk of user error associated with manual operations and freeing up resources to focus on driving differentiated business outcomes. The NetApp Deployment Engine (NDE) simplifies Day 0 deployment by reducing the number of manual steps from over 400 to fewer than 30. Once you have deployed NetApp HCI, direct integration with VMware vCenter lets you easily automate and manage day-to-day tasks, including hardware-level operations and alerts, from Day 1 to Day 1500 and beyond. Finally, a robust API enables seamless integration into higher-level management, orchestration, backup, and disaster recovery tools. Watch the demos below to learn more about how the NetApp Deployment Engine works and about the NetApp HCI vCenter Plugin.




NetApp Data Fabric: NetApp HCI is also an integral part of NetApp’s Data Fabric, NetApp’s vision for the future of data management, which simplifies and integrates the management of data across environments. It enables customers to respond and innovate more quickly because their data is accessible from on-premises infrastructure to the public cloud. Integration with the Data Fabric allows NetApp HCI to provide robust data services, including file services via ONTAP Select, object services via StorageGRID Webscale, replication services via SnapMirror, and backup and recovery services via AltaVault.

We look forward to existing and new customers realizing the benefits of NetApp HCI. To get a quick tour of NetApp HCI or more information, visit our website.


Author: Cynthia Goodell


Automatic Static Detection of Malicious JavaScript

Category : Palo Alto

JavaScript, alongside HTML and CSS, is considered a core technology for building web content. As an influential scripting language found nearly everywhere on the web, it gives malicious developers several unique avenues to attack unsuspecting users and infect otherwise legitimate, safe websites. There is a clear and pressing need for users of the web to be protected against such threats.

Methodologies for judging JavaScript safety can be separated into two broad categories: static and dynamic. Static analysis treats the textual information in the script as the sole source of raw data. Computation can take place on this text to calculate features, estimate probabilities and serve other functions, but no code is ever executed. Dynamic analysis, on the other hand, includes evaluation of the script through internet browser emulation. Depending on the complexity and breadth of the emulation, this has the potential to provide much more insightful information about the JavaScript’s true functionality and, thus, its safety. However, this comes at the cost of increased processing time and memory usage.

“Obfuscation” is the intentional concealing of a program’s functionality, making it difficult to interpret at a textual level. Obfuscation is a common problem for static analysis; dynamic analysis is much more effective at overcoming it. Minor obfuscation can include things like random or misleading variable names. Heavier obfuscation, however, isn’t so simple. Here is an example of a heavily obfuscated script, abridged for brevity:


As you can see, there is no way to infer the script’s functionality from a textual standpoint. Here is the original script, before obfuscation:


A human can easily interpret this original script, and there is much more readily available information about its level of safety. Note that, textually, the obfuscated and original scripts look almost nothing alike. Any text-based features extracted from the two files would likely look completely different.

It is important to note that both benign and malicious developers use obfuscation. Well-intentioned developers will often still obfuscate their scripts for the sake of privacy. This makes the automatic detection of malicious JavaScript tricky since, after a script has been obfuscated, malicious and benign code can look nearly the same. This problem is pervasive. In a randomly sampled set of 1.8 million scripts, approximately 22 percent used some significant form of obfuscation. However, in practice, we’ve found the use of obfuscation to be largely disproportionate between malicious and benign developers. In a labeled sample of about 200,000 JavaScript files, over 75 percent of known malicious scripts used obfuscation, while under 20 percent of known benign scripts used it.

A natural concern is that traditional machine learning techniques, trained on hand-engineered static textual features generated from scripts, will unintentionally become simple obfuscation detectors. Indeed, using the presence of obfuscation as a determining factor of maliciousness wouldn’t give bad results on an evenly distributed dataset, so heavily weighting obfuscation is a likely outcome of training algorithms to maximize accuracy. However, this is not desirable. As mentioned, legitimate developers use obfuscation in a completely benign manner, so obfuscated benign samples need to be rightfully classified as benign to avoid too many false positives.

Static Analysis

Extracting Hand-Engineered Features

Despite these challenges, we’ve found the use of static textual features still has the potential to perform well. Our experiments suggest static analysis can produce acceptable results with the added benefit of simplicity, speed and low memory consumption.

Our experiments took place on approximately 200,000 scripts, around 60 percent of them benign and the other 40 percent malicious. This skew in the distribution was intentional. In any natural sample taken from crawling the internet, the percentage of scripts that are malicious would be around 0.1 percent, whereas our training set uses 40 percent malicious samples. If we trained with the natural 0.1 percent distribution, accuracy alone would be misleading: a classifier that always answered “benign” wouldn’t be very useful, but it would be right 99.9 percent of the time!
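The “always benign” baseline can be verified in a few lines (an illustrative sketch with synthetic labels, not part of our pipeline):

```python
import numpy as np

# Synthetic labels at the natural ~0.1 percent malicious rate; a classifier
# that always predicts "benign" scores high accuracy but detects nothing.
rng = np.random.default_rng(0)
labels = (rng.random(1_000_000) < 0.001).astype(int)   # 1 = malicious
preds = np.zeros_like(labels)                          # always "benign"

accuracy = (preds == labels).mean()
recall = preds[labels == 1].mean()                     # malicious-class recall
print(f"accuracy={accuracy:.4f}  recall={recall:.1f}")
```

High accuracy with zero recall is exactly why a skewed training distribution and class-aware metrics are needed.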

Using a training set with more malicious samples than benign samples runs the risk of developing an obfuscation detector, since most malicious samples are obfuscated. If they represent most of the dataset, obfuscation will likely be learned as a strong predictor to get good accuracy. We instead introduced a distribution only slightly skewed toward benign samples, which forces the model to learn to better detect benign samples without overpowering the need to detect malicious samples. This also maximizes the amount of obfuscated benign samples, which we are particularly concerned with and want to train on as much as possible.

Here is a visualization of a uniformly selected, random sample of our 128-dimensional data:


In the visualization, blue is benign and red is malicious. Although the separation isn’t perfect (as is the case with any real-world data), notice the local separability between the benign and malicious clusters. This approximate visualization, created using the technique known as t-SNE, encouraged us to continue our analysis.

With a 70 percent training and 30 percent testing split, a random forest with just 25 trees achieved 98 percent total accuracy on the test set. As mentioned before, test set accuracy is not always very informative. The more interesting numbers are the 0.01 percent false positive rate and the 92 percent malicious-class recall. In plain English, this means only 0.01 percent of benign samples were wrongly classified as malicious, and out of the entire pool of malicious samples, 92 percent were correctly detected. The false positive rate was set manually by adjusting the decision threshold to meet certain quality standards. The fact that we maintained 92 percent malicious-class recall while enforcing this low false positive rate is a strong result. For comparison, typical acceptable malicious-class recall scores fall between 50 and 60 percent.
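A minimal sketch of this kind of evaluation, using scikit-learn on synthetic stand-in data (the features and any numbers it prints are illustrative, not our results):

```python
# A 25-tree random forest whose decision threshold is tuned so the
# false-positive rate on benign samples stays at or below a target, after
# which malicious-class recall is measured. Data is synthetic 128-dimensional
# noise with the malicious class shifted, standing in for real features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 5000, 128
y = (rng.random(n) < 0.4).astype(int)            # 1 = malicious, ~40% of samples
X = rng.normal(size=(n, d)) + y[:, None] * 0.8   # shift malicious class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]           # P(malicious) per script

# Strictest threshold: flag only scores above every benign test score,
# guaranteeing the benign false-positive rate stays under the target.
target_fpr = 0.0001
threshold = np.sort(scores[y_te == 0])[-1]
pred = scores > threshold
fpr = pred[y_te == 0].mean()
recall = pred[y_te == 1].mean()                  # malicious-class recall
print(f"FPR={fpr:.4f}  malicious recall={recall:.2f}")
```

In practice the threshold would be chosen on a validation set and the trade-off between false positives and recall tuned to operational requirements.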

We hypothesize that certain obfuscation software is more commonly used among benign developers than malicious developers, and vice versa. Beyond the fact that randomly generated strings all look different anyway, certain obfuscation techniques produce code that is structured, at a higher level, differently from code produced by other techniques, and distributions of characters are also likely to change with different obfuscation methods. We believe our features may be picking up on these differences to help overcome the obfuscation problem. However, we believe there is a much better solution to the problem, which we detail next.

Composite Word-Type Statistical Language Model

As opposed to hand-engineered feature extraction, a more robust and general approach to static analysis of text files is to build a statistical language model for each class. The language models, which for simplicity’s sake can be thought of as probability distributions, can then be used to estimate the likelihood that a script belongs to each class. Let’s discuss one sample methodology for building such a system, though many variations are possible.

The language model can be defined over n-grams to make use of all information in the script. More formally, we can write a script as a collection of J n-grams as such (the value of n is unimportant):


Then, we can build a malicious class model, script M, and a benign class model, script B. The weight of each n-gram in both models can be estimated from the data. Once these weights are determined, they can be used to estimate the likelihood of a script belonging to either class. One possible way to compute the weights from a set of scripts all belonging to the same class is simply to divide the number of times an n-gram appears across all scripts in the set by the total number of n-grams found in those scripts. This can be interpreted as the probability of the n-gram appearing in a script of the given class. However, because naturally common n-grams end up heavily weighted in both classes despite being uninformative, one may instead use a measurement such as term frequency-inverse document frequency (TF-IDF), a powerful statistic, commonly used in the domain of information retrieval, that helps alleviate this problem.
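As a sketch, such TF-IDF weights over character n-grams can be computed with scikit-learn (the three scripts and the parameters here are illustrative; a real system would fit one model per class):

```python
# TF-IDF weighting over character-level n-grams. Character n-grams sidestep
# tokenization problems in obfuscated code; the tiny corpus below stands in
# for the scripts of one class.
from sklearn.feature_extraction.text import TfidfVectorizer

scripts = [
    "var x = 5; alert(x);",
    "eval(unescape('%61%6c%65%72%74%28%31%29'));",
    "document.write('<script src=a.js></script>');",
]

vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
weights = vec.fit_transform(scripts)   # rows: scripts, columns: n-gram weights

print(weights.shape)                   # (3, number of distinct n-grams)
```

The per-column weights can serve directly as the class model, or as a feature vector fed to a downstream classifier.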

Once the language models have been defined, we can use them for the sake of calculating the likelihood of a script belonging to either class. If your methods of model construction build the model as a probability distribution, the following equations will do just that:



In the above, C(J) represents the true class of J: 0 for benign and 1 for malicious. The class with the highest probability is chosen as the predicted class. An entirely different variation is to use the n-gram weights as features to compute a feature vector for each script; these feature vectors can be fed into any modern machine learning algorithm to build a model that way.
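A standard form of this class-likelihood computation, assuming the class models are probability distributions over n-grams (our notation, not necessarily the original formulation), writes a script as the n-gram sequence g_1, …, g_J and factors the likelihoods as:

```latex
P\bigl(C(J) = 1 \mid J\bigr) \propto P(C = 1)\,\prod_{i=1}^{J} \mathcal{M}(g_i),
\qquad
P\bigl(C(J) = 0 \mid J\bigr) \propto P(C = 0)\,\prod_{i=1}^{J} \mathcal{B}(g_i)
```

where $\mathcal{M}(g_i)$ and $\mathcal{B}(g_i)$ are the estimated n-gram weights of the malicious and benign models, and the predicted class is whichever likelihood is larger.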

However, note that the space of possible n-grams in JavaScript code is massive, far larger than that of standard English text. This is especially true in the presence of obfuscation, since randomized strings of letters, numbers and special characters are very common. In its unaltered form, the n-gram space from which the models are built is likely too sparse to be useful. Inspired by recent research, a potential solution to this problem is to introduce what is known as composite word-type: a mapping of the original n-gram space onto a much smaller and more abstract n-gram space. Concretely, the idea is to have a predefined set of classes into which characters or character sequences can fall. Consider this string of JavaScript code as a demonstrative example:

var x = 5;

A naïve character-level unigram formation of this statement would look like this:

['v', 'a', 'r', ' ', 'x', ' ', '=', ' ', '5', ';']

Alternatively, one could define classes, such as whitespace, individual keywords, alphanumeric, digit, punctuation, etc., to reduce this level of randomness. Using those classes, the unigram formation would look like this:

['var', 'whitespace', 'alphanumeric', 'whitespace', 'punctuation', 'whitespace', 'digit', 'punctuation']

Notice that the randomness has been significantly reduced in this new space. Many possible statements, which would all look very different from the perspective of a character-level unigram, could all fit into the above abstraction. All the possible fits to the abstraction have their underlying meaning expressed while ignoring ad hoc randomness. This increases the difficulty for malicious developers to undermine the detection system since this is very robust to string randomization and variance in general.

It makes sense to have a unique class for each JavaScript keyword, since keywords are informative pieces of information that must occur in a standard form for the code to run. Other alphanumeric strings may also carry useful information, so it is not advisable to abstract every instance into one class. Instead, you might make a list of predictive keywords you expect to find and add them as classes, or derive them from the data itself: count the occurrences of alphanumeric strings across malicious and benign scripts separately, and discover which strings have the largest difference in frequency between the two.
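A mapping like this can be sketched in a few lines of Python (the keyword list and class definitions here are illustrative, not a complete JavaScript tokenizer):

```python
import re

# Illustrative composite word-type mapping: each token of a JavaScript
# statement is replaced by an abstract class, with keywords keeping their
# own identity. The keyword set below is a small illustrative subset.
JS_KEYWORDS = {"var", "let", "const", "function", "return",
               "if", "else", "for", "while", "new"}
TOKEN = re.compile(r"\s+|\d+|[A-Za-z_$][A-Za-z0-9_$]*|.")

def word_types(code):
    out = []
    for tok in TOKEN.findall(code):
        if tok.isspace():
            out.append("whitespace")
        elif tok in JS_KEYWORDS:
            out.append(tok)                    # keywords keep their identity
        elif tok.isdigit():
            out.append("digit")
        elif re.fullmatch(r"[A-Za-z_$][A-Za-z0-9_$]*", tok):
            out.append("alphanumeric")
        else:
            out.append("punctuation")
    return out

print(word_types("var x = 5;"))
# → ['var', 'whitespace', 'alphanumeric', 'whitespace', 'punctuation',
#    'whitespace', 'digit', 'punctuation']
```

N-grams are then formed over these abstract classes rather than raw characters, collapsing randomized identifiers into a single class.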

Shallow Dynamic Analysis

Despite the strong potential of static analysis, the problem is only alleviated, not completely solved. Benign, obfuscated samples remain under greater suspicion than is desirable. This is confirmed by manual inspection of false positives, which are almost all obfuscated benign samples. The only way to completely overcome obfuscation is with dynamic analysis. A shallow dynamic strategy, generally known as deobfuscation, includes a family of techniques used to evaluate and unpack obfuscated code back into its original form. More complex dynamic analysis techniques exist, typically consisting of tracking all actions taken by a script in an emulated browser environment. We won’t discuss those methods in this post, since we’re aiming to demonstrate that strong, dependable behavior can come from simpler, quicker methods.

There are many open source tools for JavaScript deobfuscation. Using these tools as a preprocessing step, on a script-by-script basis, ensures that we generate features strictly from deobfuscated script. Of course, this changes the appearance of the data and demands either a new set of textual features or recomputed statistical language models. As mentioned before, a large increase in robustness is to be expected when working with deobfuscated script rather than obfuscated script. The verbosity and detail of each script is often greatly increased, which machine learning or language models can leverage to gain better insight and make better predictions.




5 ways NFC technology is improving the sports world

Category : Gemalto

When you mention ‘NFC’ to the average American sports fan, they might not immediately think of Near Field Communication technology, a short-range wireless connectivity standard that uses magnetic field induction to enable communication between devices. Instead, they might think of the National Football Conference, which makes up one half of the NFL (the other half is the American Football Conference, known as the AFC).

The NFC is full of famous and iconic teams, such as the Dallas Cowboys, the Green Bay Packers and the San Francisco 49ers, so don’t be surprised if that’s what comes to mind first. But can NFC help the NFC? Can contactless technology improve sports in general? The answer is yes. Below is our list of the top five ways this is happening.

  1. Improving running shoes


Running is incredibly popular and a great way to keep fit. In the UK alone, there are an estimated 10.5 million or more runners (and possibly six times as many in the US); that’s a lot of running shoes in regular use! But how can NFC improve them? Adidas has found a way: it has recently put NFC chips in many of its shoe models that reveal original content when used with the Adidas app.

But there are much more advanced and ambitious plans for the technology on the way. Adidas is planning to give users the ability to send feedback directly to the company with details on how the shoes fit and perform in various conditions. These chips will give Adidas an incredible wealth of data to help develop the best possible products for the future. The more chipped shoes they sell, and the more data they receive, the more they can improve their performance running shoes.

Adidas is already using this type of data to build specific shoes for runners in different cities; it turns out runners in London are different from those in New York City. As a result, Adidas has just launched a new shoe called the AM4LDN (“Adidas Made For London”), and a shoe for Parisian runners will launch next week.

  2. Helping basketball fans engage with the game

The NBA (National Basketball Association) is taking big steps to upgrade fan experiences in 2017 by effectively bringing fans to the courtside thanks to the release of new official Nike uniforms.

The new jerseys incorporate technological innovations that may change the sports apparel industry forever. They come complete with an NFC chip that connects to your phone, offering exclusive gameday information and content such as highlight reels, real-time stats, exclusive offers, and even the players’ favorite music playlists!

These ‘smart jerseys’ will help fans have a more in-depth, interactive experience when watching the game and help them connect with the players they cheer for. They’re already available, and the timing is perfect, as the season is just beginning.

  3. Speeding up baseball stadium entry for MLB

We’ve all been there; standing in line, near the back, waiting for an eternity to get into the stadium so you can watch your favorite team play. You just want to get inside and find your seat and get ready for the action. If only there were some way we could speed up the entry process… if you’re an MLB (Major League Baseball) fan, specifically a fan of the Oakland Athletics, you’re in luck!

The team is piloting an NFC ticketing solution that allows fans to enter the stadium by tapping their iPhone (or Apple Watch) on a ticket scanner, the same way you’d use Apple Pay. This method uses the same NFC technology found in contactless rewards cards (such as Walgreens Balance Rewards) in Apple Pay.

However, this is the first time the technology is being used outside of reward cards or stored balance gift cards, so it’s a great move forward and use of NFC that will save baseball fans plenty of time.

  4. Preventing fraudulent sales of sports memorabilia

Unfortunately, fraud and sports memorabilia are regularly linked. Fake sports memorabilia is hard to avoid; all too often you see stories of someone paying hundreds (sometimes thousands) of dollars for a piece of iconic sports memorabilia, only to later discover it’s a fake.

Fortunately, NFC tags are here to help! NFL legend and former Dallas Cowboys star Emmitt Smith (who played in the NFC his entire career) has founded a startup that creates stamp-sized NFC chips that track when an item is worn or used in a game. The smart tags from PROVA do more than just track items; they also identify stolen goods, ultimately making it very difficult for anyone trying to sell property owned by the NFL.

It’s a win-win for fans and the NFL, and a considerable blow for anyone trying to steal or sell stolen sporting goods. Hopefully, this will help prevent any more fiascos such as the theft of Tom Brady’s Super Bowl jersey earlier this year.

  5. Speeding up sales in NFL stadiums

Buying food and drinks in a large sports stadium (particularly a busy NFL stadium) can be time-consuming. There are long queues, mainly due to arduous payment processes, with fans fiddling around with loose change or forgetting their PIN codes. Once again, it’s NFC to the rescue! Last year, Visa partnered with the NFL and the San Francisco 49ers to equip Levi’s Stadium, the site of Super Bowl 50, with approximately 700 NFC-enabled point-of-sale terminals, enabling cardholders at the game to swipe, tap or click to pay with a smartphone when purchasing food, drinks or merchandise. And over in England, Gemalto has introduced NFC bands at Saracens, the superstar rugby team, to help improve the fan experience on matchday.

The move in San Francisco was a big success and one of the many reasons Super Bowl 50 was a great experience for all the fans in attendance (apart from Carolina Panthers fans, who had to watch their team suffer at the hands of Von Miller).

So, there you have it – five ways NFC technology is helping improve the sporting world. What do you think? Are there more use cases for NFC in sports on the way? Let us know by tweeting to us @Gemalto, or leave a comment below.


No More Network Blind Spots, See Um, Secure Um

Category : Gigamon

East Coast summer nights of my childhood were thick with humidity, fireflies and unfortunately, merciless mosquitoes and biting midges. So, when a West Coast friend said she had a summertime no-see-um tale to tell, I was ready to commiserate.

My friend likes to camp – alone. Not in deep, dark, remote backcountry, but, you know, at drive-in campgrounds. Pull in, pitch a tent, camp – that’s her style. While not the most private, she likes the proximity to restrooms and even, people.

Before one adventure, she was gathering provisions at Costco when she saw a “no-see-um” tent for sale. “Well, this is exactly what I need,” she thought. No longer would she have to lower her “shades” or head to the restroom to change. She’d be free to undress in her tent, relax and fall asleep to the hum of an adjacent freeway.

Of course, we can all figure out how this story ended. After having enjoyed her newfound freedom for an evening, she returned the following morning from a visit to the loo only to realize the naked truth.

Like a Good Boy Scout, Are You Prepared?

While my friend’s false sense of security bordered on the ridiculous – okay, it was ridiculous – it speaks to the potential for misjudging cybersecurity readiness. Her problem was that she felt secure when she wasn’t – a blind spot of sorts that could have led to more than just awkward consequences.

In a way, the same holds true for enterprises that have bought innumerable security tools – perimeter firewalls, endpoint antivirus, IPSs – to keep prying eyes out. They, too, often have a false sense of security. Unlike my friend, it’s not that they don’t understand how these tools work; rather, it’s that they don’t understand that these tools cannot provide complete network protection.

There are simply too many bad guys and too little time to detect and prevent all cyberattacks. Not only is malware everywhere – for example, zero-day exploits and command-and-control infrastructures are available for purchase at a moment’s notice by anyone with a computer and the desire to wreak havoc – but with data flying across networks at increasing speeds and volumes, it’s more and more difficult for enterprises to do any intelligent analysis to uncover threats and prevent attacks from propagating across core systems.

Detecting compromises is hard. It requires monitoring a series of activities over time and security tools only have visibility into a certain set of activities – most cannot see and comprehend the entire kill chain. This incomplete view is more than problematic – it’s dangerous.

In fact, according to 67 percent of respondents to a new Vanson Bourne survey, “Hide and Seek: Cybersecurity vs. the Cloud,” network blind spots are a major obstacle to data protection. The survey, which polled IT and security decision-makers on network visibility and cloud security preparedness, also revealed that 43 percent of respondents lack complete visibility into all data traversing their networks and half lack adequate information to identify threats. By all counts, such data blindness could lead to serious security implications – not only within enterprise environments, but also in the cloud, where 56 percent of respondents are moving critical, proprietary corporate information and 47 percent are moving personally identifiable information.

See the Forest and the Trees

Sometimes we apply an available tool because it sounds like it’ll do the job – ahem, my dear friend and her no-see-um tent – but fully understanding the purpose and assessing the efficacy of your security tools isn’t a minor detail to be overlooked. Enterprises that have been buying more tools to address the security problem are beginning to question whether they are getting the right return on their investments, especially when they have no means to measure how secure they are. To further complicate matters, more tools often increase the complexity of security architectures, which can exacerbate the data blindness issue.

So, what can be done? Preventative solutions certainly shouldn't go away; they play a critical role in basic security hygiene and in protecting against known threats. But they must be augmented with solutions for better detection, prediction and response, in a way that doesn't create more blind spots. In other words, a new approach founded on greater visibility and control of network traffic, one that helps increase the speed and efficacy of existing security tools and allows enterprises to say, "Okay, this is where my investments are going, and these are the gaps I need to address to become more secure," or even to identify whether becoming more secure is possible at all.

If you’re unsure how secure your network is, maybe start with a few simple questions:

  • Can you see into all data across your network? Or does some data remain hidden due to silos between network and security operations teams?
  • Are your security tools able to scale to faster speeds and increased data volumes without diminishing their performance?
  • What about your cloud deployments – are they being used securely? Is there clear ownership of cloud security?


Author: Erin O’Malley


What to Look for in a Credible Unified Endpoint Management (UEM) Solution

Category : HP Security

Invest in a Unified Endpoint Management (UEM) solution that actually meets your device management needs. (No, not all UEM solutions are created equal.) This video outlines the capabilities to look for when you're ready to move forward with UEM.



Detecting Data Breaches, Why Understanding Database Types Matters

Category : Imperva

Different data characteristics and access patterns in different database systems lead to different ways of detecting suspicious data access, an indicator of potential data breaches. To accurately detect data access abuse, we first need to classify the database processing type: is it a transactional database (OLTP) or a data warehouse (OLAP)?

OLTP vs. OLAP – What’s the Difference?

Today, in the relational database world there are two types of systems: online transactional processing (OLTP) and online analytical processing (OLAP). Although they may look the same, they serve different purposes. OLTP systems power business applications; these are the classic systems that process data transactions. Queries against these databases are simple, short online transactions, and the data is up-to-date. Examples include retail sales, financial transaction and order-entry systems.

OLAP systems are used in data warehouse environments whose purpose is to analyze data efficiently and effectively. OLAP systems work with very large amounts of data and allow users to find trends, crunch numbers, and extract a ‘big picture’ from the data. OLAP systems are widely used for data mining and the data in them is historic. As OLAP’s number-crunching usually involves a large data set, the interactions with the database last longer. Furthermore, with OLAP databases it’s not possible to predict what the interactions (SQL queries) will look like beforehand.
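To make the contrast concrete, the snippets below sketch the kind of SQL each system typically sees (the table and column names are hypothetical, for illustration only):

```python
# Hypothetical table and column names; real workloads will differ.
oltp_query = """
UPDATE accounts SET balance = balance - 100 WHERE account_id = 42
"""  # short, predictable, touches one row of up-to-date data

olap_query = """
SELECT region, SUM(amount) AS total
FROM sales_history
WHERE sale_date BETWEEN '2016-01-01' AND '2016-12-31'
GROUP BY region
"""  # long-running, scans a year of historic data; shape varies per analyst
```

The OLTP statement is one of a small, fixed set of transactions issued by the application, while the OLAP aggregate is the kind of ad hoc, unpredictable number-crunching an analyst composes on the fly.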


Figure 1: OLAP and OLTP data flow

The different nature of OLTP and OLAP database systems leads to differences in users’ access patterns and variations in the characteristics of the data that is stored there.

Comparing Access Patterns

With OLTP we expect users to access the business data stored in the database through the application interface. Interactive (or human) users are not supposed to access the business application data directly through the database. One exception might be DBAs who maintain the database, but even then there is no real reason for a DBA to access business application data directly. It is more likely that DBAs will access only the system tables (which store the metadata of the data store).

With OLAP the situation is different. Many BI users and analysts regularly access the data in the database directly, not through the application interface, to produce reports and to analyze and manipulate the data.

The Imperva Defense Center worked with dozens of databases across Imperva enterprise customers to analyze the data access patterns for OLTP and OLAP databases over a four-week period. We used audit data collected by SecureSphere and insights gathered from CounterBreach. Figure 2 shows the average number of new interactive users who accessed these databases during the four-week period.


Figure 2: The number of new interactive users who accessed OLTP and OLAP databases over time.

As indicated in Figure 2, there were almost no new interactive (or human) users who accessed OLTP databases over time. However, this was not the situation for OLAP databases.

Comparing Data Characteristics

The data in OLTP systems is up-to-date. In most cases, the tables that hold the business application data are not deleted and repeatedly re-created – they’re stable.

On the other hand, the data saved in OLAP systems is historic. ETL (extract, transform, load) processes upload and manipulate data in the database periodically (hourly, daily or weekly). In many cases the data is uploaded to new tables each time; for example, each day's data may be loaded into a table whose name contains the date of the upload. This leads to many new tables in the database, including temporary tables that help manipulate the data and tables that are deleted over time.
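As a minimal sketch of why this convention inflates the table count, assume a hypothetical daily ETL job that writes each load into a date-stamped table:

```python
from datetime import date, timedelta

def daily_table_name(base, day):
    # Hypothetical ETL convention: each load lands in a fresh,
    # date-stamped table rather than updating a stable one.
    return f"{base}_{day:%Y%m%d}"

start = date(2017, 10, 1)
week_of_loads = [daily_table_name("sales_fact", start + timedelta(days=i))
                 for i in range(7)]
# A single week of daily loads already produces seven brand-new tables.
print(week_of_loads[0])  # sales_fact_20171001
```

An OLTP system, by contrast, would keep writing into the same stable `accounts` or `orders` tables week after week, so the count of new tables stays near zero.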

Again, the Imperva Defense Center analyzed the characteristics of data stored in OLTP and OLAP databases using Imperva enterprise customers' audit data collected by SecureSphere and insights gathered from CounterBreach. Figure 3 shows the average number of new business application tables accessed by interactive users over a four-week period. The average number of new business application tables in OLTP is very low, whereas in OLAP it is much higher.


Figure 3: The number of new business application tables in the database over time.

Incorporating OLTP and OLAP Differences to Improve Detection of Suspicious Data Access

Detecting potential data breaches in a relational database requires identifying suspicious activity in the database. To identify suspicious activity successfully (without missing attacks on the one hand or raising many false positives on the other), detection should be based on the story behind the database.

We need to ask ourselves: what is the purpose of the database? How do we expect interactive users to act in it? How do we expect applications to act in it? What can we tell about the data it holds? Answering these questions requires a deep understanding of databases: user types, data types and database types.

The latest release of Imperva CounterBreach adds further understanding of database types and factors them into its detection methods. Leveraging the Imperva Defense Center research on the behaviors of interactive users in OLTP and OLAP databases, CounterBreach uses machine learning to classify database types based on the access patterns of interactive users. The machine learning algorithm analyzes a number of different aspects: the number of business intelligence (BI) users and DBAs who access the database, which data those interactive users access, the amount of new business application data created in the database, and more.
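Imperva does not publish the model itself, but the idea can be sketched with a toy rule-based classifier over two of the features described above. The feature names and thresholds here are illustrative assumptions, not CounterBreach internals:

```python
def classify_database(new_interactive_users_per_week, new_app_tables_per_week,
                      user_threshold=1.0, table_threshold=5.0):
    """Toy heuristic: OLTP databases see almost no new interactive users
    and almost no new business application tables over time, while OLAP
    databases routinely see both (per Figures 2 and 3)."""
    olap_signals = 0
    if new_interactive_users_per_week > user_threshold:
        olap_signals += 1
    if new_app_tables_per_week > table_threshold:
        olap_signals += 1
    return "OLAP" if olap_signals >= 1 else "OLTP"

print(classify_database(0, 0))    # OLTP
print(classify_database(4, 20))   # OLAP
```

A production system would learn these boundaries from labeled audit data rather than hard-coding them, and would weigh many more features, but the shape of the decision is the same.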

With an understanding of the database type, CounterBreach determines the best method to detect suspicious activity. In databases that act like OLTP systems, it detects and alerts on any abnormal access by an interactive user to business application data.

In OLAP systems, where interactive users access business application data as part of their day-to-day work, CounterBreach won't alert on such behavior because it's legitimate. In these systems, it lets BI users do their jobs and relies on other indicators, such as an abnormal number of records exfiltrated from the database's business application tables, to detect data abuse. This keeps data-driven business processes functioning and reduces the number of false positives.
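One common way to implement such a volume indicator (a generic statistical sketch, not CounterBreach's actual algorithm) is to compare each day's record count against the user's historical baseline:

```python
from statistics import mean, stdev

def is_abnormal_volume(history, todays_records, z_threshold=3.0):
    """Flag a day whose record count sits far above the user's baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return todays_records > mu
    return (todays_records - mu) / sigma > z_threshold

history = [120, 150, 130, 140, 160]   # records fetched per day by one BI user
print(is_abnormal_volume(history, 155))   # False: within normal variation
print(is_abnormal_volume(history, 5000))  # True: possible bulk exfiltration
```

The BI analyst's routine reporting stays below the threshold, while a sudden bulk pull of the table stands out, which is exactly the behavior the OLAP detection path is after.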

Ongoing Research

Imperva data scientists continue to research and identify additional characteristics that distinguish OLTP and OLAP systems. These go beyond the access patterns of interactive users and the data stored in the database; they include the names of the tables stored in the database, the source applications used to access it, ETL processes, the diversity of operations in the database, the ratio of access between different entities, and much more. This ongoing research will further refine the detection accuracy needed to uncover potential data breaches.

Learn more about data breach detection. Read our paper on the top ten indicators of data abuse and understand how to identify insider threats.


Author: Shiri Margel


Internet Outages, Botnets… Just Another Day at the Office

Category : Forcepoint

2017 seems to have been a breakout year for cyber risk, and just when you’re telling yourself it can’t get any worse… well, it gets worse. As anyone monitoring the security press (or Twitter) will be aware, both the FBI and DHS have released information about campaigns targeting our critical infrastructure and about potential internet outages from the quickly growing ‘IoTroop’ IoT botnet. While neither revelation is much of a surprise (summary: bad people are targeting stuff that matters, and someone is growing a big botnet for reasons yet to be disclosed), that’s hardly a good Monday morning in the office.

While it’s good that we’re seeing sharing of cyber risk information, I have to ask whether these are warnings we can do much about in the short term. Yes, we can add specific detection for the botnet traffic; yes, we can detect IoCs for the latest round of people-centric attacks. However, neither does us much good in the long term, because attackers don’t stay static and simply say, ‘You got me!’ If a nation state has us in its crosshairs, I have to ask what concrete steps commercial entities can take that would make much of a difference, given the vast asymmetry they face in the cost to attack versus the cost to defend. Even if we were to disclose the “Who?”, “What?” and “Why?”, would that change the specific mitigations we need to put in place? There are steps we can take, but they are anything but quick, and they are not simple. That’s an important point, so I’ll reiterate. Not quick, because this requires a fundamental do-over in how we build protections; not simple, because we live in a world where defenses and threats co-evolve: attackers respond to us, and vice versa. Changing the technology of that game (but more importantly, its underlying economics) is something we have to do.

Having been an active member of the cybersecurity community for over 25 years, my takeaway is perhaps different than one might expect. Cyber represents a continuous risk not just to vulnerable sectors but, at the upper end, to our way of life. I am not arguing that the sky is falling, nor trying to sell fear or uncertainty (or doubt, to complete the thought), but we need to recognize the highly asymmetric threat environment in which we now live for what it is. This is not abstract… it is personal, and we’re all in it together. With a botnet, for example, your insecurity directly impacts my safety online, and vice versa. Once we recognize that, we have to make the investments to do something about it: something well thought out, not a shared system of liability where my only recourse is litigation.

From a security perspective, these joint warnings remind us that attackers will use any means necessary to accomplish their goals, ranging from simple distributed denial-of-service attacks using massive botnets to the specific targeting of high-value targets within an organization. As defenders, we need to do the basics well: patching, continuous monitoring and secure software development. In addition, we must recognize the criticality of focusing not just on the purely technological but also on the human. We cannot remain trapped in an arms race chasing the latest exploit or vulnerability; we must work toward a more holistic strategy that protects every end user in our organization. Building resilience into our systems must be our mantra as we go forward.


Author: Richard Ford


A Matrix Approach for Account Ranking and Prioritization

Category : Cyber-Ark

Throughout my six years helping KPMG clients with their Privileged Access Management (PAM) programs, there has rarely been a simple answer to two critical questions: exactly which privileged accounts in an environment should be integrated first (e.g., application, infrastructure or personal accounts), and exactly how each type of privileged account should be controlled. The ways an organization can control privileged accounts using a solution like CyberArk vary greatly (e.g., vaulting, password rotation, brokering, etc.).

A common approach to password management is to treat all vaulted credentials with the same level of control measures; this is typically a symptom of the lack of a risk-based approach to assigning criticality to accounts. Alternatively, we also see cases of wild inconsistency in the way passwords are managed, typically leaving it up to the individual platform owners to pick and choose the security controls that suit them. This is typically an indication of a lack of defined PAM standards that can be applied enterprise-wide. When developing strategies and roadmaps for KPMG clients, our teams apply an “Account Criticality Matrix” to help answer these questions. This matrix is designed to standardize the way we rate and weigh the criticality of a given account.  It includes a set of predefined criteria that we tailor to meet the unique needs of each organization. Example criteria in the Account Criticality Matrix include:

  • Number of individuals that have access to a given privileged credential
  • Frequency of account usage
  • Potential to access sensitive data
  • Scope of privilege across single/multiple systems or platforms
  • Control level granted

Based on the numerical scoring derived from the Account Criticality Matrix, we then begin to build a profile of what an organization would consider a “high-risk” account versus a “low-risk” account.  This profile helps on numerous fronts.  First, it allows for consideration of account types that typically would not be considered as true “privileged” accounts.  For example, many application or service accounts are inadvertently excluded from management in organizations due to a lack of understanding of enterprise privileged account definitions by the application owner.  In the absence of pre-defined account prioritization criteria, those owners are left to decide what constitutes a “privileged” account or not.  Many will opt for the latter without prescribed guidance.  The matrix will allow an organization to take any account type and provide a standardized metric to determine whether it meets the criteria to be integrated into CyberArk.
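As a minimal sketch of how such a matrix reduces to a standardized metric, consider the weighted scoring below. The weights, the 1-5 rating scale and the tier cut-offs are illustrative assumptions for this example, not KPMG's actual values:

```python
# Illustrative weights for the example criteria; a real matrix is
# tailored to each organization's needs.
CRITERIA_WEIGHTS = {
    "shared_users": 3,            # individuals with access to the credential
    "usage_frequency": 1,
    "sensitive_data_access": 4,
    "privilege_scope": 3,         # single system vs. multiple platforms
    "control_level": 2,
}

def criticality_score(ratings):
    """Weighted sum of 1-5 ratings across the matrix criteria."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

def risk_tier(score, high=45, medium=25):
    """Map a numeric score onto the organization's risk tiers."""
    return "high" if score >= high else "medium" if score >= medium else "low"

domain_admin = {"shared_users": 5, "usage_frequency": 4,
                "sensitive_data_access": 5, "privilege_scope": 5,
                "control_level": 5}
print(risk_tier(criticality_score(domain_admin)))  # high
```

Any account type, including an application or service account an owner might otherwise exclude, can be run through the same scoring, and the resulting tier then drives the control policy (for example, dual control and session recording for "high"-rated accounts).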

The second benefit is the standardization of account controls across the organization based on the calculated account criticality.  Depending on licensing and hardware limitations, recording sessions for all privileged accounts may not be feasible.  Based on a pre-defined policy, an organization could mandate that only “high”-rated accounts require dual control and PSM recording, while periodic password rotation is sufficient for “medium”-rated credentials.

Thirdly, combining knowledge of “high” severity accounts with implementation effort can inform the prioritization of the integration path.  When various stakeholders ask why the decision was made to start with default local accounts rather than their specialized application, you can point not only to the fact that those accounts rated high based on their user base, scope of privilege and access granted, but also to the fact that the implementation effort for those accounts was lowest.


Author: Art Chaisiriwatanasai