Category Archives: Cyber-Ark


Protecting Cross-Border Data Transfers For GDPR

Category : Cyber-Ark

Corporate legal counsels, technology providers, IT professionals – and anyone else paying attention to the General Data Protection Regulation (GDPR) – would undoubtedly agree that the requirements within the 99 Articles of the regulation present a laundry list of necessary changes many organizations will need to make to avoid non-compliance. The one we want to highlight in this blog calls for an adequate level of protection to be implemented for cross-border data transfers. Article 45, ‘Transfers on the basis of an adequacy decision’ specifically states:

“A transfer of personal data to a third country or an international organization may take place where the Commission has decided that the third country, a territory or one or more specified sectors within that third country, or the international organization in question ensures an adequate level of protection.”

This complicates things in the world of international commerce. Here in the United States, the Department of Commerce has nixed the U.S.-EU Safe Harbor Framework (following a decision by the Court of Justice of the European Union) and replaced it with a new framework, the EU-U.S. Privacy Shield. This new framework better aligns to the very detailed and specific requirements of GDPR, and it will allow companies within the United States and the European Union to successfully execute transatlantic data transfers.

Any country, governmental body or organization that turns a blind eye to this requirement will have its data transfers blocked under this legislation. More importantly, lacking an ‘adequate level of protection’ considerably increases the chances of suffering a personal data breach, which, as we all now know, carries severe financial and reputational consequences.

With CyberArk Privileged Account Security Solution v10, we’ve made significant enhancements that enable customers to better meet the requirements for storing session recordings involved in cross-border data transfers. Our customers now have the ability to securely store privileged session recordings on regional storage, as opposed to storing them in a Digital Vault that might be globally dispersed or, more likely, outside the European Union. This is especially important for monitored database sessions, where client data can be revealed as a consequence of a command executed by an administrator.

This change applies to both processor and controller requirements and benefits customers that have a need to lock down their session recordings and ensure they do not leave a specific region (see Figure 1). This new capability goes beyond the requirements of GDPR and equally applies to local secrecy acts such as the Singapore Banking Secrecy Act, which prohibits (without permission) the export of client data outside of the region.


Figure 1. CyberArk now provides the ability to store privileged session recordings on dedicated, regional-based external storage.

It’s important for organizations to only provide authorized users with access to these recordings, ensuring that any playback processes are consistent with the data isolation requirements. Additionally, it’s critical to protect the integrity of these privileged session recordings for digital forensics in case they are ever needed for a legal proceeding. To support the security, integrity and validity of these session recordings, the following capabilities are enforced in CyberArk Privileged Account Security Solution v10:

  • Secure Communication – The communication between the Privileged Session Manager, the storage devices and the CyberArk user interface for the recordings replay is performed via a secure protocol.
  • Managed Authorization – Only authorized users in the Vault will be able to access the session recordings through CyberArk systems.
  • Searchable Audit Records and Streamlined Video Replay – The actual location of the video is transparent to the authorized user (e.g. auditors and reviewers) and provides the exact same user experience for both vault-stored recordings and externally stored recordings.
  • Maintenance Users Protection – The CyberArk Privileged Account Security Solution will be used for authorizing and monitoring maintenance users’ access to the secure storage.

These enhancements show CyberArk’s dedication to helping organizations avoid non-compliance with GDPR. The CyberArk Privileged Account Security Solution can be critical for your organization to advance securely in an increasingly dynamic, competitive business environment. Be sure to visit our website for more information on how CyberArk solutions can help support your GDPR strategy today.




Golden SAML: Newly Discovered Attack Technique Forges Authentication to Cloud Apps

Category : Cyber-Ark

In this blog post, we introduce a new attack vector discovered by CyberArk Labs and dubbed “golden SAML.” The vector enables an attacker to create a golden SAML, which is basically a forged SAML “authentication object,” and authenticate to every service that uses the SAML 2.0 protocol as an SSO mechanism.

In a golden SAML attack, attackers can gain access to any application that supports SAML authentication (e.g. Azure, AWS, vSphere, etc.) with any privileges they desire and be any user on the targeted application (even one that is non-existent in the application in some cases).

We are releasing a new tool that implements this attack – shimit.

In a time when more and more enterprise infrastructure is ported to the cloud, the Active Directory (AD) is no longer the highest authority for authenticating and authorizing users. AD can now be part of something bigger – a federation.

A federation enables trust between otherwise unrelated environments, such as Microsoft AD, Azure, AWS and many others. This trust allows a user in an AD, for example, to enjoy SSO benefits across all the trusted environments in the federation. In a federation, it is no longer enough for an attacker to dominate the domain controller of his victim.

The golden SAML name may remind you of another notorious attack known as golden ticket, which was introduced by Benjamin Delpy who is known for his famous attack tool called Mimikatz. The name resemblance is intended, since the attack nature is rather similar. Golden SAML introduces to a federation the advantages that golden ticket offers in a Kerberos environment – from gaining any type of access to stealthily maintaining persistency.

SAML Explained

For those of you who aren’t familiar with the SAML 2.0 protocol, we’ll take a minute to explain how it works.

The SAML protocol, or Security Assertion Markup Language, is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. Beyond what its name suggests, SAML is each of the following:

  • An XML-based markup language (for assertions, etc.)
  • A set of XML-based protocol messages
  • A set of protocol message bindings
  • A set of profiles (utilizing all of the above)

The single most important use case that SAML addresses is web browser single sign-on (SSO). [Wikipedia]

Let’s take a look at figure 1 in order to understand how this protocol works.

Figure 1- SAML Authentication

  1. First the user tries to access an application (also known as the SP i.e. Service Provider), that might be an AWS console, vSphere web client, etc. Depending on the implementation, the client may go directly to the IdP first, and skip the first step in this diagram.
  2. The application then detects the IdP (i.e. Identity Provider, could be AD FS, Okta, etc.) to authenticate the user, generates a SAML AuthnRequest and redirects the client to the IdP.
  3. The IdP authenticates the user, creates a SAMLResponse and posts it to the SP via the user.
  4. The SP validates the SAMLResponse and logs the user in (the SP must have a trust relationship with the IdP). The user can now use the service.

SAML Response Structure

For a golden SAML attack, the part that interests us most is step 3, since this is the step we are going to replicate as an attacker. To do this correctly, let’s look at the message sent in that step – the SAMLResponse. The SAMLResponse object is what the IdP sends to the SP, and it is the data that makes the SP identify and authenticate the user (similar to a TGT generated by a KDC in Kerberos). The dynamic parameters of a SAML 2.0 SAMLResponse – the response and assertion IDs, timestamps, issuer, subject and attributes – are exactly the fields a forger controls.
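A minimal, hedged skeleton of that structure can be sketched and sanity-checked with Python’s standard library. The UPPER_CASE values are placeholders standing in for the dynamic parameters, not data from a real IdP:

```python
# Illustrative SAMLResponse skeleton. UPPER_CASE values are placeholders, not
# real data; a real response also carries an XML-DSig signature element.
import xml.etree.ElementTree as ET

SAML_RESPONSE_SKELETON = """\
<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                ID="RESPONSE_ID" Version="2.0"
                IssueInstant="2017-11-21T10:00:00Z" Destination="SP_ACS_URL">
  <saml:Issuer>IDP_ENTITY_ID</saml:Issuer>
  <saml:Assertion ID="ASSERTION_ID" Version="2.0" IssueInstant="2017-11-21T10:00:00Z">
    <saml:Issuer>IDP_ENTITY_ID</saml:Issuer>
    <!-- <ds:Signature> over the assertion, made with the token-signing key -->
    <saml:Subject>
      <saml:NameID>DOMAIN\\USERNAME</saml:NameID>
    </saml:Subject>
    <saml:Conditions NotBefore="2017-11-21T10:00:00Z"
                     NotOnOrAfter="2017-11-21T11:00:00Z"/>
    <saml:AttributeStatement>
      <saml:Attribute Name="ATTRIBUTE_NAME">
        <saml:AttributeValue>ATTRIBUTE_VALUE</saml:AttributeValue>
      </saml:Attribute>
    </saml:AttributeStatement>
  </saml:Assertion>
</samlp:Response>
"""

# Parse the skeleton to confirm it is well-formed XML.
root = ET.fromstring(SAML_RESPONSE_SKELETON)
assertion = root.find("{urn:oasis:names:tc:SAML:2.0:assertion}Assertion")
print(assertion is not None)
```

In a real response, the assertion carries an XML-DSig signature produced with the IdP’s token-signing key; reproducing that signature is exactly what a golden SAML forger does.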

Depending on the specific IdP implementation, the response assertion may be signed with the private key of the IdP (and optionally encrypted for the SP). This way, the SP can verify that the SAMLResponse was indeed created by the trusted IdP.

Similar to a golden ticket attack, if we have the key that signs the object which holds the user’s identity and permissions (KRBTGT for golden ticket and token-signing private key for golden SAML), we can then forge such an “authentication object” (TGT or SAMLResponse) and impersonate any user to gain unauthorized access to the SP. Roger Grimes defined a golden ticket attack back in 2014 not as a Kerberos tickets forging attack, but as a Kerberos Key Distribution Center (KDC) forging attack. Likewise, a golden SAML attack can also be defined as an IdP forging attack.

In this attack, an attacker can control every aspect of the SAMLResponse object (e.g. username, permission set, validity period and more). In addition, golden SAMLs have the following advantages:

  • They can be generated from practically anywhere. You don’t need to be part of the domain, federation or any other environment you’re dealing with
  • They are effective even when 2FA is enabled
  • The token-signing private key is not renewed automatically
  • Changing a user’s password won’t affect the generated SAML

AWS + AD FS + Golden SAML = ♥ (case study)

Let’s say you are an attacker. You have compromised your target’s domain, and you are now trying to figure out how to continue your hunt for the final goal. What’s next? One option that is now available for you is using a golden SAML to further compromise assets of your target.

Active Directory Federation Services (AD FS) is a Microsoft standards-based domain service that allows the secure sharing of identity information between trusted business partners (federation). It is basically a service in a domain that provides domain user identities to other service providers within a federation.

Assuming AWS trusts the domain which you’ve compromised (in a federation), you can then take advantage of this attack to gain practically any permissions in the cloud environment. To perform this attack, you’ll need the private key that signs the SAML objects (similar to the need for KRBTGT in a golden ticket). You don’t need domain admin access for this private key; you only need the AD FS user account.

Here’s a list of the requirements for performing a golden SAML attack:

  • Token-signing private key
  • IdP public certificate
  • IdP name
  • Role name (role to assume)
  • Domain\username
  • Role session name in AWS
  • Amazon account ID

Not all of these fields are mandatory; for the non-mandatory fields, you can enter whatever you like.

How do you get these requirements? For the private key, you’ll need access to the AD FS account, and from its personal certificate store you’ll need to export the private key (export can be done with tools like Mimikatz). For the other requirements, you can import the PowerShell snap-in Microsoft.Adfs.Powershell and use it as follows (you have to be running as the AD FS user):

ADFS Public Certificate

IdP Name

Role Name

Once we have what we need, we can jump straight into the attack. First, let’s check if we have any valid AWS credentials on our machine.

Unsurprisingly, we have no credentials, but that’s about to change. Now, let’s use shimit to generate and sign a SAMLResponse.

The operation of the tool is as follows:

Figure 2 – Golden SAML with shimit

  1. Generate an assertion matching the parameters provided by the user. In this example, we provided the username, Amazon account ID and the desired roles (the first one will be assumed).
  2. Sign the assertion with the private key file, also specified by the user.
  3. Open a connection to the SP and call a specific AWS API, AssumeRoleWithSAML.
  4. Get an access key and a session token from AWS STS (the service that supplies temporary credentials for federated users).
  5. Apply this session to the command-line environment (using aws-cli environment variables) for the user to use with the AWS CLI.
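The STS exchange in the last steps can be sketched in a few lines of Python. The ARNs and response bytes below are hypothetical placeholders; the real shimit tool automates all of this:

```python
# Sketch of the STS exchange: base64-encode the signed SAMLResponse and hand
# it to AssumeRoleWithSAML. The ARNs and the response bytes are hypothetical.
import base64

signed_saml_response = b"<samlp:Response>...signed assertion...</samlp:Response>"
saml_assertion_b64 = base64.b64encode(signed_saml_response).decode()

params = {
    "RoleArn": "arn:aws:iam::123456789012:role/ADFS-Admin",          # role to assume
    "PrincipalArn": "arn:aws:iam::123456789012:saml-provider/ADFS",  # trusted IdP
    "SAMLAssertion": saml_assertion_b64,
}

# With boto3 installed and network access, the actual call would be roughly:
#   import boto3
#   creds = boto3.client("sts").assume_role_with_saml(**params)["Credentials"]
# 'creds' would hold AccessKeyId, SecretAccessKey and SessionToken, which are
# then exported as aws-cli environment variables.
print(sorted(params))
```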

Performing a golden SAML attack in this environment has a limitation. Even though we can generate a SAMLResponse that will be valid for any time period we choose (using the –SamlValidity flag), AWS specifically checks whether the response was generated more than five minutes ago, and if so, it won’t authenticate the user. This check is performed on the server, in addition to the normal check that the response has not expired.
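That freshness check can be modeled as follows. This is only a sketch: AWS’s actual server-side logic is not public, and the field checked is assumed here to be the assertion’s IssueInstant timestamp:

```python
# Model of the server-side freshness test: accept the response only if its
# IssueInstant is at most five minutes old (in addition to the NotOnOrAfter
# expiry check).
from datetime import datetime, timedelta, timezone

def is_fresh(issue_instant: str, now: datetime, window_minutes: int = 5) -> bool:
    issued = datetime.fromisoformat(issue_instant.replace("Z", "+00:00"))
    return timedelta(0) <= now - issued <= timedelta(minutes=window_minutes)

now = datetime(2017, 11, 21, 10, 4, tzinfo=timezone.utc)
print(is_fresh("2017-11-21T10:00:00Z", now))  # issued 4 minutes ago -> True
print(is_fresh("2017-11-21T09:50:00Z", now))  # issued 14 minutes ago -> False
```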


This attack doesn’t rely on a vulnerability in SAML 2.0. It’s not a vulnerability in AWS/ADFS, nor in any other service or identity provider.

Golden ticket is not treated as a vulnerability because an attacker must already have domain admin access in order to perform it. That’s why it’s not being addressed by the relevant vendors. The fact of the matter is, attackers are still able to gain this type of access, and they still use golden tickets to maintain stealthy persistence in their target’s domain, sometimes for years.

Golden SAML is rather similar. It’s not a vulnerability per se, but it gives attackers the ability to gain unauthorized access to any service in a federation (assuming it uses SAML, of course) with any privileges and to stay persistent in this environment in a stealthy manner.

As for defenders, we know that if this attack is performed correctly, it will be extremely difficult to detect in your network. Moreover, according to the ‘assume breach’ paradigm, attackers will probably target the most valuable assets in the organization (DC, AD FS or any other IdP). That’s why we recommend better monitoring and management of access to the AD FS account (for the environment mentioned here) and, if possible, automatically rolling over the token-signing private key periodically, making things more difficult for attackers.

In addition, implementing an endpoint security solution, focused around privilege management, like CyberArk’s Endpoint Privilege Manager, will be extremely beneficial in blocking attackers from getting their hands on important assets like the token-signing certificate in the first place.





CyberArk Unveils V10 – Simplicity, Automation, Risk Reduction

Category : Cyber-Ark

The iPhone is not the only v10 to be released this year! As a product leader, I am not sure if splitting the release of the iPhone 8 from the release of the iPhone X was an example of Apple’s marketing genius or not, but as an Android user, I’ll observe the results from a safe distance. I will save my thoughts on the merits of the Android over the Apple for another day.

What I am very excited about, however, is the latest release of the CyberArk Privileged Account Security Solution. Like Apple, we’ve achieved a milestone of sorts with a v10 of our own (and unlike Apple, we actually delivered a v9!). As a nearly 12-year veteran of CyberArk, I can honestly say this release is one of my proudest moments, right up there with the global recognition and debut of CyberArk on the NASDAQ in September of 2014 and the earlier introduction of Threat Analytics capabilities. What really makes this release stand out for me is our unwavering focus on two big themes: simplicity and automation.

As the #1 market share leader in privileged account security, we push ourselves to always do better by our customers, and with this v10 release, we have delivered!  After spending countless hours engaging with and soliciting input from our customers and partners—not to mention organizing extensive usability and beta testing—we are unveiling a brand new, modernized user interface (UI) that is elegant, clean and simple. Here is a sample of what one of our customers, a senior consultant at a global financial services organization had to say:

 “The new user interface and account management features will dramatically facilitate adoption and ease-of administration, providing a better experience for our end users, and saving our IT staff valuable time by simplifying day to day management tasks.”

The new v10 UI offers a simplified view of account management that reduces the time operations teams spend on common tasks by 10x and cuts the time it takes auditors to review session recordings by 5x. All of this means simpler and faster deployment, allowing operations and audit teams to spend their time on value-add endeavors!

On the automation front, we’ve fully embraced that we now live in an API-first digital world. To that end, our new and improved REST APIs make it even easier to integrate CyberArk solutions with existing security, operations and DevOps tools. A good example of this is a new integration with AWS CloudWatch and Auto Scaling that automates onboarding, enabling security teams to save time and reduce the risk of unmanaged SSH keys.

Along with our big focus on simplicity and automation, we continue to stay true to our corporate mission to reduce risk associated with privileged accounts wherever they exist, whether on-premises, in the cloud or in DevOps workflows. The AWS integration is a great example of this, since manually provisioning SSH keys just doesn’t work with modern, elastic scaling infrastructure.

On the endpoint, where many damaging attacks start, we have further enhanced our Endpoint Privilege Manager to deliver a new cloud-based Application Risk Analysis Service, which enables timely, well-informed privilege and application control policy decisions. We’ve also extended support to the Mac, a platform that is increasingly adopted in the enterprise.

Unlike the release of iPhone, you don’t have to set up a tent outside of a big glass store for the next big release from CyberArk. You can learn all about it here and sign up for our upcoming webinar covering all of the great details from this latest release. Stay tuned for upcoming blog posts that will provide additional details on what we’ve delivered, since there are a lot of great new capabilities in this release that I did not have the real estate to cover here.




“Alarming” lack of privileged account security awareness in DevOps

Category : Cyber-Ark

DevOps environments are facing an ‘alarming’ lack of security planning and potentially serious gaps in privileged account and secrets awareness, a new report from CyberArk warns.

DevOps and security professionals around the world, including Australia, have major knowledge gaps about where privileged accounts and secrets exist across their own IT infrastructure, according to CyberArk’s Advanced Global Threat Landscape survey.

The survey offered a range of options for where secrets and privileged accounts exist (PCs/laptops, microservices, cloud environments and containers); however, 99% of respondents were unable to identify all of the places.

84% were unaware that privileged accounts are found in source code repositories such as GitHub; 80% were unaware that privileges also exist in microservices; 78% were unaware of privileges in cloud environments; and 76% were unaware of privileged accounts in the CI/CD tools used by DevOps teams.

DevOps remains a priority for enterprises according to Gartner, however 75% of survey respondents admitted they have no privileged account strategy (PAS). In Australia, the percentage is even higher (82%). This creates significant weak points that attackers can target, CyberArk says.

37% of DevOps professionals also believe that compromised DevOps environments are among their organisation’s biggest security vulnerabilities.

CyberArk’s VP of DevOps Security, Elizabeth Lawler, says that as DevOps uptake increases, a growing number of privileged account credentials are being created and shared in interconnected business ecosystems.

“Even though dedicated technology exists, with few organisations managing and securing secrets, they become prime targets for attacks. In the hands of an external attacker or malicious insider, compromised credentials and secrets can allow attackers to take full control of an organisation’s entire IT infrastructure.”

78% of Australian security teams say this is a problem for security and DevOps teams, compared to 65% worldwide.

“So it’s worrying that the rush to achieve IT and business advantages through DevOps is outpacing awareness of an expanded – and unmanaged – privileged attack surface.”

The report suggests that 23% of Australian DevOps teams are now taking things into their own hands by building their own security solutions.

Lawler says that building custom security solutions works to a point, but it is not scalable.

“Jenkins to Puppet to Chef, there are no common standards between different tools, which means you must figure out every single tool to know how to secure it. DevOps really needs its own security stack, and security teams must bring something to the table here. They can provide a systemised approach that helps the DevOps teams maintain security while accelerating application delivery and boosting productivity,” Lawler says.

74% of respondents said they use cloud vendor’s built-in security, which means privileged account security isn’t fully integrated into DevOps processes in new environments.

Lawler says the survey findings demonstrate the lack of understanding around security:

“DevOps and security tools and practices must fuse in order to effectively protect privileged information. Building awareness and enabling collaboration between DevOps and security teams is the first step to help businesses build a scalable security platform that is constantly improved as new iterations of tools are developed, tested and released,” Lawler concludes.


Author: Sara Barker


7 Types of Privileged Accounts You Should Know

Category : Cyber-Ark

Privileged accounts exist in many forms across an enterprise environment, and they pose significant security risks if not protected, managed and monitored. The types of privileged accounts typically found across an enterprise environment include:

  1. Local Administrative Accounts are non-personal accounts which provide administrative access to the local host or instance only. Local admin accounts are routinely used by the IT staff to perform maintenance on workstations, servers, network devices, databases, mainframes etc. Often, they have the same password across an entire platform or organization for ease of use. This shared password across thousands of hosts makes for a soft target that advanced threats routinely exploit.
  2. Privileged User Accounts are named credentials which have been granted administrative privileges on one or more systems. This is typically one of the most common forms of privileged account access granted on an enterprise network, allowing users to have administrative rights on, for example, their local desktops or across the systems they manage. Often these accounts have unique and complex passwords, and the power they wield across managed systems makes it necessary to continuously monitor their use.
  3. Domain Administrative Accounts have privileged administrative access across all workstations and servers within the domain. While these accounts are few in number, they provide the most extensive and robust access across the network. With complete control over all domain controllers and the ability to modify the membership of every administrative account within the domain, a compromise of these credentials is often a worst case scenario for any organization.
  4. Emergency Accounts provide unprivileged users with administrative access to secure systems in the case of an emergency and are sometimes referred to as ‘firecall’ or ‘breakglass’ accounts. While access to these accounts typically requires managerial approval for security reasons, it is usually a manual process that is inefficient and lacks any auditability.
  5. Service Accounts can be privileged local or domain accounts that are used by an application or service to interact with the operating system. In some cases, these service accounts have domain administrative privileges depending on the requirements of the application they are being used for. Local service accounts can interact with a variety of Windows components which makes coordinating password changes difficult.
  6. Active Directory or Domain Service Accounts present an even greater password-change challenge, as changes require coordination across multiple systems. This challenge often leads to a common practice of rarely changing service account passwords, which represents a significant risk across an enterprise.
  7. Application Accounts are accounts used by applications to access databases, run batch jobs or scripts, or provide access to other applications. These privileged accounts usually have broad access to underlying company information that resides in applications and databases. Passwords for these accounts are often embedded and stored in unencrypted text files, a vulnerability that is replicated across multiple servers to provide greater fault tolerance for applications. This vulnerability represents a significant risk to an organization because the applications often host the exact data that APTs are targeting.

For information on how to protect privileged accounts, please read the rest of our brief guide which also highlights best practices: “The Three Phases of Securing Privileged Accounts.”




A Matrix Approach for Account Ranking and Prioritization

Category : Cyber-Ark

Throughout my six years of helping KPMG clients with their Privileged Access Management (PAM) programs, there has rarely been a simple answer to the critical questions of exactly which privileged accounts in an environment should be integrated first (e.g., application/infrastructure/personal accounts) and exactly how each type of privileged account should be controlled. The ways an organization can control privileged accounts using a solution like CyberArk vary greatly (e.g., vaulting, password rotation, brokering, etc.).

A common approach to password management is to treat all vaulted credentials with the same level of control measures; this is typically a symptom of the lack of a risk-based approach to assigning criticality to accounts. Alternatively, we also see wild inconsistencies in the way passwords are managed, with individual platform owners left to pick and choose the security controls that suit them. This is typically an indication of a lack of defined PAM standards that can be applied enterprise-wide. When developing strategies and roadmaps for KPMG clients, our teams apply an “Account Criticality Matrix” to help answer these questions. This matrix is designed to help standardize the way we rate and weigh the criticality of a given account. It includes a set of predefined criteria that we tailor to meet the unique needs of each organization. Example criteria in the Account Criticality Matrix include:

  • Number of individuals that have access to a given privileged credential
  • Frequency of account usage
  • Potential to access sensitive data
  • Scope of privilege across single/multiple systems or platforms
  • Control level granted

Based on the numerical scoring derived from the Account Criticality Matrix, we then begin to build a profile of what an organization would consider a “high-risk” account versus a “low-risk” account.  This profile helps on numerous fronts.  First, it allows for consideration of account types that typically would not be considered as true “privileged” accounts.  For example, many application or service accounts are inadvertently excluded from management in organizations due to a lack of understanding of enterprise privileged account definitions by the application owner.  In the absence of pre-defined account prioritization criteria, those owners are left to decide what constitutes a “privileged” account or not.  Many will opt for the latter without prescribed guidance.  The matrix will allow an organization to take any account type and provide a standardized metric to determine whether it meets the criteria to be integrated into CyberArk.
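As an illustration, a scoring model along these lines might look as follows. The weights and thresholds here are invented for the example and are not KPMG’s actual matrix values:

```python
# Toy Account Criticality Matrix: each criterion is scored 1-5, weighted, and
# summed; the weights and thresholds below are invented for illustration.
CRITERIA_WEIGHTS = {
    "users_with_access": 2,      # individuals with access to the credential
    "usage_frequency": 1,
    "sensitive_data_access": 3,
    "privilege_scope": 3,        # single system vs. multiple platforms
    "control_level": 2,
}

def criticality(scores: dict) -> str:
    total = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
    if total >= 40:
        return "high"
    return "medium" if total >= 25 else "low"

# A shared local admin account: many users, broad scope, full control.
print(criticality({"users_with_access": 5, "usage_frequency": 4,
                   "sensitive_data_access": 3, "privilege_scope": 4,
                   "control_level": 5}))  # -> high
```

A standardized score like this is what lets any account type, including application and service accounts, be measured against the same integration criteria.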

The second benefit is the standardization of account controls across the organization based on the calculated account criticality. Depending on licensing and hardware limitations, recording all privileged account sessions may not be feasible. Based on a pre-defined policy, an organization could mandate that only “high”-rated accounts require dual control and PSM recording, while periodic password rotation is sufficient for “medium”-rated credentials.

Thirdly, combining knowledge of “high” severity accounts with implementation effort can help prioritize the integration path. When various stakeholders ask why the decision was made to start with default local accounts rather than their specialized application, you can point not only to the fact that those accounts rated high based on user base, scope of privilege and access granted, but also to the fact that the implementation effort was lowest for those accounts.


Author: Art Chaisiriwatanasai


BoundHook: Exception-Based, Kernel-Controlled Usermode Hooking

Category : Cyber-Ark



In this article, we’ll present a new hooking technique that we have found during our research work.

Hooking techniques give you control over the way an operating system or a piece of software behaves. Some of the software that utilizes hooks include: application security solutions, system utilities, tools for programming (e.g. interception, debugging, extending software, etc.), malicious software (e.g. rootkits) and many others.

Please note, this is neither an elevation nor an exploitation technique. This technique can be used in a post-exploitation scenario in which the attacker has control over the asset. Since malicious kernel code (rootkits) often seeks to establish persistence in unfriendly territory, stealth technology plays a fundamental role.

Technical Description

The idea behind this BoundHook technique is to cause an exception in a very specific location in a user-mode context and catch the exception to gain control over the thread execution.

To do this, we can use the BOUND instruction, which is part of Intel MPX (Memory Protection Extensions). This instruction is designed to increase software security (along with compiler, runtime library and OS support) by checking pointer references whose normal compile-time intentions are maliciously exploited at runtime due to memory corruption vulnerabilities.

In a nutshell, the BOUND instruction checks an array index against bounds and raises software interrupt 5 if the test fails (32-bit: nt!KiTrap05, 64-bit: nt!KiBoundFault).

Why not just do a comparison, you ask? Because Intel designed this new instruction to generate a fault that will enable the OS to examine the bound check failure.

The instruction’s syntax is as follows –

BOUND r16, m16&16 – Checks if r16 (array index) is within bounds specified by m16&16

BOUND r32, m32&32 – Checks if r32 (array index) is within bounds specified by m32&32
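The semantics of the check can be modeled in a few lines. This is only a sketch: on real hardware an out-of-range index raises software interrupt 5 (#BR) rather than a language-level exception:

```python
# Model of the BOUND check: an index is tested against a lower/upper bounds
# pair, and an out-of-range index triggers a fault (interrupt 5 on x86,
# modeled here as a Python exception).
def bound(index: int, lower: int, upper: int) -> None:
    if not (lower <= index <= upper):
        raise IndexError("#BR: bound range exceeded (interrupt 5)")

bound(3, 0, 9)       # in bounds: falls through, like the real instruction
try:
    bound(12, 0, 9)  # out of bounds: the OS trap handler would run here
except IndexError as exc:
    print(exc)
```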

When a bound fault occurs, the trap handler calls nt!KiHandleBound and then executes registered bounds-exception callback routines.

A kernel-mode driver or a shellcode payload running in kernel mode can register a callback routine for bound faults using nt!KeRegisterBoundCallback. This function is not declared in the WDK headers, so a pointer to it has to be obtained dynamically.

The callback routine takes no parameters and returns a BOUND_CALLBACK_STATUS value, which tells the kernel whether the fault was handled or should be dispatched normally.

After registering the bound-fault callback, the kernel-mode code should obtain the base address of the user-mode DLL (or any other PE image) and calculate the address of the function it is about to hook.

Obtaining a function address is a simple task and can be accomplished in various ways, for example by parsing the PE header. Please note, parsing an image that is loaded into a specific process should be done in the process’s context or using the appropriate APIs.

Once our code has calculated the function address, it would be nice to simply start writing to it. However, because that code resides in read/execute-only memory, we cannot.

Windows memory protection relies on the following factors:

  • The R/W flag in PDEs and PTEs (read only = 0, read/write = 1).
  • The U/S flag in PDEs and PTEs (supervisor mode = 0, user mode = 1).
  • The WP flag in the CR0 register (bit 16).

Now, we have a few options. We can write to that address in a way that triggers copy-on-write (COW) protection, or, for maximum stealth, we can write directly to the function address in one of two ways: by manipulating the CR0 register using __readcr0() and __writecr0(), or by allocating our own memory descriptor list (MDL) to describe the memory pages and adjusting permissions on the MDL with a bitwise OR of the MDL_MAPPED_TO_SYSTEM_VA flag. The MDL approach is much more “stealthy,” since it is invisible by design to the current PatchGuard implementation.

First, here’s how we can use the CR0 approach. The CR0 register description, taken from the Intel 64 and IA-32 Architectures Software Developer’s Manual reads:

WP Write Protect (bit 16 of CR0) — When set, inhibits supervisor-level procedures from writing into read-only pages; when clear, allows supervisor-level procedures to write into read-only pages (regardless of the U/S bit setting; see Section 4.1.3 and Section 4.6).

Here is an example of CR0 register manipulation:

Writing directly to the DLL’s COW page lets us hook every process on the system that uses this DLL, since the write affects the shared copy-on-write origin page.

Triggering a bound fault is easy: any BOUND instruction whose index register falls outside the supplied bounds will do.
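For example, a 32-bit assembly sketch along these lines would raise the fault (BOUND does not exist in 64-bit mode, where opcode 0x62 was reclaimed; the pushed value fabricates a lower bound greater than the index, so the check fails):

```asm
xor   ecx, ecx           ; index = 0
push  1                  ; bounds pair with lower bound = 1 > index ...
bound cx, [esp]          ; ... so the check fails and interrupt 5 (#BR) fires
```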

Thus, the kernel-mode code that performs the hooking should write similar machine code at the location where it wants to take control of the thread’s execution.

For example, if we want to hook KERNELBASE!CreateFileW, we can inject these opcodes to the function’s prologue:

UCHAR opcodes[5]= {0x36, 0x66, 0x62, 0x0C, 0x24};

This decodes to BOUND CX, DWORD PTR SS:[ESP]. In this specific case, we assume that CX will be zero (in real code this should be verified for every hooked function) and that the value at the top of the stack will be greater than zero, so the check fails (this is a proof of concept, not a released tool).

Now, after writing this to the KERNELBASE!CreateFileW prologue, whenever a user-mode thread calls this function, our kernel-mode callback will take control of the thread.

Doing this gives us several advantages:

  • The hooked page will still be COW, thus anti-malware solutions and researchers doing manual analysis won’t be able to notice that the page has been modified.
  • Most AVs are unaware of this method and probably aren’t addressing it (especially since the page is still COW).
  • A user-mode debugger will not be able to catch this hook. A regular inline hook makes the hooked routine jump to other user-mode code, whereas BoundHook traps the execution flow inside the kernel’s bound-fault handler.
  • This method is invisible to most PatchGuard (PG) protection mechanisms. The MDL approach to bypass the COW mechanism is not detectable by PG today by design. As for the CR0 modification approach, although the CR0 is protected by PG, since it is modified for a very short period of time, the chance of being caught by PG is minimal.

Proof of concept: the call stack of a hooked thread.

We know that BoundHook does not meet Microsoft’s bar to be considered a vulnerability, since machine administrator rights must already be compromised. Microsoft’s response upon receiving responsible disclosure of a similar issue from CyberArk (GhostHook) was as follows:

“We have completed our investigation of this issue and have found that it is not a vulnerability but a technique to avoid detection once the machine is already compromised. Because it’s a post-exploitation technique it doesn’t meet the bar for servicing in a security update but we will consider fixing it in a future version of Windows.”

In conclusion, this method will bring new capabilities to both software security vendors and malware writers.




Privileged Task Automation and Management With CyberArk

Category : Cyber-Ark

CyberArk’s Product Marketing Manager Corey O’Connor explains how to reduce the risk of accidental and intentional damage to critical systems through privileged task automation and management.



The Unique Challenges of Protecting Cloud Workloads

Category : Cyber-Ark

Addressing Cloud Vulnerabilities

The cloud offers organizations tremendous opportunities, and this session is designed to help IT and security leaders understand and address the unique challenges related to securing applications in the public cloud. With a focus on securing the public cloud, we’ll address an organization’s typical cloud journey, including hybrid, all-in cloud and leveraging DevOps for increased agility.

We’ll start with an overview of the “shared responsibility” model that public cloud vendors use to clarify who’s responsible for the security of what – typically above and below the hypervisor or equivalent layer. Next, we’ll address specific use cases and customer examples which highlight some of the key challenges and solutions for securing applications, data, virtual infrastructure and cloud workloads.

Key Points Include:
  • Understanding of potential vulnerabilities in cloud workloads that attackers have exploited
  • Examples of use cases and solutions for protecting cloud workloads
  • Examples of how customers have addressed vulnerabilities at each stage of their cloud journey

Register now!

12 October, 2017 at 2:00 p.m. EST


Get Your Enterprise Ready for General Data Protection Regulation (GDPR)

Category : Cyber-Ark

The General Data Protection Regulation (GDPR) is said to be one of the most important changes to data privacy regulation in the past two decades. The primary purpose of GDPR is to reinforce the personal data rights of all individuals residing within the European Union, and to harmonize the way member states enforce data protection across the region. The fact of the matter is, most people today do not trust businesses with their personal data – and honestly, who can blame them?

Significant personal data breaches continue to dominate headlines. Many organizations are not taking security seriously enough, with some even admitting they are well aware of existing security gaps but deliberately look the other way to keep business costs down and profitability up. As we’ve seen over the past few months, the media has highlighted both the financial and reputational implications of being caught in non-compliance – and for good reason.

GDPR will affect organizations globally. If an organization is found to be negligent, it will face fines of up to €20 million or 4 percent of total global turnover (whichever is greater). Moreover, there are equally serious reputational risks, such as significant brand damage and the loss of both consumer trust and loyalty. Gartner predicts that by the end of 2018, more than 50 percent of companies affected by the GDPR will not be in full compliance with its requirements.1 This begs a very important question: is your enterprise really ready?

What to Know and Understand

Understand where personal data resides within your organization. Personal data is defined as any data subject’s name, address, location data, online identifier, health information, income, cultural profile and more. Enterprises should map their data flows in a prioritized manner, starting from the top down with whatever is considered high risk and with the business processes that involve gathering, processing and protecting sensitive personal data. CyberArk solutions will help an enterprise lock down the access both human and non-human users have to critical systems and applications, but before you can do that, you first need to identify exactly where the data resides within your organization. Additionally, any personal data that no longer serves a legitimate business purpose needs to be deleted. Backups and duplicate copies of personal data files might land you in the hot seat if you don’t manage your data subjects’ ‘right to erasure’ correctly.

Get a handle on your supply chain. One important change in GDPR that was absent from its predecessor (the Data Protection Directive) is the new direct legal obligations for data processors. This change exposes processors to litigation and damage claims directly from data subjects, whereas before, data processors only needed to concern themselves with the contractual agreements they had in place with their data controllers. Once GDPR comes into force, both controllers and processors will be required to prove they were not responsible in the event of a breach. You might have the most comprehensive GDPR strategy in place, with all the necessary tools and components to protect your personal data – but substantial risk still resides within your third-party vendor supply chain. There needs to be a greater degree of transparency across the supply chain, with a shared responsibility for securing personal data.

Additional Considerations

Given that GDPR is a complex and far-reaching regulation that cannot be addressed overnight, it’s best not to boil the ocean. Take a pragmatic approach. One of the first and most critical steps for enterprise-level organizations is to partner with an advisory consultant. Most consultancies offer GDPR-specific workshops, detailed assessments, regular testing and actionable guidance. They’ll work with your team to put in place the personnel, processes and technology that align with your optimal strategy for maintaining compliance with the regulation.

I previously discussed five ways CyberArk can help you address GDPR, highlighting some of the key articles within the regulation and how CyberArk can help mitigate risk against non-compliance.  It’s well understood that complying with GDPR cannot be achieved with a single security vendor – it’s a team effort. CyberArk customers also have access to our C3Alliance Technology Program, which provides a wide range of integrations with security solution providers from around the world. These technology integrations enable an organization to realize a much more comprehensive GDPR solution, as well as bring more value to your existing security investments.

Take the first step and download the Security Checklist for Securing Personal Data to get your enterprise ready for GDPR. Visit the CyberArk GDPR solution web page for more information on how privileged account security plays a critical part in safeguarding sensitive personal data.

Don’t get caught in the crosshairs of GDPR non-compliance. Get your enterprise ready before time runs out.

1 Gartner Press Release: “Gartner Says Organizations Are Unprepared for the 2018 European Data Protection Regulation,” May 3, 2017.