Author Archives: AdminDCS


8 Months to Go – Are You Getting the Most Out of What You Already Own?

Category : Palo Alto

As we move closer to the May 2018 deadline for GDPR, more and more businesses are focused on ensuring their ability to meet the requirements set out by the regulation. All too often, people assume this requires additional investments, which at some level will be true; but you should also be challenging your organisation as to how you get more from what you already have.

Equally, I hear many looking to data loss prevention (DLP) and encryption tools as primary requirements for protecting data. However, having worked with both for many years during previous stages of my career, I would highlight that these, like every capability, come with their own implementation challenges. Often, adjacent technologies can help reduce the scope of these challenges and the associated costs. Covering all of these would make for a mammoth blog, so I'm only going to pick a couple, just to start your creative thinking and help you on your GDPR journey.

Here, I’m going to focus on how your firewall can help reduce the effort and cost of better securing the personally identifiable information (PII) data lifecycle. How do you validate if you are getting the most from what you have already, and where could you join up security processes that today may be owned and implemented by different teams in the business?

The all too common first thought with PII data lifecycle management is to try and classify all your data – a task that has the potential to take until the end of time – as every day we generate new data or new instances of existing data. Often, much of this focus is on how to bring clarity to the volume of unstructured data.

You need a change of mindset, which is not to try and define where all your PII data is, but instead to define – and then enforce – where your PII data can and should be. This reduces the scope of where you need to then apply DLP controls.

Most organisations already have insight on PII data in known business processes. In practice, this could include customer relationship management (CRM) tools, threat intelligence that may include some form of PII data, and HR systems. However, there is often still a significant gap between where structured data is and where it is thought to be. Being able to truly define where it is will allow you to start identifying the points where it becomes unstructured data.

If you are using good Layer 7 firewalls in your business, you can use these to help map your real-time application usage. This will allow you to see which apps are talking with which others, those that are communicating outside the business, and which users are doing this and at what volume.
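
By way of illustration only, here is a minimal Python sketch of the kind of mapping exercise this enables, assuming you have exported firewall traffic logs to CSV; the column names (src_user, app, dst_zone, bytes) are hypothetical stand-ins rather than any vendor's actual export schema.

    import csv
    from collections import defaultdict

    # Aggregate who talks to what, over which application, and at what volume.
    # The column names below are illustrative; adjust them to your firewall's export.
    flows = defaultdict(int)

    with open("traffic_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            key = (row["src_user"], row["app"], row["dst_zone"])
            flows[key] += int(row["bytes"])

    # Largest flows first: a quick view of where data (potentially PII) actually moves.
    for (user, app, dst_zone), volume in sorted(flows.items(), key=lambda kv: -kv[1]):
        print(f"{user:<20} {app:<15} -> {dst_zone:<15} {volume:>12} bytes")

Even a crude report like this makes it obvious which users and applications are moving data into zones where PII should never reside.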

The core goal here is to think in terms of the Zero Trust model: can you segment your organisation's PII data, reducing where it can move into unstructured form and, more importantly, reducing the scope of what you need to apply security to? Likewise, such visibility can help you define at which points you need encryption versus the points where the data should never reside or be accessed in the first place.

Now pivot to managing the PII itself: a good Layer 7 firewall will typically include some level of content inspection. If you do have a DLP solution in place that tags data, your firewall should also be able to leverage the tags inserted into the data. You may wonder why that’s of value; well, your firewall can typically inspect inside encrypted traffic, which may give access to data the DLP solution could not otherwise analyse. Also, depending on your configuration, you may be able to leverage your firewall to give you more enforcement points if you have used it to segment your organisation’s system traffic.
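
If your DLP tool does embed classification tags in content, the decision logic at an inspection point can be very simple, as the Python sketch below suggests; the tag strings and actions are purely illustrative assumptions, not any particular DLP product's or firewall's format.

    # Hypothetical classification markers; real DLP products embed their own tag
    # formats in document metadata, and a firewall would surface them differently.
    DLP_TAG_ACTIONS = {
        "CLASSIFICATION:PII": "block",
        "CLASSIFICATION:CONFIDENTIAL": "alert",
    }

    def enforcement_action(decrypted_payload: bytes) -> str:
        """Return 'block', 'alert' or 'allow' based on embedded classification tags."""
        text = decrypted_payload.decode("utf-8", errors="ignore")
        for tag, action in DLP_TAG_ACTIONS.items():
            if tag in text:
                return action
        return "allow"

    print(enforcement_action(b"invoice data ... CLASSIFICATION:PII ..."))  # -> block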

If you don’t have a DLP tool in place but are looking to enforce your PII dataflows, once again, a Layer 7 firewall will likely have some content inspection. While not as rich as that provided by DLP, many will allow you to look at regular expressions (words, common structure forms like banking card data, etc.) in common file types that will give you a light version that may suffice, in some cases, to both map and enforce data usage.
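
To make that concrete, a light-touch check of this kind might look like the Python sketch below: a regular expression picks out candidate payment card numbers and a Luhn checksum filters the obvious false positives. It illustrates the technique only and is no substitute for a firewall's or DLP tool's own pattern library.

    import re

    # Candidate 13-19 digit card numbers, allowing spaces or dashes between digits.
    CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

    def luhn_ok(digits: str) -> bool:
        """Return True if the digit string passes the Luhn checksum."""
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_card_numbers(text: str) -> list:
        hits = []
        for match in CARD_CANDIDATE.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if 13 <= len(digits) <= 19 and luhn_ok(digits):
                hits.append(digits)
        return hits

    print(find_card_numbers("order ref 4111 1111 1111 1111 shipped"))  # standard test number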

So, what are the takeaways here?

Take anything to the Nth degree and it can solve a problem. DLP and encryption are key to managing the PII data lifecycle. However, they can be expensive from both a Capex and Opex perspective. You can use other tools and processes to reduce dependence on them.

GDPR is a rare opportunity to take a step back. It’s amazing to see how many organisations invest in state-of-the-art technology, such as a next-generation firewall, that can do all these things, but then still use it like their 20-year-old port and protocol firewall that works at Layer 3.

Before you spend any more of your valuable budget, challenge yourself and your organisation on what you already have. Ensure you know the capabilities at the core, as well as the additional components. Map them out and then consider how they could streamline your processes to reduce the scope and effort. In this instance, looking at how you zone your traffic or go to a full Zero Trust model is not a specific GDPR callout, but it does align to the notion of what is state-of-the-art best practice and, more importantly, what could reduce where your PII data proliferates. This means less to then secure, and less risk to the organisation.

 Source: https://researchcenter.paloaltonetworks.com/2017/10/cso-gdpr-8-months-go-getting-already/
Author: 


InsightOps – 2 minute overview

Category : Rapid7

A two-minute overview of Rapid7 InsightOps—your modern solution for infrastructure monitoring and asset interrogation. Learn more at http://r-7.co/2sQtINT.

 



5 Keys to Quick and Effective Identity Verification Service Deployment

Category : Gigamon

ID fraud is a critical issue for MNOs (Mobile Network Operators); there are approximately 200 types of fraud, and 35% of all mobile fraud comes from subscriptions. It’s an issue that cannot be ignored; the cost is too great for many MNOs to bear. Furthermore, in addition to damaging profits, it damages consumers as well, thanks to the inhibitive effect fraud has on innovation. How can we innovate successfully if we are continually forced to divert significant funds and resources towards mitigating fraudulent activity?

As we’ve discussed in a previous post, there are three overarching reasons to care about the problem:

  • Revenue: the total annual cost of identity fraud globally is €40 billion
  • Regulation: financial services on mobile are growing. MNOs must meet KYC regulations or face heavy fines
  • Reputation: identity fraud victims will abandon networks they no longer trust to keep them secure

But how can we counteract all this fraud? The answer lies in the deployment of trusted and tested identity verification services that can perform effective checks in real time. These solutions are available and are flexible enough to meet a wide range of needs – they can provide identity document verification (to check authenticity), customer authentication (to check the holder is the correct owner) through advanced biometric checks, risk assessment (which checks a holder against control lists), ID verification reports (for audits) and automatic form filling (to speed up enrolment and limit manual input errors).

With all of this in mind, MNOs will of course want to know what the keys to success will be. Can they be confident that it’ll all work? See below for the five key factors that will affect the success and effectiveness of a roll out.

  1. A phased and systematic approach

Phasing implementation ensures the effectiveness of the solution is well tested and perfected before it’s fully initiated. With this approach, teams can draw on best practices and lessons learned, rather than migrating all stores at the same time, which can pose problems. These first stages are essential when trying to understand, analyze and document the dynamics of identity fraud on a small scale, before expanding it across all stores.

This phased and systematic approach also requires anticipation of new regulations which might be introduced during deployment; of course, this is easier said than done. It is essential though, if you want to ensure ID checks can be extended to all use cases (including enrolment for specific value-added services) as well as purchase and renewal of prepaid and postpaid SIMs. As a result of all this, MNOs will ensure they meet current legal requirements and will be prepared for the introduction of more.

  2. Strong feedback

Feedback is crucial and shouldn’t be underestimated. Store managers can share best practice techniques whenever possible. With profitability as a collective main objective, any solution that cuts or at least reduces ID fraud and related costs should be welcomed with open arms. As soon as the benefit of the ID Verification solution is realized, it will then be discussed at length internally, encouraging strong adoption across the board.

  3. A user-centric approach

When it comes to acceptance, we must keep things as simple and convenient as possible for all employees and customers. This means in-store staff will be able to focus on customer care rather than on admin.

It can be something as simple as automated form filling that provides convenience for the customer and clerk, as it speeds up enrolment and avoids needless input errors.

And if the company can prove it is handling its customers’ details securely while streamlining interaction, it will be able to build a deeper and more trusted customer relationship.

  4. Integrating with legacy infrastructures

The best identity verification services are designed to have a minimal impact on existing infrastructures. They plug seamlessly into existing IT systems and can be used (with or without scanners) on mobile devices such as smartphones and tablets. This easy and flexible integration into existing infrastructure ensures a quick deployment. In addition, adaptable reporting allows easy integration into existing back-end systems.

  5. Addressing MNOs’ acquisition strategies

On top of regular internet and mobile services, MNOs can also offer more value-added services now, such as transport ticketing and banking and payment services. For example, our own identity verification services from Gemalto offer a unique and consistent way to cover all those services at the same time, helping streamline sales processes both in-store and remotely.

So, there you have it – the five key factors for successful ID verification deployment.

Source: https://blog.gemalto.com/mobile/2017/10/17/5-keys-quick-effective-identity-verification-service-deployment/

Author: Didier Benkoel-Adechy



Why We Need to Think Differently about IoT Security

Category : Gigamon

Breach fatigue is a real issue today. As individual consumers and IT professionals, we risk getting de-sensitized to breach alerts and notifications given just how widespread they have become. While this is a real issue, we cannot simply let our guard down or accept the current state – especially as I believe the volume and scale of today’s breaches and their associated risks will perhaps pale in comparison to what’s to come in the internet of things (IoT) world.

It is one thing to deal with loss of information, data and privacy, as has been happening in the world of digital data. As serious as that is, the IoT world is the world of connected “things” that we rely on daily – the brakes in your car, the IV pumps alongside each hospital bed, the furnace in your home, the water filtration system that supplies water to your community – but also take for granted simply because they work without us having to worry about them. We rarely stop to think about what would happen if … and yet, with everything coming online, the real question is not if, but when. Therein lies the big challenge ahead of us.

Again, breaches and cyberattacks in the digital world are attacks on data and information. By contrast, cyberattacks in the IoT world are attacks on flesh, blood and steel – attacks that can be life-threatening. For example, ransomware that locks out access to your data takes on a whole different risk and urgency level when it is threatening to pollute your water filtration system. Compounding this is the fact that we live in a world where everything is now becoming connected, perhaps even to the point of getting ludicrous. From connected forks to connected diapers, everything is now coming online. This poses a serious challenge and an extremely difficult problem in terms of containing the cyberrisk. The reasons are the following:

  1. The manufacturers of these connected “things” in many cases are not thinking about the security of these connected things and often lack the expertise to do this well. In fact, in many cases, the components and modules used for connectivity are simply leveraged from other industries, thereby propagating the risk carried by those components from one industry to another. Worse still, manufacturers may not be willing to bear the cost of adding in security since the focus of many of these “connected things” is on their functionality, not on the ability to securely connect them.
  2. Consumers of those very products are not asking or willing in many cases to pay for the additional security. Worse still, they do not know how to evaluate the security posture of these connected things or what questions to ask. This is another big problem not just at the individual consumer level, but also at the enterprise level. As an example, in the healthcare space, when making purchasing decisions on drug infusion pumps, hospitals tend to make the decision on functionality, price and certain regulatory requirements. Rarely does the information security (InfoSec) team get involved to evaluate their security posture. It is a completely different buying trajectory. In the past, when these products did not have a communication interface, that may have been fine. However, today with almost all equipment in hospitals – and in many other industries – getting a communications interface, this creates major security challenges.
  3. Software developers for connected devices come from diverse backgrounds and geographies. There is little standardization or consensus on incorporating secure coding practices into the heart of any software development, engineering course or module across the globe. In fact, any coursework on security tends to be a separate module that, in many cases, is optional in many courses and curriculums. Consequently, many developers globally today have no notion of how to build secure applications. The result is a continual proliferation of software that has been written with little to no regard to its exploitability and is seeping into the world of connected things.

These are all significant and vexing challenges, with neither simple fixes nor a common understanding or agreement on the problem space itself. I won’t claim to have a solution to all of them either, but in a subsequent blog, I will outline some thoughts on how one could begin approaching this. In the meantime, I think the risk and rhetoric around cyber breaches associated with the world of connected things could perhaps take on an entirely new dimension.

Source: https://blog.gigamon.com/2017/10/15/need-think-differently-iot-security/

Author: Shehzad Merchant



COBOL to the core

Category : HP Security

A COBOL Context

Micro Focus has evolved to become a much larger organization. At the heart of that organization sits COBOL technology. In the recent press publication, “Why New CEO Will Keep COBOL a Key Focus of Micro Focus”, Chris Hsu, CEO of Micro Focus, explains why this technology is so significant both to Micro Focus and to our customer community.

As outlined by Micro Focus’ executive chairman Kevin Loosemore, the ethos driving Micro Focus is that their “customers […] can maximize the value of existing IT investments and adopt new technologies — essentially bridging the old and new.”

The Micro Focus COBOL history is a perfect illustration of customers continuing to derive value and future innovation from previous IT investments. “Forty years ago, Micro Focus had COBOL, predominately mainframe COBOL, and helped in the development of COBOL applications,” Hsu said. “Today, COBOL is still one of the largest assets in the portfolio.”

The COBOL secret?

COBOL’s popularity is actually no secret at all. It doesn’t receive the same fanfare as other contemporary technology; it quietly goes about running the global economy, supporting large-scale enterprise systems across many major sectors and industries. Various sources reinforce the ubiquity of COBOL – over 90% of the Fortune 100, the vast majority of major banks and insurers, and large footprints across retail, healthcare, government, automotive and other sectors. Hsu comments, “Mission-critical applications in COBOL still run most of the major at-scale transaction systems, such as credit-card processing [and] large travel logistics”.

Its status as a valued computer language, in a diverse technology market, has persisted. One respected measurement, the TIOBE index, shows COBOL at number 23 as of October 2017. More significantly, it shows COBOL present in the top 30 since 1987, one of only three languages that can make that claim over that period.

What’s so good about COBOL?

COBOL can be traced back to the pioneer Grace Hopper in the late 1950s and has evolved over the decades thanks to care and attention from Micro Focus (and others). Over the years it has developed a reputation and staying power, largely thanks to five key characteristics. We have blogged about those strengths previously, but it is significant how much of that truth remains.

Foresight – Ensuring enterprise applications meet tomorrow’s needs today

As a modern language, COBOL supports all contemporary deployment architectures, leading-edge technology and composite applications. It will integrate with Java, C++ and C#, deploy to cloud, mobile, .NET and JVM, and run across over 50 market-leading platforms. Micro Focus invests tens of millions of dollars each year so our customers have a simple path to future innovation.

Heritage – Five decades of heritage, thousands of organizations, billions of lines of value

New applications often mean delivering business value through new channels. Using the business logic built into existing COBOL applications provides a springboard for accelerated delivery of IT services. Furthermore, other apps and systems can easily access COBOL logic and data through APIs and integration points

Portability – COBOL: the original write once, run anywhere technology

Micro Focus COBOL technology enables the same application to run unchanged across many platforms. This portability means COBOL developers can focus on building application value rather than on the nuances of the operating system

Fitness-for-purpose – Engineered for building enterprise-class business applications

Today’s enterprise applications must offer robustness, strong data manipulation, accuracy, speed and accessibility. Micro Focus COBOL products offer numerical arithmetic accuracy to 38 digits, strong and rapid data manipulation and SORT capability, with a proven record of thousands of live deployments

Readability – Ease of use means developers can focus on business

COBOL is simple to understand, read and code. Other language syntax is, by comparison, opaque and unintuitive. COBOL is far cheaper to maintain as a result. COBOL products work using standard IDEs, putting COBOL in a familiar, productive environment

What has changed?

Within a few years, the IT world has changed immeasurably – Blockchain, AI, IoT and mobile devices, along with the increasing ‘Digitization of everything’. Meanwhile, Java came of age, the Mainframe turned 50, and Linux turned 25. Core business systems need to modernize for the digital age. This is driving the appetite for modern tooling to help transform core COBOL systems.

Micro Focus thinks change and growth are the norm. The COBOL franchise is literally three times the size it was back in 2001, Chris Hsu said. “This has to do with the fact that [Micro Focus] continue to make the COBOL applications accessible on newer platforms,” he added. “While customers are moving some of their apps to public cloud, a lot of their business-critical apps are remaining on-premise,” Hsu said, “and the data is being spread across everything. What our software does is manage and simplify the complexity that customers now have to manage across a set of deployment models, from mainframe to public cloud.”

It could be argued that in Enterprise IT, the only constant is change. Indeed that’s exactly what we have argued before.

Challenges Ahead

Upholding and developing COBOL’s reputation is a Micro Focus cultural objective – and the facts are on our side. Hsu says, “Micro Focus has been around for 40 years. That COBOL software is unbelievably efficient and relevant today”. At the October 2017 Gartner Symposium, the keynote address predicted that 90% of all of today’s applications will still be in use in 2023. Valuable systems endure, and COBOL systems are among them. It’s hard to argue against that.

Source: https://blog.microfocus.com/micro-focus-cobol-to-the-core/

Author: Derek Britton



Ransomware Attacks on MySQL and MongoDB

Category : Imperva

Ransomware is arguably one of the most vicious types of attack cyber security experts are dealing with today. The impact ransomware attacks can have on an organization is huge and costly. A ransomware payment alone does not reflect the total expense of an attack—the more significant costs come from downtime, data recovery and partial or total business paralysis. Following the recent NotPetya ransomware attacks, Maersk estimated their losses at $200-$300 million, while FedEx estimated theirs at $300 million. Needless to say, ransomware-related losses seem to be growing in size.

It is well known that typical ransomware encrypts files—but what about ransomware targeted at databases? We’ve previously written about it, but database ransomware continues to be less talked about even though it introduces a potentially larger risk since an organization’s data and core applications rely on the data in its databases.

In this post we’ll explain how database ransomware attacks work and provide analysis of two database ransomware attacks recently monitored by our systems: one on MySQL and another on NoSQL (MongoDB).

Methods Used to Attack Databases with Ransomware

There are three primary methods used to attack databases with the goal of corrupting or tampering with data:

1) SQL/NoSQL – inside attack

Given that access to the database has already been obtained (whether through brute force, a compromised DBA account or even a malicious insider who already has access), an attacker can drop, insert or update data and hence modify it. This can be done with a few simple SQL transactions or NoSQL commands.

2) SQL/NoSQL – external attack

A web app vulnerability, like SQL injection or NoSQL injection, allows attackers to execute any SQL statement they wish. Although we’ve already seen ransomware attacking web apps, we haven’t yet seen this method targeting databases in the wild, but it’s likely to happen.

Another method for external attackers is to target databases with public IP. This can be easily done with online services like Shodan.

3) Encrypting the database file

The database file is where the database schema and data are stored. This type of attack is exactly the same as traditional ransomware attacks that target files. The only caveat (from the ransomware’s point of view) is that it must terminate the database process before encrypting, as that process holds the database file open, making it unmodifiable by other processes while in use.

Analysis of Database Ransomware Attacks in the Wild

Let’s take a look at two SQL/NoSQL transaction-based attacks that were recently monitored by our systems.

MySQL

The attacker successfully gained access to the databases by brute-forcing user/password combinations. The next step was “show databases”; each of the enumerated databases was then deleted with the “drop database” statement.

It is important to note that database monitoring and enforcement systems cannot rely on cumulative suspicious activity per connection (stream). In this attack, after every SQL statement the attacker’s client logged out before issuing the next one, so deleting a 10-table database would have produced 11 sequenced connections (an extra one for listing the tables). Likewise, the “Follow TCP Stream” feature in Wireshark will show one malicious activity at a time, not the entire attack sequence.
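
As a rough illustration of the correlation this calls for, the Python sketch below groups audited statements by client IP rather than by connection and flags an enumerate-then-drop sequence inside a short time window. The event format (dicts with ip, ts and sql fields) is a hypothetical stand-in for whatever your audit or database activity monitoring feed actually provides.

    from collections import defaultdict

    WINDOW_SECONDS = 300  # correlate activity per source IP, not per connection

    def flag_enumerate_then_drop(events):
        """events: iterable of dicts with 'ip', 'ts' (epoch seconds) and 'sql' keys."""
        by_ip = defaultdict(list)
        for e in events:
            by_ip[e["ip"]].append(e)

        suspicious = []
        for ip, evs in by_ip.items():
            evs.sort(key=lambda e: e["ts"])
            for i, e in enumerate(evs):
                if not e["sql"].lower().startswith("show databases"):
                    continue
                # Any DROP DATABASE from the same IP shortly after the enumeration?
                drops = [x for x in evs[i + 1:]
                         if x["ts"] - e["ts"] <= WINDOW_SECONDS
                         and x["sql"].lower().startswith("drop database")]
                if drops:
                    suspicious.append((ip, e, drops))
        return suspicious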

Figures 1-3 show how the attacker listed the databases and dropped one of them.


Figure 1: The attack lists the databases


Figure 2: The attacker ends the connection before proceeding to the next phase


Figure 3: The attacker deletes a database

After disposing of the data in this database, the attacker created a table named “Readme” and left the ransom note there (Figures 4 and 5).


Figure 4: Creating a “Readme” table


Figure 5: Inserting the ransomware note that explains to the victim what happened and how to pay

And this is how it looks in Imperva SecureSphere database activity monitoring (Figure 6):

Figure 6: SecureSphere audit screen shows the entire attack stack

The ransom note details (as described in Figure 5):

– eMail: cru3lty@safe-mail.net
– BitCoin: 1By1QF7dy9x1EDBdaqvMVzw47Z4JZhocVh
– Reference: https://localbitcoins.com
– Description: Your DataBase is downloaded and backed up on our secured servers. To recover your lost data: Send 0.2 BTC to our BitCoin Address and Contact us by eMail with your MySQL server IP Address and a Proof of Payment. Any eMail without your MySQL server IP Address and a Proof of Payment together will be ignored. You are welcome.

Note: with this attack the attacker didn’t even bother to read the data before deleting it.

It appears this group is changing its bitcoin address every few weeks. The above bitcoin address was used in an attack that took place three weeks ago, while our systems observed a new bitcoin payment address just a few days ago: 1G5tfypKqHGDs8WsYe1HR5JxiwffRzUUas (see Figure 7).


Figure 7: New bitcoin address for MySQL ransomware monitored by Imperva SecureSphere

MongoDB

MongoDB is a NoSQL database, but the attack’s logic is very much the same. Login was easier for the attacker this time, as no authentication was required: access control is not enabled by default on MongoDB, so the entrance ticket was just knowing the IP and the (well-known) port. According to Shodan, there are roughly 20,000 MongoDB instances with public IPs and no authentication – around 40% of all public-facing MongoDBs.
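
A quick way to check whether one of your own MongoDB instances is in that exposed state is sketched below with the standard pymongo driver: if an unauthenticated client can list databases, access control is effectively off. The host is a placeholder, and this should of course only be run against systems you are authorized to test.

    from pymongo import MongoClient
    from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

    def is_open_mongodb(host: str, port: int = 27017) -> bool:
        """Return True if an unauthenticated client can enumerate databases."""
        client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
        try:
            dbs = client.list_database_names()  # requires a privilege when auth is enforced
            print(f"{host}:{port} is open; databases: {dbs}")
            return True
        except OperationFailure:              # authentication/authorization enforced
            return False
        except ServerSelectionTimeoutError:   # unreachable or filtered
            return False

    is_open_mongodb("127.0.0.1")  # placeholder host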

Figures 8 and 9 show where the attacker listed the databases and deleted one of them.


Figure 8: The attacker lists the databases


Figure 9: The attacker deletes one of the databases

In order to let the victim know about the attack (and how to pay), the attacker created a “Warning” database with a “Readme” inside. This is the JSON generated with MongoDB’s native audit…


Figure 10: Creating the Readme document to store the ransom note

And here’s the message itself…


Figure 11: Writing the ransom note and bitcoin account

The ransom note details (as described in Figure 11):

– eMail: cru3lty@safe-mail.net
– BitCoin: 1Ptza47PgMtFMA6fZpLNzacb1EPkWDAv6n
– Solution: Your DataBase is downloaded and backed up on our secured servers. To recover your lost data: Send 0.2 BTC to our BitCoin Address and Contact us by eMail with your MongoDB server IP Address and a Proof of Payment. Any eMail without your MongoDB server IP Address and a Proof of Payment together will be ignored. You are welcome!

Although this is a different bitcoin (BTC) address than the MySQL attack, note the attacker’s contact info – it’s the same group as the MySQL attack and also the top group mentioned in this article on 26K victims of MongoDB attacks. Our systems also indicated both attacks originated from the same IP (China).

To Pay or Not to Pay?

At the time of writing, there were two payments to the MySQL account (none for the latest attack) and three payments to the MongoDB account. A total of 1 BTC, which is $4,800.

Imperva doesn’t suggest customers pay the ransom (although that is a dilemma when no backup is in place), and with these specific attacks we’d highly recommend not paying, even without a backup. In both of these recorded and audited attacks, the attacker did not even read the data before disposing of it: the databases were listed and immediately dropped without being backed up, so restoring the data is impossible (for the attacker).

Takeaways

Enforcing behavior-based policies is effective at detecting these kinds of attacks – you can identify brute force attacks, login attempts with known database user dictionaries, abnormal behavior of an application user or SQL audit profiler, and so on. But here are a few items you can implement right away for quick security wins:

  • Make sure your database cannot be accessed from the internet. Usually there is no real need to expose a database; only the web app server and a jump server for the DBAs should have access to the database’s isolated network (VPN/VPC).
  • Make sure firewall rules are in place, whitelisting approved IPs only
  • Have audit enabled (using a database activity monitoring solution or even native audit)
  • Alert on failed logins (for brute force attempts), preferably with some minimal threshold – a simple version of this is sketched after this list
  • Take regular backups
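
As an illustration of the minimal threshold alerting mentioned above, here is a small Python sketch that counts failed logins per source IP in a sliding window. The threshold, window and event format are assumptions; a real implementation would hook into your database audit stream or SIEM.

    from collections import defaultdict, deque

    THRESHOLD = 10        # this many failed logins ...
    WINDOW_SECONDS = 60   # ... within this window triggers an alert

    failed = defaultdict(deque)  # source IP -> timestamps of recent failures

    def record_failed_login(ip: str, ts: float) -> bool:
        """Record a failed login; return True if the IP has crossed the alert threshold."""
        q = failed[ip]
        q.append(ts)
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= THRESHOLD:
            print(f"ALERT: {len(q)} failed logins from {ip} in the last {WINDOW_SECONDS}s")
            return True
        return False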

Source: https://www.imperva.com/blog/2017/10/ransomware-attacks-on-mysql-and-mongodb/?utm_source=linkedin&utm_medium=organic-social&utm_content=database-ransomware&utm_campaign=2017-Q4-linkedin-awareness

Author: Elad Erez



Privileged Task Automation and Management With CyberArk

Category : Cyber-Ark

CyberArk’s Product Marketing Manager Corey O’Connor explains how to reduce the risk of accidental and intentional damage to critical systems through privileged task automation and management.

Source: https://www.cyberark.com/resource/privileged-task-automation-management-cyberark/



Example-driven Insecurity Illustrates Need for WAF

Category : F5

Learning online is big. Especially for those who self-identify as a developer. If you take a peek at Stack Overflow’s annual developer survey (in which they get tens of thousands of responses) you’ll find a good portion of developers that are not formally trained:

  • Among current professional developers globally, 76.5% of respondents said they had a bachelor’s degree or higher, such as a Master’s degree or equivalent.
  • 20.9% said they had majored in other fields such as business, the social sciences, natural sciences, non-computer engineering, or the arts.
  • Of current professional developers, 32% said their formal education was not very important or not important at all to their career success. This is not entirely surprising given that 90% of developers overall consider themselves at least somewhat self-taught: a formal degree is only one aspect of their education, and so much of their practical day-to-day work depends on their company’s individual tech stack decisions.

Note the highlighted portion from the survey results. I could write a thesis on why this is true, but suffice to say that when I was studying for my bachelor’s, I wrote in Pascal, C++, and LISP. My first real dev job required C/C++, so I was good there. But later I was required to learn Java. And SQL. I didn’t go back to school to do that. I turned to books and help files and whatever other documentation I could get my hands on. Self-taught is the norm whether you’re formally educated or not, because technology changes and professionals don’t have the time to go back to school just to learn a new language or framework.

This is not uncommon at all, for any of us, I suspect. We don’t go back to school to learn a new CLI or API. We don’t sign up for a new degree just to learn Python or Node.js. We turn to books and content on the Internet, to communities, and we rely heavily on “example code.”

[Image: ways devs teach themselves]

We still rely on blogs and documentation, not just from our own engineers and architects, but other folks, too. Because signing up for a Ph.D. now isn’t really going to help learn me* the ins and outs of the Express framework or JQuery.

It’s no surprise then that network engineers and operations (who, being the party of the first part of the second wave of DevOps, shall henceforth be known as NetOps) are also likely to turn to the same types of materials to obtain those skills they need to be proficient with the tools and technologies required. That’s scripting languages and APIs, for those just tuning in. And they, too, will no doubt copy and paste their hearts out as they become familiar with the language and systems beginning to automate the production pipeline.

And so we come to the reason I write today. Example code.

There’s a lot of it. And it’s good code, don’t walk away thinking I am unappreciative or don’t value example code. It’s an invaluable resource for anyone trying to learn new languages and APIs. What I am going to growl about is that there’s a disconnect between the example code and security that needs to be addressed. Because as we’re teaching new folks to code, we should also be instilling in them at least an awareness of security, rather than blatantly ignoring it.

I say this because app security is not – repeat NOT – optional. I could throw stat after stat after stat, but I hope at this point I’m preaching to the choir. App security is not optional, and it is important to promulgate that attitude until it’s viewed as part and parcel of development. Not just apps, mind you, but the scripts and systems driving automation at the fingertips of DevOps and NetOps.

I present as the source of my angst this example.

[Image: example code that violates Security Rule Zero]

The code itself is beautiful. Really. Well formatted, nice spacing. Readable. I love this code. Except the part that completely violates Security Rule Zero.

THOU SHALT NOT TRUST USER INPUT. EVER.

I’m disappointed that there’s not even a head nod to the need to sanitize the input. Not in the comments nor in the article’s text. The code just passes on “username” to another function with nary a concern that it might contain malicious content.
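
By way of contrast, here is a minimal Python sketch – my own illustration, not the code from the example in question – of the kind of head nod I’d like to see: validate the username against an allowlist pattern and keep it out of query strings by using parameterized queries.

    import re
    import sqlite3

    USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")  # allowlist, not a blocklist

    def lookup_user(conn: sqlite3.Connection, username: str):
        # Security Rule Zero: never trust user input.
        if not USERNAME_RE.fullmatch(username):
            raise ValueError("invalid username")
        # Parameterized query: the driver handles quoting, not string concatenation.
        cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
        return cur.fetchone()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    print(lookup_user(conn, "alice"))
    try:
        lookup_user(conn, "alice'; DROP TABLE users; --")
    except ValueError as err:
        print("rejected:", err)

None of this makes a WAF unnecessary, but it is the sort of routine mention that example code could carry without obscuring the point it is trying to teach.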

But Lori, obviously this code is meant to illustrate implementation of something that isn’t designed to actually go into production. It’s not a risk.

That is not the point. The point is that if we continue to teach folks to code we ought to at least make an attempt to teach them to do it securely. To mention it as routinely as one points out to developers new to C/C++ that if you don’t allocate memory to a pointer before accessing it, it’s going to crash.

I could fill blog after blog with examples of how security and the SDLC is given lip-service but when it comes down to brass-tacks and teaching folks to code, it’s suddenly alone in a corner with an SEP (somebody else’s problem) field around it.

This is just another reason why web application firewalls are a critical component to any app security strategy. Organizations need a fire break between user input and the apps that blindly accept it as legitimate to avoid becoming the latest victim of a lengthy list of app security holes.

Because as much as we like to talk about securing code, when we actually teach it to others we don’t walk the walk. We need to be more aware of this lack of attention to security – even in example code, because that’s where developers (and increasingly NetOps) learn – but until we start doing it, we need security solutions like WAF to fill in the gaps left by insecure code.
* Or English, apparently. Oh come on, I do that on purpose. Because sometimes it’s fun to say it wrong.

Source: https://f5.com/about-us/blog/articles/example-driven-insecurity-illustrates-need-for-waf-27704?sf119697594=1

Author: LORI MACVITTIE



Devising a Suitable End State of Your CTI Program

Category : FireEye

The shift to an intelligence-led security program can seem daunting. When implementing Cyber Threat Intelligence (CTI) capabilities, there may be a degree of uncertainty across the organization. We’ve seen this happen many times with client teams who initially were not cyber security savvy; however, after the adjustment period, when CTI is fully integrated into their technology and business processes, we continuously see that customers are satisfied with the results.

While managing this shift is challenging, it is not insurmountable. To be successful, it’s important to have a vision for the end state of your program. This vision will help to plot the planned shift, define its true value, and identify opportunities afforded by those who carry out implementation.

When defining a program’s vision, it is important to cover the following four high-level areas:

  • Mission & Strategy: Define a clear mission that enables communications and justifies go-forward action items. Ultimately, focusing on the enhanced ability to manage risk within the organization using a requirements-based intelligence approach is crucial. Establishing the expected resulting capabilities ensures the end-state business objectives, goals, and outcomes are clearly identified and agreed upon.
  • Implementation Roadmap: Employ a clear game plan that addresses the changes in people, processes, and technologies. A smart roadmap provides guidance on order of events and scale of effort required to execute properly. This roadmap will also enable communication of budgetary requirements to senior leadership over the course of the program’s buildout.
  • Conceptual Organizational Design: Construct an end-state organizational design aligned with the mission, approved by executives, and agreed to by peers. This will ease the creation and integration of new teams and transition of any existing ones. While the actual end state may play out differently, the buy-in achieved at the onset of your program evolution will keep your major players moving in the right direction.
  • Metrics: Decipher a key set of metrics that will be used to evaluate the success of your program. This will be critical when determining whether or not the end state is a success, and will also enable you to easily identify wins as the program begins to take shape. Metrics should evaluate the individuals responsible for carrying out the mission, the intelligence sources, the technology supporting the program, and the program’s overall health. The true value of intelligence can be complex to assess; however, the proper level of granularity can help point out whether that value is being delivered, and where any breakdowns may be occurring.

All said, the success of an operational transformation is truly grounded in the strategic legwork done before execution begins. Proper planning ensures that key stakeholders and senior leaders are in agreement with respect to the direction of the overall security operations, as well as the expected value provided. This in turn will motivate executives and other key stakeholders to help shepherd the program through its pending shifts, and into a position where everyone in the organization will see its true potential.

Visit our Cyber Threat Intelligence Services homepage for more information on how Mandiant can help your organization improve its threat intelligence capabilities.

Source: https://www.fireeye.com/blog/products-and-services/2017/10/devising-a-suitable-end-state-of-your-cti-program.html

Author: Jeff Compton, Jeff Berg



Forcepoint GDPR Product Mapping Webcast Series

Category : Forcepoint

In this three-part series of short, live webcasts, Forcepoint provides insight and interpretation around the General Data Protection Regulation (GDPR), how it maps to relevant information security technology, and more specifically, how Forcepoint technology can help you prepare for the GDPR. Each session will focus on key areas where technical measures can play a part in supporting your efforts towards the GDPR.

October 4 – Inventory of Personal Data: Learn why organizations must ensure they understand what personal data they hold and where it exists across the organization. In this session, we will discuss data-centric technologies like DLP and examine how they help organizations to find personal data and understand risk.

October 17 – Data Flow Mapping & Control: It’s necessary to understand personal data flows in order to measure risk and apply controls; this is an important part of managing effective processing practices. Integrating multiple technologies is a key to success. In this session, Forcepoint will show how technologies like DLP can be used to provide visibility and orchestrate controls to enforce processing policies through the integration of other technologies.

November 8 – Responding in a Timely Manner: With the 72-hour breach response window, organizations will need to rapidly detect data incidents and efficiently orchestrate the appropriate response. In this session, Forcepoint will explore how technologies can support organizations’ breach response process.

Presenters: Mike Smart, Product & Solutions Director, Forcepoint; Chris Jones, Sales Engineer, Forcepoint

Register here to attend all 3 sessions

