
Our Increasingly Data-Centric World: What to Look Out for in 2018

Category : NetApp

In 2018, businesses will continue to see demand for more and more digital services and customer touchpoints, with data as the lifeblood for making those things happen. Businesses and IT need to ensure that they are agile enough to meet this demand, and a modern IT infrastructure is an essential building block to becoming agile.

NetApp technology can be the cornerstone of a modern IT infrastructure. We at NetApp are continuing our mission to change the world with data and to help customers derive more value from the world’s most valuable resource: data. Our goal is both to give customers a competitive advantage and to make the world a better place.

For example, I have seen the NetApp Data Fabric strategy help many organizations unleash the potential of their data by both modernizing IT infrastructure and simplifying data management capabilities. In turn, these organizations accelerate digital transformation by:

  • Using digital technology to reach customers in new ways
  • Building new data-centric digital businesses and revenue streams
  • Radically improving business operations through timely insight

It’s exciting to see that our approach provides innovative data visibility and insights, data access and control, and data protection and security. All these benefits are essential ingredients for success for businesses in 2018, across all industry sectors. I work with customers in Australia, so I use that region as an example here. But the following trends to look out for this year can certainly apply to your organization and to enterprises worldwide.

Data Security Will Be a Major Focus for Most Businesses

Data security was headline news for much of 2017, and in 2018, this trend looks set to continue in Australia and globally. The high-profile Notifiable Data Breaches (NDB) scheme will come into effect on February 22 in Australia, forcing most businesses to take a hard look at their data, and in particular personally identifiable data. It will be important for these businesses to review how they are storing and using the data and whether it’s sufficiently protected.

The scope of what must be secured is expanding. Businesses are collecting an increasing amount of data to get to know customers better. But you need to balance this goal with the responsibility to protect the data that your customers have entrusted to you. Otherwise, you risk huge damage to your reputation from breaches that could have been avoided.

As the 2017 Australian Cyber Security Centre Threat Report points out, “Defending a network from compromise is far less costly than dealing with the costs of compromise.” This warning coupled with the increasing use of cloud services means that you must remember that the responsibility of data security lies with you, not with your cloud provider. An instance of poor security controls in systems or applications, or even basic human error, is enough to undo any customer goodwill or trust that you have earned.

Healthcare Will Become Data-Rich for Better Patient Outcomes

Healthcare is certainly a sector that collects a lot of personal data. Healthcare records have also been shown to be among the most valuable customer records as a data-breach target. However, if this data is secured, it can also be leveraged to deliver unparalleled patient outcomes.

The combination of an aging population, rapid advances in medical technology and pharmacology, and a better-informed and litigious public means that the healthcare sector will lead the way in leveraging the power of data. High-definition 3D imagery of patients that’s accessible through various devices and at any time (bedside or in the operating room) is becoming the norm. The potential for 24/7 or “follow-the-sun” diagnostic services can accelerate patient recovery time frames and deliver savings for healthcare organizations.

Australia’s digital health strategy strives for every citizen to have a digital health record through the My Health Record system by the end of this year. We will continue to see this sector transform itself to take advantage of the data that’s available to improve outcomes for areas such as medicine management, mental health, aged care, and chronic disease management.

Governments Will Tap into Data to Improve Citizen Engagement

Similarly, in the public sector, digital transformation will be underpinned by the use of data in 2018. Governments continue to amass ever-increasing amounts of data about their citizens, and at the same time, constituents are demanding more and more transparency into how governments are using public resources and personal data.

For governments to really use data as an asset, data must be turned into actionable information that results in improved decision making or outcomes. The Digital Transformation Agency is just one example of where the Australian government has recognized the need to improve the delivery of citizen services through digital solutions. The continued investment in platforms such as myGov will mean that citizens can securely and seamlessly access government services, including employment support, child support, and welfare payments.

I believe that the ability to combine and analyze various data sources, possibly along with machine learning capabilities, will help steer governments to deliver better services where they’re needed most, and faster.

Data-Driven Banking Provides Strong Use Cases

Banking is among the most digitized industries, and most financial institutions understand that their core asset is their data. Consumers are demanding increased personalization, and banks are highly capable of leveraging today’s technology to better understand their customers and offer differentiated services.

For example, does your bank:

  • Make it simple for you to transfer money to someone else through your mobile device?
  • Provide analysis on your transactions to help you with savings or payments?
  • Alert you to set your overseas travel notification when you arrive at the international airport?

I think it’s safe to assume that most of us use a bank’s digital services and that we have seen how banks use data to give us a more proactive and contextual customer experience. It’s also exciting to see the rate of innovation in this sector continue to accelerate as the fintech (financial technology) industry continues to grow and demonstrate agility in the Australian market, for example.

Start Keeping Up with Data-Centric Trends Now

So, is 2018 the “Year of Digital Transformation”? Or maybe it’s the “Year of Customer Experience”? Perhaps it’s the “Year of Data Security”? No matter what you call it, how you manage, access, and protect your data undoubtedly plays a critical role in this ever-distributed, dynamic, and diverse world.

Keep pace with these changes or even stay ahead of the trends. Learn how NetApp can help you transform your organization and gain a competitive advantage by unleashing the power of your data. You can read more in this digital transformation report recently developed in conjunction with IDC.

Source: https://blog.netapp.com/our-increasingly-data-centric-world-what-to-look-out-for-in-2018/

Author: Glenn McPherson



Do I Even Need to Secure the Cloud?

Category : McAfee

You share responsibility for securing your data in the cloud. What does that mean? More than anything else, it means understanding where your cloud provider’s layers of protection end and your responsibility begins.

A storm awaits many companies as they move infrastructure, applications, and entire portfolios to cloud services. Yet the pace of digital transformation demands that businesses make the transition. We all receive the emails: “Deploy with scalability,” “leverage provider security,” “make your operational model more efficient,” and “manage less of the complexity” in your services! These promises can certainly be realized – on the back of the billions of dollars in cloud investment from Amazon Web Services, Microsoft Azure, and others. To do so without risking the security of your data, however, requires careful planning along the way.

Most companies have become aware of which services they continue to “own” in the basic cloud provider models.

While the “who” of service block ownership has become clearer, the question of security responsibility is a bit more complex. Amazon and Microsoft are spending billions (with a “b”) of dollars investing in the technology, people, and governance to protect public cloud services. The recent introduction of services like Amazon’s Macie shows, for example, how the stock set of firewall and identity rules is quickly being complemented by deeper levels of data protection.

You, however, still retain something that Amazon and Microsoft simply don’t have: you know how your business works!  You know your people.  You know your data.  Amazon and Microsoft depend on your team, your understanding of what “good” and “bad” look like, and your willingness and ability to put reasonable security controls in place.  Often, those controls require advanced capabilities and visibility that complement the investments of the public provider, allowing you to mitigate your unique risks.

Take a simple communications scenario in Amazon Web Services. A virtual machine in your cloud deployment makes a request to an S3 bucket to list its contents, receives the listing, and then begins to request objects from the bucket. In this transaction, Amazon’s various protective layers are hard at work ensuring that DDoS and other external threats are not immediately involved. The investment Amazon has made in the identity and access management (IAM) system, including the tools for policy generation and monitoring, is brought to bear to check the applicable policies and establish a basic authorization context.

Yet, your enterprise still has outstanding risk in even this basic scenario.  Do you need to know why the list action occurred?  What application called it?  Has the VM been recently seen to engage in other unknown traffic streams?  What type of environment is the VM a part of?  What is the hygiene status / policy compliance of that VM?  Once the list is returned, is the VM allowed to access all of the things in the bucket, or should some of them be restricted?

Your enterprise remains responsible for critical aspects of the risk management of your deployment, including the ability to recognize and detect misconfigurations and to respond to undesired access events. In these kinds of scenarios, the cloud provider has applied its formidable assets in your defense – but as far as your IAM and bucket configuration state, the provider can only treat the events as permitted.
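As a minimal sketch of what detecting such a misconfiguration might look like, the following hypothetical Python snippet scans S3-style bucket policy statements for wildcard principals, one common form of the over-permissive configuration described above. The policy shape and function name are illustrative assumptions; in practice you would lean on provider tooling such as IAM Access Analyzer or Macie rather than hand-rolled checks.

```python
# Hypothetical, simplified scan of an S3-style bucket policy for
# statements that grant access to any principal ("*"). This is an
# illustration of the idea only, not a substitute for AWS's own tools.

def find_risky_statements(policy):
    """Return Allow statements whose principal is the wildcard '*'."""
    risky = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if stmt.get("Effect") == "Allow" and principal in ("*", {"AWS": "*"}):
            risky.append(stmt)
    return risky

example_policy = {
    "Version": "2012-10-17",
    "Statement": [
        # World-readable objects: the kind of misconfiguration behind
        # several of the leaks mentioned in this article.
        {"Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-bucket/*"},
        # A scoped grant to a specific account: not flagged.
        {"Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
         "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::my-bucket"},
    ],
}

print(len(find_risky_statements(example_policy)))  # → 1
```

A real audit would also need to consider bucket ACLs, condition keys, and account-level public access blocks, which this sketch deliberately ignores.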

Recent data leaks at a partner of Verizon, at Dow Jones, and elsewhere from misconfigured cloud resources have underscored that this is not mere conjecture, confirming that “but, I’m on Amazon” is not a defense for breached data. Your enterprise should have strong governance, ready discovery tools, the same (or better) identification and investigation tools you had on-premises, and the instrumentation to better assess the risk of individual data access and transmissions to your business.

In today’s cloud services, “we are running DevOps”, “it’s cloud”, and “but I’m on [provider]” cannot be our line of defense. Your enterprise can safely realize the business cases of cloud deployment by remembering the lessons of the last generation, in which we incrementally brought first the perimeter and then north-south and east-west traffic under control for risks. Today, data probably would not transit a hybrid or private cloud without a policy check, inspection, and data loss consideration. Why would your operations on a cloud service be protected any less?

Source: https://securingtomorrow.mcafee.com/business/even-need-secure-cloud/#sf182055598

Author: Wayne Anderson



Integrate Your Ticketing System into Database Security to Prevent DBA Privilege Abuse

Category : Imperva

Many of the recent high-profile data security breaches were carried out by trusted insiders, often database administrators (DBAs): highly privileged and trusted users with access to sensitive data.

In this blog post, I will discuss the inherent risk introduced by highly privileged administrators who are required to support production databases, the challenge of ensuring they are not abusing their privileges, and then, how you can integrate your ticketing system with your database compliance and security solution to mitigate the risk.

The risk of highly-privileged database users

Database administrators are sometimes required to connect to a production database to conduct maintenance tasks or to diagnose and fix a problem. These tasks often require high-level privileges. With such sweeping privileges, database administrators can do whatever they want.

Any DBA can drop, create, back up, recover, truncate, and of course query any table. At first glance, querying any table looks like the least dangerous task on the list. But if someone is trying to export and sell the contents of the credit card table, that is exactly the privilege they will need.

Malicious DBAs (insiders) are just one face of the risk. Careless DBAs might expose their DB credentials. Alternatively, their DB credentials may be compromised by an email phishing campaign (outsiders).

The “need-to-know” approach

In theory, you would like to grant each user the minimal permissions they need for the task. In practice, this is virtually impossible to achieve, since most administrative tasks require high-level privileges. In some cases, these privileges are hierarchical and contain other privileges the administrator should not have. In addition, the administrator’s permission needs keep changing based on their current task.

One example demonstrates the risk: reading the SQL Server audit file using the sys.fn_get_audit_file stored function requires the CONTROL SERVER permission. This permission also allows the administrator to query any table in any database on that server. Querying any table might enable exporting all personally identifiable information from your most sensitive tables.

The “trust, but verify” approach

The alternative to a strict permissions model is to audit all activities, let administrators know their actions are audited, and finally, review and investigate any suspicious activity.

Let’s assume the first two parts are easy. But how would you review all activities? And what exactly counts as suspicious activity when you do not know what the administrator was supposed to do?

Trust is easy. However, if the verify part is too tedious, you, and database security personnel in general, will not do it properly. What you’re probably looking for is a set of tools and procedures that simplify the verify part.

Managing production maintenance tasks and supporting cases using a ticketing system

Now let’s say you use a ticketing system. Each maintenance task or production issue has a ticket. It describes the symptoms to investigate, or the required action. Someone assigns each ticket to a DBA, who will in turn connect to the database and handle the ticket.

In a well-managed system, no highly privileged user will connect to a production database without having a ticket assigned to them.

In a perfect world, the highly privileged user will act according to the ticket’s description. That’s exactly what you need to verify.

The missing link

When database support is managed through a ticketing system, you can tell which task should be done and by whom. Still, the ticketing system will not validate that DBAs do not abuse their privileges.

The missing building block is a tool that matches what the privileged users actually did against the support tasks that should have been done. Such an automated process filters out all legitimate actions, leaving you to deal with suspicious activity only.
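As an illustrative sketch (not a description of any particular product’s implementation), the matching step might look like the following in Python: audited events that carry no ticket ID, or a ticket assigned to a different user, are the only ones left for human review. Field names and the event shape are assumptions for illustration.

```python
# Hypothetical matcher between audited DB events and ticket assignments.
# Legitimate actions (valid ticket, correct assignee) are filtered out;
# everything else is flagged for review.

def flag_suspicious(events, tickets):
    """tickets maps ticket_id -> assigned DB user; each event is a dict
    with 'user' and 'ticket_id' keys. Returns events needing review."""
    suspicious = []
    for event in events:
        assignee = tickets.get(event.get("ticket_id"))
        if assignee != event["user"]:
            suspicious.append(event)
    return suspicious

tickets = {"INC-1001": "dba_alice"}
events = [
    # Covered by a ticket assigned to this DBA: filtered out.
    {"user": "dba_alice", "ticket_id": "INC-1001",
     "sql": "UPDATE app_config SET value = 1"},
    # No ticket at all: flagged.
    {"user": "dba_bob", "ticket_id": None,
     "sql": "SELECT * FROM credit_cards"},
]

print(flag_suspicious(events, tickets))  # only dba_bob's query remains
```

The point of the sketch is the reduction in review volume: instead of reading every audited statement, the security team sees only the residue that no ticket explains.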

Naturally, a database security solution that audits all activity also has the potential to help you validate that privileged users don’t abuse their privileges, and to alert you when they do.

Integrating a ticketing system into DB audit and security

Let’s take a closer look at how ticketing systems and database security solutions should cooperate to automate alerting for abuse of high privileges. Such DB audit solution integrations should have:

  • Easy one-time set up
  • Continuous notifications on any new ticket
  • A simple way for highly privileged users to tell the database their assigned ticket ID. This is crucial: it must be as easy as executing a single SQL statement in the current connection.
  • A unique ticket ID for a specific DB connection that is associated with all activity performed in the same DB connection
  • Validation that the ticket ID is both valid and assigned to the connected DB user
  • Alerts issued when a highly privileged user connects, executes privileged actions, or queries sensitive tables with no valid ticket assigned
  • Validation of the actual activity by reporting all audited events that belong to each ticket ID

SecureSphere DAM provides all the above and more

SecureSphere allows you to integrate a ticketing system into your database security policies. Its highly customizable audit and security policies let you define which DB users must have a valid ticket ID, which actions should trigger alerts when no valid ticket is assigned, and much more.

Find out more about how it works with “Integrate Imperva SecureSphere with BMC Remedy.” I’ll discuss the technical details of how to set up SecureSphere for integration with a ticketing system in my next post.

Source: https://www.imperva.com/blog/2018/02/integrate-ticketing-system-with-database-to-prevent-dba-privilege-abuse/?utm_source=LinkedIn&utm_medium=organic_social&utm_content=preventdbaprivilege&utm_campaign=2018_q1_linkedinawareness

Author: Ehud Eshet



100 days to GDPR – the industry speaks

Category : HP Security

May 25th 2018 could prove to be a crucial day for many businesses, as the new General Data Protection Regulation (GDPR) rules come into force. With the GDPR deadline now exactly one hundred days away, how are businesses coping?

We asked some of the leading figures in the technology industry for their advice on how best to cope with GDPR – here’s what they said.

Joe Garber, global head of product marketing: information management & governance, Micro Focus

“As today marks exactly 100 days until the GDPR deadline, it is important to reflect on the changes the new rules and regulations will bring. When it comes to the GDPR, the risk of hefty fines and loss of credibility with customers are the bottom-line consequences of non-compliance for businesses. However, today we should be thinking about the benefits the GDPR will bring to privacy and security – something organisations will see if they approach the new regulations methodically and carefully, with the right technology processes in place.”

“Thinking about the safety of the web more broadly, the explosion of the Internet of Things (IoT) devices in our homes and offices – and even on ourselves through smartwatches, medical sensors and more –  poses a huge threat to privacy and security. The immense volumes of information gathered by these devices means that even legitimate use could quickly pinpoint the identity of an individual using many different fragments of data.”

“We have not previously had the experience as a society nor the legislative framework to decide what should constitute privacy, so the GDPR will be a catalyst for organisations to put measures in place to ensure the privacy of data, which they arguably should have been doing already. As a consumer, I am excited about what the GDPR can do for me as an individual, protecting my information in a time when many privacy issues are vague, threatening and of colossal scale.”

Bert Bouwmeester, director, business solutions, SQS

“Today marks the 100 day countdown to GDPR kick-off and businesses of all sizes should be putting steps in place to ensure compliance.”

“Data Protection Assessments are designed to identify and address security weaknesses within an organisation. These involve a critical examination of your systems, working processes, and staff behaviours. These assessments can help businesses focus their efforts and achieve compliance in a targeted fashion. However, the GDPR will not be the “silver bullet” for cybersecurity. The fact that a business can be fully GDPR compliant, yet still liable to a data breach is something that all businesses need to be aware of.”

Carl Leonard, principal security analyst at Forcepoint

“The GDPR countdown provides a timely push for all of us to do more to protect the privacy of the people that matter most; it is the perfect opportunity to show them how much you care. After all, by protecting the people you secure the organisation.”

“100 days does not sound like a lot of time, but it’s not too late – most organisations will be well on the way to putting in place the processes and security measures that the regulation requires. 100 days is the perfect opportunity to check your progress to see if you are on track as you put the last pieces of your strategy in place.”

Ross Jackson, vice president of customer transformation & innovation, Mimecast

“Breach notification is one of the bigger risks of the upcoming GDPR regulation. As it stands, businesses, in their Controller capacity, need to report the breach within 72 hours of becoming aware of it. But, if we consider a normal business supply chain, not every business has the necessary contacts to report it if it occurs. This is a huge problem.”

“As such, achieving GDPR compliance is a substantial task. Automation arguably has a massive role to play here, but this will only take organisations so far. For many organisations, it is going to be a manual process. Businesses must ensure they have up to date contact information across their estate as a Controller and should prepare messaging to avoid wasting time in the event of a breach.”

“Businesses should also be prepared to go through email and archived data. Email data represents one of the biggest challenges for compliance. Many organisations do not realise how much sensitive personal data is hidden within their employees’ email.”

“To prepare for the GDPR, businesses must implement a cyber resilience strategy and update outdated email archives that hold personal and sensitive data. In addition, GDPR compliance needs to be a c-suite conversation and priority. Business leaders must be aware of the implications of the regulation and also the hidden surprises it may unearth.”

James Romer, EMEA chief security architect, SecureAuth

“With 100 days to go before the GDPR kicks in, what best practices can CISOs put in place to prepare? Securing the user and their methods of accessing data is a great place to start. One of the most important changes is the broadening definition of personal data. Under GDPR, any data that could feasibly identify an individual is now considered personal. This is all the more important because 81 per cent of all data breaches come from attackers using stolen credentials.”

“Adaptive authentication gives organisations an added layer of protection to prevent the misuse of stolen credentials. CISOs should work closely with a range of groups within their organisations to understand how they classify and handle data. CISOs also need comprehensive knowledge of their business’ legacy practices. Due diligence requires these systems to be regularly tested to make sure they’re resilient and effective. These steps are critical components of successful GDPR compliance.”

Jed Mole, European marketing director, Acxiom

“It is good to see consumers taking data privacy seriously, though it’s important to understand, they do vary in terms of how they view this subject. The clear trend is towards greater real-life acceptance of data exchange as part and parcel of everyday life. This is good news for marketers who believe in data ethics and adopt the highest standards in data-driven marketing. Using data to drive more transparent value, treating people as individuals while giving them control especially as we enter the GDPR era, is key to achieving the win-win businesses and consumers really want.”

Source: https://www.itproportal.com/features/100-days-to-gdpr-the-industry-speaks/

Author: Michael Moore



Fortify Your Web Application Firewall

Category : Gigamon , Imperva

Send in Web App Security Reinforcements.

Web application attacks deny service and steal sensitive data. Web application firewalls protect against the most critical web application security risks and vulnerabilities — for example, SQL injection, cross-site scripting, illegal resource access and remote file inclusion.

The Imperva SecureSphere Web Application Firewall (WAF) analyzes and inspects requests coming into applications from every part of your network — and stops these attacks cold. When paired with the GigaSECURE® Security Delivery Platform, you not only increase the resilience and efficiency of your deployment through our inline bypass technology, but you also:

  • Increase WAF utilization by filtering out non-web traffic before burdening your tool.
  • Separate network speed from tool capacity and speed, meaning network upgrades don’t drive unnecessary tool spend.
  • Reduce the need for network outages if moving the WAF out of line or taking it down for planned/unplanned maintenance.

Power Your Tools to Prevent Threats

Route the right traffic to your Web Application Firewall. At the speed of the network.

Joint Solution Brief

Gigamon and Imperva enable multi-vector data and application protection.

Deployment Guide

Ensure business continuity. Visibility and scalability lock in web application security.

White Paper

Build an efficient network security architecture that copes with increasing network speeds.

  Source: https://www.gigamon.com/campaigns/partners-web-application-firewall.html?utm_content=buffer2559d&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer


When do banks need to be ready for PSD2?

Category : Gemalto

Last year was abuzz with discussions and speculations on PSD2 – the new European regulation that will change the banking industry – and its Regulatory Technical Standards (RTS), which define how it is to be implemented. At the end of last year, we wrote about what the directive will mean for the sector and why banks should prepare themselves.  Now that 2018 is upon us, the buzz will get louder, because PSD2 is getting very close. The details are being defined and banks must be ready soon.

But how soon is soon? Well, on November 27 last year, the RTS were finally released by the European Banking Authority (EBA). So there is finally a much clearer timeline to work towards.

Much of last year’s buzz was debate on the RTS requirements. Merchants were not happy with the balance between security and user convenience. Fintechs were not happy either, complaining about how they would access customer information held by banks. And both groups are perhaps still not completely satisfied. But today it seems that discussions are over, and the text should remain as it is – although the European Parliament and the European Council do have a three-month delay in which they could amend some points or tweak the calendar.

Let’s take a look at that calendar. Here are the key dates that we know already:

  • January 2016: PSD2 came into force
  • November 27, 2017: The RTS were released
  • January 2018: Each country had to transpose the Directive into national legislation
  • End of February, 2018: The RTS are expected to be formally approved by the European Parliament and the European Council, opening the 18-month window before their actual implementation
  • September 2019: Payment Service Providers (PSPs) must be ready to go, having implemented the RTS security and functional requirements.

The new version of the RTS introduces some interesting new elements to the calendar. The first is that banks will have to offer their open APIs to Third-Party Providers (TPPs) for testing and integration six months before the final implementation date. This means their APIs must be ready not by September 2019, but six months earlier: by March 2019.

The second new element is somewhat hidden in complicated text. Basically, banks must have a back-up plan – known as “contingency measures” – in case their open APIs don’t work. They must give TPPs an alternative way of accessing their customers’ data, allowing them to use end-users’ login credentials while indicating that they are not really the end user.

But there is one condition under which banks can do away with these contingency measures: their open APIs must have been widely used for at least three months before the September 2019 deadline.

Confusing? To recap the key dates: the three-month period of open API use can occur at any point between the March 2019 API availability date and the September 2019 deadline.

So what is the key take-away for banks? Essentially, the buzz on PSD2 is getting louder because time is now very short. The calendar is tight, and European banks need to act now.

We’ll be returning to the topic of PSD2 a lot over the next weeks – discussing everything from security and authentication standards, to the role of Payment Service Providers and the requirements for corporate banking.

Source: https://blog.gemalto.com/financial-services/2018/02/14/banks-need-ready-psd2/

Author: Silvia Candido



Coinhive Cryptocurrency Mining Script Injected Into 1000s of Government Websites Via Browsealoud Plugin

Category : Forcepoint

Over the weekend, reports emerged of a cryptocurrency mining script injected into government-owned and -run websites across the US, UK and Australia.

The affected websites had a common theme: each included a script that made a request to a JavaScript file hosted on BrowseAloud<dot>com. This script, ba.js, was seemingly modified by a malicious actor to include obfuscated code that made an additional request to the CoinHive cryptocurrency mining tool. End-users who visited one of the affected websites on Sunday, February 11, 2018, would have had a cryptocurrency miner (CoinHive, known to mine Monero coins) run in the open browser tab.

WHAT IS THE CURRENT STATE OF INFECTION?

As of writing (Monday 12 noon GMT) some of the affected websites have been placed in maintenance mode. For example, the UK’s Information Commissioner’s Office (the entity responsible for upholding information rights in the public interest and the UK’s nominated Supervisory Authority for GDPR requirements) was among them.

Other affected websites remain functioning, but Texthelp, the company responsible for the BrowseAloud tool, has acknowledged that it automatically removed the script from its customers’ websites:

Source: https://twitter.com/texthelp/status/962798423941484547

INFECTION CHAIN

If an end-user browsed to one of the affected websites that used the BrowseAloud script on Sunday, February 11, 2018, a cryptocurrency miner (CoinHive, known to mine Monero coins) ran on the end-user’s machine in the open browser tab. Texthelp report that the compromise of their script occurred at 11:14am GMT on Sunday, February 11.

The infection chain was as below:

<website pulling the BrowseAloud script> ->

hXXp://www.browsealoud.com/plus/scripts/ba.js ->

hXXps://coinhive.com/lib/coinhive.min.js?rnd=[random]

At this time we believe the script has not performed any other function apart from cryptocurrency mining.

Texthelp are in the process of cleaning their scripts.  They have provided a statement here.

JavaScript injection attacks are not a new technique; in fact they are decades old. However, this is another example of a compromised supply chain attack, similar to NotPetya in the summer of 2017.

ADVICE FOR WEBMASTERS

By compromising the source of a script common to thousands of websites, a cybercriminal was able to run code on the machines of visitors to those websites.

Ultimately, this event brings the issue of trust in the supply chain to the fore. A third-party web-code supplier may not have anticipated a threat actor re-purposing their code platform and install base for cryptocurrency mining, or for other rudimentary code injection. Care should be taken to execute code only from trusted sources, or at least to validate that the code being called is what is expected.
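One lightweight way to validate third-party code is to pin a hash of the reviewed script and compare every subsequently fetched copy against it. The following is a minimal Python sketch of that idea; the function name and the "reviewed" content are illustrative, not taken from the incident:

```python
import hashlib

def script_matches_pin(script_bytes: bytes, pinned_sha256: str) -> bool:
    """Return True if a fetched third-party script still matches the reviewed copy."""
    return hashlib.sha256(script_bytes).hexdigest() == pinned_sha256

# Record the pin when the third-party code is reviewed...
reviewed = b'/* reviewed third-party script */'
pin = hashlib.sha256(reviewed).hexdigest()

# ...then check it on every monitoring run; a mismatch means the script changed.
print(script_matches_pin(reviewed, pin))           # True: unchanged
print(script_matches_pin(b'/* tampered */', pin))  # False: raise an alert
```

A scheduled job that fetches the script and runs this check would flag an unexpected modification within one polling interval, although it cannot stop the modified script from running in visitors' browsers.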

Slight modifications to the way third-party code is run in websites can help mitigate the impact of a third-party compromise such as this. It is worth reading researcher Scott Helme’s blog, where he describes how to use the Subresource Integrity attribute when calling scripts.
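As a sketch of the idea, the integrity value used by Subresource Integrity is simply a base64-encoded digest of the exact script content, so a browser will refuse to run a copy that has been modified in transit or at the source. It can be computed in a few lines of Python; the URL in the comment is a placeholder, not a real endpoint:

```python
import base64
import hashlib

def sri_value(script_bytes: bytes, algo: str = "sha384") -> str:
    """Return a Subresource Integrity value such as 'sha384-...'."""
    digest = hashlib.new(algo, script_bytes).digest()
    return algo + "-" + base64.b64encode(digest).decode("ascii")

# The value is then pinned in the script tag, so the browser refuses to run
# a modified copy, e.g.:
#   <script src="https://third-party.example/ba.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
print(sri_value(b'console.log("hello");'))
```

The trade-off is that the tag must be updated whenever the supplier legitimately changes the script, which is exactly the point: an unexpected change should not run silently.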

PROTECTION STATEMENT

Forcepoint customers are protected against this threat at the following stages of attack:

Stage 3 (Redirect) – The call to CoinHive is categorized as Potentially Unwanted Software, a category which is blocked by default.

 

We will continue to monitor any developments in this attack.

RESOURCES:

Texthelp Statement: https://www.texthelp.com/en-gb/company/corporate-blog/february-2018/data-security-investigation-underway-at-texthelp/

Source: https://blogs.forcepoint.com/security-labs/coinhive-cryptocurrency-mining-script-injected-1000s-government-websites-browsealoud

Author: Carl Leonard



City and County of San Francisco Customer Story

Category : FireEye

City of San Francisco secures its complex infrastructure with multi-vector threat protection from FireEye.



Scale your Apps and your People (This time for free)

Category : F5

The free, Super-NetOps training program from F5 addresses a critical need faced by nearly every large business: How to break down operational silos and get the right teams doing the right work in close collaboration and with an eye toward driving better business outcomes.

It is hard to overstate the scope of change confronting businesses over the last decade or so. The shift in control over information from businesses to consumers, the speed at which competitive offerings hit the market, and the relative ease with which customers shift brand allegiance all drive an increased urgency for businesses to innovate, improve customer experience, and discover new sources of revenue. For most businesses, application development teams are key to securing a successful future. Yet those teams are often hampered by IT systems and practices that have failed to evolve at pace with the business challenges. There is frustration on both sides, of course. IT teams face enormous pressure to minimize risk and keep IT costs down even as demand for services escalates.

While some application teams have responded by bypassing IT, going straight to the cloud and addressing as many operational issues as possible through code, this approach is inherently self-limiting. Network operations teams have decades of experience deploying, managing, maintaining, and securing applications that development teams often lack. In an F5 survey earlier this year, we found that while DevOps and NetOps professionals respect each other’s priorities, there is a desire for better collaboration, more automation of network services and, on the DevOps side in particular, greater access by developers to the production pipeline.

Over the past two years, F5 has been actively working with customers to address the gap between network operations teams and the developers who want to consume more operational functions as a service rather than submit a ticket and wait. BIG-IP, as a product, has long been highly programmable. However, most BIG-IP admins have only been trained to provision BIG-IP services through the GUI or command line, and few understand how to define and deliver services as part of a continuous deployment pipeline. Plus, as any student of the DevOps movement will tell you, the challenges that DevOps methodologies address are as much cultural as technical. Closing this gap is not just about products; it's about reassessing attitudes around risk and identifying and removing constraints of all types: technical, cultural, and business process.

Announced today, the result of these engagements is a new training curriculum that we and our partners have developed and delivered to over a thousand customers in the past several months. In the spirit of continuous improvement, the curriculum, which includes a hands-on lab component, is continuously refined based on lessons learned from every engagement.

The Super-NetOps curriculum teaches network operations professionals the concepts and skills they need to create a service catalog for developers to access on demand, and to provide those services in a continuous deployment pipeline. It also covers the cultural and process changes that are fundamental to the DevOps movement, but that have largely bypassed traditional network operations teams. The Super-NetOps program addresses a critical need faced by nearly every large business: How to break down operational silos and get the right teams doing the right work in close collaboration and with an eye toward driving better business outcomes.

The training is free to customers and non-customers alike, although it does assume familiarity with BIG-IP. It includes a fully provisioned lab environment and step-by-step guides that have been thoroughly tested in the field. We will be adding additional resources and course material over time and, of course, we invite questions and suggestions. You can check out the free training here.

Source: https://f5.com/about-us/blog/articles/scale-your-apps-and-your-people-this-time-for-free-29759?sf180715951=1

Author: Teri Patrick



2018 WINTER OLYMPICS: CITIUS, ALTIUS, FORTIUS, CYBER ATTACKS?

Category : Cyber-Ark

Only days into the Winter Olympics, reports of cyber attacks are already making headlines. Officials have confirmed that a cyber attack is to blame for an internet and Wi-Fi shutdown during the opening ceremony.

Noncritical systems were impacted, including the official Olympics website, which, according to reports, went offline when organizers shut down servers to address the attack. Wi-Fi service also stopped working.

This follows the Department of Homeland Security’s recent warning that the 2018 Winter Olympics will be a hotbed of cybercriminal activity. While the warning was extended to those in attendance, you don’t have to be sitting in the stands to become an unwitting target.

Whether they’re part of a criminal syndicate or part of a nation-state attack group, cyber attackers love to use high-profile public events as a cover for their malicious activity. Even the most security conscious person can let their guard down when they’re caught up in the spectacle and excitement of something like the Olympics.

With that in mind, here are a few techniques and approaches that we believe attackers will use during the Olympics, both to target spectators on-site and those watching and reading about the Olympics at home or from the office.

Cryptomining

Cryptomining attacks are quickly replacing ransomware as the attacks du jour. Attackers will infect websites that are commonly used to view Olympic activity, stream events or provide news on what’s happening at the games.

By visiting an infected site, users unwittingly donate their computing resources to mine cryptocurrency on behalf of the attacker, all without knowing they were part of the process.

These attacks don’t require malware to run on the user’s endpoint. The only indication of the attack may be that your computer runs more slowly due to the loss of computing power.

We’ll dig into crypto-attacks more in a subsequent blog post.

High Value Targets:  Olympic viewers back home or in the office

Spear Phishing Campaigns

This is one of the most common methods attackers use to gain a foothold on an endpoint or in an organization. Attackers use peoples’ information to specifically target them with a malicious email, in hopes that they’ll click a link and unleash the payload it’s carrying.

There are already reports that attackers have been targeting Olympic officials for months. Whether you’re watching the games from home or attending, be wary of any email that contains links or attachments to information about events, times and websites to watch the games. Vigilance is the best defense against phishing attacks.

High Value Targets:  Olympic athletes, Olympic officials, country delegations and government representatives, viewers/fans

IoT and Mobile Payment Attacks

Mobile payments and IoT promise to be a big part of the 2018 Winter Olympics. Internet-connected devices have been a favorite target of attackers over the past year, primarily because of the incredibly poor security of most IoT devices. We can expect attackers to test the defenses of devices used during the Olympics, whether cameras, wearables or any other device that will be gathering data on athletes, attendees and officials.

While mobile payments make life much easier for the consumer, the platforms have historically had poor security and represent a real threat to consumers. Some of the more prevalent mobile payment attacks include spoofed mobile wallets, or malware on the phone itself that collects your data, passwords and other sensitive information.

High Value Targets:  Fans/attendees, Olympic athletes, Olympic officials

Public Wi-Fi-Related Attacks

Public Wi-Fi-related attacks are an oldie and an attacker favorite, something that has manifested at previous Olympics (and at any public event where free Wi-Fi is provided).

These types of attacks are incredibly common because free Wi-Fi is typically poorly secured. It’s fairly easy for attackers to use Wi-Fi sniffing software to ferret out the data transmitted over the network. This becomes worrisome when you use public Wi-Fi for sensitive transactions like banking or entering passwords into websites.

If you’re at the games, be extra careful about which network you connect to, and try to avoid entering passwords or sensitive information (like Social Security numbers) into websites, or visiting banking/financial websites.

In addition to these recommendations, visitors should also consider using a mobile hotspot for Wi-Fi access.

High Value Targets: Olympic athletes, fans in attendance

Source: https://www.cyberark.com/blog/2018-winter-olympics-citius-altius-fortius-cyber-attacks/

Author: 

