A Windows zero-day exploit dropped by developer SandboxEscaper allows local privilege escalation (LPE) by importing legacy tasks from other systems into the Task Scheduler utility. It’s the latest zero-day from SandboxEscaper, who said that she has four more in the hopper that she’d like to sell for $60,000 to non-Western buyers.

Mitja Kolsek, co-founder of 0patch and CEO of Acros Security, told Threatpost that the bug is a typical LPE flaw, allowing a low-privileged user on the computer to arbitrarily modify any file, including system executables. “Since these are executed in high-privileged context, the attacker’s code can get executed and, for instance, promote the attacker to local administrator or obtain covert persistence on the computer,” said Kolsek, adding that 0patch is working on releasing a micropatch for the vulnerability as soon as possible. “The only atypical factor is that the attacker must know a valid username and password on the computer because these must be passed to Task Scheduler in order for the exploit to work.”

He added, “This means, for example, that a local corporate user without administrative privileges on their workstation could easily mount such attack, and so would an external attacker who gained remote access to some computer in the network and found or guessed any Windows domain user’s credentials.”

Abusing Legacy Tasks

The exploit, disclosed on Twitter on Tuesday, takes advantage of the fact that old Windows XP tasks in the .JOB format can be imported to Windows 10 via the Task Scheduler. An adversary can run a command using the schtasks.exe executable and schedsvc.dll library copied from the old system. This results in a call to a remote procedure call (RPC) named “SchRpcRegisterTask,” which is exposed by the Task Scheduler service. When a specific function is encountered, “int __stdcall tsched::SetJobFileSecurityByName(LPCWSTR StringSecurityDescriptor, const unsigned __int16 *, int, const unsigned __int16 *)”, it opens the door to gaining system privileges.

“I assume that to trigger this bug you can just call into this function directly without using that schtasks.exe copied from Windows XP,” SandboxEscaper added in her Tuesday writeup, “but I am not great at reversing :(.”

Other researchers have tested the exploit and found it to be valid. “I can confirm that this works as-is on a fully patched (May 2019) Windows 10 x86 system,” tweeted Will Dormann, a vulnerability analyst at CERT/CC. “A file that is formerly under full control by only SYSTEM and TrustedInstaller is now under full control by a limited Windows user. Works quickly, and 100% of the time in my testing.”

He said it works against a fully patched and up-to-date version of Windows 10, 32- and 64-bit, as well as Windows Server 2016 and 2019. Windows 8 and 7 are not vulnerable, he noted. Microsoft, for its part, has yet to release an advisory or statement on the bug, which doesn’t yet have a CVE.

More Zero-Days on the Horizon?

SandboxEscaper also announced on her blog that she’s sitting on three other LPE vulnerabilities and another, fittingly, for escaping the Windows sandbox. “If any non-western people want to buy LPEs, let me know,” she wrote. “(Windows LPE only, not doing any other research nor interested in doing so). Won’t sell for less then 60k for an LPE. I don’t owe society a single thing. Just want to get rich and give you *** in the west the middlefinger.”

SandboxEscaper has a history of releasing fully functional Windows zero-days.
Last August, she debuted another Task Scheduler flaw on Twitter, which was quickly exploited in the wild in a spy campaign just two days after disclosure. In October, SandboxEscaper released an exploit for what was dubbed the “Deletebug” flaw, found in Microsoft’s Data Sharing Service (dssvc.dll). And towards the end of 2018 she offered up two more: the “angrypolarbearbug,” which allows a local unprivileged process to overwrite any chosen file on the system; and a vulnerability that allows an unprivileged process running on a Windows computer to obtain the contents of an arbitrary file – even if permissions on that file don’t allow it read access.

“I believe her claim about four more vulnerabilities, as she has demonstrated her abilities to find them in the past,” Kolsek told Threatpost.
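For readers studying the legacy-task import technique described above in an isolated lab, the following is a minimal Python sketch of the command shape as publicly reported. The directory, task name and credentials are hypothetical placeholders; the PoC’s exact invocation and staging of the .JOB file may differ from what is shown here, so treat this as a sketch, not a reproduction recipe.

```python
import subprocess
from pathlib import Path

# Hypothetical lab layout: the XP-era binaries are copied side by side so the
# legacy schtasks.exe can load the legacy schedsvc.dll.
LEGACY_DIR = Path(r"C:\lab\xp_bits")       # contains schtasks.exe + schedsvc.dll
JOB_NAME = "LegacyJob"                     # hypothetical imported .JOB task name
USER, PASSWORD = "labuser", "labpass"      # a valid local account is required,
                                           # as Kolsek notes above

def reregister_legacy_task(job_name: str, user: str, password: str) -> None:
    """Invoke the legacy schtasks.exe to change a .JOB-format task, which is
    what drives the SchRpcRegisterTask RPC call in the Task Scheduler service."""
    subprocess.run(
        [str(LEGACY_DIR / "schtasks.exe"), "/change",
         "/TN", job_name, "/RU", user, "/RP", password],
        cwd=LEGACY_DIR,  # run from here so the copied schedsvc.dll is found
        check=True,      # raise if the command fails
    )

if __name__ == "__main__":
    reregister_legacy_task(JOB_NAME, USER, PASSWORD)
```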

Source

Google stored G Suite passwords in plaintext for almost 15 years, the cloud giant acknowledged on Tuesday evening. G Suite, Google’s brand of cloud computing, productivity and collaboration tools, software and products, has more than 5 million users as of February. Google said that it recently discovered the passwords for a “subset of enterprise G Suite customers” had been stored in plain text since 2005.

“This practice did not live up to our standards,” Suzanne Frey, VP of engineering for Google Cloud Trust, said in a post. “To be clear, these passwords remained in our secure encrypted infrastructure. This issue has been fixed and we have seen no evidence of improper access to or misuse of the affected passwords.”

Enterprise, not consumer, accounts were impacted, said Google.

What Happened?

The best security practice is to store passwords as cryptographic hashes that mask those passwords to ensure their security – so when users set their passwords, instead of keeping the exact characters of the password, companies scramble it with a “hash function.” However, Google said that within G Suite, it had made an error implementing a G Suite console for domain administrators that resulted in passwords being stored in plaintext – meaning they weren’t cryptographically hashed and were left unscrambled.

The tool, located in the administrator console, allowed administrators to upload or manually set user passwords for their company’s users and was meant to help them onboard new users. However, due to an implementation error, the admin console was inadvertently storing passwords in plain text. The functionality no longer exists, said Google.

In a separate issue, Google also discovered that starting in January 2019, it inadvertently stored a subset of unhashed passwords – for a maximum of 14 days – in its encrypted infrastructure. “This issue has been fixed and, again, we have seen no evidence of improper access to or misuse of the affected passwords,” said Frey. “We will continue with our security audits to ensure this is an isolated incident.”

Google has notified G Suite administrators to change impacted passwords and will reset accounts that have not already done so themselves. Google did not specify how many users were impacted by either incident.

Google Security Practices Blasted

The main issue is that the full extent of a security faux pas like this won’t be known for years to come, said Robert Prigge, president of Jumio. “That means, when G Suite users are logging into their accounts, we want to believe, really believe, that they are the legitimate account owners,” said Prigge in an email. “But, at the end of the day, we don’t know for sure. And the weakest link in the security chain is again Google’s username and password. Thanks to the Dark Web, phishing attacks and social engineering, there’s a huge quantity of user credentials available for purchase (for pennies).”

Another concern is the timeline: The fact that Google only recently discovered that the G Suite passwords had been stored in plaintext since 2005 is troubling, said Kevin Gosschalk, CEO of Arkose Labs. “Companies need to be constantly re-evaluating and testing their own security measures to make sure lapses in security or, in this instance, a faulty password setting and recovery offering, does not jeopardize its customers or their accounts,” Gosschalk said via email.
“This mistake should have been recognized and prevented fourteen years earlier with proactive, ongoing security testing.”

Google is only the latest major tech company to find itself in hot water over how it stores passwords. In March, Facebook said it found that hundreds of millions of user passwords had been stored in plain text for years. And a year ago, in May 2018, Twitter said that a glitch caused account passwords to be stored in plain text on an internal log, sending users across the platform scrambling to change their passwords.
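To make the hashing practice described above concrete, here is a minimal sketch of salted password hashing and verification using only Python’s standard library. The parameter choices (SHA-256, 600,000 iterations) are illustrative assumptions, not a description of Google’s internal scheme; the point is that only the salt and digest are ever stored, never the password itself.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only the (salt, digest) pair is stored."""
    salt = os.urandom(16)  # unique per password, so identical passwords differ
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong-guess", salt, digest)
```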

Source

After Facebook and Twitter, Google becomes the latest technology giant to have accidentally stored its users' passwords unprotected in plaintext on its servers—meaning any Google employee with access to the servers could have read them.

In a blog post published Tuesday, Google revealed that its G Suite platform mistakenly stored unhashed passwords of some of its enterprise users on internal servers in plaintext for 14 years because of a bug in the password recovery feature.

G Suite, formerly known as Google Apps, is a collection of cloud computing, productivity, and collaboration tools designed for corporate users, with email hosting for their businesses. It's basically a business version of everything Google offers.

The flaw, which has now been patched, resided in the password recovery mechanism for G Suite customers that allows enterprise administrators to upload or manually set passwords for any user of their domain without actually knowing their previous passwords, in order to help businesses with onboarding employees and with account recovery. When admins did set or reset a password this way, the admin console would store a copy of it in plain text instead of hashing it, Google revealed. “We made an error when implementing this functionality back in 2005: The admin console stored a copy of the unhashed password,” Google says.

However, Google also says that the plaintext passwords were stored not on the open Internet but on its own secure encrypted servers, and that the company found no evidence of anyone's password being improperly accessed. “This practice did not live up to our standards. To be clear, these passwords remained in our secure encrypted infrastructure,” Google says. “This issue has been fixed, and we have seen no evidence of improper access to or misuse of the affected passwords.”

Google also clarifies that the bug was restricted to users of its G Suite apps for businesses, and that no free consumer Google accounts, such as Gmail, were affected. Though the company did not disclose how many users might have been affected beyond saying the issue hit “a subset of our enterprise G Suite customers,” with more than 5 million G Suite enterprise customers, the bug could affect a large number of users — presumably any user who used G Suite in the last 14 years.

In order to address the issue, Google has since removed the capability from G Suite administrators and emailed them a list of impacted users, asking them to ensure that those users reset their passwords. Google says it will automatically reset passwords for users who do not change them themselves. “Out of an abundance of caution, we'll reset accounts that have not done so themselves,” the tech giant says.

Google is the latest tech company to accidentally store unhashed passwords on its internal servers. Recently, Facebook was in the news for storing plaintext passwords for hundreds of millions of its users, on both Instagram and Facebook, on its internal servers. Almost a year ago, Twitter also reported a similar security bug that unintentionally exposed passwords for its 330 million users in readable text on its internal computer system.
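As an illustration of the failure mode described in this article, the sketch below shows a hypothetical admin “set password” handler: the correct path persists only a salted hash, while the reported bug amounts to keeping a copy of the raw value somewhere. All names and the storage layout are invented for illustration and do not describe Google’s actual console code.

```python
import hashlib
import os

# Hypothetical in-memory store: username -> (salt, digest). In the correct
# flow, the raw password never outlives this function call.
USERS: dict[str, tuple[bytes, bytes]] = {}

def admin_set_password(username: str, password: str) -> None:
    """Hypothetical onboarding handler: persist a salted hash, not the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    USERS[username] = (salt, digest)  # the raw password is discarded here
    # Bug pattern (do NOT do this) -- persisting an unhashed copy on the side:
    # audit_log.write(f"{username}:{password}")

admin_set_password("new.hire@example.com", "correct horse battery staple")
```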

Source

An anonymous hacker with the online alias “SandboxEscaper” today released proof-of-concept (PoC) exploit code for a new zero-day vulnerability affecting the Windows 10 operating system—that's his/her fifth publicly disclosed Windows zero-day exploit [1, 2, 3] in less than a year.

Published on GitHub, the new Windows 10 zero-day vulnerability is a privilege-escalation issue that could allow a local attacker or malware to run code with administrative system privileges on the targeted machine, eventually allowing the attacker to gain full control of it. The vulnerability resides in Task Scheduler, a utility that enables Windows users to schedule the launch of programs or scripts at a predefined time or after specified time intervals.

SandboxEscaper's exploit code makes use of SchRpcRegisterTask, a method in Task Scheduler for registering tasks with the server, which doesn't properly check for permissions and can therefore be used to set an arbitrary DACL (discretionary access control list) permission. “This will result in a call to the following RPC ‘_SchRpcRegisterTask,’ which is exposed by the task scheduler service,” SandboxEscaper said.

A malicious program or a low-privileged attacker can run a malformed .job file to obtain SYSTEM privileges, eventually allowing the attacker to gain full access to the targeted system. SandboxEscaper also shared a proof-of-concept video showing the new Windows zero-day exploit in action. The vulnerability has been tested and confirmed to work successfully on a fully patched and updated version of Windows 10, 32-bit and 64-bit, as well as Windows Server 2016 and 2019.

More Windows Zero-Day Exploits to Come

Besides this, the hacker also teased that he/she still has four more undisclosed zero-day bugs in Windows, three of which lead to local privilege escalation and a fourth that lets attackers bypass sandbox security. The details and exploit code for the new Windows zero-day came just a week after Microsoft's monthly patch updates, which means no patch currently exists for this vulnerability, leaving systems open to exploitation and abuse. Windows 10 users will need to wait for a security fix until Microsoft's security updates next month—unless the company comes up with an emergency update.
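A simple way to observe the DACL change described above in a lab is to dump a target file’s ACL before and after running the PoC. The sketch below shells out to the built-in Windows icacls tool, which lists a file’s permissions when given just a path; the target file is a hypothetical example of a SYSTEM/TrustedInstaller-controlled file.

```python
import subprocess

def show_acl(path: str) -> str:
    """Return the file's ACL listing as printed by the built-in icacls tool."""
    result = subprocess.run(
        ["icacls", path], capture_output=True, text=True, check=True
    )
    return result.stdout

# Run once before and once after the PoC: if a limited user has gained
# (F) full control over the file, the exploit worked as described.
print(show_acl(r"C:\Windows\System32\license.rtf"))  # hypothetical target
```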

Source

By Uzair Amir

The leaked database was discovered on Shodan on May 14th. A huge online database containing private contact information, including phone numbers and email IDs, of roughly 50 million Instagram profiles, including those of influencers and brands, has reportedly been discovered by security researcher Anurag Sen. The affected individuals include famous food bloggers and celebrities too […]

This is a post from HackRead.com. Read the original post: Database with millions of Instagram influencers' info leaked online

Source

Mozilla patched several critical vulnerabilities with the release of its Firefox 67 browser on Tuesday. The worst of the bugs patched are two memory-safety flaws that attackers could exploit to take control of an affected system, according to a security bulletin issued by the United States Computer Emergency Readiness Team (US-CERT).

One of the critical bugs (CVE-2019-9800) impacts the Firefox and Firefox ESR browsers in version 66. Firefox ESR is the Extended Support Release version of Firefox, designed for mass deployments. “Some of these bugs showed evidence of memory corruption and we presume that with enough effort that some of these could be exploited to run arbitrary code,” wrote Mozilla in its bulletin.

A second critical memory vulnerability (CVE-2019-9814), found in Firefox 66 (but not in Firefox ESR), could also be exploited to run arbitrary code, according to Mozilla. The technical specifics of both critical bugs have not yet been released. Still unknown is whether either critical vulnerability can be exploited remotely or whether they require local access to impacted systems. As with all the bugs publicly disclosed Tuesday, upgrading to the latest Firefox 67 browser will patch the flaws. In all, Mozilla issued patches for 21 bugs. Of those patches, two were rated critical, 11 high, six moderate and two low.

Mozilla Firefox 67 Touts Privacy and Speed

The bug fixes coincide with a significant update to the Firefox browser that introduces privacy additions and under-the-hood tweaks to make the browser more competitive with Google Chrome in the speed department. On the privacy front, Mozilla Firefox 67 now blocks cryptomining scripts and browser fingerprinting. Digital fingerprinting is when a website can identify a user based on a unique set of the visitor’s system parameters, such as screen information, operating-system version, browser time zone, installed plugins, cookies, time on site, clicks on site locations, and mouse and touchscreen behavior. “Today’s Firefox release gives you the option to ‘flip a switch’ in the browser and protect yourself from these nefarious practices,” wrote Marissa Wood, vice president of Firefox product management at Mozilla.

Mozilla has also enhanced its Private Browsing features. Firefox 67 now allows users to browse in Private Browsing mode and still take advantage of stored passwords. Another safety, security and privacy feature gives users the ability to disable and enable web extensions while in Private Browsing mode.

Source

Intel has issued an updated advisory for more than 30 fixes addressing vulnerabilities across various products – including a critical flaw in Intel’s Converged Security and Management Engine (CSME) that could enable privilege escalation.

The bug (CVE-2019-0153) exists in a subsystem of Intel CSME, which powers Intel’s Active Management System hardware and firmware technology, used for remote out-of-band management of personal computers. An unauthenticated user could potentially abuse this flaw to escalate privileges via network access, according to the Intel advisory, updated this week. The flaw is a buffer-overflow vulnerability with a CVSS score of 9 out of 10, making it critical. CSME versions 12 through 12.0.34 are impacted: “Intel recommends that users of Intel CSME… update to the latest version provided by the system manufacturer that addresses these issues,” according to Intel’s advisory.

Overall, the chip giant issued 34 fixes for various vulnerabilities – with seven of those ranking high-severity, 21 medium-severity and five low-severity, in addition to the critical flaw. These latest flaws are separate from Intel’s other advisory last week revealing a new class of speculative-execution vulnerabilities, dubbed Microarchitectural Data Sampling (MDS), which impact all modern Intel CPUs. Those four side-channel attacks – ZombieLoad, Fallout, RIDL (Rogue In-Flight Data Load) and Store-to-Leak Forwarding – allow for siphoning data from impacted systems.

High-Severity Flaws

In addition to the critical vulnerability, Intel released advisories for several high-severity flaws across different products. One such glitch is an insufficient input-validation flaw in the Kernel Mode Driver of Intel i915 graphics chips for Linux. This flaw could enable an authenticated user to gain escalated privileges via local access. The vulnerability, CVE-2019-11085, scores 8.8 out of 10 on the CVSS scale. Intel i915 graphics drivers for Linux before version 5 are impacted; Intel recommends users update to version 5 or later.

Another high-severity flaw exists in the system firmware of the Intel NUC kit (short for Next Unit of Computing), a mini-PC kit that offers processing, memory and storage capabilities for applications like digital signage, media centers and kiosks. This flaw, CVE-2019-11094, ranking a 7.5 out of 10 on the CVSS scale, “may allow an authenticated user to potentially enable escalation of privilege, denial of service and/or information disclosure via local access,” according to Intel. Intel recommends that the impacted products (below) update to the latest firmware version.

Another high-severity flaw, discovered internally by Intel and disclosed last week, exists in the Unified Extensible Firmware Interface (UEFI), a specification defining a software interface between an operating system and platform firmware (while UEFI is an industry-wide specification, specifically impacted is UEFI firmware using the Intel reference code). “Multiple potential security vulnerabilities in Intel Unified Extensible Firmware Interface (UEFI) may allow escalation of privilege and/or denial of service,” according to last week’s advisory. “Intel is releasing firmware updates to mitigate these potential vulnerabilities.” The flaw, CVE-2019-0126, has a CVSS score of 7.2 out of 10, and may allow a privileged user to potentially enable escalation of privilege or denial of service on impacted systems.
This vulnerability stems from “insufficient access control in silicon reference firmware for Intel Xeon Scalable Processor, Intel Xeon Processor D Family,” according to Intel. In order to exploit the flaw, an attacker would need local access.

Other high-severity flaws include: an improper data-sanitization vulnerability in a subsystem of Intel Server Platform Services (CVE-2019-0089); an insufficient access-control vulnerability in a subsystem of Intel CSME (CVE-2019-0090); an insufficient access-control vulnerability (CVE-2019-0086) in the Dynamic Application Loader software (an Intel tool allowing users to run small portions of Java code on Intel CSME); and a buffer-overflow flaw in a subsystem of Intel’s Dynamic Application Loader (CVE-2019-0170).

Lenovo, for its part, released an advisory with several target dates by which it aims to apply patches for its Intel-impacted products, including various versions of the IdeaPad and ThinkPad (see a full list here).

Source

With businesses continuing their digital migrations to cloud services and applications, IT is finding itself wrestling with how to keep companies’ data safe. The challenge? The cloud has created a next-generation, virtual perimeter. Businesses are using infrastructure-as-a-service (IaaS), cloud storage and software-as-a-service (SaaS) applications housed by third parties, and are connecting to these resources using mobile and fixed devices that are not tied to a company branch office or headquarters. The result is data being housed across a fragmented landscape, where achieving the control and visibility that organizations have traditionally had over their data has become more complex — thus introducing new areas of risk.

Threatpost Senior Editor Tara Seals was recently joined on a webinar by Jim Reavis and Sean Cordero of the vendor-neutral Cloud Security Alliance to discuss best practices for locking down data across the cloud-enabled architecture. The full video with the slides is below. Below the video is a lightly edited transcript of the webinar.

Tara Seals: Thank you for attending today’s Threatpost webinar, entitled “Data Security in the Cloud: How to lock down data when the traditional network perimeter is no longer in place.” I’m Tara Seals, senior editor at Threatpost, and I’ll be your moderator today. I’m excited to welcome our panelists, who will give a pretty comprehensive dive into cloud security, which is a topic I think on most people’s minds these days. To that end, let me introduce them. We are going to hear today from Jim Reavis, who is CEO at the Cloud Security Alliance, as well as Sean Cordero, who is VP of Cloud Strategy at Netskope – and he’s here in his capacity as a member of the Cloud Security Alliance today. I wanted to let you guys know that they’re going to run through a presentation, and then after that we’re going to have a panel discussion and a Q&A session with you, our audience members. You can submit your questions at pretty much any time during the webinar using the control panel widget on the right-hand side of your screen. If you look, there’s an option for questions. You can click on that to open up a window where you can submit your queries. Speaking of which, I have a couple of housekeeping notes before we begin. First of all, the webinar is being recorded. We’ll be sending out a link where you can listen on demand, so you can share that with your colleagues. We’re also going to eventually have a transcription video posted on threatpost.com, so keep an eye out for that. With that, before we get started, I also wanted to just briefly frame our discussion and talk a little bit about why this topic is so timely, or why we think it’s so timely, when businesses are embracing on-demand and software-as-a-service (SaaS) applications at a rapid clip. I think we’re aware that small businesses might have only three or four applications, but Fortune 500 companies might have literally thousands of cloud applications. So this is something that is definitely unavoidable. On top of that, businesses are using infrastructure-as-a-service and cloud storage, expanding their network footprints. They’re connecting to those resources using a vast set of new and different types of devices, both mobile and fixed, that may or may not be located within a company branch or headquarters. And the result is that you have a lot of data flying around.
You have both structured and unstructured data that can either rest in some kind of cloud repository or fly back and forth between endpoints and various services. And all of that is spread out across multiple parts of the corporate architecture – some parts of which the business might manage or own themselves, and other parts they might not have a whole lot of oversight on because it’s hosted in the cloud. So you end up with a fragmented landscape where a lot of the control and visibility that organizations have traditionally enjoyed over their data has kind of gone away. That in turn introduces risk – and new areas of risk – where the concerns that people should maybe be thinking about aren’t necessarily that well-known. Jim and Sean are going to cover this ground today, and I’m really excited to hear what they have to say. They’re going to give us some ideas and best practices for locking down data across this new cloud-enabled architecture. With that, I’m going to turn it over to these guys. Welcome Jim and Sean. Thank you for joining us.

Jim Reavis: Pleasure to be here.

Sean Cordero: Thank you for having me.

Tara Seals: I’d love it if you guys could introduce yourselves and then tell us a little bit about what you’re bringing to the table today.

Jim Reavis: Sure. I’ll go first. Hi, this is Jim Reavis. I started in information security in a bank in 1988 doing some computer security. Obviously the world has changed quite a bit. I’ve always enjoyed being in this industry because it’s a very interesting, thoughtful combination of art and science, where you have the technology, and you also have adversaries, so you have the psychology of the organizations to be thinking about. I started Cloud Security Alliance – started thinking about it in 2007, 2008, when you were starting to see this as a coming trend and a lot of virtualization, just a very virtualized view of the world. We are now 10 years old and, as a nonprofit, have done a lot of work in terms of vendor-neutral research, best practices, and certification for providers as well as individuals. Just happy to be here. We’ll try to share as much of what I have learned over those 31 years that might be relevant to the topic.

Tara Seals: Great. Thank you. Sean, what about you?

Sean Cordero: Hi. This is Sean Cordero. Thank you again everyone for joining us for today’s conversation. I’ve been in the IT and security space now going on 21 years, which is longer than I like to admit. I grew up coming up as a network engineer and architect, really focusing on trying to solve the risk-management puzzle as it related to the international and global risk of the companies that I served. One of the key things that led to my engagement with the Cloud Security Alliance was the acknowledgment that there was an inadequate amount of guidance from other organizations. That then led me to the CSA, where I’ve been a contributor to some of their core research, specifically the Cloud Controls Matrix and the Consensus Assessments Initiative Questionnaire. I can’t believe I didn’t stumble saying that. Happy to be here, looking forward to the conversation. I’m hopeful that folks on the phone are able to glean something from it and ask questions as well.

Tara Seals: Great. Well, thank you guys. Appreciate it. And with that, Sean, I’m going to turn it over to you. I know you’re going to be running the slides today, and you and Jim are both going to tag-team on this presentation. I’m excited to hear what you guys have to say, and over to you.

Sean Cordero: Great.
Thank you very much. Jim, you see this deck? Good to go?

Jim Reavis: Yep.

Sean Cordero: Excellent. As we’ve already introduced ourselves, I’ll move past this. For the next 25 to 30 minutes or so there’s going to be a lot of content that we’re going to be sharing. One of the key things that we really want to encourage everyone to do is to please ask questions. We want to keep this interactive. At the same time, if there’s something that you feel you agree or do not agree with, please, let’s have discourse about that. I think that’s how we all get better. For the next 25, 30 minutes or so, I’ll give you an overview of what the core drivers of cloud adoption are and why it is that it seems to have kind of gotten out of control from an IT risk-management perspective. We’ll talk about some very specific and troublesome cloud risks that some organizations may or may not know about. Then we’ll provide some high-level recommendations as starting points in terms of trying to get ahead of the inevitable adoption of cloud-based technologies. And then of course, we’ll move forward with the discussion.

So in 2012, Harvard Business Review, in conjunction with, I believe it was Verizon at the time, did a study. And what they found was that this cloud adoption thing was moving pretty quickly, and much faster than anyone had anticipated. In 2012 what they said is, organizations that are moving towards cloud will have a competitive advantage in terms of competing in the market. Interestingly enough, three years later they came back and did a very similar analysis, and what they found was a bit startling. What they found is that the organizations that had not adopted cloud or had no plans for cloud adoption had actually lagged significantly behind and fallen down from a competitive standpoint. So cloud, literally from their analysis, has become table stakes for most business leaders, simply due to the agility and speed and capability that it provides. That also has been echoed by some of the top leaders in the cloud space, where Marc Benioff, the founder and CEO of Salesforce, has said on multiple occasions that this is really the next evolution and revolution in terms of how we work and interact with data, how we interact with process, and ultimately how we empower businesses.

When we think about our push to digital transformation and what it means from a security and risk-management standpoint, there are some really tough truths that I think we as the security industry, and even as security practitioners, have had to face directly or indirectly. Now I think cloud and cloud adoption has really forced and exposed a lot of the weaknesses that have existed across the information and cyber-landscape. We all know that breaches keep going up, et cetera, et cetera. One of the things that rarely gets asked is, well, if we’re getting better at security, why does the problem seem to get worse? Part of it is, and I’m speaking simply from my point of view, I don’t actually think security as a practice is something that IT and cyber were really that good at to begin with. And I’m painting with a very broad brush here. Probably due to the fact that some of the things that we all struggle with as cybersecurity pros, they’re really fundamental, basic things that often don’t even fall within the purview of cyber.
For example, you have organizations that will spend an inordinate amount of time on managing vulnerabilities, some of them in developed applications, or in other cases just simply getting a patch-management process in place, and that becomes sometimes like a multi-month, multi-year effort, and often it never really gets to where it wants to be. But now, since a lot of that responsibility has kind of been pushed out to the cloud, you still have, as the cyber-professional, a responsibility to ensure that not only do you understand what your provider is providing you, but really the crux of this discussion is: what is it that you can effectuate from a controls perspective? And that’s kind of the crazy part, because what we found is that the great majority of organizations that are saying, “Hey, we’re doing a cloud move or a cloud migration,” they actually may or may not know – or often I think they know but kind of avoid that discussion because it’s difficult – that really the great majority of cloud usage is already in their enterprise, and it’s not under the control of anyone in that organization.

That creates immediate friction, because as professionals we get stuck. How is it, then, that we’re going to enable, let’s say, our human resources team that is utilizing a software-as-a-service platform to do payroll, but they didn’t get it set up via IT, thus it’s not set up via single sign-on, and it doesn’t utilize some of the basic controls that cyber might want? How is it, then, that we come in as risk-management professionals and tell them that they can’t use it anymore? And that immediately puts us at odds. It always has, but in the modality of cloud-based access it’s even worse, because there’s nothing we can do to really prohibit it from the get-go outside of some architectural things. But the risks are really the same. I mean, this is the issue which makes it so complex: we’ve had management and risk-management models and cyber-technical control models that have been in existence for a solid 30 years, and I’ve always kind of questioned the efficacy of those anyway, but now we’re trying to apply them in the cloud context, where all of these components are really completely different.

This is where we start looking at the core challenges, which is, there’s a lot – and Jim will be speaking to the shared responsibilities and the scope in more detail a little further on here – but one of the things that is very troubling is within the cloud service providers (CSPs). And I understand why, from a business standpoint and also from a supportability standpoint, but additional features that might be necessary to provide data protection or reduce the management overhead associated with risk management often require the consumer of the service to pay extra, which is fascinating because a lot of the CSPs will also, in the same breath, speak about how deep and wide their security capabilities are. But then we have another challenge, which is – and the CSA, many years ago, and I believe Jim, it was the Open Shared API Initiative that the CSA was driving as a research project – one of the ideas that CSA had brought to the industry was: why don’t we create a uniform set of application programming interfaces (APIs) that can then be leveraged across the entirety of the space? Maybe Jim, you can speak to how well that was received. I can certainly speak to what I’ve seen in terms of the adoption of something like that.

Jim Reavis: Sure.
I think that there’s an aspect of how we do security, or how we think about IT in general, that is somewhat idealistic – that we can create a massive amount of standards that allow a maximum amount of flexibility. The idea in the open APIs working group project was to allow a certain modicum of portability from a consumer perspective – and that could be an enterprise consumer – to be able to securely manage and encrypt information to a variety of different cloud providers. You don’t necessarily see things happen that way from a cloud provider perspective. We’ve seen them innovate to compete with each other to provide a lot of unique services. Those unique services could be considered proprietary, and you could say that’s proprietary in a good way. So it’s a continuing back and forth that I think we have, that we need to sort of manage to understand. It is going to be a complex environment, and we can try to advocate, and the consumers of cloud can try to insist, that their providers adhere to standards that allow you, for example, to bring your own keys to any cloud provider, which is something we advocate quite a bit, and it’s been in our guidance. But it really is a challenge, and it’s probably foolish to think we are going to have such a level of cooperation among all the different cloud providers that it’s easy to move applications and data between them seamlessly. We can always strive towards that, but we need to understand it’s continuing to grow in complexity.

Sean Cordero: Yeah, that’s great insight. And to everyone on the phone, Jim made a very critical point there, which is that the cloud consumers, i.e., your enterprises, really need to drive the need for that by requesting it and forcing your CSPs to engage with your other partners. Because what’s happened is, not only are certain security features in some cases behind paywalls, there is no parity around this, which then leads to a really complex problem. Back in the day when single sign-on was first introduced, everyone was like, “This is great. We would love to expand this elsewhere.” And it did create a boom in terms of internal efficacy and efficiency for IT and security teams. However, in the cloud model that is at a minimum table stakes – having some sort of identity provider. But what isn’t solved for – and it’s tied to the first portion – is the fact that if you need to create security policies, say data-protection policies, on one cloud and data protections on another cloud, you’re forced to log in to each one of these clouds independently and configure them. Now, I grew up as a Windows administrator as well, and I remember how difficult it was just to get the correct folder rights set up for a share. Imagine trying to get the folder rights right on a SaaS service, and then ensuring that the SaaS service is configured in a secured manner, on top of which hopefully you’re paying for the additional security features that actually enable you to do more control. It’s kind of like this vicious cycle that we’re finding ourselves in, and I think this is where we as practitioners, and folks that consume cloud services, really need to engage with the CSPs to rethink this, because I don’t believe it’s going to be a model that’s going to do the right thing for organizations. And then, because the vendors are limited, it creates a lot of friction for our end users and the folks that really are getting the most benefit from the usage.

Some of the key things that lead to very specific cloud risks are tied to the data-protection piece.
And some organizations may or may not be aware of this. I know in my other capacity this is a conversation that we talk about a lot. We’ve already discussed the business organizations – those are your lines of business, your sales teams, your marketing teams, your human resources team. Which is ironic, actually: the top two organizations within almost any enterprise that tend to adopt cloud fastest, and that often can create exposure because maybe they’re not engaging with security, are, our research shows, human resources and marketing, where those two lines of business tend to kind of switch back and forth. Part of it is because they may perceive that the usage of a particular cloud, irrespective of how “secure” it is, isn’t really in the purview of IT, because unlike in the past, where they would have to call somebody and say, “Can I get this deployed? Can I do this? Can I do that?”, that is not a process or workflow that exists in the cloud context. It doesn’t have to exist, because by definition it’s meant to work that way. That leads to a variety of other issues as well.

One of the key things is that we kind of ignore the fact that this data is doing this now. A lot of organizations are still trying to go down the path of architecturally home-running everything back down to their on-premises security stack. But what very quickly occurs is that you have this architecture that really wasn’t that effective to begin with, if you really think about it. One of the key things that a lot of organizations are dealing with is the pervasiveness of phishing attacks. I don’t know if everyone recalls why some of those attacks became so prevalent early on and why organizations were so subject and weak to them. It was because it was a very effective way to bypass all of the traditional network controls, because of the trust model in and of itself, where anything coming out from your network going to the internet is considered to be safe; phishing-like attacks and command-and-control-type attacks exploit that. Sadly, the technology in place right now can’t really handle that.

So where we end up – and this is one of the really scary parts and something that I work with a lot of organizations on – is this. If we think about this gate as your … I was going to use a firewall as an example. It could also be a proxy. If we say, “Hey, we want to prohibit our enterprise from going to bad sites” – those might be sites not appropriate for work, potentially illegal sites, or even storage services that are not cleared by IT or cyber or risk – traditionally the way this gets handled is you’ll put some rules in place on your firewall going outbound, put some rules in place on your proxy going outbound, and then you’ll kind of call it good and leave it to the vendor to say, “Yes, this is a good site. This is a bad site.” But back to the issue that I brought up before: because of the lack of openness in terms of standards for the integration with existing security tools and other new tools that might come into existence, you end up with a situation where, simply by enabling these services outbound through your enterprise, you’re actually making an implicit and sometimes explicit acceptance of risk. And that risk acceptance looks like one of your end users intentionally or unintentionally taking your data on one of your devices that you are responsible for, and moving it to a different tenant on that same cloud.
What happens then is that your traditional controls do not have any way of prohibiting that, because the only way to traditionally block it would be through some level of acknowledgment that it’s going to a different instance – and the way a lot of these technologies work, they don’t do that. This is one example where the consumer can really drive that discussion, because for me, as a practitioner who’s very passionate about this, that to me is an unacceptable risk. I could never go to my executive vice president and say, “Hey, just so you know, we’re totally okay with somebody saving all of the sensitive stuff to their home version of Office 365.” But this is where we need to really stand with the CSA and all these other organizations to force that discussion between our security vendors and our cloud service providers, to get us all in a healthier place to address things like this, because right now there is no way to easily address it.

When we think about this whole data piece and how it goes, one of the key things that happens quite regularly is, if you think of the kill chain and you say, “Well, how does it actually change in the cloud?”, it gets even a little scarier. It’s unrealistic to say, “Hey, we’re going to just pull back all the cloud because Sean and Jim were talking about this kill chain, and we’re at high risk,” because the problems are still effectively the same. It’s just a question of how you approach it. I’ll give you an example. Let’s say for a minute that your sales team uses a CRM. Insert whomever it may be – it can be one of the leading ones, or it could be a startup that nobody knows about. So I’m going to call it seancrm.com. Now we’ve got a bad actor out there really interested in our customer information. The way that they would’ve tried to poke and prod the infrastructure in the past, they would have to get a foothold internally, which is fairly trivial via phishing. I’m not saying that it would’ve been any better in the on-prem model; in fact, it’s probably worse. But what they would do in the past is they would sit there and do things that were fairly loud, like port scanning, or they would learn something by gleaning header information. And it was all very rudimentary. But now with cloud, if you know that an organization is utilizing, I don’t know, this particular CRM, if I’m a bad actor, all I have to do to start finding ways to potentially attack your instance, your tenant, is simply figure out what the name of the tenant is, which often is your company’s name. For example, let’s say it was martyscars.seancrm.com. That’s most commonly the naming scheme that’s used across all CSPs, where it’s your company plus their FQDN at the end. Well, if you know that, now all of a sudden you can start doing something very basic like, hey, let me see if I can figure out how I can log into these things. With that information in hand, you can then start tailoring very specific attacks. So if you want to do spear-phishing attacks against senior executives or research folks, you can leverage that knowledge to create highly customized, and very difficult to prohibit and prevent, delivery mechanisms that look completely legitimate.
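[Editor’s note: to make the tenant-naming recon Sean describes concrete, here is a minimal Python sketch – an editorial illustration, not part of the webinar – that checks which candidate tenant hostnames resolve. The domain and candidate names are the hypothetical examples from the discussion; note that many CSPs use wildcard DNS, in which case every name resolves and this naive check tells an attacker nothing.]

```python
import socket

# Hypothetical recon sketch based on the common <company>.<csp-domain>
# naming scheme described above.
CSP_DOMAIN = "seancrm.com"  # hypothetical CSP from Sean's example
CANDIDATES = ["martyscars", "martys-cars", "martyscarsinc"]

for name in CANDIDATES:
    host = f"{name}.{CSP_DOMAIN}"
    try:
        socket.gethostbyname(host)  # raises socket.gaierror if no DNS record
        print(f"[+] {host} resolves -- possibly a live tenant")
    except socket.gaierror:
        print(f"[-] {host} does not resolve")
```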
Because of the way that a lot of our technologies have worked, and the fact that our CSPs may or may not be quite where they need to be in terms of their ability to support and protect against these types of things, you end up stuck where now you have another vector through which your data can actually slip. Let me give an example of where this actually has occurred and continues to occur. Let’s say somebody wants to do a very specific spear-phishing campaign. One way that they will get around all of your controls is simply by leveraging the fact that, within our architectures, we are trusting the final destination of the CSP. Let’s say for a minute that you have a cloud service provider that you’re engaged with. They’re doing data storage for you. It might be user-level; it might be server-level. But what you end up with is your machines – I mean your devices or your mobile devices – do have a kind of trust between that CSP and what you’re doing. Usually, from a risk-management standpoint, organizations sign off and say, “Yeah, of course you can use that.” Well, the attackers know this, and what they do instead is, when they create their phishing campaigns, they leverage the fact that you are further trusting that CSP, so they will provide a link as part of the spear-phish that ties to the same cloud, but it’s not against your tenant. What ends up happening is, when a user gets phished, your SWGs (secure web gateways) or firewalls – all of that can’t do anything to prevent it, and now your user is exposed. Interestingly enough, one of the things that was identified is that you’re seeing attacks where end users are being compromised, and subsequently the larger part of the enterprise is being compromised, by a combination of drive-by infections, which have always been a thing and continue to be a thing – where users in a browser are accessing some site that’s got malware loaded, and they get infected. And then from there, the attackers start feeding you other payloads, leveraging the cloud infrastructures as a repository. Again, because they know that from a detection and control standpoint there’s little that can be done, you end up where not only is it difficult to identify the attack, but in addition, it’s really difficult, if not impossible in some cases, to pull that back once it’s occurred. With that, do we have any questions at the moment from the audience, Tara?

Tara Seals: Hi Sean. Yeah, we do actually have a couple of questions. If you want, we can maybe field those. We have a question about, I guess, who has a level of oversight over the cloud providers to make sure that they’re compliant, versus just managing risk, I think is what the person is asking. She says that she gets push-back that she is requiring more than what other customers are asking for. She’s being told that drive encryption is good enough to detect the bugs, but with multi-tenant it doesn’t ensure that data is protected from other customers, especially with shared keys and administrators or service accounts that can access all of them. She wants to know, can anyone measure the right level of controls and requirements within a cloud environment?

Sean Cordero: Yeah, that’s a great question. Jim, you want to take that one and I can tag along after?

Jim Reavis: Sure.
So, just kind of stepping back to who governs the cloud and who manages it: it tends to cut several different ways. There are sort of national types of standards – you look at something like FedRAMP for the United States, covering the federal government’s procurement of cloud, based on NIST standards. That’s something where you tend to see a lot of alignment even in the private sector. So you have that country-based thing. You have maybe industry-based things, like PCI for the payment-card industry, where they’ve tried to adapt some of that. And then you have regulatory bodies that try to use available standards. Because technology is moving so quickly, it’s hard to use standards that take years and years to develop. That’s where an organization like Cloud Security Alliance comes in, where we move pretty quickly with creating best practices, and we map them to a lot of different global standards that are out there. It ends up that an organization doing risk management in the cloud has to understand the applicable laws it needs to be dealing with. And then they look at sort of this hybrid approach – take something like the Cloud Security Alliance Cloud Controls Matrix and our STAR program, and how it maps to these different standards – to be able to understand what the different governing laws are, what the different standards are, and how to bring those all together in a risk-management program based on what you’re doing.

In terms of getting more specific to the question about how people have looked at the issues with encryption and the control over shared secrets: we’ve had this in our best practices for quite a while, that ideally the appropriate way to manage the data is that the user, the tenant, the owner – in EU parlance, you might say the data controller – should be managing the keys directly and encrypting the information. Ideally you get to a point where the cloud provider is a data processor: they’re managing the systems, but they’re not actually managing your data. That’s very easy to do in infrastructure-as-a-service. In software-as-a-service, that’s very difficult to do as it’s implemented today – the cloud providers, the SaaS providers, actually need to be able to manipulate the data to make sure it’s correctly backed up and everything else. We’re moving to a point where I believe we’re going to have the sort of hybrid, best-of-both-worlds model, where you bring your own key to do that. Because we don’t have this perfect world, it becomes very important to look at other indirect controls and say, for example: does this cloud provider have very good vetting of its employees, do they have security clearances, do they have proper training, do you have the proper audit trails, so that if someone does have physical access to information, do we know that it’s being governed properly? So you have to end up looking at a lot of those different things. We would again encourage you to look at their certifications, the audits that they’ve had, and whether they align with things like the CSA Cloud Controls Matrix and our STAR program.

Tara Seals: Okay, great. We do have one more question along the same lines.
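[Editor’s note: the “bring your own key” model Jim advocates can be sketched in a few lines of Python with the third-party cryptography package. This is a minimal illustration of the idea, with a hypothetical upload call, not a description of any particular provider’s key-management API.]

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The tenant (data controller) generates and keeps the key -- in practice in
# its own KMS/HSM -- so the provider only ever stores ciphertext and acts as
# a data processor, never a reader, of the data.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"quarterly payroll export")  # encrypted client-side
# upload_to_cloud(ciphertext)                        # hypothetical upload call

plaintext = f.decrypt(ciphertext)  # only possible with the customer-held key
assert plaintext == b"quarterly payroll export"
```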
This person would like to know if there are any independent reports out there that you guys are aware of on the security posture of available cloud providers – infrastructure providers like Amazon or Azure – with recommendations on who has the better security posture. And he would also like to know if any ethical hackers have tackled the question, I’m assuming in the sense of hunting for bugs.

Jim Reavis: I can take just a quick pass on this and give Sean a chance. Frankly, I wouldn’t trust some report that compared them in a Consumer Reports fashion and said one or the other is better, because it is so complex. And what we find is that 80 percent to 90 percent of the security responsibility remains with the customer. But I’ll say this: on an apples-to-apples basis, for what the major cloud providers, the tier-one cloud providers, do in terms of the scope of what they’re responsible for, they are far better than anyone in the world. Maybe there are a few banks and a few defense departments in different nations that are equivalent, but they do a far better job. That’s why cloud can be very secure. But most of the responsibility is on your side. So I would look at how they all answer the different compliance questionnaires, but you have to turn inward and say, “How am I using it? What are the different applications I’m going to be using?”, and then say, from a risk-management perspective, this is the right solution. But comparing an AWS to Azure to a Google Cloud Platform – if we’re talking about the big US-based ones – they are all an order of magnitude better than what any typical customer would be able to do on their own, apples to apples.

Tara Seals: Got it. Thank you. Okay. And let’s tackle one more before you get back to the presentation, if that’s okay. We had another question. This person wants to know: how can one ensure that the cloud provider is not commingling your data, and that it’s being deleted from backups and temporary or redundant copies are being eliminated as requested? Is there any way to kind of keep tabs on that?

Jim Reavis: Sean, you want to answer this one?

Sean Cordero: Yeah, I can take that one. The answer is: the honor system. And that’s kind of where we as an industry I think are at a crossroads. That’s like the other two questions, because to me these are all interrelated and hitting on the same problem. I’m going to jump ahead to one other slide here really quick, because this is what I was going to chat about. We’ll come back to the other piece, because it’s all interrelated to the last three questions, where Jim stated very clearly – and I 100 percent agree – that the majority of the responsibility in the shared-responsibility model still falls on the customer. What I think has been happening is there has been an over-focus, as a response, I think, to the ineffectiveness of being able to effectuate control over risk in the cloud. I.e., Jim mentioned bring your own key. Well, if we think about that, that’s such an obvious, necessary thing. But why is it so difficult for the CSPs to support it? Well, it’s because we’ve never coded it that way, and until the market – i.e., us as practitioners – demands that they enable these types of things, it’s always going to be whack-a-mole in terms of the controls that are necessary to really secure your data. One of the things that occurs in this situation is really this idea of understanding the controls and the gaps of the cloud service provider.
But I don't see it from the perspective of, hey, AWS doesn't do this one thing and thus we're going to move away from them because they don't meet such-and-such control. In some cases that's totally appropriate. In other cases it may not even be that meaningful, because the majority of the risk continues to reside with the consumer. A lot of us right now are probably standing up third-party risk management or cloud governance programs, and, no disrespect to anyone doing this, but having done it and seen the end result, I actually think it's something of a huge waste of time in the long term. The reason is that if you ultimately come down to a conversation with your CSP where you've identified a control deficiency that, for whatever reason, is critical to you conducting business, your only ways of effecting change are limited. First, if it's a bug they acknowledge, they can fix it. Second, if it's not a bug but a missing feature, you have to convince them to create and add that feature; it has to go into their CI/CD pipeline and be prioritized, just as with our own IT teams, except they're dealing with a much larger scale and they're going to be more risk-averse, because a change like that applies across the board and may have significant negative impacts for all of their clients. So they tend to slow-roll some of that. And third, you end up asking these questions over and over again: do you have this, do you have that? What they've done now, and a lot of this is great because the CSA has been leading this effort for some time, is give a contextualized view through reporting, via the CSA's Cloud Controls Matrix or the Consensus Assessments Initiative Questionnaire (CAIQ), or via STAR certification, which actually gives you a higher level of assurance that they are doing the right thing.

One of the questions asked was where you could get a sense of where the cloud providers stand. Jim made the good point that it's very difficult to assess them from the outside in, so I would say the best resource you will find is the Cloud Security Alliance's STAR registry. The STAR registry has the same types of questionnaires that our teams are all going out and asking our vendors for, pre-answered in many cases by the leading CSPs. That gives you an initial starting point to assess them. But then again, it comes back to: how much time will you spend as a risk practitioner assessing these CSPs over and over, when really your only levers are going to be during negotiation, i.e. the contract; after an issue related to a bug or a failure of service, when you might have a little leverage; and third, the threat of a lawsuit or of walking away. Those are not very good options for us as security practitioners trying to get a capability added. So if you really think about it, it all comes back down to either a ticket that gets submitted, or a contractual/legal conversation that we as practitioners may or may not be part of, because that's potentially headed to litigation if something bad happens because of it.
That's why I say, broadly, that we spend so much time trying to assess how well they do it. Meanwhile, I've seen organizations that literally have a staff of three or four people doing this full-time across all their vendors, with an outsized focus on the CSPs, while they're not even looking at their own procurement process to understand how somebody in development is spinning up a $10,000-a-month AWS instance. How is it getting expensed? Why does that happen? These are the things that actually cause the greater exposure and the greater risk, as opposed to focusing only on what the CSP is or isn't doing, specifically in a context where you really can't change it, at least not directly. Perhaps the very largest global companies have the ability to pull those levers, but then you're talking about potentially hundreds and hundreds of millions of dollars of spend, at which point a CSP perks up its ears and says, "Okay, let's talk about this." I don't know about you, but at least for me, we don't have that kind of budget to spend with a CSP, so we end up having to go down the contractual route.

Now, there was another question about how to ensure that data isn't being commingled. What I would suggest is to have your technical teams, your architecture teams, work with the provider. Often the way CSPs engage with you, they'll have a salesperson and a pre-sales engineer of some kind, fairly typical in our industry, but there's almost always an additional level of engineering knowledge beyond the in-field teams. I would suggest engaging your architects who understand cloud architecture; hopefully you have those, and if not, it's a good opportunity to engage and learn a couple of things, and really have them help you decompose and understand how the provider does things. What ends up happening, because the cloud is effectively a black box in most cases, is that we say, "Oh, that can't be good, so we shouldn't do it," when it may turn out to be actually quite good; perhaps we just haven't asked the right questions. So to find out whether your data is being commingled: if a CSP seller, somebody selling you Amazon, and his or her engineer tell you, "Oh, of course we don't do that," they may be giving you a legitimate and correct answer. But I don't know about you, that wouldn't be good enough for me. I would want to know more, just so I would feel better about it. And that's why I think it requires partnership and deep engagement with the CSP.
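Returning to Sean's earlier point about shadow procurement, one practical first step is simply making cloud spend visible to the security and risk teams. The sketch below is a hedged illustration using boto3's Cost Explorer API; the date range, the $10,000 threshold and the idea of flagging per-service monthly spend are assumptions made for the example, not advice from the speakers.

```python
# Minimal sketch: surface per-service AWS spend for a prior month so that
# surprise workloads (e.g., an expensive instance quietly expensed by a
# dev team) show up somewhere. Assumes boto3 and Cost Explorer access.
import boto3

THRESHOLD_USD = 10_000.0  # illustrative alert threshold

ce = boto3.client("ce")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-04-01", "End": "2019-05-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount >= THRESHOLD_USD:
        print(f"Review: {service} cost ${amount:,.2f} last month")
```

A report like this does not replace a procurement process, but it gives the governance team a cheap tripwire for spend nobody approved.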
Tara Seals: Got it. Okay, great. Thank you. Okay, we're about 10 minutes away from reaching our time limit here. So did you want to quickly run through the rest of your slides, and then maybe we can field one or two more questions before we wrap?

Sean Cordero: Sure. Jim, did you want to speak a little bit about the cloud security focus? I know we jumped ahead a little bit there, but I think it was a good discussion.

Jim Reavis: Yeah, that's not a problem. What I want the audience to understand from this slide is how you should strategically view your responsibilities in securing your organization. The National Institute of Standards and Technology (NIST) came up with a cloud definition years ago. In the layered model you'll see on the left, CSA took the NIST definition of cloud (infrastructure-, platform- and software-as-a-service, plus the different deployment models) and visualized it as a layered model: software-as-a-service resides on top of platform, which resides on top of infrastructure-as-a-service. What that should mean to you is that most of the applications and providers you're dealing with actually exist as a mash-up of several different companies and services; it's important to understand that. The idea behind the inverted pyramid is that in software-as-a-service you're going to have a large number, thousands, of SaaS applications residing on just a small number of infrastructure providers; we've already mentioned a few of who those are. You might also be developing your own applications if you're engaging directly with some of those major cloud providers. For vetting, procurement and management, that means you'll have a large number of SaaS applications and only a small amount of time to vet each one's suitability and security practices, and a small number of infrastructure providers for which the responsibility is yours: because those are pretty open platforms, it's your job to actually implement the security controls when you use them.

So what you should take from the shared-responsibility picture on the top right is this: if you are engaging directly with infrastructure-as-a-service, the raw compute, the virtualization, the containerization, it's mostly on the consumer, the tenant, the data controller to implement the security program, and as I've said that's around 80 percent of the security controls. If it's software-as-a-service, a fully baked business application, it's mostly the provider implementing the controls, and your job becomes more of an audit role. So: implement technical security if it's infrastructure, and do the audit and vendor-procurement work if it's SaaS. There are a few exceptions. You want a very strong identity-management infrastructure, and there may be some things you can do to encrypt information before it goes into a SaaS provider. But essentially that's the big thing: understand the layers, understand it's a mash-up, understand the shared responsibility in those different areas, and then use the resources inside your security program appropriately, to very quickly do assessment and triage on the SaaS applications, and to be very careful and implement strong technical controls on the infrastructure side. To do all this, it's really about thinking very virtually about the world, and understanding that your information and this technology can exist in a lot of different dimensions and planes. So think very virtually. I'll turn it back to Sean now.

Sean Cordero: Thanks, Jim. With the context Jim's provided, one of the things to think about when we talk about data security in the cloud: I think we've beaten to death a little bit the concept that, yes, it's important to engage with the CSP and to force those conversations. But again, Jim stated it best: the majority of this still falls on us as internal practitioners, irrespective of where the data resides and on which CSP.
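As a toy illustration of the split Jim describes, the snippet below encodes the rule of thumb of technical controls for IaaS versus audit and vetting for SaaS. The percentages are rough figures taken from this discussion (the PaaS value is purely an assumption for the example), not any formal CSA matrix.

```python
# Toy illustration of the shared-responsibility split described above.
# The consumer-share figures are rough numbers from the discussion
# (IaaS ~80%) plus an assumed midpoint for PaaS; not a standard.
RESPONSIBILITY = {
    "IaaS": ("consumer implements most technical controls", 0.80),
    "PaaS": ("split between provider and consumer", 0.50),  # assumption
    "SaaS": ("provider implements; consumer audits and vets", 0.20),
}

def consumer_focus(model: str) -> str:
    """Suggest where the consumer's security effort should go."""
    duty, share = RESPONSIBILITY[model]
    style = "technical controls" if share >= 0.5 else "audit and vendor vetting"
    return f"{model}: {duty} (~{share:.0%} consumer-owned) -> focus on {style}"

for m in ("IaaS", "PaaS", "SaaS"):
    print(consumer_focus(m))
```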
Some of the things to consider as we evolve our skill sets as risk management and cyber professionals: the focus is going to shift away from the traditional "let's run the scan," "what's the status on that open compliance issue," "let's get the IRR stuff up and running." It all has to change. Even something as simple as vulnerability management in a cloud context starts looking more like a different version of vendor management and quality management, to ensure that if something is found, you're able to hold the provider accountable, should it unfortunately ever lead to a negative outcome for your organization. One of the key things Jim was also talking about is the scope of this assurance: it's really critical, when you're looking at the data elements across SaaS and IaaS, that it's well understood where and how the roles and responsibilities really start and end. This is where regulations like GDPR come in, and if you're not familiar with the Cloud Security Alliance's Code of Conduct, a document created by a global team in conjunction with a lot of the leaders in the privacy space, that document in and of itself gives excellent guidance on a lot of this. One of the challenges is that even if you're able to tick the boxes from a controls perspective with a CSP, there's still a lot, in terms of data flow and data ownership and who can or cannot access it, that falls on us as the practitioners who carry responsibility. But understanding this, I think, provides some of the levers that can then be pulled to effect change at these large CSPs, because the way it's done now isn't terribly effective or efficient. Jim, is there anything else here you'd like to touch on?

Jim Reavis: No, I think you covered it pretty well.

Sean Cordero: It's the same thing that was being discussed before: what's really critical is understanding each of these components and knowing where the responsibility resides. And this is where it gets really complicated, when you start talking about data and metadata, which traditionally would fall in the realm of the CSP, and you're not confident the data is being handled appropriately, or they haven't disclosed that the data is being piped elsewhere. I knew of an organization that found out the cloud service they were consuming was actually, not dissimilar from what Jim said earlier, cobbled together from two or three different cloud services, with a front end built on top. That was only disclosed in the contracting phase, where the contract stated, "By the way, we have three other CSPs that are part of our service, but don't worry, we've got it handled." I don't know how comfortable everyone would be with that, given it wasn't mentioned from the get-go. And there's a very well-known company, famous for making very high-end cellphones, with a consumer cloud service used for storing photos and the like; some years ago it was determined that its back end, even though from our point of view it looks like it's all theirs, is actually hosted on Google. So we're talking Fortune 500 companies, and even there that's happening. It's not that it was done in any way maliciously.
It's simply the delivery modality that lets them provide the high-quality service they want. But unless we as practitioners understand that, or ask those questions, don't expect anyone to disclose it from the get-go. This comes back to the concept of context, which is really important. Consider the standards that exist: PCI has been one of the least effective standards from a cloud perspective. If you look at the standard itself, the first three controls are talking about an on-prem architecture that is not really relevant in the cloud modality. What that means is that if you try to take your controls and standards and simply apply them over to the cloud, it's going to fail miserably, because the way everything gets done is completely different, and those standards don't have the context to account for it. Even ISO 27001, which is still a great international standard, had to spawn offshoot standards, for example ISO 27017, to address the deficiencies its authors knew were in the standard, because it was never intended to address cloud computing; cloud didn't exist back then. That's why the CCM is such a critical component of this tapestry of risk management across the industry: it was first to market and is still the leading standard and framework for getting levels of measurement across your CSPs, and also for looking inward and asking the hard questions.

Then the last two things I'll leave you with, back to my comment that so much focus on what the CSP is or isn't doing leaves a massive gap in understanding how you apply and work within that model. Let's say you do have a data breach. If you had a data breach associated with a misconfigured S3 bucket, would you, as a practitioner and a stakeholder in this process, know what to do? My finding, speaking empirically from my own experience, is that most organizations are not ready for that. The reason is that the detection process is totally different. SaaS- and IaaS-based deployments usually reside within a line of business, so they may not feed back into a system you have access to; on top of which, you may not even have access to pull the information that's necessary. Worse, in many cases, specifically with IaaS, where you have, say, EC2-based Windows deployments running in the cloud for whatever reason, if one gets infected or you have a breach, how will you handle the forensics? What's the process? A lot of organizations are getting caught unprepared because they thought, "I'm just going to take my imaging software or whatever else I've got, image the system, and inspect it offline the way I did in the '90s and for the last 20 years." It doesn't work that way. Even basic things like forensically imaging a system and pulling it down without any loss of integrity are very difficult to do. So in terms of your data protection, it's really critical that we as practitioners look at this.
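As one concrete version of the misconfigured-S3-bucket scenario Sean raises, here is a minimal boto3 sketch that flags buckets granting access to public groups. It is an illustrative check under assumptions, not a complete audit: it inspects only ACL grants, not bucket policies or account-level Public Access Block settings, and it assumes boto3 with AWS credentials already configured.

```python
# Minimal sketch: flag S3 buckets whose ACLs grant access to the
# AllUsers or AuthenticatedUsers groups. ACL grants are only one of
# several possible exposure paths; bucket policies are not checked.
import boto3
from botocore.exceptions import ClientError

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        acl = s3.get_bucket_acl(Bucket=name)
    except ClientError:
        continue  # no permission to read this bucket's ACL
    grants = {
        g["Grantee"].get("URI")
        for g in acl["Grants"]
        if g["Grantee"]["Type"] == "Group"
    }
    if grants & PUBLIC_GRANTEES:
        print(f"WARNING: {name} grants access to a public group")
```

Even a crude tripwire like this addresses the detection gap Sean describes: the exposure is found by the owner rather than by whoever stumbles across the bucket.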
One of the key inputs for this is the Cloud Security Alliance's Top Threats, a great document put together by some of the top minds in the cloud security and risk management space. It calls out a lot of the things we already know as practitioners but find difficult to articulate, because they're tangled up with everything else. It's a great resource, backed by data as well, if you haven't familiarized yourself with it. Jim, was there anything on the Top Threats you'd like to point the team to?

Jim Reavis: No, I think we might be into overtime; Tara might give us the hook. But it's a great document. Go check it out. There are a lot of free resources at CSA.

Tara Seals: Great. And yeah, guys, I think we're going to have to leave it there, unfortunately. This is such an interesting topic, we could probably continue talking about it for a very long time, but I'd like to thank you both very much for participating today. I'd also like to thank our audience members for joining us. I'm sorry we couldn't get to all of the questions, but if you want to reach out to me, the email address is there; I can try to get any and all additional questions answered by these guys, or at least point you to appropriate resources. I'm here to help. Thanks very much, everybody, for joining us for our latest Threatpost webinar, and thank you very much, Sean and Jim.

Jim Reavis: Thank you.

Sean Cordero: Thank you.

Jim Reavis: Thanks, everyone. Bye-bye.


By Uzair Amir. The stolen OGUsers database is available on RaidForums for download. On May 12, hackers managed to steal the database of OGUsers, a well-known forum used by hackers and online account hijackers, meaning the hijackers have now been given a taste of their own medicine. The database contained around […] This is a post from HackRead.com. Read the original post: Hackers hacked: Account hijacking forum OGUsers pwned.


Cisco has issued a handful of firmware patches for a high-severity vulnerability in its proprietary Secure Boot implementation that impacts millions of its hardware devices, across the scope of its portfolio. The patches are the first in a planned series of firmware updates that will roll out in waves from now through the fall; some products will remain unpatched and vulnerable through November. Secure Boot is the vendor's trusted hardware root-of-trust, implemented in a wide range of Cisco products used in enterprise, military and government networks, including routers, switches and firewalls. The bug (CVE-2019-1649), disclosed last week, exists in the logic that handles access control to one of the hardware components. The vulnerability could allow an authenticated, local attacker to write a modified firmware image to that component. A successful exploit could either cause the device to become unusable (and require a hardware replacement) or allow tampering with the Secure Boot verification process, according to Cisco's advisory. "The vulnerability is due to an improper check on the area of code that manages on-premise updates to a Field Programmable Gate Array (FPGA) part of the Secure Boot hardware implementation," the networking giant explained. Dozens of Cisco products are affected (the full list is here). In its updated advisory, the vendor issued fixes for its network and content security devices, as well as some products in the routing-gear segment: the Cisco 3000 Series Industrial Security Appliances, Cisco Catalyst 9300 Series Switches, Cisco ASR 1001-HX and 1002-HX Routers, Cisco Catalyst 9500 Series High-Performance Switches, and Cisco Catalyst 9800-40 and 9800-80 Wireless Controllers all now have updates. Other routing and switching gear won't get patches until July and August, with some products slated for even later fixes in October and November; voice and video devices will get fixes in September. The good news is that an attacker would need to be local and already have access to the device's OS, with elevated privileges, in order to exploit the issue. An attacker would also need to "develop or have access to a platform-specific exploit," Cisco noted. "An attacker attempting to exploit this vulnerability across multiple affected platforms would need to research each one of those platforms and then develop a platform-specific exploit. Although the research process could be reused across different platforms, an exploit developed for a given hardware platform is unlikely to work on a different hardware platform." Also this week, Cisco issued an updated advisory for a medium-severity Cisco FXOS and NX-OS software command-injection vulnerability (CVE-2019-1780), with updates for the Nexus 3000 Series and Nexus 9000 Series Switches.
