Hospitality giant Choice Hotels fell victim to hackers this week, thanks to a MongoDB database containing 700,000 customer records that was left open to the internet. The situation highlights supply-chain data-security risk, given that the data was being held by a third-party vendor, and it is a reminder that shared responsibility should be top of mind.

The attackers left a note in the database file claiming that they had downloaded the database to their own servers; they also demanded 0.4 Bitcoin, or around $3,800, as a ransom. However, they didn’t actually lock up the data, making the ransom demand moot.

Bob Diachenko, who discovered the database along with researchers at Comparitech, said that he thinks the note was left by an automated script targeting publicly accessible MongoDB databases. He added that the script was probably written to wipe or crypto-lock the databases when found, but it failed.

Diachenko immediately notified the company of the exposed database, which was hosted on the vendor’s server. It held 5.6 million records in total – but only about 700,000 of the records contained guest info (consisting of names, email addresses and phone numbers). Other fields containing passwords, reservation details and payment information held only test data, according to Choice Hotels.

“We have discussed this matter with the vendor and will not be working with them in the future,” the chain told Comparitech. “We are evaluating other vendor relationships and working to put additional controls in place to prevent any future occurrences of this nature. We are also establishing a Responsible Disclosure Program, and we welcome Mr. Diachenko’s assistance in helping us identify any gaps.”

In total, the passwordless database was left exposed for four days. Customers, meanwhile, will remain at risk of phishing or worse, according to Justin Fox, director of DevOps engineering for NuData Security. “The stolen data will be tied to other pilfered data to build full personas used for identity theft or fraudulent account creation,” he said in an emailed comment.

The incident is notable in that it highlights the ongoing problem of unsecured cloud databases, and because it affected a large company (Choice Hotels franchises 7,000 properties in 41 countries, under brands like Comfort Inn, MainStay Suites, Econo Lodge and Clarion). But it’s also a good illustration of the growing supply-chain risk that companies face, according to researchers.

“Who carries the brunt of such breaches – the third party that was hacked or the company that relied on the third party?” said Elad Shapira, vice president of research at Panorays, via email. “Past attacks have shown that while the third party suffers from associated breach costs, the company that uses the third party is greatly impacted as well, from brand damage to actual loss of revenue.”

And indeed, when it comes to private information, the company could be in breach of privacy regulations and may suffer a loss of customer confidence. The stakes are too high for there not to be a conversation with those one entrusts data to about where the responsibility lies, Shapira added.

“With the breach at Choice Hotels, it’s the hotel guests who made these reservations, and they place the responsibility on the hotels,” Shapira said. “Companies need to be aware that outsourcing a business unit to a third party does not relieve them also from the security burden. They need to ensure that their partner has the right level of security before engaging with them, and if already engaged with them, to demand a minimum security standard.”
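The root cause here is a familiar one: a MongoDB instance reachable from the internet with no authentication required. The snippet below is a minimal sketch of the kind of check a defender might run against hosts they own, using the pymongo driver; the hostname is a placeholder, not the vendor’s actual server.

```python
# Minimal sketch: test whether a MongoDB instance accepts unauthenticated
# connections and enumerate what it exposes. Run only against hosts you own.
# Assumes the pymongo driver; the host below is a hypothetical placeholder.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

HOST = "db.example.com"  # placeholder vendor host

def check_exposure(host: str, port: int = 27017) -> None:
    try:
        client = MongoClient(host, port, serverSelectionTimeoutMS=5000)
        names = client.list_database_names()  # fails if auth is enforced
    except PyMongoError as err:
        print(f"{host}:{port} refused the connection or requires auth: {err}")
        return
    print(f"{host}:{port} is open without credentials!")
    for name in names:
        db = client[name]
        for coll in db.list_collection_names():
            print(f"  {name}.{coll}: ~{db[coll].estimated_document_count()} docs")

if __name__ == "__main__":
    check_exposure(HOST)
```

The remediation is equally unglamorous: bind the database to internal interfaces only and enable authentication before any data is loaded.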

Source

Clickjacking, where links on a website redirect unknowing users to spam, advertising or malware, has been around for decades. However, new tactics that defy the best mitigation efforts of browsers have led to it affecting millions of internet users browsing the web’s top sites, researchers found in a new study.

In crawling data from the Alexa top 250,000 websites, researchers discovered 437 third-party scripts that intercepted user clicks on 613 websites – which in total receive around 43 million visits on a daily basis. Making matters worse, click-interception links are using new techniques – such as making the links larger – that make them harder to avoid.

“We further revealed that many third-party scripts intercept user clicks for monetization via committing ad click fraud,” researchers said. “In addition, we demonstrated that click interception can lead victim users to malicious contents. Our research sheds light on an emerging client-side threat, and highlights the need to restrict the privilege of third-party JavaScript code.”

The researchers – collaborators from the Chinese University of Hong Kong, Microsoft Research, Seoul National University and Pennsylvania State University – published their findings in a paper, “All Your Clicks Belong to Me: Investigating Click Interception on the Web,” which they are discussing Thursday at the USENIX Security conference.

Emerging Click Interception Tactics

The practice of clickjacking (a.k.a. click interception) – and discussions about how to stop it – have been ongoing for years. Websites that are impacted by clickjacking have third-party scripts inserted into them. These scripts present what looks like an innocent link (such as a Facebook button) – but secretly embed code for a different application in an iframe tag or other component. So, when a victim clicks on the link, they are “hijacked” (hence the name) and brought to a malicious or spam page.

After developing a Chromium browser-based analysis framework, which they dubbed “Observer,” the researchers were able to collect and analyze click-related behaviors for the Alexa top 250,000 websites. The clickjacking observed was used to send victims to malicious pages, such as fake anti-virus (AV) software and drive-by download pages; but researchers said that it is also being used for monetization, such as ad fraud or spam for scams.

In addition to classic link hijacking as described above, bad actors have now also turned to visual deception to intercept user clicks, including links posing as website banners or download buttons. Bad actors are also relying on new tricks to better lure users into clicking on their links. For instance, researchers detected 86 third-party scripts using huge hyperlinks that stick out on the page and send users to an online gambling site when clicked; the bigger font gives the links a higher chance of being clicked, researchers said. Other third-party scripts would selectively intercept user clicks to avoid detection, essentially limiting the rate at which they intercept the clicks.

“Although third-party scripts can deceive a user with different tricks, the effectiveness can vary dramatically depending on their implementation and the end user’s technical background,” said researchers.

Website Collusion

Researchers also found that attackers are using clickjacking to send victims to an advertisement in order to fabricate realistic ad clicks – and collect a commission when a user clicks the advertisement. “Instead of relying on click bots, attackers recently started to intercept and redirect clicks or page visits from real users to fabricate realistic ad clicks,” researchers said.

Interestingly, while many third-party scripts modified first-party hyperlinks to intercept user clicks, researchers also discovered that some websites collude with third-party scripts to hijack user clicks for monetization. The research found that more than 36 percent of the 3,251 unique click-interception URLs were related to online advertising. “Clicks are also critical in one pervasive application—online display advertising, which powers billions of websites on the internet,” researchers said. “The publisher websites earn a commission when a user clicks an advertisement they embed from an online advertising network (ad network in short).”

Mitigations?

For years, browsers have worked to curb click interception – but those efforts clearly aren’t enough, researchers said. Just last week, Facebook announced that it is filing lawsuits against two app developers who used click-injection techniques to abuse its advertising platform; the lawsuit is one of the first of its kind against this practice, said Facebook. Browsers such as Chrome, meanwhile, have packed in mitigations for automatic redirection since 2017. In addition, systems like EvilSeed or Revolver have been developed to detect malicious web pages using content or code similarities.

However, several of these mitigation tactics do not address newer clickjacking tricks and techniques. For instance, “Chrome still cannot detect and prevent other possible ways to intercept user clicks, including but not limited to links modified by third-party scripts, third-party contents disguised as first-party contents, and transparent overlays,” researchers said.

For their part, the researchers advised that websites could put a “warning” signal in the status bar when a user hovers their mouse over a link, showing that the link involves third-party script. In addition, browsers could enforce integrity policies for hyperlinks that specify that third parties cannot modify first-party hyperlinks. “For example, an integrity policy can specify that all first-party hyperlinks shall not be modifiable by third-party JavaScript code. One may further specify that third-party scripts are not allowed to control frame navigations, although listening for user click is still permitted. Enforcing all such policies would effectively prevent click-interception by hyperlinks and event handlers,” they said. The researchers said that they plan to develop and evaluate such an integrity-protection mechanism in future work.
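The paper’s proposed hyperlink-integrity policies don’t exist in browsers yet, but site operators can already narrow the privilege of third-party JavaScript with Content-Security-Policy headers. The sketch below shows the idea using Flask purely as an example server; the allowed CDN origin is a placeholder, and this is a standard baseline control rather than the researchers’ mechanism.

```python
# Minimal sketch (not the paper's hyperlink-integrity proposal): limit which
# third-party scripts may load, and refuse to be framed, via CSP headers.
# Flask is used only as an illustrative server; cdn.example.com is a placeholder.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_security_headers(response):
    # Only allow scripts from our own origin and an explicitly trusted CDN;
    # block all framing of our pages (classic clickjacking defense).
    response.headers["Content-Security-Policy"] = (
        "script-src 'self' https://cdn.example.com; frame-ancestors 'none'"
    )
    response.headers["X-Frame-Options"] = "DENY"  # legacy fallback
    return response

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    app.run()
```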

Source

Researchers discovered the personal and biometric data of more than a million people left publicly exposed on a database owned by Suprema, a biometric security company. The data includes facial-recognition and fingerprint information collected by the UK Metropolitan Police, small local businesses and governments globally.

Suprema touts biometrics software called BioStar 2 that uses facial recognition and fingerprinting technology to help company administrators control access to facilities. BioStar 2 is utilized by almost 6,000 organizations – including multinational businesses, governments, banks and the UK Metropolitan Police.

Researchers said that earlier in August, they discovered a publicly accessible Elasticsearch database totaling a hefty 23 gigabytes of data “of a highly sensitive nature.” That includes fingerprints of over one million people whose biometrics have been collected by various customers utilizing BioStar 2.

“This is a huge leak that endangers both the businesses and organizations involved, as well as their employees,” said researchers with vpnMentor in a Wednesday analysis. “Our team was able to access over 1 million fingerprint records, as well as facial recognition information. Combined with the personal details, usernames, and passwords, the potential for criminal activity and fraud is massive.”

In addition to fingerprint and facial-recognition records (including images of users), also exposed was personal data of employees, such as unencrypted usernames and passwords. Within the 27.8 million records found unprotected in the database, researchers were able to seamlessly view sensitive data like employee home addresses and emails, employee records and security levels, and more. If bad actors were able to get their hands on this information, they could access user accounts and permissions to facilities that use BioStar 2 software, researchers warned.

Furthermore, in this specific incident, the fact that biometric data was stored plainly and not in hashed form “raises some serious concerns and is unacceptable,” Kelvin Murray, senior threat research analyst for Webroot, said in an email. “Biometrics deserve greater privacy protections than traditional credentials, they’re part of you and there’s no resetting a fingerprint or face,” he said. “Once fingerprint and facial recognition data is leaked or stolen, the victim can never undo this breach of privacy. The property that makes biometrics so effective as a means of identification is also its greatest weakness.”

The data was primarily collected from the 6,000-plus organizations that utilize BioStar 2. That includes several U.S.-based businesses, such as Union Member House, Lits Link, Phoenix Medical, and more. Also impacted was data collected by the UK Metropolitan Police. It is unclear whether or not the data has been accessed by third parties other than vpnMentor; however, the database was secured eight days after it was discovered.

Biometrics Woes

The incident heightens concerns around biometric security and privacy. Biometrics such as facial recognition are already actively used by police forces and even at the White House. And it’s not just the U.S.; biometrics are spreading worldwide. The EU in April approved a massive biometrics database that combines data from law enforcement, border patrol and more for both EU and non-EU citizens.

While facial recognition has its advantages – including more efficient, faster identification – the explosion of real-world biometrics applications has privacy experts worried about deep-rooted privacy and security concerns.

“This is… in my opinion, worse than any of the other recent mega breaches, as it could directly relate to potential cyber terrorism attacks based on the data that has been compromised,” Matt Rose, global director for application security strategy at Checkmarx, said in an email. “The incident shines a light on organizations needing to take proper due diligence steps with the security companies they trust and contract to protect them, and more importantly, their customers’ most sensitive data.”

It’s also not the first recent biometrics security incident. In June, U.S. Customs and Border Protection said that a recent data breach exposed photos of the faces and license plates for more than 100,000 travelers driving in and out of the country.

One of the biggest risks with biometric security, vpnMentor researchers said, is that “facial recognition and fingerprint information cannot be changed.” “Once they are stolen, it can’t be undone,” they said. “The unsecured manner in which BioStar 2 stores this information is worrying, considering its importance, and the fact that BioStar 2 is built by a security company.”

Disclosure Issues

Researchers first discovered the publicly accessible database on Aug. 5, and contacted the vendor on Aug. 7. The database was closed on Aug. 13; however, researchers said that the disclosure process was messy and that Suprema was generally uncooperative throughout. “Our team made numerous attempts to contact the company over email, to no avail,” researchers said. “Eventually, we decided to reach out to BioStar 2’s offices by phone. Again, the company was largely unresponsive.”

Suprema did not respond to a request for comment from Threatpost.
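The experts’ criticism of templates stored “plainly” points at a baseline control. Biometric templates can’t simply be salted and hashed like passwords, because matching is fuzzy rather than exact, so one common fallback is encryption at rest with a key kept outside the database. The sketch below illustrates that idea with the cryptography library’s Fernet; it is not Suprema’s implementation, and the key-handling and in-memory “database” are placeholders.

```python
# Minimal sketch, not BioStar 2's design: encrypt biometric template blobs at
# rest with a key held outside the data store (here, an environment variable).
import os
from cryptography.fernet import Fernet

def get_cipher() -> Fernet:
    key = os.environ["TEMPLATE_KEY"]  # e.g. generated once with Fernet.generate_key()
    return Fernet(key)

def store_template(user_id: str, template: bytes, db: dict) -> None:
    db[user_id] = get_cipher().encrypt(template)  # only ciphertext hits the DB

def load_template(user_id: str, db: dict) -> bytes:
    return get_cipher().decrypt(db[user_id])

if __name__ == "__main__":
    os.environ.setdefault("TEMPLATE_KEY", Fernet.generate_key().decode())
    db: dict = {}
    store_template("alice", b"\x01\x02\x03-fake-fingerprint-template", db)
    assert load_template("alice", db) == b"\x01\x02\x03-fake-fingerprint-template"
```

Encryption at rest does not make a stolen template revocable, which is the experts’ larger point, but it does mean an exposed index no longer hands over raw biometrics and plaintext credentials together.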

Source

Dozens of Lenovo’s flagship ThinkPad models are vulnerable to bugs ranging in severity from low to high. Two of the flaws are tied to industry-wide security bulletins, while a medium-severity flaw affects only Lenovo laptops but remains unpatched.

The most severe of the three bugs is a high-severity Bluetooth vulnerability (CVE-2019-9506) disclosed on Tuesday by Microsoft as part of its August security patch roundup. The flaw is described as an “encryption key negotiation of Bluetooth vulnerability” that could allow a nearby attacker to perform an information-disclosure or an escalation-of-privileges attack, according to a U.S. Computer Emergency Readiness Team (US-CERT) description. The flaw is tied to the way the short-range Bluetooth radio technology encrypts its end-to-end communications for security and privacy.

“An unauthenticated, adjacent attacker can force two Bluetooth devices to use as low as 1 byte of entropy. This would make it easier for an attacker to brute force as it reduces the total number of possible keys to try, and would give them the ability to decrypt all of the traffic between the devices during that session,” according to a CERT bulletin.

On Tuesday, the computer-maker also revealed a medium-severity Lenovo-specific bug (CVE-2019-6171) that creates conditions ripe for a privilege-escalation attack. Generically, an escalation-of-privileges (EoP) attack allows an adversary to exploit a software bug to gain elevated access to computer resources that are otherwise protected from an application or user. This type of access could allow an adversary to reach restricted data, change configuration settings, plant malware or essentially take control of a targeted system.

“A vulnerability was reported in older ThinkPad systems that could allow a user with administrative privileges or physical access the ability to update the embedded controller with unsigned firmware,” Lenovo said of the bug, which affects ThinkPads sold within the 2015-to-2016 timeframe (including ThinkPad Yoga, ThinkPad A series, ThinkPad E series and ThinkPad X series). Lenovo has not yet issued a patch for this vulnerability; however, it is targeting Sept. 20 as the release date for a fix. Mitigation will include updating the BIOS of affected systems.

Lenovo is also warning of an industry-wide, low-risk vulnerability (CVE-2019-0128) in Intel chipset device software. “A potential security vulnerability in the Intel Chipset Device Software (INF Update Utility) may allow escalation of privilege,” wrote Lenovo. Lenovo products impacted include models of its business-class ThinkServers and a small number of ThinkPad laptops. “Improper permissions in the installer for Intel Chipset Device Software (INF Update Utility) before version 10.1.1.45 may allow an authenticated user to escalate privilege via local access,” Intel wrote in its bulletin posted earlier this year.
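The “1 byte of entropy” figure in the CERT bulletin translates directly into brute-force effort. The back-of-the-envelope Python sketch below assumes the Bluetooth BR/EDR maximum key size of 16 bytes (128 bits) as the comparison point; the guess rate is an arbitrary illustration, not a measured figure.

```python
# Arithmetic behind the key-negotiation downgrade (CVE-2019-9506): forcing the
# negotiated encryption key down to 1 byte of entropy shrinks the keyspace from
# 2**128 to 2**8, which is trivially brute-forceable.
full_keyspace = 2 ** 128        # nominal maximum 16-byte key
downgraded_keyspace = 2 ** 8    # 1 byte of entropy, per the CERT bulletin

print(f"full keyspace:       {full_keyspace:.3e} keys")
print(f"downgraded keyspace: {downgraded_keyspace} keys")

# Even at a very modest 1,000 trial decryptions per second (illustrative only),
# the downgraded key falls in well under a second.
print(f"worst-case time at 1k guesses/s: {downgraded_keyspace / 1000:.3f} s")
```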

Source

A 20-year-old vulnerability present in all versions of Microsoft Windows could allow a non-privileged user to run code that will give him or her full SYSTEM privileges on a target machine. The bug is notable because of where it resides: in a legacy, omnipresent protocol named Microsoft CTF.

First reported by Tavis Ormandy at Google Project Zero, the bug (CVE-2019-1162) is tracked by Microsoft as an ALPC flaw with a severity level of “important.” Ormandy responsibly reported his findings to Microsoft in mid-May, and he released the details to the public this week, prior to the software giant’s Patch Tuesday update, after Microsoft failed to address the issue within 90 days of being notified. The bug does have a patch now, as of late afternoon Tuesday.

CTF is problematic because it communicates with other Windows services without proper authentication. “The issue is with an obscure piece of functionality called CTF which is part of the Windows Text Services Framework,” explained Richard Gold, head of security engineering at Digital Shadows, speaking to Threatpost. “Programs running on a Windows machine connect to this CTF service, which manages things like input methods, keyboard layouts, text processing, etc.” As such, it also can be used as a bridge between different windows on a desktop.

In a blog post on Tuesday, Ormandy noted, “You might have noticed the ‘ctfmon’ service in Task Manager. It is responsible for notifying applications about changes in keyboard layout or input methods. The kernel forces applications to connect to the ctfmon service when they start, and then exchange messages with other clients and receive notifications from the service.”

In cross-application communication, an authentication mechanism would ordinarily ensure that privileged processes are isolated from unprivileged processes. However, due to a lack of authentication in CTF, an unprivileged program running in one window can use it to connect to a high-privileged program in another, spawning high-privileged processes.

“These various windows can run with different privilege levels, and there should exist some boundaries between the levels,” explained Dustin Childs, manager with Trend Micro’s ZDI, in an email to Threatpost. “Tavis found a way to communicate between various permissions levels through the CTF protocol, which has existed in Windows for some time.”

From a technical perspective, the flaw is being exploited via the Input Method Editor (IME), according to Todd Schell, senior product manager of security for Ivanti. “When you log into a system using one of the Asian languages, you are set up by the IME with an input profile with enhanced capabilities,” he explained. “This is pretty severe because it bypasses the User Interface Privilege Isolation (UIPI) features of the OS.”

Implications

When it comes to what an attacker could do in a real-world setting, “there is no access control in CTF, so you could connect to another user’s active session and take over any application, or wait for an administrator to login and compromise their session,” Ormandy explained. Possible attacks, according to Chris Morales, head of security analytics at Vectra, include sending commands to an elevated command window, reading passwords out of dialogs or escaping app container sandboxes by sending data to an uncontained app. It could also be used by malware if chained with another vulnerability.

“This vulnerability is especially dangerous in domain networks where elevating privileges might allow an attacker to acquire control of accounts privileged on other machines that are logged on the machine, move laterally and possibly compromise the entire domain,” added Roman Blachman, CTO and co-founder at Preempt, speaking to Threatpost.

This technique can only be exploited by a local user, so it does require the attacker to have a user session on the machine, Morales said – it is not a technique for gaining initial access to a machine, but for elevating privileges after a successful intrusion.

All Windows Systems Affected, Exploit Ready

CTF is a built-in Windows feature that has been around for about 20 years – and Morales pointed out that it’s present on every Windows system since XP, which would cover almost every Windows system deployed today. Given their legacy nature and the size of the attack surface, services like these are ripe for bug-hunting.

“Microsoft’s operating systems consist of many services that were implemented to perform an original function but have continued to grow and be modified over the years by multiple developers,” Ivanti’s Schell told Threatpost. “This often results in new vulnerabilities surfacing. In this particular case, the vulnerability exists in ctfmon.exe which is a Microsoft Office process that works with the Windows operating system. It is a non-essential system process that runs in the background, even after quitting all programs.”

And indeed, Yaron Zinar, senior researcher at Preempt, told Threatpost that this particular service is riddled with holes. “The API has many issues – it does not validate originator, open between privileged and non-privileged processes and – the crown jewel – it contains many memory-corruption bugs,” he said.

As for exploitation, Ormandy developed a working exploit that can gain NT AUTHORITY\SYSTEM privileges from an unprivileged user on up-to-date Windows 10 1903, which means he can get privileged access to the system. “It took a lot of effort and research to reach the point that I could understand enough of CTF to realize it’s broken,” he wrote. “These are the kind of hidden attack surfaces where bugs last for years. It turns out it was possible to reach across sessions and violate NT security boundaries for nearly 20 years, and nobody noticed.”

Others validated the work. “Digital Shadows tested it in its lab this afternoon and it worked great against a fully-patched Windows 10 system,” Gold told Threatpost. “The researcher from Google was able to create an exploit where he attacked the logon screen and run code as SYSTEM,” said Zinar. “It appears that some of the issues are not fully mitigated and it is possible more issues/ways to exploit this interface will be discovered in the future.”

As noted, Microsoft patched the bug as part of its August Patch Tuesday update. Also, Schell said this can be mitigated by simply turning off the ctfmon service, “and is not an issue for most languages as they don’t use the enhanced input profile needed.”
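Schell’s suggested mitigation is to turn off the ctfmon service where the enhanced input profile isn’t needed. As a small triage aid, the sketch below (assuming the third-party psutil package, on Windows) merely reports where ctfmon.exe is active; it is not a fix in itself.

```python
# Minimal sketch of a triage check, not a mitigation: list processes named
# ctfmon.exe and the users running them. Assumes the psutil package; intended
# for Windows hosts (it simply finds nothing elsewhere).
import psutil

def find_ctfmon() -> list:
    hits = []
    for proc in psutil.process_iter(["pid", "name", "username"]):
        if (proc.info["name"] or "").lower() == "ctfmon.exe":
            hits.append(proc)
    return hits

if __name__ == "__main__":
    for proc in find_ctfmon():
        print(f"ctfmon.exe pid={proc.info['pid']} user={proc.info['username']}")
```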

Source

Intel is warning of a high-severity vulnerability in its software that identifies the specifications of Intel processors on Windows systems. The flaw could have an array of impacts on affected systems, such as opening them up to information-disclosure or denial-of-service attacks.

The update is part of an August round of patches issued by the chip maker, addressing three high-severity flaws and five medium-severity bugs. “Intel has released security updates to address vulnerabilities in multiple products,” according to Intel’s Tuesday advisory. “An attacker could exploit some of these vulnerabilities to gain an escalation of privileges on a previously infected machine.”

One of the more serious vulnerabilities exists in the Intel Processor Identification Utility for Windows, free software that users can install on their Windows machines to identify the actual specification of their processors. The flaw (CVE-2019-11163) has a score of 8.2 out of 10 on the CVSS scale, making it high severity. It stems from insufficient access control in a hardware abstraction driver for the software, versions earlier than 6.1.0731. This glitch “may allow an authenticated user to potentially enable escalation of privilege, denial of service or information disclosure via local access,” according to Intel. Users are urged to update to version 6.1.0731.

Other High-Severity Flaws

Intel also stomped out a high-severity vulnerability in its Computing Improvement Program, a program that Intel users can opt into which uses information about participants’ computer performance to make product improvements and detect issues. The program contains a flaw (CVE-2019-11162) in the hardware abstraction of the SEMA driver that could allow escalation of privilege, denial of service or information disclosure. “Insufficient access control in hardware abstraction in SEMA driver for Intel Computing Improvement Program before version 2.4.0.04733 may allow an authenticated user to potentially enable escalation of privilege, denial of service or information disclosure via local access,” said Intel.

A final high-severity flaw was discovered in the system firmware of the Intel NUC (short for Next Unit of Computing), a mini-PC kit used for gaming, digital signage and more. The flaw (CVE-2019-11140), which has a CVSS score of 7.5 out of 10, stems from insufficient session validation in the NUC’s system firmware. It could allow a user to potentially escalate privileges, cause denial of service or disclose information. An exploit would come with drawbacks – a bad actor would need existing privileges and local access to the victim system.

Vulnerabilities continue to crop up in the NUC – in April, Intel patched a high-severity NUC vulnerability (CVE-2019-0163) that could enable escalation of privilege, denial of service and information disclosure on impacted systems; and in June, Intel patched seven high-severity vulnerabilities in the NUC’s system firmware.

Intel’s latest swath of patches also comes on the heels of a new type of side-channel attack revealed last week impacting millions of newer Intel microprocessors manufactured after 2012. The attack, SWAPGS, is similar to existing side-channel attacks such as Spectre and Meltdown and similarly could allow a hacker to gain access to sensitive data such as passwords and encryption keys on consumer and enterprise PCs.
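Since Intel’s fix for the Processor Identification Utility is simply “update to 6.1.0731,” the check reduces to a version comparison. A minimal sketch follows; how the installed version string is collected (registry query, inventory agent) is out of scope here, and the value shown is hypothetical.

```python
# Minimal sketch: flag an installed Intel Processor Identification Utility
# version older than the fixed release named in the advisory (6.1.0731).
FIXED = "6.1.0731"

def as_tuple(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed: str, fixed: str = FIXED) -> bool:
    return as_tuple(installed) < as_tuple(fixed)

if __name__ == "__main__":
    installed = "6.1.0630"  # hypothetical inventory value
    status = "needs update" if is_vulnerable(installed) else "patched"
    print(f"Intel Processor Identification Utility {installed}: {status}")
```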

Source

Hacking conference organizer DEF CON Communications said it plans to roll out a global anonymous bug-submission platform based on the SecureDrop communications tool. During a session at DEF CON in Las Vegas last week, conference founder Jeff Moss said the goal was to launch the yet-to-be-named program within the next 12 months. The plan is part of a coordinated effort with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA).

The anonymous bug-submission program is meant to encourage ethical hackers to submit high-level bugs anonymously that might otherwise trigger a barrage of questions or might put researchers in legal hot water. The system will be built on open-source technology from the Freedom of the Press Foundation’s SecureDrop server and is designed to be a cyber tipline of sorts.

“There is a lot of apprehension among researchers wanting to report vulnerabilities to government. So we asked ourselves is there a way to create a process for hackers and researchers to report vulnerabilities into the US CERT to help do some good?” Moss said. He spoke during a session where he was joined by a panel of experts including hacker Marc Rogers (a.k.a. Cyberjunky), DHS cybersecurity official Chris Krebs, Jennifer Granick, surveillance and cybersecurity counsel with the ACLU, and others.

Panelists said SecureDrop servers facilitated by the DEF CON organization would be a global initiative, with servers likely spread geographically around the world. DEF CON would act as a trusted middleman, allowing a hacker the opportunity to “do the right thing” if they found or stumbled on an extremely sensitive bug that was too volatile to submit via regular channels. DEF CON representatives would then submit the bug to the US Computer Emergency Readiness Team (US-CERT).

“Our preference is vulnerabilities are disclosed to the vendor,” said Krebs, who serves as director of DHS’ CISA. “I understand that doesn’t always work. Sometimes it’s the [vendor] community that isn’t mature enough or maybe it is the vendor. So, sometimes you need an arbitrator.” He estimated that in a low single-digit number of cases researchers are highly reluctant to submit a bug through the normal vendor or CERT channels.

“People in the hacker community occasionally reach out to me and say, ‘Hey, I know this thing. And I don’t want to explain how I know this. I’m afraid of the repercussions, but somebody should do something about this,’” said panelist Pablo Breuer, director of U.S. Special Operations Command at the Donovan Group.

Plenty of questions still need to be sorted out. Among them is how to make the anonymity feature bulletproof even if a court serves a subpoena to gain physical access to the service’s servers. “We realize there are a lot of open questions,” Rogers said, speaking directly to the DEF CON audience of researchers. “And that’s why you guys can feed into this. The only way we are going to make this work is if the community is behind it and helps shape it.”

Part of the logistics in putting the DEF CON SecureDrop anonymous bug-submission program together would be creating separate datacenters in new locations. “We are thinking very much about functionality. What happens if the box is taken? Then obviously, if the box is taken we have technological concerns about the contents escaping,” Granick said.

She added, “if someone does either subpoena or hack their way into the box we need to make sure that they’re not going to be able to see anything, without any opportunity for us to get into court to challenge it.” She underscored the sensitive nature of the potential data stored on the server — from the vulnerability itself to names of co-workers and company whistleblowers — making it an attractive target for governments and hackers alike.

Those anonymity requirements led DEF CON and CISA to turn to SecureDrop, which is in use by a number of organizations, such as the New York Times, for anonymous news tips. The signal-to-noise ratio is pretty horrendous, said panelist Runa Sandvik, director of information security for the newsroom at the New York Times. But she said, “Having the system is far more valuable than if we didn’t have it.” She said the good that comes out of having SecureDrop outweighs the bad of having to manage a lot of the unhelpful information the system collects.

The technology behind SecureDrop was originally developed by the late Aaron Swartz, Kevin Poulsen and James Dolan. It was created to be a vehicle for whistleblowers. The Freedom of the Press Foundation took over development of the platform in 2013, according to the SecureDrop FAQ.

“SecureDrop is designed to use two physical servers: a public-facing server that stores messages and documents, and a second that performs security monitoring of the first. The code on the public-facing server is a Python web application that accepts messages and documents from the web and GPG-encrypts them for secure storage. This site is only made available as a Tor Hidden Service, which requires sources to use Tor, thus hiding their identity from both the SecureDrop server and many types of network attackers,” according to the FAQ.

In DEF CON’s implementation of SecureDrop, DEF CON operates the servers that the Tor network connects to. “The magic happens before the connection to the CERT. We never see, and cannot discern the IP address (of a submitter). CERT never discern the IPs of the exit node. And there is the back and forth of two separate Tors running,” Moss said. “There is no way we are going to engineer trust, but there’s a lot of things we can do to reduce the risk,” Moss said.

The panel discussed the vetting process for SecureDrop and said that there are still a few technical and legal issues to resolve before it’s ready. Meanwhile, the audience of security professionals supported the project concept, while expressing concern over some of its technical anonymizing aspects. “I’ll tell you, honestly, one of my plans is if there is a little engineering to do to this project, it’s to make sure that DEF CON can honestly answer a subpoena request and say, ‘No, we don’t have the keys. We can’t tell you what’s on the server.’”

When asked, by a show of hands, who thought the program was a good idea, a clear majority of session attendees expressed support. When Moss asked whether the plan sounded like a “catastrophic disaster and a threat to DEF CON,” one or two attendees out of the hundreds attending the session raised their hands.
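The FAQ passage above captures the key property: submissions are GPG-encrypted on the public-facing server, so only ciphertext is ever stored. The sketch below illustrates that idea with the python-gnupg wrapper; it is not SecureDrop’s code, and the GnuPG home directory and recipient fingerprint are placeholders.

```python
# Illustrative sketch of "encrypt at submission, store only ciphertext" (the
# property the SecureDrop FAQ describes), using the python-gnupg wrapper.
# Assumes a local gpg binary and an already-imported recipient public key;
# the fingerprint and gnupg home directory below are placeholders.
import gnupg

RECIPIENT_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder

def encrypt_submission(plaintext: str, gnupg_home: str = "/var/lib/tips/gnupg") -> bytes:
    gpg = gnupg.GPG(gnupghome=gnupg_home)
    result = gpg.encrypt(plaintext, RECIPIENT_FPR, always_trust=True)
    if not result.ok:
        raise RuntimeError(f"encryption failed: {result.status}")
    return result.data  # ASCII-armored ciphertext; only this is written to disk

if __name__ == "__main__":
    blob = encrypt_submission("vuln report: details withheld in this example")
    print(blob.decode()[:64], "...")
```

In the real system the decryption key never lives on the public-facing server, which is what lets an operator truthfully answer a subpoena the way Moss describes.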

Source

Facebook has admitted that it has been transcribing audio chats between users on its Messenger platform. Sources said that it has been paying hundreds of outside contractors to do so, a practice that calls into question how open Facebook is with its users about its data-handling practices.

While Facebook confirmed that it had been transcribing users’ audio, it maintains that affected users chose to have their voice chats transcribed. In answering questions related to a Congressional probe last year, the company said that it “only accesses users’ microphone if the user has given our app permission and if they are actively using a specific feature that requires audio (like voice messaging features).”

It also said this week that it is halting the program – which is meant to train its algorithm to be more accurate – to review the privacy implications. “Much like Apple and Google, we paused human review of audio more than a week ago,” the social network said Tuesday in a statement to media. One of the firms reviewing the user chat logs, TaskUs, confirmed that to be the case: “Facebook asked TaskUs to pause this work over a week ago, and it did,” a spokesperson told Bloomberg.

However, the bigger issue may be that Facebook lacks transparency when it comes to communicating to users how it uses the audio transcriptions, how long it keeps them and who might have access to them, including third parties. Facebook’s privacy policy says only that the tech giant will collect “content, communications and other information you provide” when users “message or communicate with others.” It also says, “systems automatically process content and communications you and others provide to analyze context and what’s in them” – without mentioning a human review process or a transcription team.

It does say that it shares information with “vendors and service providers who support our business,” while giving no specifics — a common tactic to provide a sort of data-handling loophole, according to privacy experts. “It’s vague language by design, and [these companies use] ambiguity to ensure they can do whatever they want with your data,” Sean McGrath, editor of ProPrivacy.com, told Threatpost.

The messages are anonymized, but the contract employees weren’t told where the audio was recorded or how it was obtained, sources told Bloomberg, who also said that the contractors numbered in the hundreds.

Facebook could find itself running afoul of the General Data Protection Regulation (GDPR) in Europe due to this lack of clear data-processing policies. The Irish Data Protection Commission, which oversees Facebook’s privacy behavior in Europe, said it was examining the situation for GDPR violations. The regulatory concerns are more of the same for Facebook, which just agreed to a $5 billion settlement with the U.S. Federal Trade Commission after a probe of its privacy practices.

Facebook isn’t alone in running into problems over AI training. Amazon, Apple and Google have all landed in hot water over the way they collect audio clips from users and use humans to review them.

Earlier in August, for instance, Apple said that it would suspend a program that lets contractors listen in on Siri voice recordings, after a report outlined how contractors regularly listen to intimate voice recordings – including drug deals or recordings of couples having sex – in order to improve audio accuracy, a process that Apple calls “grading.”

In April, Amazon came under fire after a report revealed that the company employs thousands of auditors to listen to Echo users’ voice recordings. In July, Amazon acknowledged that it retains the voice recordings and transcripts of customers’ interactions with its Alexa voice assistant indefinitely; and in June, two lawsuits were filed seeking class-action status, alleging that Amazon records children and also stores their voiceprints indefinitely.

Google, meanwhile, was caught out in July after it emerged that Google Home smart speakers and the Google Assistant virtual assistant have eavesdropped without permission — capturing and recording highly personal audio of domestic violence, confidential business calls and even some users asking their smart speakers to play porn on their connected mobile devices.

Source

A never-before-seen cryptomining variant, dubbed “Norman” after one of its executable files, has been spotted in the wild using various techniques to hide and avoid discovery. The levels of obfuscation are notable for their sheer depth, according to an analysis.

Varonis uncovered an initial sample after investigating an ongoing malware infection that had spread to nearly every server and workstation at a midsize company. Much of the malware consisted of generic cryptominers, password-harvesting tools and hidden PHP shells – and Norman, too, at first seemed to be a generic miner hiding itself as “svchost.exe,” the researchers said. But further investigation told a different story.

“Norman is an XMRig-based cryptominer, a high-performance miner for Monero cryptocurrency,” researchers said in an analysis on Wednesday. “Unlike other miner samples we have collected, Norman employs evasion techniques to hide from analysis and avoid discovery.”

Multiple Layers of Obfuscation

The malware’s deployment can be divided into three stages: execution, injection and mining – each with its own evasion methods. The first stage starts with the svchost.exe executable which, unusually, was compiled with the Nullsoft Scriptable Install System (NSIS).

“NSIS is an open-source system used to create Windows installers,” explained the researchers. “Like SFX, it creates a file archive and a script file that runs when the installer executes. The script file instructs the program which files to run and can interact with the other files inside the archive. The malware executes by calling a function in 5zmjbxUIOVQ58qPR.dll which accepts the other files as parameters.”

In the second stage, the main payload file, 5zmjbxUIOVQ58qPR.dll (originally named “Norman.dll,” hence the malware’s name), is built with .NET and triple-obfuscated with the Agile obfuscator, a known commercial .NET obfuscator. “The execution of the malware involves many payload injections into itself and other processes,” according to the analysis. “Depending on the OS’s bit type, the malware will choose a different execution path and launch different processes.” More specifically, the malware injects a UPX-obfuscated version of the miner into either Notepad, Explorer, svchost or wuapp, depending on the execution path.

The injected payload has two main functions: to execute the cryptominer and to evade detection. The XMRig miner itself is also obfuscated with UPX. And the malware is designed to avoid detection by terminating the miner (wuapo.exe) when a user opens Task Manager; after Task Manager closes, the malware will execute the wuapp.exe process and reinject the miner.

Mysterious PHP Shell

Varonis also found that much of the malware (Norman and other samples) was communicating with its command-and-control (C2) infrastructure via an unusual web service. “Infected hosts were easily detected by their use of DuckDNS,” according to the research. “DuckDNS is a dynamic DNS service that allows its users to create custom domain names. Most of the malware from this case relied on DuckDNS for C2 communications, to pull configuration settings or send updates.”

During the investigation, the forensics specialists also found an XSL file that revealed a new PHP shell that continually connects to the C2 server. Aside from the fact that both Norman and the PHP shell use DuckDNS, the two may be related, given that the shell’s existence would explain why the Norman infection was so widespread within the company.

“None of the malware samples had any lateral movement capabilities, though they had spread across different devices and network segments,” the researchers said. “Though the threat actor could have infected each host individually (perhaps via the same vector used in the initial infection), it would have been more efficient to use the PHP shell to move laterally and infect other devices in the victim’s network.”

As for other attribution characteristics, the attackers also seem to originate from a French-speaking country, with some variables and functions in the code written in French. “An interesting thing that we encountered during the analysis is that the malware possibly originated from France or another French-speaking country: the SFX file had comments in French, which indicate that the author used a French version of WinRAR to create the file,” explained Varonis.

Cryptomining malware is increasingly targeting businesses. To protect themselves, users should, as always, keep software up to date, monitor for abnormal data access, and use AV and a firewall.
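Varonis’ observation that infected hosts “were easily detected by their use of DuckDNS” suggests a simple hunting heuristic: flag internal clients resolving duckdns.org names. The sketch below assumes a tab-separated resolver log (client IP, then queried name), which is a placeholder format; adapt the parsing to whatever your DNS server actually emits.

```python
# Minimal hunting sketch based on the DuckDNS detection hint in the write-up:
# scan a DNS query log for duckdns.org lookups and report which clients made
# them. Log format (client_ip<TAB>queried_name per line) is a placeholder.
from collections import defaultdict

def find_duckdns_clients(log_path: str) -> dict:
    hits = defaultdict(set)
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            parts = line.strip().split("\t")
            if len(parts) < 2:
                continue
            client, name = parts[0], parts[1].lower().rstrip(".")
            if name == "duckdns.org" or name.endswith(".duckdns.org"):
                hits[client].add(name)
    return hits

if __name__ == "__main__":
    for client, names in find_duckdns_clients("dns_queries.tsv").items():
        print(client, "->", ", ".join(sorted(names)))
```

DuckDNS has plenty of legitimate users, so hits are leads for triage rather than proof of infection.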

Source

As social media platform TikTok becomes the top App Store download in 2019 – and the number three app download on Google Play and on platforms overall – scammers are looking to cash in on the troves of younger users of the popular platform. Tenable researcher Satnam Narang, who has been tracking the platform for scams since March 2019, said that, while scams have been previously undocumented, he has come across several that are “in their infancy”. He expects that number to explode. These scams, already prevalent on Instagram and Twitter, revolve around adult dating as well as account impersonation to get more likes or follows, and in some cases can be extremely profitable for scammers. “I think as long as these platforms exist, and there are billions of users using them, you’re going to have scammers. It’s just sort of part of using these platforms,” Narang told Threatpost.

Below is a lightly-edited transcript of the podcast.

Lindsey O’Donnell: Hi everyone, welcome back to the Threatpost podcast. This is Lindsey O’Donnell with Threatpost and I’m here today with Tenable senior researcher Satnam Narang. Satnam, how are you doing today?

Satnam Narang: I’m doing well, Lindsey, how are you?

LO: I’m good just coming off of Black Hat craziness, so a little tired. So Tenable on the kind of outskirts of Black Hat has come out with some new research today about several popular scams that are taking a hold of the popular video platform TikTok, which is very prevalent. I mean, it’s the number one app for App Store downloads and the number three download overall in terms of apps. So with that kind of success, obviously comes security issues, as we’ve seen in the past with other apps and social media platforms. So Satnam, can you give us some context about TikTok, what do we need to know about the social platform as it relates to the attacks that you’ve outlined in your research?

SN: So Lindsey, yeah, TikTok is really popular, as you just noted, it’s been gaining in popularity over the last year, they just actually recently celebrated their one year anniversary. Because TikTok merged with Musical.ly last year, and Musical.ly was a really popular platform as well. And earlier this year, they reached a milestone of 1 billion monthly active users, which is a pretty tremendous feat in the consideration that Instagram also recently, as of last year, crossed the 1 billion monthly active user mark. So if you think about how prevalent and popular Instagram is, you can definitely see that TikTok is just as popular, if not more popular, especially with the younger crowd.

LO: Right for sure. And I feel like I keep seeing new research about scams that are hitting Instagram and Twitter and other social media platforms, but not so much TikTok. Is this the first time the platform has been scrutinized as a threat attack surface for potential scammers or attackers?
SN: Well, so through our research, I found some historical references to some of these scams back on Musically, but it wasn’t until TikTok really exploded in popularity that scammers started to take notice of it being a legitimate platform for them to leverage for scams. So, in our research, I started looking into TikTok security back in March of this year. And what ended up coming across my feed were three types of scams, adult dating base scams, impersonation account scams, and then “get free followers and likes” scams, which is tried and true, one of the oldest scams in the book.

LO: That definitely seems like those are prevalent on other platforms. But in terms of TikTok, which one of those three categories would be the most popular would you say?

SN: Well, I think the most popular is definitely impersonation scams. That’s just because it’s really easy to do. All you have to do is essentially download videos of say popular TikTok creators like Salice Rose, or Baby Ariel, or Liza Koshy or if you’re regionally in another part of the world, you know, popular singers, like they have Neha Kakkar, or Salman Khan, who’s one of the biggest Bollywood actors in the world. So taking their videos, either from TikTok directly if they’re on the platform, or from say Instagram and repurposing them on TikTok in order to gain followers.

LO: So what would the end goal for that be for the scammers? Would it be essentially free followers and likes at the end of the day?

SN: Yeah, so in the case of impersonation scams, the idea is rather than organically developing your own following, you’re taking advantage of an existing creator. So in this case, like Salice Rose, who’s a creator, has been around since the Vine days, also makes YouTube videos, leveraging her videos, claiming them to be your own, and then using a username that has some funky characters in there that look like they spell Salice Rose, but they’re a little bit different. And then, once you’ve developed enough of a following, what ends up happening as an impersonator in the case of Salice Rose, for example, you sort of tease to your followers who know you’re not really Salice Rose, that you’re going to reveal your true identity. And then you post the video with your real identity, say with an existing like TikTok sound, for example. And then you reveal yourself and then in some cases, you might even use the TikTok Live feature in order to sort of have a live conversation with some of your followers. And then ultimately, the goal is then to pivot from that impersonation account to your own personal account. So you’ll do this by changing all videos, by pulling down all the existing videos, changing the profile picture, but one quirk on TikTok that’s really interesting is that you cannot change your TikTok username for 30 days. So once you change your name, you have to keep that name for 30 days. So if you claim to be the official Salice Rose, you’re gonna have to wait 30 days before you can change that username.

LO: And you were mentioning too in the research that this isn’t just direct impersonation of the celebrity or TikTok celebrity. It’s also with fan pages or even second accounts that may be created. Or even you know, as you mentioned before Bollywood celebrities who may not even have an account. So it seems like it’s pretty rampant in that regard.
SN: Yeah, and the most fascinating thing about the whole notion of like a backup or second account is that some people might not even question it, because in some ways, there’s this idea that maybe your primary account could be taken down. So you’ll have a secondary account, which is not like a unique phenomenon with TikTok, it’s something we’ve seen on other platforms, too. But what’s most fascinating to note about the TikTok research that we did was, there’s an example in the report, talking about Liza Koshy, who has over 14 million followers on TikTok, someone created a backup account for Liza Koshy, and that account also got verified by TikTok, which is pretty absurd if you think about it, because the primary Liza Koshy account is already verified. So you have two accounts that are verified. So for users, there’s a bit of confusion, like is this really that account like belonging to Liza Koshy, but what we found in our research was, if you go into the videos, they’re all repurposing content from the primary Liza Koshy account, the real one. And then they’re also promoting like another account. So they’re promoting a third account, trying to drive users to follow that account. So that’s the value there, they may never pivot that Liza Koshy backup account to their own personal one, but they’re leveraging the 400,000 plus followers that they have to try to gain followers on the third account.

LO: That’s pretty surprising that a second account could be verified, because I feel like the mitigation here would be to check to make sure the account is verified that may be impersonating the celebrity or whatnot. So it really makes me question or at least think more about the vetting process that goes behind some of these accounts on TikTok, for sure.

SN: And like you mentioned about the fact that there are also impersonators of those who may not even have a TikTok, that’s another issue that really doesn’t take notice, because you have users looking at these accounts and actually interacting with them, thinking to themselves, they’re actually interacting with that person, even though it’s not them, it’s another person impersonating them trying to drive traffic to their own personal account.

LO: And when these scammers are driving that traffic to their own account, is there any advantage there behind gaining more followers or whatnot? Is there any sort of monetary value there? Is it more you know, for status and kind of having that type of popularity on their account?

SN: Yeah, it’s really just about developing a following without actually putting in the work, right, normal creators on TikTok and other platforms have to create unique content that actually appeals to a wide swath of people. But in this case, all you’re doing is taking content from an existing creator, or popular celebrities, and then leveraging that in order to drive followers to the third account by saying, “hey, follow my friend so and so” when in actuality you are just promoting yourself.

LO: Can you talk a little bit about also the other category that you touched upon in your research, which is that theme of adult dating and how scammers are using this category to trick end users on the platform as well – What did you find there?

SN: Yeah, you know, adult dating theme scams have been around for a while, and it makes sense that they would percolate towards TikTok as it got popular.
So in the case of TikTok scams, relating to adult dating, what we’ve seen are stolen videos from other platforms like Instagram, and Snapchat, posted on profiles, and what they’re doing these scammers is that they’re driving users to a different platform, they’re saying, “hey, check me out on Snapchat, or add me on Snapchat,” to see more explicit content in a way. And I surmise the reason for that is, in order to actually have people messaging you directly on TikTok, you need to provide a telephone number. So it’s possible that scammers don’t actually want to take that step in this case, and they’re just wanting to bypass that whole process and driving users to Snapchat. And when users from TikTok move to Snapchat by saying, you know, looking up that user from TikTok, they’ll be presented with sexually suggestive content or explicit content, saying, “Hey, you know, follow me here, if you want to see me naked on a camera, or if you want to hook up,” and then they direct them to what’s called a pre-lander page, or an intermediary page, which is used to drive users to the adult dating website. And essentially, the purpose for this is to ensure that there’s like an affiliate tag. So if you’re familiar with affiliate programs that are used by most e-commerce platforms, you basically give a cut to the person driving traffic to your website. So in the case of adult dating, when you direct someone to the adult dating website, if that user signs up, you’ll earn a cut of about $1 to $3 of that sign up.

LO: It seems like there’s a dual purpose here, which is, as you were saying, this affiliate program to drive that kind of cost per action revenue, and then also tricking users to pay for fraudulent premium Snapchat accounts on the other end of the spectrum as well. It sounds like there’s kind of two things that are going into there.

SN: Yeah, that one was very interesting, because that’s like a recent phenomenon that I’ve observed over the last, maybe two or three weeks or so – is that they’re moving away from the affiliate model and going directly to this concept of a premium Snapchat account, which is a real thing that’s been around for a while where Snapchat users who want to invite folks to view their more not safe for work content, will ask them to pay monthly fees, which could vary between $10 to $20 a month, depending on the person and the platform. So scammers see that opportunity and what they’re doing is that they’re mimicking it. So they’re claiming to offer a premium Snapchat account where they’re going to show more explicit material. And then they’re asking users to go through PayPal, and pay them anywhere from $10 to $20. And essentially, what’s going to end up happening is once you end up paying that $10 or $20, you won’t get the premium content that you’re expecting. And the scammers will be getting more than the $1 to $3 that they would have gotten through the affiliate program.

LO: With these figures that you’re talking about, in terms of the popularity of some of these dating scams accounts that you were tracking, you said that one that you saw, received over 34,000 likes and had over 12,000 followers. I mean, that could be extremely lucrative for a scammer in this case.

SN: Yes, most definitely. And especially because, once again, when users are on the TikTok platform, they may or may not believe that the person they’re interacting with is the person that they’re claiming to be.
So in the case of the adult dating scam accounts, you have users who comment on videos, making suggestive comments back to the scammers. Obviously there's an interest on the part of those users, which serves the whole purpose of the ecosystem, right? You're getting users to engage with your content, then potentially sending them to Snapchat, and from there potentially turning them into an affiliate payout or a "premium Snapchat subscriber," even though they're not going to get what they're looking for.
LO: You mentioned earlier that the typical TikTok end user would be part of a younger audience. What might that have to do with how much of an issue this is? Do you think younger audiences are more or less aware of this type of scam?
SN: Well, in the case of signing up for an adult dating website, there are no limitations, right? They'll ask you, "Are you over the age of 18?" and, as you know, anyone can just say, "Yes, I'm over the age of 18"; there's no way to verify it. So getting any user to sign up is really simple, whether they're under or over 18. Where you might have an issue is with a certain type of lead called a premium lead, where you convert a user who signs up for an adult dating website into a premium subscriber. That requires the user to provide a credit card number in order to sign up for the service, and in that case the scammers could make maybe $50 to $60 per premium subscriber, so that's the most lucrative payout. But on average, most of the payouts they receive are just for driving users to these websites and getting them to sign up. The goal is to get anybody to sign up; it doesn't matter how old they are. So even though TikTok might skew toward a younger audience, there are no controls in place to prevent a younger user from signing up for one of these adult dating websites.
LO: I'm curious, too, what top tips you would have for TikTok users to watch out for these scams, because some of them are pretty sneaky. I mean, changing one letter in a username in order to impersonate an account is pretty hard to spot. What are some of the top tips you might have?
SN: Well, obviously, when you're looking for users on TikTok, the verified creator badge would be one of the things you'd look for. But as we've reported in our research, that's not always a reliable indicator, because you have the case of the Liza Koshy impersonator who managed to get verified. So it really boils down to parsing through the content and looking at comments, because there are other users on the platform who do identify these scam accounts and say, "You're not the real Liza Koshy; you're not the real Salice Rose." That's usually a good way to gauge whether or not you're interacting with the real account. Another way to notice it: the real Liza Koshy account has 14 million followers, which is a pretty good indicator that it's the real account. And then also just look for telltale signs of what impersonation scams might look like. In the example of the Salice Rose impersonator, they eventually start posting their own video content.
So you'll have a mix of the original creator's content (Salice Rose's, in this case) as well as the scammer's own content, and when you see that, it's obviously a huge red flag. And in the other case, that of the Liza Koshy impersonator, when they're trying to drive you to follow other users, that's usually a sign you're not dealing with the real person, because the whole emphasis there is to get users to follow their third account.
LO: Those are good tips. And just taking a step back: these scam and fake accounts are such an issue on social media platforms across the board, whether it's Instagram or Twitter, and there's such a sheer number and variety of scams, from the ones we've all seen around buying Bitcoin or other cryptocurrency to the adult dating scams you've mentioned. So I have to ask, what do you think these social media platforms can do, if anything, to scrape away these types of fake accounts? Is the Report button really going to be enough? Is this just something we need to deal with for the long-term future?
SN: Well, the Report button definitely helps, because the more people report these accounts, the more likely they are to get taken down. Given how prevalent these scammers are on all of these platforms, whether it be TikTok, Instagram, Twitter or Snapchat, the reporting functionality is the users' best bet. The platforms themselves do a really good job, and they do their best to deal with it. But the problem, Lindsey, is that scammers are relentless. When they see the popularity of platforms like Instagram and TikTok, with 1 billion monthly active users, they see the potential to monetize that, and they're going to continue to hammer those platforms as best they can. They're going to find ways around some of the automated detection that might be in place to take down their accounts; they might do things like alter their profile photos in a certain way or, like you mentioned earlier with the usernames, use slightly different usernames. We also published research around Instagram scams about a month ago, which talked about some of the methods scammers are using to bypass the detection methods Instagram has in place, for example. So I think as long as these platforms exist and there are billions of users on them, you're going to have scammers; it's just part of using these platforms. At the end of the day, it's a combination of the users on the platform and the folks on the abuse and security teams working in tandem, doing their level best to deal with this stuff.
LO: Well, hopefully reports like yours will better educate users and caution them about what to look out for. It's definitely a threat we'll be watching in the coming months, especially as TikTok grows even more popular. Let's wrap the show up now. Satnam, thank you again for coming on to talk to us about your new research today.
SN: It was my pleasure, Lindsey. Thanks for having me.
LO: Great, thanks. Once again, this is the Threatpost podcast. Catch us next week for our next episode.
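As an illustration of the username-impersonation tactic discussed in the episode, where a scam handle differs from a verified creator's handle by a single character, below is a minimal, hypothetical Python sketch of how lookalike handles could be flagged with a simple edit-distance check. The watched handles, threshold and function names are made up for illustration; this is not TikTok's detection logic or anything described in the research itself.

# Hypothetical sketch: flag handles that sit one character away from a
# known verified handle. Illustrative only; not a platform's real system.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Made-up watchlist of handles a brand-protection team might monitor.
KNOWN_VERIFIED = ["lizakoshy", "salicerose"]

def flag_lookalikes(candidate: str, max_distance: int = 1):
    """Return watched handles the candidate is suspiciously close to."""
    candidate = candidate.lower().lstrip("@")
    # Distance 0 is the real account; 1 catches a single swapped,
    # added or dropped character, the pattern described above.
    return [h for h in KNOWN_VERIFIED
            if 0 < edit_distance(candidate, h) <= max_distance]

if __name__ == "__main__":
    for handle in ["lizak0shy", "salice.rose", "randomuser42"]:
        hits = flag_lookalikes(handle)
        if hits:
            print(f"@{handle} looks like an impersonation of: {hits}")
        else:
            print(f"@{handle}: no close match to a watched handle")

Running the sketch flags "lizak0shy" and "salice.rose" (one substitution and one inserted character, respectively) while leaving unrelated handles alone; a real system would of course need far more signals, such as profile photos and reposted content, as noted in the conversation.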
