Researchers are warning of an ongoing campaign exploiting vulnerabilities in a slew of WordPress plugins. The campaign is redirecting traffic from victims’ websites to a number of potentially harmful locations. Impacted by the campaign is a plugin called Simple 301 Redirects – Addon – Bulk Uploader, as well as several plugins made by developer NicDark (now rebranded as “Endreww”). All of the plugins have updates available resolving the vulnerabilities – but researchers in a Friday post warned that WordPress users should update as soon as possible to avoid attack.

“Redirect locations were a typical spread; whatever ad network is running it likely does some geolocation and tracking to decide where to send you,” Mikey Veenstra of Wordfence told Threatpost. “Most recent injections don’t even appear to be functional, suggesting some breakdown in infrastructure or a transition of some sort.”

Veenstra told Threatpost that exploitation began on or around July 31, just as the first disclosure for one of the vulnerabilities was published. “The plugin repository team quickly removed the rest of NicDark’s plugins from the repository, which drew attention and revealed that they all suffered similar vulnerabilities,” he told Threatpost. “So attacks probing for all of them began pretty quickly, despite many of the plugins having fairly small install bases.”

Vulnerabilities

Veenstra told Threatpost that he found at least five plugins by NicDark with flaws being exploited as part of the campaign. These plugins are: Components For WP Bakery Page Builder, Donations, Travel Management, Booking and Learning Courses. The flaws (all recently patched) are exploited by similar AJAX requests, according to Wordfence. In each case the plugin registers a nopriv_ AJAX action, which is responsible for importing various WordPress settings.
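To see why these handlers are reachable without logging in, here is a minimal Python model of how WordPress’s admin-ajax.php dispatches actions. The action name `nd_import_settings` and the handler body are illustrative stand-ins, not the plugins’ actual (PHP) code; the point is that a `wp_ajax_nopriv_` hook is served to anonymous visitors.

```python
# Minimal model of WordPress admin-ajax dispatch. Logged-in requests hit
# wp_ajax_{action}; anonymous requests hit wp_ajax_nopriv_{action}.
handlers = {}

def add_action(hook, fn):
    """Register a callback for a hook, like WordPress's add_action()."""
    handlers[hook] = fn

def admin_ajax(action, logged_in):
    """Dispatch an AJAX request the way wp-admin/admin-ajax.php does."""
    prefix = "wp_ajax_" if logged_in else "wp_ajax_nopriv_"
    fn = handlers.get(prefix + action)
    return fn() if fn else "0"   # WordPress replies "0" for unknown actions

settings = {"siteurl": "https://victim.example"}

def import_settings():
    # The vulnerable pattern: attacker-supplied settings are applied with
    # no capability check or nonce verification.
    settings["siteurl"] = "https://attacker.example"
    return "imported"

add_action("wp_ajax_nopriv_nd_import_settings", import_settings)

# Because the plugin registered a nopriv_ hook, an anonymous visitor
# reaches the handler and overwrites siteurl.
print(admin_ajax("nd_import_settings", logged_in=False))
print(settings["siteurl"])
```

A plugin that only needs the logged-in `wp_ajax_` hook but also registers the `nopriv_` variant silently exposes that code path to the whole internet, which is the mistake at the heart of these flaws.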
Unauthenticated visitors can successfully send these AJAX requests in order to modify the siteurl setting of the victim’s site – thus sending visitors to other locations. “The result of this modification is that all of the victim site’s scripts will attempt to load relative to that injected path,” researchers said. “In effect, this replaces all of a site’s loaded JavaScript with a file under the attacker’s control.”

The other impacted plugin, Simple 301 Redirects – Addon – Bulk Uploader, developed by Webcraftic, adds functionality to the Simple 301 Redirects plugin, which enables the redirection of requests to other pages. The plugin has more than 10,000 installations. It has a recently patched vulnerability that enables unauthenticated attackers to inject their own 301 redirect rules onto a victim’s website. That means a bad actor has the ability to upload a CSV file that could import a bulk set of site paths and redirect destinations. Ultimately, if a vulnerable site processes an uploaded malicious CSV file, it will begin redirecting all of its traffic to the addresses provided.

Researchers said they have also identified related attacks against other formerly vulnerable plugins, including Woocommerce User Email Verification, Yellow Pencil Visual Theme Customizer, Coming Soon and Maintenance Mode, and Blog Designer. “The domains used by the attackers in performing these script injections and redirects rotate with some frequency. New domains appear every few days, and attacks involving older domains taper off,” researchers said. “At this time, many of the redirect domains associated with these attacks appear to have been decommissioned, despite the fact that these domains still show up in active attacks at the time of this writing.”

Plugins continue to be a security thorn in WordPress’ side.
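To make the bulk-uploader flaw concrete, here is a short Python sketch of what a bulk redirect import does, assuming a simple two-column CSV of request path and destination. The plugin itself is PHP and its real column layout may differ; this is an illustration of why an unauthenticated import endpoint is dangerous, not the plugin’s actual code.

```python
import csv
import io

def import_bulk_redirects(csv_text):
    """Parse a bulk-upload CSV of (request path, destination) rows into a
    dict of 301 redirect rules. Two-column layout assumed for illustration."""
    rules = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) == 2:
            path, dest = row
            rules[path.strip()] = dest.strip()
    return rules

# Because the import endpoint lacked an authentication check, an attacker
# could feed in a CSV like this and hijack every listed path on the site.
malicious_csv = (
    "/,https://attacker.example/landing\n"
    "/shop,https://attacker.example/shop\n"
)
rules = import_bulk_redirects(malicious_csv)
print(rules["/"])
print(len(rules))
```

Once rules like these are stored, every request to a listed path answers with a 301 to the attacker’s address, which is exactly the traffic hijack the researchers describe.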
According to an Imperva report, almost all (98 percent) of WordPress vulnerabilities are related to plugins, which extend the functionality and features of a website or a blog. Other recent vulnerabilities found in WordPress plugins include WP Live Chat and Yuzo Related Posts.

Interested in more on the internet of things (IoT)? Don’t miss our free Threatpost webinar, “IoT: Implementing Security in a 5G World.” Please join Threatpost senior editor Tara Seals and a panel of experts as they offer enterprises and other organizations insight about how to approach security for the next wave of IoT deployments, which will be enabled by the rollout of 5G networks worldwide. Click here to register.

Source

Why did Steam owner Valve say it made a “mistake” turning a researcher away from its bug bounty program? Who was behind a backdoor that was purposefully introduced into a utility utilized by Unix and Linux servers? And why is Facebook coming under fire for its “Clear History” feature? Threatpost editors Lindsey O’Donnell and Tom Spring break down the top stories of the week that have the infosec space buzzing, including:

A backdoor that was intentionally planted in Webmin in 2018 and found during the DEF CON 2019 security conference when researchers stumbled upon malicious code.

A researcher disclosing a zero-day vulnerability (the second in two weeks) for the Steam gaming client after he said he was barred from the bug bounty program of Steam’s owner, Valve.

Facebook being met with vitriol after users discovered its “Clear History” feature, rolled out in some countries this week, wasn’t what they had thought.

For the full podcast, listen below or download here. Below is a lightly edited transcript of the news wrap podcast.

Lindsey O’Donnell: I’m Lindsey O’Donnell with Threatpost, and I’m here today with Tom Spring to break down for you the top news of the week ended August 22. Tom, thanks for joining the Threatpost podcast today. How are you doing?

Tom Spring: I’m doing great. Thanks for asking.

LO: Good. Well, we’re just ending a big week. But we should probably talk about one of the biggest stories that you wrote about, which garnered the most interest for a lot of Threatpost readers: the backdoor that was discovered in the Webmin utility for Unix servers. That was a really interesting story.

TS: Yeah, it just goes to show you how susceptible some of these libraries are to manipulation, and the clever way that people are now abusing – whether it be a repository or whether it be a Git library – it’s really spooky, this backdoor that was found just recently, a couple weeks ago. I should say, even just earlier this week. It’s been an evolving story.
It touches on DEF CON and touches on zero days. But in a nutshell, you’re right, there was a backdoor found in this utility for Linux and Unix servers, called Webmin, that could give attackers basically full control over those servers, and it was sort of a worst-case scenario. But what’s interesting about it is the seeds of this attack, of this vulnerability, were planted in – I believe it was April 2018 – I’m not too sure of the month. But it was last year, there was a library that was put into the code that is behind this Webmin tool, and they backdated it, and it kind of existed and it was not exploited, and if I understand things correctly, it went unnoticed for almost a year.

And then during DEF CON, what happened was researchers were looking at the code for Webmin, and they discovered a way to exploit the utility, using a vulnerability found in a “password change” CGI script. And that was the first red flag that led to more attention to what was going on with this script, and what they discovered was that it was not a mistake and it was intentionally inserted into GitHub, and it was backdated as several versions of Webmin went out to users. And then, you know, sort of everything unraveled, they patched it, it has a happy ending. And now, I think the patches went out, the community is aware, the Webmin utility has been fixed. Obviously, there’s probably a percentage that hasn’t been patched yet. But I think there’s a lot of awareness around this problem. But we’ve gotten some comments on the story asking about who’s behind this.
It’s one thing to say, “Okay, well, we understand what happened, but who is behind inserting this malicious code?” And then the other feedback that we’ve been getting on this story is just how difficult it is to rely on one set of eyes, or even a couple sets of eyes, in terms of looking at code, making sure it’s secure, and not trusting a lot of these commits – not assuming it’s a foregone conclusion that they’re secure. And I think it’s really kicked up quite a bit of discussion within the open source community in terms of how to handle problems like this. We see a lot of this within repositories in terms of malicious code, or just bad code reuse in terms of libraries with software developers; it really is a tremendous challenge. And I think this was a really interesting example of how this can be abused by a malicious actor.

LO: Did Webmin give any indication about what it might change in the future to stop something like this from happening again? Or is that up to speculation at this point about what can be done?

TS: Well, so they’re going to be updating their build process to use checked-in code from GitHub, rather than a local directory that is kept in sync – that had a lot to do with how this was overlooked. They’re also going to be – these are suggestions for users as well – rotating all passwords and keys accessible from old build systems, and auditing all GitHub check-ins over the past year to look for commits that may have introduced similar vulnerabilities. So, like I said, again, you hear a lot about the preventative measures that are taking place to prevent these things from happening in the future. We’re seeing more code reuse dependency.
And I think we’ll probably hear about more tools, more solutions, and repositories talking more about – making noise about – how they’re making sure that what they’re doing is better than what other repositories are doing to keep things safe.

LO: Right. Yeah, for sure. Well, I wrote an interesting story too this week. I covered an ongoing story that Tara actually reported first last week, and that we chatted about on last week’s news wrap; Tom, if you remember the zero day that was discovered in Steam by a researcher last week.

TS: Yeah.

LO: So that story has continued into an entire whirlwind of drama this week. The researcher said that he was barred from the bug bounty program of Steam’s owner, Valve, after disclosing that initial zero-day vulnerability for the Steam gaming client. And then on the heels of that he also disclosed another zero-day privilege escalation vulnerability. So it was a little crazy. And then, just last night, Thursday evening, according to reports, after all that, Valve patched the recent Steam zero day, essentially called turning away the researcher who had found the zero days a big mistake, and updated its bug bounty program to address the issue.

TS: And Lindsey, was this HackerOne? I’m just trying to figure out who’s actually apologetic for, essentially, getting this researcher angry.

LO: Yeah. So this was Valve. But let me take a step back. If you remember, last week, the researcher had some back and forth with Valve about the initial flaw that he had disclosed via its HackerOne bug bounty platform. And essentially, it came down to the fact that Valve didn’t consider local escalation-of-privilege bugs to be part of its bug bounty program.

TS: Yeah, and we got a lot of comments that were in support of Valve’s position on that, a lot of fanboys. But go ahead.
LO: So anyways, what happened was eventually, Valve told the researcher that he was not allowed to publicly release the bug details, but he did anyways, 45 days after the initial disclosure, and then after that, the researcher said that things essentially escalated and that he was banned from the platform. That led to a big discussion around disclosure in the hacker community. But I guess the story kind of has a happy ending at this point, in that Valve has admitted that it made a mistake in banning the researcher. And the other part was that it also updated its bug bounty program to now start accepting local privilege escalation class vulnerabilities.

TS: That really kind of warms my heart, because there’s so much animosity between these bug bounty programs and the researchers, or at least there can be, and if you were to ask me yesterday how this was going to play out, I would have said another bug bounty researcher standoff gone awry. I had little to no hope that there was going to be any resolution on this. And it has been a big soap opera, really interesting stuff.

LO: It’s led to discussion around, as you said Tom, disclosure issues like these in the hacker community. And Katie Moussouris has weighed in, and a couple of others, and the reactions that I’ve seen online have been kind of split.

TS: What’s Katie’s take on that? I’m just curious.

“Good for Valve for apologizing for the mistake in their dismissal of the vulnerabilities. Their bounty triage provider chalked it up to disclosure being a ‘murky process’. Basic triage is ‘murky’? Isn’t the outsourced service supposed to navigate that? https://t.co/Hqk4GzPWIp” — Katie Moussouris (@k8em0) August 22, 2019

LO: Yeah. I mean, she was basically pointing out how this is yet another kind of issue that we’re seeing when it comes to bug bounty programs.
Because as you know, Katie has talked a lot about some of the hurdles that bug bounty programs need to get over.

TS: Yes, she’s a very strong advocate for bug bounty programs and getting them right, that’s for sure.

LO: Yeah. So I mean, on Twitter, she did say that vendors have labeled full disclosure as irresponsible and placed the onus on researchers, while completely skirting their own liability and negligence. And she basically said, if the vendor failed to address it, suddenly it’s the researcher’s fault for speaking up, and how is that fair?

TS: Yeah, I’m sure you’ve spoken to researchers that are very unhappy about bug bounty programs they get involved in. They’ve got handcuffs on; they find the vulnerabilities and they’ve signed non-disclosure agreements. And the vendor sits on the vulnerability and doesn’t fix it. And the researcher wants either to get paid and get notoriety, or just wants the internet to be a safer place for things to get fixed. And they basically have a gag order. And if they want to go public with the vulnerability, they risk the backlash, and I’m not too sure if that’s what happened in this case. But Valve and Steam, it doesn’t get much bigger in terms of an online gaming community. And I don’t know why anybody would be sitting on a dangerous bug impacting potentially as many users as use Steam – if it’s not hundreds of millions, at least 100 million.

LO: I feel like there needs to be some sort of mediator, almost, between these companies and the researchers who are participating in their programs, and a question of what role platforms like HackerOne or Bugcrowd play in that. But I do feel as though for something like this there needs to be someone who can say to the vendor, you can’t just kick someone off because they reported something. And then on the other hand, there needs to be someone who can go to researchers who may be having their own issues.
And this story has definitely split people: some are arguing that the incident points to an issue in bug bounty platforms, as I said, which is that you can’t just ban someone from the platform after they find something that you don’t like, but others are arguing that the researcher shouldn’t have disclosed the second bug in this manner, by essentially going around Valve and saying, well, you banned me, so now I’m going to disclose this zero-day vulnerability. But yeah, I mean, in terms of other big news this week, did you see the news about Facebook rolling out its new Clear History feature? That was kind of interesting.

TS: Yeah. Well, I guess it doesn’t have much of an impact for folks here in the US yet. And it’s not anything to get too excited about either. If you know more about it, please do share. But I don’t think we can all breathe a sigh of relief quite yet in terms of Facebook and the data that they collect.

LO: As you mentioned, it sounds like this is just being rolled out in Ireland, South Korea and Spain, so very random countries that don’t affect us here in the US. But yeah, I was reading reports and articles saying that while Facebook has this Clear History button that’s supposed to clear all your data – and consumers were really wanting that ability to wipe out all the data that Facebook has on us – it sounds like it’s not really what people had hoped for and what they had expected. And it doesn’t truly clear all of our history. It sounds like, essentially, it still takes your data, but it will anonymize you, so that I guess your data isn’t attached to you. But it’s still collecting your data, essentially. And I think that’s what has people riled up at this point.

TS: Yeah.
And I think that by virtue of the fact that you actually push the button, it sort of red-flags you as well, and it takes extra effort to anonymize you, but also to scrutinize you at the same time. And I gotta figure, this is one of those feel-good things that doesn’t serve anybody but Facebook. It says, oh, we’ve got a button for that now, and you don’t have to worry about it. We’re going to see a lot more of these types of privacy pushes. I know that you just wrote about something that Google’s working on as well, where you’ve got the big tech giants, who are feeling the heat from government – whether it be the US government or foreign governments, which are very concerned about data privacy and about the amount of information that’s being collected. And they’re coming up with a lot of new solutions to try to address that situation. And I think what they’re doing is, they obviously don’t want to hurt their bottom line, and they’re coming as close as they can to offering a genuine solution without actually hurting the billions in profit that they make every year. It’s a fine line that they’re walking. I think they’re really trying to head off a lot of the possible regulations that are coming down the pike by saying, look, we’ve got a button for that; look, we’ve got a browser extension for that.

LO: Yeah, I mean, it is interesting, because the alternative would be that we’re essentially consuming free content online in return for our data. So the other option that companies like Facebook and Google are giving us is that we would have to pay instead.

TS: I don’t know, Lindsey, it’s not a black-and-white issue, and I don’t think you’re suggesting it is. But you know, if they’re going to say you can use Google Chrome and surf the internet for free because we can take every little tiny piece of data that we can about you and monetize it, there has to be a middle ground.
And I’ve heard we’re seeing micropayments becoming a bigger reality. I’m not familiar with some of the success stories that newspapers and other websites are having in terms of giving access to these walled gardens that are going up left and right. I mean, maybe that’s where we’re headed. I don’t see Facebook ever charging for access to their world, or Google Chrome. I mean, it might be nice if you pay $10 a year and you don’t have to worry about being tracked as much. But I still feel like even if you paid $10 a year to Facebook, and they said they weren’t tracking you, they’d probably be like, oops, I’m sorry, we’re tracking you.

LO: So we’re basically almost in too deep at this point.

TS: Yeah, I don’t know. But I just don’t buy the argument that, because you’re getting it for free, we should be able to collect every website that you go to, browser fingerprints, IP address, where you do your banking, your health care provider – I mean, these guys are making billions and trillions. And if they weren’t so hungry to keep their bottom line, they might be able to figure out how to find a little more of a middle ground, where they don’t have to completely suck up every little detail of your life to be able to monetize it.

LO: Yeah, that’s fair. I don’t know. I think at this point, like you mentioned, they’re really trying to stave off regulation, and who knows if that’s going to work or not at this point, because it is getting so much traction. But all right. Well, I think we’ve had a very busy week, Tom. Thanks for coming on to talk a little bit more about the biggest stories that Threatpost wrote about this week. Hopefully, we’ll have a quieter weekend.

TS: Yeah, for sure. Thanks, Lindsey.

LO: All right. Thanks. Catch us next week on the Threatpost podcast.

Source

Another flaw has been found in Lenovo’s decommissioned Lenovo Solution Centre software, preinstalled on millions of older-model PCs made by the world’s leading computer maker. The vulnerability is a privilege escalation flaw that can be used to execute arbitrary code on a targeted system, giving an adversary Administrator- or SYSTEM-level privileges.

The research comes from Pen Test Partners, who found the flaw (CVE-2019-6177) and said the vulnerability is tied to Lenovo’s much-maligned Lenovo Solution Center (LSC) software. “The bug itself is a DACL (discretionary access control list) overwrite, which means that a high-privileged Lenovo process indiscriminately overwrites the privileges of a file that a low-privileged user is able to control,” wrote researchers at Pen Test Partners in a technical description of the bug posted Thursday. Lenovo issued a security bulletin regarding the bug and recommended users upgrade to a similar utility called Lenovo Vantage.

Researchers describe the bug as giving hackers with low-privilege access to a PC the ability to write a “hardlink” file to a controllable location. This “hardlink” file would be a low-privilege “pseudo file” that could be used to point to a second, privileged file. “When the Lenovo process runs, it overwrites the privileges of the hardlinked file with permissive privileges, which lets the low-privileged user take full control of a file they shouldn’t normally be allowed to,” researchers wrote. “This can, if you’re clever, be used to execute arbitrary code on the system with Administrator or SYSTEM privileges.”

The software’s intended purpose is to monitor the overall health of the PC. It monitors the battery and firewall and checks for driver updates. It comes preinstalled on the majority of Lenovo PCs, desktops and laptops alike, for both businesses and consumers. The problematic version is 03.12.003, which Lenovo said is no longer supported. According to Lenovo, the software was originally released in 2011.
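The hardlink primitive behind this class of bug can be sketched in a few lines of Python, using POSIX permissions to stand in for Windows DACLs. The filenames are made up; the point is that a privileged process which blindly rewrites permissions on an attacker-chosen path is actually modifying the inode shared by the hardlink.

```python
# Conceptual sketch of a hardlink-based permissions overwrite, in the
# spirit of CVE-2019-6177 (which targets Windows DACLs, not POSIX modes).
import os
import stat
import tempfile

workdir = tempfile.mkdtemp()
protected = os.path.join(workdir, "protected.cfg")   # file the user should not control
planted = os.path.join(workdir, "user_controlled")   # path the low-privileged user controls

with open(protected, "w") as f:
    f.write("sensitive settings")
os.chmod(protected, 0o600)        # owner-only, like a restrictive DACL

os.link(protected, planted)       # the low-privileged user plants a hardlink

# The "high-privileged process" indiscriminately loosens permissions on
# the path it was handed; both names share one inode, so the protected
# file's permissions are what actually change.
os.chmod(planted, 0o666)

print(oct(stat.S_IMODE(os.stat(protected).st_mode)))
```

After the permissive rewrite, the low-privileged user can edit a file a privileged process will later trust, which is the stepping stone to running code as Administrator or SYSTEM in the real attack.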
Lenovo said LSC has been “officially” designated end-of-life since November 2018. However, a version is still available for download via the Lenovo website.

Lenovo’s LSC software has been a source of many headaches for the company. In 2016, researchers found a similar escalation-of-privileges bug. In 2015, the hacking group Slipstream/RoL demonstrated a proof-of-concept attack that exploited an LSC bug that allowed a malicious web page to execute code on Lenovo PCs with system privileges.

The LSC security flaw is the most recent in a long list of security fumbles that have plagued Lenovo. In February 2015, Lenovo was put in the security hot seat when researchers discovered a piece of software called Superfish that injected ads on websites and could be abused by hackers to read encrypted passwords and web-browsing data. In August of that year, Lenovo again landed in hot water when it was criticized for automatically downloading Lenovo Service Engine software – labeled as unwanted bloatware by many. Worse, when users removed the software, Lenovo systems were configured to download and reinstall the program without the PC owner’s consent.

Source

On Tuesday of this week, one of the more popular underground stores peddling credit and debit card data stolen from hacked merchants announced a blockbuster new sale: More than 5.3 million new accounts belonging to cardholders from 35 U.S. states. Multiple sources now tell KrebsOnSecurity that the card data came from compromised gas pumps, coffee shops and restaurants operated by Hy-Vee, an Iowa-based company that operates a chain of more than 245 supermarkets throughout the Midwestern United States. Hy-Vee, based in Des Moines, announced on Aug. 14 it was investigating a data breach involving payment processing systems that handle transactions at some Hy-Vee fuel pumps, drive-thru coffee shops and restaurants. The restaurants affected include Hy-Vee Market Grilles, Market Grille Expresses and Wahlburgers locations that the company owns and operates. Hy-Vee said it was too early to tell when the breach initially began or for how long intruders were inside their payment systems. But typically, such breaches occur when cybercriminals manage to remotely install malicious software on a retailer’s card-processing systems. This type of point-of-sale malware is capable of copying data stored on a credit or debit card’s magnetic stripe when those cards are swiped at compromised payment terminals. This data can then be used to create counterfeit copies of the cards. Hy-Vee said it believes the breach does not affect payment card terminals used at its grocery store checkout lanes, pharmacies or convenience stores, as these systems rely on a security technology designed to defeat card-skimming malware. “These locations have different point-of-sale systems than those located at our grocery stores, drugstores and inside our convenience stores, which utilize point-to-point encryption technology for processing payment card transactions,” Hy-Vee said. “This encryption technology protects card data by making it unreadable. 
Based on our preliminary investigation, we believe payment card transactions that were swiped or inserted on these systems, which are utilized at our front-end checkout lanes, pharmacies, customer service counters, wine & spirits locations, floral departments, clinics and all other food service areas, as well as transactions processed through Aisles Online, are not involved.” According to two sources who asked not to be identified for this story — including one at a major U.S. financial institution — the card data stolen from Hy-Vee is now being sold under the code name “Solar Energy” at the infamous Joker’s Stash carding bazaar.

An ad at the Joker’s Stash carding site for “Solar Energy,” a batch of more than 5 million credit and debit cards sources say was stolen from customers of supermarket chain Hy-Vee.

Hy-Vee said the company’s investigation is continuing. “We are aware of reports from payment processors and the card networks of payment data being offered for sale and are working with the payment card networks so that they can identify the cards and work with issuing banks to initiate heightened monitoring on accounts,” Hy-Vee spokesperson Tina Pothoff said.

The card account records sold by Joker’s Stash, known as “dumps,” apparently stolen from Hy-Vee are being sold for prices ranging from $17 to $35 apiece. Buyers typically receive a text file that includes all of their dumps. Those individual dump records — when encoded onto a new magnetic stripe on virtually anything the size of a credit card — can be used to purchase stolen merchandise in big box stores. As noted in previous stories here, the organized cyberthieves involved in stealing card data from main street merchants have gradually moved down the food chain from big box retailers like Target and Home Depot to smaller but far more plentiful and probably less secure merchants (either by choice or because the larger stores became a harder target).
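The point-to-point encryption distinction Hy-Vee draws above can be sketched with a toy model: memory-scraping malware hunts for plaintext track data in the point-of-sale system’s RAM, and with P2PE only ciphertext is ever there. The XOR pad below is a stand-in for real terminal hardware encryption, and the track data uses a standard test card number, not real cardholder data.

```python
# Toy model of why point-to-point encryption defeats RAM-scraping malware.
import re
import secrets

TRACK2 = "4111111111111111=25121010000012345678"  # standard test PAN, fake service data

def p2pe_encrypt(track, pad):
    """Stand-in for hardware encryption performed at the card reader."""
    return bytes(a ^ b for a, b in zip(track.encode(), pad))

def ram_scraper(memory):
    """What point-of-sale malware does: scan memory for Track 2-shaped data."""
    return re.findall(rb"\d{13,19}=\d+", memory)

pad = secrets.token_bytes(len(TRACK2))
plain_ram = TRACK2.encode()            # non-P2PE terminal: plaintext reaches the POS
p2pe_ram = p2pe_encrypt(TRACK2, pad)   # P2PE terminal: the POS only sees ciphertext

print(len(ram_scraper(plain_ram)))     # card data found in memory
print(len(ram_scraper(p2pe_ram)))      # nothing recognizable to skim
```

This is why Hy-Vee believes its grocery, drugstore and convenience-store lanes, which use P2PE, are unaffected, while the fuel pumps and restaurant terminals that handled plaintext track data were exposed.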
It’s really not worth spending time worrying about where your card number may have been breached, since it’s almost always impossible to say for sure and because it’s common for the same card to be breached at multiple establishments during the same time period. Just remember that while consumers are not liable for fraudulent charges, it may still fall to you the consumer to spot and report any suspicious charges. So keep a close eye on your statements, and consider signing up for text message notifications of new charges if your card issuer offers this service. Most of these services also can be set to alert you if you’re about to miss an upcoming payment, so they can also be handy for avoiding late fees and other costly charges.

Source

Google is launching an experimental, open-source browser extension aimed at increasing transparency around online advertising by displaying information about the ads that are shown to users. The browser extension is an integral part of a new Google initiative, announced Thursday, to develop a set of open standards dubbed Privacy Sandbox. The standards aim to help internet browsers strike a delicate balance between protecting web users’ privacy and ensuring that advertisers who collect browser-based data aren’t completely shut out.

“To aid with this dialog and help explore the feasibility of this proposal, Google will launch an early, experimental, open-source browser extension that will display information for ads shown to a user and will work across different browsers,” said Google in its proposal. “We plan to start with the ads that Google shows on our own properties and on the properties of our publishing partners. We will also be providing open protocols to enable other advertising companies to use the browser extension in order to disclose similar types of information to their users, if they choose,” it stated.

While there has been consumer pushback when it comes to browser data privacy, Google argues that the content consumed by users of Chrome and other browsers is free only because it’s supported by data-driven advertisers. With this in mind, Google’s Privacy Sandbox initiative aims to bridge the gap between consumers exploring online content for free and keeping private data secure. The move would also allow advertisers to gather a non-invasive amount of data on consumers without turning to shady practices such as browser fingerprinting. A large part of the initiative revolves around users having more control over what they’re able to see and control in terms of data being collected. That’s where the experimental extension comes in.
The extension, which will work across different browsers, aims to surface more information and give users better insight into why ads appear, who is responsible for them and what caused a given ad to be shown. “We want to find a solution that both really protects user privacy and also helps content remain freely accessible on the web,” Justin Schuh, director with Chrome Engineering, said Thursday in a post. “At I/O we announced a plan to improve the classification of cookies, give clarity and visibility to cookie settings, as well as plans to more aggressively block fingerprinting… Collectively we believe all these changes will improve transparency, choice, and control.”

Privacy Sandbox will also look at other data privacy issues on the internet. The initiative, for instance, will address what browsers could do to allow publishers to show relevant ads to consumers while protecting consumers’ private browsing data as much as possible. Google said one idea being explored is delivering ads to a large group of similar types of web browsers, without letting advertisers identify an individual’s data. “New technologies… show that it’s possible for your browser to avoid revealing that you are a member of a group that likes Beyoncé and sweater vests until it can be sure that group contains thousands of other people,” said Schuh.

Other aspects that will be explored by Privacy Sandbox include how to address the measurement needs of advertisers without letting them track a specific user across sites, as well as how to fight fraudulent behavior online, such as false transactions or fake ad activity designed to rip off advertisers.

Cookie Blocking

Over the past few years, web browsers have looked at various ways to help consumers better protect their data – including limiting or even fully blocking cookies. A year ago, for instance, Mozilla announced plans to disable cross-site tracking by default in its Firefox browser.
However, Google argues that attempts like large-scale blocking of cookies – without another way to deliver relevant ads – may significantly reduce publishers’ primary means of funding, “which jeopardizes the future of the vibrant web.” For instance, it said, recent studies show that when advertising is made less relevant by removing cookies, funding for publishers falls by 52 percent on average. Cookie blocking could also encourage developers to turn to shady techniques such as browser fingerprinting. Browser fingerprinting, also known as canvas fingerprinting, is when websites harvest browser data to produce a single, unique identifier to track users across multiple websites without any actual identifier persistence on the user’s machine. “With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed, to generate a unique identifier which can then be used to match a user across websites,” said Schuh. “Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected. We think this subverts user choice and is wrong.” In May, Google announced that future versions of Chrome will modify how cookies work so that developers need to explicitly specify which cookies are allowed to work across websites — and which could be used to track users. “Collectively we believe all these changes will improve transparency, choice, and control,” said Schuh. Browsers Pushing For Privacy Google in September 2018 sought to clarify its data privacy initiatives after several critics panned issues in Chrome 69 – including cryptographer and professor at Johns Hopkins University Matthew Green, who blasted Google for what he said were questionable privacy policies. He noted that Google automatically signs users into the Chrome browser when they sign into any other Google service. 
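The fingerprinting technique Schuh describes can be sketched in a few lines: hash a handful of individually innocuous traits into one stable identifier. The attribute names below are illustrative; real fingerprinting scripts gather far more signals (canvas rendering, audio stack, WebGL renderer and so on).

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Combine small, individually innocuous browser traits into one
    stable identifier. Because the hash is deterministic, the same
    browser produces the same ID on every site -- with no cookie stored
    on the user's machine for them to clear."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
```

Changing even one trait (say, installing a font) changes the identifier, which is why fingerprints are stable only in aggregate but still effective for tracking.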
On the heels of that, browsers have sought to make strides in better protecting users’ privacy; in June, Firefox and Chrome received updates to add security and privacy tools that help with password management and help block sites that track users. For instance, Google Chrome 75 implemented a way to address weak passwords by porting Chrome’s built-in password manager to the Android version of its browser. Browser tracking methods have also come under scrutiny over the past year: The Electronic Frontier Foundation in a report issued in June decried websites participating in sneaky tracking methods like browser fingerprinting, which the organization claimed were trying to skirt privacy regulations like GDPR. Moving forward, Google hopes to follow the web standards process by seeking industry feedback on its initial ideas for the Privacy Sandbox. “While Chrome can take action quickly in some areas (for instance, restrictions on fingerprinting) developing web standards is a complex process, and we know from experience that ecosystem changes of this scope take time,” said Google. “They require significant thought, debate, and input from many stakeholders, and generally take multiple years.” Interested in more on the internet of things (IoT)? Don’t miss our free Threatpost webinar, “IoT: Implementing Security in a 5G World.” Please join Threatpost senior editor Tara Seals and a panel of experts as they offer enterprises and other organizations insight about how to approach security for the next wave of IoT deployments, which will be enabled by the rollout of 5G networks worldwide. Click here to register.

Source

How often do we hear Willie Sutton’s famous (but probably apocryphal) quote about robbing banks because “that’s where the money is?” This gets invoked in the context of information security in general and mobile devices in particular, and there’s a reason: Given the estimates from institutions like the Pew Research Center and others suggesting that there are over 2.5 billion smartphones currently in use, the money is clearly there for cybercriminals. Indeed, as we increasingly use mobile devices for most of our personal day-to-day business, untethered from desks and clunky full-sized computers, they become increasingly attractive to bad actors. This is exacerbated by the fact that mobile devices also frequently blur the line between personal and professional use, with one IDG survey showing an 85 percent increase in users who access business applications from their mobiles. All of that makes these devices almost irresistible targets for attacks. Let’s look at how mobile attacks are developing, and then talk about five crucial questions that any organization needs to answer in order to build a successful mobile defense strategy. How Mobile Attacks Are Evolving Back to the bank robber. Perhaps more interesting than pithy quotes expressing the obvious are the parallels between Willie Sutton’s tactics, techniques and procedures (TTPs) and those of attacks targeting modern mobile devices. One notable connection is mobile banking trojans: Like Sutton, these are essentially bank robbers. These began appearing as early as 2010 and continue to represent a significant percentage of mobile malware, especially in third-party app stores. Sutton was also famous for using disguises in both his heists and his prison escapes. He is alleged to have masqueraded as — among other things — a policeman, a postman and a prison guard. 
Similarly, attacks against mobile devices increasingly rely on cloaking themselves in legitimate forms, including the impersonation of popular apps, and the abuse of Accessibility features and other permissions. For instance, in May of this year, WhatsApp made headlines when attackers exploited a vulnerability in a built-in calling function in the app to spy on end users. While this particular attack was narrowly targeted and didn’t affect the general public, it highlights a novel approach to mobile attacks: Finding and attacking vulnerable apps that already have access to the resources or data that bad actors seek. As mobile operating system vendors continue to strengthen baseline security and restrict the interfaces that connect to sensitive capabilities, exploiting apps that already have legitimate access to those features will become a more efficient and useful technique. It’s also worth emphasizing one major difference between the crimes of Sutton in his heyday and today’s mobile attacks: While Sutton had to go inside the banks he intended to rob, cybercrime has no such old-school need for physical proximity. And because the desired objects are digital, “valuables” may exist in multiple places, which may or may not be equally well-protected. 5 Questions for Developing a Mobile Cyber-Defense Organizations need to develop strategies that can keep up with not only the growing number of mobile devices, but also the increasingly sophisticated attacks mounted by cybercriminals. Asking five critical questions can help you build an appropriate defense. Where does sensitive corporate data reside? Is your data centralized on-premises, or is it in the cloud? Do you have data housed on mobile endpoints? If you’re like most, the answer is most likely “all of the above,” regardless of established policy. It’s hard to prevent data sprawl, so your focus could be more on how to manage it. 
Modern work — and the data that supports it — is highly distributed, but that doesn’t mean it is uncontrollable. A good place for organizations to start is by taking an inventory of their endpoints, as well as both their on-premises and cloud services, so they know where everything is. How can that sensitive data be accessed? Which devices are allowed to access that inventoried data, and how are they authenticated? Remember that if you aren’t explicitly denying access to an endpoint, you are probably implicitly permitting it. In practical terms, organizations need ways to incorporate data from both configuration management databases (CMDBs) and endpoint protection platforms (EPPs) into their authorization schemes. With this additional context, things like device type, asset ownership, current configuration and overall “cleanliness” of the endpoints that are being used to access data are taken into account when giving a device a green light to access sensitive data. Ideally, you want to dynamically tailor the permissions. Also, a number of network access control (NAC) and SSL-VPN products offer capabilities like these for on-premises infrastructure, so organizations must also consider how they will marry these capabilities with their cloud authorization schemes. Who is “behind the screen” of those devices? How do you know that the endpoint is being used by whom you think it is? This is an especially important question in cases where front-line workers share devices. Naturally, strong authentication should be your first approach. Standard authentication relies on what someone knows (passwords) and possession of a device. Inherence factors (what someone is), however, tend to be much stronger. When the increasingly capable biometric sensors on modern devices are coupled with authentication specifications like FIDO2, credentials are both better protected and much more difficult to spoof. How is data protected in transit to and at rest on those devices? 
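The context-aware authorization described above can be sketched as a simple policy function. This is a minimal sketch only; the field names and verdict values are invented for illustration, not any CMDB or EPP vendor’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class DeviceContext:
    # Fields an organization might pull from a CMDB and an EPP agent.
    # Names and values here are illustrative, not a vendor schema.
    device_type: str        # e.g. "managed-phone", "byod-phone"
    corporate_owned: bool
    os_patched: bool
    epp_verdict: str        # "clean", "suspicious" or "infected"

def authorize(ctx: DeviceContext, data_sensitivity: str) -> bool:
    """Dynamically tailor permissions: the less managed or less 'clean'
    the endpoint, the less sensitive the data it may access."""
    if ctx.epp_verdict == "infected":
        return False                      # deny everything to infected devices
    if data_sensitivity == "high":
        return (ctx.corporate_owned and ctx.os_patched
                and ctx.epp_verdict == "clean")
    return True                           # lower-sensitivity data: allow if not infected
```

The point of the design is that the decision is recomputed per request, so a device that falls out of compliance loses access to sensitive data without any manual revocation.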
At this point, encryption is (or should be) ubiquitous. While some estimates suggest that 87 percent of websites are capable of TLS and most devices now ship with encrypted filesystems, organizations still require the infrastructure to provide encrypted transport to access legacy systems and centralized management of endpoint encryption. Unified Endpoint Management (UEM) platforms have been solving the latter for over a decade. The former can be addressed with tools like per-app VPNs, which can tunnel traffic based on user agent and destination network/port information, as opposed to the purely network-centric rules of legacy VPNs. This has the advantage of preserving user privacy while creating robust micro-segmentation. What apps are running on those devices? Can unknown or unauthorized apps access and potentially circumvent the controls for that sensitive data? User agents are, perhaps, the most overlooked aspect of modern data-loss prevention (DLP). The permissions that apps have and their method of installation have a significant impact on administrators’ ability to control their behavior. Because many SaaS apps are built largely on top of RESTful APIs, the backend is often indifferent to the user agent. Organizations, therefore, need a reliable method of application inventory and must extend their authorization framework to include applications, as well as users and devices. The answers to the questions above will vary widely depending on lines of business and the type of data being handled. These are important to tackle: as more sensitive information of both the personal and professional variety finds its way onto mobile devices, enterprises should expect criminals to “follow the money.” Be prepared for increasingly common, clever and sophisticated schemes being used to gain unauthorized access to your data because… “that’s where the money is.” James Plouffe is strategic technologist at MobileIron. 
Please check out all of the latest posts in our Infosec Insider Community.

Source

A music-streaming app offered on Google Play, harboring spyware that stole victims’ contacts, files and SMS messages, made its way onto the official Android app marketplace not once, but twice. The spyware was hidden in an app called Radio Balouch (also known as RB Music). The app itself was actually a fully-functional streaming radio app for music enthusiasts interested in listening to music from the Balouchi region – an area in eastern Iran, western Pakistan and southern Afghanistan. But behind the scenes, the app was stealing its users’ personal data. “Besides Google Play, the malware, detected by ESET as Android/Spy.Agent.AOX, has been available on alternative app stores,” said Lukas Stefanko, security researcher with ESET, in a Thursday post. “Additionally, it has been promoted on a dedicated website, via Instagram, and YouTube. We have reported the malicious nature of the campaign to the respective service providers, but received no response.” Radio Balouch made its way past Google’s app vetting policies twice, researchers said, but it was swiftly removed by Google both times after they alerted the company. The first app discovered was reported on July 2 and removed within 24 hours. The app then reappeared on July 13 and was again swiftly removed. The app, which works on Android 4.2 and above, had over 100 installations each time it appeared on Google Play. Spyware Functionality What makes the spyware stand out is that it was built on the AhMyth malware, available on Github as an open source project. This remote access tool was made publicly available in late 2017; since its release, researchers have witnessed various apps that are based on it: “however, the Radio Balouch app is the very first of them to appear on the official Android app store,” he said. The app’s internet radio feature is bundled into the functionality of the AhMyth malware in one malicious app. 
While the internet radio component is fully functional and plays a stream of Balouchi music after installation, the app also has capabilities to steal contacts, harvest files stored on the device and send and steal SMS messages on the affected device. After installation, the app opens a home screen with music options, and offers the option to register and log in – an option which researchers believe is actually an attempt to steal user credentials for the purposes of phishing. After installation, the app also starts requesting permissions, including permission to access files on the device (which is a legitimate permission for a radio app to enable its functionality) and to access contacts (under the guise of a functionality for the user to share the app with friends in their contact list). Information about compromised devices and victim contact lists would be sent to a command-and-control server. Upon further investigation, Stefanko found that the app was distributed from a dedicated website (radiobalouch[.]com), which utilized a server that was also used for the spyware’s command-and-control communications. The domain was registered on March 30, 2019, but was taken down shortly after the apps were reported. Despite being removed from Google Play, the malicious radio app is still available on third-party app stores as of Thursday, researchers said. Google Play The incident throws Google Play’s app vetting processes into question. The official Android app marketplace has continued to weed out malicious apps delivering bad functions, from adware to mobile trojans. Earlier in 2019, Google Play removed at least 85 fake apps harboring adware, disguised as game, TV and remote control simulator apps. Once downloaded, the fake apps hid themselves on the victim’s device and continued to show a full-screen ad every 15 minutes. 
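The permission over-reach described above – a radio app requesting access to SMS and contacts – is exactly the kind of mismatch a simple audit can flag. The sketch below is a hypothetical policy, not Google’s vetting logic; the permission names mirror Android’s manifest constants but the category baseline is invented for illustration.

```python
# Hypothetical per-category baseline of plausible Android permissions.
EXPECTED_PERMISSIONS = {
    "radio": {"INTERNET", "READ_EXTERNAL_STORAGE"},
}

def suspicious_permissions(category: str, requested: set) -> set:
    """Return requested permissions exceeding the category baseline --
    e.g. a streaming-radio app asking to read SMS messages or the
    contact list, as Radio Balouch did."""
    return set(requested) - EXPECTED_PERMISSIONS.get(category, set())
```

An empty result means the app requested nothing beyond what its category plausibly needs; any leftover entries deserve scrutiny before install.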
Last year, Google removed 22 malicious adware apps ranging from flashlights and call recorders to Wi-Fi signal boosters that had been downloaded up to 7.5 million times from the Google Play marketplace. And, an Android app booby-trapped with malware was recently taken down from Google Play in November — after being available for download for almost a year. “The (repeated) appearance of the Radio Balouch malware on the Google Play store should serve as a wake-up call to both the Google security team and Android users. Unless Google improves its safeguarding capabilities, a new clone of Radio Balouch or any other derivative of AhMyth may appear on Google Play,” said Stefanko. Google did not immediately respond to a request for comment from Threatpost.

Source

A researcher has disclosed a zero-day privilege-escalation vulnerability for the Steam gaming client after he said he was barred from the bug bounty program of Steam’s owner, Valve. It is the second zero-day privilege-escalation vulnerability released in two weeks by independent researcher Vasily Kravets for the Steam gaming client, a video game digital distribution platform developed by Valve Corporation. Despite being banned from Valve’s bug bounty program on the HackerOne platform, Kravets on Tuesday disclosed a new flaw in the Steam client that he said would be simple for any OS user to exploit. “Not long ago I published an article about Steam vulnerability,” said Kravets in a Tuesday evening post. “I received a lot of feedback. But Valve didn’t say a single word, HackerOne sent a huge letter and, mostly, kept silence. Eventually things escalated with Valve and I got banned by them on HackerOne — I can no longer participate in their vulnerability rejection program (the rest of H1 is still available though).” Kravets disclosed his first zero-day vulnerability affecting Steam earlier in August. The flaw, disclosed Aug. 7, is a privilege-escalation vulnerability that can allow an attacker to level up and run any program with the highest possible rights on any Windows computer with Steam installed. It was released after Valve said it wouldn’t fix it (Valve then published a patch that the same researcher said can be bypassed). Like last week’s vulnerability, the newest flaw found by Kravets also enables local privilege escalation. Kravets told Threatpost he is not aware of a patch for the vulnerability. This most recent vulnerability stems from a combination of insecure permissions in Steam’s folders, insecure permissions in Steam’s branch of the registry and insufficient checks during Steam’s self-update process, Kravets told Threatpost. 
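Insecure folder permissions of the kind Kravets describes can be hunted for with a simple audit. The sketch below parses `icacls`-style Windows ACL output lines and flags entries that grant broad groups write, modify or full control; it is an illustrative heuristic of my own, not Valve’s tooling or a complete ACL parser.

```python
def overly_permissive(icacls_lines):
    """Flag ACL entries granting broad principals write (W), modify (M)
    or full control (F) -- the kind of loose folder permission that lets
    any local user tamper with a privileged service's files.
    Expects `icacls`-style lines such as 'Everyone:(OI)(CI)(F)'."""
    broad_principals = ("Everyone", "BUILTIN\\Users",
                        "NT AUTHORITY\\Authenticated Users")
    risky_rights = ("(F)", "(M)", "(W)")
    flagged = []
    for line in icacls_lines:
        entry = line.strip()
        if entry.startswith(broad_principals) and entry.endswith(risky_rights):
            flagged.append(entry)
    return flagged
```

Entries granting full control to SYSTEM or Administrators are expected; it is the broad, low-privilege principals with write access that create local privilege-escalation opportunities.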
No specific privileges or requirements are needed for an attacker to take control of the game client – while the privilege escalation attack is local, someone wouldn’t need physical access: “Any user on a PC could do all actions from exploit’s description (even ‘Guest’ I think, but I didn’t check this). So [the] only requirement is Steam,” Kravets told Threatpost. To prepare the exploitation environment, Kravets said he first obtained the CreateMountPoint.exe and SetOpLock.exe files. Then, he made small changes to the Steam file structure: “Our goal is to have [a] folder with Steam.exe and steamclient.dll, and without [the] ‘bin’ folder,” he said. This can be done in two ways: renaming/removing big folders from Steam root folders, or changing the InstallPath value to a path to any folder in the HKLM\SOFTWARE\Wow6432Node\Valve\steam registry key. After these changes have been made, Kravets said it is possible to execute a dynamic link library (DLL) within the Steam Client Service due to the insufficient checks existing in the self-update process, enabling “maximum privileges” for the user. With Steam saying that it has more than a billion registered users worldwide (and 90 million active users, who sign up to play games like Assassin’s Creed, Grand Theft Auto V and Warhammer), the implications of such privilege escalation attacks are potentially massive. “Despite any application itself could be harmful, achieving maximum privileges can lead to much more disastrous consequences,” said Kravets. 
“For example, disabling firewall and antivirus, rootkit installation, concealing of process-miner, theft [of] any PC user’s private data — is just a small portion of what could be done.” Bug Bounty Ban After finding the first vulnerability that was disclosed earlier in August, Kravets submitted a bug report on June 15, which was rejected on June 16 because the bug enables “attacks that require the ability to drop files in arbitrary locations on the user’s filesystem.” After disputing this, the report was reopened – and then closed again on July 20 for the same reason, along with a note that “attacks…require physical access to the user’s device.” (HackerOne message provided by Kravets.) Though HackerOne told Kravets that he was not allowed to publicly release the bug details, he did so anyway, 45 days after the initial disclosure. Since then, the HackerOne report was reopened, and Steam has updated the client to address a “privilege escalation exploit using symbolic links in Windows registry.” However, Kravets said that another researcher showed the fix could be bypassed. From there, Kravets said “eventually things escalated with Valve” and he ultimately received a message from HackerOne saying “Team Valve has elected to no longer receive reports from you.” “In short, Valve and H1 decide to remove me from [the] program due to my public disclosure,” Kravets told Threatpost. “I fully understand this and have no objections. But I still think that the first disclosure [was the] right move. Before my post Valve had no intensions to patch the vulnerability. A vulnerability is a vulnerability even if it [does] not fit into the security model.” Other researchers who have participated in Valve’s bug bounty program have criticized the company for its program and how it treats vulnerabilities such as local privilege escalation. 
At this point, after being banned from Valve’s bug bounty program, Kravets told Threatpost he has not yet heard from Valve as of Wednesday regarding the most recent vulnerability. “It’s sad and simple — Valve keeps failing,” Kravets said. “Last patch, that should have solved the problem, can be easily bypassed (https://twitter.com/general_nfs/status/1162067274443833344) so the vulnerability still exists. Yes, I’ve checked, it works like a charm.” Valve did not respond to a request for comment about the vulnerability, bug bounty incident and whether a patch is available. HackerOne did not have a comment.

Source

Texas officials have been left scrambling after up to 22 Texas entities – the majority of which are local governments – were hit by a coordinated ransomware attack on Friday. So far, these include the cities of Borger and Keene, and Texas officials say the attacks are all connected and carried out by a single threat actor. Further details are slim regarding the ransomware attacks, which began on the morning of Aug. 16 – but what we do know is that the attacks are the first of their kind: coordinated, as opposed to a hacker targeting a single “opportunity.” Allan Liska, threat intelligence analyst with Recorded Future, talks to Threatpost about how last week’s cyberattacks showcase a potential shift in how future ransomware attacks will be launched. For direct download, click here.

Source

Cisco Systems is warning of six critical vulnerabilities impacting a wide range of its products, including its Unified Computing System server line and its small business 220 Series Smart switches. In all instances of the vulnerabilities, a remote unauthenticated attacker could take over targeted hardware. Four of the critical bugs (CVE-2019-1938, CVE-2019-1935, CVE-2019-1974 and CVE-2019-1937) impact Cisco’s Unified Computing System (UCS) components. Each has a critical-severity rating and a CVSS score of 9.8. One of the bugs (CVE-2019-1935) is a default-user-credential flaw and impacts Cisco Integrated Management Controller Supervisor, Cisco UCS Director and Cisco UCS Director Express for Big Data SCP. The bug “could allow an unauthenticated, remote attacker to log in to the CLI of an affected system by using the SCP User account (scpuser), which has default user credentials,” according to Cisco. Another UCS bug (CVE-2019-1938) impacts Cisco UCS Director and Cisco UCS Director Express for Big Data API. In this case, a “vulnerability in the web-based management interface of Cisco UCS Director and Cisco UCS Director Express for Big Data could allow an unauthenticated, remote attacker to bypass authentication and execute arbitrary actions with administrator privileges on an affected system,” Cisco said. With each of the four UCS bugs, Cisco said, no known public exploits are available and systems impacted by the flaws have not been attacked. Patches are available for each of the four flaws. Cisco 220 Series Smart Switch Users Urged to Patch Cisco is also warning of two remote code execution bugs impacting its small business 220 Series Smart switches. In both cases, an unauthenticated remote adversary can trigger a buffer overflow attack and execute arbitrary code to gain control of the switch’s operating system. 
Public exploit code for both critical bugs is available online; however, there are no reported incidents leveraging the bugs, Cisco said. Both bugs (CVE-2019-1913 and CVE-2019-1912) were first made public Aug. 6, but on Wednesday were updated with additional information. The most serious of the 220 Series Smart switch bugs (CVE-2019-1913) has a CVSS rating of 9.8. According to Cisco, the small business 220 Series Smart switch vulnerability is “due to insufficient validation of user-supplied input and improper boundary checks when reading data into an internal buffer. An attacker could exploit these vulnerabilities by sending malicious requests to the web management interface of an affected device. Depending on the configuration of the affected switch, the malicious requests must be sent via HTTP or HTTPS.” Vulnerable 220 switches are running firmware 1.1.4.4 with the web management interface enabled. “To determine whether the web management interface is enabled via either HTTP or HTTPS, administrators can use the show running-config command on the device CLI. If both of the following lines are present in the configuration, the web management interface is disabled and the device is not vulnerable,” wrote Cisco. In both cases, Cisco credited researchers at the VDOO Disclosure Program for identifying the critical vulnerabilities. Medium-Severity Bugs Wednesday’s critical bug news was part of a wider disclosure of vulnerabilities by Cisco that included three medium-severity bugs. Two of the flaws (CVE-2019-1914 and CVE-2019-1949) affect Cisco’s 220 series switch and the company’s Firepower Management Center. The third medium-severity bug (CVE-2019-9506) is tied to Microsoft’s August Patch Tuesday disclosure of the so-called DejaBlue vulnerability. Cisco lists six IP-based phones impacted by the flaw along with versions (DX70 and DX80) of its Webex collaboration software.
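Cisco’s guidance above boils down to a configuration audit: check the firmware version and whether the web management interface is enabled in the running config. The sketch below assumes IOS-style `no ip http server` / `no ip http secure-server` negation lines – the advisory’s exact config lines are not quoted in the article, so those strings are an assumption to verify against Cisco’s advisory for your platform.

```python
def web_mgmt_disabled(running_config: str) -> bool:
    """True when the web management interface is off over both HTTP and
    HTTPS. Assumes IOS-style negation lines (an assumption -- confirm
    the exact strings in Cisco's advisory for your device)."""
    lines = {line.strip() for line in running_config.splitlines()}
    return ("no ip http server" in lines
            and "no ip http secure-server" in lines)

def likely_vulnerable(firmware: str, running_config: str) -> bool:
    # The reported vulnerable combination: firmware 1.1.4.4 with the
    # web management interface enabled over HTTP or HTTPS.
    return firmware == "1.1.4.4" and not web_mgmt_disabled(running_config)
```

Run against the output of `show running-config`, this flags only switches on the affected firmware that still expose the web UI; disabling the interface or upgrading firmware clears the flag.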

Source