Your Phone Is Listening, and It’s Not Paranoia
Here’s how I got to the bottom of the ads-coinciding-with-conversations mystery.
A couple of years ago, something strange happened. A friend and I were sitting at a bar, iPhones in our pockets, discussing our recent trips to Japan and how we’d like to go back. The very next day, we both received pop-up ads on Facebook for cheap return flights to Tokyo. It seemed like just a spooky coincidence, but then everyone seems to have a story about their smartphone listening to them. So is this all paranoia, or are our smartphones actually listening?
According to Dr. Peter Henway—the senior security consultant for cybersecurity firm Asterix, and a former lecturer and researcher at Edith Cowan University—the short answer is yes, but perhaps not in a way that’s as diabolical as it sounds.
For your smartphone to actually pay attention and record your conversation, there needs to be a trigger, such as you saying “hey Siri” or “okay Google.” In the absence of these triggers, any audio you provide is only processed within your own phone. This might not seem like a cause for alarm, but any third-party applications on your phone—like Facebook, for example—still have access to this “non-triggered” data. And whether or not they use that data is really up to them.
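The trigger-gating idea can be sketched in a few lines. This is a hypothetical illustration of the logic only, not any vendor’s actual implementation—real assistants use on-device acoustic models rather than string matching, and the phrase list and function names here are invented for the example:

```python
# Hypothetical sketch of trigger-word gating: audio stays on the device
# unless a snippet begins with a known trigger phrase.
TRIGGERS = ("hey siri", "okay google")

def should_upload(transcript: str) -> bool:
    """Return True only if the snippet begins with a trigger phrase."""
    normalized = transcript.lower().strip()
    return normalized.startswith(TRIGGERS)

snippets = [
    "okay google, find flights to Tokyo",
    "we should go back to Japan sometime",
]
# Only the first snippet would be marked for upload.
uploaded = [s for s in snippets if should_upload(s)]
```

The point of the sketch is the asymmetry: everything is transcribed locally, but only trigger-prefixed snippets leave the phone—unless, as the article goes on to explain, a third-party app with microphone permission decides otherwise.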
“From time to time, snippets of audio do go back to [other apps like Facebook’s] servers, but there’s no official understanding of what the triggers for that are,” explains Peter. “Whether it’s timing or location-based or usage of certain functions, [apps] are certainly pulling those microphone permissions and using them periodically. All the internals of the applications send this data in encrypted form, so it’s very difficult to define the exact trigger.”
He goes on to explain that apps like Facebook or Instagram could have thousands of triggers. An ordinary conversation with a friend about needing a new pair of jeans could be enough to set one off. The key word here is “could,” because although the technology is there, companies like Facebook vehemently deny listening to our conversations.
“Seeing as Google is open about it, I would personally assume the other companies are doing the same,” Peter tells me. “Really, there’s no reason they wouldn’t be. It makes good sense from a marketing standpoint, and their end-user agreements and the law both allow it, so I would assume they’re doing it, but there’s no way to be sure.”
With this in mind, I decided to try an experiment. Twice a day for five days, I said a bunch of phrases that could theoretically be used as triggers—phrases like “I’m thinking about going back to uni” and “I need some cheap shirts for work.” Then I carefully monitored the sponsored posts on Facebook for any changes.
The changes came literally overnight. Suddenly I was being shown ads for mid-semester courses at various universities and for cheap clothing from certain brands. A private conversation with a friend about how I’d run out of data led to an ad for cheap 20 GB data plans. And although they were all good deals, the whole thing was eye-opening and utterly terrifying.
Although no data is guaranteed to be safe in perpetuity, Peter assured me that in 2018 no company is selling its data directly to advertisers. But as we all know, advertisers don’t need our data for us to see their ads.
“Rather than saying here’s a list of people who fit your demographic, they say why don’t you give me some money, and I’ll make sure that demographic, or those who are interested in this, will see it. If they let that information out into the wild, they’ll lose that exclusive access to it, so they’re going to try to keep it as secret as possible.”
Peter went on to say that just because tech companies value our data, that doesn’t keep it safe from government agencies. As most tech companies are based in the US, agencies like the NSA or the CIA can potentially have your information disclosed to them, whether that’s legal in your home country or not.
So yes, our phones are listening to us, and anything we say around them could potentially be used against us. But, according to Peter at least, it’s not something most people should be scared of.
Unless you’re a journalist, a lawyer, or someone in a role involving sensitive information, your data is really only going to advertisers. If you’re like everyone else, living a fairly normal life and talking to your friends about flying to Japan, then it’s really not that different from advertisers looking at your browsing history.
“It’s just an extension of what advertising used to be on television,” says Peter. “Only instead of prime-time audiences, they’re now tracking web-browsing habits. It’s not ideal, but I don’t think it poses an immediate threat to most people.”
Read the original article over at Vice.
How to Delete All Amazon Alexa Recordings
Courtesy of Hongkiat
Amazon Alexa, like Apple’s Siri and Google Assistant, stores all your voice commands on its servers. However, Amazon Alexa has also been known to share users’ private conversations in the past. Of course, the incident was blamed on a bug, but nobody likes being spied on in their own house, right?
If you are serious about your privacy, then you should wipe those voice recordings. Though you can’t yet command, “Alexa, stop spying,” Amazon does provide certain methods to delete them, which I am going to share below.
Why delete the recordings?
Alexa wakes up when you say its trigger word, so it keeps listening to your conversations to pick up the trigger word whenever you say it. The issue with this always-on, always-listening mode is that it listens to everything.
Occasionally, it may mishear your conversation, think that you said the trigger word, and wake itself up to listen for further commands. If you think this can’t harm your privacy, let me share an example.
In Portland, a family experienced exactly this issue. Their Alexa misheard their conversation as the trigger word and then interpreted parts of their talk as commands. The result: it sent a recording of their discussion to a random person on their contact list. You don’t want that, right?
Delete the recordings
Now that you understand the risks involved with your voice commands getting stored by Alexa, let’s see how to delete those recordings to help you claim back your privacy. Do note that these recordings are used to improve the experience, and if you remove them, you may degrade your experience while using Alexa.
Delete recordings one-by-one
Let’s say you are okay with your voice commands being stored on Amazon’s servers, but you wish to delete a few specific recordings that may invade your privacy. In that case, follow the steps below to remove them.
- Open the Alexa app, open its menu, and choose Settings.
- Scroll down to the “General” section, then choose History.
- Choose a recording from the list, then select “Delete voice recordings” to delete it. This removes the audio file as well as the home-screen card related to the recording. If you wish to listen to a recording before deleting it, click Play after selecting it.
Delete recordings all-at-once
If you care about your privacy and want to delete all the voice recordings stored by Amazon Alexa, then follow the steps below. Do note that this is a one-time action, and it will degrade your experience while using Alexa.
- Open Amazon in a web browser and log in to your account.
- Now open this link: Manage Your Content and Devices.
- Select the tab named “Your Devices“.
- Choose your device from the list of devices shown in your account. Do remember to choose the correct device, or else you will delete the voice recordings of the wrong device linked to your account.
- Choose “Manage voice recordings“, then select Delete.
That is all about deleting the voice commands and recordings stored by Amazon Alexa. How do you feel about voice-controlled devices? Do you trust them with your privacy? Please leave a comment and tell me.
Read the original article over at Hongkiat.com.
Malware has no trouble hiding and bypassing macOS user warnings
Warnings bypass can be used to “do a lot of malicious stuff,” researcher says.
Apple works hard to make its software secure. Beyond primary protections that prevent malware infections in the first place, company engineers also build a variety of defense-in-depth measures that are designed to lessen the damage that can happen once a Mac is compromised. Now, Patrick Wardle, a former National Security Agency hacker and macOS security expert, has exposed a major shortcoming that generically affects many of these secondary defenses.
In a presentation at the Def Con hacker convention in Las Vegas over the weekend, Wardle said it was trivial for a local attacker or malware to bypass many security mechanisms by targeting them at the user-interface level. When these security measures detect a potentially malicious action, they block that action and then display an alert or warning. By abusing various programming interfaces built into macOS, malicious code could generate a programmatic click to interact with or even dismiss such alerts. This “synthetic click,” as Wardle called it, works almost immediately and can be done in a way that is invisible to the user.
“The ability to synthetically interact with a myriad of security prompts allows you to perform a lot of malicious actions,” Wardle told Ars. “Many of Apple’s privacy and security-in-depth protections can be trivially bypassed.”
With the ability to generate synthetic clicks, an attacker, for example, could dismiss many of Apple’s privacy-related security prompts. On recent versions of macOS, Apple has added a confirmation window that requires users to click an OK button before an installed app can access geolocation, contacts, or calendar information stored on the Mac. Apple engineers added the requirement as a secondary safeguard. Even if a machine was infected by malware, the thinking went, the malicious app wouldn’t be able to copy this sensitive data without the owner’s explicit permission.
Though many of Apple’s security alerts attempt to detect and ignore synthetic clicks, Wardle discovered that the privacy alerts, even on a fully updated High Sierra system, were not protected. “What is the point of displaying an alert, if malware can simply dismiss it?” he asked.
In the past, malware has abused such synthetic clicks to perform a variety of nefarious actions. For example, the sneaky Genio adware, DevilRobber currency mining malware, and the insidious Fruitfly malware that stole millions of images from infected Macs over a 13-year period all used synthetic clicks to bypass defense-in-depth warnings.
Apple responded to these in-the-wild wares by improving the security of its operating system. Now, in recent versions of macOS, security alerts and prompts will ignore synthetic events. At least, that was the idea. In his presentation, Wardle first illustrated how an attacker could abuse a macOS accessibility feature called “mouse keys,” which converts keyboard keypresses into mouse movements. Mouse keys lets a user move the mouse up, down, left, right, or diagonally by pressing certain keys.
However, Wardle illustrated how an attacker or malware could also leverage “mouse key” events to generate synthetic mouse clicks that would be accepted even by “protected” security alerts. After creating a proof-of-concept attack that could interact with and dismiss the keychain’s access prompt and dump a user’s unencrypted passwords and private keys, he reported the issue to Apple, which released a supplemental update to patch it as CVE-2017-7150. Now “mouse key” events are ignored by security alerts, and keychain access always requires the user’s password.
But even after Apple issued the patch, the warnings could still be bypassed. While testing an older attack, Wardle incorrectly copied and pasted some code. Without realizing the mistake, he ran the code, which to his amazement allowed him to post synthetic clicks to security alerts, even on a fully patched High Sierra system. Digging deeper, he realized that his buggy code was sending two mouse “down” events (instead of the typical mouse down, mouse up event).
“The system converts the second mouse down event to a mouse up event,” he noted. “But since this mouse up event is generated by the system, it is allowed to interact with security prompts.” As a result, Wardle was able to completely bypass the warnings for a variety of actions with serious security and privacy consequences. The most worrisome is bypassing a newly introduced Apple security mechanism designed to prevent the programmatic loading of “kexts”—kernel extensions that interact with the core of macOS.
Apple representatives didn’t respond to an email seeking comment for this post. Wardle, for his part, said the bypass raises questions about how the company rolled out the improvements. “I wasn’t trying to find a bypass, but I uncovered a way to fully break a foundational security mechanism,” said Wardle, who is the developer of the Objective-See Mac tools and chief research officer at Digita Security. “If a security mechanism falls over so easily, did they not test this? I’m almost embarrassed to talk about it.”
This post was rewritten for clarity and grammar fixes.
Read the original article over at ArsTechnica.com.
Microsoft Edge Flaw Lets Hackers Steal Local Files
Microsoft has fixed a vulnerability in the Edge browser that could be abused against older versions to steal local files from a user’s computer.
The good news is that exploiting the flaw requires social engineering, meaning the attack cannot be automated at scale and hence presents a smaller level of danger to end users.
Edge flaw is SOP-related
Discovered by Netsparker security researcher Ziyahan Albeniz, the vulnerability involves the Same-Origin Policy (SOP), a security feature that all browsers support.
In Edge, as in all other browsers, SOP works by preventing an attacker from loading malicious code via a link that does not match the same domain (and subdomain), port, and protocol.
Albeniz says that Edge’s SOP implementation works as intended except in one case—when users are tricked into downloading a malicious HTML file to their PC and then running it.
When the user runs this HTML file, its malicious code will be loaded via the file:// protocol, and because it’s a local file, it will not have a domain and port value.
What this means is that this malicious HTML file can contain code that collects and steals any data from local files accessible via a “file://” URL.
Because any OS file can be accessed via a file:// URL inside a browser, this essentially gives the attacker free rein to collect and steal any local file he wants.
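A same-origin check of the kind described above can be sketched as follows—a simplified illustration, not Edge’s actual implementation. Note how a file:// URL has no host or port, which is exactly the gap the attack exploits:

```python
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    """Simplified SOP check: scheme, host, and port must all match."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

# Two file:// URLs both have an empty host and port, so they trivially
# share an "origin" -- the loophole that lets a downloaded HTML file
# read other local files.
same_origin("file:///C:/Users/me/secret.txt", "file:///C:/tmp/evil.html")  # True

# A scheme mismatch (https vs. http) is a different origin.
same_origin("https://example.com/a", "http://example.com/a")  # False
```

In a real browser the origin comparison is more involved (default ports, document.domain, sandbox flags), but the tuple comparison above captures why a local HTML file can reach anything else reachable over file://.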
Flaw useful in targeted attacks
Albeniz says that during tests he was able to steal data from local computers and send it to a remote server by executing this file in both Edge and the Mail and Calendar app. He also recorded a video of the attack.
The attack requires the attacker to know where various files are stored, but OS and app configuration and storage files are in most cases found at the same location on the vast majority of devices. Furthermore, the location of some files can be inferred or guessed.
The vulnerability may not be useful in the case of en-masse malware distribution campaigns, but it could be useful in more targeted attacks on high-value targets.
Warning for opening HTML files of unknown origin
But while Microsoft has addressed this issue in recent versions of Edge and the Mail and Calendar app, Albeniz wants to warn users about the dangers of running HTML files they receive from strangers or via email.
The researcher’s warning is valid because HTML files are not usually associated with regular malware distribution campaigns, so users may be less suspicious of opening them.
According to an F-Secure report, just five file types make up 85% of all malicious attachments sent via email spam campaigns. They are ZIP, DOC, XLS, PDF, and 7Z.
“The only way to protect yourself is to update to the latest versions of the Edge browser and Windows Mail and Calendar applications. And, of course it’s best to never open attachments from unknown senders, even if the extension doesn’t initially appear to be malicious,” the researcher said in a report he published yesterday, entitled “Exploiting a Microsoft Edge Vulnerability to Steal Files.”
Albeniz said other browsers were not vulnerable to the SOP vulnerability he reported to Microsoft. The researcher also told Bleeping Computer that the Redmond-based OS maker fixed the vulnerability (CVE-2018-0871) with the release of the June 2018 Patch Tuesday.
Read the original article over at Bleeping Computer.
Simple Steps to Protect Yourself on Public Wi-Fi
Accessing the internet isn’t normally a problem when you’re inside the confines of your own home—it’s secure, it’s easy to connect to, and it’s relatively uncongested—unless the whole family is streaming Netflix on five separate devices. When you venture out though, it’s a different story. You can access Wi-Fi in more places than ever, enabling you to keep in touch or catch up with work from wherever you happen to be, but getting online isn’t quite as simple, or as safe, as it is with your home network.
A public Wi-Fi network is inherently less secure than your personal, private one, because you don’t know who set it up, or who else is connecting to it. Ideally, you wouldn’t ever have to use it; better to use your smartphone as a hotspot instead. But for the times that’s not practical or even possible, you can still limit the potential damage from public Wi-Fi with a few simple steps.
Know Who To Trust
This relates to the previous point, but wherever possible stick to well-known networks, like Starbucks. These Wi-Fi networks are likely less suspect because the people and companies operating them are already getting money out of you.
No public Wi-Fi network is absolutely secure—that depends as much on who’s on it with you as on who provides it—but in terms of relative safety, known quantities generally beat out that random public Wi-Fi network that pops up on your phone in a shopping mall, or a network operated by a third party you’ve never heard of. These may well be legit, but if any passerby can hook up for free, what’s the benefit for the people running the network? How are they making money? There’s no hard-and-fast rule to apply, but using a bit of common sense doesn’t hurt.
If you can, stick to as few public Wi-Fi networks as possible. In a new city, connect to Wi-Fi in a store or coffee shop you’ve used before, for example. The more networks you sign up to, the greater the chances that you’ll stumble across one that isn’t treating your data and browsing as carefully as it should.
Stick With HTTPS
As of a couple of weeks ago, Google Chrome lets you know when the site you’re visiting uses an unencrypted HTTP connection rather than an encrypted HTTPS one by labeling the former “Not Secure.” Heed that warning, especially on public Wi-Fi. When you browse over HTTPS, people on the same Wi-Fi network as you can’t snoop on the data that travels between you and the server of the website you’re connecting to. Over HTTP? It’s relatively easy for them to watch what you’re doing.
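Chrome’s labeling boils down to a check on the URL scheme, which can be sketched like this (an illustration of the rule, not Chrome’s actual code):

```python
from urllib.parse import urlsplit

def address_bar_label(url: str) -> str:
    """Flag plain-HTTP pages the way Chrome 68 does in the address bar."""
    return "Not Secure" if urlsplit(url).scheme != "https" else ""

address_bar_label("http://example.com/login")   # "Not Secure"
address_bar_label("https://example.com/login")  # ""
```

The scheme is the whole story here: HTTPS means the traffic between you and the site is encrypted in transit, so a snooper on the same Wi-Fi network sees only which server you connected to, not what you sent or received.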
Don’t Give Away Too Much Info
Be very wary of signing up for public Wi-Fi access if you’re getting asked for a bunch of personal details, like your email address or your phone number. If you absolutely have to connect to networks like this, stick to places you trust (see above) and consider using an alternative email address that isn’t your primary one.
Stores and restaurants that do this want to be able to recognize you across multiple Wi-Fi hotspots and tailor their marketing accordingly, so it’s up to you to decide whether the trade-off is worth it for some free internet access.
Again, sign up for as few different public Wi-Fi platforms as you can. Does your phone or cable carrier offer free Wi-Fi hotspots in your current location, for example? If you can get connected through a service that you’re already registered for, then that’s usually preferable to giving up your details to yet another group of companies.
Limit AirDrop and File Sharing
When you’re on a public network around strangers, you’ll want to cut off the features that enable frictionless file sharing on your devices. On a PC, that means going to Network and Sharing Center, then Change advanced sharing settings, then Turn off file and printer sharing. On a Mac, go to System Preferences, then Sharing, and unselect everything. Then head to Finder, click on AirDrop, and select Allow me to be discovered by: No One. For iOS, just find AirDrop in the Control Center and turn it off. And voilà! No one nearby can grab your files or send you one you don’t want.
Check What You’re Signing Up For
We know we’re probably saying this in vain, but read up on the attached terms and conditions before you connect yourself to a public Wi-Fi connection. You might not always understand every word, but you should be able to spot any major red flags, particularly around what kind of data they’re collecting from your session, and what they’re doing with it.
If you find the associated policies really impenetrable, a quick web search should bring up any known issues or problems that other users have been having. Of course there’s nothing inherently evil about terms and conditions—they help protect the Wi-Fi provider too—but don’t just blindly click through on whatever pop-up screens you’re presented with. And if they ask you to install any extra software or browser extensions, back away quickly.
Use a VPN
By far the most effective trick for staying safe on public Wi-Fi is to install a VPN or Virtual Private Network client on your devices. It encrypts data traveling to and from your laptop or phone, and hooks you up to a secure server—essentially making it harder for other people on the network, or whoever is operating the network, to see what you’re doing or grab your details.
We’ve written here about some of the ways to choose a good VPN, as not all VPNs are created equal, and some are downright dodgy. It’s definitely worth paying for a service, as free solutions are more likely to be financed by some suspect marketing or data collection practices that it’s best to steer clear of. Independent review sites like The Wirecutter and That One Privacy Site can help here.
Actually connecting to a VPN is usually straightforward, and once you’ve downloaded the client for your provider of choice, it will take you step-by-step through the process, whether you’re on mobile or the desktop. If you move around a lot, and connect to a lot of different networks, a good VPN is well worth investing in.
In the next few years, as the next-generation WPA3 Wi-Fi security protocol comes online, public Wi-Fi will have more built-in protections. Until then, many security exploits rely on old, outdated software, so make sure you’re running all the latest patches and software updates on your laptop or phone before venturing out. Also, don’t download or install anything new over public Wi-Fi unless you absolutely have to.
And again, the best way to avoid running into security problems on public Wi-Fi is not to use it at all—think about downloading videos and music for offline access before you leave home, for instance, or use your smartphone’s hotspot function instead. If you are going to get connected, though, the steps above should maximize your chances of staying out of trouble.
Read the original article courtesy of Wired.com.
Are Digital Assistants Always Listening?
Courtesy of Forbes
Digital assistants are popping up everywhere. From Amazon Alexa to Google Home and Apple HomePod, more and more people have smart devices in their homes, right next to where they discuss much about their lives, preferences, and plans. It gives companies an opportunity to listen to their customers, which has led to an overarching belief that companies really are listening through their digital assistant devices and potentially blurring privacy lines.
A few weeks ago, my husband and I were discussing a kitchen gadget called a sous vide with our neighbors. It was a quick conversation, and we didn’t think much of it until the next day, when we started seeing ads on Amazon for a sous vide. We actually ended up buying the product, but we couldn’t help but wonder—was it a coincidence, or was Alexa listening to a private conversation we had in our kitchen?
It’s a common concern for consumers. After all, things you search on your computer often end up in internet ads, so why wouldn’t things you say and talk about also end up there? If I search for Airbnb rentals in New Orleans, I’ll likely soon see an ad for Airbnb rentals on my Facebook feed. As digital assistants grow, consumers may assume that transferring private conversations to ads is the natural progression of personalized advertising. But where is the line between a private conversation and what is fodder for companies?
Even if companies aren’t actually listening in on conversations, most customers think they are. Lots of people fear that companies like Amazon, Apple, or their internet or phone providers are always listening and pulling data from private conversations. Most companies claim that digital assistants only start recording after the “wake word” such as “Ok Google” or “Hey Alexa”, but not everyone is convinced. A recent survey found that 27% of Americans don’t use voice assistants because of privacy concerns. That means companies are missing out on a large group of customers because they aren’t clear about how they collect and use data.
The answer to this epidemic of mistrust is transparency. All companies need to have messaging ready to explain to customers what they do with private data. Customers care about how companies use their data and would love a straight answer about where that data ends up. In most cases, the issue isn’t whether or not digital assistants are listening, but rather what they are doing with that information. Many large companies have stated that they aren’t collecting data, but others have yet to make a statement, which can make customers assume the worst. The key is handling personal data carefully and being transparent and honest about what is happening with it.
Digital assistants are the future, and we’ll likely see more of these types of devices in other areas of our homes soon. Now is the time for companies to set the record straight about what they actually listen to so customers can know who to trust.
Read the original article over at Forbes.
Elon Musk making “kid-sized submarine” to rescue teens in Thailand cave
“Construction complete in about 8 hours,” the tech billionaire tweeted Saturday.
Elon Musk tweeted on Saturday that a team of SpaceX engineers is hours away from completing work on a “tiny kid-sized submarine” that could be used to extract 12 teenagers and preteens who are stranded with their soccer coach in a flooded cave in Thailand. Musk has had a team of engineers working on the problem for the last couple of days and has been keeping the world updated on the work via Twitter.
On Thursday night, Musk tweeted about an idea to use an inflatable nylon tube to help the kids escape. By Friday afternoon, Musk’s thinking had evolved. He tweeted that his team was working on building “double-layer Kevlar pressure pods with Teflon coating to slip by rocks.” A mid-day tweet on Saturday provided another update:
Got more great feedback from Thailand. Primary path is basically a tiny, kid-size submarine using the liquid oxygen transfer tube of Falcon rocket as hull. Light enough to be carried by 2 divers, small enough to get through narrow gaps. Extremely robust.
— Elon Musk (@elonmusk) July 7, 2018
And this isn’t just a theory: Musk says that his team is building the contraption now. “Construction complete in about 8 hours, then 17 hour flight to Thailand,” Musk tweeted just before noon, California time.
We haven’t seen any reaction from Thai authorities to this idea yet, but it could provide a solution to the deadly dilemma facing Thai rescuers. Much of the path out of the cave is flooded, and in places the cave gets as narrow as 70cm. A route that narrow is a big challenge for even the most experienced cave divers—indeed, one diver died ferrying oxygen tanks to the boys earlier this week. Some of the boys don’t even know how to swim, and so it might not be possible to provide them with the training necessary to swim out, even with professional help.
Yet waiting may also not be an option. The oxygen level in the boys’ location has been dropping. On top of that, Thailand is just entering its rainy season. With heavy rains expected in the coming days, there’s a danger that the water level could rise, drowning the group.
The kind of tiny submarine Musk is describing could allow professional divers to bring the boys out without requiring the boys to do anything more than lie still. It would still be a harrowing and claustrophobic experience, but—if everything works as Musk describes—it could be much less dangerous than conventional scuba diving, where a panicking teenager could lead to the death of the teen himself as well as his professional scuba guides.
At the same time, Musk says he’s continuing to work on his earlier idea: an “inflatable tube with airlocks” that could be inflated inside the submerged portions of the cave, creating a tunnel the kids could crawl through. In a Friday evening tweet, he described this option as “less likely to work, given tricky contours, but great if it does.”
Update: In a follow-up tweet, Musk describes some features of the rescue pod.
4 handles/hitch points on front & 4 on rear. 2 air tank connections on front & 2 on rear, allowing 1 to 4 tanks simultaneously connected, all recessed for impact protection w secondary cap seal if leak develops.
— Elon Musk (@elonmusk) July 7, 2018
Read the original article over at ArsTechnica.com.
Two-Thirds of Second-Hand Memory Cards Contain Data From Previous Owners
A recent study conducted by academics from the University of Hertfordshire in the UK has revealed that almost two-thirds of second-hand memory cards still contain remnants of personal data from previous owners.
For their study, researchers analyzed 100 second-hand SD and micro SD memory cards purchased from eBay, conventional auctions, second-hand shops, and other sources over a four-month period.
Researchers recovered selfies, intimate photos, personal docs
All in all, researchers say the memory cards they analyzed were previously used in smartphones and tablets, but some cards had also been used in cameras, SatNav systems, and even drones.
The research team says the analysis process consisted of creating a bit-by-bit image of the card and then using freely available software to see if they could recover any data from the card.
Their efforts were successful and worrisome at the same time, as the team says it managed to recover data from the memory cards, including intimate photos, selfies, passport copies, contact lists, navigation files, pornography, resumes, browsing history, identification numbers, and other personal documents.
People don’t wipe their devices properly
“Often the problem is not that people don’t wipe their SD cards; it’s that they don’t do it properly,” said Paul Bischoff, privacy advocate at Comparitech.com, the company that commissioned the study.
“Simply deleting a file from a device only removes the reference that points to where a computer could find that file in the card memory. It doesn’t actually delete the ones and zeros that make up the file,” Bischoff said.
“That data remains on the card until it is overwritten by something else,” he also added. “For this reason, it’s not enough to just highlight all the files in a memory card and hit the delete key. Retired cards need to be fully erased and reformatted.”
Special software, including open-source tools, can help users properly wipe their devices by deleting files and then overwriting them with random data, so the previous information is permanently and irrevocably removed before the device is sold.
This procedure is recommended not just for memory cards but for all storage media, such as regular hard drives or USB sticks.
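As a rough illustration of what such an erasing tool does, the sketch below overwrites a single file with random bytes before unlinking it. This is a simplification: on flash media such as SD cards, wear-leveling means per-file overwrites are not guaranteed to hit the original physical cells, which is why a full-device erase with a dedicated tool is the safer option.

```python
import os
import secrets

def secure_wipe(path, passes=1, chunk_size=1024 * 1024):
    """Overwrite a file with random data, flush it to disk, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining:
                n = min(chunk_size, remaining)
                f.write(secrets.token_bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # force the overwrite onto the medium
    os.remove(path)
```

Contrast this with a plain delete, which, as Bischoff notes, only removes the reference to the file and leaves its ones and zeros intact until something else happens to overwrite them.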
The problem detailed in the University of Hertfordshire study is not unique. A study conducted in 2010 revealed that 50% of the second-hand mobile phones sold on eBay contained data from previous owners.
A 2012 report from the UK’s Information Commissioner’s Office (ICO) revealed that one in ten second-hand hard drives still contained data from previous owners. A similar study from 2015 found that three-quarters of used hard drives contained data from previous owners.
The full breakdown of the University of Hertfordshire study data is available below:
- 36 were not wiped at all; neither the original owner nor the seller took any steps to remove the data.
- 29 appeared to have been formatted, but data could still be recovered “with minimal effort.”
- 2 cards had their data deleted, but it was easily recoverable.
- 25 appeared to have been properly wiped using a data erasing tool that overwrites the storage area, so nothing could be recovered.
- 4 could not be accessed (read: were broken).
- 4 had no data present, but the reason could not be determined.
Read the original article over at Bleeping Computer.
This password-stealing malware just added a new way to infect your PC
One of the malware's new tactics involves an injection technique not seen in the wild until just days ago.
A powerful form of malware that can be used to distribute threats including Trojans, ransomware, and malicious cryptocurrency-mining software has been updated with a new technique rarely seen in the wild.
Distributed in spam email phishing campaigns, Smoke Loader has been sporadically active since 2011 but has continually evolved. The malware has been particularly busy throughout 2018, with campaigns including the distribution of Smoke Loader via fake patches for the Meltdown and Spectre vulnerabilities which emerged earlier this year.
Like many malware campaigns, the initial attack is conducted via a malicious Microsoft Word attachment that tricks users into enabling macros, which installs Smoke Loader on the compromised system and lets the Trojan deliver additional malicious software.
Researchers at Cisco Talos have been tracking Smoke Loader for some time and have seen its latest campaigns in action. One of the current preferred payloads is TrickBot — a banking Trojan designed to steal credentials, passwords and other sensitive information. Phishing emails distributing the malware are designed to look like invoice requests from a software firm.
What intrigued researchers is how Smoke Loader is now using an injection technique which hadn’t been used to distribute malware until just days ago. The code injection technique is known as PROPagate and was first described as a potential means of compromise late last year.
This technique abuses the SetWindowsSubclass function, which is used to install or update a subclass callback for a window, and can be used to modify the properties of windows running in the same session. This can be exploited to inject code and drop files while hiding the fact that it has happened, making it a useful, stealthy attack.
It’s likely that the attackers have observed publicly available posts on PROPagate in order to recreate the technique for their own malicious ends.
Those behind the campaign have also added anti-analysis techniques to frustrate the forensics, runtime AV scanning, tracing, and debugging that researchers may attempt to conduct on the malware.
While plenty of Smoke Loader attacks still look to deliver additional malware to compromised systems, in some cases the malware is being equipped with its own plug-ins so it can go straight to performing its own malicious tasks.
Each of these plug-ins is designed to steal sensitive information, specifically stored credentials or sensitive information transferred over a browser; data can be lifted from the likes of Firefox, Internet Explorer, Chrome, Opera, QQ Browser, Outlook, and Thunderbird.
The malware can even be injected into applications like TeamViewer, potentially putting the credentials of others on the same network as the infected machine at risk too.
It’s possible that Smoke Loader has been equipped with these tasks because its operators aren’t currently getting much business in response to adverts on dark web forums advertising their ability to install other types of malware onto their compromised network of machines. It could also just be a means of taking advantage of the botnet for their own purposes.
Either way, it indicates that organisations must remain vigilant against potential threats.
“We have seen that the trojan and botnet market is constantly undergoing changes. The players are continuously improving their quality and techniques. They modify these techniques on an ongoing basis to enhance their capabilities to bypass security tools. This clearly shows how important it is to make sure all our systems are up to date,” wrote Cisco Talos researchers.
“We strongly encourage users and organizations to follow recommended security practices, such as installing security patches as they become available, exercising caution when receiving messages from unknown third parties, and ensuring that a robust offline backup solution is in place. These practices will help reduce the threat of a compromise, and should aid in the recovery of any such attack,” they added.
Read the original article over at ZDNet.com.
Rash of Fortnite cheaters infected by malware that breaks HTTPS encryption
Malware can read, intercept, or tamper with the traffic of any HTTPS-protected site.
Tens of thousands of Fortnite players have been infected by malware that hijacks encrypted Web sessions so it can inject fraudulent ads into every website a user visits, an executive with a game-streaming service said Monday.
Rainway, the game-streaming service in question, began investigating after its logs started filling with errors. “As the errors kept flowing in, we took a glance at what these users had in common,” wrote Rainway CEO Andrew Sampson. “They didn’t share any hardware, their ISPs were different, and all of their systems were up to date. However, one thing did stand out—they played Fortnite.”
Root certificate installed
Suspecting the malware was spread by one of the countless Fortnite cheating hacks available online that promise to give users an unfair advantage over other players, Rainway researchers downloaded hundreds of the hacks and scoured them for references to the rogue URLs. The researchers eventually found one Sampson declined to name that promised to allow users to generate free in-game currency called V-Bucks. It also promised users access to an “aimbot,” which automatically aims the character’s gun at opponents without any need for precision by the player. When the researchers ran the app in a virtual machine, they discovered that it installed a self-signed root certificate that could perform a man-in-the-middle attack on every encrypted website the user visited.
Sampson wrote: “Now, the adware began altering the pages of all Web requests to add in tags for Adtelligent and voila, we’ve found the source of the problem—now what?”
Rainway researchers reported the rogue malware to the unnamed service provider that hosted it. The service provider removed the malware and reported that it had been downloaded 78,000 times. In all, the malware generated 381,000 errors in Rainway’s logs. The researchers also reported the abuse to Adtelligent and Springserve. Adtelligent, Sampson said, didn’t respond, but Springserve helped to identify the abusive ads and remove them from its platform. Adtelligent officials didn’t immediately respond to a message seeking comment for this post. Officials from Epic Games, the maker of Fortnite, declined to comment.
Sampson also said that Rainway implemented a defense known as certificate pinning. Certificate pinning binds a specific certificate to a given domain name in order to prevent browsers from trusting fraudulent TLS certificates that are self-signed by an attacker or misissued by a browser-trusted authority. While the adoption of certificate pinning is a good defense-in-depth move, it unfortunately would do nothing to protect users against root certificates installed to perform man-in-the-middle attacks, as Google researchers have warned for years. That means the malware has the ability to read, intercept, or tamper with the traffic of any HTTPS-protected site on the Internet.
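At its core, pinning is just a comparison between the certificate a server presents and a fingerprint stored ahead of time. A minimal sketch in Python, using only the standard library (the host and stored pin in any real deployment are the application's own; this is not Rainway's actual implementation):

```python
import hashlib
import ssl

def pin_of(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def leaf_matches_pin(host: str, pinned: str, port: int = 443) -> bool:
    """Fetch the server's leaf certificate and compare it to the stored pin.

    A man-in-the-middle presenting a certificate issued by an injected
    root CA produces a different leaf, so the comparison fails.
    """
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return pin_of(der) == pinned
```

Browsers, by contrast, deliberately exempt locally installed trust anchors from their built-in pin lists, which is why, as noted above, browser-level pinning does nothing against a rogue root certificate the user has been tricked into installing.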
The rash of infections is the latest cautionary tale about the risks of installing shady software provided by unknown sources. People who suspect they have been infected should install antivirus protection from a name-brand provider and thoroughly scan their systems ASAP.
Read the original article over at ArsTechnica.com.