Repairs & Upgrades

January 23, 2020 »

Looking for silver linings in the CVE-2020-0601 crypto vulnerability

By Chester Wisniewski

The scene stealer in January’s Patch Tuesday updates from Microsoft was CVE-2020-0601, a very serious vulnerability in the crypt32.dll library used by more recent versions of Windows.

The flaw, which also goes by the names Chain of Fools and Curveball, allows an attacker to fool Windows into believing that malicious software and websites have been digitally vouched for by one of the root certificate authorities that Windows trusts (including Microsoft itself).

An attacker could exploit the flaw to disguise malware as legitimate – Microsoft-approved – software, to conduct silent Man-in-the-Middle attacks or to create more realistic phishing websites.
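Public write-ups of the bug describe the root cause as Windows matching certificates by public key without validating the curve parameters supplied alongside it: an attacker can declare a custom generator point for which a private key of their own choosing reproduces a trusted CA's public key. The same trap can be sketched with ordinary modular arithmetic in place of elliptic-curve math – a deliberate simplification with made-up toy numbers, not the actual Windows code:

```python
# Toy analogue of the CVE-2020-0601 parameter-confusion flaw.
# Real ECC uses curve points; plain modular exponentiation is an
# illustrative simplification, not the actual cryptography.

P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a prime "field" (toy value)
G = 5                   # the standard, trusted generator

# The trusted CA's key pair, generated honestly with G.
ca_private = 123456789
ca_public = pow(G, ca_private, P)

# Attacker picks private key 1 and DECLARES the generator to be
# the CA's public key, so pow(generator, 1) equals ca_public.
evil_private = 1
evil_generator = ca_public

def verify_naive(public_key, generator, private_key):
    """Broken verifier: checks the key pair is internally consistent
    and that the public key matches the trusted CA's, but never
    checks that the generator is the standard one."""
    return pow(generator, private_key, P) == public_key == ca_public

def verify_fixed(public_key, generator, private_key):
    """Patched verifier: also insists on the standard generator."""
    return generator == G and verify_naive(public_key, generator, private_key)

assert verify_naive(ca_public, evil_generator, evil_private)       # spoof accepted
assert not verify_fixed(ca_public, evil_generator, evil_private)   # spoof rejected
```

The fix, in this sketch as in the patch, is simply to refuse any certificate whose declared parameters differ from the standard ones.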

The vulnerability is undoubtedly very serious, but in the days since its disclosure I have started to wonder if there is a silver lining to this cloud.

Fortunately, there may be a few.

First, it appears this vulnerability only affects the latest editions of Windows, including Windows 10, Windows Server 2016, Windows Server 2019 and their derivatives. It doesn’t affect older versions of Windows, nor does it impact users of macOS, Linux or Unix variants.

Second, the vulnerability can be detected both in the network and at the endpoint. This means you may have a heads-up from patched machines or network security devices, even if some of your endpoints may not yet have the January 2020 updates.

It would also seem that the most important thing – Windows Update itself – is unaffected by the vulnerability. Windows Update uses a pinned certificate chain with RSA certificates, which are not affected by CVE-2020-0601. This means you can safely update systems without fear of someone booby-trapping your updates.
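Certificate pinning, mentioned above, means the client checks the server’s certificate (or its key) against a hard-coded expected value rather than accepting anything a trusted CA vouched for – so even a forged CA-approved certificate fails the check. A minimal sketch using a hash comparison (the fingerprint and byte strings are illustrative, not Windows Update’s actual implementation):

```python
import hashlib

# The fingerprint the client ships with. In practice this would be
# the hash of the real pinned certificate; the value here is a stand-in.
EXPECTED_PIN = hashlib.sha256(b"genuine-update-server-cert").hexdigest()

def connection_allowed(presented_cert: bytes) -> bool:
    """Accept only the exact pinned certificate, regardless of which
    CA (real or spoofed) appears to have signed it."""
    return hashlib.sha256(presented_cert).hexdigest() == EXPECTED_PIN

assert connection_allowed(b"genuine-update-server-cert")
assert not connection_allowed(b"attacker-cert-vouched-by-spoofed-CA")
```

Because the comparison is against a fixed fingerprint rather than the chain of trust, a vulnerability in chain validation – like CVE-2020-0601 – doesn’t help an attacker here.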


UN report alleges that Saudi crown prince hacked Jeff Bezos’s phone

By Lisa Vaas

A forensic examination of Amazon CEO Jeff Bezos’s mobile phone has pointed to it having allegedly been infected by personal-message-exfiltrating malware – likely NSO Group’s notorious Pegasus mobile spyware – that came from Saudi Arabia’s Crown Prince Mohammed bin Salman’s personal WhatsApp account.

The United Nations backed up the allegation by releasing details of the evidence on Wednesday.

The UN’s report said that full details from the digital forensic exam of Bezos’s phone were made available to its special rapporteurs. The release of the report followed a story about the hack from The Guardian that was published earlier on Wednesday.

The report was drafted by Agnes Callamard, a UN expert on extrajudicial killings who’s been probing the murder of The Washington Post columnist Jamal Khashoggi, and by David Kaye, who’s been investigating violations of press freedom. Bezos owns The Washington Post.

Khashoggi was killed in October 2018 by agents of the Saudi government after they allegedly used Pegasus to hack his friend’s phone.

According to the UN’s report, the crown prince’s WhatsApp account sent Bezos a taunting message a month after Khashoggi was murdered. From the report:

A single photograph is texted to Mr. Bezos from the Crown Prince’s WhatsApp account, along with a sardonic caption. It is an image of a woman resembling the woman with whom Bezos is having an affair, months before the Bezos affair was known publicly.

The richest man in the world had been having a seemingly friendly WhatsApp conversation with bin Salman when, on 1 May 2018, an unsolicited file was sent from the crown prince’s phone.

Within hours, a trove of data was exfiltrated from Bezos’s phone, although the forensic exam did not reveal what was in the messages.


Apple allegedly made nice with FBI by dropping iCloud encryption plan

By Lisa Vaas

In spite of Apple having turned over the shooter’s iCloud backups in the case of the Pensacola, Florida mass shooting last month, the US government has been raking it over the coals for supposedly not helping law enforcement in investigations.

But according to a new allegation, Apple has been far more accommodating than the FBI has been willing to admit. Specifically, according to six sources – Reuters relied on the input of one current and three former FBI officials and one current and one former Apple employee – a few years ago, Apple, under pressure from the FBI, backed off plans to let iPhone users have end-to-end encryption on their iCloud backups.

The bureau had griped that such encryption would gum up its investigations.

Last week, US Attorney General William Barr fumed at Apple over its refusal to break encryption per FBI request:

So far, Apple has not given any substantive assistance.

President Donald Trump piled on, tweeting that Apple refuses to unlock phones used by “killers, drug dealers and other violent criminal elements.”

But if the recent allegation proves true, it means that Apple has been far more accommodating to US law enforcement than headlines, politicians’ ire, and Apple’s marketing would indicate.

Its sources told Reuters that more than two years ago, Apple told the FBI that it planned to offer end-to-end encryption for iCloud backups, primarily as a way to thwart hackers. If it had gone through with the plan, it would have meant that Apple wouldn’t have a key to unlock encrypted data and would thus be unable to turn over content in readable form, even if served with a court order to do so.


Sonos’s tone-deaf legacy product policy angers customers

By Danny Bradbury

When you buy a cloud-connected appliance, how long should the vendor support it with software updates? That’s the question home audio company Sonos raised this week when it dropped some unwelcome news on its customers.

The company has announced that it will discontinue software updates for older products in May this year (here’s a list of products that it marks as legacy). Stopping software updates for legacy kit is nothing new, but it’s the way the company has done it that has Sonos customers’ hackles up.

Sonos points out that it supports software updates on products for at least five years after it stops selling them. However, the issue here is that all products in a Sonos network must run on the same software, meaning that any newer (‘non-legacy’) equipment connected to the speakers will also stop downloading new software updates. The only way around this for Sonos users is to disconnect their new equipment from their legacy kit and run them independently of each other.

From Sonos’s email to customers:

Please note that because Sonos is a system, all products operate on the same software. If modern products remain connected to legacy products after May, they also will not receive software updates and new features.

This carries service implications for users: products will continue working without software updates, but they may not work as well. Sonos explains that as third-party connected cloud partners change their own services, they may become incompatible with the legacy software.

This isn’t just a product service issue; it’s a cybersecurity problem. Any cloud-connected equipment is potentially vulnerable to attack, and researchers frequently discover new exploits. Ugo Vallauri is co-founder and policy lead of the Restart Project, a European organization that promotes user repairs of consumer electronics in a bid to cut down on e-waste. He told us:

A big issue is the lack of separation between security updates and software updates. While we can’t expect a product’s software to be improved indefinitely, security updates should be ensured for as long as possible. In this case, Sonos is not even mentioning security updates when suggesting that “legacy” products could continue to be used.

When we asked Sonos about this, it replied:

We take our customer’s security seriously and will work to maintain the existing experience and conduct critical bug fixes where the computing hardware will allow.

So perhaps there’s hope, but there’s no official policy that tells you exactly what to expect in terms of cybersecurity fixes.


FBI issues warning about lucrative fake job scams

By John E Dunn

What’s the difference between a real job and the horde of fake ones found on the internet?

It’s even more basic than the fact that one is fake – fake jobs are suspiciously easy to get interviews for.

These hiring scams sound like child’s play: post fake employment opportunities on legitimate job sites, which link to spoofed sites impersonating known brands, which in turn lead to an email offering a teleconference ‘interview’ from an imaginary HR department.

Next comes the job offer – but only after the scammers collect the applicant’s social security number, a scan of their driving license and – the important bit – a credit-card fee to cover the recruitment, training, or background checks they are told will be reimbursed by their new employer.

That never happens because there is no employer to pay them back, and of course, no job.

These scams date back to the earliest days of the internet but seem to be getting, if not more common, then a lot more ambitious.

This week the FBI’s Internet Crime Complaint Center (IC3) put out its latest warning about fake job scams, a problem about which it has received numerous complaints over the past year.

What’s surprising is that financial losses now run to almost $3,000 per victim, plus the loss of personally identifiable information (PII) which can be abused for years.

But why do people keep falling for them?

It’s a matter of speculation but one possibility is the widespread notion that the internet has created plenty of quick-and-dirty jobs that only get advertised on unusual channels.


Big Microsoft data breach – 250 million records exposed

By Paul Ducklin

Microsoft has today announced a data breach that affected one of its customer databases.

The blog article, entitled Access Misconfiguration for Customer Support Databases, admits that between 05 December 2019 and 31 December 2019, a database used for “support case analytics” was effectively visible from the cloud to the world.

Microsoft didn’t give details of how big the database was. However, consumer website Comparitech, which says it discovered the unsecured data online, claims it was on the order of 250 million records containing:

…logs of conversations between Microsoft support agents and customers from all over the world, spanning a 14-year period from 2005 to December 2019.

According to Comparitech, that same data was accessible on five Elasticsearch servers.

The company informed Microsoft, and Microsoft quickly secured the data.

Microsoft’s official statement says that “the vast majority of records were cleared of personal information,” meaning that it used automated tools to look for and remove private data.

However, some private data that was supposed to be redacted was missed and remained visible in the exposed information.

Microsoft didn’t say what type of personal information was involved, or which data fields ended up un-anonymized.

It did, however, give one example of data that would have been left behind: email addresses with spaces added by mistake were not recognized as personal data and therefore escaped anonymization.

So an email address captured in its usual form would have been converted into a harmless form, whereas one with a stray space after the name – an easy mistake for a support staffer to make when capturing data – would have been left alone.
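Microsoft didn’t publish its redaction rules, but the failure mode is easy to reproduce with any pattern-based scrubber. A minimal sketch – the regex and sample addresses are illustrative assumptions, not Microsoft’s actual tooling:

```python
import re

# A simple email matcher of the kind automated redaction tools use.
EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def redact(text):
    """Replace anything that looks like an email address."""
    return EMAIL.sub("[REDACTED]", text)

clean = redact("Contact: jane.doe@example.com please")
# The well-formed address is caught and scrubbed...
assert "jane.doe" not in clean and "[REDACTED]" in clean

leaky = redact("Contact: jane.doe @example.com please")
# ...but a stray space breaks the pattern: nothing matches,
# so the name (and domain) survive in the "anonymized" output.
assert "jane.doe" in leaky
```

The space splits the local part from the `@`, so the pattern never fires – exactly the kind of edge case Microsoft describes.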


January 22, 2020 »

Ubisoft sues DDoS-for-hire operators for ruining game play

By Lisa Vaas

These guys aren’t just launching attacks that kick all players on a targeted server out of a game, or degrade game performance down to sludge, Ubisoft alleges. They also allegedly went so far as to throw up a bogus domain seizure notice on one of their sites, claiming that the domain had been seized by “Microsoft Inc. and Ubisoft Entertainment” pursuant to a fictional “Operation(D)DoS OFF”. That’s according to the complaint (posted courtesy of Polygon) that Ubisoft filed on Thursday in the US District Court for the Northern District of California.

Ubisoft says it was part of the operators’ attempts to rub out their tracks:

Defendants are well aware of the harm that the DDoS Services and DDoS Attacks cause to Ubisoft. Indeed, knowing that this lawsuit was imminent, Defendants have hastily sought to conceal evidence concerning their involvement.

It’s not just alleged DDoS-for-hire operators who knew this lawsuit was coming. Everybody in the gaming world knew. Ubisoft picked up on an increase in DDoS attacks in September 2019, banned the worst offenders, and said that it was talking to its legal team about legal action.

Last week, Ubisoft filed the complaint against five people who it thinks run a network of four distributed-denial-of-service (DDoS)-for-hire services under a clutch of near-identical domain names and websites (could they possibly be more redundant?), hiding behind various anonymous online aliases to do so.


NIST’s new privacy rules – what you need to know

By Danny Bradbury

You’ve waded through the relevant privacy regulations until your brain hurts, and you understand the basic requirements under GDPR, CCPA, or whatever industry rules you must abide by. But how do you ensure that you’re compliant? Worry no more. NIST has released a Privacy Framework to help you get your house in order.

The federal US government’s National Institute of Standards and Technology (NIST) has a good track record of advising organizations on cybersecurity. It published a set of password rules in 2016. It also publishes a Cybersecurity Framework that has become a litmus test for those trying to secure their data.

The brand-new Privacy Framework 1.0 is the equivalent document for protecting people’s personal privacy. As NIST points out, cybersecurity and privacy are connected, but different. Some privacy events aren’t related to cybersecurity incidents, but stem from other issues like over-aggressive data collection, poorly thought-out marketing practices, or manual mishandling of data.

You can use the Privacy Framework when developing new products and services to ensure that they tick all your privacy boxes. It’s a good tool when conducting the privacy impact assessments that regulations like GDPR demand. It isn’t a compliance toolkit for meeting the requirements of specific regulations. Instead, it’s a voluntary toolkit that you can use to think about your approach to privacy. You can use bits of or all of it – NIST isn’t prescriptive.

The Framework breaks down into three broad areas: the core, the profiles, and the implementation tiers. The core contains a set of five functions that you work through as part of your privacy assessment process.
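NIST names those five core functions Identify-P, Govern-P, Control-P, Communicate-P and Protect-P (the -P suffix distinguishes them from the Cybersecurity Framework’s functions). One way to picture the core in practice is as a simple gap-analysis worksheet – the one-line summaries below are paraphrases, not NIST’s exact wording:

```python
# The five core functions of NIST Privacy Framework 1.0, with
# one-line summaries (paraphrased, not NIST's exact text).
CORE_FUNCTIONS = {
    "Identify-P":    "Understand what data you process and the privacy risk it carries",
    "Govern-P":      "Set organizational privacy values, policies and risk tolerance",
    "Control-P":     "Manage data with enough granularity to handle privacy risk",
    "Communicate-P": "Keep individuals and organizations informed about data practices",
    "Protect-P":     "Safeguard data - the overlap with the Cybersecurity Framework",
}

def assessment_checklist():
    """Yield one worksheet row per core function for a gap-analysis pass."""
    for func, summary in CORE_FUNCTIONS.items():
        yield {"function": func, "goal": summary, "current_tier": None}

rows = list(assessment_checklist())
assert len(rows) == 5 and rows[0]["function"] == "Identify-P"
```

Filling in `current_tier` for each row (against the Framework’s implementation tiers) is the sort of exercise the voluntary toolkit is meant to support.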


Regus spills data of 900 staff on Trello board set to ‘public’

By John E Dunn

Another company has ended up accidentally spilling sensitive data from business collaboration tool Trello.

According to a Daily Telegraph report, the company that put the boot to its own throat this time is office space company Regus, which posted performance ratings of 900 managers to a public Trello board.

Trello boards come in three types – private (password needed), approved (i.e. visible to specific people), and public.

It seems the Regus parent company IWG carried out covert video assessments using researchers from a company called Applause posing as clients looking for office space.

The evaluations from this were gathered into a spreadsheet which was inadvertently set to ‘public’.

Because search engines index public Trello boards, anyone with a browser could, in theory, see the data, which included names, addresses, performance ratings, and company training videos.

These would normally be shown only to the employee concerned as part of company assessments.

In addition to Regus’s own staff being exposed, the personal details and email addresses of the external researchers working for Applause were also leaked. IWG issued a statement that appeared to shift the blame to the research company:

We are extremely concerned to learn that an external third-party provider, who implemented the exercise, inadvertently published online the outcomes of an internal training and development exercise.

The data had now been taken down:

As our primary concern we took immediate action and the external provider has now removed the content.

The takedown, the newspaper says, didn’t happen until it contacted IWG and Applause. It’s not clear how long the data was left in its public, exposed state.


Nobody boogies quite like you

By Lisa Vaas

That spasmodic jerking around that some of us refer to as “dancing”?

It’s the latest biometric: we can be identified by our twerking, our salsa, our rumba or our House moves with an impressive 94% accuracy rate, according to scientists at Finland’s University of Jyväskylä.

To be specific, the researchers asked 73 volunteers to dance to eight music styles: Blues, Country, Dance/Electronica, Jazz, Metal, Pop, Reggae and Rap. The dancers weren’t taught any steps; rather, they were simply told to “move any way that felt natural.”

Their study, described in a paper titled Dance to your own drum, was published in the Journal of New Music Research last week.

Identifying people by their dance moves is not what the researchers were after. They had set out to determine how music styles affect how we move:

Surely one does not move the same way in response to a song by Rage Against the Machine as to one by Bob Dylan – and research has indeed shown that audio features extracted from the acoustic signal of music influence the quality of dancers’ movements.

The original question: could they determine the style of music just by watching how people are dancing? Previous research has indicated that you can: low-frequency sound generated by kick drum and bass guitar relates to how fast you bop your head around, while high-frequency sound and beat clarity have been associated with a wider variety of movement features, including hand distance, hand speed, shoulder wiggle and hip wiggle. Dancers also increase their movements as a bass drum gets louder. Jazz is associated with lesser head speed.
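The identification result described above – mapping per-dancer movement features to an identity – boils down to a standard feature-vector classification task. A toy nearest-neighbour illustration with invented features (not the study’s actual motion-capture data or method):

```python
# Toy nearest-neighbour "dancer identification" from movement features.
# Feature vectors (head_speed, hand_distance, hip_wiggle) are invented
# for illustration; the real study used full motion-capture recordings.

DANCERS = {
    "alice": (0.9, 0.4, 0.7),
    "bob":   (0.2, 0.8, 0.3),
    "carol": (0.5, 0.5, 0.9),
}

def identify(sample):
    """Return the enrolled dancer whose feature vector is closest
    (squared Euclidean distance) to the new sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(DANCERS, key=lambda name: dist(DANCERS[name], sample))

# A noisy new recording of Alice still lands nearest her profile.
assert identify((0.85, 0.45, 0.65)) == "alice"
```

The 94% figure suggests real movement signatures are far more distinctive than this three-number caricature, but the principle is the same: each dancer’s habitual motion forms a stable point in feature space.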

It could all have to do with music’s audio features, but then again, cultural norms tell us how we’re supposed to move. Jazz? Let’s swing dance! Metal? HEADBANG!


Citrix ships patches as vulnerable servers come under attack

By John E Dunn

Citrix has issued its first set of patches fixing a nasty vulnerability that’s been hanging over some of its biggest products.

The flaw, identified as CVE-2019-19781 on 17 December 2019, affected Citrix’s Application Delivery Controller (ADC) load balancer and the Citrix Gateway Virtual Private Network (VPN) appliance (previously known as the NetScaler ADC and NetScaler Gateway).

Citrix was vague about what the flaw might allow an attacker to do beyond saying that it “could allow an unauthenticated attacker to perform arbitrary code execution.”

However, it’s been clear from the start that it was serious, an impression reinforced by speculation (based on analysis of Citrix’s proposed mitigations) that the issue allows directory traversal – that is, it offers attackers a way to access restricted directories without having to authenticate.

That’s potentially disastrous – the Citrix Gateway, for example, is used to enable VPN remote access so an attacker able to crawl into a network through that route could exploit that in numerous horrible ways.
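Citrix hasn’t confirmed the details, but a directory traversal bug of the suspected kind usually boils down to joining an attacker-supplied path onto a base directory without normalizing and checking it first. A generic sketch – the paths are illustrative and this is not the actual Citrix code path:

```python
import os.path

BASE = "/var/www/restricted"

def resolve_naive(user_path):
    """Broken: blindly joins, so '../' sequences escape BASE."""
    return os.path.normpath(os.path.join(BASE, user_path))

def resolve_safe(user_path):
    """Fixed: normalize, then refuse anything outside BASE."""
    full = os.path.normpath(os.path.join(BASE, user_path))
    if os.path.commonpath([BASE, full]) != BASE:
        raise PermissionError("path escapes base directory")
    return full

# The naive version happily walks out of the web root...
assert resolve_naive("../../../etc/passwd") == "/etc/passwd"

# ...while the checked version rejects the same input.
try:
    resolve_safe("../../../etc/passwd")
    raise AssertionError("traversal should have been rejected")
except PermissionError:
    pass
```

Citrix’s interim mitigations reportedly worked along similar lines: rejecting requests whose paths contained traversal sequences before they reached the vulnerable handler.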


China and US top user data requests in Apple transparency report

By Lisa Vaas

Governments in the US and China are at the front of the line when it comes to knocking on Apple’s door to request user data relating to fraud/phishing, according to the company’s latest transparency report.

Like any tech company that handles user data, Apple gets different types of requests: those that are made when an account holder is in imminent danger, those from law enforcement agencies (LEA) trying to help people find their lost or stolen devices, those asking for Apple’s help when thieves rip off credit card data so they can buy Apple products or services on somebody else’s dime, and in situations where investigators think an account’s been used to do something illegal.

That last category has proved particularly controversial: the FBI has come knocking on Apple’s door in notable, headline-grabbing cases, including when the FBI was looking to unlock the iPhone of the San Bernardino terrorist and, more recently, when it was looking for help in breaking encryption on the iPhones of the killer in the recent Pensacola mass shooting.

In these instances, Apple famously said no to weakening encryption. Those requests didn’t involve subpoenas, though. The San Bernardino iPhone unlocking request involved a weird court order issued under the dusty All Writs Act of 1789, while the Pensacola unlocking request came in the form of a plain old letter sent from the FBI’s lawyer to Apple’s lawyer.

As far as worldwide government account requests go for the first half of 2019, Apple says that it got a high number from mainland China – a total of 15,666 requests – mostly due to financial fraud and phishing investigations. When it comes to phishing attacks, a single request can cover several devices; Apple counts and reports the number of accounts identified in each request received from each country or region.


January 21, 2020 »

What do online file sharers want with 70,000 Tinder images?

By Danny Bradbury

A researcher has discovered thousands of Tinder users’ images publicly available for free online.

Aaron DeVera, a cybersecurity researcher who works for security company White Ops and also for the NYC Cyber Sexual Assault Taskforce, uncovered a collection of more than 70,000 photographs harvested from the dating app Tinder on several undisclosed websites. Contrary to some press reports, the images are available for free rather than for sale, DeVera said, adding that they found them via a P2P torrent site.

The number of photos doesn’t necessarily represent the number of people affected, as Tinder users may have more than one picture. The data also contained around 16,000 unique Tinder user IDs.

DeVera also took issue with online reports saying that Tinder was hacked, arguing that the service was probably scraped using an automated script:

In my own testing, I observed that I could retrieve my own profile pictures outside the context of the app. The perpetrator of the dump likely did something similar on a larger, automated scale.

What would someone want with these images? Training facial recognition for some nefarious scheme? Possibly. People have taken faces from the site before to build facial recognition data sets. In 2017, a researcher at Google subsidiary Kaggle scraped 40,000 images from Tinder using the company’s API. The researcher uploaded his script to GitHub, although it was subsequently hit by a DMCA takedown notice. He also released the image set under the most liberal Creative Commons license, effectively placing it in the public domain.

However, DeVera has other ideas:

This dump is actually very valuable for fraudsters seeking to operate a persona account on any online platform.

Hackers could create fake online accounts using the images and lure unsuspecting victims into scams.


FBI seizes credentials-for-sale site

By Danny Bradbury

The FBI has seized the domain of a site that sold breached data records, after a multinational effort by law enforcement.

Authorities have arrested two 22-year-old men alleged to have operated the site. Based in Fintona, Northern Ireland, and Arnhem in the Netherlands, they are believed to have made over £200,000 (about $260,000) between them from the site.

The Internet Archive’s Wayback Machine first shows the site surfacing in April 2017, advertising itself as “the Most Extensive Private Database Search Engine”.

The FBI and the District of Columbia explained that the site had harvested over 12 billion records from over 10,000 data breaches, including names, email addresses, usernames, phone numbers, and passwords. The site disclosed records relating to data breaches of sites including StockX, Dubsmash, and MyFitnessPal.

Customers could subscribe for as little as a day, paying a minimum of $2 in return for unlimited access. UK authorities also found links between the site and sales of remote access trojans (RATs) and cryptors (tools that obfuscate malware code to avoid detection). It was available both online and via the dark web service Tor.


FBI to inform election officials about hacking attempts

By Danny Bradbury

File this in the “What? They didn’t do this already?” pile: The FBI has announced that it will tell local election officials when hackers try to infiltrate their systems. Now, when state actors rattle the doors on election systems around the country, the people responsible for operating them will get to hear about it.

This year is shaping up to be the most challenging yet when it comes to election security. In 2020, cyberattacks against the US election will be more sophisticated than they were in the run-up to the 2016 vote. So said Shelby Pierson, the election security threats executive for the Office of the Director of National Intelligence, speaking at an Election Assistance Commission event earlier this month.

It’s probably a good idea, then, for the FBI to warn local and state election officials of hacking attempts, and last week, it announced just that.

For those of you wondering why the FBI wasn’t doing this already, the problem thus far has been the fragmented nature of the US election system. Each state has a chief official in charge of elections, but local governments and officials own and operate election systems on the ground.


Teen entered ‘dark rabbit hole of suicidal content’ online

By Lisa Vaas

You’re fat. You’re worthless. You don’t deserve to be alive.

Those are the kind of comments left on social media posts as innocent as a picture of a flower, as Sarah Lechmere – who has struggled with eating disorders – told the BBC. Social media posts also pointed her to pro-anorexia sites that gave her “tips” on how to self-harm, she said.

This is precisely why UK psychiatrists want to see social media companies forced to hand over their data – and taxed to pay – for research into the harms and benefits of social media use. The report, published by the Royal College of Psychiatrists, contains a foreword written by Ian Russell, the father of Molly Russell, a 14-year-old who committed suicide in 2017 after entering what her father called the “dark rabbit hole of suicidal content” online.

Ian Russell describes how social media’s “pushy algorithms” trapped Molly, sequestering her in a community that encourages suffering people not only to self-harm but to also avoid seeking help:

I have no doubt that social media helped kill my daughter. Having viewed some of the posts Molly had seen, it is clear they would have normalized, encouraged and escalated her depression; persuaded Molly not to ask for help and instead keep it all to herself; and convinced her it was irreversible and that she had no hope.

… Online, Molly found a world that grew in importance to her and its escalating dominance isolated her from the real world. The pushy algorithms of social media helped ensure Molly increasingly connected to her digital life while encouraging her to hide her problems from those of us around her, those who could help Molly find the professional care she needed.

Ian Russell backs the report’s findings – particularly its calls for government and social media companies to do more to protect users from harmful content, both by sharing their data and by funding research through a “turnover tax”. That levy would also pay for training for clinicians, teachers and others working with children, helping them identify children struggling with their mental health and understand how social media might be affecting them.


Facebook and Instagram ban alleged ‘brainwashing’ service

By John E Dunn

Updated to include response from Elliot Shefler.

Have you ever tried to persuade a friend or family member to do something they don’t really want to?

Not easy – the person being persuaded knows you’re trying to persuade them, which makes them more likely to question your motives and resist.

Now imagine there was a way to persuade that individual to agree with your wishes by feeding them advertising on your behalf without them being aware that’s happening.

It’s the principle on which a lot of internet advertising is based, which presumably is where the idea for a startup service called the Spinner came from.

Just as conventional advertising tries to target groups of people, so the Spinner personalizes “subconscious influencing” for a specific person and no one else.

Cease and desist

Facebook and Instagram have just banned the service from their platforms.

According to the BBC, Facebook is so hostile to the Spinner that it’s even sent the company a formal cease and desist.

The problem? Facebook’s letter accuses the Spinner of targeting its users via fake accounts and fake pages, activities which violate the company’s ad policies. A Facebook spokesperson told the BBC:

We have no tolerance for bad actors that try to circumvent our policies and create bad experiences for people on Facebook.


January 17, 2020 »

Oracle’s January 2020 update patches 334 security flaws

By John E Dunn

As the world’s second-largest software company, Oracle has become an organization built on big numbers.

This includes the number of security patches it issues – which with the January 2020 update reached a joint record of 334, matching an identical number released in July 2018.

Unlike rivals such as Microsoft, Oracle releases security patches only every three months, which partly explains the size of its updates – they now routinely head towards 300 fixes.

Another factor is simply the volume of software in the company’s stable – with around a hundred products and product components in January’s update alone.

Something that jumps out is that 60 individuals and companies are credited with reporting January’s batch of flaws to Oracle, including one, Alexander Kornbrust, credited with 41 CVEs on his own.

Oracle, then, has lots of flaws to fix because, as with rival Microsoft, it has lots of people looking for them. This can only be a good thing.

Database Server

A modest 12 CVEs in total, three of which are stated as being remotely exploitable. Five are ranked ‘High’ severity, which in Oracle’s nomenclature is the top severity level, factoring in how easy it would be to exploit.

Oracle Communications applications

A relatively small application category but still able to offer patches for 23 flaws which could be remotely exploited without authentication, six of which have ‘Critical’ CVSS scores above 9.


Google will now accept your iPhone as an authentication key

By Lisa Vaas

On Monday, Google pushed out an update for the iOS version of Smart Lock, its built-in, on-by-default password manager.

Smart Lock – which has been available for Google’s Chrome browser since 2017 – now also lets iOS users set up their device as the second factor in two-factor authentication (2FA), meaning that you no longer have to carry around a separate security key dongle.

Smart Lock for iOS uses the iPhone’s Secure Enclave Processor (SEP), which is built into every iOS device with Touch ID or Face ID. That’s the processor that handles data encryption on the device – a processor that oh, so many law enforcement and hacker types spend so much time complaining about… or, as the case may be, cracking for fun, fame and profit.

After you set it up, you’ll just need your iPhone or iPad, and your usual password, to use in 2FA when you sign in to Google on a desktop using Chrome.

A big plus: it uses a Bluetooth connection, rather than sending a code via SMS that could be intercepted in a SIM swap attack. In a SIM-swap fraud attack, a hijacker gets their hands on a phone number – typically by sweet-talking/social-engineering it away from its rightful owner – after which they can intercept the codes sent for 2FA that the phone number’s rightful owner set up to protect their accounts.

SIM swap fraud is one of the simplest, and therefore the most popular, ways for crooks to skirt the protection of 2FA, according to a warning that the FBI sent to US companies in October 2019.

Given that Apple introduced SEP – which stores encrypted security keys on an iOS device – with the iPhone 5S, it won’t work on earlier models. You’ll need to be running iOS 10 or later to run the Smart Lock app.


Facial recognition is real-life ‘Black Mirror’ stuff, Ocasio-Cortez says

By Lisa Vaas

During a House hearing on Wednesday, Rep. Alexandria Ocasio-Cortez said that the spread of surveillance via ubiquitous facial recognition is like something out of the tech dystopia TV show “Black Mirror.”

This is some real-life “Black Mirror” stuff that we’re seeing here.

Call this episode “Surveil Them While They’re Obliviously Playing With Puppy Dog Filters.”

Wednesday’s was the third hearing on the topic for the House Oversight and Reform Committee, which is working on legislation to address concerns about the increasingly pervasive technology. In Wednesday’s hearing, Ocasio-Cortez called out the technology’s hidden dangers – one of which is that people don’t really understand how widespread it is.

At one point, Ocasio-Cortez asked Meredith Whittaker – co-founder and co-director of New York University’s AI Now Institute, who had noted in the hearing that facial recognition is a potential tool of authoritarian regimes – to remind the committee of some of the common ways that companies collect our facial recognition data.

Whittaker responded with a laundry list: she said that companies scrape our biometric data from sites like Flickr, from Wikipedia, and from “massive networked market reach” such as that of Facebook.

Ocasio-Cortez: So, if you’ve ever posted a photo of yourself to Facebook, then that could be used in a facial recognition database?

Whittaker: Absolutely – by Facebook and potentially others.

Ocasio-Cortez: Could using a Snapchat or Instagram filter help hone an algorithm for facial recognition?

Whittaker: Absolutely.

Ocasio-Cortez: Can surveillance camera footage that you don’t even know is being taken of you be used for facial recognition?

Whittaker: Yes, and cameras are being designed for that purpose now.

This is a problem, the New York representative suggested:

People think they’re going to put on a cute filter and have puppy dog ears, and not realize that that data’s being collected by a corporation or the state, depending on what country you’re in, in order to …surveil you, potentially for the rest of your life.

Whittaker’s response: Yes. And no, average consumers aren’t aware of how companies are collecting and storing their facial recognition data.


EDRi’s guidelines call for more ethical websites

By Danny Bradbury

Most of us want to be good online citizens. That includes developing websites that have their visitors’ best interests at heart. Yet there are so many ways to get that wrong. Even a slight misstep could put visitors’ privacy or security at risk, or exclude people that might be less able than others. How can you know if you’re doing it right?

Enter European Digital Rights (EDRi), a collection of human rights groups across Europe, which has published a set of guidelines for ethical website development. It explains:

The goal of the project, which started more than a year ago, was to provide guidance to developers on how to move away from third-party infected, data-leaking, unethical and unsafe practices.

The document lists recommendations covering areas including security and privacy while listing alternatives to free online services that slurp up users’ data.

One recommendation is to host your own resources as much as possible. That means avoiding call-outs for things like third-party cookies, and avoiding frames with third-party content. It also means avoiding call-outs for CSS files, images, font files, and JavaScript libraries.
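A quick way to gauge how a page measures up is to enumerate the resources it pulls from other hosts. Here's a minimal sketch using only Python's standard library; the host names are placeholders, not anything EDRi mentions:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyAuditor(HTMLParser):
    """Collect resource URLs (scripts, images, frames, stylesheets)
    that point at a host other than the page's own."""
    def __init__(self, own_host: str):
        super().__init__()
        self.own_host = own_host
        self.third_party = []

    def handle_starttag(self, tag, attrs):
        if tag in {"script", "img", "iframe", "link"}:
            for name, value in attrs:
                if name in {"src", "href"} and value:
                    host = urlparse(value).netloc
                    if host and host != self.own_host:
                        self.third_party.append(value)

auditor = ThirdPartyAuditor("example.org")
auditor.feed('<script src="https://cdn.example.net/lib.js"></script>'
             '<img src="/local.png">')
# auditor.third_party now lists only the CDN-hosted script
```

Anything the auditor flags is a candidate for self-hosting.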

The document adds:

If downloading a resource, such as a JavaScript or font file, is not allowed by the terms of its provider, then they may not be privacy-friendly and should therefore be avoided.

It calls out large tech firms as companies offering services that ethical web developers should avoid, and provides a list of alternatives in areas including analytics, video players, and online maps. It points readers to Prism Break, a list of alternative online services that don’t track their users.

When it comes to security, a site can use DNSSEC to authenticate DNS queries, says the doc, also recommending HTTPS. It also asks website owners to provide a Tor-compatible version of their site using the Tor publishing tool Onionshare.


January 15, 2020 »

Microsoft fixes critical bugs in CryptoAPI, RD Gateway and .NET

By Danny Bradbury

Among the most serious bugs were remote code execution (RCE) flaws affecting the Windows Remote Desktop Gateway, a Microsoft service that lets authorised remote users connect to resources on a network over the Remote Desktop Protocol (RDP).

These pre-authentication bugs don’t require any user interaction to exploit, and involve an attacker sending a specially crafted request via RDP. Labelled CVE-2020-0609 through 11, the bugs affect Windows Server 2012 and 2012 R2, along with Windows Server 2016 and 2019. Rated 9.8 in CVSS, these are red hot bugs that companies should fix immediately.

In an analysis of the Microsoft patches, Johannes Ullrich at SANS explained:

Remember BlueKeep? The RD Gateway is used to authenticate users and allow access to internal RDP services. As a result, RD Gateway is often exposed and used to protect the actual RDP servers from exploitation.

There were several other critical bugs in Microsoft’s patch this month, all overshadowed by the cryptographic whopper that we cover elsewhere but still important to everyday users and admins.

CVE-2020-0603 is a critical RCE bug in ASP.NET Core stemming from improper object handling in memory. A user would have to open a specially crafted file to be hit, which an attacker could send via email.


Malicious npm package taken down after Microsoft warning

By John E Dunn

Criminals have been caught trying to sneak a malicious package on to the popular Node.js platform npm (Node Package Manager).

The problem package, 1337qq-js, was uploaded to npm on 31 December, after which it was downloaded at least 32 times according to figures from npm-stat.

According to a security advisory announcing its removal, the package’s suspicious behaviour was first noticed by Microsoft’s Vulnerability Research team, which reported it to npm on 13 January 2020:

The package exfiltrates sensitive information through install scripts. It targets UNIX systems.

The data it steals includes:

  • Environment variables
  • Running processes
  • /etc/hosts
  • uname -a
  • npmrc file

Any of these could lead to trouble, especially the theft of environment variables which can include API tokens and, in some cases, hardcoded passwords.

Anyone unlucky enough to have downloaded this will need to rotate those as a matter of urgency, in addition to uninstalling 1337qq-js itself.
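Install scripts are the hook abused here: npm automatically runs a package's preinstall, install and postinstall commands at install time. As a rough defensive sketch (the directory layout and package names are hypothetical), you could flag installed packages that declare such hooks for manual review:

```python
import json
from pathlib import Path

# npm lifecycle hooks that execute arbitrary commands at install time
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def packages_with_install_scripts(node_modules: Path) -> list:
    """Return names of packages under `node_modules` whose package.json
    declares an install-time script."""
    flagged = []
    for manifest in node_modules.glob("*/package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        if INSTALL_HOOKS & scripts.keys():
            flagged.append(manifest.parent.name)
    return flagged
```

A flagged package isn't necessarily malicious, since many legitimate packages compile native code at install time, but it's exactly the set worth eyeballing.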


Peekaboo Moments baby-recording app has a bad database booboo

By Lisa Vaas

No need to wait until you’ve gurgled out of your mother’s womb to experience the joys of having your privacy breached, thanks to a mobile app called Peekaboo Moments.

Bithouse Inc. – the developer of the mobile app, which is designed to capture photos, audio, weight, length, video and diaries of tots starting as early as their zygote days – has left an Elasticsearch database flapping wide open: thousands of infants’ videos and images exposed, unsecured, and ready to be babbled out to any internet busybody who knows where to look.

The database was discovered by Dan Ehrlich, who runs the Texas-based cybersec startup Twelve Security. Ehrlich told Information Security Media Group (ISMG) that the 100GB database contains more than 70 million log files, with data going back as far as March 2019. The logs record when someone uses the Peekaboo app, what actions they took and when.

And my oh my, what actions you can take! As the Peekaboo Moments developer croons on the app’s Google Play listing, users can…

Take photos, videos for your little ones! Starting from pregnancy, newborn to every first ‘papa’ & ‘mama’, these memories will be auto-organized by age of child.

Users can also record the weight, length and birth dates of their babies, as well as their location data, in latitude and longitude, down to four decimal places: an accuracy that translates to within about 30 feet. In other words, this could be Baby’s First PII Breach.
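The arithmetic behind that accuracy figure is simple: one degree of latitude spans roughly 111 km, so the fourth decimal place resolves to about 11 metres, and a degree of longitude shrinks further by the cosine of the latitude. A quick sketch:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def latlong_precision_m(decimal_places: int, latitude_deg: float = 0.0):
    """Ground distance represented by the last digit of a coordinate
    given to `decimal_places` decimal places."""
    step_deg = 10.0 ** -decimal_places
    metres_per_deg_lat = math.pi * EARTH_RADIUS_M / 180  # ~111 km per degree
    lat_m = step_deg * metres_per_deg_lat
    lon_m = lat_m * math.cos(math.radians(latitude_deg))  # narrows away from equator
    return lat_m, lon_m

lat_m, lon_m = latlong_precision_m(4)  # four decimal places, at the equator
# lat_m works out to roughly 11 metres
```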

The open database has exposed at least 800,000 email addresses, detailed device data, and links to photos and videos. The frosting on the cupcake: Ehrlich found that the Peekaboo Moments’ API keys for Facebook – which enable users to take content they’ve uploaded to Facebook and post it in the Peekaboo app – have also been exposed, potentially enabling an attacker to get access to content on users’ Facebook pages.


Apple says no to unlocking shooter’s phone; AG and Trump lash back

By Lisa Vaas

No surprise here: Apple has yet again said no to the FBI’s request to break iOS encryption – this time, as it investigates the 6 December mass shooting at a naval base in Pensacola, Florida.

No surprise redux: Attorney General William Barr is using Apple’s “No” as a “perfect” illustration of why “the public needs to be able to get access to digital evidence”. In other words, this is why we need a backdoor, the FBI says.

We have asked Apple for its help in unlocking the shooter’s phones. So far, Apple has not given any substantive assistance. This situation perfectly illustrates why it is critical that the public be able to get access to digital evidence once it’s received a court order based on probable cause.

In a press conference on Monday, Barr confirmed that the FBI’s investigation has uncovered multiple anti-American screeds posted by the killer, Mohammed Saeed Alshamrani, a member of the Saudi Royal Air Force who was taking flight classes in Florida. He murdered three young US Navy students and wounded eight others before being shot to death by authorities.

Barr said that the evidence points to the shooter being motivated by Jihadist ideology, as can be seen in messages Alshamrani posted to social media. One message stated that “the countdown has begun.” He posted messages up to two hours before the attack, and the FBI is keen to know who else he might have been communicating with.


Fleeceware is back in Google Play – massive fees for not much at all

By Paul Ducklin

Last September, we wrote about “fleeceware”, a term we coined to describe apps that charge huge amounts but give you very little in return.

Technically, the apps themselves aren’t malware, because the code in the app doesn’t do anything illegal, dangerous, sneaky, snoopy, subversive or surreptitious.

The treachery lies in the payment model – the fleeceware we identified back in September 2019 didn’t charge a fee for the app, but instead sold you a subscription to go along with the app.

And what subscriptions they were!

How about a QR code reader, much like the one already built into your mobile phone’s camera app, that was free for a three-day trial…

…but then suddenly cost you a massive €104.99 even if you uninstalled the app straight after trying it and never used it again.

The app’s free, don’t forget; it’s the subscription that you’re being charged for, and Google permits app developers to ask that sort of money.


‘Cable Haunt’ vulnerability exposes 200 million cable modem users

By John E Dunn

A fortnight into 2020 and we have the first security flaw considered important enough to be given its own name: Cable Haunt – complete with eye-catching logo.

First discovered by Danish company Lyrebirds some time ago, Cable Haunt is an unusual flaw which in Europe alone is said to affect up to 200 million cable modems based on the Broadcom platform.

Specifically, the flaw is in a normally hidden software layer called the Spectrum Analyzer (SA), used by Internet Service Providers (ISPs) to troubleshoot a subscriber’s connection quality.

According to Lyrebirds, the analyzer has several problems, starting with the fact that the WebSocket interface used to control the tool from a web browser is unsecured.

Because parameters sent via this are not restricted by the modem, it accepts JavaScript running in the browser – which gives attackers a way in as long as they can reach the browser (although not in Firefox, apparently).

Using plain HTTPS requests instead of an exposed WebSocket would have dodged that bullet, because ordinary cross-origin requests are policed by the browser’s Cross-Origin Resource Sharing (CORS) rules – WebSocket connections are not.

Having to reach a browser inside the network with access to the modem explains why the flaw is given the apparently ‘medium’ CVSS rating of 4.8. The qualification to this, of course, is that remotely compromising a browser is well within the reach of a competent hacker.
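Browsers do attach an Origin header to WebSocket handshakes; the catch is that, unlike ordinary cross-origin requests, nothing is rejected unless the server itself checks that header. A minimal sketch of the missing check (the modem address shown is a hypothetical example, not a real Broadcom default):

```python
# Illustrative only: the heart of the missing defence is refusing
# WebSocket handshakes whose Origin header isn't the modem's own UI.
ALLOWED_ORIGINS = {"http://192.168.100.1"}  # hypothetical modem UI origin

def handshake_allowed(headers: dict) -> bool:
    """Accept a WebSocket upgrade request only from an expected Origin."""
    return headers.get("Origin", "") in ALLOWED_ORIGINS

# JavaScript served from a malicious page presents that page's origin:
handshake_allowed({"Origin": "http://evil.example"})   # -> False
handshake_allowed({"Origin": "http://192.168.100.1"})  # -> True
```

With a check like this in place, JavaScript running on an arbitrary web page could still attempt the connection, but the handshake would simply be refused.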


January 14, 2020 »

Google tests biometric authentication for Android autofill

By Danny Bradbury

Google is testing out a feature to make Android’s built-in password manager safer, according to online sleuths who have picked apart its software. The update, still in development, concerns the mobile operating system’s autofill feature.

In the past, entering passwords into websites and apps on your mobile phone was a huge pain because of the way mobile operating systems locked down applications. In the bad old days, using a password manager like 1Password or Dashlane on an Android device was difficult, because there was no built-in support that connected them to other apps and websites so that they could automatically fill in your credentials for you.

Instead, they’d use Android’s accessibility setting as a bridge to other apps, but it didn’t work perfectly and you had to configure it manually to begin with. The alternative was even worse – opening the password manager, looking up the password, and then copying and pasting it into the app or site you were accessing.

The answer came in the form of autofill, which lets the mobile OS fill in the password for you from a trusted list. Google introduced this feature in Android 8 (code-named Oreo) in August 2017. You could use it to take autofill input from third-party password managers, or if you wanted to keep everything in your Google account, you could use autofill with Google’s own password management service.

The problem with autofill when using Google’s own password manager was that it doesn’t ask for any extra authorization. You tap the part of the form to fill out your own credentials, and it collects the data from Google’s password manager and pastes it in without checking who you are. That means if someone else grabs your phone while you’re distracted, they could potentially log in as you.


Lottery hacker gets 9 months for his £5 cut of the loot

By Lisa Vaas

Back in November 2016, 26,500 accounts for the UK’s National Lottery got credential-stuffed like they were a bunch of Thanksgiving turkeys.

And last week, 29-year-old Anwar Batson from London, who supplied his criminal buddies with the brute-force, automated password-guessing tool behind the credential-stuffing attack – a Dark Web-sold hacking tool called Sentry MBA – was sentenced to nine months in jail.

All this, for what? The shrinky-dinky sum of £5 (USD $6.50), that’s what. As The Register reports, that was his agreed-upon cut of whatever ill-gotten goods the thieves managed to pry out of accounts.

On Friday, Crown Prosecutor Suki Dhadda told the court that Batson had downloaded Sentry MBA and joined a chat group discussing the software and swapping the configuration files necessary to use it. Batson, the father of one, “counseled others on how to hack” and “enabled them to successfully use Sentry MBA to hack others’ accounts,” Dhadda said.

As far back as May 2016, Sentry MBA was considered the most popular tool for this kind of attack, which involves taking sets of breached credentials, combining them with configuration files specific to a targeted site or service, and using a tool like Sentry MBA to plug the credentials in automatically and see which ones get a crook into a live account.

If account holders have reused passcodes across sites/services, there’s much more of a chance that their credentials will get a crook into a targeted site/service. Which is why it is really, truly a bad idea to use the same password on different sites!
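One practical counter-measure is to check whether a password has already leaked before accepting it. Have I Been Pwned’s Pwned Passwords range API does this with k-anonymity: the client sends only the first five hex characters of the password’s SHA-1 digest and matches the remainder locally, so the password itself never leaves the machine. A sketch of the client-side part:

```python
import hashlib

def hibp_range_parts(password: str):
    """Split a password's SHA-1 digest into the 5-character prefix sent
    to the Have I Been Pwned range API and the 35-character suffix that
    is matched locally against the API's response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
# Query https://api.pwnedpasswords.com/range/<prefix> and search the
# response for <suffix> to learn how many breaches contain the password.
```

Because only the five-character prefix is ever transmitted, the service cannot tell which password was being checked.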


Microsoft now reviewing Skype audio in ‘secure’ places (not China)

By Lisa Vaas

Following reports about text transcriptions of live Skype calls being vetted by humans, meaning that sensitive conversations could have been bugged, Microsoft says it’s moved its human grading of Cortana and Skype recordings into “secure facilities”, none of which are in China.

On Friday, The Guardian published a report after talking to a former Microsoft contractor who lived in Beijing and transcribed thousands of audio recordings from Skype and the company’s Cortana voice assistant – all with little cybersecurity protection, either from hackers or from potential interception by the government.

The former contractor said that he spent two years reviewing potentially sensitive recordings for Microsoft, with “no security measures”, often working from home on his personal laptop. He told the Guardian that Microsoft workers accessed the clips through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet.

They received no help to protect the recordings from eavesdroppers, be they Chinese government, disgruntled workers, or non-state hackers, and were even told to work off new Microsoft accounts that all shared the same password – for “ease of management.”

The Guardian quoted the former contractor:

There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details.

Being British, he was put to work listening to people whose Microsoft devices were set to British English. After a while, he was allowed to work from home in Beijing, where he used a simple username and password to access the clips – a set of login credentials that he said were emailed to new contractors in plaintext. The password was the same for every employee who joined in any given year, he said.


Snake alert! This ransomware is not a game…

By Paul Ducklin

Here’s some goodish news: The Snake ransomware seems to have made the news last week on account of its name rather than its prevalence.

Because, well, SNAKE!

Like most ransomware, Snake doesn’t touch your operating system files and programs, so your computer will still boot up, log in, and let you open your favorite apps, so that in purely technical terms you have a working system…

…but all your important data files, such as documents, spreadsheets, photos, videos, music, tax returns, business plans, accounts payable and accounts receivable, are scrambled with a randomly chosen encryption key.

Scrambled files consist of the encrypted content written back over the original data, with decryption information added at the end.

The original filename and directory are recorded, the decryption key is stored too, and the special tag EKANS, which is SNAKE written backwards, finishes off the encrypted file.

Note that the decryption key for each file is itself encrypted using public-key encryption, which is a special sort of encryption algorithm in which there are two keys, rather than one, so that the key used to lock data can’t be used to unlock it.

The key used for locking data is called the public key, because you can reveal it to anyone; the unlocking key is called the private key, because as long as you keep it private, you’re the only one who can later unlock the encrypted data.
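Putting those pieces together, a Snake-style scrambled file can be sketched as below. The XOR “cipher” and the exact footer field layout here are invented stand-ins for illustration; the real malware uses strong symmetric encryption for the content and the attacker’s public key to lock each per-file key.

```python
import struct

MAGIC = b"EKANS"  # "SNAKE" reversed - the tag that finishes off each file

def xor_scramble(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher for illustration only; XOR with the same key
    # both scrambles and unscrambles.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_scrambled_image(filename: str, content: bytes,
                          file_key: bytes, locked_key: bytes) -> bytes:
    """Lay out a Snake-style scrambled file in memory: encrypted content
    first, then a footer recording the original name and the (publicly
    locked) per-file key, finished off with the EKANS tag."""
    name = filename.encode("utf-8")
    footer = (struct.pack("<H", len(name)) + name +
              struct.pack("<H", len(locked_key)) + locked_key +
              MAGIC)
    return xor_scramble(content, file_key) + footer
```

The point of the layout is that the attacker, holding the private key, can recover the per-file key from the footer and reverse the scrambling, while the victim cannot.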


Powerful GPG collision attack spells the end for SHA-1

By Danny Bradbury

New research has heightened an already urgent call to abandon SHA-1, a cryptographic algorithm still used in many popular online services.

In a paper called SHA-1 is a Shambles, researchers Gaëtan Leurent and Thomas Peyrin have demonstrated a new, powerful attack on the system that could enable attackers to fake digital certificates for as little as $45,000.

Leurent, from INRIA in France, and Peyrin, from the Nanyang Technological University in Singapore, demonstrated their attack by creating a fake digital certificate using the GNU Privacy Guard (GPG or GnuPG) system.

Published in 1995, SHA-1 is a hashing function that creates a digital fingerprint calculated from a block of data such as a file.

Hashes of this sort serve two useful purposes: they let two parties confirm they hold the same file without having to exchange the entire file again for verification; and they let you identify a file uniquely (or as good as uniquely) for later, without having to share its actual contents now.

This relies on one of several properties required of a cryptographic hashing function, namely that it should be impossible (or as good as impossible) to create two files that have the same hash.

That’s known as a collision, and it subverts the idea that a hash pinpoints a specific file.
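The fingerprint property is easy to see with Python’s hashlib: even a one-character change to the input produces a completely different digest:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-1 digest of `data` as 40 hex characters."""
    return hashlib.sha1(data).hexdigest()

a = fingerprint(b"pay Alice $100")
b = fingerprint(b"pay Alice $900")
# a and b are both 40 hex characters long, yet bear no resemblance
# to each other - which is why a matching hash is taken as proof
# of a matching file, and why collisions break that assumption
```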

People had long suspected weaknesses in SHA-1, but then in 2017, researchers at CWI Amsterdam along with Google successfully performed a collision attack against the algorithm.

They were able to craft two different files that shared a common prefix yet produced identical SHA-1 hashes – proof that collisions were practical rather than merely theoretical.


Reddit bans ‘impersonation,’ but satire and parody are still OK

By Lisa Vaas

When it comes to deepfakes, don’t worry: Reddit says it likes seeing Nic Cage in unexpected places just as much as you do.

What it doesn’t like: mimicry done with malicious intent. Reddit had already banned pornographic deepfakes in 2018. Now, in the run-up to the 2020 US presidential election, it’s expanded its deepfake ban: Reddit is now prohibiting impersonation, including domains that mimic others.

Satire and parody are still safe, a Reddit admin said on Thursday in an announcement about the updated policy.

This doesn’t apply to all deepfake or manipulated content – just that which is actually misleading in a malicious way.

Here’s the updated policy:

Do not impersonate an individual or entity

Reddit does not allow content that impersonates individuals or entities in a misleading or deceptive manner. This not only includes using a Reddit account to impersonate someone, but also encompasses things such as domains that mimic others, as well as deepfakes or other manipulated content presented to mislead, or falsely attributed to an individual or entity. While we permit satire and parody, we will always take into account the context of any particular content.

Reddit says the “classic” case of impersonation is a Reddit username that tries to come off as another person or thing, be it a politician, brand, Reddit admin, or anybody/anything else. But from time to time, Redditors post things that take it beyond that and into the realm of serious misinformation attempts, such as…

…fake articles falsely attributed to real journalists, forged election communications purporting to come from real agencies or officials, or scammy domains posing as those of a particular news outlet or politician (always be sure to check URLs closely – .co does NOT equal .com!).

Impersonation is actually near the bottom of what gets reported on Reddit, the Reddit admin, u/LastBluejay, said. But even though impersonation is one of the rarest report classes, the platform wants to stay on the safe side:

We also wanted to hedge against things that we haven’t seen much of to date, but could see in the future, such as malicious deepfakes of politicians, for example, or other, lower-tech forged or manipulated content that misleads.

Reddit isn’t the only one who feels that way. The impersonation ban comes just days after Facebook banned deepfakes.


January 10, 2020 »

Is the Y2K bug alive after all?

By Paul Ducklin

Right at the end of 2019, we wrote about the “decade-ending Y2K bug that wasn’t” in a serious article with a humorous side.

In that article, we described a perennial “gotcha” facing Java programmers tasked with the simple job of printing out the year.

If you tell Java to treat the date as four digits by using the abbreviation YYYY, which is a very common way of denoting the year in all sorts of other apps, you will get the right answer most of the time…

…but in some years, the answer comes out exactly one year off for just a few days at the start or the end of the calendar year.

Memories of the Y2K bug!

Y2K, or the millennium bug, was where programs that tried to save memory by storing dates as “99” instead of “1999” got confused at the end of 1999, because the sum 99+1 rolls back to 00 when you only have two digits to play with.

But it turns out that the Java bug that people were comparing to Y2K was a completely different beast.

The bug in the Java case is that Java’s shorthand to denote the current year in four digits is yyyy, and not YYYY – it really matters whether you use capital letters or not.

Confusingly, and for many people, surprisingly, the text YYYY in a Java program denotes the year in which at least half of the current week lies, as used for things like payroll and weekly accounts.

So if there are an odd few days at the start or end of a year, they’re transferred to the previous or following year when you count in weeks to do your accounts.
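The same trap exists outside Java. Python’s strftime, for example, distinguishes %Y (the calendar year) from %G (the ISO week-based year, the closest analogue of Java’s locale-dependent YYYY week year), and the two disagree on exactly those odd few days:

```python
from datetime import date

d = date(2019, 12, 30)    # a Monday that ISO rules count as week 1 of 2020
print(d.strftime("%Y"))   # calendar year: 2019
print(d.strftime("%G"))   # week-based year: 2020
```

Unless you are genuinely doing week-based accounting, the calendar-year code (%Y in Python, yyyy in Java) is almost always the one you want.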


Hackers use system weakness to rattle doors on Citrix systems

By Danny Bradbury

Attackers are using a serious bug in Citrix products to scan the internet for weaknesses, according to experts.

The flaw, CVE-2019-19781, affects the company’s Citrix Application Delivery Controller (ADC), formerly known as NetScaler ADC, and its Citrix Gateway. The first product is a piece of network equipment that ensures online applications perform well, using load balancing and application monitoring. The second provides remote access to applications on a company’s network or in the cloud. An attacker could use the bug to execute arbitrary code, according to Citrix, which published an advisory on 17 December.

Positive Technologies, which wrote a report of the bug on 23 December, warned that 80,000 companies were at risk. NIST gave it a 9.8 (Critical) CVSS 3.0 score.

A bug that lets attackers execute arbitrary code without even needing an account is particularly serious. Positive Technologies explained:

This vulnerability allows any unauthorized attacker to not only access published applications, but also attack other resources of the company’s internal network from the Citrix server.

Although Citrix hasn’t released details of the bug in its advisory, several researchers have suggested that it is a directory traversal vulnerability that allows someone from the outside to reach a directory that they shouldn’t access.
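The general shape of the defence against directory traversal is to resolve the requested path and insist that the result still lies inside the intended base directory. A sketch of that class of check (not Citrix’s actual code):

```python
import os

def is_safe(base: str, requested: str) -> bool:
    """True only if `requested`, resolved relative to `base`, stays
    inside `base` - i.e. there is no '../' escape."""
    base = os.path.abspath(base)
    target = os.path.abspath(os.path.normpath(os.path.join(base, requested)))
    return os.path.commonpath([base, target]) == base

# is_safe("/var/www", "index.html")        -> True
# is_safe("/var/www", "../../etc/passwd")  -> False
```

The key step is normalising *before* checking: comparing raw strings would let sequences like `..%2F` or `a/../../` slip through once the server decodes them.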

No public proof-of-concept exploit code is circulating at the moment, but the SANS Internet Storm Center demonstrated on 31 December its ability to exploit weaknesses in the code and upload files to the system without “any special tools or advanced skills”.


Ransomware pounces on California schools, Las Vegas trounces attack

By Lisa Vaas

We’ve got some bad ransomware news, and we’ve got some good, cyberattack-THWARTED! news.

First, the bad: over the holiday break, crooks who are so morally bankrupt that they target the organizations that serve children pounced on schools in the US city of Pittsburg, California.

On Monday, the superintendent of Pittsburg Unified School District, Janet Schulze, put up a message about the ransomware attack on the district’s Facebook page.

She said that any and all affected and potentially affected servers had been taken offline, leaving the district’s school system without email or internet access. Phones were working, though, and the plan was to forge ahead and open school on Tuesday.

Twenty-eight minutes later, Schulze put up an update, saying that the show would indeed go on, but old-school style: sans laptops, sans internet.

We are all set for school tomorrow! We will be teaching and learning like ‘back in the day’… without laptops and internet. Our schools have access to student information and our phones are working. We still are not able to receive email, so please call your child’s school if needed.

As of Monday, the district was working with two external IT firms and attorneys who, Schulze said, are all specialists in this kind of e-misery. She also said that the district had notified law enforcement and that the investigation and repair work were still underway.

The cybersecurity teams that are helping the school system to get back on its feet hadn’t detected any compromise of personal data as of Monday.

Cut off from the internet and email, the district’s secondary schools were given an extension – until Monday 13 January – to enter first-semester grades into the grading system. A slice of good news: the cafeteria wasn’t affected and could therefore be counted on to dish up meals for the hungry students.

Schulze didn’t give any indication as to what ransom the crooks are demanding, nor whether or not the district plans to fork anything over.


January 9, 2020 »

Browser zero day: Update your Firefox right now!

By John E Dunn

Just two days after releasing Firefox 72, Mozilla has issued an update to patch a critical zero-day flaw.

According to an advisory on Mozilla’s website, the issue identified as CVE-2019-17026 is a type confusion bug affecting Firefox’s IonMonkey JavaScript Just-in-Time (JIT) compiler.

Simply put, a JIT compiler takes JavaScript source code, as you’ll find in most web pages these days, and converts it to executable computer code, so that the JavaScript runs directly inside Firefox as if it were a built-in part of the app.

This typically improves performance, often noticeably.

Ironically, most modern apps implement what’s called DEP, short for Data Execution Prevention, a threat mitigation that helps stop crooks from sending over what looks like innocent data but then tricking the app into running that data as if it were an already-trusted program.

(Code that’s disguised as data is known in the jargon as shellcode.)

DEP means that once a program is running, the data it consumes – especially if it originates from an untrusted source – can’t be turned into executing code, whether accidentally or otherwise.

But JIT compilers have to exempt themselves from DEP controls, because converting data to code and running it is precisely what they do – and that’s why crooks love to probe for flaws in JIT systems.


Apple’s scanning iCloud photos for child abuse images

By Lisa Vaas

Apple has confirmed that it’s automatically scanning images backed up to iCloud to ferret out child abuse images.

As the Telegraph reports, Apple chief privacy officer Jane Horvath, speaking at the Consumer Electronics Show in Las Vegas this week, said that this is the way that it’s helping to fight child exploitation, as opposed to breaking encryption.

[Compromising encryption is] not the way we’re solving these issues… We are utilizing some technologies to help screen for child sexual abuse material.

Horvath’s comments make sense in the context of the back-and-forth over breaking end-to-end encryption. Last month, during a Senate Judiciary Committee hearing that was attended by Apple and Facebook representatives who testified about the worth of encryption that hasn’t been weakened, Sen. Lindsey Graham asserted his belief that unbroken encryption provides a “safe haven” for child abusers:

You’re going to find a way to do this or we’re going to do this for you.

We’re not going to live in a world where a bunch of child abusers have a safe haven to practice their craft. Period. End of discussion.

Though some say that Apple’s strenuous Privacy-R-Us marketing campaign is hypocritical, it’s certainly earned a lot of punches on its frequent-court-appearance card when it comes to fighting off demands to break its encryption.

How, then, does its allegiance to privacy jibe with the automatic scanning of users’ iCloud content?


Google voice Assistant gets new privacy ‘undo’ commands

By John E Dunn

Google’s controversial voice Assistant is getting a series of new commands designed to work like privacy-centric ‘undo’ buttons.

Assistant, of course, is inside an estimated one billion devices, including Android smartphones, countless brands of home smart speaker, and TV sets based on the Android OS.

But these are only the pioneers for an expanding AI empire. This year Assistant should start popping up in headphones, soundbars, ‘smart’ computer displays and, via Android Auto, more motor cars.

If this sounds oppressive, you could be in for a tough few years, because Assistant (and rivals Alexa, Siri, Cortana, and Samsung’s Bixby) could soon be in anything and everything a human being might reasonably expect to perform a task.

And yet 2019 was the year Google finally got the message that, if it wasn’t careful, the system’s hidden risks could quickly become a privacy itch users can’t scratch.

This included controversies over who might be listening to recordings without users having given consent. Others have likened it to a poorly regulated privacy-killing genie Google won’t voluntarily put back in the bottle.


FBI asks Apple to help it unlock iPhones of naval base shooter

By Lisa Vaas

The FBI has asked Apple to help it unlock two iPhones that belonged to the murderer Mohammed Saeed Alshamrani, who shot and killed three young US Navy students in a shooting spree at a Florida naval base last month.

Alshamrani also injured eight others before he himself was shot to death.

Late on Monday, FBI General Counsel Dana Boente sent the letter to Apple’s general counsel. The letter hasn’t been made public, but the FBI shared it with NBC, which first reported on it.

In the letter, the FBI said that it’s got a subpoena allowing it to search content on the iPhones, both of which are password-protected (and one of which Alshamrani reportedly shot and damaged, further complicating forensics on the device and its data). But so far, investigators haven’t had any luck at guessing the passcodes, the letter said.

And yes, the FBI has tried the tactics it used when it was trying to unlock the iPhone of San Bernardino terrorist Syed Farook. Namely, the bureau says that it’s asked for help from other federal agencies – it sent the iPhones to the FBI’s crime lab in Quantico, Virginia – and from experts in other countries, as well as “familiar contacts in the third-party vendor community.”

That could be a reference to the tool that the FBI used to finally break into Farook’s encrypted phone and thereby render moot the FBI versus Apple legal battle over encryption.

Though the killer was believed to have been acting alone, the FBI said in its letter that it’s not ruling anything out before the investigation is complete:

Even though the shooter is dead, [agents want to search his phones] out of an abundance of caution.

Apple sent a statement to NBC saying that it’s helping the government:

We have the greatest respect for law enforcement and have always worked cooperatively to help in their investigations. When the FBI requested information from us relating to this case a month ago, we gave them all of the data in our possession and we will continue to support them with the data we have available.


Google’s Project Zero highlights patch quality with policy tweak

By Danny Bradbury

Google’s Project Zero bug-hunting team has tweaked its 90-day responsible disclosure policy to help improve the quality and adoption of vendor patches.

Project Zero is a group of researchers that looks for zero-day vulnerabilities in technology products and services. When it finds a bug, the team informs the vendor responsible for the product and opens an internal bug report known as a tracker, shielded from public view.

The vendor then has 90 days to fix the bug before Project Zero lifts the veil. This policy, known as responsible disclosure, sits at the midpoint of industry norms: US CERT, for example, goes public 45 days after discovering a bug, while the Zero Day Initiative waits 120 days.

Google says that 97.7% of the bugs it reports are fixed within deadline, up from the 95.5% that it reported in the period between February 2015 and July 2019. So now, it’s expanding its focus from faster bug fixes to better ones. With that in mind, the Project Zero team has outlined some changes to its disclosure policy that it hopes will tighten up its handling of security bugs.

The most significant sees it switch to a standard policy of disclosing a vulnerability after 90 days. In the past, it has used that cutoff as the latest possible disclosure time, but has revealed a bug as soon as a vendor announced a fix. Now, in an effort to ensure that vendors thoroughly test their patches rather than rushing them out the door, it will wait for the full 90-day period before disclosing a flaw, even if the vendor has fixed it weeks beforehand.

Holding off on public bug reports should also make it easier to get patches out to users. Google explained:

…some vendors hold the view that our disclosures prior to significant patch adoption are harmful. Though we disagree (since this information is already public and being used by attackers per our FAQ here), under this new policy, we expect that vendors with this view will be incentivized to patch faster, as faster patches will allow them “additional time” for patch adoption.


REvil ransomware exploiting VPN flaws made public last April

By John E Dunn

Researchers report flaws, vendors issue patches, organizations apply them – and everyone lives happily ever after. Right?

Not always. Sometimes, the middle element of that chain – the bit where organizations apply patches – can take months to happen. Sometimes it doesn’t happen at all.

A relaxed patching cycle has become a luxury that security can no longer afford.

Take, for instance, this week’s revelation by researcher Kevin Beaumont that serious vulnerabilities in Pulse Secure’s Zero Trust business VPN (virtual private network) system are being exploited to break into company networks to install the REvil (Sodinokibi) ransomware.

His evidence comprises anecdotal reports from victims mentioning unpatched Pulse Secure VPN systems being used as a way in by REvil. Something he has since seen for himself:

I’ve now seen an incident where they can prove Pulse Secure was used to gain access to the network.


US warns of Iranian cyber threat

By Danny Bradbury

The US Department of Homeland Security has issued a total of three warnings in the last few days encouraging people to be on the alert for physical and cyber-attacks from Iran. The announcements follow the US killing of Qasem Soleimani, the commander of Iran’s IRGC-Quds Force. The warnings directly address IT professionals with advice on how to secure their networks against Iranian attack.

On Monday, the Cybersecurity and Infrastructure Security Agency (CISA), which is an agency within the DHS, released the latest publication in its CISA Insights series, which provides background information on cybersecurity threats to the US.

Without explicitly mentioning Soleimani’s killing, it referred to “recent Iran-US tensions” creating a heightened risk of retaliatory acts against the US and its global interests. Organizations should be on the lookout for potential threats, especially if they represent strategic targets such as finance, energy, or telecommunications, it said. Iranian attackers could launch attacks targeting intellectual property or mount disinformation campaigns, it said, while also raising the spectre of physical attacks using improvised explosive devices or unmanned drones.

The publication added:

Review your organization from an outside perspective and ask the tough questions – are you attractive to Iran and its proxies because of your business model, who your customers and competitors are, or what you stand for?

The same day, CISA also issued an alert specifically targeting IT pros that warned of a potential Iranian cyber response to the military strike. It recommended five actions that IT professionals could take to protect themselves, focusing on a mixture of vulnerability mitigation and incident preparation.


Facebook bans deepfakes, but not cheapfakes or shallowfakes

By Lisa Vaas

Facebook has banned deepfakes.

No, strike that – make it, Facebook has banned some doctored videos, but only the ones made with fancy-schmancy technologies, such as artificial intelligence (AI), in a way that an average person wouldn’t easily spot.

What the policy doesn’t appear to cover: videos made with simple video-editing software, or what disinformation researchers call “cheapfakes” or “shallowfakes.”

The new policy

Facebook laid out its new policy in a blog post on Monday. Monika Bickert, the company’s vice president for global policy management, said that while these videos are still rare, they present “a significant challenge for our industry and society as their use increases.”

She said that going forward, Facebook is going to remove “misleading manipulated media” that’s been “edited or synthesized” beyond minor clarity/quality tweaks, in ways that an average person can’t detect and which would depict subjects as convincingly saying words that they actually didn’t utter.

Another criterion for removal is that part about fancy-schmancy editing techniques, when a video…

…is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

Deepfake non-consensual porn made up 96% of the total number of deepfake videos online as of the first half of 2019, according to Deeptrace, a company that uses deep learning and computer vision for detecting and monitoring deepfakes.


‘Maze’ ransomware threatens data exposure unless $6m ransom paid

By John E Dunn

What’s the most effective way to fight back against a large ransomware attack?

Normally, the answer would be technical or organizational, but a new type of ransomware called Maze seems to have stirred up a very different response in one of its recent victims – bring in the lawyers and try to sue the gang behind it.

The victim this time was US cable and wire manufacturer Southwire, which last week filed a civil suit against Maze’s mysterious makers in Georgia Federal court.

This mentions a big attack involving Maze, which we know from the company’s Twitter account happened on 11 December 2019.

Given that the attackers are unknown – referred to only as “John Doe” in legal filings – this might sound like a fool’s errand. But it seems it is the way the ‘Maze Crew’ attempted to extort Southwire that led to such unorthodox tactics.

According to Bleeping Computer, the sum demanded from Southwire was 850 Bitcoins, equivalent to around $6 million.

That sounds like a lot to supply some encryption keys to unlock scrambled data, but the demand was backed by a second and more sinister threat – if the sum wasn’t paid the data would be released publicly.

That ransomware attackers can steal as well as encrypt data isn’t a new phenomenon but the possibility that sensitive data might be revealed to the world is potentially more damaging than any short-term disruption caused by the malware.

And yet, despite the seriousness of this threat, it seems that Southwire declined to pay.


US military branches ban TikTok following Pentagon’s warning

By Lisa Vaas

Last month, the Pentagon told the US military to steer clear of what it sees as a national-security landmine: the singing/dancing/jokey TikTok platform.

Tell your Department of Defense employees not to download it, and wipe it if it’s already on their devices, the Defense Information Systems Agency recommended.

Some military outfits have snapped to attention and heeded the call. A number of military branches in the US have now banned the popular Chinese-owned social media app on government-issued smartphones, and some have even discouraged members of the armed forces from using it on their personal devices.

From an email sent on Friday by Marine Corps spokesman Capt. Christopher Harrison to the New York Times:

Marine Corps Forces Cyberspace Command has blocked TikTok from government-issued mobile devices. This decision is consistent with our efforts to proactively address existing and emerging threats as we secure and defend our network. This block only applies to government-issued mobile devices.

In December 2019, the Air Force amn/nco/snco Facebook page posted an email from Naval Network Warfare Command that called TikTok a “cybersecurity threat” and told users to uninstall it from their iPhones and iPads:

TikTok is a cybersecurity threat. Users are instructed NOT to install the application on their mobile device. DO NOT install Tiktok on your Government furnished mobile device. If you have this application on your device, remove it immediately.

The response of one Facebook user: “It’s amazing they actually have to be told not to do this.”

An Air Force spokeswoman noted that it’s not just TikTok that has the military worried:

The threats posed by social media are not unique to TikTok (though they may certainly be greater on that platform), and DoD personnel must be cautious when making any public or social media post.

All DoD personnel take annual cyber-awareness training that covers the threats that social media can pose, as well as annual operations security training that covers the broader issue of safeguarding information.

