Security & Privacy

Facebook vs. Google vs. You: How your privacy became the ball everyone’s fighting over

It’s common knowledge that Facebook (or “Meta”, if you prefer) likes to track you around the internet, seeing where you go and what you look at, and then using that information to target advertising.

It’s actually a pretty useful system if you’re an advertiser; you can target very specific groups of people and advertise to them for comparatively small amounts of money. Arguably it has powered the growth of many small businesses, and definitely some large ones, especially in the past two years, during which lockdowns have transformed the way we do business and sell online.

I’ve used Facebook advertising many times for clients (in my past life in digital marketing) and for myself (in my current double life as an author). There’s no question that, done properly, it works.

The key is the accuracy and granularity of the targeting. Did you know it’s possible to target men in the UK, between 25 and 45, who like Doctor Who and reading? I do, because that’s almost exactly how I target my advertising when I have a new book to promote. If it’s a book aimed at younger readers, I sometimes switch gender and target the mums, mixing together some Doctor Who with Harry Potter and suitably parental interests. It’s staggeringly easy, but it only works because we’ve been willingly (even if unwittingly) giving Facebook this kind of high-value data for years.

An important component in Facebook advertising is remarketing: targeting people who have already visited your website with more adverts for it when they return to Facebook. All it takes is a small chunk of code added to your website and you can instantly start targeting your audience. You’ve also just turned your website into a listening post in Facebook’s vast, global intelligence-gathering network.
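To give a flavour of how small that chunk of code really is, here’s a heavily simplified, illustrative version of that kind of tracking pixel snippet. The real one is generated for you in Facebook’s Ads Manager and includes a loader script; the pixel ID below is a placeholder:

```html
<!-- Illustrative only: a simplified remarketing pixel snippet.
     The real code comes from Ads Manager and includes a loader script. -->
<script>
  fbq('init', '0000000000');   /* placeholder pixel ID */
  fbq('track', 'PageView');    /* reports this page view back to Facebook */
</script>
```

Once that fires on every page, Facebook can tie each visit to a logged-in user and build the remarketing audience from there.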

This was all shady enough when it was happening on our computers, but Facebook took it to another level when they adopted a “mobile first” strategy back in 2012. Some pundits thought that Facebook was late to this party but, even if they were, they certainly understood the party a lot better than others when they arrived at it. Location data added a whole new and unprecedented facet to Facebook’s advertising might; combined with its knowledge of who our family and friends are, Facebook could now calculate what we might be interested in before we even knew it ourselves.

How does Facebook know things I didn’t tell Facebook?

Facebook isn’t listening to your conversations. It’s been tested and proven that they aren’t. The truth is actually a whole lot scarier…

Here’s a rough sketch of how it works…

Facebook knows who you are and knows what you’ve been looking at on the internet lately. Let’s say it’s a new car. Facebook knows who you are spending time with because they know where you are, where other people are, and which of those people are your friends. If you’re considering a big purchase, like a car, chances are good that you are discussing this purchase with your friends. So, maybe Facebook would be on to a good thing if it showed adverts for that car you like the look of to your friends, right?

Facebook might even know if you make that purchase, as they love to gobble up data about our purchases from our credit card providers and they know, of course, if you’ve made a purchase through Facebook Marketplace, a Facebook Ad, or Instagram.

(And, because of the way this offline data is acquired, they know things about you even if you’ve never had a Facebook account…)

According to a recent article by Vox, Facebook and other big-data companies are even working on algorithms that can predict the end of your relationship. (And, if that doesn’t creep you out, read The Facebook Effect by David Kirkpatrick, which includes the nightmare-inducing claim that Mark Zuckerberg created an algorithm in college to predict which of his friends would “hook up”. It was 33% accurate. That’s the guy calling the shots at Meta.)

Enter Apple and a Game Changing Privacy Wall

Facebook’s insatiable lust for data and the powerful tools it builds and puts into the hands of just about anyone with a credit card have had serious negative impacts on society; elections have been influenced, the course of referendums changed, and today dangerous disinformation still runs rampant across the platform. The people behind these things have all taken advantage of Facebook’s unprecedented influence machine.

Finally, regulators and legislators are taking note. More importantly than that, consumers are taking note as well, as people are starting to question just what happens to their data and how social media platforms work behind the scenes.

Apple has been the first of the big-tech companies to identify the inherent opportunity here. In the potentially unique position of not relying on huge amounts of user data to turn a profit, Apple was able to make a strong play out of protecting its users’ privacy by blocking, on the iPhone, the kind of tracking that Facebook relies on to monitor its users. For once, Apple’s vice-like hold on the software running on its phones proved a huge boon for consumers, as it wiped out Facebook’s tracking capability in a single software update. This one simple move knocked an estimated $10 billion off Facebook’s revenue whilst also positioning Apple as the consumer’s champion on privacy and security.

Your move, Google

Apple sells 1 in 3 smartphones worldwide, so Facebook doesn’t need to worry… right? After all, there are still all those lovely Android users out there and, thanks to the fragmentation of the Android mobile phone market, there are plenty of phones in use today that aren’t running the latest version of Android and won’t be getting an update any time soon.

Well, maybe not. Google recently announced wide-ranging privacy changes of their own, including bringing an end to third-party tracking and data sharing in apps. It’s a super-fine line for Google to tread as they, like Facebook, rely on advertising as a major source of revenue and use copious amounts of user data to improve targeting and advertising effectiveness. In their blog post, Google (Alphabet) went to some pains to stress that they were not taking the “blunt approach” of others (e.g. Apple) and were committed to helping advertisers transition to new, privacy-focussed technologies.

Google themselves are having real problems weaning their systems off intrusive user tracking. The most recent attempt, Federated Learning of Cohorts (or FLoC), was roundly panned and quickly killed off. Its replacement, “Topics”, isn’t really much better – it still takes your browsing history and infers your interests from it, and whilst the topic groups are allegedly broad, that doesn’t answer the question of why Google should be looking at your browsing history at all. It’s a question being asked in the US Congress, in the EU, by a growing movement of activists, experts, and consumers, and by the UK government, all of whom have expressed an aim to see an end to behaviourally targeted advertising.

Enter… Nick Clegg?

Yes, sliding in from stage left to take a leading role as one of Facebook’s “Big Three” (alongside Zuckerberg and Sheryl Sandberg) is Sir Nick Clegg, former Deputy Prime Minister of the UK and a man so famous for his lack of integrity that there is a Wikipedia page explaining his role in triggering student riots in the UK after he went back on a manifesto pledge not to increase university tuition fees.

Deployed by Facebook to lobby governments worldwide, he’s already in the spotlight as Facebook seeks to challenge Google over its right to implement privacy controls in the Android operating system. Incredibly, I find myself realizing that Facebook has a point – Google blocking other forms of tracking but allowing their own (Topics – which others can then plug into via an API and presumably for a fee) does have all the hallmarks of a classic piece of anti-competitive market monopolization. As odious as Facebook’s tracking of users is, surely it needs to be an “all or nothing” situation; how can it be right for one company to track you, but not another?

It’s going to be a fascinating time in the digital advertising space and I can’t decide if I’m very, very glad not to be in it right now (and having to explain to clients why their campaigns aren’t working how they used to) or whether I’m going to feel like I’m missing out not being in the thick of this fight.

Ultimately, the late 2020s may be the time when personalized, behavioral advertising finally dies. Who knows, we might just have to get creative with our advertising campaigns again to compensate….

The Facebook Watch that Watches You Back?

Facebook Technologies, AKA Meta, AKA Those Guys Who Use Your Data to Sell Advertising, AKA Those Guys Who Just Lost a Huge Load of Money, have a new patent and things may be about to get interesting in the wearables space yet again.

Yanko Design first brought this to my attention with some fun renders of what a “Meta Watch” might look like, basing their designs on a 2021 patent filed by Facebook.

It’s a fun concept: a tiny touchscreen with a pair of built-in cameras to make it easy for you to make and take video calls from your watch. It’s a perfect solution to the problem of not having a portable screen that fits in your… oh, wait, you already have a phone though, right? Makes you wonder what the point of this device might be then…

Meta: The Company that Never Learns

Companies like Facebook/Meta hold huge numbers of patents. They patent far more things than they ever put into production. There’s no guarantee that this device, or anything like it, will ever see the light of day. Could Facebook enter the wearables market though? Almost definitely… because they never learn their lesson.

This prospective Meta watch has all the hallmarks of the typical Meta product; small concealed cameras, a GPS tracking option (alright, that’s not in the patent but it’s a pretty standard smart-watch component), and biometrics. That’s right, just when you thought Facebook couldn’t want any more data, now it wants to know if you’re having a hard time climbing the stairs…

Literally the least cool a pair of Ray-Bans have ever looked

Facebook’s last stab at a wearable was its Ray-Ban Stories smart glasses, a product so intrusive that the Russian FSB declared it a “spy gadget”. You’d think they’d know – they are actual spies, after all. Is there any reason to expect Facebook would be less invasive when there’s a chance to grab even more data?

Meta: The Company that Did Learn One Lesson

If there’s one lesson that Facebook may have learnt, it’s that building your product on someone else’s platform is a bad idea (unless you are a business that wants to base its marketing strategy on Facebook, in which case they will assure you that it is totally fine). Facebook took a huge financial hit recently when it revealed falling user numbers and a hit to advertising revenue linked to Apple’s decision to block the tracking technologies on which Facebook relies. Where once Facebook’s “mobile first” strategy was hailed as genius, it’s now turned into a nightmare for the company. They don’t control the mobile environment, Apple and Google do, and that makes Facebook vulnerable.

Solution? Get into the hardware market and regain control of the flow of data from users to Facebook. With a simple Bluetooth link to a mobile phone, Facebook’s wearable could transmit huge amounts of data (including data it previously never had access to) direct to Facebook without interference from the mobile phone software.

Of course, a nice big juicy screen strapped to your wrist pumping out all that data is also a great way to deliver Facebook’s adverts to users. Forget sliding an advert in amongst the news feed; how about an advert for a nice cold beverage just at the moment you’re feeling hot and standing near the right kind of shop? A leak from Facebook back in 2017 revealed they had the capacity to target ads based on people’s moods. Is it a huge leap to biometric targeting?

The Metaverse is not something that the average user is ready for and the hardware is still cumbersome and expensive. A wearable is comparatively inexpensive to make, easy to market and distribute, and could quickly become a “must-have” accessory for Instagram influencers with a voracious need to constantly share new content. In exchange, Facebook will want what it always wants – access to your data that it can use to target advertising at you.

My recommendation? Don’t put Facebook’s new LoJack on your arm.

Advertisers hit Facebook where it hurts – the balance sheet

Back in March I wrote up my six reasons why Facebook might be dying.

Top of the list were the scandals surrounding fake news and voter manipulation.

Now, it seems, the big brands are waking up to the increasing toxicity of Facebook’s brand and are pulling advertising from the platform in support of the Stop Hate for Profit campaign.

With a list of big brands pledging to cease spending on Facebook in the month of July, a tumbling share price has so far wiped over $70 billion off the value of the company.

Facebook’s response has been characterised by critics as anything from lukewarm to non-existent; the company has pledged only to tag “hateful” posts in future and promised further announcements this week.

Is all of this enough to seriously change the direction of travel for Facebook, a company that has undoubtedly profited from divisive and hateful content published on its platform and that allegedly prizes its political reach and influence above everything? And is this change enough to satisfy its critics? Is Facebook truly accountable for the content on the platform, or is it merely a vessel for our own worst natures?

Why Stop Hate for Profit is a great idea that won’t work

Stop Hate for Profit is a great idea. It also has a very simple and workable plan, published on its website, for how Facebook and other platforms like it could become more accountable and combat the spread of hateful content online.

So, why don’t I think it will work?

Well, the problem comes down to numbers. Getting a number of (very) large brands to log off Facebook for a month makes headlines and hits the share price, but only for so long as investors wonder what will replace the lost revenue. And the lost revenue, so far, doesn’t amount to anything close to Facebook’s total income from advertising.

We’re in the midst of a global pandemic that has forced businesses to move online. It’s never been easier to sell online advertising to businesses hungry for revenue, and the vast majority of Facebook’s revenue doesn’t come from big brands – it comes from a very large number of small companies spending small amounts of money.

It’s entirely possible that Facebook can weather this storm and come out of July with a healthy balance sheet. Smaller advertisers may spend more. The big brands still on the platform might spend more as well. Don’t forget, Facebook shares hit a high this time last year when the company was hit with a $5 billion fine.

How does Facebook get fined $5 billion and increase its share price?

It’s simple – Facebook was fined $5 billion for privacy violations. The fine was less than 25% of their annual profit and gave investors the clearest indication yet of what Facebook could get away with and what the sanctions would be. In short – people thought the fine would be bigger, so Facebook got off cheap.

While Facebook, and Mark Zuckerberg, continue to deliver profit for shareholders, the source of that profit will not matter.

That’s why Stop Hate for Profit won’t work.

Correction: Why Stop Hate for Profit won’t work unless you support it

Facebook advertising works because the system can put adverts in front of exactly the people advertisers want to reach. Those people click. The advertisers get what they want and Facebook gets paid.

So, as I’ve said before – when the product is free, you are the product.

Fewer advertisers is one thing. Fewer advertisers generating fewer clicks is what would really move the needle in terms of Facebook’s cash flow and its long term prospects. The one thing any social media platform fears is a drop in engagement.

So, if you’re serious about supporting Stop Hate for Profit there’s a very simple way of doing so.

Log off.

Log out of Facebook, abandon your feed, and take 30 days off. 30 days to change the direction of a social network that has, in the opinions of many, begun to direct our society and democracy.

What should you do now?

You can sign up for more information on Stop Hate for Profit here. Once you’ve done that, set a reminder in your diary for July 1st. That’s going to be the start of your Facebook detox. Join me.

Hurdles to making digital contact tracing work

Digital Contact Tracing or “Exposure Alerting” has been touted as one of the key weapons in fighting coronavirus and potentially bringing an end to lockdown in countries currently keeping their populations at home and indoors to flatten the curve of infections and coronavirus deaths.

Expert opinion is that 50% of the population needs to be signed up to a digital contact tracing platform for it to be effective. This is a huge number – the contact tracing system in Singapore, which has been touted as a potential model for the system in the West, had just a 12% uptake.

By contrast, contact tracing in South Korea has been a huge factor in the rapid reduction of cases whilst also being decried by privacy and civil rights campaigners as being incredibly invasive. The system, which uses cellphone location data, CCTV, and credit card records, provides the government with an incredible amount of data according to a report in Nature:

In some districts, public information includes which rooms of a building the person was in, when they visited a toilet, and whether or not they wore a mask

Mark Zastrow, reporting in Nature

So, in light of the concerns that many privacy campaigners have, how achievable is the required 50% uptake and what are the hurdles we need to overcome to get there?

How might Contact Tracing work?

This graphic from the FT sums it up nicely…

Now we’ve got the theory down… what are the issues?

Phones are designed not to do this

The current design strategy is to use Bluetooth technology to exchange pseudo-random anonymous keys between phones when they are in close proximity with each other.

This requires an app to have access to the Bluetooth “stack” when the phone is idle. As this is a security risk, it’s something that phone operating systems currently don’t allow apps to do.

This is why a collaborative approach between Apple and Google is required – an interoperable standard needs to be agreed that will circumvent this security feature on both the host phone and the phone(s) it wants to collect IDs from. The latest iteration of this is decentralised and never collects geographic data, but this is just an API – a means for the application developers to build applications that can use the contact tracing. It’s not a contact tracing app in and of itself.
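To make the idea concrete, here’s a toy Python sketch of the key-exchange step. This is my own simplification, not the real Exposure Notification cryptography, which derives rotating identifiers from daily keys rather than using raw random tokens:

```python
import secrets

class Phone:
    """Toy model of a handset in a decentralised contact tracing scheme."""

    def __init__(self):
        self.my_ids = []        # pseudo-random IDs this phone has broadcast
        self.heard_ids = set()  # IDs received from nearby phones

    def new_broadcast_id(self):
        # In the real scheme, IDs are derived from a daily key and rotate
        # every few minutes; here we just generate fresh random bytes.
        rolling_id = secrets.token_hex(16)
        self.my_ids.append(rolling_id)
        return rolling_id

    def hear(self, rolling_id):
        # Called when Bluetooth picks up another phone's broadcast.
        self.heard_ids.add(rolling_id)

# Two phones pass each other in the street and swap anonymous IDs:
alice, bob = Phone(), Phone()
bob.hear(alice.new_broadcast_id())
alice.hear(bob.new_broadcast_id())
```

Neither phone learns anything about the other beyond “I was near someone broadcasting this random string”, which is what makes the scheme anonymous until someone self-reports.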

And this isn’t a quick fix.

Some reports indicate that this isn’t going to be an easy change for either Apple or Google to make. There are concerns about other apps being able to access this functionality, concerns about how the change will be rolled out (especially on Android devices, where operating system changes are not controlled by Google but by individual phone vendors), and questions about how it will affect things like phone battery life.

There may be as many as 2 billion phones that lack the necessary chipset or operating system version to be able to use the API, predominantly in the possession of older users who are most at risk from Covid-19 infection.

The underlying technology limitation is around the fact that there are still some phones in use that won’t have the necessary Bluetooth or latest operating system … If you are in a disadvantaged group and have an old device or a [basic] feature phone, you will miss out on the benefits that this app could potentially offer.

Ben Wood, analyst at CCS Insight, reporting in the Financial Times

Interestingly Huawei, the Chinese phone maker banned from using Google services by the US government, have confirmed that most of their handsets will receive the update but the position for other manufacturers is less clear.

But, let’s assume Apple and Google can get this to work…

Who holds this data?

There are competing models in terms of how a contact tracing system will store data.

The option favoured by privacy advocates stores data on the phone and the phone only. If a user reports that they have been diagnosed with COVID-19, their IDs are then sent to a central server, which either broadcasts them out to all subscriber devices or lets those devices periodically download an “infection list”; either way, the comparison against the keys stored on each phone happens on the phone itself.
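In miniature, that decentralised flow looks something like this. It’s a toy Python sketch of my own, not the actual API; the point is simply that the matching happens on the handset, never on the server:

```python
# Toy model of decentralised exposure matching. All names here are
# illustrative; this is not the real Apple/Google API.

def publish_infected_ids(infection_list, reported_ids):
    """Server side: append a self-reporter's broadcast IDs to the public
    infection list. The server never learns who heard those IDs."""
    infection_list.extend(reported_ids)

def check_exposure(heard_ids, infection_list):
    """Phone side: intersect the downloaded infection list with the IDs
    this phone collected over Bluetooth. Runs entirely locally."""
    return sorted(set(heard_ids) & set(infection_list))

infection_list = []
# Alice is diagnosed and self-reports the IDs her phone broadcast...
publish_infected_ids(infection_list, ["id-alice-1", "id-alice-2"])
# ...and Bob's phone, which heard "id-alice-2" last week, finds a match.
print(check_exposure({"id-alice-2", "id-carol-9"}, infection_list))  # → ['id-alice-2']
```

Note that the server only ever holds the infection list, which is exactly why the question of who runs that server matters so much.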

As Apple and Google are only providing an API for contact tracing, this means that somebody, somewhere, has to provide and maintain the server that will store the IDs of anyone who self-reports. Trust is a massive factor here; although the IDs are anonymous, there are still huge questions about what a “bad actor” could do with this sort of data.

Apple has a strong position on privacy advocacy but has historically been the victim of serious security breaches. Google has also had serious data breaches and has a less than stellar record when it comes to respecting users’ privacy.

In my opinion, Apple and Google may be ducking their responsibility by creating an API and leaving the utilisation of the API to others, a move that creates significant risk for the end-user, as privacy expert and campaigner Jaap-Henk Hoepman explains:

However any decentralised scheme can be turned into a centralised scheme by forcing the phone to report to the authorities that it was at some point in time close to the phone of an infected person. In other words, certain governments or companies — using the decentralised framework developed by Apple and Google — can create an app that (without users being able to prevent this) report the fact that they have been close to a person of interest in the last few weeks. The platform itself may be decentralised. But the app developed on top of it breaks this protective shield and collects the contact information centrally regardless. This effectively turns our smartphones into a global mass surveillance tool

Contact Tracing Malware is Inevitable

Even if we do trust Apple, Google, and our government with this data, it seems inevitable that malware will be created that can use this API to track users without their knowledge or consent. At the moment, there is a hard wall around the Bluetooth stack – we’re about to punch a hole in it to make a door. Even a locked door is not going to be as secure as that wall used to be, and that should be a concern for any smartphone user.

Problems don’t have to start on the user’s cellphone either. Android can be installed on a wide range of devices – including Bluetooth beacons that could be installed in any location. CCTV and other surveillance technology could take a massive, and dangerous, leap forward with Android-based contact tracing applications able to track the movements of individuals.

It won’t matter that this data is anonymous. Given enough data points, anyone’s identity could be deduced, even from a changing anonymous ID.

Trolls will target Contact Tracing Apps

But, let’s assume that Google and Apple find a way to provide this new API in a very secure fashion. There is no malware, only highly secure and rigorously approved applications. That would be OK, right?

Sadly, I think the final area in which contact tracing will fall down is when people start to realise that it’s wide open to abuse. Human beings, as a whole, have a history of behaving very badly once they know that they are anonymous. The more secure the contact tracing API is, the more anonymous and untraceable we become – and that leaves the system vulnerable.

Earlier this year, a German artist caused a traffic jam by faking slow traffic using 99 cellphones connected to Google Maps. Google Maps saw the slow moving phones connected to its system, assumed there was a traffic bottleneck, and people who received this updated traffic information started to avoid the road in question (which happened to run right outside Google’s offices).

Given that the whole point of a contact tracing app is to make people aware that they have, potentially, been exposed to someone with Covid-19 so that they can go into self-isolation, the potential for using a contact tracing app to cause disruption and mayhem is obvious.

Anyone who’s worked on abuse will instantly realise that a voluntary app operated by anonymous actors is wide open to trolling. The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.

Security expert Professor Ross Anderson, of the University of Cambridge

Got a problem with a business? Hang around outside the offices for a few days, make sure you go to the same Greggs as some of the people who work there, then self-report with Covid-19. Got a dispute with your local council? Take a wander around the council offices and then self-report with Covid-19.

Any centralised registration of reports, or requirement for an official “Covid-19 Number”, defeats the idea of keeping the system anonymous, but without this the risk of malicious and erroneous self reporting is high.

If it’s so broken, why do it?

With whole countries on lockdown, economies under immense pressure, and people struggling to comply with social distancing measures long term, the desire for a way out of the current situation is high.

Would people trade privacy and civil liberties for the more tangible and urgent freedom of being able to move outside their own home, return to work, see friends and relatives? It’s certainly tempting.

The question each and every one of us will face is – what is my privacy worth? You may think that the answer is simple. You may not care if the government tracks you, or if Google and Apple know where you are (chances are they already do), especially when you weigh this against the ability to leave your house, do your job, and so on.

If you think that way, I will leave you with this final thought – one of the wisest things ever said to me, by one of the wisest people I’ve ever met:

Never make the mistake of assuming the system will always be benevolent.

Wes Packer

Zoom is eating up market share for video conferencing. It’s also eating up all your private information.

A report from respected internet marketing expert Doc Searls highlighted a worrying amount of data being sucked from users by video conferencing app Zoom and fed to online advertisers such as Facebook and Google.

This personal info includes, and is not limited to, names, addresses and any other identifying data, job titles and employers, Facebook profiles, and device specifications. Crucially, it also includes “the content contained in cloud recordings, and instant messages, files, whiteboards … shared while using the service.”

I had personal experience of this yesterday. After a nearly two-hour video chat (which you can see here) I noticed a disturbing overlap between what we’d talked about during the chat and the ads I was subsequently being pushed on Facebook and Google Discover. Whilst I had done some online research before the call, it was a typically (for me) rambling discussion, and we hit on topics that turned up in my feed for what I would consider no good reason.

Zoom doesn’t have the best history with security either. The BBC has an article covering some of Zoom’s security issues.

ProTip: Pick your software with care and remember that if the product is free then you and your data are the product.

Don’t put your Face in a Book

Biometrics is the next battleground of privacy. We can control what we share with social networks and tech companies, but there is far more data in a photograph or a piece of video than we really think about. Facial recognition is a dangerous technology and governments need to catch up to ensure decent legislation is in place.

Anyone can join your private WhatsApp group using a simple Google search

Google has indexed the invite links for lots of “private” WhatsApp groups. So, if you have a private group for your company, your team, or to talk about what Sheila from Accounts did at the Christmas party – it’s not as secure as you might think.

And if you’re thinking “Meh, this really doesn’t bother me”, it’s worth having a think about just how pervasive WhatsApp is and how often it is used for internal messaging and back-channelling, even within large organisations. Motherboard tested out the process, first documented by journalist Jordan Wildon, and were able to join a group intended for NGOs accredited by the UN. Once in, they had access not only to all of the participants but also to all their phone numbers.

How did Google index private WhatsApp groups?

There is a simple root cause to this problem – people sharing invite links on the “public” internet. Forum posts, social media, extranets… even with a basic search of my own I was able to find a huge list of “private” groups.

ProTip: Lots of people are talking about this problem, so the first few pages of Google results will now be blogs talking about this problem - go down past page 4 and you'll find the gold!

This isn’t a case of Google going inside WhatsApp’s systems; it’s simply a case of human beings sharing the link without thinking about the ramifications of doing so.

What’s Google Doing About It?

Google are basically doing… nothing. And, to be fair, they probably don’t need to. The links are public because WhatsApp made them public, and it’s WhatsApp’s problem to deal with.

Search engines like Google & others list pages from the open web. That’s what’s happening here. It’s no different than any case where a site allows URLs to be publicly listed.

Danny Sullivan, Google Search Liaison, on Twitter.

What Should Google Do About It?

If I were King of Google, I’d probably want to be a bit more proactive about this. Google know what they’ve indexed. They know which of those links has been surfaced in a search engine result and which have been clicked. So, in theory, they should be one database query away from letting users affected by this security problem know that they’ve been affected and that someone, somewhere has been given the opportunity to invite themselves to a private group.

Why won’t Google do this? If I were cynical, I would say because WhatsApp is a competitor in the messaging space and Google have been trying to crack messaging for years. If I wanted to be kinder, I would say that doing this, even once, is the thin end of a very big wedge. How many other times could Google be called upon to mine their systems for data to help resolve someone else’s security problem? What level of responsibility would they then have?

Maybe there’s a reason why I’m not running Google after all…

So, what should you do about Google indexing your private WhatsApp group?

If you have a “private” WhatsApp group, it’s potentially already in Google’s index. You can change the invite code through the app, but there will still be one. *Try not to give that one away!*

The important thing to understand here is that this isn’t a security issue for WhatsApp to fix – there are plenty of legitimate reasons you might want to share the link for your WhatsApp group. In my short foray into Googling for WhatsApp groups, I found some enormous lists of groups that actively encourage people to join.

“links that users wish to share privately with people they know and trust should not be posted on a publicly accessible website.”

Facebook / WhatsApp spokesperson Alison Bonny

What’s happening here is that people are sharing the link and Google are finding it. What’s needed, therefore, is a better understanding of what the public internet is and a better understanding of how to protect “in house” systems from being indexed by Google.

Your post-“Google indexed my private WhatsApp/whatever” checklist

Google provide clear guidelines on how to stop pages being indexed by Google. The question is whether your web-based intranet, extranet, CRM application, etc. actually implements these features.
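For reference, the documented mechanisms are a `noindex` robots meta tag on the page itself, or the equivalent `X-Robots-Tag: noindex` HTTP response header. Note that robots.txt only blocks *crawling*, not *indexing* – a disallowed URL can still surface in results if something else links to it. A minimal sketch:

```html
<!-- In the <head> of any page that should never appear in search results -->
<meta name="robots" content="noindex, nofollow">
```

The `X-Robots-Tag` header does the same job for non-HTML resources such as PDFs, where there’s no `<head>` to put a meta tag in.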

Clearly, in many of the cases affected by this problem, the person who posted the link either

  1. Posted it to a website that should have been secured but wasn’t (a misconfiguration), or
  2. Posted it to a website that they believed was private, but was in fact public.

As a web developer, I’ve seen this mistake often – a new website is set up on a test server and the developer forgets to configure it to prevent Google from indexing it. It’s an easy mistake to make – and arguably preferable to forgetting to tell Google it *can* index your live website (not that I’ve made that mistake…) – but that test site then appears in Google’s index and starts drawing in clicks. It’s simple to fix with a redirect, but the issue often goes undetected.

Much to the annoyance of every SEO-loving bone in my body, you can guarantee that if you don’t want Google to index it – it will find it. Trust me – Google will seek that content out like an Exocet missile with a bloodhound strapped to the nose-cone.

Working with clients, I’ve often come across scenarios where data that was expected to be private in an in-house system has been accidentally exposed to the web. These errors rarely show up on security scans, because a scanner has to be pointed at the thing it is testing. Google, however, goes *everywhere*, sniffing out information.

If you’re storing data online or “in the cloud” (which is the same as storing it online, except it probably costs a bit more and a salesperson was involved) it’s worth being proactive and checking that you can’t “deep link” to content in your application. Try taking a URL from your CRM system, intranet, or a “private” part of your website and, in an incognito browser window, see if you can still get to that content.

If you can… Google can.
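That manual check can also be scripted. The sketch below (Python; the function name and example URL are my own, not from any particular tool) fetches a URL with no cookies or session – the way Googlebot would – and reports whether an anonymous visitor gets content back:

```python
# Hypothetical "deep link" exposure check: request a URL as an anonymous
# visitor (no cookies, no session) and see whether content comes back.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def is_publicly_reachable(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL serves content to an anonymous visitor."""
    req = Request(url, headers={"User-Agent": "privacy-check/1.0"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            # A 2xx response with a body means anyone (including Google)
            # can read it.
            return 200 <= resp.status < 300
    except HTTPError:
        # 401/403/404 etc. -- the server refused to hand over content.
        return False
    except URLError:
        # DNS failure or refused connection: not reachable from outside.
        return False

if __name__ == "__main__":
    # Placeholder URL -- substitute a link copied from your own CRM,
    # intranet, or "private" page.
    url = "https://example.com/crm/customer/1234"
    print("PUBLIC" if is_publicly_reachable(url) else "protected")
```

If that prints `PUBLIC` for a URL you thought was private, you have the same problem as the WhatsApp groups above.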

Having traffic problems? I feel bad for you, Google. I got 99 phones but a car ain’t one

Have you ever dodged a traffic jam using Google Maps? I have… or at least I thought I had until today. Turns out, all you need to create a fake traffic jam is 99 mobile phones and a hand cart.

In Germany, artist Simon Weckert successfully tricked Google Maps into reporting a traffic jam that didn’t exist, diverting swathes of traffic away from a busy road (one that just happened to run right outside Google’s offices), simply by walking slowly along it pushing a hand-cart containing 99 phones, all sharing their location data with Google Maps.

So far, so “ha ha, Google are dumb”. Even Google seemed to think it was all jolly good fun and laughed it off:

“Whether via car or cart or camel, we love seeing creative uses of Google Maps as it helps us make maps work better over time.”

— A Google spokesperson

But there’s a darker side to this that warrants exploration. I’ve lauded Google’s data sharing in Google Maps before as a great use of crowd-sourced data actually benefiting the people who are sharing it. But if the system is so easy to fool, if Google is this dumb, is it a system we should be trusting as implicitly as we do?

What happens to traffic in a city if more than one person decides to copy this trick and make more than one road pseudo-impassable? Blocking traffic and disrupting travel has become a significant and important tactic of groups like Extinction Rebellion. What if, instead of needing a large group of people to block a road, you just needed a rucksack full of mobile phones? What if you were able to use those jams to redirect traffic to smaller roads creating… an actual traffic jam?

In ancient times, maps were incredibly important and cartographers changed the face of the world by letting us know what the face of the Earth actually looked like. Today, if you’ve got a smartphone then you have a street level atlas of most corners of the Earth in your pocket. Or at least, you think you do.

The question you should now be asking is… just how accurate is it and who is controlling where your directions lead?

Original article from The Guardian

Up to 500,000 WordPress websites hit by InfiniteWP security vulnerability

The Register is reporting that a security problem in the popular InfiniteWP plugin may have exposed over 300,000 websites to being hacked. The issue, which has already been patched by the plugin’s maker, allows a nefarious hacker to gain admin access to a WordPress site using the plugin without an admin password.

Whilst the article reports the number of affected sites at around 300,000, the plugin maker’s website lists over 500,000 installations. Given that the vulnerability may have already been exploited on websites that are now patched, the footprint of this issue could easily exceed that number.

Not the first time, not the last time for WordPress

The enormous popularity of WordPress makes it, and its most popular plugins, a prime target for website hackers. A single vulnerability in either the core code or a popular plugin can be exploited on a huge number of websites, including WordPress sites used for eCommerce and other applications. It’s a problem that all large system vendors face, and a serious one for plugin developers and for individuals and businesses running WordPress.

What do you do right now if you’re running InfiniteWP?

If you’re running InfiniteWP, you need to patch your site immediately. You can do this through your WordPress control panel as the issue has already been resolved by the plugin makers.

Long term, issues like this are a reminder that website owners need to be increasingly proactive in maintaining site security.

What can you do long term? Five Tips for safer WordPress sites (that also work for most other CMS)

Patch Early, Patch Often

If you are running WordPress, regular updates to your core code and plugins are essential. Make sure you know how to do this or have a developer you are working with who you trust to do this in a timely fashion.
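If you look after sites from the command line, this can be scripted with WP-CLI, the official WordPress command-line tool. A sketch, assuming WP-CLI is installed and you’re in the site’s root directory:

```shell
# See what's pending, then update core, plugins, and themes
wp core check-update
wp plugin update --all
wp theme update --all
wp core update
wp core update-db   # apply any database changes the core update needs
```

Wrapped in a cron job (with backups first – see below), this turns “patch early, patch often” from a chore into a habit.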

Beware of Branching

Sometimes a developer will take an existing plugin and alter its code – creating their own “branch”. To avoid their changes being overwritten by automatic updates to the original plugin, they install their version of the plugin under a different name. Sensible, except for the fact that their version no longer gets any bug fixes or security updates from the original.

Branching should not be taken lightly, but often developers do it as a short-cut without thinking about the long term implications.

Invest in Backups

Accept that problems happen and make sure you have a backup solution in place so that you can roll your website back to a simpler, happier time should the worst happen. A backup solution gives you a fallback position, not only for the nightmare scenario of having your website hacked and vandalised, but also if your web host goes belly up without warning and you need to move to a new hosting environment.

Invest in Security and Penetration Testing

There are a wide range of services available online that will scan your website for vulnerabilities either as a one-off service or as an ongoing arrangement. If you are serious about the security of your website, this is no longer an “optional extra” – it’s something you should be doing.

Ask yourself – Does my website have to run on WordPress?

OK, so this one is more drastic but it needs to be said – there are a lot of websites running on WordPress for no good reason. Just like the barber who cuts everyone’s hair the same regardless of what they ask for, there are lots of developers who know and trust WordPress and use it for everything – even when it isn’t the best tool for the job.

The change doesn’t have to be as drastic as replacing your entire website – tools such as Gatsby allow developers to build faster, more robust websites that can still draw content from a WordPress backend but without exposing the system to the wider internet. Referred to as “headless WordPress” sites, they continue to use the WordPress CMS “back office” but deliver the front end of the website through a customised layer.
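As a sketch of the idea (not Gatsby itself, just the raw API such tools build on): WordPress has shipped a REST API in core since version 4.7, exposing published content as JSON under `/wp-json/wp/v2/`. A build step can pull content from there and render the public site statically, so wp-admin never faces the open internet. `example.com` below is a placeholder for your own backend:

```python
# Minimal "headless" content pull via the WordPress REST API.
import json
from urllib.request import urlopen

def build_endpoint(site: str, resource: str) -> str:
    """WordPress exposes content at /wp-json/wp/v2/<resource>."""
    return f"{site.rstrip('/')}/wp-json/wp/v2/{resource}"

def fetch_posts(site: str) -> list[dict]:
    """Fetch published posts as JSON, ready for a static build step."""
    with urlopen(build_endpoint(site, "posts")) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # A real build step would call fetch_posts("https://example.com")
    # and feed the result to its templates; here we just show the URL.
    print(build_endpoint("https://example.com", "posts"))
```

The front end only ever needs read access to that endpoint, which is a much smaller attack surface than a full public WordPress install.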

The importance of support arrangements

One last tip… If you’re working with a web developer or digital agency of any type, make sure you understand what the long term support arrangements are.

I’ve worked with a number of clients recently who have paid for a website project that has no ongoing support arrangement built into the contract (in more than one instance, there was no contract at all). Websites are projects without end – like your house or your car they require maintenance, servicing, and the occasional lick of paint and redecoration to stay at their best.