chrislynch.link

Technology

The theft of code by Microsoft Co-Pilot will be decided in court.

Once upon a time, three witches sat around a cauldron. Their names were Embrace, Extend, and Extinguish. They were the three merry witches of Microsoft and under the silvery light of a crescent moon, they were casting a spell to rid the world of their mortal foe – open-source software.

I’m an old hand of the software world and I remember when Microsoft was The Enemy, with a capital “E”. They had openly described open source as “a cancer” in 2001. Extinguish, of all the Microsoft witches, was the one they wanted to call upon. Linux must die. Firefox must burn.

But then, something changed. By 2016, Steve Ballmer had learned to love Linux. Extinguish slipped back into the shadows behind her stygian sisters and Embrace stepped forward, warm arms extended. The old hands had seen this trick before, of course. We knew what to expect. “Look out,” we said, “Embrace comes before Extend and Extend comes before Extinguish”. But nobody listens to old hands. There’s a reason there are so few of us.

Microsoft embraced Linux. It contributed code. It began to play the game. It put Linux at the heart of its cloud hosting platform, Azure. Embrace worked her magic so well this time around that we barely noticed that sometimes the hands around us belonged to Extend. And those hands tend to close around your throat.

When Microsoft bought Github, we old hands raised warning flags yet again. Microsoft, the old Enemy, could not be trusted. Github was too valuable, too important, to allow it to fall into their hands. But, nevertheless, it happened. We had been embraced. We would be extended. We feared being extinguished.

But, instead of extinguishing us, Microsoft had a new plan. A new witch, creeping from the dirty swamp waters of corporate strategy. This witch, all grasping hands and leash-leather skin had a name. She was called Enslave.

Thus enters Co-Pilot, Microsoft’s AI tool that can generate code for you. You ask, it writes. It’s a code genie, a wish-granting machine that creates software out of thin air. Except, it doesn’t generate anything. It doesn’t write anything. Co-pilot copies code from existing projects… existing open-source projects.

Back in the early days of the web, there was a technique called “content spinning” – it involved taking (stealing) someone else’s content, changing some words, and then passing it off as your own. All you needed was a digital thesaurus and a completely absent moral compass.

Ostensibly, Co-Pilot is no different. It’s a smarter spinner, but it’s just a spinner. It’s gobbled up as much code as it can handle, billions of lines from projects stored on Github, parsing comments, chewing up variable names, and turning the hard work of a vast number of open-source developers into copy-and-paste patterns that Microsoft can package and resell. For profit. For itself.

They say that the system generates code, but there are now numerous examples on the web of code that has been taken verbatim from open-source projects. The licenses under which open-source code is released normally require that any reused code be attributed to the original author. Co-pilot doesn’t do this. Arguably, feeding billions of lines of open-source code into a Frankensteinian sausage machine is also not what open-source software authors had in mind when they pushed their code to Github.

I have code on Github. Had I been asked for my consent for it to be used in this way I would have said no. (And not just because I’m an old hand and Microsoft is The Enemy). Read my code? Fine. Reuse my code? Absolutely. Learn from it, copy and paste it? Crack on. But pick it up and sell it? No.

I’m an old school free software advocate. I like “Free as in Freedom” as well as “Free as in Beer”. Of course, open source code gets half-inched and put inside proprietary software. It’s inevitable. But Microsoft are committing this larceny on a grand scale, and they are breaking open source licenses (at least in spirit) to do it.

Developers with projects larger than mine aren’t taking things lying down though. Microsoft are being taken to court and the future of Co-pilot, and AI code generation, will be put to the test.

https://www.bleepingcomputer.com/news/security/microsoft-sued-for-open-source-piracy-through-github-copilot/

Co-pilot doesn’t do quite as good a job as it needs to in hiding its sources. Like a cub reporter cracking under interrogation from their hardened editor, it gives up the goods too easily, spitting back lumps of code that are straight copy and paste from open-source projects.

Not cool, Microsoft.

This court case will take a long time to settle. Courts aren’t good at dealing with technology issues, for one thing, and this will be a landmark case in terms of determining what open code and open data can be used for when training AI.

Like the look of all those fun, text to image AI machines? Just remember they took a lot of images created by real, living artists to “train” the AI how to make a picture. Want to use one of those AI copy writing machines? Spare a thought for every writer who has had their work, probably without their knowledge, ground up and fed into the machine.

I’m not a Luddite. I’m not here to smash the looms. I’m fascinated by AI and believe it has huge potential. I’m just not keen on people taking stuff that doesn’t belong to them. Like I said, I’m an old hand.

I do hope there will be a few old hands on the jury of this case as well…

Mastodon Tips for Writers

I’ve had my eye on Mastodon for a while. When I was last revamping my website, there was a time when I was considering running a Mastodon server as a place to host my own “microblog”. I love microblogs but I’m always wary of putting all my content on someone else’s platform – so over-investing in Twitter, Tumblr, or Instagram has never sat well with me. With my fellow writers running around like people looking for lifeboats off the Titanic as Twitter threatens to sink (either offline or into some kind of terrifying hellscape), I’m feeling a little vindicated that I’ve always tried to keep my audience on platforms where I have a degree of control.

Jumping from Twitter to Mastodon is pretty daunting though. It’s not just a different user interface but a different way of networking with different “rights and wrongs”. It’s not nearly as scary as people (mostly people with very large Twitter followings) want you to think though.

Here are my top tips/quick answers to the problems I see people complaining about the most.

Does it matter which Mastodon server I choose?

Signing up to Mastodon means picking a server to call “home”. The server that you pick only has a small impact on who else you can follow and network with. The whole point of Mastodon (and the wider “Fediverse”) is that it works by federating content between multiple, disparate servers. I’ve got two accounts, one on mastodon.social and one on writing.exchange. I can see posts from either server, and from almost any other Mastodon server, on both.

There are some servers that will block content from others, normally because the content would breach the moderation guidelines set by the server owners. This isn’t worth stressing about – if you were happy with Twitter picking and choosing what you can see, there’s nothing to fear from Mastodon.

And, with Mastodon, you can choose to change servers at any time. I started out at toot.wales but moved because the server was oversubscribed and performance was suffering.

How do I find my Twitter friends on Mastodon?

There are a number of tools you can use to find your Twitter contacts on Mastodon, including easy and automated options like FediFinder. Finding your old tribe is only half the fun though. Hashtags are hugely powerful on Mastodon as the timeline is purely chronological, not “optimised” in the way that the Twitter feed is. You can discover fantastic new people to follow on Mastodon, and be a lot easier to discover yourself.

It’s a good idea to copy and paste your Mastodon ID into your Twitter profile somewhere to help people who are using tools like FediFinder find your new account.

I don’t like the Mastodon app, is there something better?

Just like with Twitter, there are plenty of different apps you can use to access Mastodon and the Fediverse. The official Mastodon app is not the best choice; it seems to exist mostly to fill the gap that would otherwise exist in App Stores if it wasn’t there. If you’re an Android user, I recommend you give Tusky a try.

Is Mastodon good for writers?

Personally, I’ve found the #writingcommunity hashtag on Mastodon to be far more community-oriented than Twitter which, on a bad day, can be nothing more than a heavy downpour of authors shilling you their books with scattered showers of virtue signalling and empty praise from people you’ve never met (mostly in the hope of a follow back either from you or from someone else in the thread).

The community on Mastodon is different. There are more people asking and answering questions, more people sharing useful information, and a more genuinely supportive vibe about the place. Perhaps it’s because there’s still a certain “rebel culture” to life on Mastodon, a sense of being an outsider. We’re a smaller group, but maybe better for it.

Where can I learn more?

I highly recommend the website fedi.tips for learning more about the Fediverse. You can follow them as well. (On Mastodon, obviously)

Do I need to leave Twitter before I join Mastodon?

No. Mastodon is just another social network. You can be on Mastodon and Twitter. You can be on Mastodon and Instagram. You can be on all three, plus Facebook and LinkedIn, and Hive (whatever that is). And, as anyone who is on more than one social network will tell you – different networks are good for different things.

One thing I don’t think anyone needs to do is announce that they are leaving, or staying, on Twitter. Honestly, unless you are a major celebrity, the world doesn’t care. (And even then, it only cares a little). Save yourself the embarrassment of coming awkwardly back into the party after storming off. (Or worse, storming off but having nobody notice).

Social Media: It’s about control…

I saved this image when I first saw it back in October. Since then, Elon Musk has bought Twitter, and in the days and weeks that have followed the acquisition, there has been a crash in advertising revenue, mass layoffs, and an exodus of users. Amongst those losing their jobs are content moderators who have been at the forefront of the battle to keep misinformation and hate speech off Twitter.

Advertisers are leaving Twitter in droves and it’s up to users if they want to follow suit. For some, the platform is already becoming too toxic. Others, particularly “influencers” will be wondering if their personal brand is damaged or enhanced by being present on Twitter.

The problem for many users is – where do you go if Twitter is no longer for you?

As the image above outlines, all of the social networks are owned by someone… Except one.

Enter Mastodon

A literal “elephant in the room”, Mastodon is a decentralized social network similar to Twitter that, by design, has no one owner. Anyone with enough technical skill can set up a Mastodon server. That server becomes part of the federated network of Mastodon servers, meaning users on any server can follow and see content from users on any other server.

The experience is not quite the same as Twitter – discovery is more difficult and although Mastodon registrations have increased dramatically since Elon Musk purchased Twitter, there are still only a fraction of the users on Mastodon that there are on Twitter (and they are spread across a large number of servers, so you need to hunt them down).

If you are looking to connect and converse with other people with similar interests, Mastodon has a lot to offer. If you’re looking to reach a large number of people, you’re going to find the community small compared to Twitter (however, this does amplify your voice and mean you’re more likely to get interactions, so it’s not necessarily a bad thing).

The other thing to keep in mind with Mastodon is that it is moderated, just like Twitter is (or was), but moderation is server-specific. When you first register with Mastodon you need to pick a server and you will inherit the moderation rules of that server. If you truly want “free” social media, your only option is to run your own server, set your own rules, and then rely on federation (through the “Fediverse” of Mastodon servers) to spread your message across the network of Mastodon servers.

Setting up a Mastodon server is fairly complex, but something I think I may try… just so that if I bump into Elon and he mentions that he owns a social media platform I can say “oh yeah, I’ve got one of those as well”

What’s wrong with the Metaverse?

Nokia’s chief strategy and technology officer Nishant Batra is confident that the era of the smartphone is coming to an end and the future lies in the metaverse. He stated:

Our belief is that this device [the smartphone] will be overtaken by a metaverse experience in the second half of the decade

Nishant Batra

Nokia was famously caught flat-footed by Apple with the release of the iPhone. It was a major error; although the creation of the iPhone was shrouded in secrecy, it was widely rumored that Apple was working on a phone and many of the features that were finally unveiled had been long expected. Today, Nokia are little more than a bit-part player in the consumer mobile phone world. It might seem odd to be critical of a company that brought in $21 billion of sales last year but, considering that Samsung sold $72 billion of mobile phones alone in 2021, it feels legitimate to me. Nokia lost the mobile phone market – is backing the metaverse their stake in the future, or just another misstep?

The race for the Metaverse

Like settlers in the Old West, companies are racing to stake their claim to the metaverse. Meta are currently even running TV advertising campaigns to let people know that they are working on it. (And, just like an ad for a dodgy Android game, the graphics in the ad look nothing like the graphics currently available).

The race is clearly on to produce a metaverse headset that consumers can wear without feeling like an extra in Ready Player One. Google tried and failed with Google Glass; a technically excellent product but one that nobody was ready for. Snapchat’s camera specs are either a novelty or a privacy nightmare, depending on your outlook. Gaming headsets like the Oculus deliver a good gaming experience but are nowhere near portable. To their credit, Meta seem to understand that people may not be ready for the Metaverse. A big part of what they need to do to make it successful is convince us, as consumers, to buy their vision of the future.

Nobody needs Facebook attached to their actual face

Whether we need the Metaverse is, of course, a pretty vexing question but it’s less important than the question that is driving Meta, Apple, Nokia, and others to invest in this technology. It seems inevitable that the Metaverse will arrive. Having adopted and adapted to video conferencing as a norm and with increasing numbers of AR and VR experiences entering the mainstream, we are perhaps just one “killer app” away. The big question that everyone is really trying to answer is therefore… who will own the Metaverse?

Meta really, really want to own it. They are probably the worst candidate, given the appalling job that they did, and continue to do, with managing their “walled garden” on Facebook. At least, today, you can choose to disengage from Facebook (as many, many people are). If you’re living and working inside an environment managed by Facebook, it’s going to be much, much harder.

Meta may have already latched on to their “killer app” as well – a virtual workspace where you can interact with colleagues in a way that (at least in Meta’s mind) is somehow better than Zoom, Teams, or Google Meet. Frankly, as someone who lived through two years of having to tell people that they were “on mute” or didn’t have their camera on, I dread to think what the early days of Metaversal (is that a word?) meetings will be like. Probably ghastly (and thus successful in that they will have reproduced the inherently ghastly experience that is most meetings anyway).

If history is any guide, legislators will be late to recognize the importance of laying down laws and controlling monopolies in the Metaverse. It will be too abstract, too confusing, and too difficult to control. (Updated October 27th 2022): Looks like Ofcom, the UK communications regulator, agrees with me that Meta cannot be left to self-regulate the metaverse, though.

Somehow, however, we have to wrest control of the Metaverse away from the likes of Facebook and ensure that there are open and accessible standards governing its use, sensible laws to control the type of content and experiences that can be created, and the freedom for a new generation of creators and developers to build and innovate.

A quick Google of “Metaverse Open Standards” reveals the existence of https://metaverse-standards.org and its members list at https://metaverse-standards.org/members/. Quite frankly, I’m not sure I’d trust some of the companies on this list to follow a standard, let alone define one. Is the future of the Metaverse destined to be proprietary?

The impact will be real – it’s down to us to choose what that impact is.

WebAssembly 2.0 seeks to fix the gaps in current WebAssembly

I previously wrote about the excitement around WebAssembly and, having seen many technologies like this come and go in the past, my analysis of why WebAssembly might be a bad idea. In summary, it’s not that it is technologically bad – it’s simply that I don’t see a space for it in a world where client-side computing power has been getting thinner for years and microprocessor prices are skyrocketing.

(And yes, I know that WebAssembly can also run on the edge of the server-side. So can a lot of things, so whilst it’s interesting I don’t see much of a point of it there either except where performance is paramount and computational cost is high).

The other thing I found odd about WebAssembly was the need to wrap it up in Javascript (which most of the time could do the job WebAssembly was being deployed for). So, I was interested to see the W3C updating the WebAssembly 2.0 specification with, amongst other things, an improved interface to Javascript.

In my original article, I predicted that Javascript isn’t going anywhere. Looks like the W3C agreed.

The new Web Assembly 2.0 specification can be found here:

WebAssembly remains a technology to watch, but I suspect mainstream adoption and market penetration will be difficult for it to achieve against stalwart web development platforms PHP and nodeJS.

Why I Think WebAssembly is a Bad Idea

If you haven’t discovered WebAssembly yet, it’s a pretty exciting technology that allows you to code in a range of languages (most notably C++, Rust, and AssemblyScript, a Typescript-like language) then compile your code into a WASM file that can run in all popular web browsers. Because the code is compiled it runs quickly and consistently across all devices. WebAssembly can’t render directly and has to rely on being called by and emitting Javascript to manipulate the DOM, but it still opens up some exciting options and introduces the potential to do things in the browser that we just can’t do right now, like 3D games, heavy image manipulation, cryptography, etc.

Sounds pretty cool right? Well, that’s not all – a lot of big names are using WebAssembly including Disney+ and Shopify. There’s a growing list of projects using WebAssembly at https://madewithwebassembly.com/

So, if I’m telling you about a new way of writing code for the web that’s quicker, more portable, and lets you use the language of your choice… why do I think it’s a bad idea?

Well, we’ve been here before. More than once. We’ve been here with Java Applets. We’ve been here with ASP.net. And Flash. And Silverlight. In fact, it’s one of the negative patterns that we just don’t seem to be able to break in computer programming; we take a working client-server model and we try to make the client fatter and thicker and get it to do more heavy lifting. Then, after a while, we realize the error of our ways and we break things apart again.

We did it with the PC. We did it with the smartphone. We did it with wearables.

We just can’t stop trying to load more processing and activity into the client-side, a pattern that always ends up being reversed.

Yes, WebAssembly is good at “heavy lifting”

Here’s a great example of a project made better with WebAssembly. It’s a clever piece of software that helps researchers and scientists preview the quality of DNA sequence data. It needs to do some heavy lifting and moving this functionality out of Javascript and into WebAssembly brought some significant speed benefits to the application.

This doesn’t mean that WebAssembly is always faster than Javascript. If you’re not doing something computationally complex, the chances are good that WebAssembly will actually be slower than Javascript, like in these examples.

Now, I’m not criticizing the developers of this application, they’ve made something pretty impressive, but I’ve got to ask the obvious question… why do you need to number crunch DNA samples in a browser in Javascript in the first place? Personally, I wouldn’t do this on the client-side unless I absolutely had to… and I don’t have to.

But where you do your heavy lifting matters…

Let’s say I have an application that needs to do some audio manipulation. I want to do it in the browser as part of a larger software as a service application, so this is doable in Javascript but computationally intensive. I could get the data pushed up to my server and process it there, but that’s going to increase the load on my server so I’m going to need a bigger server or to set up my server to expand itself elastically as I need it. WebAssembly makes some sense here, because I can make the client do the work.
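To make that concrete, here’s a minimal sketch (in plain Javascript, no WebAssembly) of the kind of client-side audio heavy lifting I mean: applying a gain change to raw samples. In a real app the samples would come from the Web Audio API; the function name and setup here are purely illustrative.

```javascript
// Hypothetical example: apply a gain (volume) change to raw audio samples.
// In a real browser app the Float32Array would come from the Web Audio API;
// here we process the array directly so the "heavy lifting" is visible.
function applyGain(samples, gain) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    // Clamp to the valid [-1, 1] range for floating-point PCM audio
    out[i] = Math.max(-1, Math.min(1, samples[i] * gain));
  }
  return out;
}

// Doubling the volume of two samples; 0.75 * 2 clamps to 1.0
const louder = applyGain(new Float32Array([0.75, -0.25]), 2);
```

Run over minutes of 44.1kHz audio, a loop like this is exactly the computationally intensive work that tempts people towards WebAssembly – or, as I’d argue, towards the server.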

However, this makes the fundamental assumption that every client device has enough computing power to do what we want it to do. In a world where we have a shortage of microprocessors that is likely to last for another few years, increasing costs, and lower-powered devices (like Chromebooks) becoming more popular, why are we trying to push processing back on to clients again?

We did it when terminals became PCs, then moved everything to the web. We did it with smartphone apps, which we are now replacing with PWAs and single-page Javascript applications. We’re doing it with wearables and will inevitably flip back the other way there too.

Web Assembly for Serverless Computing

There’s a lot of buzz about “serverless computing” at the moment, but for me that doesn’t mean pushing processing back to the client-side. Serverless computing means running code in the cloud without having to invest in a whole server to do it. It’s about time-slicing a much bigger machine into much smaller increments than we currently do by allowing us to grab some resource on an “as we need it” basis from a large pool rather than ring-fence a chunk of resource and then waste it when we’re not using it.

Even an elastic server will waste resources; running at its smallest possible size while idle is still running. Serverless computing allows us to “scale to zero” – zero running costs when we are not running.

Cloudflare is one example of WebAssembly being used right – as a remotely executed, serverless piece of code. In this example, Cloudflare show some image resizing being done “at the edge” using a Cloudflare Worker. It’s quite neat, although in real terms it doesn’t offer anything you couldn’t do with a simple server-side PHP script, Javascript file, or almost any other server-side tool.
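The shape of that edge pattern looks something like the sketch below: intercept the request at the edge and answer it there, never touching an origin server. The actual image resizing is stubbed out with a text description (in Cloudflare’s real demo the pixel work is done in WASM), and all the function names here are illustrative, not Cloudflare’s API.

```javascript
// Simplified sketch of the edge-worker pattern: inspect the request URL
// and decide what to serve, all without an origin server.
function describeResize(urlString) {
  const url = new URL(urlString);
  const width = parseInt(url.searchParams.get("width") ?? "", 10);
  return Number.isNaN(width)
    ? "serving original image"
    : `resizing image to ${width}px at the edge`;
}

function handleRequest(request) {
  // A real Worker would return resized image bytes, not a text description
  return new Response(describeResize(request.url), {
    headers: { "content-type": "text/plain" },
  });
}

// In an actual Worker you would register the handler with:
// addEventListener("fetch", (e) => e.respondWith(handleRequest(e.request)));
```

The interesting part is where the code runs, not what it does – the same logic as a PHP script, just executed in Cloudflare’s network instead of on your server.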

The AutoCAD web app is a far better example (although not as clearly explained as the Cloudflare proof of concept). Here, all the heavy lifting is done remotely so your system requirements are minimal. WebAssembly is a good choice for this as CAD is computationally expensive and has, historically, needed some beefy client-side hardware to run.

As a way of delivering more complex code and allowing it to run fast on a low-powered device, by leveraging external resources, this model makes sense and is scalable for the future. Moving complex processing client-side isn’t a good idea, unless you absolutely have to.

Mostly, it comes down to bandwidth

One of the things that people say that they want to do with WebAssembly is make games that run quickly in the browser. Sounds cool… except we already have this, and it’s called Stadia. Stadia is Google’s streaming games platform, an environment where the game (the heavy lifting) runs remotely and streams the video and audio to your computer. Stadia makes it possible to run even incredibly complex and resource-hungry games, such as Cyberpunk 2077, on relatively low-powered hardware (like my four-year-old Chromebook). I don’t care how fast WebAssembly might think it is: when the game recommends a $400 graphics card as its minimum spec, that game is not running in a browser on a Chromebook. Ever.

The only obstacle to improving the performance of server-side computing of this sort is bandwidth and this is one area of computing where we still see exponential improvement in capacity coupled with falling prices. (And yes, there is such a thing as 6G…)

Should I use Web Assembly?

If I haven’t made it clear enough yet, there’s one good use case for WebAssembly right now and that’s if you’ve got heavy lifting to do. The other oft-touted reason to use WebAssembly is that it lets you write in a wider variety of languages than you currently can for the web. Sadly, this is a lie… and not a very useful one.

Webpages are written in HTML, CSS, and Javascript. With WebAssembly they still are – you’re just calling an external WASM using your Javascript and then handling the output it gives you, manipulating the DOM, etc. So, if you don’t know HTML, CSS, and Javascript and you’re thinking WebAssembly will save you by letting you carry on programming in FORTRAN or whatever… forget it. WebAssembly doesn’t do that.
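To show what that Javascript wrapping actually looks like, here’s a tiny self-contained sketch. Rather than compiling from Rust or C++, the WASM module is hand-encoded as raw bytes (a valid binary that exports a single add function), but the Javascript side – instantiate the module, then call its export – is the same dance every WebAssembly app does.

```javascript
// A minimal, valid WebAssembly binary, written out by hand.
// It exports one function: add(a, b) -> a + b on 32-bit integers.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: name it "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// The Javascript "wrapper" the article describes: compile, instantiate, call.
const wasmModule = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(wasmModule);

const sum = instance.exports.add(2, 3); // a plain JS call into WASM code
```

Everything the page sees – the call, the returned number, any DOM update you make with it – still goes through Javascript. The WASM module is just a fast black box hanging off the side.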

Also, what is our obsession with trying to program for the web in old languages? There’s a reason that your toolbox at home contains more than one tool. It’s because different tools are good for different things. The same is true of programming languages. If you’re so desperate to write code for the web in FORTRAN though? Have at it – just build an HTTP server in FORTRAN and then write server-side code to your heart’s content. That’s the point of writing server-side code; it doesn’t matter what language you write your code in, I’m only ever going to see the output (which will be in HTML, CSS, and Javascript).

What’s the future for WebAssembly?

I’m old enough to remember when all websites were going to be replaced by Java applets. It didn’t happen. The general web as we know it is not going to be rewritten in WebAssembly any time soon, nor are technologies like Javascript (which is hugely popular right now) going to go down without a fight.

Everyone said node.js would kill PHP. As of January 2022, PHP still runs 77.8% of the web.

Wide-scale WebAssembly adoption is a long way off and it would not surprise me one bit if Javascript found a way to surpass the speed benefits of WASM before the new technology can really find its feet.

Bubble is Back!

I wrote Bubble back in 2015 and it has been freely available online in various forms ever since. A little while ago, when I began stripping down and amalgamating my web presence, Bubble was a short-term casualty. To be frank, I had no idea people were still using it or that people were still looking for a solution to formatting their comic book scripts.

It took a stranger filling in the contact form on my website to remind me that Bubble needed a home.

So, I’m happy to say that Bubble is making a comeback and will also be getting some upgrades very soon. The original version was written in PHP but, as I’m becoming more and more interested in Javascript and node.js development, my first job will be to convert Bubble’s code to Javascript and create a version that you can download and run in your browser without an internet connection of any sort.

Welcome back Bubble, and welcome back Bubble users!

How Google’s Pirate Update can kill off sites stealing your content

Google has revealed that it now has a specific penalty that it applies to sites that receive repeated upheld DMCA (Digital Millennium Copyright Act) takedown requests. In other words, it has a special button it can press to kill off sites hosting pirated content.

According to Google, sites hit with the “pirate penalty” can see their traffic from Google drop by an average of 89%. Quite why the reduction isn’t 100% is a different question, but it’s good to see Google taking real action against websites hosting pirated and copied content. The Pirate update actually dates back to 2014, but this is the first time in a while Google has reported on its efficacy.

In a new document released Feb 2022, Google said “we have developed a ‘demotion signal’ for Google Search that causes sites for which we have received a large number of valid removal notices to appear much lower in search results.”

It’s a little vague what constitutes a “large number” but this new penalty is an important reminder not to take it lying down if your copyrighted content is being stolen and reused/shared on the web without your permission. (Especially as Google has a habit of ranking copied content above the original.)

You can find more information on how to file a DMCA Takedown request with Google here.

Facebook vs. Google vs. You: How your privacy became the ball everyone’s fighting over

It’s common knowledge that Facebook (or “Meta” if you prefer) likes to track you around the internet, seeing where you go and what you look at, and then using that information to help target advertising.

It’s actually a pretty useful system if you’re an advertiser; you can target very specific groups of people and advertise to them for comparatively small amounts of money. Arguably it has powered the growth of many small businesses, and definitely some large ones, especially in the past two years where lockdown has transformed the way we do business and sell online.

I’ve used Facebook advertising many times for clients (in my past life in digital marketing) and for myself (in my current double life as an author). There’s no question that, done properly, it works.

The key is the accuracy and granularity of the targeting. Did you know it’s possible to target men in the UK, between 25 and 45, who like Doctor Who and reading? I do, because that’s almost exactly how I target my advertising when I have a new book to promote. If it’s a book aimed at younger readers, I sometimes switch gender and target the mums, mixing together some Doctor Who with Harry Potter and suitably parental type interests. It’s staggeringly easy, but it only works because we’ve been willingly (even if unwittingly) giving Facebook this kind of high-value data for years.

An important component in Facebook advertising is re-marketing; targeting people who have been to your website with more advertising for your website when they return to Facebook. All it takes is the addition of a small chunk of code to your website and you are instantly able to start targeting your audience. You’ve also just turned your website into a listening post in Facebook’s vast, global intelligence gathering network.

This was all shady enough when it was happening on our computers, but Facebook took it to another level when they adopted a “mobile first” strategy back in 2012. Some pundits thought that Facebook was late to this party but, even if they were, they certainly understood the party a lot better than others when they arrived at it. Location data added a whole new and unprecedented facet to Facebook’s advertising might; combined with the information of who our family and friends were, Facebook could now calculate what we might be interested in before we even knew it ourselves.

How does Facebook know things I didn’t tell Facebook?

Facebook isn’t listening to your conversations. Researchers have tested this claim repeatedly and found no evidence for it. The truth is actually a whole lot scarier…

Here’s a rough sketch of how it works…

Facebook knows who you are and knows what you’ve been looking at on the internet lately. Let’s say it’s a new car. Facebook knows who you are spending time with because they know where you are, where other people are, and which of those people are your friends. If you’re considering a big purchase, like a car, chances are good that you are discussing this purchase with your friends. So, maybe Facebook would be on to a good thing if it showed adverts for that car you like the look of to your friends, right?
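That reasoning can be sketched in a few lines of code. To be clear, this is my own speculative toy model of the logic described above, not Facebook’s actual system; the function name and data shapes are invented.

```javascript
// Toy sketch of interest inference via the social graph.
// Entirely invented for illustration -- not Facebook's real algorithm.
// A shopper researching a product has probably talked about it with
// the friends they've recently spent time with, so target them too.
function friendsToTarget(shopper, friendsOf, recentlyNear) {
  const friends = friendsOf[shopper] || [];
  const nearby = new Set(recentlyNear[shopper] || []);
  // Target only the friends the shopper has physically been around lately.
  return friends.filter((f) => nearby.has(f));
}

// Example: Alice has been reading car reviews; show the car advert to
// the friends she has recently been co-located with.
// friendsToTarget("alice",
//   { alice: ["bob", "carol", "dave"] },
//   { alice: ["bob", "dave", "someStranger"] });
```

No microphone required: location co-occurrence plus the friend graph is enough to make it *feel* like your phone was listening.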

Facebook might even know if you make that purchase, as they love to gobble up data about our purchases from our credit card providers and they know, of course, if you’ve made a purchase through Facebook Marketplace, a Facebook Ad, or Instagram.

(And, because of the way the offline data is acquired, they even know things about you if you’ve never even had a Facebook account…)

According to a recent article by Vox, Facebook and other big-data-based companies are even working on algorithms that can predict the end of your relationship. (And, if that doesn’t creep you out, read The Facebook Effect by David Kirkpatrick, which includes the nightmare-inducing claim that Mark Zuckerberg created an algorithm in college to predict which of his friends would “hook up”. It was 33% accurate. That’s the guy calling the shots at Meta and Facebook.)

Enter Apple and a Game Changing Privacy Wall

Facebook’s insatiable lust for data and the powerful tools it builds and puts into the hands of just about anyone with a credit card have had serious negative impacts on society; elections have been influenced, the course of referendums changed, and today dangerous disinformation still runs rampant across the platform. The people behind these things have all taken advantage of Facebook’s unprecedented influence machine.

Finally, regulators and legislators are taking note. More importantly than that, consumers are taking note as well, as people are starting to question just what happens to their data and how social media platforms work behind the scenes.

Apple has been the first of the big-tech companies to spot the opportunity here. In the possibly unique position of not being reliant on huge amounts of data about its users to turn a profit, Apple was able to make a strong play out of protecting its users’ privacy by requiring iPhone apps to ask permission before tracking users, the very kind of tracking Facebook relies on. For once, Apple’s vice-like hold on the software running on its phones proved to be a huge boon for consumers, as it gutted Facebook’s tracking capability in a single software update. This one simple move knocked an estimated $10 billion off Facebook’s revenue whilst also positioning Apple as the consumer’s champion on privacy and security.

Your move, Google

Apple sells 1 in 3 smartphones worldwide, so Facebook doesn’t need to worry… right? After all, there are still all those lovely Android users out there and, thanks to the fragmentation of the Android mobile phone market, there are plenty of phones in use today that aren’t running the latest version of Android and won’t be getting an update any time soon.

Well, maybe not. Google recently announced wide-ranging privacy changes of their own, including bringing an end to third-party tracking and data sharing in apps. It’s a super-fine line for Google to tread: like Facebook, they rely on advertising as a major source of revenue and use copious amounts of user data to improve targeting and advertising effectiveness. In their blog post, Google (Alphabet) went to some pains to stress that they were not taking the “blunt approach” of others (e.g. Apple) and were committed to helping advertisers transition to new, privacy-focussed technologies.

Google themselves are having real problems weaning their systems off intrusive user tracking. The most recent attempt, Federated Learning of Cohorts (FLoC), was roundly panned and quickly killed off. Its replacement, “Topics”, isn’t really much better: it still takes your browsing history and infers your interests from it, and whilst the topic groups are allegedly broad, that doesn’t answer the question of why Google should be looking at your browsing history at all. It’s a question being asked in the US Congress, in the EU, by the UK government, and by a growing movement of activists, experts, and consumers, all of whom have expressed an aim to see an end to behaviourally targeted advertising.
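To see why the Topics approach still depends on reading your history, here’s a toy sketch. This is my own simplification; the category names, mapping, and logic are invented for illustration and are not Chrome’s actual implementation.

```javascript
// Toy sketch of a Topics-style system -- invented for illustration,
// not Chrome's actual implementation.
// The browser classifies recently visited hostnames into broad topics,
// then exposes only the most frequent ones to ad tech.
function topTopics(visitedHosts, hostToTopic, n) {
  const counts = new Map();
  for (const host of visitedHosts) {
    const topic = hostToTopic[host];
    if (!topic) continue; // unclassified sites contribute nothing
    counts.set(topic, (counts.get(topic) || 0) + 1);
  }
  // Note: even this "privacy-preserving" design still requires the
  // browser to read the full browsing history in the first place.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([topic]) => topic);
}
```

However broad the output categories are, the input is still everything you’ve browsed, which is exactly the objection regulators are raising.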

Enter… Nick Clegg?

Yes, sliding in from stage left to take a leading role as one of Facebook’s “Big Three” (alongside Zuckerberg and Sheryl Sandberg) is Sir Nick Clegg, former Deputy Prime Minister of the UK and a man so famous for his lack of integrity that there is a Wikipedia page explaining his role in triggering student protests in the UK after going back on a manifesto pledge to oppose any rise in tuition fees.

Deployed by Facebook to lobby governments worldwide, he’s already in the spotlight as Facebook seeks to challenge Google over its right to implement privacy controls in the Android operating system. Incredibly, I find myself realizing that Facebook has a point – Google blocking other forms of tracking but allowing their own (Topics – which others can then plug into via an API and presumably for a fee) does have all the hallmarks of a classic piece of anti-competitive market monopolization. As odious as Facebook’s tracking of users is, surely it needs to be an “all or nothing” situation; how can it be right for one company to track you, but not another?

It’s going to be a fascinating time in the digital advertising space and I can’t decide if I’m very, very glad not to be in it right now (and having to explain to clients why their campaigns aren’t working how they used to) or whether I’m going to feel like I’m missing out not being in the thick of this fight.

Ultimately, the late 2020s may be the time when personalised, behavioural advertising finally dies. Who knows, we might just have to get creative with our advertising campaigns again to compensate…

The Facebook Watch that Watches You Back?

Facebook Technologies, AKA Meta, AKA Those Guys Who Use Your Data to Sell Advertising, AKA Those Guys Who Just Lost a Huge Load of Money, have a new patent and things may be about to get interesting in the wearables space yet again.

Yanko Design first brought this to my attention with some fun renders of what a “Meta Watch” might look like, basing their designs on a 2021 patent filed by Facebook.

It’s a fun concept: a tiny touchscreen with a pair of built-in cameras to make it easy for you to make and take video calls from your watch. It’s a perfect solution to the problem of not having a portable screen that fits in your… oh, wait, you already have a phone though, right? Makes you wonder what the point of this device might be then…

Meta: The Company that Never Learns

Companies like Facebook/Meta hold huge numbers of patents. They patent far more things than they ever put into production. There’s no guarantee that this device, or anything like it, will ever see the light of day. Could Facebook enter the wearables market though? Almost definitely… because they never learn their lesson.

This prospective Meta watch has all the hallmarks of the typical Meta product; small concealed cameras, a GPS tracking option (alright, that’s not in the patent but it’s a pretty standard smart-watch component), and biometrics. That’s right, just when you thought Facebook couldn’t want any more data, now it wants to know if you’re having a hard time climbing the stairs…

Literally the least cool a pair of Ray-Bans have ever looked

Facebook’s last stab at a wearable was its Ray-Ban Stories camera glasses, a product so intrusive that the Russian FSB declared it a “spy gadget”. You’d think they’d know – they are actual spies, after all. Is there any reason to expect a Facebook watch would be less invasive when there’s a chance to grab even more data?

Meta: The Company that Did Learn One Lesson

If there’s one lesson that Facebook may have learnt, it’s that building your product on someone else’s platform is a bad idea (unless you are a business that wants to base your marketing strategy on Facebook, in which case they will assure you it is totally fine). Facebook took a huge financial hit recently when it revealed falling user numbers and an impact on advertising revenues linked to Apple’s decision to block the tracking technologies on which Facebook relies. Where once Facebook’s “mobile first” strategy was hailed as genius, it’s now turned into a nightmare for the company. They don’t control the mobile environment, Apple and Google do, and that makes Facebook vulnerable.

Solution? Get into the hardware market and regain control of the flow of data from users to Facebook. With a simple Bluetooth link to a mobile phone, Facebook’s wearable could transmit huge amounts of data (including data it previously never had access to) direct to Facebook without interference from the mobile phone software.

Of course, a nice big juicy screen strapped to your wrist pumping out all that data is also a great way to deliver Facebook’s adverts to users. Forget sliding an advert in amongst the news feed; how about an advert for a nice cold beverage just at the moment you’re feeling hot and standing near the right kind of shop? A leak from Facebook back in 2017 revealed they had the capacity to target ads based on people’s moods. Is it a huge leap to biometric targeting?

The Metaverse is not something the average user is ready for, and the hardware is still cumbersome and expensive. A wearable is comparatively inexpensive to make, easy to market and distribute, and could quickly become a “must-have” accessory for Instagram influencers with a voracious need to constantly share new content. In exchange, Facebook will want what it always wants – access to your data that it can use to target advertising at you.

My recommendation? Don’t put Facebook’s new LoJack on your arm.