Fin Moorhouse


Saving the Web With Very Small Amounts of Money


Something about our relationship with content on the Web has quietly turned sour. The trouble takes more than one form: ads have inflated and spilled out over the pages they occupy, sometimes blending with the author’s own words. Online journalism has consolidated and polarised. Few people agree on exactly what went wrong, and fewer have articulated a way out. Recently I began reading more about the state of ‘micropayments’. Micropayments offer a diagnosis: a great deal of trouble has been caused by the way online content is (or isn’t) funded by its consumers. They also offer a solution: a new model for funding content. I find both convincing, and the whole story of micropayments strikes me as intrinsically interesting anyway.

By ‘content’ I mean articles, blog posts, videos, podcasts, music, and any other digital goodies you choose to consume for entertainment or enrichment rather than necessity. Some of this lives on world-eating media platforms (YouTube, Medium); some with pre-established institutions that survived the transition from paper to touchscreen (The Guardian, the New York Times); some on independent corners of the web (personal blogs and podcasts). Roughly the same unfortunate dynamics operate in all three places.

In researching micropayments, I stumbled on an article titled ‘The Case for Micropayments’. The problem with free online content, it argued, is precisely that it’s free — consumers don’t expect to pay for it and they don’t get to pay for it. Content providers and platforms instead turn to ads as a source of revenue. That raises a problem:

Ultimately, those who pay for something control it. Currently, most websites that don't sell things are funded by advertising. Thus, they will be controlled by advertisers and will become less and less useful to the users. A veritable arms race has already started with more and more annoying advertisements that intrude on the user's attention in an attempt to survive ever-declining click-through rates.

The point about control is trite but accurate. Advertisers are skittish around thoughtfully provocative content that could tarnish their name, but reward cosmetically provocative content that merely grabs attention. When a site makes money by serving ads, and users have every reason to avoid them, you get severely misaligned incentives. We install ad blockers to avoid the eyesore of an article wrestling for attention among irrelevant banners, to seek out and kill auto-play videos, and to skip the drudgery of hovering a finger over the ‘skip ad’ button on a YouTube video. So ad providers and ad blockers invent increasingly ingenious and costly ways of outmanoeuvring one another. Ad blockers are no longer faintly illicit: Google has introduced an ad blocker for its own Chrome browser. The advertising revenue lost to ad blockers is estimated in the tens of billions of dollars (total digital ad spending in the US reached $88 billion in 2017). As such, advertisers have every reason to pour resources into thwarting them. Just over 30% of the top sites by traffic now implement some means of detecting ad blockers. Facebook has spent so much effort disguising its sponsored posts that it took one major ad blocker nearly a full month to spot them. Time and money get sunk into nefarious strategies for hiding, and then re-detecting, every last banner ad or auto-play video. And ads must be targeted all the more aggressively and obtrusively at those users who decide not to use (or don’t know about) ad blockers, to make up for lost ground. That’s the crucial feature of an arms race: every party involved would prefer that it never occurred.

Ads are mildly inconvenient, ugly, and massively inefficient. The average person sees between 6,000 and 10,000 ads per day — with brains adapted for an ancestral environment where any single thing as colourful and information-dense as an advertisement would have dazzled. And digital ads slow down the overall browsing experience, because we have to download them in order to see them and then spot them in order to ignore them. Listening to podcasts, most of us blast the ‘fast forward’ button to skip the desperately enthusiastic read-out of copy about mattresses / supplements / trading apps. But because most of us skip the ads, and the rest (infuriatingly bound to a steering wheel or handlebars) just mentally check out, the podcast needs longer and more invasive ads to pay the bills. It’s again an arms race, and the result looks ugly for everyone involved.

Ads do not pay well. A YouTube channel needs many thousands of views before the 5-second ads that play at the beginning can justify the time required to make high-quality videos, for instance. Independent content creators expecting a small but dedicated audience cannot rely on advertisements as a source of income, and nor can the smaller fry in the enormous and increasingly septic pond of online journalism. As such, smaller players die and big players consolidate. Readers (or consumers) lose out on the variety of perspectives buried in the melee. Consider the ‘silent evidence’ of every insightful, thoughtful article, video, or blog that never got written because its author could see no way to monetise it. The people who continue to offer their opinions for free are almost exclusively those who can afford not to rely on (e.g.) their writing as a source of income.

When ads alone don’t pay bills, larger sites might turn to data mining: amassing hoards of personal information about your browsing habits, search queries, shopping preferences, and much more. Some of this is packaged and resold to the highest bidder. Personal data is thus likened to oil: fuelling massive tech companies just as the sticky crude stuff fuels industry. Data collection and advertisement are comfortable bedfellows, too: ads compound in value when they’re targeted specifically to you — so the more advertisers know about you, the more valuable their ads become.

Yet content is increasingly consumed online. In 2007, 20% of British survey respondents read news online. By 2020, the figure was 70%. Clearly, it is easier to sell a physical newspaper than a digital one: paper media costs money by default; online news does not. The prevalent funding models on the Web have effects which spill over into the real world. From 2005 to the end of 2018, the UK lost 245 local news titles on net. The majority of the country is now served by no regional newspaper at all. This is exceptionally bad news: one King’s College study found that UK towns with no local newspaper showed reduced community engagement and increased distrust of public institutions. Moreover, constituencies lacking a local newspaper were measurably less likely to be mentioned in the national press during the 2015 general election — presumably because national stories are so often local stories which have percolated upwards. The author of the study summarises: “when local papers are depleted or in some cases simply don’t exist, people lose a communal voice. They feel angry, not listened to and more likely to believe malicious rumour”.

Subscriptions offer an alternative source of revenue, but they only work where the consumer is unusually loyal to a specific source (e.g. Sam Harris’s podcast), or when the provider aggregates many different sources. Netflix boomed when it was the de facto movie streaming service. Now streaming services are fragmenting and siloing their own content. Subscriptions favour monopolies, and monopolies favour subscriptions. Platforms consolidate or wither away entirely. By contrast, fragmentation lowers subscription value and seems to incentivise piracy.

Back to ‘The Case for Micropayments’. The article continues:

Acknowledging that Web advertising is not a sufficient business model, several famous websites have announced that they will start charging subscription fees... Unfortunately, subscriptions are not a good idea on the Web… The main problem with subscription fees is that they provide a single choice: between paying nothing (thus getting nothing) and paying a large fee (thus getting everything). Faced with this decision, most users will choose to pay nothing and will go to other sites. It is rare that you will know in advance that you will use a site enough to justify a large fee and the time to register. Thus, most people will only subscribe to very few sites: the Web will be split up into disconnected "docu-islands" and users will be prevented from roaming over the full docuverse.

This seems like an accurate description of the state of play in 2020 – but the phrases ‘docu-islands’ and ‘docuverse’ did seem oddly archaic. I admit I yelped out loud, on checking the article’s date, to see it had been written in 1998.

Around this time, micropayments really were expected to become an indispensable feature of the Web. In writing the specification for HTTP, the authors of the early Web were forced to anticipate the most common errors a user might encounter, and encode them in the standard server error codes we still use today. 404 still means “page not found”; 401 means the client is unauthorised. Standard stuff. When, however, was the last time you encountered an error code 402? The creators of the web standards thought it would represent an equally ubiquitous error: payment required. Error code 402 remains, more than two decades on, ‘reserved for future use’.
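Nothing stops a server from answering with 402 even today. Here is a minimal sketch in Python’s standard library (the port, the price, and the message are my own invention) of a server that greets every request with the long-dormant status code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PaywallHandler(BaseHTTPRequestHandler):
    """Answer every GET with 402, the status code 'reserved for future use'."""

    def do_GET(self):
        self.send_response(402)  # sends "Payment Required" as the reason phrase
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Send $0.002 to read this article.\n")

    def log_message(self, *args):
        pass  # keep the demo quiet

# To try it out:
# HTTPServer(("localhost", 8402), PaywallHandler).serve_forever()
```

Visit the page in a browser and you would see the error that, had history gone differently, might have been as familiar as 404.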

So little has changed about the way content creators make money online in more than two decades:

On today’s web, advertising has become the dominant business model. It’s something Sir Tim Berners Lee and his colleagues at CERN probably couldn’t have imagined when they published the first website nearly three decades ago.

In fact, new and insidious dynamics have emerged which the dotcom pioneers likely could not have foreseen. Smarter recommendation algorithms reward clicks and neglect quality – more clicks means more eyeballs, and more eyeballs means more ad revenue. Indeed, they seem to have discovered a perverse shortcut towards their end goal of ‘recommend to the user what they want to see’ — no longer just learning user preferences ex post, but actively changing them to render preferences more predictable.

Is it a stretch to wonder whether the funding model (or lack thereof) for online journalism played some causal role in the widely reported upsurge of ‘post-truth’ political discourse? The mechanism isn’t hard to imagine: emotional appeals, divisive claims, inflammatory half-truths and the like are going to rake in more clicks, and so generate more ad revenue, than nuance. No matter your revenue model, surely more clicks means more cash? Not quite — when you pay with money as well as time, you’re incentivised to spend your money on something you anticipate will be valuable, or else freely tip for something only after you found it valuable. Clickbait doesn’t work when it relies on people reading the whole thing and choosing to tip. This point can be extended: how often do you find yourself clicking on an attention-grabbing headline on your Twitter timeline only to furiously skim the article for the juiciest morsels of controversy or intrigue? Wouldn’t the expectation of paying something like $0.20 for the privilege jolt you into reassessing your reasons for engaging with content, and the kind of content you engage with in the first place? Might it be reasonable to expect, on the margin, curiosity to replace procrastination as the reason for clicks? Note that the costs of aimless procrastination are already there in the time you hand over to it — throwing real money into the mix, however little, just makes the cost salient.

In short, almost all online content has failed to find a remotely functional revenue model. Why?

We so often hear that we have ‘come to expect’ online content to be free. Suggesting some article or YouTube video might be worth paying for comes as an affront. On encountering a paywall, most of us instantly search for a place to read about the same news story without the hassle of plugging in a credit card number or email. And it is because of this learned entitlement, we are told, that we cannot have good things — we get exactly what we are prepared to pay for.

I think this diagnosis is attractive but wrong. My guess is that most people would agree that other people have become too entitled to free content, but not them. If most people think some particular thing is true of other people and false of them, then most people are wrong. Of course, people do pay for content online: anybody who pays for a Netflix subscription, or buys music on Bandcamp, is signalling with their wallet that they are prepared to spend money for content they care about.

The story of iTunes drives this point home. In the early 2000s, digital music piracy was rife. The peer-to-peer file sharing software Napster had provided anyone with an internet connection a means of sharing and collecting music, virtually free of charge and of legal liability. At its peak, more than 60% of U.S. college dorm network traffic consisted of mp3s flying across the country. Every minimally competent internet user had a way of downloading half the world’s music without paying a dime. Even with the full weight of major record labels behind them, digital music stores with names like ‘MusicNet’ and ‘Pressplay’ proved to be flops. The online retailers which were able to survive operated on a subscription model.

Then iTunes arrived, and charged $0.99 per song. It was well-designed, relatively frictionless, and fun to use. As the tired phrase goes, it ‘just worked’. Even as competitors arrived undercutting iTunes’ prices, users stayed loyal to the uncluttered environment Apple had created. People were and are willing to spend money on digital content they could find cheaper or for free elsewhere.

Here’s another way to see why the ‘sheer selfishness’ explanation is wrong. Suppose the next time you read e.g. an article by a favourite journalist, you discover that clicking a button at the bottom of the page will instantly and automatically transfer $0.20 from your wallet to the journalist’s. No details to fill in: a single click. How often would you choose to click that button in similar circumstances? My guess is that most people are sufficiently warm-blooded to click the button more than 20% of the time for e.g. a well-written 3000+ word article or an enlightening 10-minute explainer video. I have watched video series, listened to podcasts, and read free online blogs and resources which rival in informative content the very best lectures I went to at university. Come to think of it, I often learned more listening to a podcast while walking to the lecture hall than I did in the following hour. If I were presented with some one-tap option to express my gratitude with a very small, ‘token’ amount of money, I would often do so unhesitatingly. If the small amount of money were required of me to access the content, I would be just as happy to cough up. This is a dynamic common to a whole range of nudges: I am prepared to do something if only it’s (i) presented to me, and (ii) exceptionally easy.
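Back-of-envelope, using the guesses above (a one-in-five click rate on a $0.20 tip, both hypothetical numbers), tipping lands in a respectable ballpark per reader:

```python
def tip_revenue_per_thousand(tip_rate: float, tip_size: float) -> float:
    """Expected tip revenue per 1,000 readers, if a fraction `tip_rate`
    of them tip `tip_size` dollars each."""
    return 1000 * tip_rate * tip_size

# 20% of readers tipping $0.20 apiece comes to roughly $40
# per thousand readers -- before any platform fees.
revenue = tip_revenue_per_thousand(0.20, 0.20)
```

The numbers are of course guesses, but they only need to be within an order of magnitude to compete with ad revenue per thousand views.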

It might be beginning to sound like money is being left on the table here. If content providers offered an easy way to pay very small amounts of money for units of content, I’m suggesting that enough people would pay to rival advertising as a source of revenue. Why don’t they? Why was there never an iTunes for journalism, or blog posts, or videos?

One response is that this begs the question. In certain pockets of the internet, tipping small amounts of money is already straightforward and attractive enough to rival advertisements as a source of revenue. The Guardian seems to have flourished online by soliciting voluntary donations rather than installing paywalls. The popular Chinese social media app WeChat recently installed a tip button for its users’ content. For authors whose potential audience uses WeChat, far more money can be made by self-publishing their work in return for tips than by selling it by the word to a journalistic outlet (which in turn makes money primarily through advertisements). For one such journalist, “on average he receives $602 per article, and once received $4,815 for a short piece he wrote. He used to get paid $75 per thousand-words by magazines”.

Moreover, so-called “self-media” is less sensitive to the pressures that more established outlets feel from advertisers reluctant to place ads next to potentially controversial content; or even from outright censorship. It also reaches audiences which conventional (or ‘legacy’) media cannot: some people are reluctant to positively seek out a news website but happy to engage with content if it can be integrated with their existing social media feed. In the case of WeChat, one article reports that, “[o]ver half of active users say the main reason they use WeChat is to get the news”.

A further explanation for the web’s micropayment-shaped hole is that payments online always involve the same flat costs of spending 30-or-so seconds digging out and plugging in your card details and possibly email, and then entrusting another payment platform or website with that information. A blog post or podcast episode worth $0.20 is just not worth that faff – even less so if we’re performing the same gymnastics half a dozen times a day. This is true, but it’s a shallow explanation. The question we should now be asking is: why are small online payments still so clunky?

Maybe the single most significant limiting factor here is the technology that underlies online payments. Almost every online transaction conducted today will have been handled by one of two payment infrastructures: Visa or Mastercard. These networks often charge small flat fees (approximately $0.20) per transaction, or otherwise impose minimum fees. Credit card payments work (extremely) well for medium to large transfers of money (say $2 and upwards) where security and relative immediacy are valued and anonymity and privacy are not. Where does that leave micropayments? In a recent Wired article, Zeynep Tufekci observes:

Right now, we’re stuck where the automobile industry was when cars were still “horseless carriages,” wagon-wheeled monstrosities with high centers of gravity and buggy seats. We’re still letting an older technology—credit cards, designed for in-­person transactions, with high fees and financial surveillance baked in—determine the shape of a new technological paradigm. As a result, that paradigm has become twisted and monopolized by its biggest players. This is one of the modern internet’s greatest errors; it’s past time that we encounter “402 Payment Required” for real.

Credit cards found popular use in the 1960s and were awkwardly ported to the digital realm in the late 90s, shortly followed by the arrival of online-only payment platforms like PayPal and Stripe. Ultimately, these newer systems are still wrappers around the same credit card technology they are often framed as replacing or improving upon. PayPal, for instance, acts as a middleman between credit cards, paying interchange fees on their behalf.

Unsurprisingly, alternative payment technologies now exist which are far better suited to micropayments. Bitcoin is emphatically not one of them. Clearing a transaction on the Bitcoin network typically costs (in August 2020) about $4 – more than 16 times as much as Visa and Mastercard charge. Other cryptocurrencies do hold out much more promise.

The point here is not that e.g. Visa, Mastercard, and other big payment processing networks are doing their job badly — that they got complacent in the knowledge that challengers have vanishing hopes of displacing them. Coil CEO Stefan Thomas notes in one interview, “[t]hey are really good at their jobs, and it’s not like I can point at those systems and say, “You’re using an outdated this or that.” These are well-architected, and battle-tested systems”. Transferring money, it turns out, involves a whole lot more than decrementing my money counter and incrementing yours. You need protections and insurances against fraud; you need to distribute liabilities; some way of trusting each link in the chain; a way to handle chargebacks; often some way of converting between currencies; and so on. Nor would it be right to say that the reason big payment processors are so expensive and inconvenient for small web payments is that they’re stubbornly immovable middlemen through which every payment must unfortunately pass — opportunistically skimming from every transaction or otherwise imposing an unnecessary centralised bottleneck. On the one hand, middlemen can and do serve an indispensable purpose, and the value they offer very often outweighs the costs they impose (versus decentralised, direct transfer). On the other, decentralised networks which eliminate middlemen are often slower and far costlier — Bitcoin being the prime example. It’s more accurate to say that different use cases are best served by different solutions. Credit cards aren’t suited to micropayments not because they’re generally outmoded, but because they represent the solution to a different set of problems.

I first encountered the idea of micropayments while sitting in a half-empty college lecture room, watching a well-known figure in crypto pitch his new cryptocurrency. He made sure to stress the inadequacy of credit card infrastructure for handling very small transactions. Naturally, we were promised that this new altcoin was it: scalable, secure, with rock-bottom transaction costs. Now every payment system in the world just needs to make the switch.

Regrettably, not even the shiniest new cryptocurrency can avoid the cold truths of network effects. Adopting the latest cryptocurrency XYZ as a means of transferring money is useful to the extent that other people accept XYZ – and their incentive to adopt XYZ varies with the number of other people they expect to use XYZ, and so on in a hall of mirrors. If the adoption of a new platform plays out over the course of years, a content provider typically cannot afford to offer any kind of micropayment solution that relies on the network effects of competing payment technologies. Partial adoption might prove particularly tricky in the case of micropayments for online content if the old model relied on ads, because from the consumer perspective there’s little incentive to get round to adopting some new payment technology solely in order to avoid looking at ads. If the partial adoption is from a subscription model, things look marginally brighter: a consumer reluctant to shell out on certain subscriptions might find themselves wanting to read paywalled articles enough that adopting some micropayment technology is worth the effort. But there’s a dazzling array of cryptocurrencies and other platforms vying for dominance, and it’s hard to imagine any kind of convergence even within the next half-decade or so. As such, the best-case scenario for these nascent micropayment technologies is a world in which users are forced to sign up for a bunch of different platforms to pay different content providers — effectively screening off all but the most tech-literate users. In sum: the tech already exists to facilitate fast, low-fee micropayments. The challenge is multilateral adoption.

That’s not the only problem. Even if some perfectly suited platform were adopted by everyone, we’re left with the recalcitrant problem of the human attention span. The computer scientist Nick Szabo notes the mental expenditure involved whenever we compare goods of any value. It looks like an irredeemable bug in the human source code that the effort we spend comparing $1 goods is typically so much more than 1% of the effort we spend comparing $100 goods. I am personally blessed with the special kind of neuroticism that requires me to anguish over any shopping decision, and the less significant the better. Coke Zero or Diet Coke? Anxiety is the dizziness of freedom.

Szabo’s point is that even once the technology allows for seamless ‘one-click’ micropayments, we are always going to be left with the burden of deciding whether this particular bit of content is worth it. Since the cost of that decision has a floor much higher than the fractions of a dollar that micropayments involve, few micropayments are ever worth it. Do I want to pay $0.20 for this 5-minute video, or this article? Wouldn’t that video be better value? Let’s call the whole thing off. In general, better to bundle goods together. That’s why we do get to choose our meals at a restaurant, but we don’t get to tailor every ingredient. In Szabo’s own words:

This [decision] entails a significant mental cost, which sets the most basic lower bounds on transaction costs. For example, comparing the personal value of a large, diverse set of low-priced goods might require a mental expenditure greater than the prices of those goods (where mental expenditure may be measurable as the opportunity costs of not engaging in mental labor for wages, or of not shopping for a fewer number of more comparable goods with lower mental accounting costs). In this case it makes sense to put the goods together into bundles with a higher price and an intuitive synergy, until the mental accounting costs of shoppers are sufficiently low.

So a future for micropayments faces two basic problems: reaching a critical mass of users of whatever credit card replacement technology is used to implement them, and navigating the ‘mental accounting costs’ of payments of any size. Now I want to explore ways these problems have been solved. Do micropayments have a future, and what does it look like?

One way to dodge the network-effect problem is to find yourself already in control of one of the biggest networks in the world. It looks as if micropayment plans are already cooking away at various social media giants, each planning to leverage their large existing user base by implementing their own internal currency. Facebook is reportedly planning to launch its own native digital payments this year – colloquially tagged ‘Facebucks’ (Libra is a different beast). Since Facebook owns WhatsApp and Instagram, it is easy to imagine a full-blooded market growing up inside the walls of the Facebook garden. Your friend could write blog posts or publish family recipes and charge a couple of Facebucks – which the recipients could spend in turn. Similarly, Apple’s new credit card grants cashback in ‘Apple Cash’: not real money, but redeemable in the App Store or Apple Music. Just as Facebook is able to capitalise on its roughly 2 billion users, Apple Cash works because of Apple’s control over a vast ecosystem of content.

In 2016 the streaming platform Twitch introduced its own internal currency of ‘Bits’. Bits are purchased in bulk with real money (1,500 Bits would cost me £19.40) and are used to buy ‘Cheers’. When your favourite streamer is live, you can join the chat and Cheer a specified number of Bits. Depending on the size (cost) of your Cheer, an animated emoji-type sticker will appear in the chat, and possibly on the stream itself. The streamer and the audience both know the cost of each such Cheer, and a big enough Cheer is often enough to garner sincere thanks from the streamer for the rest of the audience to see. Top Cheerers earn special badges and coveted slots on Cheering leaderboards.

Internal micropayments even have uses beyond paying for content. A nascent Twitter alternative called Twetch uses its internal currency not just for tipping posts, but also for dealing with trolling. Instead of blocking a user outright, users can force certain accounts to pay a ‘troll toll’ for the privilege of interacting with them. On Twitter, it remains virtually costless to harass another user until you get blocked. It’s about the incentives, dummy.

So: one way to implement micropayments that don’t suck is to build a system inside your own walled garden. Even if that garden contains some two billion people, it still has walls. How might micropayments finally be realised on the world-wide-capital-W-Web? To find clues, consider internet pre-history.

The World Wide Web did not emerge fully formed from its complicated birth. Before the various military and academic intranets were able to form anything resembling an internet, they needed to be able to speak to one another. For a period, they spoke past one another – various protocols worked with specific machines and those machines only. Those protocols formed networks, but those networks criss-crossed and overlapped without ever fully joining. During that time, major corporations and political bodies each had their favoured protocols. Eventually and perhaps inexorably, many disjoint networks became one internet (precise chronology aside). Whatever the shared protocols ended up being, everyone had an interest in coordinating around one such protocol for each application. Compare: it doesn’t really matter whether everyone drives on the left or the right, but we’d better pick one and stick to it. In practice, this led to a suite of protocols – TCP/IP – which has survived almost unmodified to the present day. If you want to retrieve text-based documents from a server (i.e. visit virtually any website on earth), your computer and the server share a language called HTTP to do so. Crucially, the requirements placed on users of the internet by these protocols were minimal and undemanding: they were thin wrappers with wide scope for variety, experimentation, and competition at the level of the underlying hardware and software. It was just enough to guarantee that computers could talk to one another, and nothing more. As a result, few people now know or care what the servers that give us our daily internet look like or what OS they run: they just work.

Are we, with respect to payments, where the Web’s predecessors were with respect to sending emails and ‘hypertext’ documents? This is the claim (or hope) of an initiative to implement a new protocol for ‘payments across payment systems’: the Interledger Protocol. The analogy goes like this: before the introduction and implementation of agreed-upon communication protocols, different network systems, operating systems, and even hardware were not interoperable. If I wanted to speak to your computer, I had better check it belonged to the same network, and/or was made by the same manufacturer, and/or ran the same software. The absence of shared protocols meant that smaller networks were not able to form anything like an internet. We are in that position nowadays with respect to payments: many payment service providers and gateways exist, with limited means of talking to one another — particularly across currencies and borders. Credit cards constitute the most universal medium for online payments, but they’re clunky, and the underlying ideas and infrastructure were never made with the internet in mind. Cryptocurrencies of various kinds boast a dizzying array of capabilities, but for the most part exchanges must be used to convert between currencies (including from fiat to crypto and vice versa). Not to mention, no cryptocurrency has nearly high enough adoption to constitute a viable de facto ‘currency of the web’. As such, the analogy continues, we need some kind of shared protocol for conducting online payments within and across payment providers and technologies.

Here’s the pitch from interledger.org:

Interledger is an open protocol suite for sending payments across different ledgers. Like the Internet, connectors route packets of money across independent networks. The open architecture and minimal protocol enable interoperability for any value transfer system.

Interledger can make confident claims about being able to process all manner of payment sizes, speeds, and levels of security precisely because it doesn’t come with any established way of doing those things; rather, it provides a way of connecting existing platforms and technologies. Like the early web protocols, the idea of Interledger is to abstract away as much as possible while giving other systems just enough common vocabulary to talk to one another. Being unopinionated is the point: everything the protocol leaves unasserted is left open to choice. Choice means competition. TCP/IP worked in part because it was just about the most minimal abstraction going. If you weren’t happy with your backend application, you could switch to any number of alternatives knowing that they would all be able to talk to the very same users the previous one did. By analogy, Interledger sets out a minimal “set of rules that define how nodes should send value”. It includes only the concepts it needs in order to achieve that stated aim — some notion of packets, nodes, stages, success conditions, etc. You’re required to specify an address to send the money to, an amount, and some extra metadata. No less, but scarcely more. “It’s hard to imagine what you would take away to further simplify”, Stefan Thomas notes.
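To make the minimalism concrete, here is a toy model in Python of roughly the information such a packet carries. The field names and defaults are my own illustration, not Interledger’s actual wire format:

```python
from dataclasses import dataclass

@dataclass
class PreparePacket:
    """Toy model of an Interledger-style 'send value' packet:
    an address, an amount, and a little metadata -- scarcely more."""
    destination: str          # ILP-style address, e.g. "g.example.alice"
    amount: int               # in the smallest unit of the sender's ledger
    expiry_seconds: int = 30  # how long connectors may hold the transfer
    data: bytes = b""         # opaque end-to-end metadata

# A payment of 125 minor units to a made-up address:
packet = PreparePacket(destination="g.example.alice", amount=125)
```

Everything else — which ledger the units live on, how connectors route the packet, how it is settled — is deliberately left to the systems on either end.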

The Interledger protocol is an abstraction that connects payment platforms; it doesn’t itself provide the technology for micropayments. By analogy, a protocol for sending documents over TCP/IP is worthless without servers and backend applications to do the sending, and without actual humans to sit down at a computer and start sending and receiving them. We have the cornerstone; now it’s time to build around it.

Enter Coil, a micropayment service in the form of a browser extension that connects content creators with paying users. Remember how the Interledger protocol in principle supports transfers as tiny as one hundredth of a cent? Coil leverages this possibility to implement payment ‘streaming’, whereby a user continually pays the creator for the time they spend reading, watching, or listening to their stuff.

Some more detail. As a user, you pay Coil a $5 per month subscription. Bundled into your subscription are subscriptions to a bunch of other sites or site perks which you might not consider paying for otherwise: an ad-free version of Imgur, access to all content on a neat new video platform called Cinnamon which pays its creators more than the obvious competitors. The real magic, though, happens when a Coil subscriber visits the website of a content creator who has signed up to receive Coil payments. For every second the user spends on that site (as long as they’re active), Coil streams tiny amounts of money from user to creator — to the tune of about $0.36 an hour. Coil automatically decides how to slice up the $5 per month between the sites you visit. The final piece of the technological puzzle is the Web Monetization API, a JavaScript browser API which plugs into Interledger and allows a website or browser extension to set up a payment stream. This is what makes Coil’s slow trickle of money possible. As a creator, getting started involves signing up and adding some code to your site. It’s unnervingly simple — a single meta tag:

<meta name="monetization" content="$my-payment-address">

After that, you can detect who among your visitors is using Coil, and reward them in any way you see fit: with exclusive content, by removing ads, whatever.
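As a sketch of what that detection might look like: the Coil-era draft of Web Monetization exposed a `document.monetization` object whose `state` moves between `"stopped"`, `"pending"`, and `"started"`. The helper below is hypothetical, and the property and event names follow that draft — they may differ in later revisions of the spec.

```javascript
// Decide what to serve a visitor based on their Web Monetization state.
// Pure function, so the policy can be tested outside a browser.
function visitorTier(monetization) {
  if (!monetization) return "show-ads";              // no extension installed
  if (monetization.state === "started") return "premium"; // money is streaming
  return "wait";                                     // pending or stopped
}

// In a browser you would wire it up roughly like this (draft API names):
//
// document.monetization.addEventListener("monetizationstart", () => {
//   applyTier(visitorTier(document.monetization)); // hide ads, unlock extras
// });
```

The important design point is the graceful fallback: a visitor without the extension simply sees the site as it is today.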

I think Interledger has a good shot at solving the ‘multilateral adoption’ problem. That leaves the second barrier to micropayments: remember Nick Szabo’s point about the ‘mental accounting cost’ of deciding when and how much to pay people, which he thought implied a lower bound on payments and thereby ruled out micropayments? I think payment streaming, on the model of Coil, solves that problem by bundling all the tiny decisions I would otherwise have to make (“was this blog post worth $0.20?”) into a single decision to subscribe to the streaming service, after which the individual amounts are delegated to a simple time-on-site calculation. The future looks bright.
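And the time-on-site calculation really is simple. A back-of-the-envelope sketch, using the $0.36-per-hour figure mentioned above (the function name and rounding are mine):

```javascript
// Coil-style streaming at roughly $0.36 per hour: each active second
// streams about a hundredth of a cent -- exactly the scale of payment
// Interledger is built to carry.
const RATE_PER_HOUR = 0.36; // dollars

function streamedAmount(activeSeconds) {
  return (RATE_PER_HOUR / 3600) * activeSeconds; // dollars
}

streamedAmount(1);    // one active second: about $0.0001
streamedAmount(3600); // a full active hour: about $0.36
```

No per-article decision anywhere in sight — the only choice the user ever made was the $5 subscription.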

And Coil is by no means the only player here. The ‘Unlock Protocol’ promises a similar deal using its own blockchain, through which creators can grant access to paying members — “taking back subscription and access from the domain of middlemen — from a million tiny silos and a handful of gigantic ones — and transforming it into a fundamental business model for the web.” Then there’s Scroll, a subscription-based extension / app which vanishes ads for paying members. Flattr gives users some extra control over who their money goes to, how much they pay, and how often. Finally, the Brave browser comes bundled with tipping functionality that works with independent sites and big platforms alike. On YouTube, Twitter, and Twitch, you see a little icon allowing you to send some chosen amount of the Ethereum-based ‘Basic Attention Token’ (BAT), a cryptocurrency designed for this very job. If the recipient hasn’t signed up to Brave’s tipping scheme, you can let them know they’ve been tipped, and they’ll find the balance tied to their (e.g. Twitter) account, ready to be claimed. And if a website has signed up to receive contributions from Brave users, an icon in the address bar lets you send a small (or large) payment in a couple of clicks. Creators who get tipped in this way can convert their BAT to money, or pay it on to support other creators.

What these solutions have in common is that they can be introduced incrementally, without having to wait for a critical mass of paid-up subscribers. That’s because you can set your site to deliver ads to those who haven’t yet signed up, and remove them (or supplement them with extra content) for those who have. There’s no threshold number of signed-up users to reach first, so fewer network effects to worry about.

The last story in this hopeful trend is a fund established with a view to fostering these fairer, healthier ways of monetising the web. It’s called Grant For The Web, it tallies to a sweet $100 million, and you can read about it on its very own lush homepage. It’s supported by some unimpeachably good eggs: Coil, the Mozilla Foundation, and Creative Commons.

Time to wrap up. If you remember anything, let it be this: content on the Web is broken; it’s broken in large part because of how it currently makes (and can’t make) money; and things did not, and do not, have to be that way. It is often most comfortable to chalk up big apparent failures or inefficiencies or losses like this to some unchangeable feature of the world, or human nature, or plain arithmetic. Sometimes those explanations are right — sometimes apparent bugs turn out to be indispensable features. Price tags might inconvenience consumers who wish that more things were free, but the world obviously goes better when some things cost money. It is true that people have come to expect more and more content to be served up for free. Sometimes we hear that this indicates a fundamental barrier to micropayments on the web: people just won’t pay for this stuff; we’ve spoiled ourselves; we got too much of a good thing. So we have ads, everywhere and unavoidably, and we have an arms race of ad blocking and escalation. But I submit there is no such barrier. We expect free stuff online by habit, not necessity or nature.

What about the technology, or the business models? Here, too, there is good news. Interledger and the Web Monetization API are trying to do what the early internet protocols did for email, file sharing, and the World Wide Web: enabling competition between payment systems. Alternatives to credit card payments — notably in the form of cryptocurrencies — solve the ‘minimum transaction cost’ problem and make transferring very small amounts of money feasible. Coil and its competitors solve the ‘mental accounting cost’ problem by bundling payment decisions into a single subscription and streaming money to creators automatically. And Grant For The Web means that the open source organisations which shape the Web are getting behind the idea in a big way — skin is emphatically in the game.

Widespread adoption will take time, but Coil and other micropayment technologies play well with other sources of revenue, so there’s no either-or. People have seen micropayments on the near horizon ever since status code 402 (‘Payment Required’) was written into the first HTTP spec. In the more than two decades since, they never arrived. But maybe this time is different, and we can look forward to a Web where micropayments finally get their moment. Watch this space.

Links and Further Reading

If you liked this post, give the Brave browser a whirl and you can tip me while you’re at it!

If you spotted any mistakes or you have any comments, get in touch.

⟵ back to writing