Table of Contents
- Part 0: Introduction (you're reading it)
- Lots more to come!
Introduction
I started outlining this series months ago, while I was on sabbatical, but never got around to writing the actual words. I've got a new job now (at OnePass, a refreshingly sensible company providing actually-useful services, which is sadly not the norm at the moment) -- work is extremely busy, but I do need to think about things other than that and politics sometimes, so let's get this going!
All year, I've been mulling the problem of Trust Architectures: how do we share information about "trust" online? As I'll discuss under Use Cases (next time), I think it's getting to be Steam Engine Time to take it seriously. Between the AI Slopocalypse spewing nonsense all over the Web, and the social networks succumbing to Advanced Enshittification, it's getting ever-harder to understand who to trust.
This isn't even remotely a new problem, mind -- it was a pretty old topic when we explored adding this sort of thing to Trenza way back in 2001. But it's rarely been taken really seriously, and most of the better attempts have wound up buried inside proprietary walled gardens that don't necessarily have the human user's best interests at heart.
There appears to be a lot of relatively recent literature on the topic, some of it possibly even good (I'm cautiously intrigued by the OpenRank project). But much of it is obsessively focused on Blockchain, which I'm rather skeptical about (I still consider it to be 90% a solution in search of problems), and most appears to have a lot of assumptions baked in.
So let's step back, and tease this apart. I'm going to intentionally go in a bit naively, so as not to be too biased by everyone else's assumptions, and explore the topic from first principles, winding up with a very high-level sketch of how things might work. Once I have straight what I think are the interesting use cases, requirements, and architectural parameters, we can take a properly critical look at what's already out there.
I expect this to take at least 6-7 installments, likely more like 10 before I'm done -- it's a big, chewy problem with a lot of facets. As I add parts, I'll add them to the Table of Contents at the top of the Dreamwidth version of this post. I'll likely edit some of these posts as we go and folks point out additional nuances; I'll try to be good about crediting folks who point stuff out, so call me on it if you feel like you haven't been acknowledged properly.
This is not fully-baked yet: I'm going to be thinking out loud. That's why this is "towards" -- I'm seeking to make progress here, and we'll see where it winds up. It's possible that we'll find that the One True Trust Architecture already exists, and we should be lobbying for everyone to adopt it. It's also entirely possible that we'll conclude that the problem is insoluble in principle, and give up. (Hopefully not.) The goal is to come to a better shared understanding of the topic, and ideally some actionable ideas about how to deal with the problem.
I hope you'll join in. While I'm going to do a lot of talking over the next couple of months, it's going to be a lot more productive if you chime in with your own thoughts and ideas.
I'm intentionally posting this on Dreamwidth because despite (or maybe because of) its antiquity and old-fashioned UX, it's still the best place for posting and discussing complex, long-form topics, free from the AIs and enshittification consuming most other places.
So I'm planning to post primarily to Dreamwidth, mirror to Medium and LinkedIn since some of the technical crowd mainly knows me there, and link from Mastodon and Bluesky. (But not Facebook, which I've mostly given up on, or Xitter, which I've entirely abandoned.) On platforms that have tagging, I'll be using #TrustArch as the tag for this series.
Comments are welcome at all of those places -- I'm curious to see where I get good conversations -- but the authoritative copy of these posts will be Dreamwidth, and that's the copy that will get edited and updated as this evolves.
That said, a couple of ground rules. I don't want to see comments saying that if it's not 100% perfect, it's not worth trying. (I'm reasonably certain that it's impossible to make this perfect, but I'm moderately confident we could create something helpful.) And I'll be downright scornful of naive claims that we should just leave this for AI to deal with -- while I think it's likely to get quite powerful over the next decade, I'm not at all sanguine that it's going to be trustworthy to that degree any time in the foreseeable future.
But aside from that sort of thing, I'd love to get some serious conversation going. So come along, share your thoughts, and let's tease apart this important problem!
(no subject)
Date: 2025-09-29 12:53 am (UTC)
Quite broad -- indeed, this line of thought wound up broader than just trust, although that's my main focus. Plan is to cover a lot of use cases next time.
(no subject)
Date: 2025-09-29 10:27 am (UTC)
Having a photograph (with film negative) was considered strong evidence for 150 years, because it was difficult and expensive to alter film after it was shot, but for the previous tens of millennia of human history, all pictures were hand-made, and it was cheaper to get a picture of something that didn't happen than of something that did. Having a picture was no evidence at all that it actually happened, and with AI-generated photographs and even video, we're now back to that epistemological state.
David Friedman says that in traditional Middle Eastern historiography, the reliability of a story was documented by the names of the reputable people the story had passed through, and the more such steps it had passed through, the more reliable. Of course, (to get "meta"), either he or I could have misunderstood something in there.
(no subject)
Date: 2025-09-29 10:50 am (UTC)
Interesting -- not even remotely a use case I was thinking about, but probably fits the framework. I'll add it to the list -- thanks!
(no subject)
Date: 2025-09-30 12:11 am (UTC)
Attestation -- who are the people who are willing to lend their names to the chain, and what do you know about their principles, abilities, and levels of caution? (For non-individual-person examples, consider kosher certifications, or UL, or Consumer Reports.)
I'm interested in seeing how you scope "trust" -- authentication, reproducing experiments, and evaluation skills/tools all seem to fit in. Among other things.
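A purely hypothetical sketch of the attestation idea above: each party signs the statement plus the previous signature, so the chain records exactly who lent their name, in order. Everything here is invented for illustration (the toy keys, the statement, and the use of HMAC as a stand-in for real public-key signatures); an actual Trust Architecture would need proper identity and key management.

```python
# Toy attestation chain: each attester signs (statement + previous signature).
# HMAC stands in for real public-key signatures; keys/names are invented.
import hashlib
import hmac

def attest(key: bytes, statement: bytes, prev_sig: bytes = b"") -> bytes:
    """One attester's signature over the statement and the chain so far."""
    return hmac.new(key, statement + prev_sig, hashlib.sha256).digest()

def verify_chain(keys, statement, sigs):
    """Check each signature against the statement and its predecessor."""
    prev = b""
    for key, sig in zip(keys, sigs):
        if not hmac.compare_digest(attest(key, statement, prev), sig):
            return False
        prev = sig
    return True

# Build a two-step chain with toy attester keys.
keys = [b"kosher-cert-key", b"ul-key"]
statement = b"this product meets the standard"
sigs, prev = [], b""
for k in keys:
    prev = attest(k, statement, prev)
    sigs.append(prev)

assert verify_chain(keys, statement, sigs)
```

Note that altering the statement, dropping an attester, or reordering the chain all break verification, which is what makes the ordered list of names meaningful.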
(no subject)
Date: 2025-09-30 01:00 am (UTC)
I'm going to be spending some time on that, across several posts. The topic of Identity turns out to be central to any of this making any sense at all, and it's a very complex problem with a bunch of facets. Indeed, the problem of Identity Theft -- whether that be a person's email address being stolen or a company getting bought -- turns out to be critical enough that the protocol has to be partially designed around it.
(no subject)
Date: 2025-09-30 07:24 pm (UTC)
I'm primarily concerned with the higher level: which sources of information are trustworthy, and along what dimensions. Rating individual data points is the degenerate case: possibly interesting, but a lot less useful in the grand scheme of things.
So "word-of-mouth stories" is closer, but it's more like "which sources of news are considered trustworthy by the people I consider trustworthy for news, recursing transitively?" That's a huge, complicated problem (because almost every word in that is nuanced), but IMO a very timely one. And the more I look at it, the more use cases I find that roughly boil down to that problem -- the next article, which lists a sampling of those use cases, is probably going to be the longest in the series.
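As a purely hypothetical sketch of that recursive question, here's what "trust the sources my trusted people trust, transitively" might look like in miniature. All of the names, weights, the damping factor, and the depth limit are invented for illustration -- the real problem is vastly more nuanced (identity, cycles, gaming, topic-specific trust), which is rather the point of the series.

```python
# Toy transitive trust: graph maps each person to {trustee: weight in [0, 1]}.
def trust_score(graph, me, source, depth=3):
    """Estimate how much `me` trusts `source`, by asking the people `me`
    trusts directly, recursively, with decaying weight per hop."""
    direct = graph.get(me, {})
    if source in direct:
        return direct[source]          # a direct judgment wins outright
    if depth == 0:
        return 0.0                     # too far removed to count
    # Average my contacts' opinions, weighted by my trust in each contact,
    # damped by 0.5 so more distant hops count for less.
    total, weight_sum = 0.0, 0.0
    for contact, w in direct.items():
        total += w * trust_score(graph, contact, source, depth - 1)
        weight_sum += w
    return 0.5 * (total / weight_sum) if weight_sum else 0.0

# Invented example: alice has no direct opinion of "daily-bugle",
# but bob (whom she trusts highly) does, and carol (less trusted) does too.
graph = {
    "alice": {"bob": 0.9, "carol": 0.6},
    "bob": {"daily-bugle": 0.8},
    "carol": {"daily-bugle": 0.2},
}
print(trust_score(graph, "alice", "daily-bugle"))
```

Even this toy exposes the nuances: the damping factor, how to combine disagreeing contacts, and what "trust for news" versus "trust in general" means are all design decisions, not givens.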