Samuel Ludford
3 min read · Aug 14, 2021


Another great post! Many excellent things going on here - gonna restrict myself to some loose comments on some of the later bits.

I think the biggest, most practically significant claim you make here is encapsulated in the phrase "Game B lies in good mechanism design." I've not delved deeply into Game B, but the sense I've always been left with when I have is that Game A is identified with something like 'zero-sum thinking'. Sometimes it's also identified with defect-defect outcomes, but in an ambiguous kind of way which suggests the distinction between psychology and mechanism is simply not made in that world, which is ironic given that this is exactly what game theory illuminates (as you articulate in your discussion of the PD). It could be argued that the psychology / mechanism distinction is in many ways more important than the Game A / Game B distinction if what we're interested in is cooperative outcomes, as opposed to cooperative attitudes. It's a subtle point though, and difficult to make.
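To make the psychology/mechanism point concrete, here's a minimal sketch (with hypothetical payoff numbers) of the one-shot prisoner's dilemma: defect-defect is the unique pure-strategy Nash equilibrium regardless of the players' attitudes, because the outcome is fixed by the payoff structure, not by anyone's psychology.

```python
# Payoffs as (row player, column player); strategies "C" (cooperate)
# and "D" (defect). Numbers are the standard illustrative PD values.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(row, col):
    """True if neither player gains by unilaterally deviating."""
    r, c = PAYOFFS[(row, col)]
    return all(PAYOFFS[(alt, col)][0] <= r for alt in "CD") and \
           all(PAYOFFS[(row, alt)][1] <= c for alt in "CD")

equilibria = [(r, c) for r in "CD" for c in "CD" if is_nash(r, c)]
print(equilibria)  # [('D', 'D')] - the only equilibrium, whatever the players want
```

However warm the players' cooperative attitudes, the mechanism delivers mutual defection; that's the gap between cooperative attitudes and cooperative outcomes.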

I was intrigued by the bit about the revelation principle, which I've not encountered before. On a certain reading, it almost seems to be saying that any coordination problem that can be solved at all can be solved using a trustless mechanism. This brought to mind some of Vitalik Buterin's writings on the design philosophy behind Ethereum 2's proof-of-stake consensus model (https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51), where he argues that the integrity of a (trustless) blockchain always ultimately depends on the integrity of the (trustful) social layer that contains it. In the quoted passage where Hurwicz and Reiter talk about the "givens" of the problem, perhaps the thing about real-world problems is that those givens are never stable except in the context of some trustful outer layer. (And indeed, this is exactly the question that separates Austrian school free-market types - who are happy to take a system of agents with preferences as ontological givens - from e.g. Hegelian Marxists - who characteristically reject the givenness of these preferences, arguing that economic forces act on and shape them. That is, from the perspective of the latter, the former simply deny the existence of the social layer their whole theory depends on.)

Anyway, there seems to be a potential risk here that 'trust' is treated as a tacit appeal to transcendence (and therefore bad), while 'trustlessness' is identified with immanence (and therefore good). I think many bitcoin people and free-market enthusiasts have this kind of outlook. But what can get forgotten is that what we're really after is something more like 'immanent trust' - trust without appeal to a transcendent authority. You can of course implement a trustless mechanism that solves the prisoner's dilemma - centralised authoritarianism would do it, Hobbes style. But if we restrict ourselves only to decentralised trustless mechanisms - i.e. purely economic ones - I very much doubt that we can design solutions to PDs and other multi-polar traps without making question-begging assumptions about what the givens are. Point being: while mechanism design is totally the right frame to look at this, it would be an easy mistake to identify this with economic (or trustless) mechanism design, when what is ultimately required is social mechanism design, which can't possibly avoid getting caught up in matters of trust.
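The Hobbesian move can be sketched as a toy mechanism (hypothetical numbers again): a central authority fines each defector by some amount, and for a large enough fine the equilibrium flips from defect-defect to cooperate-cooperate. The point is that the mechanism only "solves" the PD by assuming the enforcer, i.e. by changing the givens from outside the game.

```python
def pd_payoffs(fine):
    """One-shot PD payoffs with `fine` deducted from each defector."""
    base = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }
    return {
        (r, c): (pr - fine * (r == "D"), pc - fine * (c == "D"))
        for (r, c), (pr, pc) in base.items()
    }

def equilibria(payoffs):
    """All pure-strategy Nash equilibria of a 2x2 game."""
    def is_nash(row, col):
        r, c = payoffs[(row, col)]
        return all(payoffs[(alt, col)][0] <= r for alt in "CD") and \
               all(payoffs[(row, alt)][1] <= c for alt in "CD")
    return [(r, c) for r in "CD" for c in "CD" if is_nash(r, c)]

print(equilibria(pd_payoffs(fine=0)))  # [('D', 'D')] - no enforcer
print(equilibria(pd_payoffs(fine=3)))  # [('C', 'C')] - enforcement flips it
```

The fine parameter is doing all the work, and nothing inside the game guarantees the fining authority itself - that guarantee lives in the trustful outer layer.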

Anyways, just some quick thoughts - good stuff!



Written by Samuel Ludford

I’m a London-based writer interested in technology, subculture, and philosophy. I blog at divinecuration.github.io
