This is a preview of a research series titled "DAOsclarosis". This first preview is published here, with the remainder of the articles posted on other forums. Feedback in the comments or via email/DMs is always welcome. LaTeX isn't enabled yet on the forums, so it may be easier to read on our Notion page.
Posted on our Notion.
DAOsclarosis: on the persistence of faulty models in governance
"Demosclerosis isn't a problem you solve. It's a problem you manage." Jonathan Rauch, Demosclerosis: The Silent Killer of American Government, 1994
The DAO Corollary
F.K.A. Amdahl's Corollary
The most efficient way to implement a piece of software is to do it all yourself.
No time is wasted communicating (or arguing); everything that needs to be done is done by the same person, which increases their ability to maintain the software; and the code is by default way more consistent.
Turns out "more efficient" (or "more effective") doesn't mean "faster" (both in performance and in time to delivery). When there are more people working on the same problem, we can parallelize more at once.
When we break work up across a team, in order to optimise for the team, we often have to put more work in, individually, to ensure that the work can be efficiently parallelized. This includes explaining concepts, team meetings, code review, pair programming, etc. But by putting that work in, we make the work more parallelisable, speeding things up and allowing us to make greater gains in the future.
Amdahl's Law
Amdahl's law can be formulated as follows:
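S_{\text{latency}}(s) = \frac{1}{(1 - p) + \frac{p}{s}}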
In other words, it predicts the maximum potential speedup, S_{\text{latency}}, given a proportion of the task, p, that will benefit from improved (either more or better) resources, and a parallel speedup factor, s.
To demonstrate, if we can speed up 10% of the task (p=0.1) by a factor of 5 (s=5), we get the following:
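S_{\text{latency}} = \frac{1}{(1 - 0.1) + \frac{0.1}{5}} = \frac{1}{0.9 + 0.02} = \frac{1}{0.92} \approx 1.09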
That's about a 9% speedup. Eh, fair enough. If we can swing it, sounds good.
However, if we can speed up 90% of the task (p=0.9) by a factor of 5 (s=5), we get the following:
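S_{\text{latency}} = \frac{1}{(1 - 0.9) + \frac{0.9}{5}} = \frac{1}{0.1 + 0.18} = \frac{1}{0.28} \approx 3.57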
That's roughly a 250% increase! Big enough that it's actually worth creating twice as much work; it still pays off, assuming the value of the work dwarfs the cost of the resources.
As s \rightarrow \infty, \frac{p}{s} \rightarrow 0, so we can also drop the \frac{p}{s} term if we can afford potentially infinite resources at no additional cost, leaving:
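S_{\max} = \frac{1}{1 - p}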
In other words, if 90% of the work can be parallelised, we can achieve a theoretical maximum speedup of 10x, or a 900% increase. This is highly unlikely, but gives us a useful upper bound to help us identify where the bottleneck lies.
Generalising a PID to the amount of work
Typically, we start off with a completely serial process. In order to parallelize, we need to do more work. It doesn't come for free.
This means that when computing s, the parallel speedup, we should divide it by the cost of parallelisation.
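Writing the cost of parallelisation as c (a symbol introduced here for convenience; dividing s by the cost is the same as multiplying p by it), the adjusted law becomes:

S_{\text{latency}} = \frac{1}{(1 - p) + \frac{p \, c}{s}}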
For example, if the cost is 2, that means that making the work parallelisable (without actually increasing the number of resources) makes the parallel portion take twice as long as it used to. (The serial portion is unchanged.)
So, if we take the example from earlier, where 90% of the work is parallelisable but it costs twice as much to parallelize, we'll get the following result:
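S_{\text{latency}} = \frac{1}{(1 - 0.9) + \frac{0.9 \times 2}{5}} = \frac{1}{0.1 + 0.36} = \frac{1}{0.46} \approx 2.17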
It's still about a 117% increase in output! However, if p=0.1, then there's really very little point in adding more resources.
And if the cost of parallelisation is greater than the potential speedup, bad things happen:
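Whenever the cost factor exceeds the speedup factor (c > s), the \frac{p \, c}{s} term exceeds p, the denominator grows past 1, and S_{\text{latency}} drops below 1: the "speedup" is a net slowdown. For instance, keeping p = 0.9 and s = 5 but assuming a cost factor of roughly 6.7 (an illustrative figure, chosen here to show the effect):

S_{\text{latency}} \approx \frac{1}{(1 - 0.9) + \frac{0.9 \times 6.7}{5}} = \frac{1}{0.1 + 1.206} \approx 0.77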
Adding 4 more resources slows us down by 23%. Many of us have seen this happen in practice with poor parallelization techniques: poor usage of locks, resource contention (especially with regard to I/O), or even redundant work due to mismanaged job distribution.
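For anyone who wants to play with the numbers, here is a minimal sketch (the function name and signature are our own, not from the original series) that reproduces the figures quoted above using the cost-adjusted form of the law:

```python
# Minimal sketch of the cost-adjusted Amdahl's law used above.
# speedup() and its parameter names are illustrative choices.

def speedup(p: float, s: float, cost: float = 1.0) -> float:
    """Overall speedup when a proportion p of the work is sped up by a factor s,
    and parallelising inflates the parallel portion of the work by `cost`."""
    return 1.0 / ((1.0 - p) + (p * cost) / s)

print(f"{speedup(0.1, 5):.2f}x")           # ~1.09x: about a 9% speedup
print(f"{speedup(0.9, 5):.2f}x")           # ~3.57x: roughly a 250% increase
print(f"{speedup(0.9, 5, cost=2):.2f}x")   # ~2.17x: about a 117% increase
print(f"{speedup(0.9, 5, cost=6.7):.2f}x") # ~0.77x: a net slowdown of roughly 23%
```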
So, What Does It All Mean?
Amdahl's law tells us something very insightful:
When the value of your work is much greater than the cost, you should optimise for parallelism, not efficiency.
The cost of a weekly two-hour team meeting is high (typically in the $1000s each time), but if it means that you can have 7 people on the team, not 3, it's often worth it.
Delivering faster means you can deliver more.
It's better to have 10 people working on 5 problems and doing a better job than to have 10 people working on 10 problems.
The former will lead to fewer conflicts, fewer defects, and a much more motivated team. In other words, improvements in p and s produce returns faster than the extra work they require.
Conversely, if all the knowledge of how the product works is in one person's head, p \rightarrow 0. While there's no impact to efficiency this way, it limits our ability to produce, because one person can only do so much. Adding more people just makes things slower.