Consent is easily abstracted as a boolean function.
An agent A's consent function is defined C: S -> B,
C(s) = True if A agrees to s, else False,
where s is an action in S, the set of all actions which may be undertaken by another agent,
and B is the set {True, False}.
However, in the popular conception of consent, there are some complications.
First is the ability to reason: if A is mentally incapacitated, then C(s) = False for all s.
Second is information: if A has been told a lie about an action, then C(s) = False for all s.
The astute reader will note that both restrictions serve the same purpose -- to ensure A can accurately assess the consequences of s.
Let us call the combination of the information given to A and A's reasoning abilities the wisdom of A, denoted W.
We rule that consent is not possible when W is under a certain threshold k.
Now we have C(s, W, k), which is always False if W < k.
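As a minimal sketch in Python, assuming illustrative names (`agrees`, `wisdom`, and `threshold` are not part of the formalism above):

```python
def consent(s, agrees, wisdom, threshold):
    """Sketch of C(s, W, k): consent is only possible when wisdom meets the threshold.

    `agrees` stands in for A's raw agreement to action s; `wisdom` and
    `threshold` correspond to W and k above. All names are illustrative.
    """
    if wisdom < threshold:
        return False          # A cannot consent at all when W < k
    return bool(agrees(s))    # otherwise fall back to A's stated agreement


# Example: an incapacitated agent (low W) cannot consent to anything.
always_yes = lambda s: True
print(consent("sign_contract", always_yes, wisdom=0.2, threshold=0.5))  # False
print(consent("sign_contract", always_yes, wisdom=0.9, threshold=0.5))  # True
```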
But how should W be computed?
Bits of information extracted by A from the information supplied?
This seems reasonable.
It takes into account how much information A is given and also how well A can process it.
However, it does not take into account the magnitude of the consequences of action s.
One solution is to take into account A's utility function, U.
U* = U(s, I) is what A, under a reference wisdom I (to be chosen below), would predict to be the change in utility caused by s.
U' = U(s, W) is what A, with its actual wisdom W, predicts to be the change in utility caused by s.
The intuition is that when the actual utility differs too much from A's prediction of utility, A cannot consent.
In particular, if U' is much higher than U*, A is over-optimistic, and cannot consent.
One question is the selection of I, for which there are two possible candidates.
The first is for I to be infinite. This makes U* the exact (true) change in utility.
This is appealing and elegant mathematically, but flawed.
Suppose action s has a large negative utility in the far future, but neither A nor the acting agent B is aware of this.
A should be able to consent to s in this case, yet consent would not be permitted if I were infinite.
The second is for I to be V, the wisdom of the acting agent B. This makes U* the utility change as predicted by B.
Then if B is malicious and attempts to lower A's utility while deceiving A, B expects U* to be low.
This causes U' to be much higher than U*, preventing consent, which is the desired outcome.
Therefore, let U* = U(s, V).
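A toy numerical sketch of this malicious-B case; the utility values and the table-based stand-in for U are invented purely for illustration:

```python
def utility_estimate(s, wisdom):
    """Toy stand-in for U(s, wisdom): the change in A's utility predicted
    under a given level of wisdom. The values below are invented."""
    table = {
        ("accept_deal", "V"): -10.0,  # U* = U(s, V): B knows the deal harms A
        ("accept_deal", "W"): +2.0,   # U' = U(s, W): A expects a small gain
    }
    return table[(s, wisdom)]


u_star = utility_estimate("accept_deal", "V")   # B's prediction
u_prime = utility_estimate("accept_deal", "W")  # A's prediction
# A's prediction is far above B's -- the signature of deception -- so consent is blocked.
print(u_prime > u_star)  # True
```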
There are two obvious methods by which the utility discrepancy D may be computed.
The first is the ratio: D = U'/U*.
This method suffers when U* = 0, or even when U* is near 0.
Intuitively, it makes consent nearly impossible when the expected change in utility is small.
The second is the difference: D = U' - U*.
This difference metric makes more sense.
Intuitively, D limits the utility that can be lost each time consent is given.
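A short sketch contrasting the two candidate metrics on made-up numbers:

```python
def ratio_discrepancy(u_prime, u_star):
    """D = U'/U*: unstable when U* is near zero."""
    return u_prime / u_star


def difference_discrepancy(u_prime, u_star):
    """D = U' - U*: bounds the utility that can be lost per consent."""
    return u_prime - u_star


# With a small expected change in utility, the ratio explodes even though
# the absolute discrepancy is modest; the difference metric stays sensible.
u_prime, u_star = 1.0, 0.01
print(ratio_discrepancy(u_prime, u_star))       # 100.0
print(difference_discrepancy(u_prime, u_star))  # 0.99
```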
However, unlike the previous metric, this one is exploitable by the technique of consent splitting.
Consider an action s where U' is much larger than U*.
This action can be subdivided into many subactions, for each of which U' is only slightly larger than U*.
This allows A to consent to every subaction even when A may not consent to s.
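A numerical sketch of consent splitting under the difference metric; the threshold and utilities are made up:

```python
K = 5.0  # hypothetical per-consent discrepancy threshold: consent requires D <= K

def can_consent(u_prime, u_star, k=K):
    """Difference-metric consent check: D = U' - U* must not exceed k."""
    return (u_prime - u_star) <= k


# One large action: A predicts +20, B predicts -20, so D = 40 and consent fails.
print(can_consent(u_prime=20.0, u_star=-20.0))  # False

# The same action split into 10 subactions, each with D = 4: every one passes,
# even though the total discrepancy is still 40.
subactions = [(2.0, -2.0)] * 10
print(all(can_consent(up, us) for up, us in subactions))  # True
```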
The following mitigation is proposed.
Limit utility loss per unit time.
Therefore, A must take at least D/k seconds to consent to an action s with utility discrepancy D, under rate threshold k.
This does not seem like an elegant solution, but no better one has yet been found.
Now we have C(s, D, k, t), where D = U(s, W) - U(s, V).
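A sketch of one way the rate limit could be enforced; the class, the rate constant, and the timing mechanism are all assumptions rather than anything specified above:

```python
import time

RATE_LIMIT = 2.0  # hypothetical k: maximum utility discrepancy consentable per second

class RateLimitedConsent:
    """Sketch of C(s, D, k, t): an action with discrepancy D requires at least
    D / k seconds of elapsed time since the last consent was granted."""

    def __init__(self, rate_limit=RATE_LIMIT):
        self.rate_limit = rate_limit
        self.blocked_until = 0.0  # earliest time at which consent is possible again

    def consent(self, s, u_prime, u_star, agrees, now=None):
        now = time.monotonic() if now is None else now
        discrepancy = u_prime - u_star           # D = U(s, W) - U(s, V)
        if now < self.blocked_until:
            return False                         # still inside the D/k waiting period
        if not agrees(s):
            return False
        self.blocked_until = now + max(discrepancy, 0.0) / self.rate_limit
        return True


# Splitting no longer helps: each subaction with D = 4 costs D/k = 2 seconds of waiting,
# so the total wait matches what the undivided action would have required.
c = RateLimitedConsent()
print(c.consent("subaction", u_prime=2.0, u_star=-2.0, agrees=lambda s: True, now=0.0))  # True
print(c.consent("subaction", u_prime=2.0, u_star=-2.0, agrees=lambda s: True, now=1.0))  # False (too soon)
print(c.consent("subaction", u_prime=2.0, u_star=-2.0, agrees=lambda s: True, now=2.5))  # True
```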
Generalizing consent for relativistic reference frames under time dilation is left as an exercise to the reader.
This concludes the lecture on consent. I applaud anyone who makes it this far.