> logistic mixing just transforms probabilities from [0,1] to [-inf,+inf]
Actual implementations map [0..SCALE] to a bounded range like [-SCALE/2..SCALE/2], not to infinity.
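For reference, a fixed-point stretch() doing such a mapping could look like this (a sketch, not from any specific coder; SCALE=4096, the scaling factor and clipping bounds are assumptions):
Code:
#include <math.h>

#define SCALE 4096

// map p in (0..SCALE) to a clipped fixed-point logit
int stretch( int p ) {
  double x = log( (double)p/(SCALE-p) )*(SCALE/16.0);
  if( x<-SCALE/2 ) x=-SCALE/2;
  if( x> SCALE/2 ) x= SCALE/2;
  return (int)x;
}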
Also, it's really not the point at all.
> You don't need a reliability counter.
It's a matter of terminology.
We can use something like this:
Code:
// probability estimates the two models assigned to the actual bit value
q1 = bit ? p1 : SCALE-p1;
q2 = bit ? p2 : SCALE-p2;
// reward the model that assigned the higher probability to that bit
if( q1>q2 ) w1++; else w2++;
// weight of the first model, assuming the mix p = w*p1 + (1-w)*p2
w = (double)w1/(w1+w2); // float division, or w would always be 0 or 1
And it would be a valid update function for both linear and logistic mixers,
i.e. it would improve compression over static mixing.
Sure, it's not the best one, but then you can call w1,w2 "reliability counters".
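For example, assuming w is the weight of the first model, the actual mixing step could look like this (a sketch; names match the snippet above):
Code:
// linear mix: weighted average of the two bit-1 probabilities
p = (int)( w*p1 + (1-w)*p2 + 0.5 );
// a logistic mixer would instead average stretch(p1),stretch(p2)
// with the same weights and squash the result back to [0..SCALE]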
> Think about which of the two statistical distributions you would
> trust more, the more skewed one or the more balanced one? Remember how
> the statistics are built: the counter builds up skew very slowly
> (depending on your counter-update scheme), so present skew means
> a _lot_ of bias has occurred.
That's only right for very skewed symbol distributions, like in
high-order contexts, and for counters initialized with 0.5.
But it doesn't apply to occurrence counters (i.e. p=n0/(n0+n1)),
or to schemes like unary coding, or to data types different
from high-order context histories.
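For example, an occurrence counter in this sense (a sketch; SCALE and bit as in the snippet above, the init values are illustrative):
Code:
// occurrence counter: estimate p(bit==0) from explicit counts
unsigned n0=1, n1=1; // n0==n1 at start, ie p=0.5
int p0 = (int)( (long long)n0*SCALE/(n0+n1) );
if( bit ) n1++; else n0++;
Here the skew directly reflects the observed counts, so it doesn't require a long drift away from the initial state.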
Also, always predicting bit=0 when you have p1=1 and p2=SCALE/2
(i.e. using the most skewed estimation, like you suggested)
is not optimal in either case.
> You should trust a _lot_ of bias (like if you know you have a biased
> coin, are you going to win with the balanced or the biased coin?).
You're right, but it doesn't apply to compression at all.
Here you have to consider codelength instead of the plain number of correct guesses.
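For example (the numbers are made up for illustration): with a coin of true bias t=0.7, the balanced prediction costs exactly 1 bit per flip, while an over-skewed one costs twice as much:
Code:
#include <math.h>
#include <stdio.h>

// expected codelength in bits/symbol: true bias t, coder's prediction q
double cost( double t, double q ) {
  return -( t*log2(q) + (1-t)*log2(1-q) );
}

int main( void ) {
  printf( "%.3f\n", cost(0.7,0.99) ); // over-skewed: ~2.003 bits
  printf( "%.3f\n", cost(0.7,0.50) ); // balanced: 1.000 bit
  printf( "%.3f\n", cost(0.7,0.70) ); // matched: ~0.881 bits
  return 0;
}
So by codelength the balanced guess beats the over-skewed one, even though the over-skewed guess is right 70% of the time.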
> Charles did the first PPM-SSE based on that argument with
> log-scaling over three probabilities (like your two) back then.
Logistic mixing is unrelated to log scaling.
Also, that was only SEE; SSE was invented by another person.