1. SSE is of little use for mixing different predictions because it was not designed for this purpose. Probability interpolation in SSE is also nearly useless: the compression gain is negligible while the execution time grows by an order of magnitude. Thus, we need a modeler with the following properties:
a) the possibility to attach a new auxiliary submodel without affecting the old ones;
b) weak dependency on the input probability.

2. Suppose we have some context model that generates a probability estimation $Prob_i^0$ for a symbol, where $i$ is the count of encoded symbols. This estimation is the input of an SSE modeler that generates the estimation $Prob_i^1 = \frac{Nom_i}{DeNom_i}$. Suppose our auxiliary submodel gives a small correction to the probability estimation $Prob_i^1$; for simplicity of calculation, we can choose this correction as $Prob_i = \frac{Nom_i + \varepsilon Nom_i}{DeNom_i + \varepsilon Nom_i}$. By the weak definition of probability, $\sum_i Nom_i = \sum_i \delta_i DeNom_i$, where $\delta_i = 1$ if the symbol was encoded at the $i$-th step and $\delta_i = 0$ on a coding failure. Applying the same constraint to the corrected estimation and solving for $\varepsilon$, we get:
$$\varepsilon = \frac{\sum_i (\delta_i\, DeNom_i - Nom_i)}{\sum_i (Nom_i - \delta_i\, Nom_i)}$$
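Spelling out the step above (the same constraint applied to the corrected estimation, nothing new):
$$\sum_i (Nom_i + \varepsilon Nom_i) = \sum_i \delta_i\,(DeNom_i + \varepsilon Nom_i) \;\Rightarrow\; \varepsilon \sum_i (1 - \delta_i)\, Nom_i = \sum_i (\delta_i\, DeNom_i - Nom_i),$$
which is exactly what the running sums $S1$ (numerator) and $S2$ (denominator) in the code below accumulate.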
Also, a good choice for the probability correction is $Prob_i = \frac{Nom_i}{DeNom_i + \varepsilon\,(DeNom_i - Nom_i)}$.
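The source does not derive $\varepsilon$ for this second form; as a sketch, applying the same weak constraint to it (my derivation, an assumption) gives
$$\sum_i Nom_i = \sum_i \delta_i\,\bigl(DeNom_i + \varepsilon\,(DeNom_i - Nom_i)\bigr) \;\Rightarrow\; \varepsilon = \frac{\sum_i (Nom_i - \delta_i\, DeNom_i)}{\sum_i \delta_i\,(DeNom_i - Nom_i)}.$$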

3. Code sample for $DeNom \sim 2^{15}$:
#include <cstdlib>  // abs()

extern int Nom, DeNom;  // current estimation Prob = Nom/DeNom from the SSE stage

// Assumed helpers (not given in the source): CLAMP bounds x to [lo, hi],
// MyMulDiv computes a*b/c with 64-bit intermediate precision.
static inline int CLAMP(int x, int lo, int hi) { return x < lo ? lo : (x > hi ? hi : x); }
static inline int MyMulDiv(int a, int b, int c) { return (int)((long long)a * b / c); }

struct TSE_MODELER {
    enum { S2_INIT = 1 << 17, MAX_ADD = 1 << 18, S1_BOUND = 1 << 24 };
    void init() { S1 = 0; S2 = S2_INIT; }
    void tuneProb(int ndn[]) {
        ndn[0] = Nom; ndn[1] = DeNom;  // remember originals for update0/update1
        // i ~ epsilon*Nom with epsilon = S1/S2; keep Nom positive and the add bounded
        int i = CLAMP(MyMulDiv(S1, Nom, S2), 1 - Nom, MAX_ADD);
        Nom += i; DeNom += i;
        // rescale the accumulators so they stay in the working range
        if (abs(S1) > S1_BOUND) { S1 >>= 1; S2 >>= (S2 > S1_BOUND / 64); }
        if (S2 > (1 << 30)) { S1 >>= 1; S2 >>= 1; }
    }
    // S1 accumulates sum(delta_i*DeNom_i - Nom_i), S2 accumulates sum((1-delta_i)*Nom_i)
    void update0(int ndn[]) { S1 -= ndn[0]; S2 += ndn[0]; }  // coding failure, delta_i = 0
    void update1(int ndn[]) { S1 += ndn[1] - ndn[0]; }       // symbol coded,  delta_i = 1
    int S1, S2;
};
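For completeness, a minimal usage sketch; the coder call encodeBit() and the driver names are hypothetical, not from the source:

int Nom, DeNom;   // definitions for the externs above; filled by the SSE stage per symbol
TSE_MODELER tse;  // call tse.init() once before coding starts

// hit == true when the estimated symbol was actually coded (delta_i = 1)
void codeStep(bool hit) {
    int ndn[2];
    tse.tuneProb(ndn);              // correct Nom/DeNom in place, keep originals in ndn
    // encodeBit(hit, Nom, DeNom);  // hypothetical: pass the corrected estimation to the coder
    if (hit) tse.update1(ndn);      // symbol coded
    else     tse.update0(ndn);      // coding failure
}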