I'm trying to prove that the constrained convex minimization problem with decision variable $\boldsymbol{x} \in \mathbb{R}^{n}$ given by
$$ \min_{\boldsymbol{x}} \Vert \boldsymbol{x} \Vert_{2} \text{ s.t.}$$
$$ \Vert \boldsymbol{x} \Vert_{1} = 1, $$
$$ \boldsymbol{0} \leq \boldsymbol{x} \leq \boldsymbol{1} $$
has the optimal solution $\boldsymbol{x}^{\star}$ with $x_{i}^{\star} = 1/n$ for all $i$. I 'know' this solution is correct from solving the problem numerically (and it makes intuitive sense), but I haven't been able to prove it analytically. Note that since $\boldsymbol{x} \geq \boldsymbol{0}$, the constraint $\Vert \boldsymbol{x} \Vert_{1} = 1$ reduces to the affine constraint $\sum_{i} x_{i} = 1$, so the problem is convex and the KKT conditions are necessary and sufficient. I tried writing down the Lagrangian $\mathcal{L}(\boldsymbol{x}, \boldsymbol{\mu}, \lambda)$, but I couldn't figure out how to choose $(\boldsymbol{\mu}^{\star}, \lambda^{\star})$ and then prove $(\boldsymbol{x}^{\star}, \boldsymbol{\mu}^{\star}, \lambda^{\star})$ to be a saddle point. Any help would be appreciated. Cheers!
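For reference, here is a minimal sketch of the numerical check I did, assuming SciPy's SLSQP solver; since $\boldsymbol{x} \geq \boldsymbol{0}$, I encode $\Vert \boldsymbol{x} \Vert_{1} = 1$ as the affine constraint $\sum_{i} x_{i} = 1$:

```python
import numpy as np
from scipy.optimize import minimize

n = 5
# With x >= 0, ||x||_1 = sum(x), so the equality constraint is affine.
cons = {"type": "eq", "fun": lambda x: np.sum(x) - 1.0}
bounds = [(0.0, 1.0)] * n  # box constraint 0 <= x <= 1

# Random feasible starting point on the simplex.
x0 = np.random.default_rng(0).dirichlet(np.ones(n))

res = minimize(lambda x: np.linalg.norm(x), x0,
               bounds=bounds, constraints=cons, method="SLSQP")
print(res.x)  # every entry converges to 1/n
```

For any $n$ I tried, the solver returns the uniform vector, so the claim seems robust numerically.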