n = 10;
eqSystem[n_] :=
Table[x[i] ==
2./n*Sum[If[i != j, (x[i] - x[j])^(-1), 0], {j, 1, n}], {i, 1, n}]
testSystem = eqSystem[n];
vars = Table[x[i], {i, 1, n}];
cond = Table[x[i] < x[j], {i, 1, n}, {j, i + 1, n}] // Flatten;
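As a quick sanity check on the setup, the n = 2 system is small enough to inspect by hand (the 2./n factor makes the coefficients numeric):

eqSystem[2]
(* two equations of the form x[1] == 1./(x[1] - x[2]) and x[2] == 1./(x[2] - x[1]), up to output formatting *)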
You could try turning it into a minimization problem: replace the head of each equation in testSystem with Subtract (i.e. subtract each rhs from its lhs) and take the total of the squared differences. If all the equations are satisfied, the squared residual squareResid should be 0:
diffs = Subtract @@@ testSystem;
squareResid = Total[diffs^2];
On my laptop, this takes about 5 seconds for n = 10 and the squared residual is ~6*10^-12, so all the equations are very nearly satisfied. The danger is that the argument values in testSolutions don't perfectly satisfy the equations, and depending on the magnitude of the gradient of the objective function they could still be far from the true root, so it's important to verify these results:
({nMinResid, testSolutions} =
NMinimize[Join[{squareResid}, cond], vars]) // AbsoluteTiming
{5.20252, {6.13811*10^-12, {x[1] -> -1.5367, x[2] -> -1.13267,
x[3] -> -0.785613, x[4] -> -0.463586, x[5] -> -0.15335,
x[6] -> 0.15335, x[7] -> 0.463586, x[8] -> 0.785613,
x[9] -> 1.13267, x[10] -> 1.5367}}}
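Beyond the summed residual, it's worth looking at the worst single-equation error by substituting the minimizer's argument values back into the differences (maxErr is just a name introduced here for the check):

maxErr = Max[Abs[diffs /. testSolutions]]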
The thing I'd be worried about is that eqSystem contains terms like (x[i] - x[j])^(-1), so the objective function can blow up when any x[i] gets close to an x[j].
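If that becomes a problem, one hedge is to enforce a minimum spacing between consecutive variables instead of a bare strict ordering (ordering consecutive pairs implies the full ordering); the 0.01 gap here is an arbitrary guess and would need tuning:

gap = 0.01; (* arbitrary minimum separation; tune for your n *)
condGap = Table[x[i + 1] - x[i] >= gap, {i, 1, n - 1}];
(* then: NMinimize[Join[{squareResid}, condGap], vars] *)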
For n = 100, it takes about 100 s and the squared residual is ~6*10^-10 (suppressing the output of the argument values):
({nMinResid, testSolutions} =
NMinimize[Join[{squareResid}, cond], vars]); // AbsoluteTiming
(*{104.63, Null}*)
nMinResid
(*5.72012*10^-10*)
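For completeness, the n = 100 timing above assumes the definitions were rebuilt first, along the lines of:

n = 100;
testSystem = eqSystem[n];
vars = Table[x[i], {i, 1, n}];
cond = Table[x[i] < x[j], {i, 1, n}, {j, i + 1, n}] // Flatten;
(* then rebuild diffs and squareResid as above *)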
I haven't had time to run the n = 1000 case as of this writing (I can update later if I do), but this approach does seem faster than trying to use NSolve.
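If the NMinimize answer needs polishing, one option (a sketch, untested at large n) is to hand its output to FindRoot as starting values; FindRoot works on the equations directly and should converge quickly from a good initial point:

refined = FindRoot[testSystem, Table[{x[i], x[i] /. testSolutions}, {i, 1, n}]];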