You've implemented a program to produce a histogram (though in tabular rather than visual form). That lets you judge "by eyeball" whether the distribution looks uniform. You would expect each bin to contain 1% of the samples. But how much deviation is allowable before you lose confidence?
Since you used the word "proof" in your question, I feel compelled to mention that eyeballing would not be good enough proof in academic terms. Statisticians actually have quantitative tests to answer these questions. (Such tests are frequently used in medical publications, for example.) Here, "proof" would mean testing whether rng.Next(1, 101) produces a discrete uniform distribution.
The hypothesis is: "These outputs of rng.Next(1, 101) came from a discrete uniform distribution." You would start by calculating \$\chi^2\$. Then, you look up the \$\chi^2\$ value in the table for totalNumbers - 1 degrees of freedom. That gives you the probability that you have a uniformly distributed number generator.
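Concretely, if \$O_i\$ is the observed count in bin \$i\$ and \$E_i\$ is the expected count under the hypothesis (1% of the samples in each of the 100 bins), the statistic is

\$\$\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i},\$\$

where \$k\$ is the number of bins.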
That is Pearson's Chi-squared test. While it would be tricky to actually derive the table and thus automate the entire test, you could at least compute \$\chi^2\$, which is easy to do.
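Here is a minimal sketch of that computation in C# (the names ChiSquared, counts, and sampleCount are my own placeholders rather than identifiers from your code; adjust them to whatever you already have):

```csharp
using System;

class ChiSquaredSketch
{
    // Pearson's chi-squared statistic for a histogram whose bins are all
    // expected to be equally likely. counts[i] is the observed number of
    // samples that fell into bin i.
    static double ChiSquared(int[] counts, int sampleCount)
    {
        double expected = (double)sampleCount / counts.Length; // E_i, identical for every bin
        double chi2 = 0.0;
        foreach (int observed in counts)
        {
            double diff = observed - expected;
            chi2 += diff * diff / expected; // accumulate (O_i - E_i)^2 / E_i
        }
        return chi2;
    }

    static void Main()
    {
        // Hypothetical setup mirroring rng.Next(1, 101): 100 bins, one per possible value.
        var rng = new Random();
        const int sampleCount = 1_000_000;
        var counts = new int[100];
        for (int i = 0; i < sampleCount; i++)
        {
            counts[rng.Next(1, 101) - 1]++;
        }

        double chi2 = ChiSquared(counts, sampleCount);
        Console.WriteLine($"chi^2 = {chi2:F2}");
        // With 100 bins there are 99 degrees of freedom; a chi^2 far above
        // roughly 123 (the 5% critical value) would cast doubt on uniformity.
    }
}
```

For a million samples over 100 equally likely bins you would typically see \$\chi^2\$ hovering around 99, since the expected value of a \$\chi^2\$ statistic equals its degrees of freedom.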