If you were to read some of my past answers/comments about outliers and outlier removal, you would note that I can get quite irate with people who give very little thought to removing so-called outliers.
So first let me commend you for at least having some scruples.
And second, perhaps surprisingly coming from me, I see nothing wrong with you simply ignoring said "anomaly". As long as you clearly disclose this (as you did in the question), and note that you know it is "true data" (not an error) but so exceptional that it would bias your model, then you can simply ignore it.
And you may also want to ignore the datapoints at 83 and 61 (only one observation in each of these ranges of your data).
And if you then clearly state that your model is only valid in the range of 0 to ~40 Mg C ha−1 (it really would be pushing it to claim validity up to 50, as you have essentially no observations in that range), then there is no issue. That is the range in which your model can claim to be valid, and you are simply excluding datapoints beyond it.
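If it helps to see it concretely, here is a minimal sketch (in Python, on synthetic data; the variable names and the 0–40 cutoff are placeholders standing in for your actual data, not anything from your question) of restricting the fit to the supported range:

```python
import numpy as np

# Synthetic stand-in for your data; all names are hypothetical.
# carbon = observed C stock (Mg C ha^-1), x = whatever predictor you use.
rng = np.random.default_rng(0)
x = rng.uniform(0, 45, 120)
carbon = 0.8 * x + rng.normal(0, 2, 120)

# Keep only observations inside the range the model will claim validity for
# (0 to ~40 Mg C ha^-1, per the discussion above).
in_range = (carbon >= 0) & (carbon <= 40)

# Fit an ordinary least-squares line on the restricted data only.
slope, intercept = np.polyfit(x[in_range], carbon[in_range], 1)
print(f"Model valid for 0-40 Mg C ha^-1: carbon = {slope:.2f}*x + {intercept:.2f}")
```

The point is not the particular fit (use whatever model class suits your data), but that the exclusion and the resulting validity range are explicit and reported together.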
And trying to use "robust models" or other such tools is fool's gold; you have basically no observations beyond ~40, so claiming that you are modelling beyond that value is pointless.
Now, if you had observed a lot of values between ~40 and ~400, you could make a broader claim, but then you would not have an outlier, would you? You would have an extreme value, and it would be fair to account for it in your (broader) model.
But note that I would have been my (usual) irate self if you had tried to, e.g., estimate a parameter of the population. Excluding data in such a case is akin to falsification; you would report an incorrect estimate of, e.g., the mean. But in your case, trying to build a model, it is actually proper practice to exclude data points outside the range where you have sufficient data for your model (otherwise, you will suffer from the same issues as if you were trying to extrapolate outside the range of your model; see nice illustrations of this issue, including some xkcd cartoons, e.g. here).
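To make that extrapolation hazard concrete, a small sketch (again Python, synthetic data, all names hypothetical): two fits that agree within the observed range can disagree badly far outside it.

```python
import numpy as np

# Truly linear data with noise, observed only over 0-40.
rng = np.random.default_rng(1)
x = rng.uniform(0, 40, 100)
y = 2.0 * x + rng.normal(0, 4, 100)

lin = np.polyfit(x, y, 1)    # degree-1 (linear) fit
quad = np.polyfit(x, y, 2)   # degree-2 (quadratic) fit

# Inside the observed range the two fits nearly coincide; far outside it,
# the spurious quadratic term (fitted to pure noise) can dominate.
for x0 in (20, 400):
    print(f"x={x0}: linear -> {np.polyval(lin, x0):.1f}, "
          f"quadratic -> {np.polyval(quad, x0):.1f}")
```

The data cannot tell these models apart where you have observations, so any claim at 400 is the model talking, not the data.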