If you are removing outliers then, in most situations, you need to document that you're doing so and why. If this is for a scientific paper, or for regulatory purposes, removing them without that documentation could result in your final statistics being discounted and/or rejected.
The better solution is to identify the circumstances under which you think you're getting bad data (e.g. when people pull wires), log when those circumstances actually occur, and exclude the data for that reason. This will probably also drop some 'good' data points, but you now have a 'real' reason to tag and discount those points at the collection end rather than at the analysis end. As long as you do that cleanly and transparently, it's far more likely to be acceptable to third parties. If you remove the data points related to pulled wires and still get outliers, the probable conclusion is that the pulled wires are not the (only) problem -- the further problem could be with your experiment design, or with your theory.
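One way to make that concrete is to record the exclusion reason on each data point at collection time, so the analysis step never has to judge values by their magnitude. This is a minimal sketch, not any particular lab's pipeline; the `Reading` class, `tag` helper, and the example values are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    value: float
    excluded: bool = False
    reason: str = ""  # why the reading was tagged, recorded at collection time

def tag(reading: Reading, reason: str) -> Reading:
    """Tag a reading as suspect based on a real-world cause, not its value."""
    reading.excluded = True
    reading.reason = reason
    return reading

# Hypothetical data stream: one reading is tagged because the cause
# (a pulled wire) was observed, not because the number looked wrong.
readings = [
    Reading(9.8),
    Reading(9.7),
    tag(Reading(42.1), "wire pulled at 14:02"),
    Reading(9.9),
]

# Analysis uses only untagged data; every exclusion carries its reason,
# so the dropped points can be reported transparently alongside the results.
kept = [r.value for r in readings if not r.excluded]
dropped = [(r.value, r.reason) for r in readings if r.excluded]
```

Note that the decision to exclude is tied to an observed event, so any remaining outliers in `kept` genuinely point back at the experiment or the theory.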
One of the first experiments my mom did when returning to university to finish her BSc was one where students were given a 'bad' theory about how a process worked, and then told to run an experiment. Students who deleted or modified the resulting 'bad' data points failed the assignment. Those who correctly reported that their data disagreed with the results predicted by the (bad) theory passed. The point of the assignment was to teach students not to 'fix' (falsify) their data when it wasn't what was expected.
Summary: if you're generating bad data, then fix your experiment (or your theory), not the data.