I have a dataset with latitude/longitude of hotels of a "destination".
A destination is a city neighbourhood, whole city, or small region, usually having between 3 and 50 hotels.
About 1% of the latitudes/longitudes are totally erroneous (due to GPS/software/human error).
For instance, here are the locations of hotels in Nelson and Chiyoda:
Nelson: 
Chiyoda: 
As you can see, Nelson has no erroneous coordinates: most hotels are in the city center, with three in the suburbs. Chiyoda, however, clearly has one erroneous hotel.
I have tried using the LOF algorithm with k=2.
The Chiyoda outlier gets a score of 52, while the three suburb hotels in Nelson get 13, 3, and 3.
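For reference, here is a minimal sketch of the kind of experiment described above, using scikit-learn's `LocalOutlierFactor` with `n_neighbors=2`. The coordinates are made up for illustration (a tight Chiyoda-like cluster plus one far-away erroneous point), not taken from my dataset:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Hypothetical destination: five hotels clustered together
# and one with a grossly wrong coordinate (e.g. data-entry error).
coords = np.array([
    [35.690, 139.750],
    [35.700, 139.760],
    [35.690, 139.760],
    [35.700, 139.750],
    [35.695, 139.755],
    [34.500, 135.500],  # erroneous location, hundreds of km away
])

# k=2 as in the experiment above; higher score = more anomalous.
lof = LocalOutlierFactor(n_neighbors=2)
lof.fit(coords)
scores = -lof.negative_outlier_factor_

outlier_idx = int(np.argmax(scores))
print(outlier_idx, scores.round(1))
```

Note that this uses plain Euclidean distance on latitude/longitude, which is a reasonable approximation within a single small destination; for destinations spanning larger areas, a haversine metric would be more accurate.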
QUESTION: Is LOF the most suitable algorithm for my problem?
Also, I chose k=2 fairly arbitrarily; it may not be the best value.
The algorithm must work for tiny but dense city neighbourhoods like Upper East Side, but also for larger and more sparsely-populated areas like The Hamptons.
Algorithm speed is not a problem, I have time.