I'd like to add jdupes, a recent enhanced fork of fdupes, which promises to be faster and more feature-rich than fdupes (e.g. it has a size filter):
jdupes . -rS -X size-:50m > myjdups.txt
This will recursively find duplicate files bigger than 50MB in the current directory and write the resulting list to myjdups.txt.
Note that the output is not sorted by size, and since sorting appears not to be built in, I have adapted @Chris_Down's answer above to achieve this:
jdupes -r . -X size-:50m | {
    # jdupes separates match groups with blank lines; skip those,
    # then print each file's size followed by its path.
    while IFS= read -r file; do
        [[ $file ]] && du "$file"
    done
} | sort -n > myjdups_sorted.txt
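
If your du and sort are the GNU versions (an assumption; the -h flags are not available everywhere), the same pipeline can produce human-readable sizes and still sort correctly:

jdupes -r . -X size-:50m | {
    # Same loop as above, but with human-readable sizes (du -h)
    # and human-readable numeric sorting (sort -h, GNU coreutils).
    while IFS= read -r file; do
        [[ $file ]] && du -h "$file"
    done
} | sort -h > myjdups_sorted_human.txt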