  • Isn't this only for dirty entries? I don't think that's the issue on my system, as they are all clean -- the delay is not in writing back dirty pages but in defragmenting the space left by removing clean ones. Commented Jan 7, 2016 at 10:23
  • Yes, this is for dirty pages. I think you can also fix other performance problems by setting tuned to dynamic mode. Commented Jan 7, 2016 at 10:28
  • "Since Linux 2.6, [the bdflush] system call is deprecated and does nothing. It is likely to disappear altogether in a future kernel release. Nowadays, the task performed by bdflush() is handled by the kernel pdflush thread." man7.org/linux/man-pages/man2/bdflush.2.html Commented Feb 22, 2019 at 1:05
  • I wish there were a solution for this, because about 3 minutes ago I ran into a situation where sudo apt-get update would not work: the /etc/apt/sources file was being read from an OLD copy stored in cache. I needed to manually clear the cache for the new file to be read and for apt to work again. Unnecessarily labor-heavy. I wish the Linux experience were not so painful, because this has been happening for more than 15 years now. Very disappointing. Commented Feb 9, 2022 at 16:55
  • I can understand a file being held in cache for long periods of time. But are you telling me the Linux system does not have any kind of real-time verification of which of these files have changed in the meantime? A simple verification check would solve that. Back in my day, this was called a BUG. Commented Feb 9, 2022 at 16:57
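The first two comments distinguish dirty page-cache entries (modified and not yet written back) from clean ones (safe to discard at any time). A minimal sketch on a typical Linux system, assuming root access: sync writes the dirty pages out, while /proc/sys/vm/drop_caches only discards entries that are already clean.

    # Write dirty pages back to disk first; drop_caches only discards
    # clean (already written-back) page cache, dentries and inodes.
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches

    # Check how much of the page cache is currently dirty or under write-back.
    grep -E 'Dirty|Writeback' /proc/meminfo

Writing 1 to drop_caches discards only the page cache, 2 only dentries and inodes, and 3 both.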
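On the bdflush point: the system call is gone, and write-back is now handled by kernel flusher threads controlled through sysctl knobs under vm.*. A sketch for inspecting and tweaking them; exact defaults and sensible values vary by kernel and distribution, and the example numbers are purely illustrative.

    # Inspect the write-back tunables that replaced the old bdflush parameters.
    sysctl vm.dirty_background_ratio    # % of RAM dirty before background write-back starts
    sysctl vm.dirty_ratio               # % of RAM dirty before writers are throttled
    sysctl vm.dirty_expire_centisecs    # age (in 1/100 s) after which dirty data must be written
    sysctl vm.dirty_writeback_centisecs # how often the flusher threads wake up

    # Example (illustrative values): flush sooner and more often, for this boot only.
    sudo sysctl vm.dirty_background_ratio=5 vm.dirty_expire_centisecs=1000

To make such settings persistent they would normally go in /etc/sysctl.conf or a file under /etc/sysctl.d/.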