
I developed a process named X0 that continually appends lines to a CSV file to record electrical data such as voltage or power.

This process runs a Python program and is started using the following command:

sudo systemctl start getdata.Hypontech.system

The Python code used to open the CSV file in append mode is the following:

from datetime import datetime

def OpenDataFile():
    global f
    dt = datetime.now()
    global iFileDay
    iFileDay = dt.day
    sFileDate = dt.strftime("%Y-%m-%d")
    sFile = "./data.H1/Measures." + sFileDate + ".csv"
    print("FILE: " + sFile)
    f = open(sFile, "a")
    f.write(sFileDate + ";" + sHeader)
    f.write("\r\n")

When I use the cat or tail Linux commands, I see only the lines written before the X0 process was started.

How can I display the last CSV lines (or the whole content) without stopping the X0 process?

If this is impossible using standard Linux commands, is there a tool that allows it?

  • As far as I know, when a program is holding a write-lock on a file, you can't read from it. So what you would want to do is change your program so that every time it needs to write data, it opens the file for writing, appends to the file, and closes the filehandle. This should allow you to cat or tail the file and get the most recent data out of it. Commented yesterday
  • @LPChip AFAIK the mandatory lock feature in Linux was unreliable, optional since Linux 4.5 (2016), and no longer supported in 5.15 (2021) and above. But even when it kinda worked, a program that wanted to lock a file had to request a lock explicitly, I think. And then another program that wanted to read a locked (fragment of a) file would either block or get an error. If my hypothesis about the write buffer is right then your advice (closing the file) will probably work; still, closing and reopening the file again and again does not seem to be a good practice, the OP may just need to flush. Commented yesterday
  • @LPChip: That is true for "DOS share modes", which still exist in Windows (where a program needs to specifically opt in to sharing the file for reads, or reads/writes, when it is opening the file), but no such thing exists on Linux. Commented yesterday
  • The answer with flush() is good, this is the right thing to do. Nevertheless that answer does not answer any of your explicit questions. The answer to the title is "cat or tail, which you used; the fact you did not see the expected lines means they were not (yet) in the file". After learning about buffering and how to improve your program, do you still want to see lines you were expecting to see without stopping the process? If not then the question should be reworded to "why don't I see lines I expect and what to do to be able to?". Also see: XY problem. Commented yesterday
  • tail -f (follow) will display new lines when the OS sees them; as others have said, this is subject to buffering Commented 20 hours ago

1 Answer


Python file writes are buffered, so even though you keep calling Python's write(), the underlying OS write() system call isn't called until the in-memory buffer fills, or the file is closed.

If you can edit the Python script, you should add a flush() after each write, and/or open the file in line-buffered mode:

Flush:

f.write(sFileDate + ";" + sHeader)
f.write("\r\n")
f.flush()

Line-buffered mode:

f = open(sFile, "a", buffering=1)
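
As a quick check that line buffering makes each line visible to other readers (such as cat or tail) immediately, here is a minimal, self-contained sketch; the file path and the sample record are made up for illustration:

```python
import os
import tempfile

# Hypothetical CSV path, just for this demonstration.
path = os.path.join(tempfile.mkdtemp(), "Measures.demo.csv")

# Append mode with buffering=1: in text mode, each '\n' triggers a flush
# to the OS, so the data reaches the file without closing the handle.
f = open(path, "a", buffering=1)
f.write("2024-01-01;Voltage;Power\n")

# Without closing the writer, a second reader already sees the line,
# because it was flushed at the newline.
with open(path) as reader:
    print(reader.read(), end="")

f.close()
```

Note that `buffering=1` (line buffering) is only available for files opened in text mode; since the script writes `"\r\n"`, the flush happens at the trailing `"\n"`.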
  • Yes, buffering can be a pain for this kind of scenario. A web server I use buffers writes to the access logs for performance, which is good and fine when there's a lot of traffic passing through - but not so useful when trying to observe activity in a mostly-idle development environment. Commented 14 hours ago
  • Good answer. With my upvote, you hereby join the 10000 point club! Congrats! :) Commented 4 hours ago
  • @AmazonDiesInDarkness w00t (party) (cake) (balloons) Commented 3 hours ago
