The first part of this answer argues that developers should not rely on system hardening (such as PID randomization) as their sole protection.
Some obsolete programs relied on PIDs as an entropy source (credit to commenter Steffen Ullrich, who provided this insight under this post). Randomizing the PID was an ineffective fix for the obvious reason that PIDs are at most 32-bit values, so very little entropy can be drawn from them. Other non-solutions based on random PIDs are mentioned in the answer to the question linked in the OP.
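To put a number on "very little entropy": a rough sketch (illustrative, not authoritative) of the upper bound on entropy for a uniformly random PID. On Linux, pid_max defaults to 32768 and can be raised to at most 2^22 on 64-bit systems; a full 32-bit space is the absolute ceiling.

```python
import math

def pid_entropy_bits(pid_space: int) -> float:
    """Upper bound on entropy (in bits) of a uniformly random PID."""
    return math.log2(pid_space)

print(pid_entropy_bits(2**15))  # Linux default pid_max -> 15 bits
print(pid_entropy_bits(2**22))  # maximum pid_max on 64-bit Linux -> 22 bits
print(pid_entropy_bits(2**32))  # full 32-bit space, absolute ceiling -> 32 bits
```

Even the 32-bit ceiling is trivially brute-forceable, which is why fixing the application beats randomizing the PID.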
The true solution to these problems is fixing the application. Consider:
For external attackers, PIDs needn't and shouldn't be exposed in the first place. Uniqueness is not the only requirement for a session identifier between a server and a client; see the relevant OWASP cheat sheet on session management.
For internal attackers who can query the PIDs of processes: they shouldn't be able to enter your system in the first place, and when they do, they should be sandboxed and constrained. For example, file uploads should be virus-scanned, and code and command injection should be prevented.
For attackers stuck on the wall, ... :p just shoot them with a rifle.
There are, however, usability downsides to random PIDs:
- they obscure the order in which programs started,
- they make PIDs harder to type,
- they make PIDs in logs harder to read.
The second part discusses the status quo of research on PID randomization as an exploit mitigation.
I checked these outlets from OpenBSD; they seem to have realized that randomized PIDs aren't as great an idea as they once seemed, so they might have retracted the relevant papers. Whether or not they did, I can't find the information.
Search engines are of little help here: the information exists, but there is no discussion of merits. AI-augmented chatbots cite the question linked in the OP as a source, but offer only facts about implementations and no consensus on whether random PIDs are beneficial. There also seem to be very few research papers on the topic, even when I set the time range to 1995-2010.
The third part answers the question in the title:
Do deterministically random PIDs solve the problems of truly random PIDs?
From a comment:
Similar with many other hardening measures which don't fix the actual problem but just make it harder to exploit.
To address the effectiveness of my idea: permuted PIDs have a one-to-one mapping with the incrementing sequence, so at worst they are no worse than incrementing PIDs. Technically speaking, this does improve on truly random PIDs, which suffer from collisions due to the birthday paradox.
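A minimal sketch of such a one-to-one mapping (constants are illustrative, not from any real kernel): an affine map x -> (A*x + C) mod M is a bijection on [0, M) whenever gcd(A, M) = 1, so every PID in the space is issued exactly once per cycle and no birthday-paradox collision can occur.

```python
M = 2**15   # PID space size (Linux default pid_max)
A = 48271   # odd multiplier, hence coprime to the power-of-two M
C = 12345   # arbitrary offset

def permuted_pid(counter: int) -> int:
    """Map the n-th spawned process to a scrambled but collision-free PID."""
    return (A * counter + C) % M

# Within one full cycle, every PID appears exactly once:
pids = {permuted_pid(n) for n in range(M)}
assert len(pids) == M
```

Of course, an attacker who learns A and C can predict the sequence, which is exactly why this is hardening rather than a real fix.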
A bigger problem in practice is a new process accidentally receiving a signal from an old process that mistakes it for something else. As suggested by Steffen Ullrich, we can take inspiration from the TCP/IP stack:
To do this a list of recently closed ports is maintained.
I called this "dead process retention" in the discussion with him, and suggested that retention could be limited by duration or by the total number of dead processes. But this question is about random PIDs, so that will be the scope of another post.
As for the claim that the "major problem with random PIDs is that they repeat more quickly than sequential PIDs": no, it is not. How is it a problem? What are the impact and severity?