
A few years ago, the following occurred (quoting Wikipedia):

In 2021, researchers from the University of Minnesota submitted a paper titled "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits" to the 42nd iteration of a conference. They aimed to highlight vulnerabilities in the review process of Linux kernel patches, and the paper was accepted for presentation in 2021. The Linux kernel is a widely used open-source operating system component that forms the core of the Linux operating system, which is a popular choice in servers and in consumer-oriented devices like the Steam Deck, Android and ChromeOS. Their methods involved writing patches for existing trivial bugs in the Linux kernel in ways such that they intentionally introduced security bugs into the software. Four patches were submitted by the researchers under pseudonyms, three of which were rejected by their respective code reviewers who correctly identified the buggy code. The fourth patch was merged, however, during a subsequent investigation it was found that the researchers had misunderstood the way the code worked and had submitted a valid fix. This attempt at including bugs was done without Institutional Review Board (IRB) approval. Despite undergoing review by the conference, this breach of ethical responsibilities was not detected during the paper's review process.

It appears that, luckily enough, this idiotic endeavor did not actually lead to any security bugs being introduced into the Linux kernel.

I am, however, wondering about what would have happened had they actually succeeded in adding vulnerable code to the kernel. It does not appear that they intended to exploit the code, or for it to be exploited by anyone else, but this would still have left the kernel vulnerable.

Let's assume a hypothetical in which they actually did succeed in doing this, and nobody noticed or fixed the bugs until a malicious actor later found and exploited them (leading to millions or billions in damages to various people/companies using Linux), and UMN was then discovered as being the deliberate source of it.

Would the researchers (or even UMN itself) be civilly or criminally liable for having introduced the security bugs? If so, what torts or offenses would they have committed?

Criminal or civil fraud (the gain obtained being the results for their paper and whatever they get from publishing it) or civil negligence seem plausible, but as a layman I'm not very confident in my analysis, and my research hasn't yielded much that seems definitive. There especially seems to be essentially zero precedent for anything like this, so I have no idea as to the actual answer.

1 Answer


as a layman I'm not very confident of my analysis and my research hasn't yielded much that seems definitive, there especially seems to be basically 0 precedent for anything like this, so I have no idea as to the actual answer.

In this case, you are right to be unsure, because the actual answer is genuinely uncertain, mostly for the key reason you have identified: there is essentially no precedent for anything like this. Your research was correct to leave you without much confidence in an answer.

A good starting point is to recognize that there are no precedents squarely on point. It is also a tricky question for a variety of reasons. The best we can do is to stack up what we do know and to try to see how this applies to this fact pattern.

Maliciously defective products

There are numerous examples, mostly in wartime or military-directed action (but also in incidents of domestic criminal or terrorist conduct), of a manufacturer deliberately producing a defective product to harm the end user.

For example, during World War II, a French car maker that was producing vehicles for the Nazi military at the insistence of the Vichy government deliberately designed the dipsticks so that the vehicles would appear to still have oil when they were actually out of oil and on the verge of engine failure. While this was morally justified, it was also clearly criminal sabotage of the Nazi military under Vichy law, and it would have been an intentional tort that could have supported a lawsuit against the car maker as well.

More recently, intentionally defective, explosive-laden equipment was provided by an Israeli-controlled company to Hezbollah militias in Lebanon, and intentionally defective, explosive-laden drone controllers were provided by Ukrainian military-affiliated sources to the Russian military during the ongoing war in Ukraine.

The norm of strict liability in tort thwarted by waivers of liability

It is also clear, under the generally applicable standards of product liability law, that in the absence of a waiver from the consumer, the company that made the software containing the defect would have strict liability for the damage it caused, even if it was unaware of the defect. But this liability can be waived, and it almost universally is waived in commercial software licenses. So the producer of the commercial software would have no liability, as a result of the waiver.

There is no liability to the direct user of the defective open source software

This also means that the commercial software producer, which itself has no liability, cannot sue the creator of the hackable component for fraud, because it suffered no damages.

Is the defective open source software disclaimer of liability valid?

An open source software license likewise contains a disclaimer of liability, which is effective to protect the open source software creator from liability, at least for simple negligence.

But public policy provides that a disclaimer of liability is void for harm caused by intentional, willful or wanton, reckless, or bad faith conduct.

And, while, by assumption, the makers of the open source software did not intend to exploit the known defect or to have others exploit it, they did intentionally put their knowingly defective open source software component into the stream of commerce while intentionally concealing the defect. So their disclaimer, at least as against the commercial software distributor that used their component, is probably void as a matter of law.

Of course, the exact language of the disclaimer and what it actually disclosed could be relevant. If the disclaimer stated, plainly and conspicuously that this open source software may contain serious flaws that allow it to be hacked, and the people who used it just ignored that warning, the case that there was any sort of wrongdoing would be diminished.

Is privity with the end user required?

So, one of the core questions is whether the end user, who was harmed by a defective open source component, can sue the person who knowingly created that component directly, when the vendor who assembled and sold the overall software product to the end user did not know the component was defective.

Common law fraud v. lack of direct reliance

On one hand, tort liability (as opposed to liability on contractual warranties) usually doesn't require a direct contractual relationship between the person suing and the person being sued.

But, on the other hand, lawsuits for fraud and fraudulent concealment do generally require, to state a common law fraud claim, that the person suing relied on a misstatement of a presently existing fact, or relied on the person being sued having stayed silent about something. And, in this case, the person harmed probably had no knowledge at all that there was even an interaction between the maker of the defective open source component and the person who made the combined piece of software that they ultimately used. So a common law fraud lawsuit probably isn't available to the end user either, even though the disclaimer of liability may have been void as contrary to public policy.

Remaining legal theories

This leaves two plausible legal theories: (1) a negligence action directly against the wrongdoer that isn't barred by the disclaimer because the disclaimer was void as contrary to public policy, or (2) a rather obscure tort cause of action, called "injurious falsehood," for intentionally making a misrepresentation that will harm a third party.

Problems with an injurious falsehood claim

But the second legal theory is not accepted as valid in all U.S. states. Even where a relevant U.S. state does accept it, it isn't always recognized in cases of fraudulent concealment of information (as opposed to affirmative misrepresentations), and it often requires an intent to harm a third party.

The lack of fraudulent concealment liability arises because a duty on the part of someone making a subcomponent of a product to disclose something directly to an end user or third party often isn't recognized unless it is clearly established, for example, by statute (as in the case of publicly held securities, where the duty is created by the '33 Securities Act and '34 Securities Exchange Act).

Ultimately, then, an injurious falsehood tort claim would probably fail.

A potential negligence claim

Ultimately, a simple common law negligence claim is probably the strongest argument available to an end user of the software who was harmed and wants to sue the person who made the knowingly defective open source software. There is a general common law duty to use reasonable care to prevent harm to third parties. Such a claim does not require a direct relationship between the person whose negligence causes the harm and the person who suffers it. It is not barred by a legally valid disclaimer from the person who causes the harm. The harm caused was reasonably foreseeable. And the negligent action of the person causing the harm did cause damages.

I can't point to any cases on point where this theory was actually successful.

Would punitive damages be available for a negligence claim?

An award of punitive damages would probably not be allowed, since there was no actual intent to cause harm, although recklessness could conceivably justify punitive damages. But this claim might at least provide a basis for recovering compensatory damages from the people who put the knowingly defective open source code into the stream of commerce, knowing that it would likely make software using the code vulnerable to hacking, without disclosing this known flaw.

Comparative fault in a negligence claim considered

Another issue would be whether comparative fault and/or an intervening cause would relieve the person who wrote the defective code of some or all liability.

The person who wrote the code, by assumption, did not benefit from the defective code, and did not personally exploit the defect in the code or tell anyone else about the defect in the code.

In contrast, some malicious third-party did intentionally exploit the defect in the code and benefitted from that defect in some way.

In a comparative fault regime, the liability for the total harm caused has to be allocated by a judge or jury on a percentage basis between all people who were at fault in causing the harm. It wouldn't be surprising, for example, to see fault allocated 10% to the people who wrote the knowingly defective code, and 90% to the people who actually exploited the defect.

Is a negligence claim barred by an intervening cause?

Also, all liability can be eliminated if there is an intervening cause of the harm between a negligent party and the person who was harmed. But since that intervening cause must normally be unforeseeable, and here both the end result and the third-party malicious hacker were foreseeable, this defense probably would not apply.

Why might a plaintiff prefer negligence to fraud?

A person suing for damages in this scenario might very well prefer to bring a negligence claim rather than a fraud claim, even though punitive damages are clearly available in a fraud case but might not be available on a negligence claim.

Why?

Because harm caused by negligence is almost always covered by the wrongdoer's insurance coverage, while fraud by the wrongdoer is almost never covered by it.

Also, the institutions for which the researchers worked would usually have vicarious liability for the wrongdoing of their researchers, and direct negligence liability for failing to insist on human subjects review for the project.

So, it is much more likely that the claimants could collect, with the institution and multiple insurance policies on the line (both the researchers' personal homeowner's/renter's policies and the institution's policies), than that they could collect a fraud judgment, which could only be satisfied from the personal assets of the researchers, assets that are likely to be modest relative to the scale of the harm done.

Stochastic terrorism liability compared

But, I say that with caution, because this scenario bears close resemblance to "stochastic terrorism".

Stochastic terrorism is a form of political violence instigated by hostile public rhetoric directed at a group or an individual. Unlike incitement to terrorism, stochastic terrorism is accomplished with indirect, vague or coded language, which grants the instigator plausible deniability for any associated violence. A key element of stochastic terrorism is the use of media for propagation, where the person carrying out the violence may not have direct connection to any other users of violent rhetoric.

The only case in history that I can recall where anyone has been found criminally or civilly liable for stochastic terrorism is the Rwandan genocide, where, unlike the case in this question, the harm was fully intended by the people who incited random third parties to genocidal violence, and it looked more like a traditional case of intentionally inciting or soliciting violence.

A fair amount of academic scholarship argues that there should be civil and/or criminal liability for stochastic terrorism, but so far, no U.S. legislature or court has adopted this legal theory as a basis for civil or criminal liability.

Given that courts have generally not recognized claims arising from alleged stochastic terrorism, it isn't obvious that the less culpable claim of negligence in the facts in this question could be supported as a legal theory either.

Criminal liability considered

Criminal charges based upon negligence are rare, especially when no physical injury results, and criminal negligence is close to what would be considered recklessness in a civil case. So, while criminal liability can't be absolutely ruled out, absent a specific criminal statute enacted at some point in the future addressing this kind of conduct, it doesn't seem likely.

Conclusion

Ultimately, the strongest theory would be a simple civil negligence lawsuit for damages, which would probably be a minor share of the total damages suffered compared to the share of liability of the actual malicious hacker who exploited the flaw.

This is also the claim upon which the claimants would be most likely to be able to collect if a judgment were entered in their favor. Most likely, the malicious hacker can't be located, and the other legal theories would not be covered by liability insurance or create vicarious liability for the institutions for which the researchers worked.

But, even this is not a sure thing and doesn't really have any close precedents to support it. There are also many steps of the analysis above where a court could zig where I zagged and reach a different conclusion, either on the underlying negligence liability issue, or on the complicating matter of the validity of the open source code liability disclaimer.

One suspects that the quirky set of facts in this question is unlikely to recur, so the legal questions presented here may never be resolved definitively.

  • Incredible answer - I didn't expect anyone would be so interested in the question as to write a 2200 word answer, thanks! I'll add that I was thinking that the kernel devs themselves might have a cause of action for fraud (for reputational damages) but I guess that wouldn't involve damages anywhere as large as those that would be caused by widespread exploitation, and it does make sense that the victims of that would have a hard time saying they were the ones defrauded (unless perhaps the kernel devs also themselves got hacked, but again that'd be a small part of the damages caused as a whole) Commented Feb 13 at 13:00
  • @GabrielRavier Only scratching the surface. Should damage to reputation from a third-party's poor workmanship be compensable (unlike defamation based on statements)? Are the researchers third-party beneficiaries of the disclaimer to the end user? Is there or should there be liability for "risking" (i.e. creating a risk of harm even before the harm actually happens)? Given the pervasiveness of disclaimers should defective software be actionable at all without an express warranty? Is a disclaimer to the world valid without privity? Should institutions have vicarious liability for researchers? Commented Feb 13 at 14:55
  • @GabrielRavier It's amazing what ohwilleke finds interesting enough to write something that seems like a miniature law review article here. 3-4 paragraphs in I was certain he would be the author. Commented Feb 13 at 17:18
