Sure, but that's ultimately a pretty unlikely attack vector. An attacker still needs to exploit some unknown vulnerability of your web browser in order to get something malicious going.
Sometimes I dream about a 100% secure OS. Maybe formal verification is the key, or Rust, I don’t know. But I would love to know that I can't be hacked.
The problem is that for the overwhelming majority of use cases, the isolation features that are violated by security bugs are not being used for real isolation, but for manageability and convenience. Where real isolation matters, virtualization, physical host segregation, etc. are used to achieve it. People don't necessarily care about these flaws because they aren't actually exposed to the worst-case preconditions. So the amount of contributor attention you could get behind a "100% secure OS" might not be as large as you are hoping. Anyway, if you want to work on such things, there are various OS development efforts floating around.
Their view that security bugs are just normal bugs remains very immature and damaging. It is somewhat mitigated by Linux having so many eyes on it and so many developers, but a lot of problems in the past could have been avoided if they had adopted the stance the rest of the industry recognizes as correct.
From their perspective, on their project, with the constraints they operate under, bugs are just bugs. You're free to operationalize some other taxonomy of bugs in your organization; I certainly wouldn't run with "bugs are just bugs" in mine (security bugs are distinctive in that they're paired implicitly with adversaries).
To complicate matters further, it's not as if you could rely on any more "sophisticated" taxonomy from the Linux kernel team, because they're not the originators of most Linux kernel security findings, and not all the actual originators are benevolent.
> From their perspective, on their project, with the constraints they operate under, bugs are just bugs.
That's a pretty poor justification. Their perspective is wrong, and their constraints don't prevent them from treating security bugs differently, as they should.
> almost any bugfix at the level of an operating system kernel can be a “security issue” given the issues involved (memory leaks, denial of service, information leaks, etc.)
On the level of the Linux kernel, this does seem convincing. There is no shared user space on Linux where you know how each component will react/recover in the face of unexpected kernel behaviour, and no SKUs targeting specific use cases in which e.g. a denial of service might be a worse issue than on desktop.
I guess CVEs provide some of this classification, but they seem to cause drama amongst kernel people.
In the context of the kernel, it’s hard to say when that’s true. It’s very easy to fix some bug that resulted in a kernel crash without considering that it could possibly be part of some complex exploit chain. Basically any bug could be considered a security bug.
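To make that concrete, here is a purely illustrative C sketch (hypothetical driver code with made-up names, not real kernel source) of the kind of bug that gets reported and fixed as "a crash" but is actually a use-after-free, i.e. raw material for an exploit chain:

    /* Illustrative only: hypothetical driver code, not from the actual kernel. */
    #include <linux/slab.h>

    struct session {
        void (*on_close)(struct session *);
        char buf[64];
    };

    static struct session *cur;          /* shared between code paths */

    void teardown(void)
    {
        kfree(cur);                      /* object freed here ... */
        /* BUG: cur is not set to NULL, so it still points at freed memory */
    }

    void handle_close(void)
    {
        if (cur)
            cur->on_close(cur);          /* ... and used here: use-after-free */
    }

In testing this only ever shows up as an oops, so the fix ("set cur to NULL in teardown()") reads like an ordinary crash fix. But if an attacker can get the freed slot reallocated with data they control before handle_close() runs, the same bug becomes a control-flow hijack, and nothing in the one-line patch advertises that.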
Classifying bugs as security bugs is just theater - and any company or organization that tries to classify bugs that way is immature and hasn't put any thought into it.
First of all "security" is undefined. Second, nearly every bug can be be exploited in a malicious way, but that way is usually not easy to find. So should every bug be classified as a security bug?
Or should only bugs where a person can think of a way on the spot during triage to exploit that bug as a security bug? In that case only a small subset of your "security" bugs are classified as such.
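And the "not easy to find during triage" part is real. Here is a hedged, hypothetical C sketch (a made-up ioctl handler, not real kernel code) of a bug class that routinely passes triage as a style nit:

    /* Illustrative only: hypothetical ioctl handler, not from the actual kernel. */
    #include <linux/types.h>
    #include <linux/uaccess.h>
    #include <linux/errno.h>

    struct stats {
        u32 packets;
        u16 flags;
        /* the compiler inserts 2 bytes of padding here, never initialized */
        u64 bytes;
    };

    long get_stats(void __user *arg)
    {
        struct stats s;                  /* stack allocation, padding left as-is */

        s.packets = 42;
        s.flags = 0;
        s.bytes = 1234;

        /* Looks like a "missing memset" nitpick in review, but it copies
           2 bytes of uninitialized kernel stack to userspace on every call. */
        if (copy_to_user(arg, &s, sizeof(s)))
            return -EFAULT;
        return 0;
    }

Whether a triager calls that a security bug depends entirely on whether they happen to know that uninitialized-padding leaks can be used to defeat things like KASLR.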
Cool. So social engineering it is. You are your own worst enemy anyways.
Such as?
QED.
First of all "security" is undefined. Second, nearly every bug can be be exploited in a malicious way, but that way is usually not easy to find. So should every bug be classified as a security bug?
Or should only bugs where a person can think of a way on the spot during triage to exploit that bug as a security bug? In that case only a small subset of your "security" bugs are classified as such.
It is meaningless in all cases.
Nonsense.