If someone can read my Signal keys on my desktop, they can also:
- Replace my Signal app with a maliciously modified version
- Install a program that sends the contents of my desktop notifications (likely including Signal messages) somewhere
- Install a keylogger
- Run a program that captures screenshots when certain conditions are met
- [a long list of other malware things]
Signal should change this because it would add a little friction to a certain type of attack, but a messaging app designed for ease of use and mainstream acceptance cannot provide a lot of protection against an attacker who has already gained the ability to run arbitrary code on your user account.
Not necessarily.

https://en.m.wikipedia.org/wiki/Swiss_cheese_model

If you read anything, at least read that link to self-correct.
This is a common area where non-security professionals out themselves as not actually being such: broken, fallacious reasoning about security risk management, usually resting on the same “dismissive security by way of ignorance” premises.
It’s fundamentally the same as “safety” (think OSHA and the CSB): the same thought processes, the same risk models, the same risk factors, etc.
And, similarly, the same negligence towards filling in the holes in your “Swiss cheese model”:
“Oh that can’t happen because that would mean x,y,z would have to happen and those are even worse”
“Oh that’s not possible, because A happening means C would have to happen first, so we don’t need to consider this a risk”
…etc
The logic you’re using is the same logic that the industry has decades of evidence showing to be wrong.
Decades of evidence indicating that you are wrong, that you know far less than you think you do, and that you most definitely are not capable of exhaustively enumerating all influencing factors. No one is. It’s beyond arrogant for anyone to think that they could 🤦
Thus, most risks are treated as valid risks (though this doesn’t necessarily mean they are all mitigable). Each risk is a hole in your model, and each hole is itself at risk of lining up with other holes and developing into an actual safety or security incident.
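The “holes lining up” intuition can be made concrete with a toy calculation. All probabilities below are invented for illustration, and real layers are rarely fully independent, which is exactly why you still patch each hole rather than dismiss it:

```javascript
// Illustrative back-of-the-envelope arithmetic for the Swiss-cheese model.
// An incident happens only when a hole in EVERY defensive layer lines up,
// so (optimistically assuming independent layers) the hole probabilities
// multiply. All numbers are made up for illustration.
function incidentProbability(holeProbabilities) {
  return holeProbabilities.reduce((acc, p) => acc * p, 1);
}

// Three imperfect defensive layers:
const threeLayers = incidentProbability([0.1, 0.2, 0.3]); // ≈ 0.006

// Adding a fourth imperfect layer (say, encrypting a key at rest) still
// cuts the lined-up-holes probability by a factor of five:
const fourLayers = incidentProbability([0.1, 0.2, 0.3, 0.2]); // ≈ 0.0012
```

The point is not the specific numbers: even a layer that an attacker “could” defeat anyway still multiplies down the odds of every hole aligning at once.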
In this case:
- Signal was alerted to this over six years ago.
- The framework they use for the desktop app (Electron) already has built-in features for this exact problem.
- This is a common problem with common, industry-wide solutions.
- Someone has already made a pull request to enable Electron’s safeStorage API, and Signal has ignored it.
Thus, this is just straight-up negligence on their part.
There’s not really much in the way of good excuses here. We’re talking about a run-of-the-mill problem that has baked-in solutions in most major frameworks, including the one Signal uses:

https://www.electronjs.org/docs/latest/api/safe-storage
I was just nodding along, reading your post and thinking “yup, agreed.” Then I saw there was a PR to fix it that Signal ignored. That seems odd, so there must be some mitigating circumstances explaining why they haven’t merged it.
Otherwise that’s just inexcusable.
The PR had some issues: files were pushed that shouldn’t have been, it included refactors that should have been in separate PRs, etc.
Though the main reason is that Signal doesn’t consider this issue part of their threat model.
For Android, https://molly.im/ restores the encryption of this file and adds other useful things.