#📚reference
The concept of computer security is broad and hard to capture in a few words. Well, actually, we can try:
>Computer systems are inherently unreliable because no algorithm can solve the [halting problem](https://en.wikipedia.org/wiki/Halting_problem), a limitation closely related to Kurt Gödel's [second incompleteness theorem](https://plato.stanford.edu/entries/goedel-incompleteness/#SecIncThe), which is a fundamental fact about arithmetic and, therefore, about any general computation.
But does it help to clarify the matter? Probably not.
In practical software engineering, security is a trade-off between the payoff of a shortcut taken and the risk of someone exploiting our overconfidence. Technically speaking, any system can be broken, but some attacks are so complex and expensive that no one (or only a handful of actors) has the capacity to perform them. In this sense, security is closely related to the concept of [[tech debt]]. It is a gamble. But we can either play smart or play ignorant.
In some areas, like banking systems or government services, regular security audits are mandatory, and companies are bound by law to comply with the regulations.
The art of creating secure systems is a counterpart to the art of [[decomposition]]. But while in the latter, we break up problems into smaller pieces to tackle them, in the former, we isolate components of the system to prevent access from less privileged processes to more privileged ones. In some sense, this is the only thing we can do: build sufficiently strong walls around each program to reduce the risk of contamination.
So, are we all doomed? Not really. The prudent answer to the security problem is not a technical solution (again, impossible, as Kurt Gödel proved) but a policy. Security should be addressed at the organization level. For example, we may stop letting all engineers work with client data, hide user passwords from ourselves, establish recurring internal security audits, encourage [[defensive programming]], and prioritize security issues over other bugs. Another condition for a wise security policy is a clear prioritization of risks. This is where [[product management]] shines. By identifying the areas where failure is most dangerous, we can channel our limited resources there instead of trying to fix everything.
---
<font style="color: #F86759">Contributors:</font> *[[Mykhailo]]*
<font style="color: #F86759">Last edited:</font> *2024-03-27*