This article is the second post in a series explaining security terms with simple, everyday-life examples. The first post can be found here.
We meet our friends Alice and Bob again, who send each other letters, and Eve, who doesn’t like them. Eve therefore tries to look into their letters, hoping to find out secrets about Alice and/or Bob that she can use to gain an advantage over them.
Let’s imagine the following scenario: Bob has bought a new lock for his mailbox. This lock is advertised as the ultimate smart lock, featuring machine learning, blockchain and AI, and it has a simple button that recognises the user. Unfortunately, Bob ordered the lock on Wish.com (reference for readers from the future: back in 2021, this was an internet shopping platform for cheap stuff, often with surprisingly bad outcomes). The lock arrives and, exactly as the description states, consists of only a single black button instead of a keyhole. Bob installs the lock and is very happy. Each day, he simply presses the button and the mailbox opens.
Eve tries to steal the letters at night but is unable to open the mailbox with the same button. It works! Or does it? The company that produced the lock has a well-kept secret: the lock keeps track of the time. It will only open the mailbox between 7 AM and 10 PM, it will react only to the first button press of the day, and it will only open if there are letters in the mailbox. If Eve knows this, she can simply come first in the morning and steal the letters. Bob will be unable to open his mailbox that day (because it’s empty), and Eve will have access to all of Bob’s mail.
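The vendor keeps the real firmware secret, but the hidden rules described above might look something like this in code. This is a purely hypothetical sketch (the class and method names are invented for illustration):

```python
from datetime import datetime

class ObscureSmartLock:
    """Hypothetical sketch of the lock's secret rules -- not the real firmware."""

    def __init__(self):
        self.opened_on = None  # date of the last successful opening

    def press_button(self, now: datetime, mailbox_has_letters: bool) -> bool:
        # Secret rule 1: only opens between 7 AM and 10 PM
        if not (7 <= now.hour < 22):
            return False
        # Secret rule 2: reacts only to the first press of the day
        if self.opened_on == now.date():
            return False
        # Secret rule 3: only opens if there is mail inside
        if not mailbox_has_letters:
            return False
        self.opened_on = now.date()
        return True
```

Once the rules leak, whoever presses the button first each morning gets the mail, and every later press that day is refused. The "security" was never in the mechanism itself, only in nobody knowing how it works.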
What’s the problem here? The company keeps the inner workings of the device secret in order to hide an implementation that is obviously insecure. If they spoke openly about it, people could exploit the flaw. On the other hand, creative people could find the flaw and implement an improvement.
This concept is known as “security through obscurity”. In more general terms, it means hiding the inner workings of a mechanism or piece of code from the public. Hiding the key to your front door under the doormat is another example: everyone who knows the location can enter your house (and it’s public knowledge that many people still keep their keys under the doormat nevertheless). So how does a system with a password differ from this concept? If the security concept in general is made public, and the password is part of that concept, this is not security through obscurity. As an example, if Bob installs a new lock on his mailbox, a simple pin-tumbler lock, the knowledge of how it works is public. If he manages to keep the key secret, the lock is (more or less) secure. Part of this lock’s concept is that the key must be kept private. Bob can take measures not to lose his key, or keep it on him wherever he goes. Since he doesn’t know how his smart lock works, he can’t take appropriate measures.
The opposite of security through obscurity can be seen in the “open design principle” (a core concept in open source), “Kerckhoffs’s principle”, or “security by design”.