For anyone searching for the best Christmas song of all time, here it is! You're welcome. https://www.youtube.com/watch?v=qmz7QMCd_WE
Happy holidays, abolish capitalism, fight fascism!
Come work with me and a bunch of crazy Germans! http://blog.recurity-labs.com/2017-09-06/work-with-us
"Decrypting the Alt-Right: How to Recognize a Fascist" https://www.youtube.com/watch?v=Sx4BVGPkdzk
Huh, guess I forgot about this place for a while.
This very moment is the most fucked up anything has ever been. And it's seriously only going to get worse.
Working this way, the effort of building security into the system and providing assurances that it can repel attacks is visible to project management and can be planned like any other development activity. This is in stark contrast to the previous model of "testing security in", where a "security test" at the end of development produces an unknown number of weaknesses and the development organisation has no way of predicting the effort required to "fix" the issues that were found.
Key systems engineering practices in this model are:
+ Architecture analysis or "threat modelling" to find out which risks the system needs to mitigate using security controls.
+ Security engineering work to inform the system design and validate that the right security controls are implemented.
+ Security verification of the implemented controls to provide evidence that they are indeed in place and effective.
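To make the threat-modelling practice concrete, here is a minimal sketch of STRIDE-style threat enumeration. The element types, the STRIDE mapping, and the example system model are illustrative assumptions, not the output of any real tool:

```python
# STRIDE threat categories commonly considered per data-flow-diagram
# element type. The mapping below is an illustrative assumption, not a
# normative standard.
STRIDE_BY_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data_store": ["Tampering", "Information disclosure",
                   "Denial of service"],
    "data_flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
}

def enumerate_threats(elements):
    """Return (element, threat category) pairs to triage into the
    risks the system must mitigate with security controls."""
    return [(name, threat)
            for name, kind in elements
            for threat in STRIDE_BY_ELEMENT[kind]]

# Hypothetical example: a browser talking to an app server backed by a DB.
model = [
    ("user_browser", "external_entity"),
    ("app_server", "process"),
    ("user_db", "data_store"),
    ("browser_to_server", "data_flow"),
]
for element, threat in enumerate_threats(model):
    print(f"{element}: {threat}")
```

The point is not the code but the shape of the activity: enumerate candidate threats systematically, then decide per threat which control mitigates it, so the security work becomes a plannable backlog rather than a surprise at the end.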
In an alternative model to the "develop software, then test it before allowing deployment" way of assuring security,
we start planning for assurance during the early development phase. Our goal is to make sure that producing the evidence needed to show that the necessary security controls are in place and effective is as easy as possible.
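One way to make evidence production cheap is to encode the required controls as automated checks that run on every build, so the evidence is regenerated continuously instead of once per pentest. A hedged sketch: the header names below are real HTTP security headers, but the policy values and the sample response are illustrative assumptions:

```python
# Required security-header controls, expressed as executable policy.
# The specific values are an illustrative assumption for this sketch.
REQUIRED_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000",
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
}

def verify_security_headers(response_headers):
    """Return a list of findings; an empty list is the (machine-checkable)
    evidence that the header controls are in place and effective."""
    findings = []
    for name, expected in REQUIRED_HEADERS.items():
        actual = response_headers.get(name)
        if actual is None:
            findings.append(f"missing header: {name}")
        elif expected not in actual:
            findings.append(f"weak policy for {name}: {actual!r}")
    return findings

# Simulated response headers from the application under test.
headers = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
}
for finding in verify_security_headers(headers):
    print(finding)
# prints "missing header: Content-Security-Policy"
```

Run in CI, a check like this turns "are the controls in place?" from a question answered once, at one point in time, into a property re-verified on every change.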
2. The test measured performance at a single point in time, which says nothing about the future.
We know nothing about the probability that flaws remain which the tester, within their limited time and capabilities, did not find.
We know nothing about how large the risk is that the developers will introduce new flaws in the future.
Problems with this approach:
1. We are no closer to answering the CISO's question of "is it OK to deploy this on the internet?".
Well, we fixed all the reported issues; that means we are "secure" now, right?
No! It's a complete non sequitur: Q: Is the system secure? A: Well, we found these bugs!
The traditional "penetration test" approach to security assurance:
Step 1: Contract a security consultant that seems to know what they are doing
Step 2: Set a time limit of how many of their hours we can afford to spend
Step 3: Give them access to some kind of test setup which might or might not reflect reality
Step 4: Maybe give them some documentation for the system (do we even have any?)
Step 5: The consultants produce a report which is basically a list of potential security issues.
Hypothetical scenario:
We are a CISO and our company is about to launch a new application on the Internet.
This application has "sensitive information" only authorized users should be allowed to see.
How do we make sure the "sensitive information" is protected against attackers?
What we want is "security assurance", i.e. evidence-based assertions about how the application will resist attacks.
There is a light that never goes out. https://www.youtube.com/watch?v=OrgKtFVKTmI
Day 40 of earache. Doctor says infection is gone and I should wait it out/reduce pressure with nose spray + paracetamol. Kill me now.
And you will know us by the trail of beat-up Nazis.
Cross-posting, but whatever.
Sweden will not give in to fear. Fuck terrorism.
(Also is it just me or is "coder" possibly the worst word for "software engineer"?)