Master of code: source code analysis & the hunt for vulnerabilities
The security of an IT product depends largely on carefully developed source code. Dietmar Rosenthal specialises in hunting down even the very last weak point in it.
Specialist in source code analysis
"With software, the source code is effectively 'the product'. Developers look into the source code if they want to understand how the product really works. Auditors & evaluators looking for vulnerabilities do too."
For us in the Software Evaluation (SWE) product group, source code is the start and end point of all security analyses. It is the blueprint from which - at least in theory - all the properties of the product can be read. However, this is also what makes source code analysis so challenging: it is just as difficult to design a secure product as it is to analyse the blueprint of the product, i.e. the source code, for security vulnerabilities. You can see this from the fact that, even years later, new security vulnerabilities (known as CVEs) are still being found in old open source packages and published in security advisories.
My role as a technical expert is twofold: on the one hand, I support colleagues in finding security vulnerabilities in the source code, for example through training in source code review or through tools that can reliably take over parts of the manual work. On the other hand, I help to get the source code documented in the first place, so that our customers and colleagues can understand the processes in the product.
The buzzword is security by design. Are the processes in the product designed in such a way that problems cannot arise in the first place, i.e. is it a product that already has security built in? Or do users have to be extremely careful not to make any mistakes during operation to avoid being robbed or spied on? You can actually tell the difference from the source code, even if it is difficult.
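One small, hedged illustration of the principle, not taken from any real evaluation: an API that is secure by design makes the safe behaviour the default, while an insecure design relies on every caller remembering to be careful.

```python
import hmac
import secrets

# Insecure by design: the caller has to know that "==" leaks timing
# information and that secret tokens must never be compared this way.
def check_token_naive(supplied: str, expected: str) -> bool:
    return supplied == expected

# Secure by design: the constant-time comparison is the only behaviour
# the function offers, so the caller cannot get it wrong.
def check_token_safe(supplied: str, expected: str) -> bool:
    return hmac.compare_digest(supplied.encode(), expected.encode())

if __name__ == "__main__":
    token = secrets.token_hex(16)
    print(check_token_safe(token, token))    # True
    print(check_token_safe("guess", token))  # False
```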
I was actually already an expert in source code review as a teenager, even though I didn't know the word back then. I studied documentation and the source code of my home computers and wondered what could go wrong.
Later, I worked as a "full-stack developer" alongside my studies and immediately after them, without ever losing my enjoyment of analysis. As an evaluator at TÜVIT, I analysed a lot of source code right from the start, and I now continue to do so seamlessly as a technical expert.
The biggest issues at the moment are the testing of open source packages and the documentation of the product design from the source code.
The systematic listing of all open source components in a product is called "software composition analysis". This is the gold standard, i.e. the ideal way to find all known vulnerabilities in a product. However, the tools on the market are aimed at manufacturers and developers, not testers and evaluators. They do not work in security certification - we have to find new ways to do this.
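As a minimal sketch of the basic principle from an evaluator's perspective - and nothing like the tooling needed in a real certification - the snippet below checks a hypothetical component list (for example taken from a software bill of materials) against the public OSV vulnerability database at osv.dev:

```python
import json
import urllib.request

# Hypothetical component list, e.g. extracted from the manufacturer's
# software bill of materials (SBOM).
COMPONENTS = [
    {"ecosystem": "PyPI", "name": "requests", "version": "2.19.0"},
    {"ecosystem": "npm", "name": "lodash", "version": "4.17.4"},
]

def known_vulnerabilities(ecosystem: str, name: str, version: str) -> list[str]:
    """Query the OSV API for published advisories affecting one component."""
    query = {
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return [vuln["id"] for vuln in result.get("vulns", [])]

if __name__ == "__main__":
    for comp in COMPONENTS:
        ids = known_vulnerabilities(**comp)
        print(f"{comp['name']} {comp['version']}: {', '.join(ids) or 'no known advisories'}")
```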
When it comes to documenting the product design from the source code, many manufacturers put the cart before the horse: they describe and "retell" the source code in detail for certification purposes. For developers, however, this is not good practice; they would put the documentation directly into the source code as comments. So we now extract these comments from the source code and give them back to the evaluators.
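As a rough sketch of that idea, assuming a Python code base in which the developers' documentation lives in docstrings and a hypothetical src/ directory, the extraction itself needs little more than the standard library; real products naturally span many languages and need the corresponding parsers:

```python
import ast
from pathlib import Path

def extract_doc_comments(source_file: Path) -> dict[str, str]:
    """Collect module, class and function docstrings from one Python file."""
    tree = ast.parse(source_file.read_text(encoding="utf-8"))
    docs = {}
    module_doc = ast.get_docstring(tree)
    if module_doc:
        docs["<module>"] = module_doc
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node)
            if doc:
                docs[node.name] = doc
    return docs

if __name__ == "__main__":
    # Walk a hypothetical source tree and hand the collected documentation
    # back to the evaluators as a simple per-file report.
    for path in Path("src").rglob("*.py"):
        for name, doc in extract_doc_comments(path).items():
            print(f"{path}::{name}\n{doc}\n")
```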
The biggest technological upheaval at the moment is clearly generative artificial intelligence (AI) and large language models (LLMs). Software developers are once again at the forefront: they are already having particularly "boring" code written by ChatGPT and similar models. The obvious question would be: is the source code that ChatGPT writes actually secure and correct? Or does ChatGPT make typical mistakes that a human developer would not make? The other obvious question would be: can LLMs recognise typical vulnerabilities in the source code and relieve the burden on human reviewers?
The latter question turns out to be naive, because it is not at all clear what counts as a "typical vulnerability" in a security certification. Is it the incorrect use of cryptographic mechanisms? Concrete errors in formulae? Weaknesses in the product design, i.e. a lack of security by design? For each of these points we have to compare: do LLMs deliver better or worse results than existing tools? After all, the BSI and the community rightly expect us to use the latest technology.
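To make "typical vulnerability" a little more concrete: the kind of finding that a human reviewer, a classic static analysis tool or an LLM would all be expected to flag often looks as unspectacular as the hypothetical snippet below - an SQL query built from unchecked user input, shown next to the parameterised variant that fixes it:

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, name: str):
    # Vulnerable: user input is pasted into the SQL string, so a value
    # like  ' OR '1'='1  changes the meaning of the query (SQL injection).
    return conn.execute(f"SELECT id, name FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(conn: sqlite3.Connection, name: str):
    # Fixed: a parameterised query keeps data and SQL strictly separate.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()
```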
The programmer who deliberately clicks on YouTube videos to trick the algorithm is already an old meme; today it would probably be TikTok videos. As an IT security professional, I don't think I'm any more critical of products than other IT enthusiasts. But I'm perhaps a little frustrated that IT security doesn't play the role it should, and I do avoid apps that offer a bit of convenience without security having been considered.